Machines Could Soon React And Adapt To Unfamiliar Objects, Just Like Humans
Man-made robots could soon think like humans and learn from their environment, depending on their pre-programmed analytical skills. Of late, artificially intelligent systems have relied on image-recognition technology to perceive dimensions and detect shapes. The objects they encounter are not always ones they have been trained or tested on, but may be similar or entirely unknown. A group of researchers from KU Leuven in Belgium has shown that, in such situations, a robot can be made to respond to changes in its surroundings in a way very similar to humans.
Neurological complexity underlies human interpretation, and it is this capacity that has driven researchers to envision a machine with human-like perception. The group posed some thought-provoking questions to which humans give distinct responses. For example, if a person is driving a car and a blurred object crosses the road, intuition combined with reflex will make them hit the brake and stop, knowing it is a live animal or a human and not debris swept up by the wind.
However, the most pertinent question in this context is: what would a self-driving car do under the same conditions? Can it make an equally quick judgment, or does it need a different way to perceive the scene? KU Leuven researchers Jonas Kubilius and Hans Op de Beeck believe that an image-trained system can indeed tell the difference. They explained that the latest technology relies heavily on deep artificial neural networks, algorithms loosely modeled on the human brain.
Using deep neural networks, the researchers propose that a system not only learns to distinguish one object from another, but also generalizes to new shapes by finding similarities with objects it already knows. So if it is shown an apple, it can recognize other fruits of similar size and shape that share its characteristics.
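This similarity-based generalization can be sketched in miniature. The snippet below is a toy stand-in, not the researchers' actual model: each object is represented by a hypothetical hand-picked feature vector (roundness, relative size, elongation), where a real system would instead use embeddings extracted by a trained deep network. Comparing vectors with cosine similarity then answers "which known object does this one most resemble?"

```python
import math

# Hypothetical feature vectors standing in for deep-network embeddings.
# Dimensions (all made up for illustration): roundness, relative size, elongation.
EMBEDDINGS = {
    "apple":      [0.90, 0.30, 0.10],
    "peach":      [0.85, 0.30, 0.15],
    "banana":     [0.20, 0.35, 0.90],
    "basketball": [0.95, 0.80, 0.05],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(query, embeddings):
    """Return the known object whose embedding lies closest to the query's."""
    others = {name: vec for name, vec in embeddings.items() if name != query}
    return max(others, key=lambda name: cosine_similarity(embeddings[query], embeddings[name]))

print(most_similar("apple", EMBEDDINGS))  # a round fruit of similar size is the nearest match
```

In this toy setup the apple's nearest neighbour is the peach, since their feature vectors point in nearly the same direction; the same nearest-neighbour logic is how an embedding-based system would relate an unfamiliar object to ones it has seen.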
According to the team, robots could in the future achieve a visual system and vocabulary on par with humans, which could become the next big breakthrough, with intelligence flowing through their artificial neural networks. The complete research report was published in the journal PLOS Computational Biology.
Source: #-Link-Snipped-#