MIT Engineers Develop Algorithm To Identify Boundaries In Digital Images
MIT engineers have developed an algorithm to identify the boundaries of objects in digital images. The new algorithm is at least 50,000 times more efficient than existing algorithms that handle the problem, going a long way toward solving one of the central problems in computer vision. The human eye can easily identify object boundaries in digital images, but doing the same with computer programs is very challenging. For example, it's difficult for a computer to distinguish the boundary of a lamp post from the facade of a building behind it. However, Jason Chang of MIT's Department of Electrical Engineering and Computer Science, along with John Fisher of MIT's Computer Science and Artificial Intelligence Laboratory, has developed a new algorithm to handle this problem. It could be immensely useful in medical imaging, in tracking moving objects, and especially in 3D object recognition. We recently wrote about #-Link-Snipped-#, which impressed us with its 3D object edge recognition capabilities.
#-Link-Snipped-#
Courtesy of Jason Chang
The engineers wanted to imitate the human eye in order to determine boundaries. "We want an algorithm that's able to segment images like humans do," Chang says. "But because humans segment images differently, we shouldn't come up with one segmentation. We should come up with a lot of different segmentations that kind of represent what humans would also segment."
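To make the "many segmentations" idea concrete, here is a toy sketch, not the MIT algorithm itself: instead of committing to one segmentation, it samples several candidate segmentations of a grayscale image by drawing random intensity thresholds. The image values, function names, and sampling scheme below are all illustrative assumptions.

```python
import random

def segment(image, threshold):
    """Label each pixel foreground (1) or background (0) by intensity threshold."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def sample_segmentations(image, n_samples=5, seed=0):
    """Draw several candidate segmentations by sampling thresholds.

    A toy stand-in for the idea of producing many plausible
    segmentations rather than a single "best" one.
    """
    rng = random.Random(seed)
    flat = [px for row in image for px in row]
    lo, hi = min(flat), max(flat)
    return [segment(image, rng.uniform(lo, hi)) for _ in range(n_samples)]

# Tiny hypothetical 4x4 "image": a bright object in one corner.
image = [
    [200, 210, 30, 25],
    [205, 215, 35, 20],
    [ 40,  35, 30, 25],
    [ 30,  25, 20, 15],
]
candidates = sample_segmentations(image, n_samples=3)
```

Each entry of `candidates` is one plausible foreground/background labeling; a downstream system (or a human) could then pick among them or combine them, which is the spirit of Chang's remark above.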
More Coverage: #-Link-Snipped-#