From 2D Images To 3D Exploratory Visualscapes
Breakthrough in digital imaging and virtual 3D computer vision.
Here is a composite image showing a photograph and three 3-D reconstructions derived from it. The original photograph is at top left; the others are 3-D reconstructions created virtually from that single original.
A group of scientists at Carnegie Mellon University's School of Computer Science has found a way to help computers understand the geometric context of outdoor scenes and thus better comprehend what they see.
The discovery promises to revive an area of computer vision research all but abandoned two decades ago because it seemed insoluble. It may ultimately find application in vision systems used to guide robotic vehicles, monitor security cameras and archive photos.
Using machine learning techniques, Robotics Institute researchers Alexei Efros and Martial Hebert, along with graduate student Derek Hoiem, have taught computers how to spot the visual cues that differentiate between vertical surfaces and horizontal surfaces in photographs of outdoor scenes.
They've even developed a program that allows the computer to automatically generate 3-D reconstructions of scenes based on a single image.
In their latest work, to be presented at the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 17-22 in New York City, the Carnegie Mellon researchers will show that having a sense of 3-D geometry helps computers identify objects, such as cars and pedestrians, in street scenes.
The program also takes advantage of the constraints of the real world -- skies are blue, horizons are horizontal and most objects sit on the ground.
To demonstrate the utility of this technique, the researchers have designed a graphics program to automatically generate 3-D reconstructions by "cutting and folding" along vertical and horizontal lines in an image.
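To get a feel for the "cutting and folding" idea, here is a minimal geometric sketch: under a simple pinhole-camera assumption, every ground pixel below the horizon line back-projects onto a flat ground plane, and each vertical region is then "folded up" along the line where it meets the ground. The focal length, camera height and horizon row below are hypothetical illustrative values, not parameters from the researchers' actual program.

```python
# Toy back-projection of a ground pixel onto the ground plane, assuming a
# pinhole camera with hypothetical parameters (focal length in pixels,
# camera height in metres, horizon at a fixed image row).

def ground_depth(v, horizon_v=200.0, cam_height=1.6, focal=500.0):
    """Depth (metres) of the ground-plane point seen at image row v.

    Rows are numbered from the top of the image, so only rows strictly
    below the horizon (v > horizon_v) can lie on the ground plane.
    """
    if v <= horizon_v:
        raise ValueError("row is at or above the horizon: not on the ground")
    return cam_height * focal / (v - horizon_v)

# Pixels nearer the bottom of the frame map to ground points nearer the camera.
print(ground_depth(600))  # -> 2.0 (close to the camera)
print(ground_depth(210))  # -> 80.0 (just below the horizon, far away)
```

Once every ground pixel has a depth like this, a vertical region standing on the ground can be erected as a flat "billboard" at the depth of its base, which is all a convincing pop-up needs.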
Their key breakthrough was discovering that computers can often discern which surfaces in a digital image are vertical or horizontal, and whether a vertical surface faces left, right or toward the viewer.
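The classification step can be illustrated with a toy sketch. The CMU system actually trains machine-learning classifiers over many color, texture and position cues computed on image segments; the version below uses just two hypothetical cues, a region's vertical position in the frame and its "blueness", purely to show the flavor of labeling regions as sky, ground or vertical.

```python
# Toy geometric-context labeling: assign each image region a coarse label
# from two hypothetical cues. This is an illustration of the idea, not the
# researchers' actual trained classifier.

def label_region(centroid_y, blueness):
    """Assign a coarse geometric label to an image region.

    centroid_y: vertical position of the region's centroid, from 0.0
                (top of the image) to 1.0 (bottom).
    blueness:   mean blue-channel dominance of the region, 0.0 to 1.0.
    """
    if centroid_y < 0.35 and blueness > 0.6:
        return "sky"       # high in the frame and blue
    if centroid_y > 0.65:
        return "ground"    # low regions usually lie on the ground plane
    return "vertical"      # everything else stands up from the ground

# Label a few hypothetical regions of an outdoor photo.
regions = [(0.1, 0.9), (0.9, 0.3), (0.5, 0.2)]
print([label_region(y, b) for y, b in regions])
# -> ['sky', 'ground', 'vertical']
```

In the real system these hand-picked thresholds are replaced by classifiers learned from labeled training images, which is what lets the approach generalize across many outdoor scenes.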
But for the real treat, take a look at these animated samples of 2-D images that have been converted into navigable 3-D spaces.
For more examples please see:
But it doesn't end here. Researchers at Microsoft have done something at least as impressive, useful and unique as what you have just seen.
The work done by Microsoft Live Labs has focused on ways to seamlessly interconnect all digital images available on the web, allowing individuals to explore, dive into and walk through all of the digital image space captured so far by billions of amateur camera shots.
Microsoft calls this Photosynth: the ability to link images together. With Photosynth, whenever digital images are taken in a common environment, it is as if a hyperlink formed between them. Think now about the emergent universe of hyperlinked images out there on the web, and about what an automatic, search-engine-style crawler could create by grouping and linking those images together, and you get an idea of what it is now possible to do and see through the eyes of many.
In this short video clip from Microsoft Live Labs you can see some direct applications of this technology, which lets you explore, in detail and from virtually unlimited viewpoints, specific parts of any physical environment that has been photographed by multiple cameras from more than one angle.
Here is how you could explore and discover St. Peter's Basilica in Rome beyond what your eyes and camera were able to capture during your last journey:
In essence: Microsoft Photosynth takes a large collection of digital photos of a place or object, analyzes them for similarities, links them so that a full 3-D environment can be reconstructed, and displays the photos in a navigable three-dimensional space.
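The linking step can be sketched in miniature. Photosynth matches local feature descriptors across photos and connects photos that see the same surface points; in this toy version each photo is just a set of hypothetical feature IDs, and two photos are linked when they share enough of them. The photo names, feature IDs and threshold are all invented for illustration.

```python
# Toy Photosynth-style linking: connect photos that share enough local
# features. Real systems match descriptor vectors; here features are
# hypothetical string IDs and matching is plain set intersection.

MIN_SHARED = 2  # assumed matching threshold

photos = {
    "facade_left":  {"f1", "f2", "f3", "f4"},
    "facade_right": {"f3", "f4", "f5", "f6"},
    "dome_closeup": {"f6", "f7", "f8"},
    "unrelated":    {"f9"},
}

def build_links(photos, min_shared=MIN_SHARED):
    """Return the pairs of photos that share at least min_shared features."""
    names = sorted(photos)
    links = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if len(photos[a] & photos[b]) >= min_shared:
                links.append((a, b))
    return links

print(build_links(photos))
# -> [('facade_left', 'facade_right')]
```

The resulting graph of linked photos is what makes it possible to "walk" from one person's snapshot to another's: navigation simply follows the edges between overlapping views.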
Find out more: http://labs.live.com/photosynth