Computers can understand the geometry of their surroundings
- unpossible
- Posts: 871
- Joined: 10 May 2005, 19:24
this is pretty special
these guys have come up with a way of letting computers interpret the 3D world from a 2D image
http://www.cmu.edu/PR/releases06/060613_3d.html
the program can take a reasonably simple photograph and build a simple 3D model out of it. that's a big step forward in ability. you can't run it without Matlab though
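The press release doesn't spell out the geometry, but systems of this kind typically lean on a ground-plane assumption: if you guess the camera height and the horizon row, similar triangles give a depth for every pixel labelled "ground". A minimal sketch of that idea (the function name and all numbers are illustrative assumptions, not taken from the CMU system):

```python
# Hypothetical ground-plane "pop-up" depth: a pixel further below the
# horizon line is closer to the camera. Rows grow downward, as in images.

def ground_depth(y_pixel, horizon_y, focal_px, camera_height_m):
    """Depth (metres) of a ground-plane pixel at image row y_pixel."""
    dy = y_pixel - horizon_y
    if dy <= 0:
        raise ValueError("pixel is at or above the horizon; not on the ground plane")
    return focal_px * camera_height_m / dy

# A pixel 100 rows below the horizon, 500 px focal length, 1.5 m camera height:
print(ground_depth(300, 200, 500.0, 1.5))  # 7.5 (metres)
```

One equation like this per ground pixel is enough to "pop up" the flat photo into a coarse 3D model, which is why a single image can suffice when the scene fits the assumption.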
They already do; the technology has existed for at least a decade and has already been implemented in experimental robots, and even has uses in things such as NASA Mars rovers and speed cameras.
The CGAL library is capable of creating geometric objects or polygons out of photographs in C++, for those who are interested
- unpossible
- Posts: 871
- Joined: 10 May 2005, 19:24
AF wrote:They already do, the technology has existed for at least a decade and has already been implemented in experimental robots, and even has uses in things such as NASA Mars rovers and speed cameras.
The CGAL library is capable of creating geometric objects or polygons out of photographs in C++ for those who are interested

is it photographs or a single photograph? the point of this was to be able to interpret the geometry from a single image (like we're able to) without having to move about / get a different viewpoint
AF wrote:the technology has been available since at least the early 90s, it's in use on the Mars rovers and most experimental robots that can see.

It's been possible with multiple images for a while, but from one image it is a breakthrough. This method's reliance on flat planes with strong vertical and horizontal lines would not suit a Mars rover well, as Mars is lacking in those features.
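The flat-plane point above is worth unpacking: parallel 3D edges (window rows, roof lines) converge to a vanishing point in the image, and intersecting two such edge lines pins down the plane's orientation. A small stdlib-only sketch using homogeneous coordinates (the points and lines are made-up examples):

```python
# Two image lines from the same family of parallel 3D edges meet at a
# vanishing point. Lines and points are homogeneous (x, y, w) triples,
# so intersection is just a cross product.

def line_through(p, q):
    """Homogeneous line through two 2D points."""
    (x1, y1), (x2, y2) = p, q
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def intersect(l1, l2):
    """Intersection of two homogeneous lines; None if parallel in the image."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    x = b1 * c2 - b2 * c1
    y = a2 * c1 - a1 * c2
    w = a1 * b2 - a2 * b1
    if w == 0:
        return None
    return (x / w, y / w)

# Two edges of a building face, converging to the right:
vp = intersect(line_through((0, 0), (4, 1)), line_through((0, 4), (4, 3)))
print(vp)  # (8.0, 2.0)
```

A Mars landscape of rocks and dunes offers few long straight edges, so there are no reliable vanishing points to intersect, which is why the criticism in the post holds.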
- unpossible
- Posts: 871
- Joined: 10 May 2005, 19:24
-
- Posts: 201
- Joined: 30 Apr 2005, 01:06
AF wrote:the technology has been available since at least the early 90s, it's in use on the Mars rovers and most experimental robots that can see.
unpossible wrote:didn't answer the question

The AF chatbot is buggy.
I don't see how the software can decide the depth of the building from a single photograph. And of course, I guess it assumes some symmetry in the building. There is no way one could rebuild the Guggenheim Museum Bilbao in 3D from a single photograph. Also, the result will depend heavily on the lens of the camera, etc. I doubt it can work well if the image is distorted with a fisheye lens, for example.
Most of the life forms we know of that can see with multiple eyes can still judge depth with only one eye open, much as they would with both.
How else do you know that a tree in the distance on a photograph is actually in the distance? Or that a peewee far away is nearer to you than a peewee even further away on a computer screen? After all, two-eye depth perception won't work on screens and photographs, since both eyes see the same image; if that were all there was to it, we'd interpret it as just a flat screen and say all the objects on it are equidistant from each other.
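The "relative size" cue in that post can be written down directly: under a pinhole camera model, a familiar object's apparent height shrinks in proportion to its distance, so size on screen implies depth. A sketch with made-up numbers (the function name and values are illustrative assumptions):

```python
# Similar triangles for a pinhole camera: apparent_height = focal * true_height / distance,
# rearranged to recover distance from how small a known-size object looks.

def distance_from_apparent_size(true_height_m, apparent_height_px, focal_px):
    """Distance implied by the on-screen size of an object of known height."""
    return focal_px * true_height_m / apparent_height_px

# Two trees of the same real height (10 m) seen with an 800 px focal length:
near = distance_from_apparent_size(10.0, 200, 800.0)  # 40.0 m
far = distance_from_apparent_size(10.0, 50, 800.0)    # 160.0 m
print(near, far)  # the smaller-looking tree is four times farther away
```

This works with one eye and one image, which is exactly why a flat photograph can still convey depth.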