Computers can understand the geometry of their surroundings
Posted: 15 Jun 2006, 11:55
by unpossible
this is pretty special
these guys have come up with a way of letting computers interpret the 3d world from a 2d image
http://www.cmu.edu/PR/releases06/060613_3d.html
the program can take a reasonably simple photograph and build a simple 3d model out of it. that's a big step forward. you can't run it without Matlab though
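(roughly, the trick seems to rest on ground-plane geometry: label regions of the photo as ground / vertical / sky and "fold" the vertical bits up where they meet the ground. here's a back-of-the-envelope Python sketch of that geometry — the camera height, horizon row and focal length are made-up numbers, and this is not the actual Matlab code:)
[code]
def ground_depth(row, horizon_row, f_pixels, cam_height):
    # a ground point at distance z sits cam_height below the camera,
    # so it projects f_pixels * cam_height / z rows below the horizon;
    # invert that to recover z from the image row
    drop = row - horizon_row
    if drop <= 0:
        raise ValueError("row is at or above the horizon, so not on the ground")
    return f_pixels * cam_height / drop

def billboard_height(pixel_height, depth, f_pixels):
    # real height of a vertical surface "popped up" at that depth
    return depth * pixel_height / f_pixels

# example: a wall whose base is 120 rows below the horizon, seen by a
# camera 1.6 m up with an 800-pixel focal length
z = ground_depth(row=520, horizon_row=400, f_pixels=800, cam_height=1.6)
h = billboard_height(pixel_height=150, depth=z, f_pixels=800)
print(f"wall base at {z:.2f} m, wall is {h:.2f} m tall")
[/code]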

Posted: 15 Jun 2006, 16:10
by Zoombie
Cool. Really cool.
Now we can plot our assassination attempts even more easily!
Posted: 15 Jun 2006, 17:37
by Masse
this hopefully means that robots can really see in "3d" some day!
they won't need stupid sensors anymore to see a wall in front of them

Posted: 15 Jun 2006, 20:45
by AF
They already do; the technology has existed for at least a decade and has already been implemented in experimental robots, and even has uses in things such as NASA's Mars rovers and speed cameras.
The CGAL library is capable of creating geometric objects or polygons out of photographs in C++, for those who are interested.
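For the multi-camera case the rovers use, the basic trick is triangulation: the same point shifts between the two cameras by an amount that depends on its depth. A quick Python sketch with made-up numbers (the focal length and baseline are illustrative figures, not actual rover specs):
[code]
def stereo_depth(f_pixels, baseline_m, disparity_px):
    # classic two-camera depth: a point at depth z appears shifted by
    # d = f * B / z pixels between cameras a baseline B apart, so z = f * B / d
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_pixels * baseline_m / disparity_px

# e.g. two cameras 30 cm apart with a 600-pixel focal length seeing a
# feature shifted 12 pixels between the views:
print(stereo_depth(f_pixels=600, baseline_m=0.3, disparity_px=12))  # 15.0 m
[/code]
Note that this needs two viewpoints, which is exactly what a single photograph doesn't give you.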
Posted: 16 Jun 2006, 13:30
by Comp1337
That shit is really impressive!
Posted: 16 Jun 2006, 15:14
by unpossible
AF wrote:They already do; the technology has existed for at least a decade and has already been implemented in experimental robots, and even has uses in things such as NASA's Mars rovers and speed cameras.
The CGAL library is capable of creating geometric objects or polygons out of photographs in C++, for those who are interested.
is it photographs or a single photograph? the point of this was to be able to interpret the geometry from a single image (like we're able to) without having to move about / get a different viewpoint
Posted: 16 Jun 2006, 20:58
by AF
The technology has been available since at least the early '90s; it's in use on the Mars rovers and most experimental robots that can see.
Posted: 16 Jun 2006, 22:42
by Weaver
AF wrote:The technology has been available since at least the early '90s; it's in use on the Mars rovers and most experimental robots that can see.
It's been possible with multiple images for a while, but from one image it is a breakthrough. The reliance on flat planes with strong vertical and horizontal lines in this method would not suit a Mars rover well, as Mars is lacking in those features.
Posted: 16 Jun 2006, 22:58
by unpossible
AF wrote:The technology has been available since at least the early '90s; it's in use on the Mars rovers and most experimental robots that can see.
didn't answer the question

Posted: 16 Jun 2006, 23:08
by BigSteve
That's all well and good... but can this software tell me how many Spring players it takes to change a light bulb?
It's been bugging me for ages ^^
Posted: 16 Jun 2006, 23:16
by Fanger
They'd never change it, Steve. First they'd spend ages debating whether it was balanced or not... and then come up with new features to add to the lightbulb...
Posted: 16 Jun 2006, 23:24
by Aun
Fanger wrote:They'd never change it, Steve. First they'd spend ages debating whether it was balanced or not... and then come up with new features to add to the lightbulb...
The switch should be mod-specific... It wouldn't work well with TA mods...
Posted: 16 Jun 2006, 23:24
by Cabbage
add some reclaimable rocks to it!
Posted: 17 Jun 2006, 11:28
by el_muchacho
unpossible wrote:AF wrote:The technology has been available since at least the early '90s; it's in use on the Mars rovers and most experimental robots that can see.
didn't answer the question

The AF chatbot is buggy.
I don't see how the software can infer the depth of a building from a single photograph. And of course, I guess it assumes some symmetry in the building. There is no way one can rebuild the Guggenheim Museum in Bilbao in 3D from a single photograph. Also, the result will heavily depend on the lens of the camera, etc. I doubt it can work well if the image is distorted by a fisheye lens, for example.
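The depth ambiguity is easy to make precise with the pinhole model: scale the whole scene by any factor and the photograph doesn't change, so a single image can't fix depth without extra assumptions (flat walls, known camera height, symmetry...). A quick Python illustration:
[code]
def project(point, f=1.0):
    # pinhole camera: (X, Y, Z) lands at (f*X/Z, f*Y/Z) on the image plane
    X, Y, Z = point
    return (f * X / Z, f * Y / Z)

p = (2.0, 1.0, 5.0)
bigger = tuple(3.0 * c for c in p)   # same scene scaled up 3x
print(project(p))        # (0.4, 0.2)
print(project(bigger))   # (0.4, 0.2) -- identical pixel, triple the depth
[/code]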
Posted: 17 Jun 2006, 18:12
by AF
Most life forms we know of that can see with multiple eyes can still judge depth with only one eye open, just as they would with both.
How else do you know that a tree in the distance in a photograph is actually in the distance? Or that a Peewee far away is nearer to you than a Peewee even further away on a computer screen? After all, two-eye depth perception won't work on screens and photographs, since both eyes simply see the same image; if that were all there was to it, we'd interpret the scene as a flat screen and say all the objects on it are equidistant from us.
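One of those single-eye cues can even be written down: if you know roughly how big something is, its apparent size tells you how far away it is. A Python sketch (the 3 m Peewee height and 800-pixel focal length are made-up figures):
[code]
def distance_from_known_size(real_height_m, image_height_px, f_pixels):
    # an object of real height H at depth z appears f * H / z pixels tall,
    # so z = f * H / image_height
    return f_pixels * real_height_m / image_height_px

f = 800  # focal length in pixels (made up)
print(distance_from_known_size(3.0, 60, f))  # 40.0 m away
print(distance_from_known_size(3.0, 30, f))  # 80.0 m -- smaller means farther
[/code]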