Computers can understand the geometry of their surroundings

unpossible
Posts: 871
Joined: 10 May 2005, 19:24

Computers can understand the geometry of their surroundings

Post by unpossible »

This is pretty special: these guys have come up with a way of letting computers interpret the 3D world from a 2D image :o

http://www.cmu.edu/PR/releases06/060613_3d.html

The program can take a reasonably simple photograph and build a simple 3D model out of it. That's a big step forward. You can't run it without Matlab, though :(
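
From the release, the system appears to label regions of the photo as ground, vertical surfaces or sky and then fold the vertical parts up where they meet the ground, like a pop-up book. For a feel for how depth can come out of a single view at all, here is a minimal sketch of one cue such an approach can lean on: a pinhole camera at a known height above flat ground. The focal length, camera height and pixel offsets are invented numbers, not anything taken from the CMU work.

```cpp
#include <iostream>

// Single-image depth under a ground-plane assumption: for a pinhole camera at
// height H above flat ground, looking roughly horizontally, a pixel that sits
// v rows below the horizon line corresponds to a ground point at depth
// Z = f * H / v (similar triangles).
double ground_depth(double focal_px, double cam_height_m, double rows_below_horizon) {
    return focal_px * cam_height_m / rows_below_horizon;
}

int main() {
    const double f = 800.0;  // assumed focal length in pixels
    const double H = 1.6;    // assumed camera height in metres

    // The lower an object's ground-contact point sits in the image,
    // the closer that object is to the camera.
    std::cout << ground_depth(f, H, 200.0) << " m\n";  // 6.4 m away
    std::cout << ground_depth(f, H, 50.0)  << " m\n";  // 25.6 m away
    return 0;
}
```

Once every vertical region has a depth at the line where it touches the ground, it can be stood up as a flat billboard, which seems to be roughly the kind of model the program spits out.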
Zoombie
Posts: 6149
Joined: 15 Mar 2005, 07:08

Post by Zoombie »

Cool. Really cool.

Now we can plot our assassination attempts even more easily!
Masse
Damned Developer
Posts: 979
Joined: 15 Sep 2004, 18:56

Post by Masse »

This hopefully means that robots will really be able to see in "3D" some day!
They won't need extra sensors anymore just to notice a wall in front of them :-)
AF
AI Developer
Posts: 20687
Joined: 14 Sep 2004, 11:32

Post by AF »

They already do. The technology has existed for at least a decade, has already been implemented in experimental robots, and even has uses in things such as NASA's Mars rovers and speed cameras.

The CGAL library is capable of creating geometric objects or polygons out of photographs in C++, for those who are interested.
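
To be precise, CGAL itself works on geometric primitives (points, segments, polygons) rather than on image files, so you would first extract 2D points from the photograph with some image-processing step. A minimal sketch of the geometric half, building a convex polygon from a handful of 2D points (the coordinates are made up):

```cpp
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/convex_hull_2.h>
#include <iostream>
#include <iterator>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef K::Point_2 Point_2;

int main() {
    // 2D points that might have come from corner detection on a photo (made up)
    std::vector<Point_2> pts = {
        Point_2(0, 0), Point_2(120, 4), Point_2(118, 95),
        Point_2(60, 40), Point_2(3, 90)
    };

    // Convex hull: the smallest convex polygon containing all the points
    std::vector<Point_2> hull;
    CGAL::convex_hull_2(pts.begin(), pts.end(), std::back_inserter(hull));

    std::cout << "polygon with " << hull.size() << " vertices:\n";
    for (const Point_2& p : hull)
        std::cout << "  (" << p.x() << ", " << p.y() << ")\n";
    return 0;
}
```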
Comp1337
Posts: 2434
Joined: 12 Oct 2005, 17:32

Post by Comp1337 »

That shit is really impressive!
unpossible
Posts: 871
Joined: 10 May 2005, 19:24

Post by unpossible »

AF wrote:They already do. The technology has existed for at least a decade, has already been implemented in experimental robots, and even has uses in things such as NASA's Mars rovers and speed cameras.

The CGAL library is capable of creating geometric objects or polygons out of photographs in C++, for those who are interested.
Is it photographs or a single photograph? The point of this was to be able to interpret the geometry from a single image (like we're able to), without having to move about / get a different viewpoint.
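
For contrast, the "move about / get a different viewpoint" approach is ordinary stereo triangulation: with two views a known distance apart, depth falls straight out of the disparity between matching pixels. A tiny sketch, assuming rectified cameras and made-up numbers:

```cpp
#include <iostream>

// Two-view depth: for rectified stereo cameras with focal length f (pixels)
// and baseline B (metres), a feature seen at column x_left in one image and
// x_right in the other has depth Z = f * B / (x_left - x_right).
double stereo_depth(double focal_px, double baseline_m, double disparity_px) {
    return focal_px * baseline_m / disparity_px;
}

int main() {
    const double f = 700.0;  // assumed focal length in pixels
    const double B = 0.12;   // assumed 12 cm baseline between the two cameras

    std::cout << stereo_depth(f, B, 35.0) << " m\n";  // large disparity -> near (2.4 m)
    std::cout << stereo_depth(f, B, 7.0)  << " m\n";  // small disparity -> far (12 m)
    return 0;
}
```

The whole point of the CMU work is doing without that second view, which is why the single-photograph question matters.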
AF
AI Developer
Posts: 20687
Joined: 14 Sep 2004, 11:32

Post by AF »

The technology has been available since at least the early '90s; it's in use on the Mars rovers and on most experimental robots that can see.
Weaver
Posts: 644
Joined: 07 Jul 2005, 21:15

Post by Weaver »

AF wrote:The technology has been available since at least the early '90s; it's in use on the Mars rovers and on most experimental robots that can see.
It's been possible with multiple images for a while, but doing it from one image is a breakthrough. This method's reliance on flat planes with strong vertical and horizontal lines would not suit a Mars rover well, as Mars is lacking in those features.
unpossible
Posts: 871
Joined: 10 May 2005, 19:24

Post by unpossible »

AF wrote:The technology has been available since at least the early '90s; it's in use on the Mars rovers and on most experimental robots that can see.
That didn't answer the question :shock:
BigSteve
Posts: 911
Joined: 25 Sep 2005, 12:56

Post by BigSteve »

That's all well and good... but can this software tell me how many Spring players it takes to change a light bulb?
It's been bugging me for ages ^^
Fanger
Expand & Exterminate Developer
Posts: 1509
Joined: 22 Nov 2005, 22:58

Post by Fanger »

They'd never change it, Steve. First they'd spend ages debating whether it was balanced or not... and then come up with new features to add to the light bulb...
Aun
Posts: 788
Joined: 31 Aug 2005, 13:00

Post by Aun »

Fanger wrote:They'd never change it, Steve. First they'd spend ages debating whether it was balanced or not... and then come up with new features to add to the light bulb...
The switch should be mod-specific... It wouldn't work well with TA mods...
Cabbage
Posts: 1548
Joined: 12 Mar 2006, 22:34

Post by Cabbage »

Add some reclaimable rocks to it!
el_muchacho
Posts: 201
Joined: 30 Apr 2005, 01:06

Post by el_muchacho »

unpossible wrote:
AF wrote:The technology has been available since at least the early '90s; it's in use on the Mars rovers and on most experimental robots that can see.
That didn't answer the question :shock:
The AF chatbot is buggy. :wink:

I don't see how the software can determine the depth of a building from a single photograph, and I guess it has to assume some symmetry in the building. There is no way one could rebuild the Guggenheim museum of Bilbao in 3D from a single photograph. Also, the result will depend heavily on the camera's lens and so on; I doubt it can work well if the image is distorted by a fisheye lens, for example.
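
On the lens point: camera distortion is usually modelled as a radial term on top of the pinhole model, and if it isn't corrected, straight lines bow and any plane-based reconstruction suffers. A rough sketch of the standard one-coefficient radial model (the coefficient value is invented; real fisheye lenses need stronger models than this):

```cpp
#include <iostream>

// One-coefficient radial distortion: an undistorted (normalised) image point
// (x, y) maps to a distorted point (x, y) * (1 + k1 * r^2), where r^2 = x^2 + y^2.
void distort(double x, double y, double k1, double& xd, double& yd) {
    const double r2 = x * x + y * y;
    const double scale = 1.0 + k1 * r2;
    xd = x * scale;
    yd = y * scale;
}

int main() {
    const double k1 = -0.25;  // invented barrel-distortion coefficient
    double xd, yd;

    distort(0.05, 0.05, k1, xd, yd);        // near the image centre: barely moves
    std::cout << xd << ", " << yd << "\n";

    distort(0.8, 0.6, k1, xd, yd);          // near the edge: shifts noticeably
    std::cout << xd << ", " << yd << "\n";  // this bending is what breaks straight lines
    return 0;
}
```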
AF
AI Developer
Posts: 20687
Joined: 14 Sep 2004, 11:32

Post by AF »

Most sighted life forms we know of that have two eyes can still judge depth with only one eye open, much as they would with both.

How else do you know that a tree in the distance in a photograph is actually in the distance? Or that a faraway Peewee on a computer screen is nearer to you than an even more distant one? Two-eye depth perception won't work on screens and photographs, since both eyes see the same image; if that were all there was to it, we'd interpret the scene as a flat screen and say all the objects on it are equidistant from each other.
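
That point can be made concrete with the relative-size cue: if two Peewees have the same physical height, the one drawn smaller on screen must be further away, and the depth ratio is just the inverse of the ratio of their on-screen heights. A tiny sketch under a pinhole-camera assumption, with invented pixel heights:

```cpp
#include <iostream>

// Pinhole projection: an object of real height S at depth Z appears with image
// height h = f * S / Z.  For two objects of the same real height,
// Z_far / Z_near = h_near / h_far, so apparent size alone gives relative depth.
double relative_depth(double image_height_near_px, double image_height_far_px) {
    return image_height_near_px / image_height_far_px;  // Z_far / Z_near
}

int main() {
    // Two same-sized units drawn 120 px and 30 px tall on screen (made-up numbers)
    std::cout << "the smaller one is " << relative_depth(120.0, 30.0)
              << "x further away\n";  // prints 4
    return 0;
}
```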