Let's Make Robots!

Discussion of object recognition with distance sensors

I would just like to open the floor to any thoughts on what code it would take for a robot to do a sweep with a distance sensor and get an idea of what it is looking at overall. We have seen object recognition (edge finding) a couple of different ways, and code to send a bot to open areas. What I am wondering about now is checking distances at different points of the sweep and (here's the important thing) figuring out what it is looking at.

Now I am looking for broad ideas here, folks. Simply, can a robot (the kinds we make here, programmed in BASIC) know what a corner is? Can it see that there is a corner of a box jutting out, or an opening? I figure it could simply store all the numbers coming from the sensor, but it is the comparison of that data that is key. For example, the bot could average everything it saw and then go back through all the numbers to find that a few in the middle were a lot higher than the average. Or it could notice that the numbers slowly increase/decrease before they go to infinity --thus an outside corner that it is seeing at an angle. Stuff like that.

I am NOT looking for any code here. DO NOT GIVE ME CODE! --I just need a direction to go to start thinking about this.

***Big, nice, overall ideas here, folks***

 


Localization is definitely something I'm going to try to implement. I would like Crum to be able to know when he's in an area that has already been explored and update his position using the map that has already been built (SLAM); that should help to remove some odometry errors. I would also like to be able to load a map, either on the remote PC or into onboard memory, and have Crum work out where he is.
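
Not code for Crum specifically, just a toy sketch in Python (the cell size, the `visited` set, and the function names are all made up for illustration) of the smallest piece of that idea: remembering which patches of floor have already been covered and noticing when the position estimate lands back in one of them. Real SLAM would also correct the pose estimate; this only shows the bookkeeping.

```python
# Hypothetical sketch: track which grid cells the bot has already visited,
# so it can notice when it re-enters explored territory.
CELL_SIZE_CM = 20          # one grid cell covers a 20 x 20 cm patch (assumed)

visited = set()            # set of (col, row) cells the bot has passed through

def cell_of(x_cm, y_cm):
    """Convert an estimated position (cm) into a grid cell index."""
    return (int(x_cm // CELL_SIZE_CM), int(y_cm // CELL_SIZE_CM))

def update_position(x_cm, y_cm):
    """Call this each time the odometry estimate is updated.
    Returns True if the bot is back in a cell it has seen before."""
    cell = cell_of(x_cm, y_cm)
    revisit = cell in visited
    visited.add(cell)
    return revisit

# Example: drive out and come back
for x, y in [(0, 0), (30, 0), (60, 0), (30, 0)]:
    if update_position(x, y):
        print("Back in explored territory near", (x, y))
```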

Thanks!  

You're right of course, must've been half asleep...

Cheating? I understand your point if it's a personal goal, but I for one cannot find my way around the city without a Google map. So I have surrendered to the fact that part of my brain is on some googleplex machine out there. So I guess nature "cheats"... So, I can find a grocery store now.

It works for me; however, I understand this dependency can have some disadvantages. For example, if the googleplex machine were unplugged, I would starve, and you would be eating high on the hog.

Geeze, I hope they don't unplug it, I'm hungry already :P

It's a personal goal. It sorta falls into the category of whether to call the remote-controlled demolishing machines of Robot Wars actual robots.

 

If there were no person behind a joystick, my definition of "robot" would be satisfied.

And my money would be on the one that has "Borg'ed" out telemetry going to the Dept. of Defence mainframe :)

OK, you got me. There is something strangely cool, in an SF kind of way, about having a remote brain. Still, my goal is to get the tiniest, simplest chip and squeeze all the processing power out of it in order to get a bot going.

Sir,
I humbly kowtow to your uber techy skills and noble quest of assembly enlightenment. 

Just wanted to suggest a remote brain through telemetry as an option, especially for those of us who are not as skilled in the micros as yourself (i.e. me) and need a big fat canvas of memory and processing power to support the weight of my bloated code!

(humbly leaves the room with much genuflecting)

I did the great Frits! test and got 1 point, which makes me an arty. But this discussion doesn't really apply here, because CTC has built Walter in such a way that it could carry two full-size tower PCs and his whole family without breaking a sweat.

This is a tricky question. Different sensors react differently to objects. If you put a box in a room, you could move around that box just fine; it appears solid. But if you put that box on its side with the opening now facing the bot, how will it know what it is? It no longer appears to take up the same space. An IR sensor would most likely be able to navigate the box a bit easier than a ping sensor (ping bounces off angles all over the inside of a box), especially at an angle.

I know this doesn't help, but it was just a thought I had when reading the replies...

I don't see how this could be done without mapping of some sort, even if it's just wheel encoders. I mean, stick yourself in a room and blindfold yourself. Have someone spin you around and start walking forward. You'll hit something, but if you counted your steps, you'd know how to get back to your point of origin. Without mapping you'd be going blind all the time and wouldn't be able to recognize objects that you've come across.
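
For what it's worth, the "count your steps" part is just vector bookkeeping. Here's a rough sketch in Python (the ticks-per-cm figure and the function names are invented; a real bot would feed in headings and tick counts from its own hardware): add up each move as a vector, and the bot always knows the straight-line bearing and distance back to where it started.

```python
import math

# Assumed robot geometry -- adjust for real hardware
TICKS_PER_CM = 4.0          # encoder ticks per cm of travel (hypothetical)

x, y = 0.0, 0.0             # position relative to the starting point (cm)
heading = 0.0               # current heading in degrees, 0 = starting direction

def turned(degrees):
    """Record a turn in place (positive = counter-clockwise)."""
    global heading
    heading = (heading + degrees) % 360.0

def drove(ticks):
    """Record a straight move of `ticks` encoder counts at the current heading."""
    global x, y
    dist = ticks / TICKS_PER_CM
    x += dist * math.cos(math.radians(heading))
    y += dist * math.sin(math.radians(heading))

def way_home():
    """Bearing (deg) and distance (cm) straight back to the start point."""
    return math.degrees(math.atan2(-y, -x)) % 360.0, math.hypot(x, y)

# Blindfolded walk: spin, go forward twice, then ask the way back
turned(90)
drove(200)
turned(-45)
drove(120)
print("to get home, face %.0f deg and travel %.0f cm" % way_home())
```

It drifts, of course (wheel slip, turn error), which is where the map-correction ideas above come in, but it is enough to find the point of origin in the blindfold sense.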

What is the minimal set of components you were looking at using? This would help to generate ideas.

 

 

Hmmm, don't pattern-recognising gizmos exist that could be hacked into a Picaxe project? Like a speech recognition module via I2C. I doubt it would object to working on very different data, as long as it were properly trained.

You could even go crazy and have it analyze the echoes from the room. Just feed it an audio sample off a mic after sending some pulse. Like an omnidirectional SRF, but with recognition rather than ToF calculations. BatBot?

Just an extra wild thought.
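
Running with that wild thought, a toy version of "recognition rather than ToF" could be as crude as binning the energy of a recorded echo and matching it against signatures captured earlier in known spots. A sketch in plain Python (no real mic or ping here; the sample lists are stand-ins for whatever would come off the ADC after a pulse, and the labels are made up):

```python
# Toy "BatBot" matcher: compare a crude echo signature against stored ones.
NUM_BINS = 8

def signature(samples):
    """Energy per time bin of one recorded echo (a list of numbers)."""
    n = max(1, len(samples) // NUM_BINS)
    return [sum(s * s for s in samples[i:i + n]) for i in range(0, n * NUM_BINS, n)]

def closest(sig, library):
    """Return the label of the stored signature closest to `sig`."""
    def dist(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(library, key=lambda label: dist(sig, library[label]))

# Pretend we recorded echoes in two known places earlier...
library = {
    "open corner": signature([0, 0, 1, 3, 2, 1, 0, 0] * 4),
    "inside box":  signature([5, 4, 4, 3, 3, 2, 2, 1] * 4),
}
# ...and now hear a new echo somewhere:
print(closest(signature([4, 4, 3, 3, 2, 2, 1, 1] * 4), library))
```

Whether a speech module or a tiny micro could be trained to do the same thing is another question, but the matching itself is dead simple.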

I dig the "surroundings recognition" stuff, a lot, but I really don't have a problem with the robot drawing a map along the way. I do want to avoid the "robot's in a maze with only 90-degree walls, and an MIT student, and GPS and crap" kind of mapping. --Maybe just, "Oh yeah, I almost ran into that last time through"...