Discussion of object recognition with distance sensors
I would just like to open the floor to any thoughts on what code it would take for a robot to do a sweep with a distance sensor and get an overall idea of what it is looking at. We have seen object recognition (edge finding) done a couple of different ways, and code to send a bot toward open areas. What I am wondering about now is checking distances at different points of the sweep and (here's the important thing) figuring out what it is looking at.
Now I am looking for broad ideas here, folks. Simply put, can a robot (the kinds we make here, programmed in BASIC) know what a corner is? Can it see that there is a corner of a box jutting out, or an opening? I figure it could simply store all the numbers from the sensor, but it is the comparison of that data that is key. For example, the bot could average everything it saw and then go back through all the numbers to find that a few in the middle were a lot higher than the average. Or it could notice that the numbers slowly increase/decrease before they go to infinity -- thus an outside corner that it is seeing at an angle. Stuff like that.
I am NOT looking for any code here. DO NOT GIVE ME CODE! --I just need a direction to go to start thinking about this.
***Big, broad, overall ideas here, folks***