Let's Make Robots!


Navigates, draws maps

I have got the sonar sensor working and scanning, and added its readings to the data sent to the PC.  I have also written a very basic obstacle avoidance program: the robot simply turns for a set time whenever an obstacle is sensed.  I am going to develop this program so that the sonar data is also used to decide which way to turn, and I would also like to send commands from the PC telling the robot which way to turn so that unexplored areas of the map can be explored.  For now I have added a video of Crum producing a map using the sonar data as well as the IR sensor data.  I have also altered the mapping software slightly so that the shade of grey displayed on the map reflects the probability of an obstacle in that cell (not very clear in the video), and added the ability to change the scale of the map and the resolution of the occupancy grid.
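As a rough sketch of the grey-shading idea, assuming each cell stores an occupancy probability between 0 and 1 (the function name and the exact mapping are my own illustration, not taken from Crum's actual software):

```python
def occupancy_to_grey(p):
    """Map occupancy probability (0 = free, 1 = occupied) to an 8-bit
    grey level, with 255 = white (free) and 0 = black (occupied)."""
    p = min(max(p, 0.0), 1.0)          # clamp against accumulated rounding
    return int(round((1.0 - p) * 255))

print(occupancy_to_grey(0.0))   # 255 -> white, confidently free
print(occupancy_to_grey(0.5))   # 128 -> mid grey, unknown
print(occupancy_to_grey(1.0))   # 0   -> black, confidently occupied
```

Cells the robot has never observed sit near 0.5 and so come out mid grey, which is what makes unexplored areas easy to spot on the map.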

So my mapping robot is dead, robbed for parts!  It could only move on flat, smooth surfaces and I wanted something that could drive around the house.  Meet Crum.  I've built Crum to continue my endeavour to create a robot that can draw a map.  The body is made of expanded PVC (an idea taken from several robots posted recently, thanks guys!).  It's armed with three IR sensors and a sonar sensor.  The old mapping robot used just IR sensors, and I think the addition of the sonar will help with the mapping and the avoiding of chair legs!!  I've also included an I2C LCD screen, which looks cool and is very useful when debugging.  Crum has the compass module and the wireless link, same as my old bot.  I've been working on software for the PC end, where data from the robot is received and displayed.  I've decided to try an occupancy grid approach to mapping.  Data from the robot can be used to draw a map, but that's as far as it's gone at the minute.


LCD display
Above is a pic of the LCD screen in all its glory.  I've also got the sonar working this evening, which is nice.

I've played around with the code and managed to get the robot to plot an occupancy grid map, using just the IR sensors for now.  The screenshot below shows the grid being filled with white cells to represent unoccupied space and black cells to show where objects are located.  I'm getting a lot of false or inaccurate readings from the IR sensors at the moment.  I don't think they are very accurate at longer distances, hence the addition of a sonar sensor.
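The grid-filling described above can be sketched roughly as follows: a single range reading implies that the cells along the beam are free and the cell at the end of the beam is occupied.  This is an illustrative reconstruction under my own names and a simple point-sampling scheme, not the actual code:

```python
import math

CELL = 5.0  # cm per grid cell, matching the 5 cm cells mentioned below

def cells_along_beam(x, y, heading_deg, range_cm, step=CELL / 2):
    """For one range reading taken from robot position (x, y) in cm at the
    given heading, return (free_cells, hit_cell) as grid coordinates."""
    th = math.radians(heading_deg)
    free = set()
    d = 0.0
    while d < range_cm:                      # sample points short of the obstacle
        cx = int((x + d * math.cos(th)) // CELL)
        cy = int((y + d * math.sin(th)) // CELL)
        free.add((cx, cy))
        d += step
    hit = (int((x + range_cm * math.cos(th)) // CELL),
           int((y + range_cm * math.sin(th)) // CELL))
    free.discard(hit)                        # the end cell is occupied, not free
    return free, hit

# A 30 cm reading straight ahead from the origin:
free, hit = cells_along_beam(0, 0, 0, 30)
print(hit)   # (6, 0) -- the occupied cell
```

A real implementation would more likely ray-trace with something like Bresenham's line algorithm rather than point sampling, but the idea of "free along the beam, occupied at the end" is the same.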
Update: I've just added a video of the robot drawing a map.  It is currently using only the IR sensors to measure distances to objects.  Each cell in the grid has a value associated with it that is altered whenever the robot sees either an object or free space in that cell; when a threshold is reached the program colours the cell white or black.  I programmed the robot to travel slowly in a square path, with no obstacle avoidance or wall following yet.  It is clear from the video that even over a short distance odometry errors occur, leading the program to think the robot is in a different place than it actually is.
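The per-cell counting scheme just described might look something like this (the threshold value and names are illustrative, chosen for the sketch rather than taken from the real program):

```python
THRESHOLD = 3  # observations needed before a cell is coloured; illustrative

def update_cell(value, saw_object):
    """Nudge a cell's value up on an object sighting, down on free space,
    clamped so one run of noisy readings can't dominate forever."""
    value += 1 if saw_object else -1
    return max(-THRESHOLD, min(THRESHOLD, value))

def cell_colour(value):
    if value >= THRESHOLD:
        return "black"   # confidently occupied
    if value <= -THRESHOLD:
        return "white"   # confidently free
    return "grey"        # not enough evidence yet

v = 0
for _ in range(3):
    v = update_cell(v, saw_object=True)
print(cell_colour(v))   # black
```

Requiring several consistent observations before colouring a cell is what filters out the occasional false IR reading: a single spurious "object" sighting leaves the cell grey.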




Cool robot!  I've got a few questions purely out of curiosity about how it works ... 

How much area is one of the pixels on your map?  If it's say ... 1 square foot, then does the robot mark that pixel (or variable in an array ... whatever) on the map as occupied if there's a table leg there, even if the rest of that square foot is empty?  Also, how does it measure distance?  You said it has a compass which of course lets the program know its heading, but is distance measured only by how long the wheels spin for?  If so, how much margin of error is there with the tires slipping on the floor?  It seems like if Crum was operating for more than a few minutes, he would start being pretty inaccurate.  I'm not trying to criticize ... I'm just curious.  :)  Sorry for all the questions.

At the moment each pixel represents 1 cm, as I was mapping a small area before; this may have to change when I start to map a larger area.  For the occupancy grid each cell is 5 cm by 5 cm; again, this will probably change for a larger area.  If it detects an object in a cell then, like you say, it will mark that cell as occupied, regardless of how large the object in the cell is.  I have been working on this project on and off for quite a while now, but I'm only now getting into the occupancy grid side of it.  If the resolution of the map is high, most objects will not take up much more space on the map than they do in reality, but a high resolution map will take up more memory.  As my map will be stored on a PC, memory will not be too much of an issue, but if it were done onboard the robot memory may be limited, so it's a bit of a balancing act.
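The memory trade-off is easy to put numbers on.  A back-of-envelope sketch (the figures are illustrative, not measurements of the actual software):

```python
def grid_cells(area_m, cell_cm):
    """Number of cells needed to cover a square area of side area_m metres
    at a resolution of cell_cm centimetres per cell."""
    side = int(area_m * 100 / cell_cm)   # cells along one side
    return side * side

# A 10 m x 10 m floor at 5 cm cells:
print(grid_cells(10, 5))   # 40000 cells
# The same floor at 1 cm cells needs 25x as many:
print(grid_cells(10, 1))   # 1000000 cells
```

Even a million one-byte cells is trivial on a PC, but it would swamp the few kilobytes of RAM on a typical microcontroller, which is exactly the balancing act described above.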

You are right about the error that will inevitably be introduced by measuring the wheel rotations.  Accumulated errors can get very large over time, even over a couple of minutes.  When I get the mapping sorted I would like to try to implement a SLAM algorithm to correct for the errors.  From what I've seen it gets pretty complex, but I may get there one day!!

Yeah it sounds complex.  I also have no idea how it works.  :)

Have you considered implementing GPS?  That could, of course, present its own problems though, since I'm assuming you'll mostly want to be using this bot indoors.

Maybe I'm just repeating something you've already considered, but what about having the bot locate itself based on the map?  For example, if it's been running for, say, 2 minutes, return to a place on the map that has already been explored and re-calibrate based on surroundings, so that you have a common "starting point" that reduces the margin of error.  So he returns to the starting point, figures out where he is based on his surroundings, and gets accurate to within a centimetre.  Then he goes back over to an unexplored part of the map, keeps roving, and returns after another 2 minutes.  The "starting point" or "calibration point" could of course be moved over time so that the bot wouldn't have to travel long distances to get back to it, which would otherwise leave anything far from that point very poorly mapped.

Maybe that's overly complex and won't help you at all, or maybe you're way ahead of me in your programming, but it's just a thought.  :) 

That's basically what SLAM is (simultaneous localization and mapping).  The robot corrects accumulated errors by checking where it is in the map.  Check this out if you're interested. 

It just so happens I am interested, thank you.  :D

I haven't done programming since high school (5 years ago) so I'm a little bit rusty and I'm having to re-learn C for Arduino programming, but in a general sense I find programming extremely interesting, especially when it comes to robotics.

Good luck implementing it into your program!

Thanks, I think I'll need all the luck I can get!!! :)

Very nice. I like the way you mounted the IR sensors. Fancy display btw. Can you tell me where you got it?

I look forward to seeing more pictures and screenshots 

The LCD came from here.  It's I2C or serial, which is nice.  More pics and video on the way soon, hopefully.

The iRobot Create seems to do this pretty well.  After spinning around a bunch of times it's still only a few degrees off.


When do we get to see a video now that you have the compass calibrated?!

Also, where did you get the components for Crum's Sketchup?  Like the ultrasonic?