Let's Make Robots!

Universal recharge, navigation and mapping system

WARNING! This blog is just my thoughts on the subject and may be prone to rambling, monologues and contradictions.

I want to develop a universal navigation system for my robots. By universal I mean that the same code should work on almost any robot with only minor modifications to allow for the I/O pins used and the number / type of sensors. This code is being written for the Arduino, but I will document it well enough that it can be easily adapted for other processors.

 

Theory Stage:

Think of your robot as a blind person. The range finder, be it LIDAR, sonar or IR, is like a white cane, allowing your robot to detect an object before colliding with it. Most robots on LMR are at this point now, but that's where they stop. I want to go to the next level.

Once a blind person learns where things are in a room, he has a mental map and can plan his moves. Now that person navigates by dead reckoning and can move about more quickly. The white cane becomes a backup system that provides correction if he drifts off course and a means of detecting changes, such as a chair being moved to a different position.

This ability to map a room and navigate by dead reckoning is the "next level" I wish to develop. I think it is necessary if you want your robot to become useful in the home.

In the case of "Mr. Tidy" I want the robot to wander a room looking for cups / bottles o the floor. The robot should know where to take these things once it has found them and through experience, map of locations where empty cups and bottles are likely to be found so that it can find them more quickly in the future.

In the case of a "pet" robot then you might want it to run to the door and make a barking noise when it hears someone knocking on the door. You don't want the robot to take so long finding the door that the person knocking has already sat down and is having a cup of coffee.

As this will be a "dead reckoning" navigation system it will not rely on compass modules, GPS or any other complex (expensive) sensors. It will be self contained with no wireless links to PC's.

The only exception to the rule of "self contained" I have allowed is a simple, cheap docking station with a single IR LED that will plug into a power point. After all, if I take a robot to a friend's house or on a business trip then I still need to recharge the batteries.

The docking station will consist of a panel with the IR LED in the center. A large brass plate on either side of the LED will provide a positive and negative terminal. The robot has two springs or antennae that make contact with the terminals to recharge the batteries. The docking station will become a reference point at the center of the robot's map.
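
As a very rough sketch of how docking could work (the pin numbers, the use of two angled IR receivers and the drive helpers below are all placeholders I have assumed, not part of the actual design), the robot could steer toward the beacon and stop once the charge contacts close:

```cpp
// Hypothetical docking sketch: steer toward the dock's IR beacon and stop
// once the charge contacts touch the brass plates. Pins and stubs are placeholders.
const int IR_LEFT      = 2;   // IR receiver angled left (output LOW when it sees the beacon)
const int IR_RIGHT     = 3;   // IR receiver angled right (output LOW)
const int CHARGE_SENSE = 4;   // reads HIGH when both contacts are on the plates

void driveForward() { /* both motors forward */ }
void turnLeft()     { /* left motor slow, right motor fast */ }
void turnRight()    { /* right motor slow, left motor fast */ }
void stopMotors()   { /* stop and let the batteries charge */ }

void setup() {
  pinMode(IR_LEFT, INPUT);
  pinMode(IR_RIGHT, INPUT);
  pinMode(CHARGE_SENSE, INPUT);
}

void loop() {
  if (digitalRead(CHARGE_SENSE) == HIGH) {   // docked: stop and charge
    stopMotors();
    return;
  }
  bool beaconLeft  = (digitalRead(IR_LEFT)  == LOW);
  bool beaconRight = (digitalRead(IR_RIGHT) == LOW);
  if (beaconLeft && beaconRight) driveForward();  // beacon straight ahead
  else if (beaconLeft)           turnLeft();      // beacon off to the left
  else if (beaconRight)          turnRight();     // beacon off to the right
  else                           turnRight();     // lost the beacon: spin until reacquired
}
```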

When the robot is taken to a new house or if the docking station is moved to a new room in the same house then the robot will soon realise that it is in a new location and begin making a new map. If the robot has enough memory then it may create several maps.

I will be testing this on 6 different robots.

  • Mr. General - 2x continuous rotation servos - no encoders
  • Mr. Tidy - 2x DC motors - 2x simple encoders
  • Rover 5 with treads - 2x DC motors - 2x quadrature encoders
  • Rover 5 with mecanum wheels - 4x DC motors - 4x quadrature encoders
  • QuadBot chassis - 4x legs / 8x servos - no encoders
  • Chopsticks - 8x legs / 24x servos - no encoders

This covers a wide range of locomotion systems, some with encoders and some without. The maps will be stored in the Arduino's EEPROM or an SD card.

As some of the test robots do not have encoders, I was thinking that instead of mapping the rooms by distance, I would try mapping them by the time taken to travel from one point to another at a given speed.

Robots that do have encoders can then measure their exact speed for more accurate navigation.
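
As a sketch of how the two approaches could share the same map units (the constants here are made-up placeholders, not measured values), distance can be estimated either from elapsed time at a nominal cruise speed or from encoder counts:

```cpp
// Hypothetical distance estimates in map units.

// Time-based: assumes the robot's cruise speed has been measured beforehand.
const float UNITS_PER_SECOND = 10.0;   // placeholder: map units covered per second at cruise speed

float distanceByTime(unsigned long startMs, unsigned long endMs) {
  return (endMs - startMs) / 1000.0 * UNITS_PER_SECOND;
}

// Encoder-based: assumes a known number of encoder ticks per map unit.
const float TICKS_PER_UNIT = 333.0;    // placeholder: depends on wheel size and encoder resolution

float distanceByEncoders(long leftTicks, long rightTicks) {
  // Average the two wheels so a slight curve still gives a sensible distance.
  return ((leftTicks + rightTicks) / 2.0) / TICKS_PER_UNIT;
}
```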

 


 

08-07-2011

Ro-Bot-X found a good link about mapping that describes my approach better. I did not know the correct terminology as I had not read about it before.

My system is based on "Topological" mapping. Junctions and objects will in future be referred to as "Nodes". The X-Y co-ordinates of these nodes relative to the docking station will be measured in different units depending on the robot's design.

Robots such as Mr. General that use continuous rotation servos for locomotion with no encoders will use units of time to measure the distance assuming a given speed.

Robots such as Mr. Tidy and Rover 5 that have encoders can also use time as a unit of measure, with the encoders used to precisely control the speed of the motors and reduce error. Alternatively, they can measure the distance travelled based on the number of wheel rotations.

Robots that walk using legs such as QuadBot or Chopsticks can use their steps as a unit of measure. This is a method often used by blind people.
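
Whatever the unit, a node could be stored as something like the struct below (a sketch only; the field names and the small fixed-size array are my own assumptions). The same structure works whether X and Y are seconds of travel, wheel rotations or steps, because the unit is fixed per robot rather than per node.

```cpp
// Hypothetical topological map node. X and Y are offsets from the docking
// station in whatever unit the robot uses (time, wheel rotations or steps).
struct Node {
  int x;           // signed offset from the dock along X
  int y;           // signed offset from the dock along Y
  byte nodeType;   // e.g. 0 = junction, 1 = doorway, 2 = drop-off point ...
};

const byte MAX_NODES = 64;   // placeholder: limited by available EEPROM / RAM
Node mapNodes[MAX_NODES];
byte nodeCount = 0;
```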

Lost or a new map?
When your robot cannot find things in their expected locations it must determine whether it is lost, perhaps due to a mischievous owner picking it up and turning it around, or whether it is in a new location.

NOTE: Doors must be differentiated in the map somehow otherwise the robot will think there are walls in funny places, decide it is in a new location and start re-mapping.

If the robot has just been turned around or just turned on then it will have no idea where it is. At this point it must wander using obstacle avoidance mode until it finds the docking station so it can orientate itself.

Let's assume I am using Mr. Tidy to move cups it finds randomly around the house to a specific location. If I just pick him up and turn him around then once he has relocated the docking station he will find all the nodes are where they should be (allowing for a small percentage of error) and he can continue as normal.

If I move his docking station or take him to a new house then even when he knows where the docking station is, few if any nodes in his memory will be where they are supposed to be. Depending on how much memory is available, Mr. Tidy can either start a new map or rewrite the old one.

A way to teach your robot different areas of a house would be to use a TV remote. As the robot will use an IR receiver to locate and align with the docking station, it can also recognise signals from a TV remote. When the robot is in a specific area, press a button on the remote (for example, use the numbers 0-9 to define 10 different rooms in the house). Upon receiving the signal the robot can store its present location.

Let's say you're programming your robot to go to the door and bark like a dog when it hears a knock at the door. You might use your TV remote to designate the front door as node 1 by pressing the number 1 button on the remote. After that, whenever the robot hears a knock on the door it could start barking and go to node 1 in its map.
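
A rough sketch of that teaching step, assuming the classic (2.x) API of the widely used IRremote library, a placeholder receiver pin and made-up button codes, might simply store the current dead-reckoned position against whichever digit was pressed:

```cpp
#include <IRremote.h>            // classic (2.x) IRremote API

const int RECV_PIN = 11;         // placeholder pin for the IR receiver
IRrecv irrecv(RECV_PIN);
decode_results results;

// Current dead-reckoned position, updated elsewhere by the navigation code.
int currentX = 0, currentY = 0;

// One taught location per remote digit 0-9.
int taughtX[10], taughtY[10];
bool taught[10] = {false};

// Placeholder: translate a received code into a digit 0-9, or -1 if it is
// not one of the digit buttons. The codes depend entirely on your remote.
int codeToDigit(unsigned long code) {
  return -1;
}

void setup() {
  irrecv.enableIRIn();           // start the IR receiver
}

void loop() {
  if (irrecv.decode(&results)) {
    int digit = codeToDigit(results.value);
    if (digit >= 0) {            // a digit button was pressed: remember where we are
      taughtX[digit] = currentX;
      taughtY[digit] = currentY;
      taught[digit] = true;
    }
    irrecv.resume();             // ready for the next code
  }
}
```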

 


Planning stage:

 

Below is a very crude layout of my apartment as the robot would see it. Pale yellow objects are furniture without legs. Pale green circles are table legs, chair legs etc. Red lines are doors that could be open or closed at different times of day.

The light blue circle is the docking station. The orange circle represents an area where the robot is most likely to find cups and empty bottles to be collected. The green circle is where empty bottles should go for recycling. The purple circle is where empty cups can be taken. The pink circle is where socks should be taken.

The first thing I see is that my topological map really needs to have zones as well as nodes. A zone will basically be a room or an area within a room. In my map above I would want the robot to recognise the table and chair legs as a zone within the dining room to be avoided. Probably the best way to map a zone is as two nodes that represent diagonally opposite corners of a rectangular area.
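
With that corner-pair representation, a zone and a simple "is this point inside the zone" test might look like the sketch below (the struct and field names are my own, and the units are the same as for the nodes):

```cpp
// Hypothetical zone stored as two diagonally opposite corners of a rectangle.
struct Zone {
  int x1, y1;      // one corner
  int x2, y2;      // the diagonally opposite corner
  byte zoneType;   // e.g. 0 = keep out (table legs), 1 = likely cup area ...
};

// True if a point lies inside the zone, whichever corner was stored first.
bool inZone(const Zone &z, int x, int y) {
  int minX = min(z.x1, z.x2), maxX = max(z.x1, z.x2);
  int minY = min(z.y1, z.y2), maxY = max(z.y1, z.y2);
  return (x >= minX && x <= maxX && y >= minY && y <= maxY);
}
```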

 

1st step in mapping:
In this diagram you can see the path my robot would take when mapping the apartment for the first time using a typical "follow the right-hand wall" search pattern. Each arrow represents a point where the robot had to change direction.

It should be noted that this process of gathering raw data by following a wall has the added advantage that it can be used for robot calibration. Most walls are pretty straight and most houses have the walls at right angles. As the robot tries to maintain a set distance from the wall the left/right motor speeds can be calibrated for traveling in a straight line and making 90 degree turns.

This first stage of mapping will not work with every home. If, for example, there is a corridor surrounding a room or group of rooms then the robot will not find the surrounded group by following a wall. As this mapping idea will be self-updating, you could just wait until the robot stumbles across the missing rooms while traveling from A to B.

Another solution is for the robot to recognise unusually large empty areas and cut through the middle of them once it has finished following the wall.

There are about 64 of these direction-change points. In my case I am programming my robot to only go into large areas (at least the width of a small door frame). This prevents the robot from getting into tight corners and forces it to group table and chair legs that are close together into a single object.

Initially I would have my robot store these points where it had to change direction as 3-byte nodes (9-bit X co-ordinate, 9-bit Y co-ordinate and a 6-bit description). This would only take up 192 bytes of data. Unfortunately this "raw" data is not very easy to work with. We need to process it to make a useful map.
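
For reference, packing and unpacking such a node might look like the sketch below. The exact bit layout is my own choice; only the 9 + 9 + 6 = 24-bit budget comes from the text above.

```cpp
// Pack a node into 3 bytes: 9-bit X, 9-bit Y and a 6-bit description code.
// X and Y are treated as unsigned here (0-511).
void packNode(unsigned int x, unsigned int y, byte desc, byte out[3]) {
  unsigned long bits = ((unsigned long)(x & 0x1FF) << 15) |
                       ((unsigned long)(y & 0x1FF) << 6)  |
                       (desc & 0x3F);
  out[0] = (bits >> 16) & 0xFF;
  out[1] = (bits >> 8)  & 0xFF;
  out[2] = bits & 0xFF;
}

void unpackNode(const byte in[3], unsigned int &x, unsigned int &y, byte &desc) {
  unsigned long bits = ((unsigned long)in[0] << 16) |
                       ((unsigned long)in[1] << 8)  |
                       (unsigned long)in[2];
  x    = (bits >> 15) & 0x1FF;
  y    = (bits >> 6)  & 0x1FF;
  desc = bits & 0x3F;
}
```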

At this point we should define what information is considered useful. The robot should know what room it is in, where it needs to go, and roughly how to get there. This does not require the dimensions and shape of the rooms. The robot only needs to know what room it's in and what doors it needs to pass through to get to the required location.

As long as it knows roughly what direction to travel in, the robot's standard obstacle avoidance routines should be able to handle minor problems like the family pet lying in the middle of the room.

A bigger obstacle will be finding a door closed. If the robot cannot find an alternative route then it must signal for assistance or wait patiently until the door is opened.

Other information that is necessary would be noteworthy objects or locations. In my map I've marked some locations where the robot should place objects it has picked up depending on the size, weight and colour of the object.

One location that the robot should be able to determine for itself is the location where it is most likely to find empty cups or bottles. This may vary from time to time so the robot may need to store multiple locations and even delete locations that have not been active for an extended period of time.

Creating a map from the raw data:
In the map below I have circled the doorways. These are crucial to the map. If you think of the rooms and objects as cities on a map then the doorways represent the highways linking these cities.
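
To make the "cities and highways" picture concrete, here is a hedged sketch (room numbering, array sizes and function names are all my own) that treats rooms as graph vertices, doorways as edges, and finds a route with a simple breadth-first search:

```cpp
// Hypothetical room graph: rooms are vertices, doorways are edges.
const byte NUM_ROOMS = 8;
// connected[a][b] is true if rooms a and b share a doorway.
bool connected[NUM_ROOMS][NUM_ROOMS];

// Breadth-first search from startRoom to goalRoom.
// Fills route[] with the room sequence and returns its length, or 0 if no path.
byte findRoute(byte startRoom, byte goalRoom, byte route[], byte maxLen) {
  int cameFrom[NUM_ROOMS];
  for (byte i = 0; i < NUM_ROOMS; i++) cameFrom[i] = -1;
  byte toVisit[NUM_ROOMS];
  byte head = 0, tail = 0;
  toVisit[tail++] = startRoom;
  cameFrom[startRoom] = startRoom;

  while (head < tail) {
    byte room = toVisit[head++];
    if (room == goalRoom) break;
    for (byte next = 0; next < NUM_ROOMS; next++) {
      if (connected[room][next] && cameFrom[next] == -1) {
        cameFrom[next] = room;        // remember how we reached this room
        toVisit[tail++] = next;
      }
    }
  }
  if (cameFrom[goalRoom] == -1) return 0;   // no route found

  // Walk backwards from the goal, then reverse into route[].
  byte rev[NUM_ROOMS];
  byte len = 0;
  for (int r = goalRoom; ; r = cameFrom[r]) {
    rev[len++] = r;
    if (r == (int)startRoom) break;
  }
  if (len > maxLen) return 0;               // caller's buffer too small
  for (byte i = 0; i < len; i++) route[i] = rev[len - 1 - i];
  return len;
}
```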

 

Comments:


Have you made any more progress on this? I'm interested in implementing this in my next project.

Hi Oddbot, this may be way off as I am very new, but is there a way to put a non-powered object (reflector, colour patch, mirror, whatever) on, say, a velcro strip on the bottom of a door frame or other landmark? It would use no power but would let the robot see it and know it's a doorway, stairway, cat or whatever, and then it would know where it was, or at least how to behave upon seeing such a thing. If it was simple and cheap anyone could use it, and it would provide reference points for the robot even if you picked it up and moved it. Best of all might be to have several landmarks: door means door, stair means stair, cat means annoy. Like I said, I am new and have learned a lot from you and this site. I want to build something to patrol my yard and greet friends, and this mapping idea seems like the way to go.

Well, my main goal here is to avoid such things if possible because I want to be able to take my robot to different places without having to set up any markers. I just want to arrive at a new place, plug in the docking station and turn it on.

Because you want a system that works outdoors you cannot use IR sensors unless your robot will only work at night. By replacing the IR LED on the docking station with a laser you can probably get this system working in daylight. I would de-focus the laser slightly so it can be detected over a wider angle. This will also reduce the intensity a bit so that a dog or cat won't be hurt if they briefly glance at it.

If you want to mark locations with a sticker then I would look at a barcode system. Get a reflector strip and make a binary pattern on it with black electrical tape or paint. Get a phototransistor that is sensitive to 850nm IR and use a red laser pointer to scan the strip. Mount the laser and the phototransistor on a servo to perform the scan.
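
A very rough sketch of such a scan, assuming the standard Servo library, an analog phototransistor input, and placeholder pin numbers, sweep angles and threshold (all of which would need tuning by experiment):

```cpp
#include <Servo.h>

Servo scanServo;                 // carries the laser and the phototransistor
const int SERVO_PIN = 9;         // placeholder pin
const int PHOTO_PIN = A0;        // phototransistor with pull-down, placeholder pin
const int THRESHOLD = 512;       // reflectance threshold: tune by experiment

void setup() {
  scanServo.attach(SERVO_PIN);
  Serial.begin(9600);
}

void loop() {
  // Sweep across the strip and read one "bit" per degree of servo travel.
  for (int angle = 60; angle <= 120; angle++) {
    scanServo.write(angle);
    delay(20);                                    // let the servo settle
    int level = analogRead(PHOTO_PIN);
    Serial.print(level > THRESHOLD ? '1' : '0');  // reflective = 1, black tape = 0
  }
  Serial.println();
  delay(2000);                                    // pause before the next scan
}
```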

It should be possible to read the laser in sunlight but you will need to experiment a bit.

So far I am stuck on finding a good way for the robot to define a doorway and/or passage that links two rooms. Once I get that sorted out I will update this page.

How about creating 'maybe' fields on the map?

What I mean is: on the first pass the robot won't see whether that's a leg, a door or a real wall. If it's stuck somewhere, it'll wait and then do the rounds again... if it encounters different patterns, it's either because it's in a new location (or got lost) or the differences are caused by changes in the environment... So the robot should determine whether it's a new map or not, and if not, just save the difference as a maybe...

Next time, if it's stuck, it should wait next to the maybe in its way and start the next round by checking that first.

Unfortunately I have been too busy lately to give this project much thought. It's a long term project :(

Right now the "Maybe" factor is playing a big part in my current robots object detections system using an ultrasonic range finder.

Maybe it's a wall to be avoided?
Maybe it's a can to be collected?

 

I have always been wanting to do navigation and mapping like you describe. I don't know if you guys have seen this tutorial but it might help you out. http://www.societyofrobots.com/programming_wavefront.shtml

I've seen it and there is actually an Arduino version of the code somewhere. But it is still based on grid mapping, not topological mapping. You still need a large chunk of memory to store your map, perhaps achievable on a uSD card. This is the way I was thinking to do mapping on my MiniEric robot. It gets a little too complicated for my programming skills.

Ok, I can see that a raw map can be constructed based solely on turns, but I wonder something about that.

The X and Y coordinates have to be generated from something (and since most people will not have GPS), I presume the X and Y will come from the turns and the distance travelled (whether by time or wheel count or whatever). But how can we be sure the turns were exactly 90°? Here is my concern:

My robot (and probably many others) will not necessarily make a perfect 90° turn. If the turns are all "right hand" turns, and each turn is 1° off, the effect will be cumulative. After the example of 64 turns, he would be potentially 64° off from the direction he thinks he is facing.  Would there be a subroutine in there to allow the operator to do an initial run-through and adjust the angle, or how might that error be handled, so it is not a problem?

-- OR --

Should this question be ignored and considered on a robot-by-robot basis? Let the individual correct his robot's movement angles by whatever means he can in his own program or hardware?

 

 

The robot can do a bang-bang style of wall following, using just one range sensor or even a bumper sensor. The robot turns towards the wall until it is too close, then turns away from the wall until it is far enough away to turn towards the wall again. When an outer corner is reached, the robot "loses" the wall, so it has to turn a lot until it finds the wall again. Similarly, when an inner corner is reached, the robot has to turn away from the wall a lot until the distance is far enough to turn back towards the wall. So the robot does not actually have to be able to turn precisely or drive perfectly straight.
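
A minimal bang-bang right-hand wall follower along those lines might look like this sketch (the distances, the stub sensor read and the stub drive functions are placeholders I've assumed):

```cpp
// Hypothetical bang-bang right-hand wall follower (a sketch, not a full program).
// Replace the stub functions below with real sensor reads and motor commands.
const int TARGET_CM   = 15;      // desired distance from the wall
const int DEADBAND_CM = 3;       // tolerance before correcting

int readRightDistance() { return TARGET_CM; }   // stub: cm to the right-hand wall
void curveRight()   { /* veer gently toward the wall   */ }
void curveLeft()    { /* veer gently away from the wall */ }
void driveForward() { /* both motors forward            */ }

void setup() { }

void loop() {
  int d = readRightDistance();
  if (d > TARGET_CM + DEADBAND_CM)      curveRight();   // too far: turn toward the wall
  else if (d < TARGET_CM - DEADBAND_CM) curveLeft();    // too close: turn away
  else                                  driveForward(); // within the band: go straight
  // Losing the wall completely (a very large reading) makes the robot keep
  // curving right until it re-acquires the wall around an outer corner.
}
```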

Again, there is a tutorial by the same author here:

http://forums.trossenrobotics.com/tutorials/how-to-diy-128/following-a-wall-3283/

Perhaps it would be a good idea to look through other interesting tutorials by Mike Ferguson:

http://forums.trossenrobotics.com/tutorials/members/lnxfergy-1768/