Let's Make Robots!

Universal recharge, navigation and mapping system

WARNING! This blog is just my thoughts on the subject and may be prone to rambling, monologues and contradictions.

I want to develop a universal navigation system for my robots. By universal I mean that the same code should work on almost any robot with only minor modifications to allow for the I/O pins used and the number / type of sensors. This code is being written for the Arduino but I will document it well enough that it can be easily adapted to other processors.

 

Theory Stage:

Think of your robot as a blind person. The range finder, be it LIDAR, sonar or IR, is like a white cane, allowing your robot to detect an object before colliding with it. Most robots on LMR are at this point now but that's where they end. I want to go to the next level.

Once a blind person learns where things are in a room, he has a mental map and can plan his moves. Now that person navigates by dead reckoning and can move about more quickly. The white cane becomes a backup system that provides correction if he drifts off course and a means of detecting changes such as a chair being moved to a different position.

This ability to map a room and navigate by dead reckoning is the "next level" I wish to develop. I think it is necessary if you want your robot to become useful in the home.

In the case of "Mr. Tidy" I want the robot to wander a room looking for cups / bottles on the floor. The robot should know where to take these things once it has found them and, through experience, build a map of locations where empty cups and bottles are likely to be found so that it can find them more quickly in the future.

In the case of a "pet" robot then you might want it to run to the door and make a barking noise when it hears someone knocking on the door. You don't want the robot to take so long finding the door that the person knocking has already sat down and is having a cup of coffee.

As this will be a "dead reckoning" navigation system it will not rely on compass modules, GPS or any other complex (expensive) sensors. It will be self-contained with no wireless links to PCs.

The only exception I have allowed to the rule of "self-contained" is a simple / cheap docking station with a single IR LED that will plug into a power point. After all, if I take a robot to a friend's house or on a business trip then I still need to recharge the batteries.

The docking station will consist of a panel with the IR LED in the center. A large brass plate on either side of the LED will provide a positive and negative terminal. The robot has two springs or antennae that make contact with the terminals to recharge the batteries. The docking station will become a reference point at the center of the robot's map.
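
As a rough sketch of how the homing behaviour might work (the pin number and drive functions below are placeholders to adapt per robot), the robot can simply spin until its IR receiver sees the beacon, then creep toward it:

    // Hypothetical docking sketch: spin until the IR receiver sees the
    // docking station's beacon, then drive toward it. Pin numbers and
    // drive functions are placeholders to fill in per robot.

    const int IR_PIN = 2;               // assumed 38kHz IR receiver module

    void setup() {
      pinMode(IR_PIN, INPUT);
    }

    void loop() {
      if (digitalRead(IR_PIN) == LOW) { // most receiver modules pull LOW on signal
        driveForward();                 // beacon ahead - creep toward it
      } else {
        rotateLeft();                   // sweep until the beacon reappears
      }
    }

    void driveForward() { /* both motors forward, per robot */ }
    void rotateLeft()   { /* spin on the spot, per robot */ }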

When the robot is taken to a new house or if the docking station is moved to a new room in the same house then the robot will soon realise that it is in a new location and begin making a new map. If the robot has enough memory then it may create several maps.

I will be testing this on 6 different robots.

  • Mr. General - 2x continuous rotation servos - no encoders
  • Mr. Tidy - 2x DC motors - 2x simple encoders
  • Rover 5 with treads - 2x DC motors - 2x quadrature encoders
  • Rover 5 with mecanum wheels - 4x DC motors - 4x quadrature encoders
  • QuadBot chassis - 4x legs / 8x servos - no encoders
  • Chopsticks - 8x legs / 24x servos - no encoders

This covers a wide range of locomotion systems, some with encoders and some without. The maps will be stored in the Arduino's EEPROM or an SD card.

As some of the test robots I am using do not have encoders, I was thinking that instead of mapping the rooms by distance I would try mapping them by the time taken to travel from one point to another at a given speed.

Robots that do have encoders can then measure their exact speed for more accurate navigation.
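
As a rough sketch (the cruise speed constant below is a made-up value that would have to be calibrated per robot), the time-as-distance idea boils down to timestamping each leg of travel:

    // Hypothetical time-based odometry: distance is travel time at a
    // calibrated cruise speed. CRUISE_MM_PER_SEC is a placeholder value.

    const float CRUISE_MM_PER_SEC = 150.0;

    unsigned long legStart;

    void startLeg() {              // call when the robot starts moving
      legStart = millis();
    }

    float endLeg() {               // call at the next turn; returns mm travelled
      unsigned long elapsed = millis() - legStart;
      return (elapsed / 1000.0) * CRUISE_MM_PER_SEC;
    }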

 


 

08-07-2011

Ro-Bot-X found a good link about mapping that describes my approach better. I did not know the correct terminology as I had not read about it before.

My system is based on "Topological" mapping. Junctions and objects will in future be referred to as "Nodes". The X - Y co-ordinates of these nodes relative to the docking station will be measured in different units depending on the robot's design.

Robots such as Mr. General that use continuous rotation servos for locomotion with no encoders will use units of time to measure the distance assuming a given speed.

Robots such as Mr. Tidy and Rover 5 that have encoders can use time as a unit of measure as well, with the encoders being used to precisely control the speed of the motors to reduce error. Alternatively they can measure the distance travelled based on the number of rotations of the wheels.
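
A rough sketch of that closed-loop idea (the interrupt pin, PWM pin, gain and target rate are all assumptions to tune per robot):

    // Hypothetical speed regulation: count encoder ticks in a fixed
    // window and nudge the motor PWM so the tick rate stays on target,
    // keeping time-based map units consistent as the battery sags.

    volatile long ticks = 0;
    void onTick() { ticks++; }

    const long TARGET_TICKS_PER_SEC = 200;     // assumed calibration value
    int pwm = 128;

    void setup() {
      attachInterrupt(digitalPinToInterrupt(2), onTick, RISING);
      pinMode(9, OUTPUT);                      // assumed motor PWM pin
    }

    void loop() {
      noInterrupts(); long before = ticks; interrupts();
      delay(100);                              // 100 ms sample window
      noInterrupts(); long after = ticks; interrupts();
      long rate = (after - before) * 10;       // ticks per second
      pwm += (TARGET_TICKS_PER_SEC - rate) / 8;  // crude proportional nudge
      pwm = constrain(pwm, 0, 255);
      analogWrite(9, pwm);
    }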

Robots that walk using legs such as QuadBot or Chopsticks can use their steps as a unit of measure. This is a method often used by blind people.

Lost or a new map?
When your robot cannot find things in expected locations it must determine whether it is lost, perhaps due to a mischievous owner picking it up and turning it around, or whether it is in a new location.

NOTE: Doors must be differentiated in the map somehow otherwise the robot will think there are walls in funny places, decide it is in a new location and start re-mapping.

If the robot has just been turned around or just turned on then it will have no idea where it is. At this point it must wander using obstacle avoidance mode until it finds the docking station so it can orientate itself.

Let's assume I am using Mr. Tidy to move cups it finds randomly around the house to a specific location. If I just pick him up and turn him around then once he has relocated the docking station he will find all the nodes are where they should be (allowing for a small percentage of error) and he can continue as normal.

If I move his docking station or take him to a new house then even when he knows where the docking station is, few if any nodes in his memory will be where they are supposed to be. Depending on how much memory you have, Mr. Tidy can either start a new map or overwrite the old map.
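
A rough sketch of that decision (the node layout, tolerance and threshold below are assumptions):

    // Hypothetical relocalisation check: after re-finding the dock,
    // count how many stored nodes are roughly where the map says.

    struct Node { int x, y; };

    const int TOLERANCE = 20;          // assumed error margin in map units

    bool nodeMatches(Node seen, Node stored) {
      return abs(seen.x - stored.x) <= TOLERANCE &&
             abs(seen.y - stored.y) <= TOLERANCE;
    }

    // If fewer than half the stored nodes were confirmed while roaming,
    // assume a new location and start (or overwrite) a map.
    bool isNewLocation(int confirmed, int total) {
      return confirmed * 2 < total;
    }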

A way to teach your robot different areas of a house would be to use a TV remote. As the robot will use an IR receiver to locate and align with the docking station, it can also recognise signals from a TV remote. When the robot is in a specific area, press a button on the remote (for example, use the numbers 0-9 to define 10 different rooms in the house). Upon receiving the signal the robot can store its present location.

Let's say you're programming your robot to go to the door and bark like a dog when it hears a knock at the door. You might use your TV remote to designate the front door as node 1 by pressing the number 1 button on the TV remote. After that, whenever the robot hears a knock on the door it could start barking like a dog and go to node 1 in its map.
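
A minimal sketch of the remote-tagging idea, using the classic IRremote library API (the receiver pin, button codes and position variables are placeholders):

    // Hypothetical node tagging: press 0-9 on a TV remote to record the
    // robot's current dead-reckoned position as a numbered node.

    #include <IRremote.h>

    IRrecv irrecv(11);                    // assumed IR receiver pin
    decode_results results;

    struct Node { int x, y; };
    Node rooms[10];                       // one tagged position per digit

    // Placeholder codes - capture your own remote's codes and fill these in.
    unsigned long DIGIT_CODES[10] = {0};

    int currentX = 0, currentY = 0;       // updated by dead reckoning (not shown)

    void setup() {
      irrecv.enableIRIn();
    }

    void loop() {
      if (irrecv.decode(&results)) {
        for (int d = 0; d < 10; d++) {
          if (results.value == DIGIT_CODES[d]) {
            rooms[d].x = currentX;        // remember where we are now
            rooms[d].y = currentY;
          }
        }
        irrecv.resume();                  // ready for the next button press
      }
    }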

 


Planning stage:

 

Below is a very crude layout of my apartment as the robot would see it. Pale yellow objects are furniture without legs. Pale green circles are table legs and chair legs etc. Red lines are doors that could be open or closed at different times of day.

The light blue circle is the docking station. The orange circle represents an area where the robot is most likely to find cups and empty bottles to be collected. The green circle is where empty bottles should go for recycling. The purple circle is where empty cups can be taken. The pink circle is where socks should be taken.

The first thing I see is that my topological map really needs to have zones as well as nodes. A zone will basically be a room or an area within a room. In my map above I would want the robot to recognise the table and chair legs as a zone within the dining room to be avoided. Probably the best way to map a zone is as two nodes that represent diagonally opposite corners of a rectangular area, as sketched below.
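
A quick sketch of how such a zone might be stored and tested (the map units and names are placeholders):

    // A zone as two diagonally opposite corners, plus a test for whether
    // a point (e.g. a planned waypoint) falls inside the zone.

    struct Zone {
      int x1, y1;     // one corner
      int x2, y2;     // diagonally opposite corner
    };

    bool insideZone(const Zone &z, int x, int y) {
      int loX = min(z.x1, z.x2), hiX = max(z.x1, z.x2);
      int loY = min(z.y1, z.y2), hiY = max(z.y1, z.y2);
      return x >= loX && x <= hiX && y >= loY && y <= hiY;
    }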

 

1st step in mapping:
In this diagram you can see the path my robot would take when mapping the apartment for the first time using a typical "follow the right hand wall" search pattern. Each arrow represents a point where the robot had to change direction.

It should be noted that this process of gathering raw data by following a wall has the added advantage that it can be used for robot calibration. Most walls are pretty straight and most houses have walls at right angles. As the robot tries to maintain a set distance from the wall, the left / right motor speeds can be calibrated for travelling in a straight line and making 90 degree turns.
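
A rough sketch of the wall-following itself (the sensor pin, target reading, gain and drive function are all placeholders to tune per robot):

    // Hypothetical right-hand wall follower: steer to hold a constant
    // reading from a side-mounted range sensor.

    const int SENSE_PIN = A0;        // assumed IR ranger facing right
    const int TARGET = 300;          // assumed reading at the set distance
    const int BASE_SPEED = 150;

    void setup() {}

    void loop() {
      int error = TARGET - analogRead(SENSE_PIN);
      int steer = error / 4;         // crude proportional gain
      drive(BASE_SPEED + steer, BASE_SPEED - steer);
    }

    void drive(int left, int right) {
      // set left / right motor PWM here, per robot
    }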

This first stage of mapping will not work with every home. If, for example, there is a corridor surrounding a room or group of rooms then the robot will not find the surrounded group by following a wall. As this mapping idea will be self-updating you could just wait until the robot stumbles across the missing rooms while travelling from A to B.

Another solution is for the robot to recognise unusually large empty areas and, once it has finished following the wall, cut through the middle of them.

There are about 64 of these direction-change points. In my case I am programming my robot to only go into large areas (at least the width of a small door frame). This prevents the robot getting into tight corners and forces it to group table and chair legs that are close together into a single object.

Initially I would have my robot store these points where it had to change direction as 3-byte nodes (9-bit X co-ordinate, 9-bit Y co-ordinate and a 6-bit description). This would only take up 192 bytes of data. Unfortunately this "raw" data is not very easy to work with. We need to process it to make a useful map.
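
As a sketch, the bit packing might look like this (the exact bit layout is just one possible arrangement):

    // Pack a node into 3 bytes: 9-bit X, 9-bit Y, 6-bit description.
    // 64 nodes x 3 bytes = 192 bytes, as noted above.

    void packNode(unsigned int x, unsigned int y, byte desc, byte out[3]) {
      unsigned long n = ((unsigned long)(x & 0x1FF) << 15) |
                        ((unsigned long)(y & 0x1FF) << 6) |
                        (desc & 0x3F);
      out[0] = n >> 16;
      out[1] = n >> 8;
      out[2] = n;
    }

    void unpackNode(const byte in[3], unsigned int &x, unsigned int &y, byte &desc) {
      unsigned long n = ((unsigned long)in[0] << 16) |
                        ((unsigned long)in[1] << 8) | in[2];
      x = (n >> 15) & 0x1FF;
      y = (n >> 6) & 0x1FF;
      desc = n & 0x3F;
    }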

At this point we should define what information is considered useful. The robot should know what room it is in, where it needs to go and roughly how to get there. This does not require the dimensions and shape of the rooms. The robot needs to know what room it's in and what doors it needs to pass through to get to the required location.

As long as it knows roughly what direction to travel in, the robot's standard obstacle avoidance routines should be able to handle minor problems like the family pet lying in the middle of the room.

A bigger obstacle will be finding a door closed. If the robot cannot find an alternative route then it must signal for assistance or wait patiently until the door is opened.

Other information that is necessary would be noteworthy objects or locations. In my map I've marked some locations where the robot should place objects it has picked up depending on the size, weight and colour of the object.

One location that the robot should be able to determine for itself is the location where it is most likely to find empty cups or bottles. This may vary from time to time so the robot may need to store multiple locations and even delete locations that have not been active for an extended period of time.

Creating a map from the raw data:
In the map below I have circled the doorways. These are crucial to the map. If you think of the rooms and objects as cities on a map then the doorways represent the highways linking these cities.
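
Treating it that way makes route planning a small graph search. A rough sketch (the room count and adjacency table are made-up examples):

    // Hypothetical route finder: rooms are graph nodes, doorways are
    // edges; breadth-first search gives the rooms to pass through.

    const int ROOMS = 6;
    bool adj[ROOMS][ROOMS] = {false};   // adj[i][j]: a doorway links room i and j

    // Fills path[] with the route in reverse order (to .. from).
    bool findRoute(int from, int to, int path[], int &len) {
      int prev[ROOMS];
      bool seen[ROOMS] = {false};
      int queue[ROOMS], head = 0, tail = 0;
      queue[tail++] = from; seen[from] = true; prev[from] = -1;
      while (head < tail) {
        int r = queue[head++];
        if (r == to) {
          len = 0;
          for (int n = to; n != -1; n = prev[n]) path[len++] = n;
          return true;
        }
        for (int n = 0; n < ROOMS; n++)
          if (adj[r][n] && !seen[n]) {
            seen[n] = true; prev[n] = r; queue[tail++] = n;
          }
      }
      return false;                     // no route, e.g. every linking door closed
    }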

 

Comments:

I see what you mean. Good idea on saving memory space. I'm not good at splitting bits and getting them back together so I did not think of this way, although I tried to use something similar for the FireFighting competition, following Mike Ferguson's code for his Crater robot. It did not work completely, but I understand the principle. Regarding the object identification, I was thinking of a 3D scan routine, where the robot would try to determine the width and height of the object (perhaps the shape too) and try to classify it somehow using this information. This means always starting the scan at the same distance from objects when measured on a level plane, panning the sensor left to find the edge, then right, then tilting it up to find the height of the object. This requires only a Sharp IR sensor mounted on a pan/tilt system. A color sensor would be nice too.
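
A rough sketch of what that scan loop might look like (servo pins, sweep angles and the edge threshold are placeholders to tune):

    // Hypothetical pan/tilt object scan with a Sharp IR sensor: pan left
    // and right from centre to find the object's edges; a similar sweep
    // on the tilt servo would give its height.

    #include <Servo.h>

    Servo pan, tilt;
    const int IR_PIN = A1;
    const int EDGE = 200;      // assumed reading below which nothing is seen

    // Sweep from 'start' toward 'end'; return the angle where the
    // object disappears (an edge).
    int findEdge(int start, int end, int step) {
      for (int a = start; a != end; a += step) {
        pan.write(a);
        delay(30);             // let the servo settle between readings
        if (analogRead(IR_PIN) < EDGE) return a;
      }
      return end;
    }

    void setup() {
      pan.attach(9);
      tilt.attach(10);
    }

    void loop() {
      int left  = findEdge(90, 0, -2);    // pan left until the edge
      int right = findEdge(90, 180, 2);   // pan right until the edge
      int width = right - left;           // angular width - classify with this
      delay(5000);
    }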

Here is an idea to compensate for errors in timed mapping. The motors slow down when the battery is getting low on power, which can be compensated for by a little formula depending on the measured battery voltage (which you need anyway, to know when to look for the charging station). The multiplying factor has to be determined for each robot, because they have different motors and locomotion methods, but it is doable.
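
A sketch of that compensation (the divider ratio and reference voltage are placeholders to calibrate per robot):

    // Hypothetical battery compensation for timed mapping: scale the
    // calibrated cruise speed by measured battery voltage so a sagging
    // battery doesn't shrink the map.

    const int VBAT_PIN = A2;          // assumed voltage divider input
    const float DIVIDER = 2.0;        // assumed 1:2 resistor divider
    const float V_CALIBRATED = 6.0;   // voltage at which cruise speed was measured

    float batteryVolts() {
      return analogRead(VBAT_PIN) * (5.0 / 1023.0) * DIVIDER;
    }

    // Multiply the calibrated cruise speed by this factor before
    // converting travel time to distance.
    float speedFactor() {
      return batteryVolts() / V_CALIBRATED;
    }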

About the charging connection, I was thinking of having bumper switches with the metal levers connected to the positive and negative terminals, and heading onto the station until both switches are pressed, then measuring the voltage to determine if there is a good charging contact. How do you plan on determining when the charging is complete? Also, a universal charger, or robot specific? I like having a LiPo on my robots but the charger is cumbersome to use; perhaps I should invest in a simpler charger that can charge a 2S and 3S without me pressing buttons on it (suggestions?). A NiMh charger is also specific to a cell count, so I guess unless the robots use the same battery type and voltage, they have to have separate chargers...

What you could do is add a microprocessor to your charger and let it simulate button presses by switching transistors.

You could have an IR sender on all robots which sends out a unique code which the microprocessor controlling the charger reads and then chooses settings based on that specific robot.

That is up to the end user, however it defeats my plan of keeping things simple for beginners. As my robots all use NiMh batteries I just have a simple trickle charge circuit. Mr. General, Mr. Tidy and the explorer PCB used with the 2 motor Rover 5 chassis all have the trickle charge circuit built in. For Chopsticks, QuadBot and the 4 motor Rover 5 with mecanum wheels it is just a matter of adding a 3W, 18 ohm resistor and a diode. NiMh batteries can be accurately peak charged by monitoring their temperature, although this method works best with a constant current charger.

I love this, OdBot! I was tackling this problem some time ago with my MiniEric robot, but I stumbled on something and put it on the side while working on other projects. Since MiniEric is Arduino based, I am sure I can make it work with your "magic code" err... I meant mapping code... I was even thinking of using a SPI external EEPROM (I have a 256k) for map storage.

I am curious about your mapping system. I was thinking to use a byte for each square the size of the robot (20x20cm for MiniEric) and encode in that byte what is found in the room: different values for a wall, hardwood floor, carpet, and even specific places the robot should remember, like the couch, the computer chair, the fridge, the charger, etc. I know the robot can't actually differentiate some of them, but I was thinking of using a pre-made map and just updating it when necessary. If you look at the code attached to the MiniEric page, you'll see an attempt at my living room map.

But I failed at writing the algorithms for finding the shortest path to a specific destination (say from the computer chair to the fridge...). The robot is capable of recognizing voice commands like "MiniEric, Fetch, a Beer..." and that should trigger a function that calculates the shortest path to the fridge, goes there, activates the function (vision recognition) to find the beer can, grabs it, then turns around and reverses the course to the starting point and drops the beer can there. I did remote control it with the TV remote and it brought a half empty water bottle, but that was about it. So yeah, I am extremely interested in your solution, as you are way better at programming than I am.

Cheers!

G'day Ro-Bot-X,

creating the map and working from it is the hardest part. The grid system with one byte per square is the simplest approach but it can waste a lot of memory, which is a precious resource for any robot not using an SD card or external EEPROM.

For example: my Spider controller has 4K of EEPROM. If used entirely for mapping a grid then this only gives me a grid of 64 x 64 bytes. If each square in the grid represents a square meter (yard) then it might be enough memory, but a lot would be empty space and therefore wasted. The average Arduino has only 1K or less of EEPROM which cannot afford to be wasted on empty space.

One approach is what I call junction mapping. This is where only objects of interest and junctions (points where the robot has a choice of directions) are mapped. This uses a relatively small amount of memory and is very efficient for plotting a course from point A to point B.

With the junction mapping system 3 bytes might be needed for each object or junction but no memory is wasted on empty space. In this case you might use 9 bits for an X co-ordinate, 9 bits for a Y co-ordinate and 6 bits to describe the object or junction. This gives a grid of 512 x 512.

By making the mapping system adaptive, so it is not limited to a square grid with the docking station in the center, larger areas of odd shapes can be mapped, although this would require more complex code that takes up more memory, so it might only work with Arduino Megas and their clones.

Another problem that you have encountered is how well a robot can determine what is in that space. I want the robot to create its own maps based on its sensor inputs. For simple Start Here robots this means the map only stores A to B information.

More complex robots like Mr. Tidy can determine what is a table leg, what is a cup and what is a wall based on the size of the object. So in the case of Mr. Tidy the robot would store in the map the locations of objects it can pick up, such as cups. Walls would not be memorized but used to determine junction points. Something like table or chair legs would probably be mapped as objects to be avoided when plotting a course from A to B.

At this point I have not written any code. I am in the planning stage where it is all scribbles on paper.

 

One of your sentences was, "More complex robots like Mr. Tidy can determine what is a table leg, what is a cup and what is a wall based on the size of the object. "

I can picture a robot's log:

** Scanning

** Moving forward

** Identified: TABLE LEG

** Identified: CUP

** Identified: WALL

** Turning

** Moving forward

** Identified: SLEEPING DRUNKEN HUMAN

** Identified: CHAIR LEG

       ("Oh, wait.  What was that back there?" )

** Dead-reckoning map movement

       ("Ah, here it is.")

** Identified: CUP

       ("I'll just pick that up.")    ::[scrunch]::   ("Oops.")

** Identified: BROKEN CUP

       ("I know what to do.  I will use my claw arm as a plow and push the pieces elsewhere.")

::[pushes pieces up against sleeping drunken human]::        ("There, now they won't think I did it.")

** Dead-reckoning return to last mapped position

Yeah well, Mr. Tidy can make an educated guess as to the width of the object. If he's not sure then he can try and lift the object. If it is a table leg as thick as a cup or bottle then the robot won't be able to lift it. As for the drunken human (probably me), I need to do some experimenting with neural nets before Mr. Tidy tries to frame me for his mistakes.