Let's Make Robots!

A.I. X1

Learns to navigate

This is my first experiment with artificial intelligence using a simple Doodle Bot with an RTC and SD card for extra memory. When I say artificial intelligence I am not talking about a supercomputer that you can discuss philosophy with over a cold beer, simply a robot that will try to learn from past experience as previously discussed in my blog: http://letsmakerobots.com/node/34177

The challenge: This simple robot will pretend it is cleaning a floor. To clean a floor efficiently the robot needs to cover the entire area in a meaningful pattern without missing any spots. The experiment will be done using a whiteboard as the floor to be cleaned. The robot will trace its path with a whiteboard marker so we can see where it has been and whether it is learning or not. Obstacles will be put in the robot's path to test its learning ability.

The only navigation sensors are two low-resolution (8 counts per wheel revolution) wheel encoders and the IR receiver, which will use the docking station as a beacon. The 3-axis accelerometer will be used to detect collisions.

The A.I. theory: This robot does not use a neural net, nor does it borrow the brain power of a laptop or PC. It simply remembers what has happened to it before and uses those previous experiences to guide present actions. When its battery starts to get flat it will use its IR receiver to locate the docking station and charge its batteries. During this time it will sort through its memory looking for ways to improve its decision making process and compress the memories for storage as long term memory.

Anyone who has programmed a robot before will tell me that this is a difficult task to achieve, and I agree. I plan to break the experiment down into different steps. At the end of each step I hope to have some useful functions that can be used by others.

Step 1: Get the robot to store its raw sensor data onto the SD card as short term memory with a time stamp of when the event occurred, then organize and compress the data while charging the batteries. At this point only a simple "bump, backup and turn" routine will be used to gather the data. As a bonus, the data gathered should be useful for making a map of the area to be cleaned for the second step.

I encountered my first problem: the SD library is huge! By the time I have the SD, Wire and microM libraries installed and initialized, 9,642 bytes are used. Without the SD library only 3,078 bytes are used. I had already planned to avoid using a library for the DS1307 (which itself pulls in the Wire library) because of memory constraints.

As the Micro Magician only has 16K of flash (assuming no bootloader) this does not leave much room. Fortunately I should receive a sample of the Micro Magician Pro next week, which uses the ATmega328P and also has a 5V regulator. That will be better for the DS1307 RTC, which goes into low power mode once the supply voltage drops below about 3.8V.

As a fun side note, the Wire library was updated for V1.0 of the Arduino IDE, and as some names were changed, the instructions on the net written for older versions are no longer accurate. I was worried this might also cripple the DS1307 library, since it uses the Wire library, but I looked at the library code and it has been updated to work with both older and newer versions of the Wire library.

For now I am proceeding with Step 1 as I think the ATmega168 still has enough memory. Later steps will definitely require an ATmega328P or better. I have found some code for accessing the SD card in a raw format. This code is a lot smaller, but it would not allow the card to be read by a PC.



UPDATE: 19-9-2012

Ok, I haven't had a lot of time for coding and so far I am just experimenting with a new I2C library.

The good news is that I now have a prototype of the new Micro Magician Pro to play with. The two big advantages for this project are that I now have an ATmega328P processor with 32K of flash memory to help cope with the libraries, and a 5V regulator for the DS1307 RTC.

The RTC will communicate quite happily at 3.3V, as any signal above 2.2V is considered a logic 1. The problem is that Vcc must be at least 1.25x Vbat, otherwise the chip goes into low power mode and stops communicating on the I2C lines.
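As a quick sanity check, that cut-off condition can be expressed in code. This is just a sketch of the 1.25x condition described above, worked in millivolts to keep the maths integer-only; `rtcStaysAwake` is a hypothetical helper name, not part of any library.

```cpp
#include <cassert>

// The DS1307 switches to its backup battery (and stops responding on I2C)
// when Vcc falls below roughly 1.25 x Vbat, as described above.
// Voltages are in millivolts so the check stays in integer arithmetic.
bool rtcStaysAwake(unsigned long vccMillivolts, unsigned long vbatMillivolts) {
    // Vcc >= 1.25 x Vbat, i.e. 4 * Vcc >= 5 * Vbat (no floating point needed)
    return 4UL * vccMillivolts >= 5UL * vbatMillivolts;
}
```

With a 3.0V backup cell, Vcc must stay above 3.75V, which is why a 3.3V rail is marginal and the Pro's 5V regulator helps.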

I am now thinking that for my robot to successfully cover the entire floor without missing any spots it will need more than the docking station homing beacon for guidance, so I am going to upgrade to 3 IR beacons and attempt to get the robot to triangulate its position.
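For what it's worth, here is one possible sketch of the triangulation maths, assuming the robot can measure a world-frame bearing to each beacon (which in turn assumes it knows its own heading, e.g. from odometry). Two bearings already give a position fix by intersecting the lines through the beacons; the third beacon would add redundancy and resolve bad geometry. All names and beacon positions here are hypothetical, not from the actual code.

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { double x, y; };

// Position fix from absolute bearings (radians, world frame) to two
// fixed beacons at known positions b1 and b2. The robot lies on the
// line through each beacon along that bearing; intersect the two lines.
Vec2 fixFromBearings(Vec2 b1, double bearing1, Vec2 b2, double bearing2) {
    double d1x = std::cos(bearing1), d1y = std::sin(bearing1);
    double d2x = std::cos(bearing2), d2y = std::sin(bearing2);
    // Solve b1 + t1*d1 = b2 + t2*d2 for t1 via the 2D cross product.
    double denom = d1x * d2y - d1y * d2x;   // zero when bearings are parallel
    double t1 = ((b2.x - b1.x) * d2y - (b2.y - b1.y) * d2x) / denom;
    Vec2 p = { b1.x + t1 * d1x, b1.y + t1 * d1y };
    return p;
}
```

For example, a robot seeing one beacon due "east" (bearing 0) and another due "north" (bearing pi/2) must sit at the corner where those two sight lines cross.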

The IR beacon (shown here without the IR LEDs) is basically the same as a TV remote except that it just sends the same code out over and over again. DIP switches 1-7 are used to set the 7 bit code. Switch 8 is the power switch. These beacons do not use an MCU, just a simple CMOS circuit, which helps minimize power consumption. The PCB is the size of a AAA battery and will work with any voltage from 3V to 6V, making it perfect for running off a single AAA Li-ion rechargeable battery.



UPDATE: 23-9-2012

I've started on the actual coding of step 1 after sorting out some issues with my I2C bus. You can download the I2C library I am using from here: http://www.dsscircuits.com/articles/arduino-i2c-master-library.html

At the moment it is not much more than "bump and turn" code to generate data to be stored, plus the basic storage to the SD card and timestamp generation. I won't have anything worthy of video until at least next week when my company has a 5 day holiday.

Here is the current structure of a single "memory event". The number at the left is an index number used for selecting specific information from the event.

Note index 0: this is a quick reference byte that records the reason for the event being recorded, e.g. impact detected or motor stalled. This can be used later when the robot is looking for memories of similar occurrences. For our robot to be intelligent it must be able to cross reference and analyze these memories.

	AI short memory data structure

0	memory trigger event	byte
1	year			byte
2	month			byte
3	date			byte
4	day			byte
5	hour			byte
6	minute			byte
7	second			byte
8	magnitude		msb
9	magnitude		lsb
10	deltx			msb
11	deltx			lsb
12	delty			msb
13	delty			lsb
14	deltz			msb
15	deltz			lsb
16	x-axis			msb
17	x-axis			lsb
18	y-axis			msb
19	y-axis			lsb
20	z-axis			msb
21	z-axis			lsb
22	0G time milliseconds	msb
23	0G time milliseconds	lsb
24	left encoder count	msb
25	left encoder count	lsb
26	right encoder count	msb
27	right encoder count	lsb
28	left speed		msb
29	left speed		lsb
30	right speed		msb
31	right speed		lsb
32	left & right brake	byte
33	left & right stall	byte
34	servo position		msb
35	servo position		lsb
36	battery voltage		msb
37	battery voltage 	lsb
38	IR command received	byte
39	current mode		byte
40	position X co-ordinate	msb
41	position X co-ordinate	lsb
42	position Y co-ordinate	msb
43	position Y co-ordinate	lsb
44	position Z co-ordinate	msb
45	position Z co-ordinate	lsb
46	direction		byte
47	action taken		byte
48	reason for action	byte
49-63	spare			byte
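To make the msb/lsb layout above concrete, here is a minimal sketch of how such a record could be packed and unpacked as a plain byte buffer. The index numbers follow the table, but `writeWord` and `readWord` are hypothetical helper names, not functions from the actual code.

```cpp
#include <cassert>
#include <cstdint>

// One memory event is a fixed 64-byte record; 16-bit fields are stored
// big-endian as msb/lsb pairs, matching the table above.
const int EVENT_SIZE = 64;

// Store a 16-bit value at the given msb index (the lsb goes in the next byte).
void writeWord(uint8_t *event, int msbIndex, uint16_t value) {
    event[msbIndex]     = value >> 8;     // msb
    event[msbIndex + 1] = value & 0xFF;   // lsb
}

// Read back a 16-bit value from an msb/lsb pair.
uint16_t readWord(const uint8_t *event, int msbIndex) {
    return (uint16_t(event[msbIndex]) << 8) | event[msbIndex + 1];
}
```

For example, `writeWord(ev, 24, leftCount)` would store the left encoder count at indices 24-25 of the record.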

A lot of this data is not really relevant and can ultimately be eliminated from a memory event. For example, 0G fall time is only relevant if the robot actually falls. The problem is that our processor is no supercomputer, and trying to sort through this information while the robot is functioning would significantly slow down its reaction time.

Another problem is that until the data is analyzed we do not know what data is important. If for example an impact occurs then we need to know the reason. Was the impact the result of a simple collision or did the robot fall off the table? Did the robot hit a wall (maybe the motors stalled) or did something else hit the robot (the angle of impact was not the angle of travel).

This is why we record it all and sort it later. In order to sort the information I need to create several algorithms. Just as Patrick's maze solving robot used one algorithm to follow a wall and another to eliminate wrong turns, I need to generate algorithms to perform functions such as mapping, navigation, collision avoidance and position triangulation.

So far this is raw sensor data plus about 15 spare bytes for anything I haven't thought of yet. The idea is that every time something significant occurs an entry will be made in the short term memory. I have a more detailed description of the theory in my blog: http://letsmakerobots.com/node/34177


This project has temporarily stalled because I have been too busy with other projects. I do have some new ideas, but currently I've gotten no further than having the robot store "events" on the SD card as "short term" memory.

Hello there ! 

I am currently working on a Dagu Quadbot with a Spider controller and I want to make the bot figure out how to walk all by itself. I don't have much experience in robots but I'm quite good at programming, plus I learn fast :p.

Can someone please explain how the bot can "learn" from previous errors? I have no idea how to make him choose between actions. I plan to use an ultrasonic sensor to see if he moves forward or backward.

Thanks in advance and sorry for the bad English, I'm from France ;).

I think you need to set a Goal for your robot. Say the goal is to move to a certain object. The robot does not have rules on how to move yet, so it needs to determine how to do that. To be able to determine if it moved, it needs a distance sensor so it can monitor the distance changes towards the object, a smaller distance being interpreted that it got closer to the goal. So that is the first rule. That's easy.

The hard part is to determine how to move each servo to create a constructive movement. The robot needs feedback and a rule to interpret whether that feedback results in a shorter distance to the object or not. I guess you could add bend sensors to each limb and touch sensors in the tips of the feet so it knows when the feet support the weight of the robot or are moving in the air. You may also want to add a touch sensor on the belly of the robot so that the robot lifts its belly and does not drag it on the floor. A general rule may be set to try to create energy efficient moves and monitor the amount of moves necessary to make a step towards the goal. Try to keep the movement of each servo minimal.

Since you are using a Spider controller which has a "Mega" controller on it, you have lots of analog inputs for the sensors and plenty of processing power to do all that. The cost of the sensors might be high, but in the end it might be worth it.

Oh, and another thing: store the best movements in the internal EEPROM; that will act as the learned memory of the robot.

Another thought crossed my mind. We have a sense of direction (from our vision) and when we learn how to move we basically point our limbs towards the object (goal). The first thing we try is to reach the object with our hands; when we see that is not possible, we lean forward until we fall and need to support ourselves. Then we try again to reach the object and move each limb in the direction of the object, while trying to support ourselves so we don't fall. Try to express this in a programming diagram and you will see that it is not an easy task. Keep it simple at first, then add to it as you make tests.

Let us know how things go!

Well, in fact I didn't even know that those sensors exist, but I don't think I will buy them. To make the robot reach his goal I thought of making a list of pre-programmed moves. He will have to find the best combination to move toward the goal; that way I don't have to worry about sensors and stuff.

Also, I plan to make move lists that have a random factor, and then I will save those lists to compare the robot's results for each; that way he will find the best list and the best combination for it. I think that way I won't need any additional sensor, but maybe one day I'll change my mind ^^. Anyway I'm getting to work on it next week (Holidays!! =D).
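The "random move lists, keep the best" idea described above is essentially plain random search, which could be sketched like this. The scoring function here is a deliberate stand-in: on the real robot it would be the measured change in distance to the goal. All names (`MoveList`, `score`, `bestOf`) are hypothetical.

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

typedef std::vector<int> MoveList;

// Stand-in fitness: pretend move 0 is "step toward the goal", so a list
// scores one point per occurrence of move 0. On the robot this would be
// replaced by the ultrasonic distance change after executing the list.
int score(const MoveList &moves) {
    int s = 0;
    for (size_t i = 0; i < moves.size(); ++i)
        if (moves[i] == 0) ++s;
    return s;
}

// Build one move list of the given length from numMoves possible moves.
MoveList randomList(int length, int numMoves) {
    MoveList m(length);
    for (int i = 0; i < length; ++i)
        m[i] = std::rand() % numMoves;
    return m;
}

// Try 'trials' random lists and keep the highest scoring one.
MoveList bestOf(int trials, int length, int numMoves) {
    MoveList best = randomList(length, numMoves);
    for (int i = 1; i < trials; ++i) {
        MoveList candidate = randomList(length, numMoves);
        if (score(candidate) > score(best))
            best = candidate;
    }
    return best;
}
```

Pure random search scales poorly as the lists get longer, which is part of the point the commenter below makes about methods that have "been proven not to scale".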

Thanks for your helpful reply!

How do you know that one combination of moves is better than another without some sort of sensor?

The sensors can be on the robot or external to the robot but you still need them, mkey?

I appreciate people who can stand on their own, but sometimes you really need to "stand on the shoulders of giants". The ideas you are all talking about here have been thoroughly explored for decades and have been proven not to scale.

My advice is that you take the free online artificial intelligence and machine learning classes; it will save you a lot of time.

The classes are taught by the authors of some of the most successful artificial intelligence projects in the world: Sebastian Thrun, Peter Norvig and Andrew Ng.

The methods they teach have been proven to work in real world systems such as the Google self driving car or the autonomous model helicopter that can perform flying stunts (http://videolectures.net/ijcai09_coates_sah/).



That's what we need to figure out next.

I see that you're using 64 bytes (some are spare for now) for each event that needs to be recorded. I know that the SD card has a huge memory compared to the ATmega, so it doesn't matter how many bytes you're using for each entry. But the ATmega has a limited RAM space, will that be enough to process the stored data? I'm sure that it will slow down the robot, perhaps a dedicated micro to do all that processing will help out in the long run. I'm following this project with great interest, keep it updated, OddBot!


I am only using an old 1GB SD card for now, and yes, it is huge compared to what is in the ATmega328P. The thing is, the processor only needs to work with a relatively small amount of the recorded memory. This is why the first byte of each event is the trigger.

Just say a motor stalls. The robot really only needs to look at maybe the last 5 or 6 records triggered by that event which match its current situation to see what solution was used last time and how well it worked.
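That lookup could be sketched as a backwards scan over the event log, keyed on the trigger byte at index 0 of each 64-byte record. On the robot the log lives on the SD card; here it is just a byte buffer for illustration, and `lastMatching` is a hypothetical name.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Each event is a fixed 64-byte record with the trigger byte at index 0.
const int EVENT_SIZE = 64;

// Walk backwards through the log and collect the indices of the most
// recent events whose trigger byte matches, up to maxResults of them.
std::vector<long> lastMatching(const uint8_t *eventLog, long numEvents,
                               uint8_t trigger, int maxResults) {
    std::vector<long> found;
    for (long i = numEvents - 1; i >= 0 && (int)found.size() < maxResults; --i) {
        if (eventLog[i * EVENT_SIZE] == trigger)
            found.push_back(i);          // most recent matches come first
    }
    return found;
}
```

Because the scan stops as soon as it has enough matches, the robot never needs to read the whole log, only the tail of it.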

My current code takes almost 15K of the 30K available, but there is about 10K of libraries in that. This still leaves me with 15K for algorithms to process the data. However, if I were to make an "A.I. Shield" then it would end up with a dedicated MCU, just like your "open logger" from SparkFun. That way the SD card library, the code for the RTC and perhaps the long term memory sorting code could all be stored on it, freeing the main processor for actual "thinking".

What surprises me right now is that this was meant to be a very simple robot, but the memory structure is already about twice the size of what I envisioned originally. I am not sure yet just how quickly the robot can read and write to the SD card, so I think a speed test will be on my list of things to do.

Hi OddBot, nifty challenge. "Here Be Dragons!" Hope you find 'em =D

This looks awesome, will be looking forward to the progress :)