Let's Make Robots!

Robot A.I.: an idea for a non-neural-network approach.

In this blog I am simply gathering my thoughts in preparation for a long term project. Hopefully some programming gurus will post some good ideas. Maybe it will inspire someone else. I am just one neuron in the huge brain of humanity.

Personal History
When I was 12, I taught myself to program in GW-BASIC on an IBM clone that ran at 8MHz with a monochrome, text-only display and a 5¼-inch floppy disk drive. At that time I wanted to write an artificial intelligence.

My theory was that an A.I. was really just a database with some code that would break down your text input into basic concepts, feed it to a database in a meaningful way and convert the results into a text output that hopefully made sense.

Current Theory
30 years later my programming skills are still pretty basic (pun intended), but I think my general theory is still valid. The most important part is that the A.I. needs a database: in this case a memory of events that have happened to it, which can then be applied to current events to determine a suitable response.

From what I have read about neural networks, they work on a similar principle. My understanding, though, is that they work best in a machine built from graphics cards, where you end up with lots of simple processors performing simple repetitive tasks, since each neuron is effectively a very simple processor.

In this blog I want to explore the possibility of a simple A.I. using an Arduino or PICAXE processor without using a neural network. In theory, a neural network is very adaptable and should work with any robot configuration. It is also somewhat unpredictable and random to begin with. One advantage of a non-neural-network approach is that you can program specific directives into it from the beginning that will not change.

The most critical part of such a program will be memory and time recognition, so an SD card or I2C external memory and a real-time clock (RTC) would be recommended for any prototype.

Memory Organization
The key to intelligence, artificial or otherwise, is memory. Unless you have an eidetic memory, you can basically divide your memory into two categories: short term and long term. As our robot will have limited memory and processing power, I want to divide its memory in a similar way.

My concept of this memory division is this: when the robot is running about doing robot things, all events will be recorded, along with a timestamp from the RTC, into short term memory. When the robot is at rest on a charging station, it can do some robot dreaming: sort through the short term memory and create a compressed version in long term memory. This compressed version will not be as detailed, only storing significant events such as the successful completion of a task or an error such as a collision. Timestamps are used to sort the information into chronological order. Because the long term memory is compressed, it should also be quicker for the program to sort through when determining a correct response to a situation.
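
To make the idea concrete, here is a minimal Arduino-style sketch of that dream-time consolidation. Everything in it (the Event struct, the trigger codes, the array sizes) is invented for illustration, and the real robot would stream events from the SD card rather than hold them in RAM.

#define TRIG_NONE      0   // routine sample, nothing notable
#define TRIG_IMPACT    1   // collision detected
#define TRIG_TASK_DONE 2   // task completed successfully

struct Event {
  byte trigger;          // why the event was recorded
  unsigned long stamp;   // RTC timestamp (seconds, for simplicity)
};

Event shortTerm[50];     // today's raw log: short term memory
Event longTerm[10];      // compressed log: long term memory
int stCount = 0, ltCount = 0;

// "Dreaming": copy only the significant events into long term memory,
// in the order they happened, then wipe the short term log.
void consolidateMemory() {
  for (int i = 0; i < stCount && ltCount < 10; i++) {
    if (shortTerm[i].trigger != TRIG_NONE) {
      longTerm[ltCount++] = shortTerm[i];
    }
  }
  stCount = 0;
}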

Behaviour Determination
This is perhaps the most important part. The entire reason for storing previous events in memory is so that the robot can derive a proper response to a situation. When the robot encounters a situation, for example a collision is detected, it can search its memory for similar events.

If your robot is fairly simple and does not know where it is, then it may search the memory to find the maneuver with the highest success rate, such as "reverse for 500 ms, turn right 30°, continue". A more advanced robot with GPS or some other means of determining location would refine the memory search to previous similar events at that location.
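
As a sketch of that search (the maneuver table and the scoring below are invented for the example), the robot could keep a success count per escape maneuver and pick the best performer:

struct Maneuver {
  int reverseMs;   // how long to reverse
  int turnDeg;     // positive = turn right
  int successes;   // times this recovery worked
  int attempts;    // times it was tried
};

Maneuver escapes[3] = {
  { 500,  30, 0, 0 },
  { 500, -30, 0, 0 },
  { 800,  90, 0, 0 }
};

// Return the index of the escape maneuver with the best success rate.
int bestEscape() {
  int best = 0;
  float bestRate = -1.0;
  for (int i = 0; i < 3; i++) {
    float rate = (escapes[i].attempts == 0)
                 ? 0.5   // untried maneuvers get a neutral guess
                 : (float)escapes[i].successes / escapes[i].attempts;
    if (rate > bestRate) { bestRate = rate; best = i; }
  }
  return best;
}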

What is Good and Bad to a Robot?
Perhaps the most difficult part is determining what input is bad and what is good. With only limited sensors, what is a bad input or situation to be avoided, and what is good? If you look at a baby who is just starting to touch things and try to put them in its mouth, then it is simple: pain or an unpleasant taste is bad. No pain is interesting, and pleasant tastes / sensations are good.

For your robot, good and bad input depends on the task of the robot. If the robot must clean a floor, then while a large impact is bad, a gentle, continuous pressure against it as it cleans the floor near a wall could be a good thing, as it lets the robot know it is not missing any of the floor near the wall.
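
A crude way to encode that task-dependent judgement might look like this (the thresholds and scores are made up for the example; a real robot would tune them from experience):

// Score a contact event for the floor-cleaning task.
// Positive = good, negative = bad, zero = neutral.
int scoreContact(int impactMagnitude, bool cleaningNearWall) {
  if (impactMagnitude > 200) return -10;                   // hard collision: bad
  if (impactMagnitude > 0 && cleaningNearWall) return 5;   // gentle wall contact: good
  return 0;
}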

Experiment 1
For my first experiment I want to keep things simple. Navigation is important for any mobile robot. This first robot will pretend it is cleaning the floor. This means it must cover the entire area of the floor as efficiently as possible without missing any spots.

It will use the RTC to ensure it only cleans from 2am to 5am, when everyone should be asleep, to reduce the likelihood of being stepped on. As people may tend to get up during the night for a snack or to use the bathroom, the robot will learn through trial and error to avoid those locations during certain times and may even change its start and finish time on certain days to suit the habits of the occupants.
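
The schedule check itself is trivial once the RTC can be read. A sketch, assuming the hour has already been fetched from the RTC in 24-hour form:

byte startHour = 2;   // defaults; learning could shift these per day
byte endHour   = 5;

// True when the robot is allowed to clean.
bool cleaningTime(byte hour) {
  return hour >= startHour && hour < endHour;
}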

I am just going to mount a Micro Magician controller on a Doodle Bot base. The 3-axis accelerometer will provide collision detection and the IR receiver will be used to locate the charging station. Doodle Bot's simple encoders will give 8 counts per wheel revolution, and a voltage divider will allow the robot to monitor its battery voltage. Metal strips along the left and right sides will provide terminals for recharging. The front of the Doodle Bot is a wedge shape, so it should be easy to follow an IR beacon between two electrodes and wedge itself in for recharging.

I can take advantage of the fact that this robot is designed to draw. By using a large whiteboard for a floor and giving the robot a whiteboard marker, I can see where the robot has been and how experience affects the robot's path. By setting up a few walls on the board I can easily create a miniature household for the robot to clean.

For my experiment I am using an RTC and SD card reader from Futurelec that have been sitting in a drawer for over 3 years now.

The results of this experiment should be some basic code that would be useful to any mobile robot capable of roaming around the home.

Update: 10-9-2012
I've had a lot of feedback already and think I need to clarify my intentions. I hope to come up with an A.I. library that anyone can use with any Arduino. There are plenty of SD card reader and RTC shields about. If all goes well, I might make a suitable shield for my Micro Magician.

At this point, my theoretical library would have a function for storing raw data (short term memory) on the SD card, as well as one or more functions for sorting / compressing the data to become long term memory.
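
The library is purely hypothetical at this point, but its public face might be as small as the prototypes below. The names are my guesses at the intent, nothing more:

// Hypothetical A.I. library interface; none of this exists yet.
void aiBegin(byte eventSize);                    // initialise the SD card and RTC
void aiRecordEvent(byte *event);                 // append a raw event to short term memory
void aiConsolidate();                            // sort / compress into long term memory
int  aiFindSimilar(byte trigger, byte *result);  // search long term memory by trigger byte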

Once I have my Doodle Bot modified and ready for experiment 1 I will post my results and code.

Update: 12-9-2012
I have built my prototype for experiment 1. It is time to start practicing what I preach. I will continue this blog as a robot post.

Update: 16-9-2012

I am still writing my first bit of code which is not much more than a datalogger for the sensors. At this point in the experiment I need to work out the best way to organize the "memories" so that they can be easily interpreted as short term memory and sorted/compressed for long term memory.

Technical Difficulties
The first problem I spotted was that the SD card library is huge (about 5K), so I am going to be struggling with the 16K of the current Micro Magician. Fortunately I should have the first prototype of the Micro Magician Pro next week, which has an ATmega328P as well as both 5V and 3.3V regulators. This will make the RTC happy, as it needs a 5V supply, which is currently being provided by the USB cable.

The second problem I had was that the Wire library does not work very well with an 8MHz clock. I have now found a new, updated I2C library which solves this problem and has cool new features such as fast mode (400 kHz), scan(), which looks for available I2C addresses, and pullup(), which enables the processor's internal pull-up resistors. That last one is ideal for a 3.3V processor reading a 5V I2C device.
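
For anyone following along, using that library looks roughly like this. The method names are from my reading of the library's documentation, so check its header file before relying on them:

#include <I2C.h>  // the updated I2C master library, not the stock Wire library

void setup() {
  Serial.begin(9600);  // scan() reports its findings over the serial port
  I2c.begin();
  I2c.setSpeed(1);     // 1 = 400 kHz fast mode, 0 = 100 kHz standard
  I2c.pullup(1);       // internal pull-ups: handy for a 3.3V MCU reading a 5V device
  I2c.scan();          // list any I2C addresses that respond
}

void loop() {
}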

Update: 23-9-2012

OK, I finally sorted out my problems with the I2C code and the DS1307. As with most difficult problems, it was a combination of hardware and software. I have now upgraded to the first prototype of my Micro Magician Pro and a 7.4V, 2-cell LiPo rated at 1000mAh. The onboard 5V regulator allows me to power the DS1307 without limiting myself to 4× 1.2V NiMH batteries.

Now that I have the RTC working properly, I wanted to take advantage of its 56 bytes of non-volatile RAM. Unlike the EEPROM memory in modern MCUs, the non-volatile memory can be written to as many times as you like without damage occurring. This is because it is ordinary RAM; the RTC battery stops it from being lost when you turn the robot off.

For now I am using some of this memory to record the periods of time when the processor is shut down. To do this, my code reads the time from the RTC once every second and writes a copy to the NV RAM. In the setup function, which only runs when the processor first starts up, the program reads back the last time stored in memory and compares it with the current time. It then writes this "shutdown period" into a second section of the NV RAM.
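
Here is a sketch of that NV RAM trick. The DS1307 answers at I2C address 0x68 and its battery-backed RAM occupies registers 0x08 to 0x3F; the layout within that RAM is my own invention:

#include <I2C.h>

const byte DS1307       = 0x68;  // DS1307 I2C address
const byte NV_LAST_SEEN = 0x08;  // first NV RAM register: last known time (3 bytes)

// Called once a second: stamp the current time into NV RAM.
void stampNvRam(byte hour, byte minute, byte second) {
  I2c.write(DS1307, NV_LAST_SEEN, hour);
  I2c.write(DS1307, (byte)(NV_LAST_SEEN + 1), minute);
  I2c.write(DS1307, (byte)(NV_LAST_SEEN + 2), second);
}

// Called in setup(): read back the last stamp so the shutdown period
// can be calculated against the current RTC time.
void readLastSeen(byte *hour, byte *minute, byte *second) {
  I2c.read(DS1307, NV_LAST_SEEN, (byte)3);  // request 3 bytes starting at 0x08
  *hour   = I2c.receive();
  *minute = I2c.receive();
  *second = I2c.receive();
}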

This data will not be useful for my current experiment but could be handy at a later stage. My main reason for doing this now was to test that I had everything working properly. I suspect the NV RAM will come in handy for other things later.

Now I can start some serious programming!

The first thing I need to do is work out my file structure for the short term memory. The exact structure will depend a lot on your robot and what sensors it has. The short term memory will record all data for later, in-depth examination while the batteries are charging (dream time).

When I sat down to plan my memory structure, I was surprised at just how many bytes of memory each "event" would need. The robot does not have many sensors (a 3-axis accelerometer, 2 encoders and an IR receiver) but I still ended up with a minimum of 49 bytes. I am going to round this number up to 64 bytes per memory event. This way I have a few spare if I've forgotten anything or want to add more sensors later.

Here is the current structure of a single "memory event". The number at the left is an index number used for selecting specific information from the event.

Note index 0; this is a quick-reference byte that records the reason for the event being recorded, e.g. impact detected or motor stalled. This can be used later when the robot is looking for memories of similar occurrences. For our robot to be intelligent it must be able to cross-reference and analyse these memories.

	AI short term memory data structure

0	memory trigger event	byte
1	year			byte
2	month			byte
3	date			byte
4	day			byte
5	hour			byte
6	minute			byte
7	second			byte
8	magnitude		msb
9	magnitude		lsb
10	deltx			msb
11	deltx			lsb
12	delty			msb
13	delty			lsb
14	deltz			msb
15	deltz			lsb
16	x-axis			msb
17	x-axis			lsb
18	y-axis			msb
19	y-axis			lsb
20	z-axis			msb
21	z-axis			lsb
22	0G time milliseconds	msb
23	0G time milliseconds	lsb
24	left encoder count	msb
25	left encoder count	lsb
26	right encoder count	msb
27	right encoder count	lsb
28	left speed		msb
29	left speed		lsb
30	right speed		msb
31	right speed		lsb
32	left & right brake	byte
33	left & right stall	byte
34	servo position		msb
35	servo position		lsb
36	battery voltage		msb
37	battery voltage 	lsb
38	IR command received	byte
39	current mode		byte
40	position X co-ordinate	msb
41	position X co-ordinate	lsb
42	position Y co-ordinate	msb
43	position Y co-ordinate	lsb
44	position Z co-ordinate	msb
45	position Z co-ordinate	lsb
46	direction		byte
47	action taken		byte
48	reason for action	byte
49-63	spare			byte
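Packed into code, one record might look like the struct below. The field names are mine, and note that the table stores each two-byte value MSB first, while a C struct leaves byte order to the compiler, so the two layouts match in size but not necessarily in byte order:

// One 64-byte memory event (sizeof == 64 on an ATmega, where int is
// 16 bits and avr-gcc adds no padding).
struct MemoryEvent {
  byte trigger;                                       // 0: why it was recorded
  byte year, month, date, day, hour, minute, second;  // 1-7: RTC timestamp
  int magnitude;                                      // 8-9: impact magnitude
  int deltaX, deltaY, deltaZ;                         // 10-15: change in acceleration
  int accelX, accelY, accelZ;                         // 16-21: raw accelerometer axes
  unsigned int zeroGtime;                             // 22-23: 0G (falling) time, ms
  unsigned int leftCount, rightCount;                 // 24-27: encoder counts
  int leftSpeed, rightSpeed;                          // 28-31: motor speeds
  byte brake, stall;                                  // 32-33: left & right flags
  int servoPos;                                       // 34-35: servo position
  unsigned int battery;                               // 36-37: battery voltage reading
  byte irCommand, mode;                               // 38-39
  int posX, posY, posZ;                               // 40-45: position co-ordinates
  byte direction, action, reason;                     // 46-48
  byte spare[15];                                     // 49-63: room to grow
};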

A lot of this data is not really relevant and can ultimately be eliminated from a memory event. For example, 0G fall time is only relevant if the robot actually falls. The problem is that our processor is no supercomputer, and trying to sort through this information while the robot is functioning would significantly slow down its reaction time.

Another problem is that until the information is fully analysed, we do not know what information is significant and what is not. This is where the robot's "dream time" comes in. The idea is that the robot uses the time in the docking station to analyse the events with 20/20 hindsight and try to find ways to improve its response in a situation.

My goal is to have all this done by my lowly MCU, although I admit it may be necessary to use a PC to analyse the data on the SD card while the robot sleeps. At this stage it might sound like I am attempting the impossible, but stop and look at Patrick's maze-solving robot. The robot runs through the maze using a standard "follow the wall" routine and then analyses the path it took to optimize it. Next time the robot goes through the maze it makes no wrong turns.

My current approach for STEP 1 is to just save it all in a standard format as quickly as possible, and sort it later when the batteries are on charge. This may change as I write the code. I'm having a few problems with the impact code right now, which may be due to either mechanical vibration or electrical noise from the I2C interface.
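
Step 1 in code form could be as simple as streaming each 64-byte record to a log file with the standard Arduino SD library (the file name is arbitrary, and SD.begin() is assumed to have been called in setup()):

#include <SD.h>

// Append one 64-byte memory event to the short term log on the SD card.
void logEvent(const byte *event) {
  File f = SD.open("STMEM.DAT", FILE_WRITE);  // FILE_WRITE appends
  if (f) {
    f.write(event, 64);
    f.close();
  }
}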

Comments

Wow, this is great! What a complex idea this is; I like the whole thought process that you've laid out, OddBot. Let's say you were to grow your robot into a fully fledged 2-legged humanoid robot that could go running (or 4-legged like those Boston Dynamics robots)... what type of programming and board would you use? There would be so many servos, and a lot of real-time balancing and vision data to process.

Could PICAXE Basic be used with an appropriate board? Or would the processing power required be so large that a whole desktop computer would have to be attached to the robot? PICAXE Basic seems like such a straightforward language that it would be very manageable to make some type of A.I. self-balancing robot. But I've only scratched the surface, and I'm basing my understanding on seeing how easy it seems to move servos by writing so little code.

There are at least three new boards coming out that are not using PICAXE Basic though:

Arduino - Intel Galileo

Arduino Tre

86Duino Zero and 86Duino One

Would these new boards be capable of handling the calculations?

You know how, when calculating current, resistance and voltage, we can use Ohm's law? Is there a way to calculate how much processing power one would need for certain robot functions?

I think I'd like to start my robot project with the basic hardware that would be required to process all of the real-time info needed for a self-balancing, walking and running robot. I'll just start with the robot following lines etc., and as I learn more programming, I can just build, troubleshoot, and integrate new code into what I have, instead of finding out that my setup is limited and then having to buy new hardware and learn a new programming language. It's going to take all of my brainpower just to wrap my head around simple programming, and I just want to start on the right path. I'm ready to give it a go and see what type of robot I end up with in 3 or 4 years.

Thanks in advance for your opinions and insight!

...I'm just going to repost this in the beginners forum to get some additional info; I just realized that this was your blog. Great work!

It seems to me there are two popular types of A.I. that appear in sci-fi. The first is an AI that is pure logic. These AIs don't express emotion and generally do as they are told. An example would be Jarvis from the Iron Man movies. The second is perhaps a more intelligent, intuitive AI that can develop emotions. These AIs are more likely to disobey a human and think for themselves. Perhaps they should be considered artificial persons or artificial lifeforms? Of course, both types could either betray or be loyal to a human. The only difference is that one AI would make its decision based on pure logic, and the other would claim emotional reasons, which could still be considered logical, just more complex because of a personal element. What one AI sees as betrayal, the other sees as simply unreliable or illogical. What one AI considers to be loyalty to humans, the other considers a logical trade-off, where a repairman, no matter how unreliable, is essential for survival.

Sorry for what I'm about to write being long, I don't know how to do a Vulcan Mind Meld...

I think a good bot will need both...just like there is duality in people.

When I started, I built a bot that navigated, ran missions, wandered, avoided things, used logic.

Recently I have been reading about affective computing, and computer models for emotions, mood, personality, etc. I decided to stick a toe in the water, adding the 10 basic emotions to my bot, having them compete for dominance inside the robot's emotion service, and making the bot express those emotions through altered face and speech. Under the hood, this is already starting to yield a lot of benefits and is changing my opinions on how to put bot brains together.

The next step will be personality. At this point, I think I will start with an array of traits (values 0-100%) which will amount to the degree to which the robot is predisposed to like some thing or some action... examples: likes talking, likes people, likes moving around, likes safety, likes shooting, likes remembering, etc., etc.

While at first I didn't really see the need for emotions or a personality, I now think, purely from a software-building standpoint, it will ultimately make the software more maintainable and greatly enhance the ability to build complex autonomous behavior without spaghetti code. Instead of programming the bot or defining missions for the bot, I'll program it by saying "Take on personality X and go try to accomplish these goals" or simply "Do what you are predisposed to do."

Before I get there, I need a better memory for the bot. I'm VERY interested in your experiences with memory and dream-state processing of the data. I'm hoping to start soon on something conceptually quite similar. My bot is using OpenCV, gathering info about colored objects of interest, their bearings, elevation, etc. It's also scanning 1 out of n frames for OCR processing so that it can recognize text written on posters, books, etc. around rooms. Right now it doesn't remember any of this data very long or do very useful things with it. I plan to change all that.

My plan is to build my next-generation 3D short-term and long-term memory (for each room in my house or outdoors). OpenCV has so many matrix functions that might work well for storing and processing memories (instead of images). I need a room coordinate system and anchors that tie the room to the GPS coordinate system and to adjacent rooms. I think I can have the robot locate itself by spotting some landmarks accurately within a room (I know fairly accurate elevation angles and compass bearings to anything it "sees", which right now is text or colored shapes), looking up these landmarks in long-term memory, determining what room it is in, loading the rest of the long-term memory for that room, and then using this to keep itself located in space. A single spotted and correlated landmark that is elevated (above the bot) gives me a lot of info: I already know the bearing, which puts me on a line, and from elevation I can get distance, if the bot already knows the precise location of the landmark in LT memory. A second landmark on a significantly different bearing would really lock down the location and reduce the error to something very small. When I have 2 or more landmarks, I could just skip elevation and use compass bearings only.
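
(A quick sketch of the geometry described above, with invented names: a landmark's known height above the robot's camera and its measured elevation angle give range by simple trigonometry, and range plus compass bearing gives a position offset.)

#include <math.h>

// Range to a landmark from its elevation angle (radians) and its
// known height above the camera.
float rangeToLandmark(float heightAboveCamera, float elevation) {
  return heightAboveCamera / tan(elevation);
}

// Convert range and compass bearing (radians, 0 = north) into
// east/north offsets from the robot to the landmark.
void offsetFromLandmark(float range, float bearing, float *east, float *north) {
  *east  = range * sin(bearing);
  *north = range * cos(bearing);
}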

I find this interplay between short- and long-term memory (that you've written about in some posts) fascinating. I can't really conceive yet what the process should be or how to make it happen. It would be nice if the bot were able to learn and evaluate its own memories and create new long-term ones. I'm not a neural-net guy. An Arduino is the guts of my bot, but the higher brain functions, emotions, and vision processing are on a Motorola Bionic. The bot is able to sync its relational DB with a PC, so I could do the memory processing off-bot, but I don't see the need. I like doing things on-bot; it feels right to me, and the phones have a lot of processing and storage power.

Any suggested books or papers I should read before embarking on these parts?  Did you get yours working?  Any algorithms that are especially pertinent to this type of problem?  Any experiences or advice you want to share?  I'll be diving into this in the coming weeks/months.

Regards,

Martin

I saw you work for Dagu.  I'm hoping to put my bot brains and body on a Wild Thumper soon.

The Jarvis type is for commercial use, but robots like ASIMO are simply for fun, not for application, as far as I know. I believe someday these two will combine to create a robot which can think of new ideas and make logical decisions on its own, with 100% accuracy, for mankind. It will sense what humans need and do it without error.

Something like this, which can copy a human but is unable to think, is now at the research stage.

One small request: I would really like to see what you think about giving Chopsticks A.I., as it is my (and I believe many others') favourite robot on LMR. I wonder what it would do, apart from all the things it has already done. (Imitating other spiders, or it might create a spider web!!!) :)

I would like to do more with ChopSticks but I just do not have the time.

Thanks to both of you, Ro-bot-X and OddBot.

Is the library you are using the one in the Arduino IDE, or does the Micro Magician have its own library for the SD card interface?

I'm also a little bit confused about the pin connections you mentioned:

D10(SS)     from Arduino to SD card pin 1(CD/DAT3)
D11(MOSI)  from Arduino to SD card pin 2(CMD)
D12(MISO)  from Arduino to SD card pin 7(DAT0)
D13(SCK)   from Arduino to SD card pin 5(CLK)

But from the Micro Magician manual, D10-D13 are digital PWM pins and SS, MOSI, MISO, SCK are ISP pins. Does that mean D10-D13 are connected to SS, MOSI, MISO, SCK and then to the interface? Please correct me if I'm wrong.

I am just using the standard Arduino SD card library for now.

All Arduino boards and clones using the ATmega8, ATmega168 or ATmega328 use the ISP pins for D10-D13. Look closely at the pin map here: http://arduino.cc/en/Hacking/PinMapping168

Many thanks, OddBot. Sorry for my ignorance; I just found the library in my Arduino IDE.

I'll keep watching this project. Best wishes.