Let's Make Robots!

A Simple Way To Give a Bot Motivations, Decision Making, Better Autonomy

Recently, I got to thinking...there must be a better way to program autonomous behavior than my previous attempts.

I thought about people: people have various motivations, so why shouldn't bots?  When people get hungry, they eat.  When they get curious, they look around.  When sleepy, they sleep.  Surely bots have needs and priorities that should influence their behavior.

To try out the concept, I started with 5 motives.  Remember, these are not the actions themselves, merely the bot's motivation at a given point in time to do a particular action.

1.  Idle Motive

2.  Scan Motive

3.  Explore Motive

4.  Listen to Music Motive - my bot currently likes music and so starts playing immediately on startup, not because I told it to, but because that's what it likes to do.

5.  Talking Motive (Does the bot talk about everything it's doing or just the important stuff?...I think this feature will really help it not annoy me with repetitive jibber jabber!)

I realize this is all basic stuff, but I had to test the concept before I pile on a ton of motives.


So I wrote a simple service, "MotiveSvc", that holds an array of "Motives".  Each motive represents something the bot might like (or dislike) doing.  Each motive has a start value, a goal value, and a movement rate.  Ten times a second, the service iterates through each motive and "moves" its value towards its goal value, up or down, at its movement rate.
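For concreteness, here is a minimal sketch of how such a service might look. The names (`Motive`, `MotiveSvc`, `tick`) are my own illustration, not the author's actual code:

```python
import time

class Motive:
    """One motivation: a value that drifts toward its goal over time."""
    def __init__(self, name, start, goal, rate):
        self.name = name
        self.value = start   # current strength of the urge
        self.goal = goal     # resting level the value drifts toward
        self.rate = rate     # how far the value moves per tick

    def tick(self):
        # Move value toward goal, up or down, without overshooting.
        if self.value < self.goal:
            self.value = min(self.value + self.rate, self.goal)
        elif self.value > self.goal:
            self.value = max(self.value - self.rate, self.goal)

class MotiveSvc:
    """Holds the motives and updates them ten times a second."""
    def __init__(self, motives):
        self.motives = motives

    def tick_all(self):
        for m in self.motives:
            m.tick()

    def run(self):
        while True:
            self.tick_all()
            time.sleep(0.1)  # ten updates per second
```

With start below goal, a motive builds up pressure on its own; with start above goal, it fades unless something renews it.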

Decision Making

Whenever the bot is not otherwise busy, it asks the service "What do I want to do?" and the service responds with the highest-priority (highest-value) motive.  That motive is then invoked and executes some task or series of tasks.  


When any task or set of tasks is complete, the service is given a chance to readjust priorities based on new information found.  

Examples:  If your bot just did a 180-degree video scan and found something or someone interesting, it might be more motivated to look in that direction, drive over to it, shoot it, etc.  ...When x happens, bump up motivation of y by 25%.
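The decision step and the feedback step together might look something like the sketch below. The `choose` and `bump` helpers and the starting values are hypothetical, assumed from the description above:

```python
class Motive:
    def __init__(self, name, value):
        self.name = name
        self.value = value

def choose(motives):
    """Answer 'What do I want to do?': the highest-value motive wins."""
    return max(motives, key=lambda m: m.value)

def bump(motive, percent):
    """Feedback: when x happens, bump up motivation of y by some percent."""
    motive.value *= 1 + percent / 100.0

motives = [Motive("Idle", 20), Motive("Scan", 55), Motive("Explore", 40)]
assert choose(motives).name == "Scan"

# The scan found someone interesting: make exploring more attractive.
bump(motives[2], 25)   # Explore: 40 -> 50
```

Note that a bump doesn't force an immediate change of behavior; it just tilts the next decision.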

In my case, I then do a similar readjustment of emotional states based on new info.  Emotions are similar to Motives in that there is a set of them competing for emotional dominance, but that's another blog.

Behavior Adjustment - Swapping Personalities

The idea here is to adjust the settings (the start values, goal values, movement rates, and feedback changes) until you get a single personality that you like, or to make multiple sets of values and load the one you want at a given time.  My bot has a database onboard, so I intend to create and name multiple personalities that I can load at any time.  An SD card or some other persistence mechanism could also be used.  Then I can make one personality that is talkative to help in debugging, and another that likes to shoot things.
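A personality, then, is just a named table of motive settings. A minimal sketch, with entirely made-up names and numbers for illustration:

```python
# Hypothetical personality table: each entry maps a motive name to
# (start, goal, rate). The values here are illustrative only.
PERSONALITIES = {
    "chatty_debugger": {
        "Talk":    (80, 90, 2),
        "Explore": (10, 20, 1),
    },
    "trigger_happy": {
        "Talk":    (10, 10, 1),
        "Explore": (60, 80, 3),
    },
}

def load_personality(name):
    """Build a fresh motive table from a named set of saved values."""
    return {motive: {"value": start, "goal": goal, "rate": rate}
            for motive, (start, goal, rate) in PERSONALITIES[name].items()}

bot = load_personality("chatty_debugger")
```

Whether the table lives in a database, on an SD card, or in a config file is just a persistence detail; the swap itself is a reload of these numbers.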

Results So Far

So far, I like it a lot.  With just a few motives, the bot seems a LOT more life-like and autonomous.  I find myself watching the bot "do its thing" more while hitting far fewer buttons on my remote control.  It will take a lot more work to get all the motives I'll ultimately want and to tweak the settings and feedbacks so the resultant behavior makes sense.  All in all, though, I think it is a much more promising way to build autonomous behavior than my previous approaches, which were more procedural or cause-and-effect type behavior.  Having said that, there is still a place for both of those patterns as well.  I think Motives are a good layer to put "on top" of the other two.

If anybody is reading this far...Does the concept of Motives seem interesting or fruitful in your experience?  Any other ideas/solutions that people have tried or seem more appropriate?

Other Random Food for Thought

What type of behavior could result if you made a bot that likes "Money", "Praise", or other concepts?  Could motives be made alterable by the bot itself in response to rewards/punishments in the environment?  Let's say a bot likes money but hates moving around due to battery consumption.  If people gave it money when it moved around and performed (Street Performer Bot), might the bot decide to like moving around when in the presence of people in order to get money?
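One way the street-performer idea could be sketched: let a reward received while a motive is active nudge that motive's resting goal upward, so the bot gradually "learns to like" what pays off. Everything here (the dict layout, the learning rate) is an assumption for illustration:

```python
def reward_active_motive(motive, reward, learn_rate=0.1):
    """Nudge a motive's resting goal toward what just paid off.

    A money reward received while 'MoveAround' was active slowly
    raises how much the bot wants to move around in the future.
    """
    motive["goal"] += learn_rate * reward

move = {"name": "MoveAround", "value": 5, "goal": 5, "rate": 1}
reward_active_motive(move, reward=20)   # someone dropped a coin
```

A punishment could be the same call with a negative reward, letting goals drift back down when a behavior stops paying.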

Anybody got any crazy (non-sexual) motivations they might like to see in a bot?



Well done and a nice approach. Our Swarm Robot project was always lacking those behavior functions (I knew there was something wrong).

I would really like to see some basic habits or behaviors in a robot. So far you have explained it very well, but there is still a long way to go to make it work like in real life and not seem pre-programmed.

Wow...seriously cool stuff with swarms, doubt I could ever figure out how to build something constructive for $5.  I've always been fascinated by swarming concepts.  I'm hoping to add another bot to my family in the next year to be a "Wingman" for my current one.  One day I'd like to try my hand at getting bots to wander in small formations...line ahead, wedge, echelon right, left, etc.

Yeah, I certainly agree about there being a long way to go to avoid a pre-programmed look.  I will say it "looks and feels" like a step in the right direction.  Getting the emotional responses, speech, and motives coordinated to common purposes seems like many many days/weeks of tinkering.  I think it's tricky to balance the desire to move around and "do something" with a willingness to sit still and observe and interact with people.  I find my bot seems most natural when it moves less...it kinda seems like a 2-yr-old when it wanders.  I've looked into trying to implement one of the conversational simulators that are available, but they look like they require a lot of training before they can avoid lots of nonsensical talk.  Right now my bot understands about 30 words in simple sentences...I think I'll need 300 or 3000 to make it interesting. 

At present I've taken a detour from behavioral stuff to develop a better vision system...trying to get my bot to have a memory of many objects of many different colors in a room rather than just one color at a time.  I'm hoping to come up with a visual mapping system and a better localization method based on correlating visual cues to the map.  I think it'll suck up my free time for a while.  I think as the bot is able to recognize more diversity in its environment through its sensors, I'll get more behavioral ideas.

When I get more behaviors added, I'll drop a post or some video.  In the meantime, if you think of any wacky or interesting behaviors/motives for a bot, send them my way!




Yep, an interesting topic. We already had some discussions about similar things. I will just collect this one for now and see when I have time to do some work in this direction. 

And yes, I will post all the wacky and interesting or both here :-)

Interesting idea.  This might be the start of creating robots that are autonomous and react in a human way. 

Another approach might be to model this on Maslow's hierarchy of needs, since people's motivations are really a hierarchy.  We are complex creatures and our motivations are pretty complex.  Our responses are always shaded by these motivations. 

For instance, if a stranger hits you, without even thinking about it, you might hit them back.  But if a family member hits you in the arm, your response to them would be shaded by the relationship as well as recent events.  You might hit them playfully back or, if it is your older brother, hit him with everything you've got! 

What I would envision as a way to model this in code would be to list possible responses to a particular event.  Then run through previous events that occurred with a weighting scheme (the most recent having a higher weighting) as well as a weighting for how the event fits into Maslow's hierarchy of needs.  The proper response(s) would then bubble up to the top.  If they are very close together in weighting and wouldn't interfere with each other, it might do multiple things.  For instance, the robot might do both: hit your brother back and tell him to stop hitting you.
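The commenter's scheme could be sketched roughly as follows. The scoring formula, the decay factor of 0.5 per step back in time, and the data shapes are all my guesses at one way to realize the idea, not anyone's working code:

```python
def score_responses(candidates, past_events, needs_weight):
    """Rank candidate responses to an event.

    candidates:   {response: needs_fit}, where needs_fit rates how well
                  the response serves the Maslow needs hierarchy
    past_events:  list of (response, relevance) pairs, most recent last
    needs_weight: how much the needs hierarchy matters vs. history
    """
    scores = {}
    for response, need_fit in candidates.items():
        # Recency-weighted history: each step back halves the weight.
        history = sum(rel * (0.5 ** (len(past_events) - 1 - i))
                      for i, (resp, rel) in enumerate(past_events)
                      if resp == response)
        scores[response] = needs_weight * need_fit + history
    # The proper responses bubble up to the top.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Responses whose scores land very close together could then both be executed, as in the "hit back and tell him to stop" example, provided they don't conflict.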

This is very simplistic and not thought out very well.  I believe this idea has legs though...

Thanks for getting me to think about this.




Great input.

I originally had Maslow in mind when I got started, but my understanding was so shallow...what I remembered from school.  I'm realizing now that I strayed from it.  About the only part of Maslow in the code at the moment is that the "Goal Value" for each motive can be set at a different level.  This means that ultimately, if unaddressed, some motives will supersede others and take highest immediate priority.  So, breathing becomes more important in the moment when air is needed, and self-actualization goes on the back burner.  Seems like Maslow should have turned his pyramid upside down so you could read it top down in priority order.

I never got any further with Maslow than that, so your examples about possible responses to particular events are beyond my psych knowledge.  I'm going to let that one brew in my head awhile.  I believe Watson and other AIs come up with multiple responses and rank them.  Ah, ok, ...so I need to rank them as to how well they satisfy the needs hierarchy...which could be my motives.  Interesting.  Seems like the motives might be ok, but the decision-making process needs to be a lot more sophisticated.  I think I'll have to resort to writing use cases or something to figure out how to make all these concepts tangible enough to spin code.

Another topic you mentioned...previous events.  I don't really have a programming conception of "previous events" yet...no long term memory except a map.  (more time to brew needed)  I'm working on a visual memory, but an event memory is also needed.  After reading the links you sent, and the related "Attachment Theory" stuff, it seems a "Relationship Memory" is also needed.  I haven't yet figured out how to get a bot to recognize a person, but I do see on google that many people have done so with a phone, so I believe I'll get there as soon as I can get the time to focus on it.

Thank you so much for all these thoughts.  This is exactly the kind of input that really helps at this early stage.



What you are describing sounds a lot like Subsumption Architecture (now called behavior-based) programming.  Look it up; you might get a few ideas.

It's cool how great minds come up with the same ideas. :)

Keep it up.  I am following this with interest.

Thanks for the suggestion to look it up...I did, and it helped.  It seems I have much to learn about robotics.  I see a lot of benefits in using the techniques as software patterns, especially for the lower level services that are fundamental to any bot.

The Wikipedia article on subsumption referred me to the MIBE architecture, a further evolution of subsumption, which seems eerily similar to the ideas I've been writing about here for higher-level behaviors...motivations competing for dominance, decision-making, etc.  It goes on to bring up Activation Trees and a Behavior Conflict Resolver subsystem, which I am already finding a need for.  It's basically a mechanism (in my terminology) to allow two motives to execute two different sets of tasks concurrently, provided they do not try to take control of the same resources (drive motors, for example) at the same time.
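That resource-claiming idea can be sketched with a simple claim table. The class and resource names below are illustrative, not taken from MIBE or the author's code:

```python
class ConflictResolver:
    """Let two motives run concurrently unless they need the same
    resource (e.g. the drive motors) at the same time."""

    def __init__(self):
        self.claimed = {}   # resource name -> motive currently holding it

    def try_start(self, motive, resources):
        # Refuse if any needed resource is held by a different motive.
        if any(self.claimed.get(r, motive) != motive for r in resources):
            return False
        for r in resources:
            self.claimed[r] = motive
        return True

    def finish(self, motive):
        # Release everything this motive was holding.
        self.claimed = {r: m for r, m in self.claimed.items()
                        if m != motive}

cr = ConflictResolver()
cr.try_start("Explore", ["drive_motors", "camera"])   # succeeds
cr.try_start("Scan", ["camera"])                      # refused: camera in use
cr.try_start("Talk", ["speaker"])                     # succeeds: no conflict
```

So "Explore" and "Talk" can run side by side, while "Scan" must wait for the camera to be released.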

Thanks again for the help.