Let's Make Robots!

Emotions and Motivations as behaviour architecture

Hi all, a few days ago I started designing the behaviour-based architecture for my robot Hurby. My initial plan was a subsumption architecture (see my previous blogs) in which different behaviours compete to be selected as dominant, and hence move Hurby accordingly.

In one of my previous posts, I received a comment from an LMR user (mtriplett) - thanks again, Martin. He suggested making use of different motivations to drive those behaviours (as in the MIBE architecture). Motivations depend on the emotions that Hurby can feel at different moments.

I've been reading a lot about emotion- and motivation-based behaviour methodologies over the last few days, and now I'm sure this is the path I want to follow, so I will change focus and rethink everything I have done so far.

What I had already developed was a subsumption architecture in which different behaviours compete to win (see figure):

I had only developed the most basic behaviour, "Happiness". According to its happiness level, Hurby would move faster or slower and emit voices or sounds.

Now, what I want to do is redesign the whole architecture to build emotions into Hurby's behaviours. So far I have identified these primary emotions:


  • Boredom
  • Satiation
  • Frustration


These emotions will feed into several motivations, such as:


  • Find someone to play with
  • Maximise playing time
  • Avoid damage from someone


These motives pursue several goals: reducing boredom, reducing frustration, and reaching satiation. But all of them can be subsumed under one overarching goal: Happiness.

So, graphically, the architecture I want to implement now would look something like this:

Let's talk in a bit more depth about each motive.

Find someone to play with

When Hurby is inactive (nobody is playing with it), it can measure the time elapsed since somebody last played with it. This is what I've named TSLP (time since last play). This variable drives its boredom level; I will use a fuzzy inference system (FIS) to obtain a boredom degree in the range 0 to 1.
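As an illustration, the TSLP-to-boredom mapping could be as simple as a single ramp membership function. This is only a sketch: the class name and the 60 s / 600 s breakpoints are assumptions, not Hurby's actual fuzzy sets.

```java
// Sketch: map TSLP (seconds since last play) to a boredom degree in [0, 1].
// The 60 s / 600 s breakpoints are hypothetical, not Hurby's real fuzzy sets.
public class BoredomFis {
    static final double TSLP_LOW = 60.0;   // below this, not bored at all
    static final double TSLP_HIGH = 600.0; // above this, fully bored

    // Linear ramp membership: 0 below LOW, 1 above HIGH, linear in between.
    public static double boredomDegree(double tslpSeconds) {
        if (tslpSeconds <= TSLP_LOW) return 0.0;
        if (tslpSeconds >= TSLP_HIGH) return 1.0;
        return (tslpSeconds - TSLP_LOW) / (TSLP_HIGH - TSLP_LOW);
    }
}
```

A real FIS would combine several overlapping sets and rules, but even a single ramp like this gives a usable 0-1 degree to feed the motivation.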

On the other hand, Hurby has an IR presence detector and a voice recognition module. Both sensors serve to detect whether someone is moving nearby.

Depending on its boredom level, Hurby will try to catch the attention of anyone who moves around it. This motivation to find someone to play with grows stronger as Hurby gets more and more bored.

This motivation's goal is reached when someone nearby, after seeing Hurby's movements or hearing its voice/sound calls, starts playing with it: touching it or talking to it (saying a command that Hurby can recognise).

When the goal is reached, the TSLP variable goes back to 0 (the last action has just occurred, so the time elapsed since the last play is zero). As TSLP represents the boredom level, the motivation is now satisfied and Hurby will give this motive less importance, allowing other motives to be selected as dominant.
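The grow-and-reset cycle above could be sketched like this (the class name and the linear strength rule are illustrative, not Hurby's real code):

```java
// Sketch: the "find someone to play with" motive gains strength as TSLP grows
// and resets when a play event occurs. The 600 s cap is an assumed value.
public class PlayMotive {
    private double tslp = 0.0; // time since last play, in seconds

    // Called periodically to advance the clock.
    public void tick(double dtSeconds) { tslp += dtSeconds; }

    // Goal reached: someone played with Hurby, boredom is satisfied.
    public void onPlayEvent() { tslp = 0.0; }

    // Motive strength grows with elapsed time, capped at 1.0 after 10 minutes.
    public double strength() { return Math.min(tslp / 600.0, 1.0); }
}
```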

Also, as this motive feeds into the Happiness behaviour, the actions it drives will be modulated by Hurby's happiness state. For example, if Hurby is unhappy, it will try to catch someone's attention with sad words: "Hi, it's been a long time without you. Are you fine?? Come here and tell me!!". On the other hand, if it is happy, it will try to catch your attention with funny movements and cheerful comments: "Hi again! I've got a joke for you, come here!! Come on!!".

Maximise playing time

In this case, satiation is the emotion that drives this motive. Hurby is hungry for playing time and wants to be satiated. To meet this goal, it will try to keep whoever is playing with it from stopping until a minimum time (the satiation time) has elapsed.

This motive is driven by two main inputs:

  • Time since first action: the time elapsed since someone started playing with it. It is proportional to its satiation level.
  • Playing time trend: measured as the seconds elapsed between actions (touches and/or recognised voice commands).

Based on both inputs, Hurby will try to keep the play going if it detects that the playing-time trend is slowing down while it is not yet satiated. It will ask you questions or suggest actions: "Ohh please, tickle me, I want to laugh out loud!!". If the person playing with it goes on with the game and tickles it, its satiation level will increase.

Once the maximum satiation level is reached (after at least a minimum period of play time T), this motive will lose strength against the others.
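A minimal sketch of this motive, assuming a hypothetical 180 s satiation time T and a 10 s gap threshold for "the pace is slowing":

```java
// Sketch: satiation rises with accumulated playing time; the motive keeps
// prompting the player while the gap between actions grows and satiation
// is still below target. The 180 s target and 10 s threshold are assumed.
public class SatiationMotive {
    static final double SATIATION_TIME = 180.0; // minimum play time T (assumed)
    private double timeSinceFirstAction = 0.0;  // approximated by summing gaps
    private double lastActionGap = 0.0;         // seconds between last two actions

    // Called on each touch or recognised voice command.
    public void onAction(double gapSeconds) {
        timeSinceFirstAction += gapSeconds;
        lastActionGap = gapSeconds;
    }

    // Satiation level in [0, 1], proportional to time played.
    public double satiation() {
        return Math.min(timeSinceFirstAction / SATIATION_TIME, 1.0);
    }

    // Ask for more play while not yet satiated and the pace is slowing.
    public boolean shouldPromptPlayer() {
        return satiation() < 1.0 && lastActionGap > 10.0;
    }
}
```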

Avoid damage from someone

Even if Hurby is trying to catch someone's attention, or is in the middle of playing, being lifted by its ears hurts. In that case its only motivation is to make you stop doing that.

This motive is driven by two inputs:

  • Time being lifted: measures the time since the damage condition started.
  • Hurt status: tells whether or not it is being lifted by an ear.

In this case, when Hurby detects that someone is lifting it by an ear, it will start moving vigorously and saying things like "Please, that hurts, let me go!!". If the situation continues, it will increase the strength of its movements and sounds.

The goal is reached when Hurby detects that its ears are no longer being pressed, but its frustration degree will depend on the time elapsed, and of course that will affect its basic happiness state.
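The escalation and the frustration carry-over might be sketched as follows (all constants are invented for illustration):

```java
// Sketch: protest intensity escalates the longer an ear stays lifted; when
// released, the elapsed time feeds a frustration degree. Constants assumed.
public class AvoidDamageMotive {
    private double liftedSeconds = 0.0;
    private boolean hurt = false;
    private double frustration = 0.0; // 0..1, grows with how long the hurt lasted

    // Called periodically while the robot runs.
    public void tick(double dt) { if (hurt) liftedSeconds += dt; }

    // Called when the ear sensor changes state.
    public void setHurt(boolean lifted) {
        if (hurt && !lifted) { // goal reached: ears released
            frustration = Math.min(frustration + liftedSeconds / 30.0, 1.0);
            liftedSeconds = 0.0;
        }
        hurt = lifted;
    }

    // Protest intensity 0..1, reaching maximum after 5 s of being lifted.
    public double intensity() {
        return hurt ? Math.min(liftedSeconds / 5.0, 1.0) : 0.0;
    }

    public double getFrustration() { return frustration; }
}
```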

Happiness as modulating behaviour

These emotions (boredom, satiation and frustration) feed into a common happiness state, which will modulate all the actions (movements and voice/sounds) that Hurby performs. This modulation allows Hurby to change its character dynamically and show different behaviours.
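For example, the modulation could scale movement speed and pick the tone of the attention calls quoted earlier. This is only a sketch: the half-speed floor and the 0.5 threshold are assumptions.

```java
// Sketch: the same action is rendered differently depending on the current
// happiness level (0..1). Thresholds and the speed floor are assumed values.
public class HappinessModulator {
    // Scale a base movement speed by happiness, never dropping below half speed.
    public static double movementSpeed(double happiness, double baseSpeed) {
        return baseSpeed * (0.5 + 0.5 * happiness);
    }

    // Pick a cheerful or a sad attention call depending on the happiness level.
    public static String attentionCall(double happiness) {
        return happiness >= 0.5
            ? "Hi again! I've got a joke for you, come here!! Come on!!"
            : "Hi, it's been a long time without you. Are you fine?? Come here and tell me!!";
    }
}
```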



As a first approach, I think I can code this new architecture quickly. Perhaps next week I will have results (a video or similar). Nevertheless, I have the feeling that deeper motives will emerge once I start coding and testing it.

We shall see what happens!!!

See you.



I enjoyed watching you go through this process and learned a bit on the way.  Thanks for sharing!




Very interesting. I hope you succeed in making a functional version.

I hope Hurby will like his new code! :)

Hey, thanks for the shout out.  I've been looking through my downloads trying to find some papers for you.

There are some fairly mature emotional models out there that boil everything down to around 9-10 emotions in some kind of 3d emotion vector space.  I never used the 3d vector space, but I did use their emotions in my ultra dumbed down keep it simple stupid way.  I added a "Fine" emotion to mine that is basically neutral (not happy, not sad, not afraid, just on the positive side of ok, satiated?)

Your avoid damage from someone would fall under a "Fearful" emotion in their model.  You might need some kind of "Self Preservation Motive" that would evaluate circumstances and crank up the fear if appropriate, or do fight or flight behaviors.  I haven't built fear behaviors into mine yet other than simply showing fear in the eyes, so this is just guessing.  I would argue that happiness is an emotion in this model, but that your bot might also need a happiness motive to get it to initiate behaviors that might lead to happiness.  I think your "satiation" might be the same as "fine" in my lingo.

For me, a critical determiner of the balance of the behaviors was picking the right min, max, goal, and degrade rate (the amount the emotion changes per iteration) for each emotion.  Most of the emotions gravitated towards a low value, while boredom gravitated towards a high value...max boredom = sleep.

In retrospect, I wish I had used a database or some other type of metadata storage to store events and the values of changes to make to emotions/motivations when each event occurs.  Example:  Ears Pulled Event boosts fear by 20%, decreases happiness by 50%.  Unfortunately for me, these rules are built into the code right now.  This would make tuning and balancing the behaviors over time a whole lot easier.
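A minimal sketch of that metadata idea, using the "Ears Pulled" percentages from the text (treated here as additive percentage points for simplicity; everything else is assumed):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: store per-event emotion adjustments as data instead of hard-coding
// them, so they can be tuned (or loaded from a file) without recompiling.
public class EmotionRules {
    // event name -> (emotion name -> delta in percentage points)
    private final Map<String, Map<String, Integer>> rules = new HashMap<>();
    private final Map<String, Integer> emotions = new HashMap<>();

    public void addRule(String event, String emotion, int deltaPct) {
        rules.computeIfAbsent(event, k -> new HashMap<>()).put(emotion, deltaPct);
    }

    public void setEmotion(String emotion, int pct) { emotions.put(emotion, pct); }

    // Apply every rule registered for this event, clamping results to 0..100.
    public void onEvent(String event) {
        rules.getOrDefault(event, Map.of()).forEach((emotion, delta) -> {
            int v = emotions.getOrDefault(emotion, 0) + delta;
            emotions.put(emotion, Math.max(0, Math.min(100, v)));
        });
    }

    public int getEmotion(String emotion) { return emotions.getOrDefault(emotion, 0); }
}
```

Swapping the `addRule` calls for rows read from a database or config file would give exactly the tuning-without-recompiling workflow described above.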

Good luck!

In case I didn't make sense and you can relate to code better, here is a Java excerpt (this runs on Android) that contains the essence of how I did emotion configuration and calculation.  Note: Instead of using 0-100%, or 0-1 with decimals and floating point math, I used integer values 0-10000...10000 represents 100% or 1.  I expect you'll come up with something a lot better than what I did.  I can post the rest of the code if desired.  The essence of the motivation side is the same, a service with an array of motives, except that motives actually have a few lines of code.  I guess emotions are basically empty while motives kick off behaviors and alter the values of emotions.

package mab.droid.robotics;

import mab.droid.baseclasses.BaseEmotion;

public class EmotionSvc {

    public static final int EMOTION_NEUTRAL = 0;
    public static final int EMOTION_ANGRY = 1;
    public static final int EMOTION_ANNOYED = 2;
    public static final int EMOTION_BORED = 3;
    public static final int EMOTION_CONCENTRATED = 4;
    public static final int EMOTION_DEPRESSED = 5;
    public static final int EMOTION_FEARFUL = 6;
    public static final int EMOTION_HAPPY = 7;
    public static final int EMOTION_SAD = 8;
    public static final int EMOTION_SURPRISED = 9;

    // Degrade rates: the amount an emotion changes per iteration.
    public static final int DEGRADE_QUICK = 150;
    public static final int DEGRADE_SLOW = 50;
    public static final int DEGRADE_MEDIUM = 25;
    public static final int DEGRADE_LONG = 12;
    public static final int DEGRADE_VERY_LONG = 6;

    private static final int NUM_EMOTIONS = 10;
    private static BaseEmotion[] _Emotions = new BaseEmotion[NUM_EMOTIONS];

    private static int _EmotionalState = EMOTION_NEUTRAL;
    private static boolean _IsInitialized = false;

    public static void InitEmotions() {
        // name, start value, goal value, degrade rate (0-10000 scale)
        ConfigEmotion(EMOTION_NEUTRAL, "Fine", 2500, 2500, 0);
        ConfigEmotion(EMOTION_ANGRY, "Angry", 0, 0, DEGRADE_SLOW);
        ConfigEmotion(EMOTION_ANNOYED, "Annoyed", 0, 0, DEGRADE_LONG);
        ConfigEmotion(EMOTION_BORED, "Bored", 0, BaseEmotion.MAX_VALUE, DEGRADE_LONG);
        ConfigEmotion(EMOTION_CONCENTRATED, "Concentrated", 0, 0, DEGRADE_LONG);
        ConfigEmotion(EMOTION_DEPRESSED, "Depressed", 0, 0, DEGRADE_VERY_LONG);
        ConfigEmotion(EMOTION_FEARFUL, "Fearful", 0, 0, DEGRADE_SLOW);
        ConfigEmotion(EMOTION_HAPPY, "Happy", BaseEmotion.MAX_VALUE, 0, DEGRADE_QUICK);
        ConfigEmotion(EMOTION_SAD, "Sad", 0, 0, DEGRADE_SLOW);
        ConfigEmotion(EMOTION_SURPRISED, "Surprised", 0, 0, DEGRADE_QUICK);

        _IsInitialized = true;
    }

    public static void ConfigEmotion(int pIndex, String pEmotionName, int pStartValue, int pGoalValue, int pDegradeRate) {
        BaseEmotion MyEmotion = new BaseEmotion();
        // (the BaseEmotion setup lines were lost in the original post)
        _Emotions[pIndex] = MyEmotion;
    }

    public static void CalculateEmotionalState() {
        int DominantEmotionIndex = 0;
        double CurrentEmotionIntensity = 0;
        double DominantEmotionIntensity = 0;

        // Degrade every emotion one step, then pick the most intense as dominant.
        for (int i = 0; i < NUM_EMOTIONS; i++) {
            CurrentEmotionIntensity = _Emotions[i].Degrade();
            if (CurrentEmotionIntensity > DominantEmotionIntensity) {
                DominantEmotionIndex = i;
                DominantEmotionIntensity = CurrentEmotionIntensity;
            }
        }

        _EmotionalState = DominantEmotionIndex;
    }

    public static int getEmotionalState() {
        return _EmotionalState;
    }

    public static BaseEmotion getCurrentEmotion() {
        return _Emotions[_EmotionalState];
    }

    public static int getCurrentEmotionPercentage() {
        return _Emotions[_EmotionalState].getValuePercentage();
    }
}

Hi Martin, thank you very much for the suggestions; the sample code is very useful. I'm coding something quite similar to yours, but what you call min, max, goal and degrade rate, I'm implementing through a fuzzy inference system with an extremely simple rule base.

As I process that fuzzy system at a periodic pace (for example, once per minute) and whenever an event (touch or voice command) is detected, the degrade rate can be variable... (well, at least theoretically, according to the input values and the fuzzy rules activated at a given time). I have to check whether this idea works properly or not. By adjusting the input/output fuzzy sets and the fuzzy rule database, I can change the min/max goal values and the degrade rate.
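To illustrate the idea of a variable degrade rate, here is a sketch where a tiny two-rule fuzzy base decides the per-tick decay. The rules and breakpoints are invented for illustration, not the actual rule base.

```java
// Sketch: instead of a fixed degrade rate, derive the per-tick decay from a
// minimal fuzzy rule base evaluated periodically. All constants are assumed.
public class FuzzyDegrade {
    // Membership of "recently stimulated", given seconds since the last event.
    static double recentDegree(double sinceEvent) {
        if (sinceEvent <= 10) return 1.0;
        if (sinceEvent >= 120) return 0.0;
        return (120 - sinceEvent) / 110.0;
    }

    // Rule base: IF recently stimulated THEN degrade slowly (1%/tick);
    //            IF not THEN degrade quickly (5%/tick).
    // Defuzzified as a weighted average of the two rule outputs.
    public static double degradeRate(double sinceEvent) {
        double recent = recentDegree(sinceEvent);
        return recent * 0.01 + (1.0 - recent) * 0.05;
    }
}
```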

I'm working on this basic happiness motive, and by today or tomorrow I'll have a first test program. In this test, Hurby will start completely happy (100%) and, as I won't play with it, its happiness will decrease to 0%. I've modified the fuzzy sets so this behaviour plays out over a period of 2-3 minutes, and I will post a video showing how it reacts in real time.

I spent some time tonight reading up on Fuzzy Logic, as I never learned much about it.  I am starting to see how using it could unconvolute (is that a word?) some behavioral code I have.  Thanks for the posts, I'm looking forward to seeing what you do and continuing to learn.

It would be nice to have a central repository for Behavioral Programming papers and discussions, either in the forums or here.

I would suggest two forum threads, hopefully made sticky, under Programming where this can be a useful resource for both ourselves and other people. In many ways behavioral programming is easier than traditional programming, at least for choosing goals. It is perfectly OK to ignore this by having a set of goals fixed in programming.

I would like to figure out a framework where emotions and motivations are described in a text format (JSON, XML, etc.) so that we don't have to recompile our code as often when fine tuning, but that may be a dream for later.

I understand that even while sharing help and information, our implementations may be totally different. I find this a good thing at this stage. Even though I would love to see other working examples of code, I think it's too early to decide on a single implementation yet. Let the frameworks compete and reveal the strengths and weaknesses of each, so that we may help each other improve our frameworks.

And do I hear those two fateful words echoing down the corridors of time? **Mortal Combat!**. :)

Jay (or as some have said, DT)