Let's Make Robots!

Robbie the Robot

Navigation, fetch objects, recognise people

This is Robbie the Robot


The project has been a couple of years in the making. Robbie is a two-wheel differential-drive robot with an InMoov upper body. The servos in the arms have been replaced with gear motors controlled over I2C, and the head has two Dynamixel servos for pan and tilt.

The first attached video shows the first driving test, using a PS3 joystick. The second shows a test of the arm moving with the gear motors instead of the servos. The next test will be with the arm controlled through ROS MoveIt.
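The joystick test boils down to mixing one linear and one angular velocity command into left and right wheel speeds for the differential drive. A minimal sketch of that mixing (the wheel separation here is an assumed value, not Robbie's actual geometry):

```python
def diff_drive_mix(linear, angular, wheel_sep=0.40):
    """Convert a body velocity command (m/s, rad/s) into left/right
    wheel speeds for a differential-drive base.
    wheel_sep is the distance between the wheels (assumed value)."""
    left = linear - angular * wheel_sep / 2.0
    right = linear + angular * wheel_sep / 2.0
    return left, right

# Driving straight ahead: both wheels run at the same speed.
print(diff_drive_mix(0.5, 0.0))   # (0.5, 0.5)
# Turning on the spot: the wheels run in opposite directions.
print(diff_drive_mix(0.0, 1.0))   # (-0.2, 0.2)
```

On the robot this runs inside a ROS node that maps the PS3 axes to the linear and angular commands; the sketch only shows the kinematic mixing step.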



Here is a video of Robbie's arm controlled with ROS MoveIt; the arm is being moved to random locations.

Update 25 Dec 2014

All Robbie wanted for Christmas was a coat of paint and some new parts. I just have to finish some wiring and then he is fully operational. If I have some spare time I want to finish the web interface. After hours of navigation simulation he is ready to start autonomous operation.


Update 16/01/15

Autonomous robot

This is what we get from Wikipedia:

A fully autonomous robot can:

  • Gain information about the environment (Rule #1)

  • Work for an extended period without human intervention (Rule #2)

  • Move either all or part of itself throughout its operating environment without human assistance (Rule #3)

  • Avoid situations that are harmful to people, property, or itself unless those are part of its design specifications (Rule #4)

An autonomous robot may also learn or gain new knowledge like adjusting for new methods of accomplishing its tasks or adapting to changing surroundings.

I have been asked how autonomous Robbie is, and whether I let him move on his own.

In principle he has all the systems, and has demonstrated that they work on their own, and sometimes they all work together. The fact is that for the last two years he has been tethered to the battery charger and partially disassembled. Stage 1 is now complete: we have a working robot. What we don't have is trust in him, or reliability. Stage 2 of this build is to address those problems. Trust will come with reliability, but autonomy needs more. Below is a list of some tasks the robot should do.


  • Charge the battery; this part works, using a behaviour tree

  • Monitor the systems, as part of the above

Sensing the environment

  • Is anyone near me? Face recognition works but needs to be improved

  • Where am I? Localisation will give a map reference, but we need a name, e.g. "lounge room"

  • Day and night: shut down nodes that won't be used at night

  • Short- and long-term memory

Task performance

  • Go to a place; did I achieve my goal?

  • Get something; did I achieve my goal?

  • Locate something; did I achieve my goal?

Indoor navigation

  • Localisation

  • Update the known world: what has changed?

We also need to log activity, successes and failures, to measure performance. In the lab he can go through a door without touching it, but in real life? The same goes for localisation.
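The battery-charging behaviour mentioned above runs as a behaviour tree. As a toy illustration of the pattern (the real tree runs under ROS; the node names, voltage threshold, and actions here are invented):

```python
# A selector ticks its children in priority order and stops at the
# first one that succeeds; a sequence succeeds only if all children do.

class Selector:
    def __init__(self, *children):
        self.children = children
    def tick(self, bb):
        return any(child.tick(bb) for child in self.children)

class Sequence:
    def __init__(self, *children):
        self.children = children
    def tick(self, bb):
        return all(child.tick(bb) for child in self.children)

class Leaf:
    def __init__(self, fn):
        self.fn = fn
    def tick(self, bb):
        return self.fn(bb)

def battery_low(bb):        # condition: system monitor reports low charge
    return bb["battery_v"] < 11.5

def go_charge(bb):          # action: drive to the dock and charge
    bb["action"] = "charging"
    return True

def do_current_task(bb):    # fallback: carry on with normal tasks
    bb["action"] = "working"
    return True

root = Selector(
    Sequence(Leaf(battery_low), Leaf(go_charge)),
    Leaf(do_current_task),
)

bb = {"battery_v": 11.0}
root.tick(bb)
print(bb["action"])  # charging
```

Because charging sits first in the selector, it pre-empts normal work whenever the battery condition fires, which is exactly why the pattern suits long unattended runs.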


Update 05/07/15


It's been a while since the last update. Other than the changes to the drive-base covers, all the work has been to improve reliability. The covers are an effort to keep out dust (and objects) and improve cooling; they also help give a more finished look.

On the autonomous robot project: I thought it would be over quickly, but it looks like being a very long project. The basics are solid and simple behaviour works well. I can leave power on to all systems and leave the robot unattended, and the kids can move and interact with Robbie using voice control, without fear of him crashing into walls or running out of power.

The next challenge is system health monitoring. At the moment I only monitor battery power; I need to monitor the software as well, looking for stalled or crashed nodes. If move_base stalls in the middle of a drive, Robbie will just keep driving. Most of the software crashes were the result of the computer starting to fail (it has now failed totally).
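One common way to catch stalled nodes is a heartbeat watchdog: every node reports in periodically, and anything that goes quiet past a timeout is flagged. A minimal sketch of the stale-check logic (on the robot each node would publish a heartbeat topic; the names and timeout here are illustrative only):

```python
class Watchdog:
    """Flag nodes whose heartbeat has gone stale."""
    def __init__(self, timeout=2.0):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, node, now):
        # Called whenever a heartbeat message arrives from a node.
        self.last_seen[node] = now

    def stalled(self, now):
        # Any node not heard from within the timeout is suspect.
        return [n for n, t in self.last_seen.items()
                if now - t > self.timeout]

wd = Watchdog(timeout=2.0)
wd.heartbeat("move_base", now=0.0)
wd.heartbeat("face_rec", now=0.0)
wd.heartbeat("face_rec", now=3.0)   # face_rec is still alive
print(wd.stalled(now=4.0))          # ['move_base']
```

A supervisor ticking `stalled()` once a second could then stop the drive base, or restart the offending node, instead of letting Robbie keep driving.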

Arm navigation with ROS MoveIt is improving. Tuning the various parameters is very important: joint speed makes a big difference in performance, and I suspect inertia is also taken into account. The biggest problem I had was missing commands; joint goals were sent to the controllers but never arrived. It turns out that was an early sign of computer failure. Robbie won't have his new computer for a couple of weeks, so I can use the time to finish some of the smaller items on the to-do list.


What's next?

The Get_Beer project works in simulation; in real life the grasping needs work.

The point-and-look project: pick a point of interest and Robbie will drive to the object, point his head at it, and move the end effector to touch it. The Kinect in his head will be used for the final positioning and recognition. The navigate-to-the-point part is working; the look-and-point part is untested.


Update 15/09/15

Robbie's computer is still broken, so I was able to catch up on some tasks I never had time for.

The potentiometers were never very accurate, so I have designed magnetic encoders as a replacement. They are more accurate and just plug into the existing structure; they will be fitted on the next rebuild.


The overall control was very hard to maintain and expand. Natural language is not very good for robot control: some verbs are tagged as nouns and thus won't be interpreted as commands. In NLTK you can define which words are verbs or nouns, but maintaining the files is troublesome. I tried pattern_en, but it suffers from the same limitations. I also tried WIT, an online language processor, but the learning curve is too steep and I wanted a local solution. Robbie's chat engine, on the other hand, works well.


I never really looked into pyaiml's capabilities, but it can run system programs with command-line arguments. For testing I reduced the loaded AIML files to two: one for commands, the other for general responses.

Of course that just puts me back where I was before, but with a lot more potential. Pyaiml will throw an error message for an unknown command; I made it append the command to an AIML file, so I only have to add the meaning later. I could automate this, but for now I want control over it. This sort of gives me a learning robot.
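The append step can be sketched independently of pyaiml: when the chat engine has no match, write an empty AIML category for the phrase so the response can be filled in by hand later. The file name and template layout here are my own choices, not Robbie's actual files:

```python
# AIML categories pair an input pattern with a response template.
# AIML patterns are conventionally upper-case.
AIML_CATEGORY = """<category>
  <pattern>{pattern}</pattern>
  <template>TODO</template>
</category>
"""

def log_unknown(phrase, path="unknown.aiml"):
    """Append an empty category for an unrecognised command,
    so its meaning can be filled in later."""
    with open(path, "a") as f:
        f.write(AIML_CATEGORY.format(pattern=phrase.upper()))

log_unknown("fetch the mail")
```

A fuller version would also wrap the file in the `<aiml>` root element before loading it back into the kernel; this only shows the capture idea.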

One of the intriguing possibilities is to query Knowrob ontologies.

For now I can add the name of a person from the face recognition node.

The next task is to make a semantic map and name the objects, so that when asked his location Robbie will answer “near the desk in the garage”, not x, y, z.
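A very simple version of that place-naming idea is to keep a table of named landmarks and report the nearest one to the localisation pose. The names and coordinates below are invented for illustration:

```python
import math

# Named landmarks on the map (illustrative coordinates, in metres).
PLACES = {
    "the desk in the garage": (1.2, 0.5),
    "the lounge room":        (6.0, 3.0),
    "the charging dock":      (0.0, 0.0),
}

def name_location(x, y):
    """Return the named landmark nearest to the given map position."""
    return min(PLACES, key=lambda p: math.hypot(x - PLACES[p][0],
                                                y - PLACES[p][1]))

print("near " + name_location(1.0, 0.4))  # near the desk in the garage
```

A real semantic map would use labelled regions rather than nearest points, but this is enough to turn an x, y pose into an answer a person can use.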


All of Robbie's physical tasks are now controlled through behaviour trees programmed with action servers; any task can be pre-empted and resumed if there is a fault or error condition. The behaviour tree also monitors and controls Robbie's emotions: tasks give pleasure, while doing nothing results in boredom. When boredom reaches a certain level, Robbie will do a random task, which varies from uttering a quip generated with Markov chains, to moving his head or arms, to driving around in circles.
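A toy version of that emotion bookkeeping: idle ticks accumulate boredom, completed tasks reset it, and past a threshold a random diversion is picked. The threshold, tick rate, and action names are illustrative only, not Robbie's actual tuning:

```python
import random

class Emotions:
    def __init__(self, threshold=5):
        self.boredom = 0
        self.threshold = threshold

    def did_task(self):
        # Completing a task gives pleasure and resets boredom.
        self.boredom = 0

    def idle_tick(self):
        # Each idle tick raises boredom; past the threshold,
        # pick a random diversion and start over.
        self.boredom += 1
        if self.boredom >= self.threshold:
            self.boredom = 0
            return random.choice(["quip", "move_head", "drive_circles"])
        return None

e = Emotions(threshold=3)
actions = [e.idle_tick() for _ in range(3)]
print(actions[-1])  # one of: quip, move_head, drive_circles
```

In the real system the behaviour tree does the ticking and the chosen diversion becomes a pre-emptible task like any other.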

Using simulators like Rviz and Gazebo has made these tasks much easier.


Update 19/12/15


Robbie is now fully functional again after his computer problems. The reason the arms missed commands was the controllers resetting; after I supplied power to the USB hubs, everything worked as required.

To increase the accuracy of the arm I started replacing the potentiometers with magnetic encoders. Fitting the new encoders required a few modifications to the gearbox: I incorporated a bearing in the top of the gearbox and a mount for the magnet in the drive gear, plus a few extra tweaks to increase the strength of the assembly. Not all modifications will be fitted at the same time; some will wait until the next major rebuild.


MoveIt update

Robbie's MoveIt configuration is working again. Accuracy is 15 cm, which is not very good, but the magnetic encoders will help, plus better calibration. Obstacle avoidance suffers because planning only just misses the obstacles. Robbie now has a point-at node, where he will point to a published target pose.


Face recognition

We are now running the COB face recognition package. This works well in daylight, but the garage is too dark and Robbie makes a few errors; I need to add more lights. The AI will say "Hello" when he first recognises a face; if he recognises it again after 5 minutes he will just say "Hi". The name of the recognised face is returned to the chat bot, so he knows who he is talking to.
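The greeting logic amounts to a per-name cooldown. A minimal sketch (illustrative only; the real node runs under ROS and gets names from the face recognition topic):

```python
class Greeter:
    COOLDOWN = 5 * 60  # seconds between distinct greetings

    def __init__(self):
        self.last_seen = {}  # name -> time of last greeting

    def greet(self, name, now):
        prev = self.last_seen.get(name)
        if prev is None:
            # First recognition: full greeting.
            self.last_seen[name] = now
            return "Hello " + name
        if now - prev >= self.COOLDOWN:
            # Seen again after the cooldown: short greeting.
            self.last_seen[name] = now
            return "Hi " + name
        return None  # seen recently; stay quiet

g = Greeter()
print(g.greet("Peter", now=0))    # Hello Peter
print(g.greet("Peter", now=400))  # Hi Peter
```

Staying quiet inside the cooldown matters in practice: face recognition fires many times a second, and greeting on every detection would be unbearable.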


Object recognition

Robbie will recognise a pre-programmed object but won't learn a new object, because ECTO requires direct access to the Kinect driver: Freenect uses a different driver, and OpenNI will not work under Indigo.

For 2D recognition, SIFT and SURF are not included in the OpenCV package, so it's very flaky.



Increasing the global inflation radius will make Robbie plan paths further away from obstacles.


Autonomous operation

The shutdown command will not work when Robbie is started using robot_upstart. Also, the depth-registered points from the top Kinect will not always work unless something consumes them straight away; the lower Kinect runs the point-cloud-to-laser-scan conversion and gives no trouble. I will start face recognition on start-up and see if it remains stable. We haven't had any jitters or strange events since we started using the powered hubs for the Arduinos. The current high temperatures are causing a few resets; I need a bigger fan and more vents in the CPU bay.


Robbie's Emotion system

has been turned off for the moment, since he spent most of the time bored and kept quoting a Markov chain trained on Sun Tzu. It needs a lot more configuration and thought, but it's fun for a while.


As the design of Robbie matures I'm starting to add covers to hide the wires and keep dust off the electronics, but this has introduced a few extra problems:

  1. Heat build-up

    More fans need to be included in the design.


  2. Stripped-out threads

    Printed PLA and MDF won't hold a thread for very long, so I will now add M3 threaded inserts and M4 rivnuts to the structure.








Just fantastic work! What are the future plans for Robbie?

For the next couple of months I want to bring together all his capabilities so they can all run together.

I.e. face recognition and face tracking through the AI: he will say hello, or ask for your name if you are unknown, ask some questions, and add you to the database. Navigation and arm control will work together to bring you what you request. At the moment they all work, but not together.



I am interested in your tracking and your AI. If you have time, please let us know how you did that, step by step.

It would be wonderful for everybody.

thank you


Is this the same Robbie that had one webcam for an eye years ago?

Robbie started out as a box with one webcam; here is a link to an earlier photo.


What an amazing robot you have going there.  It's very impressive.  Thanks for showing us your progress. 

Haha, now he looks a bit like C3PO :-) Merry Christmas Robbie :-D

Peter, I'd just like to congratulate you on your build. When you say autonomous, do you mean you let him drive around on his own? Is he safe?