Let's Make Robots!

Robbie the Robot

Navigation, fetch objects, recognise people

This is Robbie the Robot

http://escaliente-robotics.blogspot.com.au/

The project is a couple of years in the making. Robbie is a two-wheel differential drive robot with an InMoov upper body. The servos in the arms have been replaced with gear motors controlled over I2C, and the head has two Dynamixel servos for pan and tilt.

The first attached video shows the initial driving test using a PS3 joystick. The second shows a test of the arm moving with the gear motors instead of the servos. The next test will be with the arm controlled through ROS MoveIt.
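For the curious, the joystick-to-wheel mapping on a differential drive base is only a few lines: the linear command drives both wheels equally and the angular command drives them in opposition. A minimal sketch (the wheel separation here is an invented value, not Robbie's actual dimensions):

```python
def diff_drive(linear, angular, wheel_sep=0.40):
    """Convert a body velocity command (m/s, rad/s) into
    left/right wheel speeds for a differential drive base."""
    left = linear - angular * wheel_sep / 2.0
    right = linear + angular * wheel_sep / 2.0
    return left, right

# Pure forward motion: both wheels equal.
print(diff_drive(0.5, 0.0))   # (0.5, 0.5)
# Turn in place: wheels run in opposite directions.
print(diff_drive(0.0, 1.0))   # (-0.2, 0.2)
```

In practice the joystick axes just feed `linear` and `angular`, scaled to the robot's speed limits.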

 

Update

Here is a video of Robbie's arm controlled with ROS MoveIt; the arm is being moved to random locations.

Update 25 Dec 2014

All Robbie wanted for Christmas was a coat of paint and some new parts. I just have to finish some wiring and then he is fully operational. If I have some spare time I want to finish the web interface. After hours of navigation simulation he is ready to start autonomous operation.

 

Update 16/01/15

Autonomous robot

This is what Wikipedia tells us:

A fully autonomous robot can:

  • Gain information about the environment (Rule #1)

  • Work for an extended period without human intervention (Rule #2)

  • Move either all or part of itself throughout its operating environment without human assistance (Rule #3)

  • Avoid situations that are harmful to people, property, or itself unless those are part of its design specifications (Rule #4)

An autonomous robot may also learn or gain new knowledge, such as adjusting to new methods of accomplishing its tasks or adapting to changing surroundings.

I have been asked: how autonomous is Robbie, and do you let him move on his own?

In principle he has all the systems, and has demonstrated that they work on their own and sometimes all together. The fact is that for the last two years he has been tethered to the battery charger and partially disassembled. Stage 1 is now complete: we have a working robot. What we don't have is trust in him, or reliability. Stage 2 of this build is to address those problems. Trust will come with reliability, but autonomy needs more. Below is a list of some tasks the robot should be able to do.

Self-maintenance

  • Charge the battery; this part already works, using a behaviour tree

  • Monitor his own systems, as part of the above

Sensing the environment

  • Is anyone near me? Face recognition works but needs to be improved

  • Where am I? While localisation will give a map reference, we need a name, e.g. "lounge room"

  • Day and night: shut down nodes that won't be used at night

  • Short and long term memory
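Turning a map reference into a room name can be as simple as checking which named region the pose falls in. A sketch using axis-aligned rectangles (the room names and extents are invented for illustration):

```python
# Room boundaries in map coordinates (metres); invented values.
ROOMS = {
    "lounge room": (0.0, 0.0, 5.0, 4.0),   # x_min, y_min, x_max, y_max
    "kitchen":     (5.0, 0.0, 8.0, 4.0),
    "hallway":     (0.0, 4.0, 8.0, 5.5),
}

def room_name(x, y):
    """Return the name of the room containing (x, y), or None."""
    for name, (x0, y0, x1, y1) in ROOMS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

print(room_name(2.5, 1.0))   # lounge room
```

Real floor plans are rarely rectangular, but a handful of rectangles per room is usually close enough for speech output.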

Task performance

  • Go to a place: did I achieve my goal?

  • Get something: did I achieve my goal?

  • Locate something: did I achieve my goal?
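For a go-to task, the "did I achieve my goal?" check reduces to comparing the final pose against the goal within a tolerance. A minimal sketch (the 0.3 m tolerance is an assumption, not Robbie's actual setting):

```python
import math

def goal_reached(pose, goal, tol=0.3):
    """True if the robot's (x, y) pose is within tol metres of the goal."""
    return math.hypot(pose[0] - goal[0], pose[1] - goal[1]) <= tol

print(goal_reached((1.9, 0.1), (2.0, 0.0)))  # True: about 0.14 m away
print(goal_reached((0.0, 0.0), (2.0, 0.0)))  # False: 2 m away
```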

Indoor navigation

  • Localisation

  • Update the known world: what has changed?

We also need to log activity, successes and failures, to measure performance. In the lab he can go through a door without touching it, but in real life? The same goes for localisation.
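Counting successes and failures per task gives a simple performance measure to track over time. A sketch of the kind of logger I mean (class and task names are invented):

```python
from collections import defaultdict

class TaskLog:
    """Count successes and failures per task and report a success rate."""
    def __init__(self):
        self.counts = defaultdict(lambda: [0, 0])  # task -> [ok, fail]

    def record(self, task, success):
        self.counts[task][0 if success else 1] += 1

    def success_rate(self, task):
        ok, fail = self.counts[task]
        return ok / (ok + fail) if (ok + fail) else None

log = TaskLog()
for outcome in (True, True, False, True):
    log.record("go through door", outcome)
print(log.success_rate("go through door"))   # 0.75
```

Persisting these counts between runs is what turns "he seems reliable" into a number you can watch improve.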

 

Update 05/07/15

 

It's been a while since the last update. Other than the changes to the drive base covers, all the work has gone into improving reliability. The covers are an effort to keep out dust (and objects) and improve cooling; they also help give a more finished look.

On the autonomous robot project, I thought it would be over quickly, but it looks like being a very long project. The basics are solid: simple behaviours work well, I can leave power on to all systems and leave the robot unattended, and the kids can move and interact with Robbie using voice control without fear of him crashing into walls or running out of power.

The next challenge is system health monitoring. At the moment I only monitor battery power; I need to monitor the software as well, looking for stalled or crashed nodes. If move_base stalls in the middle of a drive, Robbie will just keep driving. Most of the software crashes were the result of the computer starting to fail (it has now failed totally).
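The node-health check I have in mind is a watchdog: each node publishes a heartbeat, and a monitor flags any node that has gone quiet. A minimal sketch with no ROS dependency (the 2-second timeout is illustrative):

```python
import time

class Watchdog:
    """Flag nodes whose last heartbeat is older than `timeout` seconds."""
    def __init__(self, timeout=2.0):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, node, now=None):
        self.last_seen[node] = time.monotonic() if now is None else now

    def stalled(self, now=None):
        now = time.monotonic() if now is None else now
        return [n for n, t in self.last_seen.items()
                if now - t > self.timeout]

wd = Watchdog(timeout=2.0)
wd.heartbeat("move_base", now=0.0)
wd.heartbeat("voice", now=9.0)
print(wd.stalled(now=10.0))   # ['move_base'] has gone quiet
```

When the stalled list is non-empty the safe reaction is a zero-velocity command before any restart attempt, so a dead move_base can't leave the base driving blind.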

Arm navigation with ROS MoveIt is improving. Tuning the various parameters is very important: joint speed makes a big difference in performance, and I suspect inertia is also taken into account. The biggest problem I had was missing commands: joint goals were sent to the controllers but never arrived. It turns out that was an early sign of computer failure. Robbie won't have his new computer for a couple of weeks, so I can use the time to finish some of the smaller items on the todo list.

 

What's next?

The Get_Beer project works in simulation; in real life the grasping needs work.

The point-n-look project: pick a point of interest and Robbie will drive to the object, point his head at it, and move the end effector to touch it. The Kinect in his head will be used for the final positioning and recognition. Navigating to the point is working; the look-and-point part is untested.

 

 

 

 

 

Comments

It looks like you have 2 Kinects onboard.  Are you running both at the same time?  Do the dot patterns they project interfere with each other?

Impressive bot as always. I bet you can really frighten small kids with that. Tell them... "There really is a monster in the closet that likes to come out when it's dark."

With the layout of your bot, it would be really easy to add a sonar array around the base just above the wheels, it might give you improved situational awareness about where obstacles are, especially to the sides and rear.  I would be tempted to add some thermal array sensors too if I had a bot that big.

Great work.  Thanks for posting.

The 2 Kinects don't appear to interfere with each other; the lower Kinect is for navigation and localisation only. I haven't found the need for sonars: the obstacle map from the Kinect keeps the bot away from any hazards, and localisation is accurate enough. Still, the main reason is budget, or he would have a lot more sensors.


The young kids really love Robbie. I can set up face recognition to say hello every time he sees them, so they play a game of peek-a-boo; or I set the voice recognition to continuous, and the older kid can chat with him while the younger one (6) just asks the same question. It's fun to watch.

 

 

The Amazing Screw-On Head!

https://en.wikipedia.org/wiki/The_Amazing_Screw-On_Head

That 3d printed InMoov face is starting to pop up in my nightmares.

I've been admiring this project for a long time.  Very nice job.

I was curious as to what ways you are using the Kinect.

I hope I can find the time to build something like this one day.

The lower Kinect is used for navigation with the ROS navigation stack; it simulates a laser scanner. If I find some time I want to use rtabmap (http://introlab.github.io/rtabmap/); it looks like it can handle a home environment better (all the clutter and moving furniture).

I had a head-mounted Kinect for face recognition and tracking, but that really slowed the system. When I have the time I'll use a webcam for face recognition and tracking, and put the Kinect in the chest for object recognition. I used the microphones in the Kinect with HARK to do voice localization and help with voice recognition; it worked but needs more tuning.

With a robot of this size you really need a team of people; too many small jobs get left undone due to lack of time.
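Simulating a laser scan from the Kinect amounts to taking one row of the depth image and converting each pixel's depth into a range and bearing (in ROS this is the job of the depthimage_to_laserscan package). A simplified sketch with a fake depth row; the 57° horizontal field of view is the Kinect's nominal value:

```python
import math

def depth_row_to_scan(depths, fov_deg=57.0):
    """Convert one row of depth values (metres, left to right) into
    (angle, range) pairs as a planar laser scan would report them."""
    n = len(depths)
    fov = math.radians(fov_deg)
    scan = []
    for i, d in enumerate(depths):
        angle = -fov / 2 + fov * i / (n - 1)   # bearing of this pixel
        rng = d / math.cos(angle)              # depth is measured along the optical axis
        scan.append((angle, rng))
    return scan

# Fake depth row: a flat wall 2 m ahead of the camera.
scan = depth_row_to_scan([2.0] * 5)
print(round(scan[2][1], 3))   # centre pixel: range 2.0, straight ahead
```

Note the cosine correction: off-axis pixels of a flat wall report ranges slightly longer than the on-axis depth, exactly as a real scanner would.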

It looks like the unholy lovechild of C3PO and R2D2 ;) ! Or perhaps their last shared ancestor.

Peter, just like to congratulate you on your build. When you say autonomous, do you mean you let him drive around on his own? Is he safe?

He has limited freedom. In a large area I just watch from a distance, but going through doorways I'm very close, just in case something goes wrong. The arms can get hooked on corners; I will add a pan waist joint soon so the arms have more protection. Other than that I only have to build trust, and that comes with testing.

 

Peter

Hi,

I'm actually curious as to what hardware you are using to interface with the Dynamixels?  I couldn't find it on your blog.  Are you using a USB2AX, USB2Dynamixel, or some sort of Arduino implementation?  I'm having a hard time getting USB2Dynamixel to work, but I'm trying to use it on Windows.  I haven't tried using the ROS stack for Dynamixels, but if I can see someone has had success I'll try it again.

Luke