Robbie the Robot
This is Robbie the Robot.
The project is a couple of years in the making. He is a 2-wheel differential drive robot with an InMoov upper body. The servos in the arm have been replaced with gear motors controlled over I2C, and the head has 2 Dynamixel servos for pan and tilt.
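For anyone curious how gear motors on an I2C bus might be commanded from Python, here is a minimal sketch. The bus number, device address, register, and speed encoding are all made-up placeholders, not the actual protocol of Robbie's motor controllers.

```python
# Minimal sketch of commanding one gear motor over I2C.
# Address, register, and speed encoding are hypothetical.
import smbus

bus = smbus.SMBus(1)      # /dev/i2c-1; board dependent
MOTOR_ADDR = 0x10         # hypothetical controller address
REG_SPEED = 0x00          # hypothetical speed register

def set_speed(speed):
    """speed in -127..127; sign selects direction (assumed encoding)."""
    bus.write_i2c_block_data(MOTOR_ADDR, REG_SPEED, [speed & 0xFF])

set_speed(64)    # half speed forward
set_speed(0)     # stop
```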
The attached video shows the first driving test, using a PS3 joystick.
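In ROS this kind of joystick driving usually comes down to mapping /joy onto cmd_vel. The sketch below assumes the standard ROS joy node is publishing the PS3 pad; the axis indices and scale factors are guesses, not Robbie's actual mapping.

```python
#!/usr/bin/env python
# Teleop sketch: map a PS3 stick (from the standard ROS joy node)
# onto cmd_vel for a differential drive base.
import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Joy

def on_joy(msg):
    cmd = Twist()
    cmd.linear.x = 0.5 * msg.axes[1]    # left stick fwd/back -> m/s
    cmd.angular.z = 1.0 * msg.axes[0]   # left stick left/right -> rad/s
    pub.publish(cmd)

rospy.init_node("ps3_teleop")
pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
rospy.Subscriber("joy", Joy, on_joy)
rospy.spin()
```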
The second video shows a test of the arm moving with the gear motors instead of the servos. The next test will be with the arm controlled by ROS MoveIt.
Here is a video of Robbie's arm controlled with ROS MoveIt; the arm is being moved to random locations.
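Moving the arm to random reachable targets is only a few lines with moveit_commander. A sketch of how that can look; the planning group name "arm" is an assumption.

```python
#!/usr/bin/env python
# Sketch: drive an arm to random valid targets with MoveIt.
import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("random_arm_mover")
arm = moveit_commander.MoveGroupCommander("arm")

while not rospy.is_shutdown():
    arm.set_random_target()   # pick a random valid joint configuration
    arm.go(wait=True)         # plan and execute
    arm.stop()                # ensure no residual movement
    rospy.sleep(1.0)
```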
Update 25 Dec 2014
All Robbie wanted for Christmas was a coat of paint and some new parts. I just have to finish some wiring, then he is fully operational. If I have some spare time I want to finish the web interface. After hours of navigation simulation he is ready to start autonomous operation.
This definition comes from Wikipedia:
A fully autonomous robot can:
Gain information about the environment (Rule #1)
Work for an extended period without human intervention (Rule #2)
Move either all or part of itself throughout its operating environment without human assistance (Rule #3)
Avoid situations that are harmful to people, property, or itself unless those are part of its design specifications (Rule #4)
An autonomous robot may also learn or gain new knowledge, such as adjusting to new methods of accomplishing its tasks or adapting to changing surroundings.
I have been asked: how autonomous is Robbie, and do you let him move on his own?
While in principle he has all the systems, and has demonstrated that they work on their own and sometimes all together, the fact is that for the last 2 years he has been tethered to the battery charger and partially disassembled. Stage 1 is now complete: we have a working robot. What we don't have is trust in him, and reliability. Stage 2 of this build is to address those problems. Trust will come with reliability, but autonomy needs more. Below is a list of some tasks the robot should do:
Charge the battery; this part works using a behaviour tree (a minimal sketch follows below)
Monitor the systems, to support the above
Is anyone near me? Face recognition works but needs to be improved
Where am I? While localisation will give a map reference, we need a name, e.g. lounge room
Day and night: shut down nodes that won't be used at night
Short- and long-term memory
Go to a place; did I achieve my goal?
Get something; did I achieve my goal?
Locate something; did I achieve my goal?
Update the known world: what has changed?
And we also need to log activity, successes and failures, to measure performance. In the lab he can go through a door without touching it, but in real life? The same goes for localisation.
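For the battery task above, a behaviour tree boils down to ticking condition and action nodes in priority order. Here is a minimal hand-rolled sketch of the idea; the node names, voltages, and blackboard keys are illustrative, not Robbie's actual tree.

```python
# Minimal behaviour-tree sketch for the "charge the battery" task.
SUCCESS, FAILURE, RUNNING = range(3)

class Selector:
    """Ticks children in order until one succeeds or is running."""
    def __init__(self, children): self.children = children
    def tick(self, bb):
        for child in self.children:
            status = child.tick(bb)
            if status != FAILURE:
                return status
        return FAILURE

class Sequence:
    """Ticks children in order until one fails or is running."""
    def __init__(self, children): self.children = children
    def tick(self, bb):
        for child in self.children:
            status = child.tick(bb)
            if status != SUCCESS:
                return status
        return SUCCESS

class BatteryOk:
    def tick(self, bb):
        return SUCCESS if bb["voltage"] > 11.5 else FAILURE

class GoToDock:
    def tick(self, bb):
        # would send a nav goal to the dock; RUNNING until docked
        return SUCCESS if bb["docked"] else RUNNING

class Charge:
    def tick(self, bb):
        return SUCCESS if bb["voltage"] >= 12.6 else RUNNING

# Root: if the battery is OK do nothing, otherwise dock and charge.
root = Selector([BatteryOk(), Sequence([GoToDock(), Charge()])])

blackboard = {"voltage": 11.2, "docked": False}
print(["SUCCESS", "FAILURE", "RUNNING"][root.tick(blackboard)])  # RUNNING
```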
It's been a while since the last update. Other than the changes to the drive base covers, all the work has been to improve reliability. The covers are an effort to keep out dust (and objects) and improve cooling; they also help give a more finished look.
On the autonomous robot project: I thought it would be over quickly, but it looks like being a very long project. The basics are solid and simple behaviour works well. I can leave power on to all systems and leave the robot unattended, and the kids can move and interact with Robbie using voice control without fear of him crashing into walls or running out of power.
The next challenge is system health monitoring. At the moment I only monitor battery power; I need to monitor the software as well, looking for stalled or crashed nodes. If move_base stalls in the middle of a drive, Robbie will just keep driving. Most of the software crashes were the result of the computer starting to fail (it has now failed totally).
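A first pass at node monitoring could be as simple as pinging the critical nodes and zeroing the base velocity when one goes quiet. The sketch below uses the rosnode Python API; the node and topic names are common ROS defaults and may differ on Robbie.

```python
#!/usr/bin/env python
# Health-monitor sketch: ping critical nodes, stop the base if one dies.
import rospy
import rosnode
from geometry_msgs.msg import Twist

CRITICAL = ["/move_base", "/amcl"]   # illustrative node names

rospy.init_node("health_monitor")
stop_pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)

rate = rospy.Rate(1)   # check once per second
while not rospy.is_shutdown():
    for name in CRITICAL:
        if not rosnode.rosnode_ping(name, max_count=1):
            rospy.logerr("%s is not responding; stopping the base", name)
            stop_pub.publish(Twist())   # all-zero Twist = stop
    rate.sleep()
```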
Arm navigation with ROS MoveIt is improving. Tuning the various parameters is very important: joint speed makes a big difference in performance, and I suspect inertia is also taken into account. The biggest problem I had was missing commands: joint goals were sent to the controllers but never arrived, which turned out to be an early sign of the computer failure. Robbie won't have his new computer for a couple of weeks, so I can use the time to finish some of the smaller items on the todo list.
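On the tuning side, recent moveit_commander releases expose a few of the knobs that matter here (the per-joint speed limits themselves live in MoveIt's joint_limits.yaml). The values below are arbitrary examples and the group name "arm" is an assumption, not Robbie's settings.

```python
# Sketch of MoveIt tuning knobs via moveit_commander; values are
# arbitrary examples, not tuned settings.
import sys
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
arm = moveit_commander.MoveGroupCommander("arm")

arm.set_max_velocity_scaling_factor(0.5)      # fraction of joint speed limits
arm.set_max_acceleration_scaling_factor(0.5)  # fraction of acceleration limits
arm.set_planning_time(5.0)                    # seconds the planner may take
arm.set_goal_joint_tolerance(0.01)            # radians
```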
The Get_Beer project works in simulation; in real life the grasping needs work.
The point-n-look project: pick a point of interest and Robbie will drive to it, point his head at the object, and move the end effector to touch it; the Kinect in his head will be used for the final positioning and recognition. Navigating to the point is working; the look and point parts are untested.
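The look part reduces to two angles once the target is expressed in the pan/tilt frame. A small sketch, assuming x forward, y left, z up relative to the head's pan/tilt origin; getting the point into that frame (e.g. via tf from the Kinect frame) is omitted.

```python
# Compute pan and tilt angles (radians) to aim the head at a 3D point
# already expressed in the pan/tilt frame: x forward, y left, z up.
import math

def look_at(x, y, z):
    pan = math.atan2(y, x)                   # rotate left/right first
    tilt = math.atan2(z, math.hypot(x, y))   # then up/down
    return pan, tilt

pan, tilt = look_at(1.0, 0.2, -0.1)   # ahead, slightly left, a bit below
print(pan, tilt)
```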