Let's Make Robots!

The most advanced amateur humanoid robot project


This is the MAAHR project. MAAHR stands for Most Advanced Amateur Humanoid Robot. The goal is to design, build and program a robot that is similarly sophisticated to ASIMO. The project is completely open source; everybody can participate. I'll sponsor MAAHR with US$15,000 for hardware, software and logistics.

MAAHR's specifications so far:

  • 2DOF head with 2 cameras and LCD mouth
  • 2DOF main body
  • 2 x 5DOF robot arms
  • 2 x robot hands, capable of lifting at least 1 kg
  • 3-wheel or tracked base
  • Moving objects
  • Posture and gesture recognition
  • Speech recognition (and distinguishing sounds)
  • Synthesized voice
  • Recognizes objects and terrain in its environment
  • Facial recognition
  • Internet connectivity

A first SketchUp draft I did (arms still missing):



I would suggest using hardware that anyone can get, so the build is easy to replicate. The software that needs the most computing power is vision. We can have a dedicated onboard SBC to do just that, as a sub-system. Then another networked SBC (or more) for speech, mapping, A.I., etc., and roughly one microcontroller (more if needed) for each sub-system in the robot, acting as a reflex centre. Each micro can be connected to the brain through a separate USB cable or a single high-speed I2C bus (there are USB-I2C adapters).

Anyone who wants to replicate the build can use a laptop, a PC, or one or more SBCs; it doesn't matter, as long as the hardware is compatible enough to run the software. Same for the microcontrollers: I suggest Arduino or compatible boards, because the code is easy to write and powerful enough, but the Propeller is also a good candidate if we need embedded multi-core. The only problem with the Prop is that only a few of us have one and can write code for it.
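One way to wire the reflex centres to the brain is a small framed command protocol over the serial/USB (or I2C) link. Here's a sketch, in Python, of a hypothetical packet layout (sync byte, sub-system ID, command, 16-bit argument, additive checksum) — nothing about this format is decided for the project, it's just to make the idea concrete:

```python
import struct

SYNC = 0xAA  # hypothetical start-of-packet marker


def encode_packet(subsystem_id, command, argument):
    """Pack a command for one reflex-centre microcontroller.

    Layout (6 bytes): sync, subsystem id, command, argument
    (uint16, big-endian), then an additive checksum over the payload.
    """
    payload = struct.pack(">BBH", subsystem_id, command, argument)
    checksum = sum(payload) & 0xFF
    return bytes([SYNC]) + payload + bytes([checksum])


def decode_packet(packet):
    """Validate framing and checksum, then unpack the payload."""
    if len(packet) != 6 or packet[0] != SYNC:
        raise ValueError("bad framing")
    payload, checksum = packet[1:5], packet[5]
    if sum(payload) & 0xFF != checksum:
        raise ValueError("bad checksum")
    return struct.unpack(">BBH", payload)


# Example: tell sub-system 2 (say, the base) to run command 1 with value 500.
pkt = encode_packet(2, 1, 500)
print(decode_packet(pkt))  # (2, 1, 500)
```

On the brain side the same bytes could go out over a serial port or a USB-I2C adapter; each Arduino then only has to check the sync byte and checksum before acting, which is easy to port to C.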

Here is an idea of looks/functionality:


I like it, but the head shouldn't be a screen; it should be an animated head with stereo camera eyes. Two wiper motors seem suitable for the base. I have a brand new 2x25A programmable Sabertooth H-bridge (6-24V) which I can donate immediately :)

But the screen is clever. It can be used for head animation and for programming and debugging...

First head suggestion:

2 DOF for the webcams, 2 DOF for the complete head...

Use an Android phone or, better, a Chumby 8 as a display and perhaps as a brain too. No iPhone!!!

Heh, I respect the iPhone comment -> http://www.youtube.com/watch?v=FL7yD-0pqZg

But I'd use a laptop for brains and such... it's just too easy having it all in one place: a ready-made high-resolution display, wifi, networking, a gazillion USB ports to connect to a gazillion microcontrollers, Bluetooth, FireWire, a keyboard, a disk to save stuff, even a battery & charger.


How about this bare-bones head, using 2 webcams instead of the eyeballs?

I don't want to separate the laptop screen from the laptop and use it as a head :)


Here is the FaceBot project from EMG Robotics:


The RobotSee software can be used on the Chumby. This is something I've been wanting to try out myself, but I'm having trouble understanding how to set it up, as the author doesn't provide instructions for dummies like me. I believe MyRobotLab can do the same things (correct me if I'm wrong), so I would also like to try setting it up on my Chumby. Hints, anyone?

It's all OpenCV when you get low enough... OpenCV has an example program in C; I ported it to Java and added it as a filter to MRL.

It works using Haar classifiers, which can be trained to detect other things too. I haven't seen much re-training because it's pretty difficult and labor-intensive at the moment, but I was thinking: wouldn't it be cool to utilize the power of the internet to re-train, and have a database of Haar classifiers for different things... like "electrical outlet" to recharge :D
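For anyone curious why Haar classifiers are fast enough to run on small boards: the trick is the integral image (summed-area table), after which any rectangular sum — and hence any Haar-like feature — costs just four lookups. A minimal pure-Python sketch of that core idea (this is an illustration, not the OpenCV/MRL code):

```python
def integral_image(img):
    """Build a summed-area table: ii[y][x] = sum of img over rows < y, cols < x."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii


def rect_sum(ii, x, y, w, h):
    """Sum of pixels in rectangle (x, y, w, h) using four table lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]


def haar_two_rect(ii, x, y, w, h):
    """A simple two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)


# Tiny example: a 4x4 image that is bright on the left, dark on the right,
# so the left-minus-right feature responds strongly.
img = [[9, 9, 0, 0]] * 4
ii = integral_image(img)
print(haar_two_rect(ii, 0, 0, 4, 4))  # 72
```

A trained cascade is just thousands of such features with learned thresholds, evaluated coarse-to-fine — which is why detection is cheap but re-training is the labor-intensive part.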

Do you want to set up MRL on the Chumby, or a C face-tracker program?

Nice head, I like it. I have a few questions regarding the functionality.

1. Do we need pan/tilt for both the head and the cameras?

2. Do we need to converge/diverge the cameras for distance measurement?

3. Do we need a working, movable mouth like in the above picture? Or do we show it on a small 3" display? Keep in mind that servos make noise while moving; a robot that talks while servos move is a bit annoying.

4. Do we want facial expressions? Eyelids, etc.?
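On question 2: converging the cameras isn't strictly necessary. With two parallel cameras, depth already follows from disparity by triangulation: depth = focal_length × baseline / disparity. A quick sketch of the arithmetic (the focal length and baseline below are made-up example numbers, not measurements from any proposed head):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point seen by two parallel cameras.

    focal_px:     focal length in pixels
    baseline_m:   distance between the two camera centres, in metres
    disparity_px: horizontal pixel shift of the point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("zero/negative disparity: point at infinity or bad match")
    return focal_px * baseline_m / disparity_px


# Example: ~700 px focal length, 6 cm baseline, 20 px disparity.
print(depth_from_disparity(700, 0.06, 20))  # roughly 2.1 metres
```

Fixed parallel cameras would keep the mechanics and the math simple; converging eyes only buy you overlap at very close range, at the cost of recalibrating the geometry for every eye angle.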