The MAAHR Project

What should it do?
Attachments:

  • MAAHR1.zip (1.5 MB)
  • hand1.zip (1.35 MB)

The goal is to design, build, and program a robot which is nearly as sophisticated as ASIMO ($1,000,000) or Willow Garage's PR2 ($400,000) from off-the-shelf components. The project is completely open source; everybody can participate.

Groups

  • Design: Ro-Bot-X, cr0c0, MarkusB
  • Mechanics: TinHead, JoeBTheKing, Jad-Berro, MarkusB
  • Electronics: MarkusB, Jad-Berro, Krumlink
  • Software: GroG, AgentBurn
  • Web presence: JoeBTheKing
  • You?

Inventory

| Part | Quantity | Purpose | Cost | Shipping | Link |
| --- | --- | --- | --- | --- | --- |
| Intel D525MW | 1 | Main Brain | $83.94 | $7.22 | http://www.amazon.com/Intel-Dual-Core-Mini-ITX-Motherboard-BOXD525MW/dp/B0041RSC94 |
| M2-ATX PSU | 1 | Power for Brain | $72.50 | $8.72 | http://www.mini-box.com/M2-ATX-160w-Intelligent-Automotive-DC-DC-Power-Supply |
| Memory | 2 GB | Memory for Brain | $21.84 | $0.00 | http://www.amazon.com/gp/product/B001KB6Z2U |
| Drive | 1 TB |  | $75.48 | $0.00 | http://www.amazon.com/gp/product/B001KB6Z2U |
| Kinect |  |  | $90.00 | $0.00 | Craigslist |
| Webcam (Logitech C600) |  |  | $39.48 |  | http://www.amazon.com/s/ref=nb_sb_ss_i_0_20?url=search-alias%3Delectronics&field-keywords=logitech+c600+webcam&x=0&y=0&sprefix=logitech+c600+webcam |
| USB Wifi |  |  |  |  |  |
| Total |  |  | $383.24 | $15.94 | $399.18 |

Current Milestones

  • Assemble brains in a protective case
  • Install all software
  • Verify that the battery and Power Supply Unit work
  • Attach Webcam
  • Attach Kinect - OpenCV Kinect update
  • Attach Bare Bones Board Arduino Clone
  • Design Base

Build Log (Most Current At Top)


OpenCV / Kinect Progress :

08-24-2011

I created a couple more filters and added some functionality to the OpenCV Service.  I have also created a new Service called FSMTest, for "Finite State Machine Test".  Not much is implemented yet, as I wanted to get the Sphinx and Google speech recognition Services into a useful state first.
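To make the "Finite State Machine" idea concrete, here is a minimal sketch of the find/resolve/report cycle described below. The state and event names are my own assumptions - this is not the actual FSMTest service code:

```java
// A minimal sketch of a "find / resolve / report" state machine.
// State and event names are assumptions, not the FSMTest service code.
public class ObjectFSM {
    enum State { IDLE, FINDING, RESOLVING, REPORTING }
    private State state = State.IDLE;

    // Advance the machine based on what the vision/speech pipeline reports.
    public void onEvent(String event) {
        switch (state) {
            case IDLE:       // waiting for a spoken command
                if (event.equals("findObject")) state = State.FINDING;
                break;
            case FINDING:    // segmentation produced a candidate template
                if (event.equals("objectSegmented")) state = State.RESOLVING;
                break;
            case RESOLVING:  // a memory match or a human-supplied word arrived
                if (event.equals("objectResolved")) state = State.REPORTING;
                break;
            case REPORTING:  // speak, then go back to waiting
                state = State.IDLE;
                break;
        }
    }
}
```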

Now that the housekeeping is done, I have uploaded a quick video of the progress.

What happens :

  • The FSM starts up - the speech recognition services will begin listening for commands (I have disabled them currently, because they take a lot of "start up" time, which hinders debugging)
  • I will command it to look for an object
  • It will rapidly go through various filters and techniques for object segmentation ... "Finding the Object"
  • It will then go through its associative memory looking for a match - if it finds a match it will "Report the Object"
  • If it does not find a match, it will query me for associative information.  This will be "Resolving the Object".  I will use Sphinx speech recognition initially, but I am very interested in hooking into the Google speech recognizer Service I created recently.
  • This test shows a guitar case lying against the wall
  • Segmentation is done only with the Kinect.  The Kinect can range distances, and a depth band from 4' to 4' 3" is segmented off and turned into a polygon (a sketch of this step follows the list).
  • The polygon's bounding area is found then a template is created.
  • The Matching template filter is put into place, then the object is matched and locked.
  • The matching template will be associated with speech recognized words and put into memory, which can be serialized.
  • When an object leaves the field of vision and later returns to it, the robot will "Resolve the Object" and its associations.  It will be able to "Report the Object" by saying such brilliant things as "Why did you put a guitar case in front of me?" .. which in turn may lead to other associations?
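Sketched in code, the Kinect segmentation step above might look like the following. This is a rough illustration using the official OpenCV Java bindings (MRL itself goes through javacv here), and the millimeter bounds are my own conversion of the 4'0"-4'3" band (~1219-1295 mm), assuming a depth image in millimeters:

```java
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

// A sketch of depth-band segmentation: keep only the pixels in a narrow
// depth range, then return the bounding rectangle of the largest blob.
public class DepthBandSegmenter {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    public static Rect segment(Mat depthMillimeters) {
        // Mask off everything outside the ~4'0" to ~4'3" band.
        Mat mask = new Mat();
        Core.inRange(depthMillimeters, new Scalar(1219), new Scalar(1295), mask);

        // The mask's blobs are the "polygons"; take the largest one.
        List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
        Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        Rect best = null;
        double bestArea = 0;
        for (MatOfPoint contour : contours) {
            double area = Imgproc.contourArea(contour);
            if (area > bestArea) {
                bestArea = area;
                best = Imgproc.boundingRect(contour);
            }
        }
        return best; // crop this region from the RGB frame to make the template
    }
}
```

The returned rectangle is the polygon's bounding area; cropping it from the RGB frame yields the matching template.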


Remote Connectivity:

08-15-2011 
It is nice to have options, and a full X desktop would be useful for this project.
It would be necessary to start and use the X desktop from the command line.  Here are the steps necessary to accomplish this.

  • Fedora 15 by default will not have the sshd daemon running on boot - to change this, use the following commands
    # systemctl start sshd.service  # - will start the sshd daemon
    # systemctl enable sshd.service  # - will autostart the daemon on boot
  • X11vnc is a VNC server - a nice description of the various flavors of VNC server is here
    https://help.ubuntu.com/community/VNC/Servers
  • VNC needs a number of ports opened on the firewall - 5800, 5900, and 6000 for display 0.  For each display N, a new set of ports (580N, 590N, 600N) would need to be opened.
  • The command line to start the vnc server on the MAAHR brain is 
    sudo x11vnc -safer -ncache 10 -once -display :0
  • This starts the VNC server so a login is available - that is why root needs to start it.  You must make sure that display :0 is not being used by a current login.
  • TADA ! Fedora Core 15 running remotely on maahrBot 
  • I will be communicating with MRL directly, so I needed to open up port 6161 UDP on the firewall too
  • The tower has grown a bit - it now includes the SLA battery & a power supply.  The idea is that I would like to keep this system on with 100% uptime, so when the system is charging it can still be online.  I don't know when Markus's base will be here, so I'm working on a temporary one with some old H-bridges I created a long time ago.

    It's simple, allows extensibility with more decks, cheap, and should be modular enough to "bolt on" to a variety of platforms.
  • It's "nice" to have a desktop to work on, but MRL can run headless, and the video feeds can be sent to other MRLs running on different computers.
  • I downloaded MRL-00013, unzipped it, and ran the following command line for headless operation, remote connectivity, and OpenCV
    $ java -classpath ":myrobotlab.jar:./lib/*" -Djava.library.path=./bin org.myrobotlab.service.Invoker -service OpenCV opencv RemoteAdapter remote
    This starts the OpenCV service and the RemoteAdapter service.
    On my desktop I started just a GUI service instance and connected to the maahrBot.
  • After refreshing the local MRL services, the remote services on the maahrBot appeared as green tabs
  • The control is the same as if the service were running locally on my desktop - it worked well with a high frame rate.
  • A few more little updates and I should be able to switch from webcam to Kinect on the maahrBot

 


Brain Arrives:

07-13-2011

The simplest protective brain case I could think of is a series of wood decks supported by 1/4" bolts.
I had some scrap 1/4" panelling for the decks.  Wood is nice because you can screw in the components anywhere and it's non-conductive.

Deck 1 - Mini-ITX Computer

Deck 2 - Drive and Power Supply Unit

Deck 3 - Tagged Cover.

 

Software Install

  • Connected the Brain to a regular ATX power supply - this is important during the install - for example, you can brick the board if power goes out during a BIOS update
  • Jacked in the Ethernet cable (I don't like wireless during installs)
  • Update BIOS (Link)
  • Installed Fedora 15 from Live USB - use Basic Video Mode (problems with GNOME 3)
  • Installed RPMFusion and Livna Repos
  • Disabled SELinux
  • Rebooted
  •  yum -y groupinstall 'Development Tools' 'Java Development' 'Administration Tools' Eclipse 'Sound and Video' Graphics
  • _go do something - this will take a while_
  • # uname -a returns (have not rebuilt the kernel yet)
    Linux maahr.myrobotlab.org 2.6.38.8-32.fc15.i686 #1 SMP Mon Jun 13 20:01:50 UTC 2011 i686 i686 i386
  • Installed Chrome
  • Started Eclipse - was pleasantly surprised to see it had loaded with SVN support
  • Browsed to SVN URL - http://myrobotlab.googlecode.com/svn/trunk/   right-click "check out" - finish - ready to Rock 'n Roll
  • yum -y groupinstall Robotics
  • yum -y install cmake cmake-gui
  • Downloaded the new version of OpenCV, 2.3 (they are much more productive now)

Todo

make opencv 2.3 build - get latest javacv

 


Hi LMR!

 

This is the MAAHR project. MAAHR stands for Most Advanced Amateur Humanoid Robot. The goal is to design, build, and program a robot which is similarly sophisticated to ASIMO. The project is completely open source; everybody can participate. I'll sponsor MAAHR with 15,000 USD for hardware, software, and logistics.

MAAHR's specifications so far:

  • 2DOF head with 2 cameras and LCD mouth
  • 2DOF main body
  • 2 x 5DOF robot arms
  • 2 x robot hands, capable of lifting at least 1 kg
  • 4 wheeled or tracked base
  • Moving objects
  • Posture and gesture recognition
  • Speech recognition (and distinguishing sounds)
  • Synthesized voice
  • Recognizes the objects and terrain of its environment
  • Facial recognition
  • Internet connectivity

A first SketchUp draft I did (arms still missing):

If you participate and want to post an update on the MAAHR robot page, please PM me and I'll send you the password.

At the moment there are 5 working groups (see the Groups list at the top of this page):


Update April 24, 2011 by MarkusB

First hand design draft added (see the attached hand1.zip):

 


Update 2011.07.02

Markus has sent me funds so that I may purchase the "Brain" from details gathered in this thread - http://letsmakerobots.com/node/27362#comment-67606 .  The brains are speeding to my house and should arrive in the next week.

I have purchased a Kinect - whose drivers I got working - and will incorporate it into the Swiss Army Knife of robot software, "MyRobotLab", as another Service.


I wave at you Yay !

Markus will be working on the base.  After the brain meets the base, a little programming is done, and some sensors are attached, we should be able to complete the first round of milestones.

These would include :

  • Basic local obstacle avoidance (a minimal sketch follows this list)
  • Basic mapping (encoders & Kinect) - precursor to SLAM
  • Telepresence - Teleoperation - Control and data acquisition through the Intertoobs
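As a rough illustration of the first of these milestones, here is what simple local obstacle avoidance for a differential-drive base can boil down to. The RangeSensor and DifferentialDrive interfaces are hypothetical placeholders (none of this is MRL code); real code would be driven by whatever range sensor and motor driver the base ends up with:

```java
// A minimal obstacle-avoidance loop for a differential-drive base.
// RangeSensor and DifferentialDrive are hypothetical placeholder
// interfaces, not MRL services.
public class SimpleAvoider {
    interface RangeSensor { double distanceMeters(); }
    interface DifferentialDrive {
        void drive(double leftPower, double rightPower); // -1.0 .. 1.0
    }

    public static void run(RangeSensor ranger, DifferentialDrive drive)
            throws InterruptedException {
        while (true) {
            if (ranger.distanceMeters() < 0.5) {
                drive.drive(0.4, -0.4);  // obstacle ahead: spin in place
            } else {
                drive.drive(0.6, 0.6);   // path clear: go forward
            }
            Thread.sleep(50);            // ~20 Hz control loop
        }
    }
}
```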


Update 2011.07.07

I do believe we have some BRAINS doctor....
The first part of the brain has arrived.
MRL has template matching! - I just coded the part which allows you to select the template to match.  You just click on the corners of the video feed where you want it to match - and BAH-BING... it matches.  In this case I selected a wall socket and it matched the (same) wall socket in ~1/10 of a second.  The template is shown as the small picture below the video feed.  The number in the video feed represents the matching quality - 0 being perfect.
I have to admit ... selecting the template is now a breeze..... very enjoyable :)

 

Lots more to do ... OpenCV template matching does not work well with scaling or rotation.  Rotation will not be much of an issue - wall sockets typically don't spin on the wall.  Scaling will be, but it can be remedied by collecting multiple templates at known distances from the bot.  Scale can then be part of the associated memory of the "concept" of what a wall socket is... (a multi-scale sketch is below)
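One way to compensate for scale (a shortcut next to the multiple-templates-at-known-distances idea above: here a single stored template is rescaled instead) is to run the match at several scales and keep the best score. This sketch uses the official OpenCV Java bindings with TM_SQDIFF_NORMED, where 0 is a perfect match - the same convention as the number shown in the video feed:

```java
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

// A sketch of multi-scale template matching: rescale the stored
// template, match at each scale, and keep the best (lowest) score.
public class MultiScaleMatcher {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    public static double bestMatch(Mat frame, Mat template) {
        double best = Double.MAX_VALUE;
        double[] scales = { 0.5, 0.75, 1.0, 1.25, 1.5 };
        for (double scale : scales) {
            Mat scaled = new Mat();
            Imgproc.resize(template, scaled, new Size(), scale, scale,
                    Imgproc.INTER_LINEAR);
            if (scaled.rows() > frame.rows() || scaled.cols() > frame.cols()) {
                continue; // template is larger than the frame at this scale
            }
            Mat result = new Mat();
            Imgproc.matchTemplate(frame, scaled, result,
                    Imgproc.TM_SQDIFF_NORMED);
            Core.MinMaxLocResult mm = Core.minMaxLoc(result);
            best = Math.min(best, mm.minVal); // 0 would be a perfect match
        }
        return best;
    }
}
```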

I'll have video when I get it developed a bit more....


Update 2011.07.08  

good news and bad news - the PSU arrived (Yay!) / missing cable! (Yarg!)

Need a 2 X 2 male-to-male processor core voltage cable - the other connector is OK per the Intel specs (20-to-24-pin socket) .. but I have to have that other 2 X 2 jacked in ....
Can't seem to find the cable online either - saw a few 4 X 1 to 2 X 2 cables ...
Intel, why do you make this so hard?

 

  • M2-ATX PSU Module Manual - Here
  • Intel D525MW Dual Core Manuals - Here

 


Update 2011.07.08 20.35 PST

Alright, an idea has been incubating in my head for a while, and I can see how it might work.  I've been very interested in the power of "community internet teaching" for robotics.  Teaching robots can be an arduous and time-consuming job, much like editing video frame by frame.  There was a project called "The White Glove Tracking Project" which used a distributed work force on the internet to do time-consuming object isolation for an art project (previous - reference).

I'm working on dynamic Haar-learning and tracking in addition to Kinect object isolation.  Once it segments or isolates an object, it will need language input to associate the newly found object with more data.  The interaction of people and the robot would be very beneficial.  I was thinking: what if it used the ShoutBox, Twitter, or some other web form to alert others it had found a new object?  Then someone could log into it, examine what it had found, and associate the correct word with it, e.g. "Table", "Light", "Chair" or "Ball" - the word would be associated with the image data (Haar data) and saved.  This would be similar to the Kinect video, except that instead of 1 person teaching the robot, there could be hundreds or thousands.

The sending of the alert, the remote login, and the saving of data are already in MRL.  I just need to bolt all the pieces together.  (A sketch of the word-to-image association store is below.)
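As a sketch of the save-the-association part (illustrative only - not the MRL implementation; all names are my own), a word-to-image-data store that can be serialized to disk could be as simple as:

```java
import java.io.*;
import java.util.HashMap;

// A sketch of the "associative memory" idea: recognized words mapped to
// image data (e.g. a template or Haar data as bytes), serialized to disk
// so the associations survive a restart. Not the MRL implementation.
public class AssociativeMemory implements Serializable {
    private static final long serialVersionUID = 1L;
    private final HashMap<String, byte[]> wordToImageData =
            new HashMap<String, byte[]>();

    public void associate(String word, byte[] imageData) {
        wordToImageData.put(word, imageData);
    }

    public byte[] recall(String word) {
        return wordToImageData.get(word);
    }

    public void save(File file) throws IOException {
        ObjectOutputStream out =
                new ObjectOutputStream(new FileOutputStream(file));
        try {
            out.writeObject(this);
        } finally {
            out.close();
        }
    }

    public static AssociativeMemory load(File file)
            throws IOException, ClassNotFoundException {
        ObjectInputStream in =
                new ObjectInputStream(new FileInputStream(file));
        try {
            return (AssociativeMemory) in.readObject();
        } finally {
            in.close();
        }
    }
}
```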

 

Comments

What amperage motor driver will this need?  I would be delighted to offer free PCBs for MAAHR, as well as free motor drivers.

Hi Krumlink,

Thanks a lot for your offer. The prop system I ordered already has a motor driver, but we will need a lot of other boards for actuators, sensors, and microcontrollers. I'll surely come back to your offer.

I can design the board and send it to the project, centered around the HIP4082 driver.  It can handle 1 kW of power.

Unfortunately it needs to be Male to Male...

Maybe that's why they don't include it?  Dunno; anyway, I managed to slice two male cables off of some abandoned/broken ATX PSUs ... will be splicing them tomorrow...

Some of the robots from RoboCup@Home might be a good inspiration for things to do with your project. After all, what good is a design without requirements?

http://www.bbc.co.uk/news/technology-14126828

I watched the video ... look at all the Kinect sensors strapped to heads !

You have a good point regarding requirements.  The software requirements I have for MRL are quite extensive.  One of the most important is that it will be able to work with multiple platforms and actuators (arms being the current topic) with very minimal changes (if any).

The arm's mechanical/physical/sensor requirements should model the functional capabilities of an arm.  (I'm interested in function - not in making it look like a person; Markus may diverge with me on this, I'll wait for his input.)  I think the human functional requirements for an arm would be: multi-jointed (could be super-jointed, snake/elephant-trunk like), capable of easily lifting and placing a 10 lb object from ground level to above countertop level (~50").  Grippers would need to be able to grasp a large variety of objects (fingers, suction cups, coffee-filled balloons, other?)


I guess my point is that requirements like that are OK, but they lack a certain level of connection to real-world tasks. Challenges on LMR and contests like RoboCup get people focused on specific tasks, which can be a great help while designing and implementing.

As things develop for the MAAHR, consider looking around the Internet for challenges that it might be able to participate in. Even if it doesn't get entered, adopting a set of tasks to achieve will provide a lot of focus for the team.

Or take the requirements you've already stated, and design a set of experiments/tests around them. Can MAAHR get you a cup of coffee? Can it pick up that egg without breaking it? Shake hands firmly, but gently? You get the idea.

I only responded about arm requirements in that way because that was the initial inquiry from JoeBTheKing, and I responded with valid generalized design principles. At the moment the focus is still the brain - it's the only part which has materialized.  It will take some time to get it to a capable platform level.  This is the next "real" milestone.

The next milestone will be testing some of the MRL interface connections to several different micro-controllers.  

The subsequent milestone is really getting a base and constructing the necessary interfacing.  There is a new differential drive service in MRL - which probably needs more refinement.  

The next milestone would just be simple obstacle avoidance.

Next - mapping, SLAM, object detection.  The goal at this point would be to find an electrical outlet and recharge.  Find-and-recharge would be the first serious challenge.

I'm still working on the brain, but need a base ASAP.  Markus said he was going to make one, but has recently been preoccupied. I have put a rudimentary base together, but would welcome any input.

I agree wholeheartedly regarding specific challenges & milestones.  At the moment we don't need an arm, but we need a base !

For the base, I will be using parts which are accessible to me now.  This may not be the "optimum" base, but a physical vehicle will help me develop and refine the software so that it will work in a generalized manner (hopefully on multiple platforms).  This will keep expenses down and, at the moment, be the quickest implementation.  It can be reviewed, extended, or replaced, but purchasing or shipping new equipment would need to be approved by Markus.

My biggest "value add" is software.  In fact I would be more than happy to ship the brain off to someone who can provide more than my capabilities regarding mechanics/electronics/etc....  as long as they have a router which will allow me to get back into the brain.


I've ordered the parts for the base today. The base will be 3-wheeled, capable of driving around with 10-30 kg, complete with wheel encoders and a motor driver (I2C).

The next step will be to build a robot arm with a gripper, able to lift around 0.5 kg. The gripper will be equipped with force sensors (I have the sensors already).