Let's Make Robots!

MyRobotLab - Template Matching - Bot finds teet..

myrobotlab, open source, Java, service based framework, robotics and creative machine control

Logitech C600 Camera - PanTilt Rig with $16 BBB Arduino Clone - not super fancy but it works..

 


Template Matching is now available through MRL.

Template Matching is the process of matching a small sub-image (the template) within a larger global image.  As an exercise I chose a wall socket, since this could be the goal for a self-charging robot. 
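The matching step itself can be sketched in a few lines. Below is a minimal, from-scratch normalized cross-correlation in Python with NumPy. It is illustrative only: MRL delegates this work to OpenCV's optimized MatchTemplate filter, and the function name and score convention here are my own.

```python
import numpy as np

def match_template(image, template):
    """Slide `template` over `image` (both 2-D grayscale arrays) and
    return the (row, col) of the best match plus a 0..1 match score
    (normalized cross-correlation)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()          # zero-mean the template once
    t_norm = np.sqrt((t * t).sum())
    best_score, best_pos = -1.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()                # zero-mean the window
            denom = np.sqrt((w * w).sum()) * t_norm
            score = float((w * t).sum() / denom) if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

A score near 1.0 means the template region was found almost exactly; a falling score is the first hint that lighting or scale has changed.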

When the target is locked and centered, an event will fire off.  If this were a mobile platform and the goal were to mate with the socket, the next behavior would be to move closer to the socket while avoiding obstacles.  Since this is not a mobile platform, I have chosen to send the event to a Text To Speech service with the appropriate verbiage. 

The interface for creating a template can be programmed with coordinate numbers, or the template can be selected through the video feed.  To select a new template, highlight the Matching Template filter, then simply click the top-left and bottom-right corners of the new template's rectangle.  You will see the template image become visible in the Photo Reel section of the OpenCV GUI.

Currently, I am using the Face Tracking service in MRL.  The Face Tracking service will soon be decomposed into a more generalized Tracking service, which can be used to track a target with any sort of appropriate sensor data.  Previously I found tracking problematic: the pan/tilt platform would bounce back and forth and overcompensate (hysteresis).  The lag which video processing incurs makes tracking difficult.  To compensate for this, I have recently incorporated a PID controller into the Tracking service, and have been very pleased with the results.  The tracking bounces around much less, although there is still room for improvement.  

PID is a method (and art form) which allows error correction in complex systems.  Initially a set of values must be chosen for the specific system.  There are 3 major values:  

  • Kp = Proportional constant - responds to present errors
  • Ki = Integral constant - accumulates past errors 
  • Kd = Derivative constant - attempts to predict future errors
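A minimal PID loop in Python shows how the three constants combine. This is a sketch, not MRL's actual implementation; the gains and the pixel-error convention are made up for illustration, and every pan/tilt rig needs its own tuning.

```python
class PID:
    """Minimal PID controller sketch. Here `error` is assumed to be the
    pixel offset between the image center and the matched template."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0      # accumulated past error (Ki term)
        self.prev_error = 0.0    # last error, for the derivative (Kd term)

    def update(self, error, dt):
        """Return a correction for the current error over timestep dt."""
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error            # react to the present error
                + self.ki * self.integral  # cancel accumulated past error
                + self.kd * derivative)    # damp predicted future error

# Illustrative gains - tune per rig:
pan = PID(kp=0.1, ki=0.01, kd=0.05)
correction = pan.update(error=-40, dt=0.078)  # target 40 px left, ~78 ms/frame
```

The derivative term is what damps the back-and-forth overcompensation described above: it pushes against rapid changes in the error before they overshoot.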

The video shows the initial setup.  This involves connecting an Arduino to a COM port, then connecting 2 servos to the Arduino (1 for pan & another for tilt).  After this is done, I begin selecting different templates to match as the test continues.  The template match value in the upper-left corner represents the quality of the match.

The states which can occur:

  • "I found my num num" - first time tracking
  • "I got my num num" - lock and centered on target
  • "I wan't dat" - not centered - but tracking to target
  • "boo who who - where is my num num?" - lost tracking
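The event-to-verbiage hookup can be sketched as a simple lookup. The state keys below ("acquired", "locked", etc.) are made-up labels for illustration, not MRL's actual event identifiers, and the `speak` callback stands in for the Text To Speech service.

```python
# Hypothetical mapping of tracking states to the spoken phrases above.
PHRASES = {
    "acquired": "I found my num num",                  # first time tracking
    "locked":   "I got my num num",                    # lock and centered on target
    "tracking": "I wan't dat",                         # not centered, but tracking
    "lost":     "boo who who - where is my num num?",  # lost tracking
}

def on_tracking_event(state, speak=print):
    """Forward the phrase for a tracking state to a speech callback,
    standing in for MRL's Text To Speech service."""
    phrase = PHRASES.get(state)
    if phrase:
        speak(phrase)
```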

More to Come

In the video you can see that when I switch off the lights the lock is lost.  Template matching is sensitive to scale, rotation, and lighting changes.  Haar object detection is more robust, being less sensitive to scale and lighting changes.  The next step will be taking the template and proceeding with Haar training. 

Kinect & Bag of Words association - http://en.wikipedia.org/wiki/Bag_of_words_model_in_computer_vision


Update 2011.08.08

I was on vacation for a week; when I got back I wanted to make sure the latest release (MRL 13) was cross-platform compatible.
I had some problems with Fedora Core 15 / GNOME 3 desktop / Java & OpenCV.
FC15 can install OpenCV version 2.2, but 2.3 is available for download.
I removed the 2.2 version and did a clean install of MRL 13.

The desktop is still acting a little "goofy", but after:

  • copying *.so's from the bin directory to /usr/lib (blech) 
  • ldconfig
  • loading opencv service - and refreshing the gui screen (blech)
  • using the Gray/PyramidDown/MatchTemplate filters

I got template matching on the MAAHR brain going at 78 ms per frame running with debug logging (almost 13 frames per second!).
It says 93 ms on screen because the screen capture slows the process down. 

 

MAAHR is currently running on a 12V SLA battery.
The CPU and power supply are cool.
None of the CPUs is over 60%, and this should drop off significantly if the video feed were not being displayed.
MRL-13 has now been tested on Windows XP & 7 and Fedora Core 13 & 15.  The next version should come with OpenCV binaries compatible with an ARM-7, although Ro-Bot-X's Chumby appears down for some reason.....

 

 

1. Sounds good.
2. The Iris system will be a prop that I am ordering. It will use XBee, IR, cables, and RF to send commands to various things. It just needs to receive a command.
3. Sorry about that, it was a little misleading; I meant: not an Arduino. Either a prop or a PICAXE. Also, in your opinion, do you think it would be easier to add the computer to the communications cog (which will require adding an extra ID bit to the command and involve waiting until the board was ready, i.e. not already receiving), or to have an 08M monitor the port and take care of all that?
4. Thanks a lot for the help btw :)
Poke

Yeah.. gettin there... I had to do a bunch of upgrades...  OpenCV came out with a new version of its software, and I'm dealing with broken parts...  I have to get a new mode working for the Kinect too (interleaved, where 1 frame comes in as depth and another comes in as the image); this will make great object isolation, so things in range can be identified.. 

The keyboard input is easy squeazy, but it would be helpful if you had the hardware setup first...  Maybe string together what you have so we can start testing and developing what you want...

Small steps/changes are usually more productive... 

I agree; ordering the hardware sometime tomorrow, hopefully. I'm going to buy a USB prop board and a PICAXE 08M. Also, I thought it would be helpful to tell you that the receiving computer will be an old Linux (Ubuntu) box I refurbished.

Although it's in its pre-Alpha stage :)

I believe Ro-Bot-X will attempt to drive his Chumby around with it....  http://letsmakerobots.com/node/28713#comment-72700 

Just downloaded the latest version but can't find a "keyboard" service; is it called something else?

Pre-Alpha means it's only sitting on my workstation :)

Uploaded it... you only need to download and overwrite the myrobotlab.jar (quick download) wherever you unzipped myrobotlab-0014 - http://myrobotlab.googlecode.com/files/myrobotlab.jar 

Keyboard should be in it...

Excellent, I have it logging the keys I type! But is there any way to make it a string, or have it recognise strings yet?

What?  
Download the latest r112M & look harder! :) 

 All I'm seeing is this: