Let's Make Robots!

Face tracking with Arduino, Processing and OpenCV

Tracks faces and orients the camera so that the face is always in the middle of the frame.

UPDATE 1:

I have stripped down the Processing code to the bare basics so that it can be used however you want. I added lots of comments to make it easier to understand and to leave it open for you to put your own actions in when the criteria are met. I have also added an extra check to see whether the face is in the middle of the screen (min_x, max_x, min_y and max_y all inside the middle square), so that additional actions can be performed; in the case of a robot, it could move towards you.

The stripped-down Processing code: https://www.dropbox.com/sh/vpvbsrpx43zzcen/S8Xi4MmMwj
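For reference, the middle check is just a bounds comparison. Below is a minimal sketch of it in Processing syntax; it is a hypothetical illustration rather than the code from the download, and it assumes the middle square is the centre cell of the 3x3 grid drawn on screen.

// Hypothetical helper for the "face in the middle" check described above.
// x, y, w, h are the face bounding box; width and height are the sketch size.
boolean faceInMiddle(int x, int y, int w, int h) {
  int min_x = x;          // left edge of the bounding box
  int max_x = x + w;      // right edge
  int min_y = y;          // top edge
  int max_y = y + h;      // bottom edge

  int left   = width / 3;       // left edge of the middle square
  int right  = 2 * width / 3;   // right edge
  int top    = height / 3;      // top edge
  int bottom = 2 * height / 3;  // bottom edge

  return min_x > left && max_x < right && min_y > top && max_y < bottom;
}

If this returns true, the face is centred and the robot could, for example, start moving towards you.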

UPDATE 2:

I made the face-tracking code control a remote-control car to show its potential uses.

http://www.youtube.com/watch?v=3qzT2Jzp9qc&feature=plcp

UPDATE 3:

This is the last update I will post on this. I was playing around with the basic sketch today and managed to work out how far away the face is from the screen. All I did was put my face 30 cm away from the screen and work out how many pixels wide the face's bounding box was. I then did the same at 60 cm and used the two points to form a linear equation that gives the distance to the face based on how big its bounding box is. I recorded this and the link is below:

http://www.youtube.com/watch?v=FhcNtd1ePws
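In other words, the two measurements define a straight line mapping bounding-box width in pixels to distance in centimetres. A rough sketch of the calculation in Processing syntax, with made-up example pixel widths (measure your own at 30 cm and 60 cm):

// Two-point calibration following the method described above.
// The pixel widths below are example values only, not real measurements.
float w30 = 160.0;   // face bounding-box width in pixels at 30 cm (example)
float w60 = 80.0;    // face bounding-box width in pixels at 60 cm (example)

// Straight line through (w30, 30) and (w60, 60): distance = m * width + b
float m = (60.0 - 30.0) / (w60 - w30);
float b = 30.0 - m * w30;

float estimateDistance(float faceWidthPx) {
  return m * faceWidthPx + b;   // estimated distance in cm
}

Strictly speaking, the apparent width of a face falls off roughly in proportion to 1/distance rather than linearly, so the straight-line fit is only an approximation and is most accurate near the two calibration points.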

How this actually works is described below, but it is also covered in the video. I suggest you watch the video, as my writing is terrible.

Video : http://www.youtube.com/watch?v=1cEp7duDbNU&feature=plcp

The Processing code: https://www.dropbox.com/sh/v9hkdxuoazoyb0d/fCGQzEfBAK The OpenCV and Arduino libraries for Processing are needed. OpenCV itself also needs to be installed on the computer.

The Arduino code: https://www.dropbox.com/sh/ujjlahx83ilv1j2/QB0bu8E-EJ

This is something that I have wanted to do for a very long time: face tracking. As I look at all my favorite films I see a common factor, technology, whether it is R2-D2, flying cars or JARVIS from the Iron Man movies. I have already made something inspired by JARVIS, a simple voice-control script written in GlovePIE to control my computer, seen here: http://www.youtube.com/watch?v=i5S1H1nogpI&feature=plcp, but this time I wanted to make something really cool. I had the idea in the shower to try to make something that would follow my face, like something out of the I, Robot movie, and a rough idea of how I would do it.

I have been mucking around with Processing for about a month now and the Arduino Uno for about four, and have learnt a lot about the two and how they can interact. I knew that I didn't have the skill set to build a standalone Arduino face-tracking system, and I later found out that this wouldn't be possible anyway, because the Arduino cannot provide the processing power required. I looked into how Processing could use my webcam and found many libraries with blob detection and such, but nothing like what I really wanted. On my Google adventures I found a motion-detection sketch that took two frames from a video and subtracted the colours of each pixel to detect changes in the overall picture. I liked this idea and decided to start with that. I played around with the code for a while but couldn't manage to split the video data into the 9 quadrants that I wanted to use.

More googling turned up OpenCV and all the powerful things that had been made with it, and I decided this was what I would use. I installed OpenCV and the OpenCV library for Processing. I found a sample sketch for face detection and attempted to run it; it didn't work. After much troubleshooting I found that Processing wasn't finding the cascade file needed for face detection, so I had to give it the full path to that file. After that it still didn't work: I also had to import Java's Rectangle class. After all this it finally started to work, and Processing was tracking my face.
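Roughly, those two fixes amount to something like the sketch below (Processing 1.5.1 with the hypermedia OpenCV library). The cascade path is only an example and depends on where OpenCV is installed on your machine.

// Minimal face-detection sketch showing the two fixes mentioned above:
// importing java.awt.Rectangle and giving the full path to the cascade file.
import hypermedia.video.*;
import java.awt.Rectangle;

OpenCV opencv;

void setup() {
  size(320, 240);
  opencv = new OpenCV(this);
  opencv.capture(width, height);
  // Example path only; point this at the cascade file in your OpenCV install
  opencv.cascade("C:/OpenCV/data/haarcascades/haarcascade_frontalface_alt.xml");
}

void draw() {
  opencv.read();                  // grab a frame from the webcam
  image(opencv.image(), 0, 0);    // draw it to the sketch window
  Rectangle[] faces = opencv.detect(1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40);
  noFill();
  stroke(255, 0, 0);
  for (Rectangle f : faces) {
    rect(f.x, f.y, f.width, f.height);   // bounding box around each face
  }
}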

Now to talk about what I did. I broke the screen up into 9 quadrants representing different areas of the screen, and showed this visually by drawing four lines across the screen to mark where the areas are. The program draws a bounding box around each face with (x, y, width, height) values, so all I did was check which quadrant the box was in and print those results in Processing. Depending on which quadrant the box is in, Processing writes a case to the Arduino through the serial port. The Arduino code listens for these cases and increments the servo positions accordingly.
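As a rough illustration of that logic, the Processing side boils down to something like the function below. The one-character cases ('L', 'R', 'U', 'D'), the use of the box centre and the serial setup are assumptions made for the example; the actual code in the download may differ.

import processing.serial.*;
import java.awt.Rectangle;

Serial port;   // opened in setup(), e.g. port = new Serial(this, "COM3", 9600);

// Work out which third of the screen the face bounding box sits in and
// send a one-character case to the Arduino over serial.
void sendCase(Rectangle face) {
  int cx = face.x + face.width / 2;    // centre of the bounding box
  int cy = face.y + face.height / 2;

  if (cx < width / 3)           port.write('L');  // left column   -> pan left
  else if (cx > 2 * width / 3)  port.write('R');  // right column  -> pan right

  if (cy < height / 3)          port.write('U');  // top row       -> tilt up
  else if (cy > 2 * height / 3) port.write('D');  // bottom row    -> tilt down
  // Middle column and middle row: nothing is sent and the camera stays put.
}

On the Arduino end, the matching sketch reads each case off the serial port and nudges the pan or tilt servo a small step in that direction, as described above.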

Comments

I am astonished at the fact that we are in 2014, yet no chip maker has taken the opportunity to produce a face-tracking chip for hobby use. We are still stuck with OpenCV. A standalone chip would sell like hot cakes, just like the early speech-synthesis ICs. Speaking of which, they even cut us off from speech synthesis, declining to release anything more advanced after the SpeakJet or SPO256. Back to vision: since it is integrated into small digital cams, surely they could throw out a standalone chip or two for the millions of innovative developers. Has anyone seen one lately?

The small digital cameras do the tracking in software, not hardware. They have a processor on board running an operating system of some kind.

Everything is fine until I select the video device.

I get loads of errors:

Error while starting capture : device 0
Drag mouse on X-axis inside this sketch window to change contrast
Drag mouse on Y-axis inside this sketch window to change brightness
OpenCV could not define source dimensions.

Exception in thread "Animation Thread" java.lang.NullPointerException
 at processing.core.PGraphics.image(Unknown Source)
 at processing.core.PApplet.image(Unknown Source)
 at face_detection.draw(face_detection.java:71)
 at processing.core.PApplet.handleDraw(Unknown Source)
 at processing.core.PApplet.run(Unknown Source)
 at java.lang.Thread.run(Thread.java:662)

nice cp

What versions were used (OpenCV, Processing and Arduino)?

Arduino-1.0.1

Processing-1.5.1

OpenCV-1.0

Nice work figuring out the distance that way. My gut tells me that you may want to try adding one more measurement for accuracy though (think of the inverse-square law). Of course, if you get up to 90 cm or down to 15 cm away you might be pushing the resolution of the camera or the ability of OpenCV to recognize a face as a face. Thanks for sharing your work!

I can't remember hearing of the inverse-square law before; I'll definitely have to look into that. What I found with my method is that it seemed to lose accuracy the further away I got, although OpenCV was still able to track my face quite well at distances over 3 m; at distances under 30 cm it wasn't able to get a good view of my face (not many views of my face are good :D). Thank you very much for the kind words.

Nice post - here are some other links you might find useful

LK optical tracking is fast and would be good if you want to track a face after it has been detected - http://myrobotlab.org/content/lucas-kanade-optical-tracking-opencv-lkopticaltrack-filter

Template matching is good for when you are interested in other things besides faces - http://www.youtube.com/watch?v=UMXrk6EVWfI

Here is the concept of switching algorithms and filters - heh... and it's 007-related (apropos of your avatar) http://www.youtube.com/watch?v=jMJ2kJTKyq4&feature=relmfu

It appears that most of them use OpenCV, so I'm sure there is a lot more that can be done with this than what I am capable of. That template matching is a brilliant idea; I can see many instances where it would be very useful for any type of butler bot, looking for cups etc. Haha, Q in the James Bond series definitely was an influence on me becoming interested in electronics. I think the next step with this code would be to make it like the last post: have a nice GUI to change what you are looking for, the bounding boxes, etc. Thanks for that.