Let's Make Robots!

Face tracking with Arduino, Processing and OpenCV

Tracks faces and orients the camera so that the face stays in the middle of the frame


I have stripped down the Processing code to the bare basics so that it can be used however you want. I added lots of comments to make it easier to understand, leaving you free to put your own actions in when criteria are met. I have also added an additional check to see whether the face is in the middle of the screen (min_x, max_x, min_y and max_y inside a middle square) so additional actions can be performed; in the case of a robot, it could move towards you.
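That middle-square check can be sketched in a few lines. This is only an illustration, not the code from the linked sketch: the variable names (faceX, faceY, faceW, faceH), the 640x480 screen size and the choice of the central third of the screen as the "middle square" are all assumptions here.

```java
// Minimal sketch of the "face in the middle square" check described above.
// All names and dimensions are illustrative, not from the original sketch.
public class MiddleCheck {
    static final int SCREEN_W = 640;
    static final int SCREEN_H = 480;

    // True when the whole face bounding box sits inside the middle square
    // (here taken as the central third of the screen in each direction).
    static boolean faceInMiddle(int faceX, int faceY, int faceW, int faceH) {
        int minX = SCREEN_W / 3, maxX = 2 * SCREEN_W / 3;
        int minY = SCREEN_H / 3, maxY = 2 * SCREEN_H / 3;
        return faceX >= minX && faceX + faceW <= maxX
            && faceY >= minY && faceY + faceH <= maxY;
    }

    public static void main(String[] args) {
        System.out.println(faceInMiddle(280, 200, 60, 60)); // inside -> true
        System.out.println(faceInMiddle(10, 10, 60, 60));   // top-left -> false
    }
}
```

When this returns true, a robot could stop panning and instead drive forward, as suggested above.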

The stripped down processing code : https://www.dropbox.com/sh/vpvbsrpx43zzcen/S8Xi4MmMwj




I used the face-tracking code to control a remote-control car to show its potential uses.





This is the last update I will post on this. I was playing around with the basic sketch today and managed to work out how far away the face is from the screen. All I did was put my face 30 cm away from the screen and measure how many pixels wide the face's bounding box was. I then did the same at 60 cm and used the two points to form a linear equation that gives the distance to the face from the width of its bounding box. I recorded this and the link is below:
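The two-point calibration above amounts to fitting a straight line through (box width, distance) pairs. A minimal sketch, with made-up example calibration values (200 px at 30 cm, 100 px at 60 cm; you would measure your own):

```java
// Sketch of the two-point distance calibration described above.
// The pixel widths are illustrative example values, not measured ones.
public class FaceDistance {
    // Calibration points: (bounding-box width in px, distance in cm)
    static final double W1 = 200, D1 = 30;
    static final double W2 = 100, D2 = 60;

    // Linear fit through the two points: distance = m * width + b
    static double distanceCm(double boxWidthPx) {
        double m = (D2 - D1) / (W2 - W1); // slope
        double b = D1 - m * W1;           // intercept
        return m * boxWidthPx + b;
    }

    public static void main(String[] args) {
        System.out.println(distanceCm(150)); // halfway between the calibration points -> 45.0
    }
}
```

A linear fit is only an approximation (box width really varies inversely with distance), but over a short range like 30-60 cm it works well enough for this purpose.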






How this actually works is described below, and also in the video. I suggest you watch the video, as my writing is terrible.

Video : http://www.youtube.com/watch?v=1cEp7duDbNU&feature=plcp

The Processing code : https://www.dropbox.com/sh/v9hkdxuoazoyb0d/fCGQzEfBAK The OpenCV and Arduino libraries are needed. OpenCV also needs to be installed on the computer.

The Arduino code : https://www.dropbox.com/sh/ujjlahx83ilv1j2/QB0bu8E-EJ


This is something that I have wanted to do for a very long time: face tracking. When I look at all my favorite films I see a common factor, technology, whether it is R2-D2, flying cars or JARVIS from the Iron Man movies. I have already made something inspired by JARVIS, a simple voice-control script written in GlovePIE to control my computer, seen here: http://www.youtube.com/watch?v=i5S1H1nogpI&feature=plcp, but this time I wanted to make something really cool. I had the idea in the shower to try to make something that would follow my face, like something out of the I, Robot movie, and a rough idea of how I would do it.

I have been mucking around with Processing for about a month now and with the Arduino Uno for about four, and have learnt a lot about the two and how they can interact. I knew that I didn't have the skill-set to build a standalone Arduino face-tracking system, and later found out that this wouldn't be possible anyway, because the Arduino can't provide the processing power required. I looked into how Processing could use my webcam and found many libraries with blob detection and such, but nothing like what I really wanted.

On my Google adventures I found a motion-detection sketch that took two frames from a video and subtracted the colours of each pixel to detect changes in the overall picture. I liked this idea and decided to start with that. I played around with the code for a while but couldn't manage to split the video data into the 9 quadrants I needed. More googling turned up OpenCV and all the powerful things that had been made with it, and I decided this was what I would use.

I installed OpenCV and the OpenCV library for Processing. I found a sample sketch for face detection and attempted to run it; it didn't work. After much troubleshooting I found that Processing wasn't loading the file it needs for face tracking, so I had to give the full path to that file. After that it still didn't work: I also had to import Java's Rectangle class. After all this it finally started to work, and Processing was tracking my face.

Now to talk about what I did. I broke the screen up into 9 quadrants representing different areas of the screen, and showed this visually by drawing 4 lines on the screen to mark where the areas are. The program draws a bounding box around each face with (x, y, width, height) values, so all I did was check which quadrant the box was in and print those results in Processing. Depending on which quadrant the box is in, Processing writes a case to the Arduino through the serial port. The Arduino code listens for cases and increments the servo positions accordingly.
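The quadrant logic above can be sketched like this. The single-character commands ('L', 'R', 'C'), the 640x480 screen size and the use of the box centre are illustrative stand-ins, not the actual cases the real sketch writes over serial:

```java
// Sketch of the 3x3 quadrant logic described above: the face box's centre is
// mapped to a column (0..2), and the column picks a pan command to send to
// the Arduino. The tilt axis works the same way using rows. All names and
// command characters here are assumptions, not from the original code.
public class QuadrantMap {
    static final int SCREEN_W = 640, SCREEN_H = 480;

    // Which third of the screen the box centre falls in, horizontally.
    static int column(int x, int w) {
        return Math.min(2, (x + w / 2) * 3 / SCREEN_W);
    }

    // Which third of the screen the box centre falls in, vertically.
    static int row(int y, int h) {
        return Math.min(2, (y + h / 2) * 3 / SCREEN_H);
    }

    // Pick a pan command from the column: left, right, or centred.
    static char panCommand(int col) {
        switch (col) {
            case 0:  return 'L'; // face is left of centre: pan left
            case 2:  return 'R'; // face is right of centre: pan right
            default: return 'C'; // face is centred: hold position
        }
    }

    public static void main(String[] args) {
        System.out.println(panCommand(column(50, 60)));  // left third -> L
        System.out.println(panCommand(column(300, 60))); // middle -> C
    }
}
```

In the real setup, Processing would write the chosen character over the serial port each frame, and the Arduino would nudge the pan servo one step per character received, which is what keeps the face drifting back to the centre.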


I'm preparing something like that, but with a Raspberry Pi. I've got the face-tracking software running, but the hardware for the robot is expensive, so I'm going step by step: first the RPi, then the code for face and body tracking, then pan-and-tilt hardware like yours, then the chassis and the rest of the sensors.

Yes, price is the only factor stopping me from implementing this in a robot at the moment. Let me know when you have got your face-tracking code on your robot; I would really like to see it in action. My pan-tilt hardware consisted of two 5-gram servos and double-sided tape, which was later upgraded with zip ties :)

Well, that's a nice project. Will see if I can use it for my "Stray" when I ever get time to proceed with that project :-)

I would love to see it operational on a robot! I'm getting more components off eBay, so expect to see my implementation of this in a robot in the coming months, but I would love to see what people do with this.

Well, it might take some time since I am not familiar with OpenCV yet :-)

For the beginning I will just grab some code from here and try things out. I will get two cheap webcams by tomorrow, so the hardware is ready to be mounted...

After playing around with some values, the vertical jumpiness of the servos can be corrected by changing the tiltpos variable in the Arduino code to 0.5 instead of 1.