Let's Make Robots!

The Visible Kitteh Project

My latest project isn't much of a robot in the traditional sense: it has virtually no moving parts. All told, it will have a single linear actuator. However, since it does have at least one, is controlled by a computer brain, and, more importantly, I think people will find it interesting, I wanted to post it here. I call it The Visible Kitteh Project. The main page for the project can be found here: http://www.visiblekitteh.com

Problem description: We have a problem. His name is Timothy. He is a sweet, if very _loud_, tabby cat that we love dearly. One thing we really don't love, however, is the carnage. Timothy is a very good hunter, and his favorite thing to do with his prey after he has caught it is bring it inside and finish it off under my wife's desk. This poses two problems. First, cleaning blood, feathers, fur, and viscera out of the carpet under the desk and off the walls. Second, sometimes they get away. My wife is tired of cleaning the carpet under her desk. I am tired of chasing field mice around the house with a shoebox. It is time for a change.

Requirements: Prevent the cat from being able to enter the house when he has an animal in his mouth.

Nice-to-haves: Do so in a way that doesn't terrify the other cat, whom we are trying to train to use the cat door to enter the house.

Methodology: Object recognition using Haar cascades, implemented with the OpenCV library's Java bindings for Android. Ultimately I would like to try switching back to the Arduino platform for a low-cost, low-power-consumption solution, but I just couldn't find enough reference code or documentation to learn how to do it.

Source code (such as it is) is here: https://github.com/omadawn/VisibleKitteh
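The decision logic I have in mind boils down to something like the sketch below. This is only an illustration of the intended control flow, not the project's actual code: the function and label names (should_unlock, "cat", "mouse") are hypothetical stand-ins for the OpenCV cascade output and the actuator control, and the real implementation runs on Android via OpenCV's Java bindings.

```python
# Hypothetical sketch of the door-control decision, NOT the project's
# real code. Assumes the Haar-cascade detector returns a list of labels
# for whatever it recognized in the frame.

def should_unlock(detections):
    """Unlock only when exactly one thing is in frame and the
    classifier says it is a bare cat head (no prey in mouth)."""
    return len(detections) == 1 and detections[0] == "cat"

# A clean cat-head detection unlocks the door...
print(should_unlock(["cat"]))           # True
# ...but a cat carrying something extra keeps it locked,
# and an empty frame never unlocks.
print(should_unlock(["cat", "mouse"]))  # False
print(should_unlock([]))                # False
```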


Update 04/21, 12:30 am: TVKP went live in Phase I, data-logging mode.



Update 05/01: I finally seem to have resolved the repeated problems I've been having trying to threshold pixel brightness, thanks in no small part to help from some of the cats here on LMR.



Come to think of it, you know GroG and his My Robot Lab software project, right? I tried to follow it some time ago but sort of lost focus. Perhaps he has already solved some of your OpenCV issues.

You commented about getting less of a point source from your LEDs. Have you considered filing down the lenses of the LEDs to get a more dispersed light?

I made some lens caps out of some old wall-plug baby proofers that look like they work well, but once I got all 4 LEDs spread around my aperture the light spread out pretty well, so I never implemented it. I might just have to play with that and see if I can get it spread out even better. That might be cool. Thanks for the idea.

Enclosure V2 works _much_ better. No more fur closeups.

I've made three or four changes since the last post and learned new things each time.

Switching to HSV made a huge difference in detecting when something is in frame. OpenCV now returns numbers that I can vaguely understand, so I am able to make a decision using a simple less-than comparison to determine whether I should snap a learning image. There are some other challenges I've encountered with the phone automatically adjusting exposure based on ambient light, so I'm still capturing a bunch of photos of the inside of the box; I'm just not capturing 26 thousand of them every day any more.
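For anyone curious what the less-than comparison looks like, here's a minimal sketch of value-channel thresholding. It's pure Python with made-up pixel values and a made-up threshold, so it can stand alone; the real code does this through OpenCV's Java bindings on the phone. The key fact it relies on is that in HSV, the V (value) channel of a pixel is simply max(R, G, B).

```python
# Sketch of value-channel thresholding (hypothetical threshold and
# pixel values; the project's real code uses OpenCV on Android).

def value_channel(r, g, b):
    """V channel of an RGB pixel on a 0-255 scale: max(R, G, B)."""
    return max(r, g, b)

def something_in_frame(pixel, threshold=180):
    """The background is bright, so a dark pixel at the detection
    point suggests a cat (or at least something) is in frame."""
    r, g, b = pixel
    return value_channel(r, g, b) < threshold

print(something_in_frame((250, 248, 245)))  # bright background -> False
print(something_in_frame((60, 45, 40)))     # dark tabby fur    -> True
```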

Moving the camera back was perfect; it's far enough to actually catch a whole head instead of the tip of a nose or an ear. However, even though I was feeling pretty solid about my 'object in frame' detection (at least at night), I noticed I wasn't getting images when I knew the cat had gone through the door. I was starting to fear that he was just moving too fast to capture, but I figured I should at least have a blur of torso or even tail.


To determine whether something is in frame, I'm checking the brightness of a single pixel, currently in the middle of the image, right about in the center of this blue circle.

Detection Point

Which, if you look carefully, means that it's possible for him to walk almost entirely under it. In some situations only the tip of his ear might brush past it. It is entirely possible that the phone won't notice the change even at a normal gait.


I've moved the detection point down to where it should land in the center of his head in this image. I've also replaced the tap light with a sheet of thick white construction paper, so now almost 100% of what the camera sees should be smooth white until there is a cat in frame. We'll see how those samples work out when I collect them tonight.
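With the backdrop now plain white, one possible refinement (not something the project does yet) would be to average the value channel over a small patch around the detection point instead of reading a single pixel, so a cat passing slightly below the point still moves the reading. A sketch in plain Python, with made-up patch size and threshold:

```python
# Hypothetical refinement: average max(R,G,B) over a patch centred on
# the detection point. Patch radius and threshold are invented numbers,
# not values from the project.

def patch_mean_value(frame, cx, cy, radius=5):
    """Mean V channel over a (2*radius+1)^2 patch centred on (cx, cy).
    `frame` is a 2-D list of (r, g, b) tuples indexed [row][col]."""
    total, count = 0, 0
    for y in range(cy - radius, cy + radius + 1):
        for x in range(cx - radius, cx + radius + 1):
            r, g, b = frame[y][x]
            total += max(r, g, b)
            count += 1
    return total / count

# A 21x21 all-white frame reads exactly 255; darken a 5x5 blob of
# "fur" at the centre and the patch mean drops below the threshold.
frame = [[(255, 255, 255)] * 21 for _ in range(21)]
print(patch_mean_value(frame, 10, 10))  # 255.0
for y in range(8, 13):
    for x in range(8, 13):
        frame[y][x] = (50, 40, 35)
print(patch_mean_value(frame, 10, 10) < 240)  # True
```

The design idea is just noise resistance: a single pixel can miss an ear tip entirely, but a patch mean changes whenever any meaningful fraction of the patch darkens.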


More on this here: https://sites.google.com/a/forstersfreehold.com/visible-kitteh-project/home/announcements/learningthelearning

Built a little box and stuck it on the side. Drilled and cut some holes. Let's see how this works out.

The Visible Kitteh Project is up in Data Logging mode!!!!! Phase one is in place!!!!


When you get it functional, do you have any friends with cats (or small dogs or a badger) that you can borrow to see if the system knows the difference between your pet and theirs? It would be interesting to see how discriminating the software is.

At this point the level of discrimination would depend pretty much entirely on how you trained it. Since it will be trained with only profile pictures of my cat, my suspicion is that it would let in some, but not all, other cats, but should refuse any other creature.


It does sound like an interesting experiment though.

Keep the updates coming.