Let's Make Robots!

Walter's New Teaching Pendant for Head Moves (Virtual)


Is it wrong to give your robot a little head? Or a big head for that matter?

Gone are the days of my old, clunky teaching pendant for Walter's head. I have coded a new one in Processing. Watch the video and enjoy --this is a pretty good one. Code available upon request.


Excellent, you are my first guinea pig....  I started robotics back with that silly walking shopping cart and I have not been able to stop obsessing over it.  I got frustrated with what was available regarding software, so I started an open source robotics framework about 2+ years ago.. It's grown, but maybe at this point it should be pre-beta released..

So let's start with the first steps - 

I'm interested in creating/updating documentation regarding the installation and development of this software...

I have the very beginning of a site put up here http://myrobotlab.org

Getting you up and running will help me and hopefully others in their art/Foo of making robots...

So, Let's roll up our sleeves and get to it:
1. Do you know what Java version you have on the MSI computer?
2. One of the biggest hurdles --although I don't suspect it will take too long --is creating the PicAxe Board Service. This will allow your computer to talk to your PicAxe using this framework.
3. Do you have links or pages already around which describe in detail the connection from your 'Puter to the Smirf and from the other Smirf to the PicAxe?
4. I will need some sample PicAxe code, specifically the program you use to send and receive messages.
5. MyRobotLab comes with a controlling GUI --but if you can, start thinking of what scenarios you want to happen with Walter --e.g. what actions/reactions would be interesting or fun..
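For item 2 above, the heart of a Board Service is just packing and checking message frames on the serial link. Here is a minimal C sketch of that idea --the 4-byte frame layout, the sync byte value, and the additive checksum are all made-up assumptions, since the real format is whatever Walter's PicAxe program expects:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical frame: [sync][command][argument][checksum].
   The actual protocol depends on the serin/serout code on the PicAxe;
   this only illustrates the framing a Board Service would implement. */
enum { SYNC = 0xA5 };

/* Pack a command/argument pair into buf; returns the frame length. */
static int pack_frame(uint8_t cmd, uint8_t arg, uint8_t buf[4]) {
    buf[0] = SYNC;
    buf[1] = cmd;
    buf[2] = arg;
    buf[3] = (uint8_t)(cmd + arg);  /* simple additive checksum */
    return 4;
}

/* Validate a received frame: sync byte and checksum must match. */
static int frame_ok(const uint8_t buf[4]) {
    return buf[0] == SYNC && buf[3] == (uint8_t)(buf[1] + buf[2]);
}
```

The checksum lets either end drop a frame that got mangled over the wireless link instead of moving a servo to a garbage position.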

I'm all tingly with excitement.....

 

Let's see, first off I am quickly finding out the limitations of Processing... It's sorta like BASIC: it's great because it looks like English, is simple, and serves its purpose, but if you really need to crunch numbers...

Probably about time for me to start learning C or C++, and I would love a little chunk of tennis-ball or color-tracking code --great if it ended with a simple int x and int y noting the center of the tracked object in relation to the window --this could then be sent out to the pan/tilt servos.
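That int x / int y idea is just a centroid over the matching pixels. A plain-C sketch (the tiny 8x6 mask stands in for a real camera frame, which would come from OpenCV):

```c
#include <assert.h>

/* Toy frame size -- a stand-in assumption; a webcam frame would be
   something like 320x240. mask[y][x] is 1 where the pixel matched the
   tracked color, 0 otherwise. */
#define W 8
#define H 6

/* Find the centroid of all matching pixels.
   Returns 1 and fills *cx, *cy on success; returns 0 if nothing matched. */
static int centroid(unsigned char mask[H][W], int *cx, int *cy) {
    long sx = 0, sy = 0, n = 0;
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            if (mask[y][x]) { sx += x; sy += y; n++; }
    if (n == 0) return 0;        /* lost the object */
    *cx = (int)(sx / n);
    *cy = (int)(sy / n);
    return 1;
}
```

Subtract the frame center from (cx, cy) and you have the error signal to send to the pan/tilt servos.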

I am using OpenCV 1.0 --this is the version that the OpenCV website told me to download and install, and I am/was a bit confused because 2.0 was the big download button and 1.0 was tiny and much further down on the page.

In terms of C, you suggest Eclipse CDT as my "editor", correct? I was looking at RoboRealm as well, and you are right --it is well worth it to get the 30 day trial and see if I like it. I need to also start learning about APIs and how everybody is supposed to be talking to each other.

Like I have said before, my brain is in full-on sponge-mode lately (not to mention having a ton of time on my hands) and I am simply in the groove to be learning new stuff. I love the fact that LMR and guys like you, GroG, rik and the like are around to steer me in the right direction.

 

I dig your facial tracking. I think that here at the beginning of my learning I am going to focus on the whole "track a tennis ball" or track-a-certain-color thing. I gotta tell you though, it seems that C is just slightly more popular than Processing for doing this. I am just now going through link after link after link trying to find some sample code to do this. I have gone through all the OpenCV examples and can't seem to find what I need. All the documentation states that color tracking is a standard feature in OpenCV, but I guess I am not looking hard enough.
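The color-tracking step itself boils down to a per-pixel threshold test: is this pixel close enough to the target color? A plain-C sketch --the target color and the tolerance value are arbitrary assumptions; OpenCV's C API does the same thing over a whole frame with cvInRangeS:

```c
#include <assert.h>

/* Return 1 if pixel (r,g,b) is within tol of target (tr,tg,tb),
   using squared Euclidean distance in RGB space. Real trackers often
   threshold in HSV instead, which is less sensitive to lighting. */
static int is_match(unsigned char r, unsigned char g, unsigned char b,
                    unsigned char tr, unsigned char tg, unsigned char tb,
                    int tol) {
    int dr = r - tr, dg = g - tg, db = b - tb;
    return dr * dr + dg * dg + db * db <= tol * tol;
}
```

Run this over every pixel to build the binary mask, then take the centroid of the mask to get the ball's x/y.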

Thanks bunches for the suggestions and encouragement

--and don't forget to stop by the post office today!  :)

 

Oh wait, one more thing. Sorry for the stream of consciousness. If you aren't really interested in learning OpenCV or active vision from the ground up per se, but more or less interested in solving a specific vision problem as it pertains to Walter, maybe just try RoboRealm. In fact, they have an API into the application, so you may be able to leverage what you already have in Processing. Check them out at www.RoboRealm.com. If you download the 30 day trial and make a tutorial, I think the guys at RoboRealm will give you a license key free of charge.

Nice code post, love your hair... I've looked at RoboRealm too; they have a nifty interface. MATLAB has a system where you can layer vision filters one on top of the other --I think they even had an interface to do stuff, i.e. move servos, but I didn't go that far with it.

In fact, wait, try this. I forgot I posted this a few weeks ago:

http://www.davidjbarnes.com/Publicly_Available_Software/OpenCV_Object_Tracking_Based_on_Pixel_Color.aspx

Like I said earlier, you'll need MinGW and a C compiler. Also, Eclipse CDT is a good idea.

I just got back from the post office. It should be there in a few days. 

In my opinion, you're gonna wanna ditch Processing pretty quick if you want to work with active vision. If you want, I can write you up some quick C code to track a tennis ball. Which version of OpenCV did you install, 2.0 or 2.1?

Do you have MinGW and a C compiler?

I have to say, first off, this is the first time I have downloaded and installed the OpenCV stuff and had it work. I have tried 2 or 3 times in the past (as OpenCV crossed my path in posts and comments) and it never took. I now have it installed and have run some of the examples. Blob detection works great and this seems like a good place to start.

I would like to replicate what I have seen others do --waving a colored ball in front of the camera and letting the computer track it. It seems pretty easy to get an X and a Y from this and then control some pan and tilt servos to keep the object centered in the screen. Obviously, with a distance sensor also shooting at the object, it seems we can make a pretty simple "follow the leader (or in this case, green ball)" routine. I.e., find and track the ball with blob detection, keep it centered via pan and tilt, turn the whole robot when we are getting close to maxing out the pan/tilt travel, and finally, get a distance number from it (via sonar) and stay a given distance away. I would love to see Walter cruising around the room following a green ball on the end of a stick.
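The follow-the-leader logic described above can be sketched as three small decisions per frame. Everything numeric here --servo limits, the /10 gain, the 5 cm deadband --is an arbitrary assumption to be tuned on the real hardware:

```c
#include <assert.h>

enum turn { TURN_NONE, TURN_LEFT, TURN_RIGHT };

/* err_x: ball center minus frame center, in pixels.
   Nudge the pan servo (0..180 degrees) toward the ball and return
   the new angle. The /10 gain is a crude proportional step. */
static int track_pan(int pan, int err_x) {
    pan += err_x / 10;
    if (pan < 0)   pan = 0;
    if (pan > 180) pan = 180;
    return pan;
}

/* Turn the whole robot once the pan servo nears its travel limit,
   so the head can re-center. */
static enum turn base_turn(int pan) {
    if (pan < 20)  return TURN_LEFT;
    if (pan > 160) return TURN_RIGHT;
    return TURN_NONE;
}

/* Drive command from sonar: +1 forward, -1 back, 0 inside a
   5 cm deadband around the target following distance. */
static int drive(int dist_cm, int target_cm) {
    if (dist_cm > target_cm + 5) return 1;
    if (dist_cm < target_cm - 5) return -1;
    return 0;
}
```

The deadband in drive() keeps Walter from twitching back and forth when the sonar reading jitters around the target distance.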

Of course I would eventually like to get to full navigation via webcam as well, but that seems a bit down the road. I am really liking the idea of multiple services taking care of different tasks all reporting back to who they need to report to. The bottom line is that I am simply interested in/ready to start learning real, real, real big-boy programming now. If this post starts a conversation between us, I welcome it.

I would love to hear just about anything you have to tell me --If nothing else, MORE EXAMPLE CODE, PLEASE. --oh, and a link to your voice recognition stuff too!

--I'm getting excited

BTW

MSI Wind Netbook (1.6 GHz Atom --just like all the netbooks), 1 GB memory

Internal USB webcam --I am still looking for a wireless webcam --don't want the whole lappy on Walter's back

Windows XP

CtC,

I just released a video the other day testing some visual servo functionality. Seems to me you could leverage some of what I have done. Take a peek: http://www.youtube.com/watch?v=i48DnMCfTBc

I stay away from Processing, intentionally. The application in the video uses EmguCV in a C# environment. However, in the past and in most of my applications I stick to C/C++ and at times build GUIs in Qt.

One other thing worth mentioning is a new open source initiative I am embarking upon; www.OpenRobotVision.org. Perhaps I can rope you into contributing in the future. 

What are they? On my system when it tracks faces I always hear the squeeeeek squeeeeek of the servos...