Let's Make Robots!

Walter's New Teaching Pendant for Head Moves (Virtual)

Is it wrong to give your robot a little head? Or a big head for that matter?

Gone are the days of my old, clunky teaching pendant for Walter's head. I have coded a new one in Processing. Watch the video and enjoy --this is a pretty good one. Code available upon request.


Aight - gonna start by givin' you what I know.... It seems you are interested in "coding" so I'll pontificate ad nauseam  

I believe Processing is coded in Java - it has a scripting language which in turn gets compiled into Java classes and executed.  It looks pretty damn nifty. I downloaded it once and messed around with some of the demos (damn cool).. however my interests made me look around further for software specifically aimed at "robot/machine control"... Processing could do this to some degree, but it was designed for a different purpose.

OpenCV is the powerhouse of vision software!  It was started by Intel, released to the public, and contains an amazing collection of functions and utilities.  It has had a long life, and software years are like dog years..


http://sourceforge.net/projects/opencvlibrary/ - it is written in C and has interfaces for C++ & Python. I have downloaded and played with it.. it was designed to be built on all sorts of platforms - I've built it on Windows and Linux.

So... initially you might be interested in OpenCV + Processing... well, you need some Java glue to make them stick together..

These guys created some of that glue http://ubaa.net/shared/processing/opencv/ ...
Really it's not so much glue as a "specialty fastener" - limited in some ways... It's wicked cool; I tried it and got it working with Processing..

What I Got :
I made a service-based, multi-threaded Java framework.. A framework is like a lot of glue.. and it is currently gluing OpenCV + Arduino + Servos + Motors + Speech Recognition + EZ1 Sonar + IR Module + MP3 Player + Control GUI + Joystick + RecorderPlayer + Text To Speech 


Here is a screenshot of a recent experiment... the little rectangles are the services, and the little arrows are message routes.
So in this case the "camera" service, which uses OpenCV, sends messages to the "tracker" service, which in turn sends messages to the "pan" and "tilt" servo services, which in turn send messages to the "board" - in this case an Arduino Duemilanove..

Make sense?? A service is like a little piece o' your brain... visual cortex, cerebellum, hypothalamus, etc.. ;)
Services make messages, relay messages, receive and/or process messages...
Messages, messages, messages, neurons, synapses, dendrites..... wheee!
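To make the service idea concrete, here is a stripped-down sketch in plain Java. The names (`Service`, `Message`, `routeTo`, `pump`) are illustrative only, not the real framework's API: each service owns an inbox, processes one message at a time, and relays the result down its routes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative names only -- not the actual framework's API.
class Message {
    final String method;
    final Object data;
    Message(String method, Object data) { this.method = method; this.data = data; }
}

class Service {
    final String name;
    final Queue<Message> inbox = new ConcurrentLinkedQueue<>();
    final List<Service> routes = new ArrayList<>();

    Service(String name) { this.name = name; }

    void routeTo(Service s) { routes.add(s); }   // e.g. camera -> tracker
    void send(Message m)   { inbox.add(m); }

    // Override per service: visual cortex, cerebellum, etc.
    Message process(Message m) { return m; }

    // Drain one message: process it, then relay the result downstream.
    // In a real multi-threaded framework each service would run this
    // loop on its own thread.
    boolean pump() {
        Message m = inbox.poll();
        if (m == null) return false;
        Message out = process(m);
        if (out != null) for (Service s : routes) s.send(out);
        return true;
    }
}
```

Wiring `camera.routeTo(tracker); tracker.routeTo(pan);` reproduces the arrows in the screenshot: a frame message dropped into "camera" flows down the chain.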

So if you really want to try it I need to know some of the details of your system :

1. what is your puter OS - it looks Bill Gates-ish (bad boy) 
2. what are your "boards" - PICAXE if I recollect - I currently don't have a PICAXE board in my library of services, but I could write one with your help - I'd be interested in adding it

Below is the "uber" pane, which has most of the controls for the "Services".. e.g. "camera", "pan".. etc... remember?

The "board" service is a bit more complicated so it has its own pane...

This is a work in progress - so things will be bumpy ... still interested?


I am using a BlueSMiRF Bluetooth module from SparkFun for transmitting data --I could not be happier with the unit, by the way. The next mapping step is, like I said above, a system for following the map that we just drew. I have, however, been thinking of building a system where Walter not only draws the map as he goes but also does a sonar sweep every few inches or so. I could easily send the position data and sonar data to build a little bitmap picture of his surroundings as he goes.
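That bitmap idea is cheap to sketch: each sonar ping, taken at a known position and heading, marks one cell of a grid. The Java below is a minimal sketch under my own assumptions (grid size, 10 cm cells, heading measured from the +x axis) rather than anything Walter actually runs.

```java
// Each ping marks the grid cell at the echo point as an obstacle.
// SIZE and CELL_CM are made-up values, not Walter's real numbers.
class SonarMap {
    static final int SIZE = 64;          // 64 x 64 cells
    static final double CELL_CM = 10.0;  // each cell covers 10 cm

    final boolean[][] occupied = new boolean[SIZE][SIZE];

    // x, y: robot position in cm (e.g. from the dock)
    // headingDeg: 0 = along +x axis, counterclockwise positive
    // rangeCm: sonar range reading for this ping
    void mark(double x, double y, double headingDeg, double rangeCm) {
        double rad = Math.toRadians(headingDeg);
        int cx = (int) Math.round((x + rangeCm * Math.cos(rad)) / CELL_CM);
        int cy = (int) Math.round((y + rangeCm * Math.sin(rad)) / CELL_CM);
        if (cx >= 0 && cx < SIZE && cy >= 0 && cy < SIZE)
            occupied[cy][cx] = true;
    }
}
```

Sweeping the sonar every few inches and calling `mark()` per ping is all it takes for the picture to fill in as the robot drives.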

In terms of webcams, I would love to start playing with edge detection and/or blob stuff. Unfortunately, I can't seem to find any libraries that will work with Processing. I have found some, but none of them seem to compile --even with a clean cut and paste. I understand that Processing is based on or very similar to Java, but I don't know if they are fully interchangeable or if any webcam libraries exist for Java either. I would love to see whatever you have; I am quite interested in learning it --Oh, voice recognition too --that would be voice recognition on the PC, not on a PIC. At any rate, gimme what you got on that webcam stuff.
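For what it's worth, basic edge detection needs no library at all, just pixel math. Below is a generic Sobel pass in plain Java over a grayscale image held as a 2D int array (0-255); in a Processing sketch the `pixels[]` array, converted to grayscale, could feed something like this. It is a sketch of the standard operator, not any particular library's API.

```java
// Sobel edge detection over a grayscale image (values 0-255).
// Border pixels are left at 0 for simplicity.
class Sobel {
    static int[][] edges(int[][] gray) {
        int h = gray.length, w = gray[0].length;
        int[][] out = new int[h][w];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                // Horizontal gradient (responds to vertical edges)
                int gx = -gray[y-1][x-1] + gray[y-1][x+1]
                       - 2*gray[y][x-1] + 2*gray[y][x+1]
                       -   gray[y+1][x-1] + gray[y+1][x+1];
                // Vertical gradient (responds to horizontal edges)
                int gy = -gray[y-1][x-1] - 2*gray[y-1][x] - gray[y-1][x+1]
                       +  gray[y+1][x-1] + 2*gray[y+1][x] + gray[y+1][x+1];
                // Cheap magnitude: |gx| + |gy|, clamped to 255
                out[y][x] = Math.min(255, Math.abs(gx) + Math.abs(gy));
            }
        }
        return out;
    }
}
```

Flat regions come out 0; a hard step between dark and light saturates to 255 along the edge.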

Wonderful progress..
You mentioned an IR array - or webcam?  Would you be trying to do more mapping with those?  I have some software you might want to try if you're interested (regarding webcam stuff)

It all sounds good - I guess I'll need to go through your other posts to see details.  I saw the processing/mapping one so far, but I suspect there is more I need to look at.

How is your computer communicating with Walter? XBee? Or some other RF or IR link?


Here's the plan, Stan:

Yes, docking is also charging. Walter can already measure his own batteries and knows when they are getting low. I have also perfected the docking connection --the actual contacts that Walter will drive into to send juice to the charging circuits. More importantly, when Walter is docked, or more to the point, when he leaves the dock, he now knows his exact position and heading --great for a starting point when mapping. From there, with a little cruising around, he has drawn a pretty good map of where he has been. It is just a tiny step from there to be able to click on the map (in Processing) and have Processing or Walter calculate the route to that point. --Really, I already have the map, as a scale picture and as data on an EEPROM; how hard could it really be to simply read this back and send it to the motors and encoders?
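The read-it-back step really is small: each pair of adjacent map points reduces to one heading and one distance for the motors and encoders. A sketch of that reduction (the class name and units are mine, not Walter's firmware):

```java
// One leg of a route: turn to headingDeg, then drive distance units.
// Coordinates are in whatever units the map uses (cm, encoder ticks, etc.).
class DriveStep {
    final double headingDeg, distance;

    DriveStep(double headingDeg, double distance) {
        this.headingDeg = headingDeg;
        this.distance = distance;
    }

    // Reduce the move from (x1,y1) to (x2,y2) to a heading + distance.
    static DriveStep between(double x1, double y1, double x2, double y2) {
        double dx = x2 - x1, dy = y2 - y1;
        return new DriveStep(Math.toDegrees(Math.atan2(dy, dx)),
                             Math.hypot(dx, dy));
    }
}
```

Walking the stored map points pairwise through `between()` yields the turn-and-drive commands for the whole route.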

Now we're talkin' a robot that can navigate the whole house, dock (and measure from that point to determine any stopping point anywhere in the room), retrieve items and return, go to a location on demand at the click of a mouse, etc. etc. etc. Not to mention, with this compass I've been fixated on lately, I have discovered I can simplify my whole IR docking system. Right now, I am using 2 beacons (with different "IDs") for Walter to triangulate on and thus center himself. From there he can drive forward to the follow line that will direct him, quite precisely, to the final docking position.

With a compass heading on hand, we can now simply use one beacon, shining 180 degrees, up against a wall. Walter can find this beacon and then simply position himself so he is A) facing the beacon and B) pointing in a predetermined compass heading. Take a quick sonar distance reading bounced off the back wall and boom, you know where you are. You know where the beacon is on the map (this is pre-programmed), we know we are driving straight into it (90 degrees to the back wall), and we know how far away it is. Guess what? We now know where we are and what room we are in, all based on one simple $5 IR beacon stuck in the corner. Not to mention, at this point we can also update our map to sorta zero-out where we are. This will take care of any slight encoder inaccuracies we have picked up during a long mapping run.
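That single-beacon fix boils down to one line of trig: the robot's position is the beacon's pre-programmed map position pushed back along the compass heading by the sonar range. A sketch under my own assumptions (cm coordinates, heading measured from the +x axis, and the sonar range treated as the distance to the beacon on the wall):

```java
// Position fix from one wall-mounted beacon with a known map position.
class BeaconFix {
    // beaconX/beaconY: pre-programmed map position of the beacon (cm)
    // headingDeg: compass heading the robot holds while facing the beacon
    // rangeCm: sonar range to the wall the beacon sits against
    // Returns {robotX, robotY}: the beacon position pushed back along
    // the heading by the measured range.
    static double[] locate(double beaconX, double beaconY,
                           double headingDeg, double rangeCm) {
        double rad = Math.toRadians(headingDeg);
        return new double[] {
            beaconX - rangeCm * Math.cos(rad),
            beaconY - rangeCm * Math.sin(rad)
        };
    }
}
```

The zero-out step is then just overwriting the dead-reckoned position with this fix whenever the beacon is acquired.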

You can quickly see how many dominos I have stacked up here --I have just been waiting to get ahold of the first domino so I can put it in front of the rest and start knocking them all down. I've got so many awesome, solid sub-systems done it's crazy. And every day it seems another peg falls in a hole and Walter can do one more amazing thing. The bottom line is, 2 years ago I was making an LED blink. Walter simply does/has an example of every single thing I have learned in those 2 years. It is finally nice to start putting all these learned pieces together into what I had in my head when I started.

Great - I get it - nice that you are into the details of movement; me too.. What's the next big thing?  I take it from your recent pimping that "Mapping" is it?  Mapping for the purpose of recharging?  Or do you have other objectives?

What I have done is pre-record various moves for different situations and yes, there is a "WTF was that" move included. Some of the head moves also sync up with phrases and other audio "spoken" by Walter. The EEPROMs simply click by, and the X, Y, Z position of the head is recorded sequentially onto them. In addition, there is another EEPROM that simply stores the address numbers that correspond to the start and stop numbers for each move. This works just like the counter on an old tape deck, i.e. move one is from address 0 to address 1527, move two is from 1528 to 3289, etc. When played back, the data is simply read from the EEPROMs and then off to the servos it goes.
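The tape-counter scheme lends itself to a tiny sketch. In the Java below, array indices stand in for EEPROM addresses and all names are mine: one index table of start/stop addresses, one long stream of recorded X, Y, Z frames, and playback just streams the slice to the servos.

```java
// Tape-counter playback: index[move] = {startAddr, stopAddr} into one
// long sequential stream of recorded head positions.
class MoveBank {
    final int[][] positions;   // sequential {x, y, z} head frames
    final int[][] index;       // per-move {startAddr, stopAddr}, inclusive

    MoveBank(int[][] positions, int[][] index) {
        this.positions = positions;
        this.index = index;
    }

    // Play back move n: hand each recorded frame to the servo sink in
    // order. Returns the number of frames played.
    int play(int n, java.util.function.Consumer<int[]> servos) {
        int frames = 0;
        for (int a = index[n][0]; a <= index[n][1]; a++) {
            servos.accept(positions[a]);   // off to the servos it goes
            frames++;
        }
        return frames;
    }
}
```

Recording is the mirror image: append each sampled frame and, when the move ends, write its start/stop pair into the index.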

I also have a nice chunk of code that will calculate what it will take to move the head from point A to point B. I use this to transition from the end of one move to the start of another, to avoid the dreaded "fast jerk" to the next starting position. In addition, I use this transition code to do simple moves like "center yourself and look up" (this is the standard "I am in menu mode" position). Both of these head-move systems work great, the difference being that the prerecorded ones have a definite "human" quality to them, in that each small twitch or micro-movement from my human hand gets recorded, whereas the "transition from A to B" method is a very calculated, "robotic"-looking move.
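One common way to take the edge off a calculated A-to-B transition is an ease-in/ease-out curve (smoothstep), which has zero velocity at both ends, so the head ramps up and coasts to a stop instead of jerking. This is my guess at a curve, not necessarily what Walter's code does:

```java
// Eased interpolation between two servo positions.
class Transition {
    // t runs 0.0 -> 1.0 over the transition; returns the servo position.
    static double at(double from, double to, double t) {
        double s = t * t * (3 - 2 * t);   // smoothstep: zero velocity at ends
        return from + (to - from) * s;
    }
}
```

Stepping `t` at a fixed tick rate and writing `at(from, to, t)` to the servo each tick gives the smooth glide; linear interpolation (just `s = t`) is what produces the robotic look.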

Looks like you've really taken to Processing..
I've been on sabbatical for a while and have not even seen your "old style" pendant which I think is very impressive too..

So forging ahead I'm guessing the point of "recording moves" would possibly be the "play move" under appropriate conditions?

Any ideas to the appropriate conditions or recorded moves?
e.g.  Hits wall --->triggers--> WTF move (slightly tilted head)

Sorry for not being up to speed... it's been a while..