Bug Toy - PID - Visual Tracking and Moon Delay
April 16, 2011
First hacked attempt at a P-control. It got a little complex being multi-threaded, with the feedback on one thread and the commands on another. The result is in the video (at the bottom of this post). I think I have to boost the kp constant a little, among other things.
Things I noticed:
- dwindling battery power affects everything - once there is more feedback on how "successful" any move was, I might have error adjust power levels too (versus length of move time, as it does now)
- there is a +200 ms floor when estimating move times, because anything under this value has practically no effect
- there are two forms of feedback: one is the angle difference "headingDelta", the other is position feedback at the expected time. Right now an error is only assessed at the end of the move. It was my plan at some point to have this thread monitor the error value associated with the command which was saved 700 ms (the lag time) ago in some array (this has not happened yet)
- there should be a way to derive lag time dynamically
- there should be a way of saving output, even in spreadsheet form, so that some real analysis can come out of it...
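That last point could be sketched with a small CSV logger. The field names (`time_ms`, `command`, `power`, `heading_delta`) are made up for illustration, not anything that exists in MRL yet:

```python
# Hypothetical sketch: append each control sample to CSV so a run can
# be analyzed in a spreadsheet later. Field names are invented.
import csv
import io

FIELDS = ["time_ms", "command", "power", "heading_delta"]

def log_samples(samples, out):
    """Write a list of sample dicts as CSV rows to a file-like object."""
    writer = csv.DictWriter(out, fieldnames=FIELDS)
    writer.writeheader()
    for s in samples:
        writer.writerow(s)

# Example: two fake samples logged to an in-memory buffer
# (a real run would write to an open file instead).
buf = io.StringIO()
log_samples([
    {"time_ms": 0,  "command": "turn_right", "power": 80, "heading_delta": 12.5},
    {"time_ms": 70, "command": "turn_right", "power": 80, "heading_delta": 9.1},
], buf)
```

Opening the resulting file in any spreadsheet program would give one row per video frame, which is enough to start plotting error over time.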
Thanks for the feedback. Since I understand things better in a visual medium, I have asked my elves to draw up the problem (and a possible solution) in graph form. They have come up with the following graph.
The issue is a 1500 ms lag in the video feedback used for position control. I was interested in the possibilities of using P, PD, or PID in order to provide accurate control for tracking or positioning Bug Toy. Tele-fox and JIP (lag/lead) suggested a possible solution would be to "save" power levels, so the laggy feedback can be thrown back into a regular PID request. Warning: pseudo-pseudo-code follows.
- save the current power level - so it can be used in an error calculation in 1500 ms
- get the current feedback and use it in PID with saved power level from 1500 ms ago - derive error
- add error and adjust current power level
With a 70 ms sampling rate there will be a set of about 22 samples saved during the 1500 ms lag. It's a queue of applied power levels: push the current power level on, pop the oldest one off to use in PID. This way the power profile and feedback follow one another, even though the feedback is 1.5 seconds out of phase.
Did the elves do OK? Now, hopefully I can get those little guys to write the actual code.
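A first sketch of what that queue might look like in code. This is hypothetical Python (not MRL code), P-only as suggested below, and the class/parameter names are invented:

```python
# Sketch of the queue idea above: power levels are saved as they are
# applied, and each new feedback sample is paired with the power level
# from ~1500 ms (about 22 samples at 70 ms) ago. Illustrative only.
from collections import deque

class DelayedPControl:
    def __init__(self, kp, lag_samples=22):
        self.kp = kp
        self.lag = lag_samples
        self.history = deque()  # applied power levels still "in flight"

    def update(self, target, feedback, current_power):
        # Push the power level we are about to apply.
        self.history.append(current_power)
        # Until the pipeline is full, the feedback predates any saved command.
        if len(self.history) < self.lag:
            return current_power
        # Pop the power level applied ~1500 ms ago: this is the command
        # the current feedback frame is actually responding to.
        delayed_power = self.history.popleft()
        error = target - feedback
        # P-only correction applied to the power that was in use back then.
        return delayed_power + self.kp * error
```

The key point is that `error` is computed against the *popped* power level, so the power profile and the feedback stay lined up even though they are 1.5 seconds apart.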
Also, I've been reading that I & D are difficult if the sampling rate is not constant. The video frames arrive around every 70 ms but can vary from 50 ms to 110 ms. How problematic will this be? I was going to tell the elves to start with a queue and just P-roportional... since they've been working so hard lately.
I've started a few tests with Bug Toy and discovered I needed to create a "Differential Drive Service" for MRL. The idea was that if a few variables were supplied regarding the footprint of any bot (not just Bug Toy), an accurate model/map could be constructed, and all the calculations and processing would be available from the service.
Additionally any sensor which provided absolute or relative positioning could be attached to the service to provide more SLAM data.
Encoders, odometers, even a wii camera on the bot could provide relative positioning. A video camera can provide absolute positioning.
I'm working on the video positioning currently. I've learned a few things - like, white electrical tape provides extremely good marker points for Lucas-Kanade optical tracking.
One of the benefits of working with video in this way is you get the whole map at once. There is no scanning from side to side. It is absolute. But here's the trade-off.
The lag between "real life" and the display can be 1.5 seconds! Although the "processing time" of each frame is only 70 ms when tracking Bug Toy, it's a stream packed full of data every 70 ms that is running 1.5 seconds behind.
1.5 seconds happens to be roughly the minimum round-trip lag for controlling something on the moon, since it's about 1.3 light-seconds away. It gave me a new appreciation for "very remote" control. It also made me think about how some moon rover software might work. Immediate obstacles would have to be dealt with by the rover itself, but finding areas of interest or the best paths to known destinations would be done remotely. The immediate obstacle avoidance would have the ability to suppress the best-path objective when needed, but when an obstacle was not present the best-path behavior would be dominant (subsumption architecture).
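That layering can be sketched as a toy subsumption arbiter: behaviors are checked in priority order, and a higher layer suppresses everything below it simply by producing an output. The behavior names are invented for illustration:

```python
# Toy subsumption sketch of the rover idea above: local obstacle
# avoidance suppresses the remote best-path behavior whenever an
# obstacle is present. Names and actions are invented.
def obstacle_avoid(state):
    if state.get("obstacle"):
        return "swerve"      # immediate, local, tactical
    return None              # no output: don't suppress lower layers

def best_path(state):
    return "head_to_goal"    # remote, strategic, laggy

BEHAVIORS = [obstacle_avoid, best_path]  # highest priority first

def arbitrate(state):
    for behavior in BEHAVIORS:
        action = behavior(state)
        if action is not None:
            return action    # this layer subsumes everything below
    return "idle"
```

The laggy remote controller only ever touches `best_path`; the rover's own fast loop owns `obstacle_avoid`, so a 1.5-second-old command can never drive it into a rock.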
Jeeze, where's the question already? It's coming...
I started looking into PID control for the Differential Drive Service. I have never implemented a PID control, but at this point I have looked at Big Face's tutorial and I am trying to digest several other examples on the internet.
The question is: would PID be applicable with such a large lag, when the lag will so grossly affect the error?
I have started a "calibration" routine which attempts to determine the exact amount of time between issuing a command (turn right) and seeing its feedback. Once this is done, the system can further calibrate by testing power levels and looking at their results in the video stream at the expected lag time. This calibration seemed like a good idea, but now I'm debating whether it could all be rolled into an appropriate PID control.
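The first half of that calibration could look something like the sketch below: timestamp the command, then watch the feedback stream for the first frame where the heading actually moves. The function and threshold here are hypothetical, and the feedback source is just an iterable of (timestamp_ms, heading) pairs standing in for the video tracker:

```python
# Hypothetical lag-calibration sketch: the lag is the time between
# issuing a command and the first feedback frame that shows movement.
def measure_lag(command_time_ms, feedback, threshold=1.0):
    """Return ms between the command and the first visible response."""
    baseline = None
    for t_ms, heading in feedback:
        if baseline is None:
            baseline = heading                # heading before the move
        elif abs(heading - baseline) > threshold:
            return t_ms - command_time_ms     # first frame that moved
    return None  # no response seen in the stream

# Simulated stream: heading starts changing ~1470 ms after the command.
frames = [(0, 90.0), (700, 90.2), (1400, 90.3), (1470, 95.0), (1540, 99.0)]
lag = measure_lag(command_time_ms=0, feedback=frames)
```

Repeating this over several runs and averaging would give the dynamic lag estimate wished for earlier, and that number could feed the queue length directly.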
Things that make you go Hmmmmm....
Oh, I just remembered... this is how I would expect the SoccerBots control system to work, where there is an immediate/localized behavior which is quick/tactical, and a remote strategic behavior with laggy commands and feedback for the players controlling the SoccerBots.