Some empirical signal processing
July 25, 2008
I wanted to do something for my balancing robot even if I don't have the parts yet, so I decided I could do some study on how to smooth the sensor signal. I read around that many people who got a balancing robot working used some kind of filter, because small noise in the sensor readings can compromise the stability of the robot. Some people used standard filters, others used hand-made custom hacks. Unfortunately I haven't studied signal processing at school, so the best I could do was read about it on Wikipedia and do some empirical tests.
To know whether this will be useful on the final robot or not, I need to wait until I have the sensor, but in the meantime I thought I could share this, maybe someone will find it useful. Experts in signal processing will instead probably find it funny :)
So to start, I used an actual real sensor to gather some data: my Sharp GP2D120 infrared distance sensor. It's quite famed for its noisy output, so I can probably consider this a worst case scenario :P But who knows!
Well, I hacked together a small program to log some data, generating a file with 800 readings. I then wrote another small program to generate a graph of the data.
As you can see, there is a first part where the sensor was unobstructed and the signal was constant. This is the noisiest part. Then the signal slowly rose and fell up to the top value of the scale (the valley in the middle is the blind area of the sensor, under 4 cm), and then a third part where the signal was changing rapidly.
Then I generated other sets of data, applying different functions to the readings and watching the results. One of the first things I did was generate a graph where each point is averaged with the previous two and the following two. Of course you cannot do this in realtime since you can't predict the future :P But it was useful as a reference since it arguably has the best output.
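In code, a centered moving average like this one could look roughly like the sketch below (just an illustration, not the exact program I used; at the edges of the data it shrinks the window instead of looking past the ends):

```python
# Centered moving average: each point is averaged with the two
# readings before it and the two after it (a window of 5).
# Offline only -- it needs future samples, so no realtime use.
def centered_average(readings, half_window=2):
    smoothed = []
    n = len(readings)
    for i in range(n):
        lo = max(0, i - half_window)
        hi = min(n, i + half_window + 1)
        window = readings[lo:hi]
        smoothed.append(sum(window) / len(window))
    return smoothed
```

A lone spike of height 10 in a flat signal gets flattened down to 2 at its original position, which is why this filter removes spikes so well.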
It almost sticks with the original one, and smooths very well where there's more noise. It does a great job removing spikes.
The second test was averaging each reading with the previous two. This can be done very easily in realtime, but the results are not really good. It does some good trimming of isolated spikes. One problem is that averaging backward means the system responds to real changes with some delay, and in fact you can see that the curves are displaced a couple of pixels to the right. I don't know how much this lag would impact an actual robot, where you sample continuously many times per second.. this is something I need to test :)
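A realtime version of this trailing average could be sketched like this (again just an illustration; the window size is a parameter, so the same code also covers the 4-previous-readings test below by passing window_size=5):

```python
from collections import deque

# Trailing moving average: average the current reading with the
# previous (window_size - 1) readings. Realtime-friendly, but it
# lags behind real changes by roughly half the window.
def make_trailing_average(window_size=3):
    history = deque(maxlen=window_size)  # old samples drop out automatically
    def filt(reading):
        history.append(reading)
        return sum(history) / len(history)
    return filt
```

Usage is one call per new sample: filt = make_trailing_average(3), then filt(reading) each time the sensor is read.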
The third test was still an average, but now with the 4 previous readings. As you can guess, this is exactly the same curve as the first test (average of 5 values), but heavily displaced. So good smoothing, but very bad delay.
Another test I did involved making a prediction of the future position based on the previous readings, and then averaging the prediction with the actual current value. So if Xp is the previous reading and Xr is the current one, then Xr-Xp is the "delta", and Xr+delta is the prediction for the next sample. When the next reading arrives, you can do (Xr+Xprediction)/2 to get the current value. It does trim the spikes in half, but it also doubles their width! This is because the prediction made right after a spike is based on the spike itself and so is very tall (in fact, about twice as tall as the spike, since the delta is the height of the spike). The actual reading returns to the "ground" value, so averaging them gives you half the spike back :) Now that I write this, I could have used the output value as Xp instead of the input (i.e., the red point instead of the blue one), which would probably have given better results, while also promoting the filter to a "recursive" one :P (Wikipedia: a recursive filter is a type of filter which re-uses one or more of its outputs as an input).
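As a sketch, the non-recursive version of this prediction filter (the one I actually tested, using raw readings as Xp) could look like this:

```python
# Linear prediction filter: predict the current sample by extending
# the trend of the two previous raw readings, then average that
# prediction with the actual current reading.
def prediction_filter(readings):
    out = list(readings[:2])  # need two past samples before predicting
    for i in range(2, len(readings)):
        delta = readings[i - 1] - readings[i - 2]
        prediction = readings[i - 1] + delta
        out.append((readings[i] + prediction) / 2)
    return out
```

On a flat signal with a lone spike of height 10, the output shows the behavior described above: the spike itself is halved to 5, but a second bump appears right after it, so the disturbance gets wider.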
Another test I did was implementing the alpha-beta filter. This one is a well-known filter and is very easy to implement, and Wikipedia says it's "still commonly used", so it must have some merit :) Indeed, it turned out to have a good output. I played around with the parameters (alpha and beta) and I think the best results are with both at 0.5.
It smooths out isolated spikes and it sticks close to the real curve, but it suffers from the same problem as the "prediction" filter. Also, it makes the curves taller (see the right end of the graph). I don't know why, but I don't think it's a problem. Removing spikes is probably much more important for stability.
You can read about Alpha Beta Filter in this good article: http://www.mstarlabs.com/control/engspeed.html
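A minimal sketch of the alpha-beta filter under my test assumptions (one time unit between samples, alpha = beta = 0.5; not necessarily the exact code I used) could look like this:

```python
# Alpha-beta filter: keep an estimated position x and velocity v,
# predict the next position, then correct both estimates by a
# fraction (alpha, beta) of the prediction error.
def make_alpha_beta(alpha=0.5, beta=0.5):
    x, v = None, 0.0
    def filt(reading):
        nonlocal x, v
        if x is None:           # initialize on the first sample
            x = reading
            return x
        x_pred = x + v          # predict where the signal should be (dt = 1)
        residual = reading - x_pred
        x = x_pred + alpha * residual
        v = v + beta * residual
        return x
    return filt
```

Like the trailing average, it's used one sample at a time: filt = make_alpha_beta(), then filt(reading) on every sensor read.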
So in the end, which is the best? I don't really know :) Some testing is in order to find out which one performs better. They're all very easy to implement.