Let's Make Robots!

Robot Localization and Map Construction Using Sonar Data


I read about the Rossum project about a decade ago, but totally forgot about it. Robots before the "1001 microcontroller board era" were so much cooler. Look at that awesome ultrasonic sensor, and instead of oddly named wireless devices like Bluetooth or XBee you got a Radiometrix BIM transceiver! I have no clue what a BIM is, but it does sound important.

It's nice to read that I am "reinventing the wheel" the right way, with lots of similarities. The biggest difference is that I have an extra infrared sensor which should (in theory) result in a more detailed map.

The parts are cheaper and more compact, but the theory is still the same.  This paper is as relevant today as it was in 2001.




Thanks for posting this article.  It seems like a lot of us are working on this issue with our bots, and this article will definitely change what I intend to implement.  I had done a bunch of research before writing any code and hadn't run across this.  Great find.

It does seem a shame we are all developing separate solutions in "silos" instead of collaborating. I am personally used to developing with a team, and the solutions two or more people come up with are always better than what is developed in a vacuum. If we had a GitHub account or some such we could easily share code, or maybe just a forum on LMR to chat about algorithms, post progress, etc.





I guess collaboration will be quite hard to achieve. We're all sitting on our own islands - or silos, as you call them - with different hardware, different programming languages and different methods of programming.

Of course, for every problem there is a solution, and in the case of information technology, data modelling was invented, as you are well aware. The problem with data modelling is that a lot of hobbyists have never worked with it, or have never even heard of it, and if they have, there are a lot of different data modelling methods, which doesn't make things much better (or even makes them worse). I, for example, had to pass an exam in NIAM a long time ago, a horribly confusing information analysis and data modelling method.

That leaves us with trying to read others' code, often written in an unfamiliar language, or pseudo code. Pseudo code would be preferable over real code, but even that can be confusing. I guess a well-written tutorial with pseudo code examples is the only solution to the multiple-hardware and/or multiple-software problem. Unfortunately, writing well-documented tutorials takes time, and if dozens of people do the same, the result will be cluttered information, which is not much better than the scattered information already found in different sources on the internet.

Maybe a hobby robotics AI Wiki could be a step towards collaborative development. The downside is of course that it lacks the discussion possibilities forums and LMR offer.

As a group, we are all working on very similar projects.  Looking through LMR, I can find probably about a dozen classes of "robot" with individual variations on each.

My personal objective is to build an Autonomous Rover that can use simple / inexpensive Sonar or InfraRed ranging devices to achieve environment mapping, and eventually Localization.  I *know* that many of us here have similar desires, but as you've pointed out, a lack of concentrated collaboration has impeded this goal.

I would be more than happy to sponsor a working group of sorts on the subject. 

Come up with an abstract model that defines ranging and odometry sensors, and methods to put them to use.

  1. We would be able to provide pseudo-code for basic mapping of rooms/obstacles with these limited sensors. We could also provide specific examples of this pseudo-code in a language of our preference (e.g. Python or Arduino).
  2. Once we have established how to map an environment, we could move to the next level of using that data, along with dead reckoning, to get from point A to point B along the shortest known path. Also pseudo-code and an example.
  3. Ultimately, provide pseudo-code for a low-end particle filter for localizing.
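To give an idea of what such an abstract model might look like, here is a minimal Python sketch. All the class and parameter names are my own invention for illustration, not an agreed interface:

```python
from abc import ABC, abstractmethod

class RangeSensor(ABC):
    """Abstract ranging device: sonar, IR, anything that returns a distance."""
    def __init__(self, max_range_m, cone_deg):
        self.max_range_m = max_range_m   # beyond this, a reading means "no echo"
        self.cone_deg = cone_deg         # beam width: sonar is wide, IR narrow

    @abstractmethod
    def read(self):
        """Return distance in metres, or None when nothing is in range."""

class Odometer(ABC):
    """Abstract odometry source (e.g. wheel encoders) for dead reckoning."""
    @abstractmethod
    def pose(self):
        """Return (x, y, heading_rad) relative to the start position."""

# A fake sensor lets mapping pseudo-code run without any hardware attached.
class FakeSonar(RangeSensor):
    def __init__(self):
        super().__init__(max_range_m=3.0, cone_deg=15)
    def read(self):
        return 1.2  # pretend there is a wall 1.2 m away
```

Any robot-specific driver would subclass these, and the mapping pseudo-code would only ever talk to the abstract interface.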

Most current SLAM implementations rely on high-priced laser scanners for the mapping. These provide three very important advantages.

  • First, accuracy. Sonar (ultrasound) has a wide cone and realistically cannot resolve anything smaller than a 5-10 degree angle. Infrared is prone to light pollution. Both are more prone to reflections than laser.
  • Second, range. Lasers work well at 8-10 meters or more, whereas IR and ultrasound typically only work up to about 1.5 to maybe 3 meters.
  • Third, speed of acquisition. A laser scanner can range a full 360 degrees (more typically -170 to +170 from dead center) several times a second. IR and ultrasound at best take 5-10 seconds for a full sweep. This can be remedied by increasing the number of sensors used (e.g. four sensors, each spun through a 90-degree quadrant).
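The multi-sensor remedy in the last point is easy to quantify. A back-of-envelope sketch in Python, with assumed (not measured) step counts and ping times:

```python
def sweep_time_s(steps, ping_s, sensors=1):
    """Time for a full sweep when `sensors` heads share the arc equally."""
    return steps * ping_s / sensors

# One sonar stepping through 72 positions (5 degrees each) at ~70 ms per ping:
single = sweep_time_s(steps=72, ping_s=0.07)           # about 5 s
# Four sensors, each covering its own 90-degree quadrant:
quad = sweep_time_s(steps=72, ping_s=0.07, sensors=4)  # about 1.3 s
```

The sweep time divides linearly by the sensor count, at the cost of mounting and multiplexing more hardware.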

What this all boils down to is that to use low end Ranging devices, you have to accept that your Autonomous Rover is going to have to do a lot more moving and a lot more filtering/accepting of data. It's the price we pay for not wanting to pay a higher price.

So?  How do we set up this collaboration within the confines of LMR?  I suspect that we could simply start a new forum for this, attach our documents, and moderate it appropriately?

Thoughts anyone?






In my opinion a section of the forum dedicated to AI would give us a platform to discuss and eventually develop various ideas. For me, 2D mapping isn't a goal in itself. Having a robot whose only purpose is to be aware of where it is seems quite silly.

I, for example, find human interaction and seemingly intelligent behaviour very interesting. At the beginning of this century I bought a Super Poo-Chi dog from the bargain bin on the way to my parents' house. I was just curious about the electronics inside, and it was cheap enough to justify dissecting it. Anyway, at my parents' house there were a few adults and children, so I put some batteries in it and let it do its robot dog thing. The interesting thing I noticed was that not only did the children react as if the toy dog were capable of much more, but even the adults were under the impression that the pre-programmed, seemingly random behaviour was responsive behaviour.

I know that this has more to do with psychology than AI, but truly intelligent artificial behaviour is still science fiction, and the closest thing we can achieve as hobbyists is seemingly intelligent behaviour mixed with low-level AI. Of course it helps a lot (it may even be mandatory) when a robot looks like something humans can relate to, while avoiding the uncanny valley.

LOL, you are right of course. It is silly to have a robot that only knows where it is. It is one of the hard problems in robotics, though, that hasn't really been solved in a generic enough way that someone can easily reproduce it.

I don't know if you saw this post I did a few months ago:


I took some code that MarcusB had written and encapsulated it into some C++ classes.  I had a robot bouncing around my kitchen like a ping pong ball and after about 10 minutes could actually see it start to only do appropriate behaviors as it learned what wasn't appropriate.  It was very cool!  MarcusB did the hard part; all the math to drive this.
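MarcusB's actual math isn't reproduced here, but the general idea described above - picking behaviors by weight and decaying the weight of any behavior that ends badly, so inappropriate choices fade out - can be sketched in a toy form (a stand-in of my own, not the real code):

```python
import random

class BehaviorLearner:
    """Toy stand-in: behaviors are chosen by weight, and any behavior
    that ends in a bump has its weight decayed, so it is picked less
    and less often over time."""
    def __init__(self, behaviors, decay=0.5):
        self.weights = {b: 1.0 for b in behaviors}
        self.decay = decay

    def choose(self):
        names = list(self.weights)
        return random.choices(names, weights=[self.weights[n] for n in names])[0]

    def punish(self, behavior):
        self.weights[behavior] *= self.decay

learner = BehaviorLearner(["forward", "spin_left", "reverse"])
for _ in range(5):
    learner.punish("reverse")  # reversing keeps causing bumps
# "reverse" is now roughly 32x less likely to be chosen than the others
```

Over a few minutes of bouncing around a kitchen, a scheme like this converges on "appropriate" behaviors without any explicit map.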

With this technique, you could actually have a bot teach itself how to modify its behavior to make itself more liked by children or other people. Maybe an AI forum would be good as well. Otherwise, posts like this seem to get lost and people don't see them. I thought people were going to love this when they saw it, but only a couple of people responded to the post, as I remember.



Of course MarcusB's experiment was simple, so the unpredictable yet valid result is easily explained, but it's AI at its core! Sure, lots of people will argue that this is pre-programmed and therefore not intelligent behaviour, and they are right. But the question then is: what is intelligence? We don't know! We can recognize intelligent behaviour, but we have no clue how it works. I'm quite confident that robots will eventually become such complex machines that it will be near impossible to say whether unpredictable behaviour is true AI, a pre-programmed action, or just a software or hardware glitch.

As for the seeming lack of interest: AI is difficult, it's not something you can program in a few spare hours, and the reward for the effort (and money) you put into it is unclear, if there is any at all! I think that scares off quite a few people. And this forum is quite confusing; I'm a member of a few other (non-robotics) forums that have the standard "top down" setup in which you quote people if you want to comment on them. That makes readability much easier in my opinion, especially when a topic has hundreds or more comments. But maybe I just have to get used to this forum.




After I wrote my above post, I did more research and it appears that Robot Operating System (www.ros.org) has a Navigation module which from what I can tell does a fair amount of what we are trying to do here. ROS can be installed on a RasPi.  I am sure there will be some nuts and bolts kinds of code to write to hook into their framework, a fair amount of learning curve etc, but the framework for what we want is there from what I have gathered so far.  I have been going through tutorials and bought a book.  So, more to come over the next few weeks.  I will share what I figure out.

I am really impressed with the work you have done so far, but I think the way you are going is a deep rabbit hole. It isn't impossible, but it will be hard. I think dead reckoning coupled with machine vision for mapping and localization, with ultrasonic (15 cm or less) and IR sensors for close in (the last few cm), may be a better approach. I am using a $5 USB webcam for the machine vision testing that I am doing. I have yet to move it to the RasPi, but I assume performance will be fairly slow (3-5 frames/sec), though probably good enough, so I don't think staying away from vision for cost reasons is the best way forward. As you stated, all of these sensors have issues and problems, including vision. I am open to having my mind changed on the best way forward, but I am sharing what I think and what my experience has been so far with robotics.

But it will be really hard to collaborate if we can't agree on our approach or the goal of what we are trying to accomplish. I think of collaborating as: we have a GitHub account and we work from the same code for both of our solutions. That said, just having all of the resources in one spot on LMR would vastly help me and others moving forward. Your posting the link to this article is the kind of resource that really can jumpstart someone who is trying to solve this problem.

For instance, I have a C++ class where one enters the wheel diameter, pulses per revolution, axle length, pointers to the left and right encoders, and pointers to direction variables, and it gives position on a cartesian x-y grid. That will be helpful to anyone no matter what hardware or language they use. I am in the process of building a robot with my Pololu encoders so I can test its accuracy. When I am satisfied, I will post the code.

This code will work with an Arduino Uno but doesn't do any mapping. I have some ideas to reduce the memory footprint, but with only a few hundred bytes of RAM available and 1k of EEPROM to write to, one really needs to move to a "big boy" processor such as a RasPi or an Arduino Mega for the mapping. For your approach, you are really looking at probably an Arduino Mega, maybe a RasPi.
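The math inside a class like that (wheel diameter, pulses per revolution, and axle length in; x, y position out) is the standard differential-drive dead-reckoning update. Here it is sketched in Python rather than C++, with illustrative names - this is the textbook update, not the actual class described above:

```python
import math

class DeadReckoner:
    """Differential-drive odometry: encoder pulses in, (x, y, heading) out."""
    def __init__(self, wheel_diameter, pulses_per_rev, axle_length):
        # Distance travelled per encoder pulse (same length unit throughout).
        self.m_per_pulse = math.pi * wheel_diameter / pulses_per_rev
        self.axle = axle_length
        self.x = self.y = self.heading = 0.0

    def update(self, left_pulses, right_pulses):
        """Call once per sampling interval with pulse counts since last call."""
        dl = left_pulses * self.m_per_pulse
        dr = right_pulses * self.m_per_pulse
        dist = (dl + dr) / 2.0                 # distance of the robot's center
        self.heading += (dr - dl) / self.axle  # wheel difference turns the robot
        self.x += dist * math.cos(self.heading)
        self.y += dist * math.sin(self.heading)
        return self.x, self.y, self.heading

# 64 mm wheels, 20 pulses/rev, 150 mm axle; 10 pulses on each wheel
# means driving straight ahead about 0.10 m.
dr = DeadReckoner(wheel_diameter=0.064, pulses_per_rev=20, axle_length=0.15)
x, y, heading = dr.update(10, 10)
```

This form works with any encoder hardware; only the three constructor constants change per robot, which is exactly why it is useful regardless of platform.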

To reduce memory, my thought was to have a RowStorage, which would basically be an array of ints.

0 0 20 25 30 would be row 0, columns 0 to 20 and 25 to 30. If 24 is added, it would become

0 0 20 24 30, and so on. It would then test to see if any "loners" could be added to existing storage, etc. It would reduce the footprint, but the storage on an Uno is so limited. If you were mapping a really large area, it would not do the trick. It would need a circular queue or some way to allow overwrites to be truly robust. If there is a wall parallel to you that is 100 units long, it would have to make 100 RowStorage objects, which is pretty much all the RAM on an Uno. I could have column and row storage objects using the same algorithm, but that means more processing and harder-to-manage code, and it still might not robustly solve the problem.
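The merging rule in that example can be sketched in Python. For readability this uses (start, end) tuples rather than the flat int array an Uno would want, but the logic is the same:

```python
class RowStorage:
    """Occupied cells in one map row stored as merged column intervals,
    e.g. [(0, 20), (25, 30)] instead of 27 individual cells."""
    def __init__(self, row, intervals=None):
        self.row = row
        self.intervals = list(intervals or [])

    def add(self, col):
        """Insert one occupied column, merging touching/overlapping intervals."""
        merged = self.intervals + [(col, col)]
        merged.sort()
        out = [merged[0]]
        for start, end in merged[1:]:
            last_start, last_end = out[-1]
            if start <= last_end + 1:  # touching or overlapping: extend
                out[-1] = (last_start, max(last_end, end))
            else:                      # a gap remains: keep as a new interval
                out.append((start, end))
        self.intervals = out

r = RowStorage(0, [(0, 20), (25, 30)])
r.add(24)  # 24 touches 25, so the second interval grows to (24, 30)
```

A "loner" column simply sits as its own one-cell interval until a neighbour arrives and the merge step absorbs it.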

I am not sure what we solved here. If we talk to Fritz about a special forum for navigation and localization, that might give us an organized place to post info, algorithms, code, etc. But by the same token, we can just put tags on the pages and they will come up for anyone who does a search. There are at least 5 or 6 members who are actively working on this, so it would serve a purpose for those individuals working on this pretty difficult issue. That said, we may decide in a few weeks that ROS on the Pi is the best thing since sliced bread, and then it just becomes a matter of helping people install the correct programs on the Pi.



You might also want to peruse  http://en.wikipedia.org/wiki/Open-source_robotics


As my understanding of the principles of robotics, motion, mapping, and localization grows, I might opt for a framework like ROS, but for now I am developing for the purpose of actually learning. In my case, part of my learning process is the ability to articulate and show others why and how something works.

There may be portions of ROS that would make immediate sense to use, and there are DEFINITELY concepts and methodologies that we could/should adopt.

I have no problem provisioning a git repository for the purpose of sharing code for LMR.

Bill, even if you go down the road of ROS, I would love to have your input and commentary in this exercise.