Let's Make Robots!

Machine Vision Obstacle Avoidance

Uses a webcam and robot vision to navigate around obstacles

Obstacle avoidance is one of the most important aspects of mobile robotics. Without it, robot movement would be very restricted and fragile. This tutorial explains several ways to accomplish obstacle avoidance within the home environment. With your own robot you can experiment with the techniques provided to see which one works best.

There are many techniques that can be used for obstacle avoidance. The best one for you will depend on your specific environment and the equipment you have available. We will start with simpler techniques that are easy to get running and can then be tuned to your environment to improve their quality.

Let's get started by first looking at an indoor scene that a mobile robot may encounter.

Robot View


Here the robot is placed on the carpet and faced with a couple of obstacles. The following algorithms will refer to aspects of this image and exploit attributes that are common in obstacle avoidance scenarios. For example, the ground plane assumption states that the robot is placed on relatively flat ground (i.e. no offroading for these robots!) and that the camera looks relatively straight ahead or slightly down (but not up towards the ceiling).

By looking at this image we can see that the carpet is more or less a single color, with the obstacles differing in many ways from the ground plane (or carpet).


Edge Based Technique

The first technique that exploits these differences uses an edge detector like Canny to produce an edge only version of the previous image. Using this module we get an image that looks like:
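The edge detection step can be illustrated with a minimal stand-in: mark a pixel as an edge wherever the horizontal or vertical intensity gradient exceeds a threshold. (Canny proper adds Gaussian smoothing, non-maximum suppression, and hysteresis; this sketch only conveys the basic idea and is not the module's actual implementation.)

```python
def edge_detect(image, threshold=50):
    """image: 2D list of grayscale values; returns a 2D list (0 or 255) edge map."""
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Simple forward-difference gradients in x and y.
            gx = image[y][x + 1] - image[y][x] if x + 1 < w else 0
            gy = image[y + 1][x] - image[y][x] if y + 1 < h else 0
            if abs(gx) > threshold or abs(gy) > threshold:
                edges[y][x] = 255
    return edges

# A toy "floor with one obstacle": dark carpet (20) with a bright box (200).
scene = [
    [20, 20, 200, 200, 20],
    [20, 20, 200, 200, 20],
    [20, 20,  20,  20, 20],
]
edges = edge_detect(scene)
```

Only the boundary pixels of the box light up in the edge map, while the uniform carpet stays black, which is exactly the property the following steps rely on.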

Edges Detected

You can see that the obstacles are somewhat outlined by the edge detection routine. This helps to identify the objects but still does not give us a correct bearing on what direction to go in order to avoid the obstacles.

The next step is to understand which obstacles would be hit first if the robot moved forward. To start this process we use the Side_Fill module to fill in the empty space at the bottom of the image as long as an edge is not encountered. This works by starting at the bottom of the image and proceeding vertically, pixel by pixel, filling each empty black pixel until a non-black pixel is seen. The filling then stops for that vertical column and proceeds to the next.
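The fill-from-below step described above can be sketched as follows. The code here is an independent illustration of the idea, not RoboRealm's Side_Fill module itself: for each column, walk up from the bottom of the edge image and fill black pixels white until the first edge pixel is met.

```python
def side_fill(edges):
    """edges: 2D list (0 = empty, 255 = edge). Returns a filled copy."""
    h, w = len(edges), len(edges[0])
    filled = [row[:] for row in edges]
    for x in range(w):
        y = h - 1
        while y >= 0 and edges[y][x] == 0:
            filled[y][x] = 255   # mark as traversable floor
            y -= 1
        # loop exits at the first edge pixel: this column stops here
    return filled

edge_map = [
    [0, 255, 0, 0],
    [0,   0, 0, 0],
    [0,   0, 0, 255],
]
filled = side_fill(edge_map)
```

Under the ground-plane assumption, the filled height of each column is a rough proxy for how much free floor lies ahead in that direction.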

Filled From Below

You will quickly notice the single-width vertical lines that appear in the image. These are caused by holes where the edge detection routine failed. As they represent potential paths too thin for almost any robot, we want to remove them as candidates for available robot paths. We do this with the Erode module, shrinking the current image horizontally by an amount large enough that the resulting white areas are wide enough for the robot to pass through without hitting any obstacle. We chose a horizontal value of 20.
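Horizontal erosion can be sketched like this: a white pixel survives only if every pixel within `radius` columns on each side is also white, so slivers narrower than the robot disappear. (How the value 20 maps onto a radius here is an assumption; this is not RoboRealm's Erode module, just the standard morphological operation it performs.)

```python
def erode_horizontal(image, radius):
    """Binary horizontal erosion: keep a white pixel only if its whole
    horizontal neighbourhood of the given radius is white."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            lo, hi = max(0, x - radius), min(w, x + radius + 1)
            if all(image[y][x2] == 255 for x2 in range(lo, hi)):
                out[y][x] = 255
    return out

# One row with a wide white region and a 1-pixel-wide sliver:
row = [[255, 255, 255, 255, 0, 255, 0]]
eroded = erode_horizontal(row, 1)
```

After erosion the wide region survives (slightly narrowed) while the single-pixel sliver is removed entirely, which is exactly the filtering described above.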

Horizontal Eroded

Now that we have all potential paths, we smooth the entire structure to ensure that any point picked as the goal direction lies in the middle of a potential path. The idea is that it is easier to identify the peak of a mountain than to pick a single point on a flat plateau. Using the Smooth Hull module we can round out flat plateaus to give us better-defined peaks.
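RoboRealm's Smooth Hull module is not reproduced here, but a simple moving average over the per-column free-space heights has the same qualitative effect: a flat plateau becomes a rounded bump whose single highest point sits at the plateau's center.

```python
def smooth(heights, window=1):
    """Average each value with `window` neighbours on each side."""
    n = len(heights)
    out = []
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        out.append(sum(heights[lo:hi]) / (hi - lo))
    return out

# A flat-topped plateau of free space across five columns:
plateau = [0, 5, 5, 5, 0]
smoothed = smooth(plateau)
# After smoothing, the middle column is the unique maximum,
# so "pick the highest point" now lands in the middle of the path.
```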

Smoothed Outline

Once this is done we need to identify the highest point in this structure, which represents the most distant goal the robot could head towards without hitting an obstacle. Based on the X location of this point with respect to the center of the screen you would then decide whether your robot should move left, straight, or right to reach that goal point. To identify that location we use the Point Location module and request the Highest point, which is marked by a red square.

Final Goal Point

Finally, just for viewing purposes, we merge the goal point back into the original image to help us gauge whether that location appears to be a reasonable result.


Given this point's X location at 193 and the middle of the image at 160 (the camera is set to 320x240), we would probably move the robot straight. If the X value were > 220 or < 100 we would instead steer the robot to the right or left.
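The steering decision above reduces to a couple of comparisons. The thresholds of 100 and 220 are the example values from the text (with a 320-pixel-wide image centred at x = 160); the goal X itself would come from the highest point found in the previous step.

```python
LEFT_LIMIT, RIGHT_LIMIT = 100, 220   # example thresholds from the text

def steer(goal_x):
    """Map the goal point's X coordinate to a drive command."""
    if goal_x < LEFT_LIMIT:
        return "left"
    if goal_x > RIGHT_LIMIT:
        return "right"
    return "straight"

command = steer(193)   # the tutorial's example goal point
```

In a real robot loop you would call this once per frame and feed the result to the motor controller, possibly with some hysteresis so the robot does not oscillate between commands near a threshold.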

Some other results using this technique.







This works reasonably well as long as the floor is a single color. But this is not the only way to recognize the floor plane ...

 See more at http://www.roborealm.com/tutorial/Obstacle_Avoidance/slide010.php




I am starting to look very hungry when I see small laptops!

I gotta get into this!

It just reminds me of something I read yesterday about the Boe-Bot,

the fill, erode, smooth procedure,

and oh surprise... it came from you!




By the way, welcome to the site, Steven, I'm sure it will be really helpful to have access to your tutorials and your knowledge.


This is great.  Thanks a lot for posting it.  I could see someone on this site putting together a big project that uses an old laptop with this software instead of a microcontroller.


Is it Open Source ?

What operating systems does it run on?


This is actually one of the software packages I'd looked at for my arm. If I remember correctly, it's free but not open source, right? And I think Windows-only. But it looks pretty powerful and flexible. I really like how you put together your own algorithm by combining simple modules, rather than just being a pre-packaged solution that tries to do it all for you (and inevitably never does exactly what you want).


Dan is correct. RoboRealm is free but not open source (but it does have an extensive API) and it only runs on Windows (but you can communicate to it from any OS).

If you are looking for open source packages, we maintain a list of free and open source image processing packages at





Thanks STeven for the links. 

If you would not mind, I was wondering why those choices? Why ask for Donations? Why not open source? Why windows only?

Again, appreciate your tutorial and info.