
I have got the sonar sensor working and scanning, and added its readings to the data sent to the PC. I have also written a very basic obstacle avoidance program, which simply makes the robot turn for a set time whenever an obstacle is sensed. I am going to develop this program so that the sonar data is also used to decide which way to turn. I would also like to send data from the PC to tell the robot which way to turn, so that unexplored areas of the map can be visited. For now I have added a video of Crum producing a map utilising the sonar data as well as the IR sensor data. I have also altered the mapping software slightly so that the shade of grey displayed on the map reflects the probability of an obstacle in that cell (not very clear in the video), and added functionality to change the scale of the map and the resolution of the occupancy grid.
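For illustration, the grey shading works something like this rough C++ sketch, assuming integer cell values from 0 (certainly free) to 10 (certainly occupied) with 5 as unknown, as described in the comments below; the names and value range here are my own illustration, not the actual code:

```cpp
#include <algorithm>

struct Rgb { unsigned char r, g, b; };

// Map an occupancy value to a grey shade: 0 -> white (free),
// 10 -> black (occupied), 5 -> mid-grey (unknown).
Rgb cellToGrey(int cellValue)
{
    int v = std::clamp(cellValue, 0, 10);
    unsigned char shade = static_cast<unsigned char>(255 - v * 255 / 10);
    return Rgb{ shade, shade, shade };
}
```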

So my mapping robot is dead, robbed for parts! It could only move on flat, smooth surfaces and I wanted something that could drive around the house. Meet Crum. I've built Crum to continue my endeavour to create a robot that can draw a map. The body is made of expanded PVC (an idea taken from several robots posted recently, thanks guys!). It's armed with three IR sensors and a sonar sensor. The old mapping robot used just IR sensors, and I think the addition of the sonar will help with the mapping and with avoiding chair legs!! I've also included an I2C LCD screen, which looks cool and is very useful when debugging. Crum has the compass module and the wireless link, same as my old bot. I've been working on software for the PC end where data from the robot is received and displayed. I've decided to try an occupancy grid approach to mapping. Data from the robot can be used to draw a map, but that's as far as it's got at the minute.

 

[Image: LCD display]

Above is a pic of the LCD screen in all its glory. I've also got the sonar working this evening, which is nice.


I've played around with the code and managed to get the robot to plot an occupancy grid map, using just the IR sensors at the moment. The screenshot below shows the grid being filled with white cells to represent unoccupied space and black cells to show where objects are located. I'm getting a lot of false or inaccurate readings from the IR sensors at the moment. I don't think they are very accurate at longer distances, hence the addition of a sonar sensor.

[Image: Screenshotdata.jpg]
Update: I've just added a video of the robot drawing a map. It is currently only using the IR sensors to measure distances to objects. Each cell in the grid has a value associated with it that is altered whenever the robot sees either an object or free space in that cell. When a threshold is reached the program colours the cell white or black. I programmed the robot to travel slowly in a square path; there is no obstacle avoidance or wall following yet. It is clear from the video that even over a short distance, odometry errors build up, leading the program to think the robot is in a different place than it actually is.
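Roughly, the colouring decision looks something like this C++ sketch; the threshold values here are assumptions for illustration, not the actual code:

```cpp
enum class CellColour { Grey, White, Black };

// Decide a cell's colour from its accumulated value. Each sighting of
// free space or an obstacle nudges the value; only past a threshold
// is the cell committed to white or black.
CellColour colourForCell(int cellValue)
{
    const int freeThreshold     = 2;  // assumed: low value = often seen free
    const int occupiedThreshold = 8;  // assumed: high value = often seen occupied
    if (cellValue <= freeThreshold)     return CellColour::White;
    if (cellValue >= occupiedThreshold) return CellColour::Black;
    return CellColour::Grey;  // undecided: not enough evidence yet
}
```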


Comments
Aniss1001:

5 stars from here :) I'm currently planning something similar, as you saw here. So I may have some questions from time to time....

GroG:

Great bot, Big Face,

It's nice to see progress with mapping. I'm interested in mapping too, although I'm trying to get ranging data from a webcam by parallaxing edges in different frames. I was amazed at how noisy image data is, but I'm slogging through it. It seems you have been successful with the active IR sensors.

Now a bunch of questions:
1. Is the sonar currently being used? Is it being interpolated with the rest of your sensor data? If so, in what way?
2. What's the max range on the IR units? What is the max range of your sonar?
3. I noticed you're on Windows. Are you using Microsoft Robotics Studio? If so, which parts? If not, what dev tools do you use?

Thanks for sharing your mapper bot!

Regards,

GroG

 

Big Face:

Thanks for the comments. Now for a bunch of answers!! :)

1. In the second video the sonar data is being used; although the video of the computer screen is a bit dodgy, you can just about see the sonar scans being plotted on the map. Occupancy grids work with probabilities, although I'm not really using that properly at the moment. I initialise the map so that each cell value is 5. In probabilities this would be 0.5, i.e. a 50/50 chance of object or free space, but I wanted to work with integer values. For the IR sensors, when a measurement is taken the PC software uses the measurement along with the robot direction to work out which cells are free space and which cell has an object in it. For a free space cell 1 is subtracted from the value in that cell, and for an obstacle 1 is added. For each measurement the value in each cell is only changed once. This way several measurements of each cell must be made before it can be decided that it is an obstacle or free space. The sonar data is handled a bit differently, as I allow for the wider beam angle of the sensor, but the cell values are changed in the same way. Occupancy grids are great for combining different types of sensor.
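In rough C++, the update for a single IR reading might look like this; the grid layout, cell size, and function shape are illustrative assumptions, not the actual PC software:

```cpp
#include <cmath>
#include <set>
#include <utility>
#include <vector>

const int   GRID_SIZE = 200;
const float CELL_SIZE = 0.05f;  // metres per cell (assumed)

// Every cell starts at 5 (unknown).
std::vector<std::vector<int>> grid(GRID_SIZE, std::vector<int>(GRID_SIZE, 5));

bool inGrid(int cx, int cy)
{
    return cx >= 0 && cx < GRID_SIZE && cy >= 0 && cy < GRID_SIZE;
}

// robotX/robotY in metres, heading in radians, range in metres.
void applyIrMeasurement(float robotX, float robotY, float heading, float range)
{
    std::set<std::pair<int, int>> touched;  // change each cell only once per measurement

    // Step along the ray in half-cell increments, marking free space.
    for (float d = 0.0f; d < range; d += CELL_SIZE / 2.0f) {
        int cx = static_cast<int>((robotX + d * std::cos(heading)) / CELL_SIZE);
        int cy = static_cast<int>((robotY + d * std::sin(heading)) / CELL_SIZE);
        if (inGrid(cx, cy) && touched.insert({cx, cy}).second && grid[cy][cx] > 0)
            --grid[cy][cx];  // evidence of free space: subtract 1
    }

    // The cell at the measured range contains the obstacle.
    int ox = static_cast<int>((robotX + range * std::cos(heading)) / CELL_SIZE);
    int oy = static_cast<int>((robotY + range * std::sin(heading)) / CELL_SIZE);
    if (inGrid(ox, oy) && touched.insert({ox, oy}).second && grid[oy][ox] < 10)
        ++grid[oy][ox];  // evidence of an obstacle: add 1
}
```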

2. I am using 60cm as the max range for the IR sensors, as I found anything over that was unreliable. I have also limited the range of the sonar sensor to 1.2m, because the wide beam was giving me inaccuracies on the map; it should measure accurately up to 4m though. The data from both sensors is used to plot free space up to their max limits, as this is a reliable reading, but obstacles are not plotted beyond the limits I've set.
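Something like this sketch captures that rule; the constants are the ones above, but the function itself is just an illustration:

```cpp
const float IR_MAX_RANGE    = 0.60f;  // metres
const float SONAR_MAX_RANGE = 1.20f;  // metres (the sensor itself reads to ~4m)

// Clamp the free-space ray to the sensor's trusted range. Returns true
// if the reading should also mark an obstacle cell; beyond the limit,
// free space is still plotted but the obstacle is discarded.
bool clampReading(float& range, float maxRange)
{
    if (range > maxRange) {
        range = maxRange;
        return false;
    }
    return true;
}
```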

3. The PC software is written in Visual C++. All my own creation!!

 

Using a camera for navigation and mapping sounds like a challenge, and one I was thinking about myself. What setup are you using? Is it a wireless webcam?

GroG:

Thanks for the answers Big Face.

 

Nope, using cheapo webcams I got on eBay for $20.00. My bot has a computer on board; I remote into it (using NXMachine) or use Samba and map a drive to it. My code is Java with a few simple JNI shared objects, the idea being that it can run on any OS. Here is a link to a haphazard rambling log.

 

The video has been a bit of a challenge. I have gone through a couple of iterations so far. I started with my bot aiming a laser pointer, mounted on a stepper motor, into the video frame. This worked pretty well when it was dark: I could find the red dot in the frame at night, but daylight, the outdoors, or strong fluorescents would completely drown out the laser. After the laser I tried two cameras and stereo vision, but was having many issues with the correspondence problem. Recently I have switched to a single camera that moves in a horizontal line. I think this might reduce some of the complexity of matching objects in two different views, by tracking them from one view to the next. The Cylons had it right (or we will see)!
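The geometry behind it is plain triangulation: an edge that shifts d pixels between two frames taken B metres apart sits at depth Z = f*B/d, where f is the focal length in pixels. A rough C++ sketch, where the baseline and focal length values are my own assumptions:

```cpp
// Depth from horizontal camera motion (motion parallax), assuming a
// pinhole camera model; closer objects shift more between frames.
float depthFromParallax(float featureX1, float featureX2)
{
    const float baselineM     = 0.038f;  // camera shift in metres (~1.5 in, assumed)
    const float focalLengthPx = 700.0f;  // assumed focal length in pixels

    float disparity = featureX1 - featureX2;  // pixel shift of the same edge
    if (disparity <= 0.0f)
        return -1.0f;  // no usable shift: feature too far away or mismatched

    return focalLengthPx * baselineM / disparity;  // Z = f * B / d
}
```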

 

I'll use the laser triangulation again, but I've realized that passive ranging, while more challenging, might offer a bigger payoff. Also, it seems (as you have proven) that multiple input types might be better than one.

 

Passive video ranging

Pros:
1. Not susceptible to being drowned out by ambient light
2. Possibly longer range than active, although it's unknown how much this will help the bot
3. Fewer parts (no laser)
4. Capable of working outside
5. Can process a lot more data in a frame: all of the data is usable, not just a single spot

Cons:
1. Correspondence problem
2. More intensive algorithms

Active video ranging

Pros:
1. Works in no or low light conditions
2. No correspondence problem

Cons:
1. Does not work in high ambient light conditions
2. Won't work outside during daylight very well

 

Below is the RGB histogram of the scan line at the bottom of the image:

[Image: histogram2.jpg]

Now I am at a point where I have applied several software filters and have (kind of) found edges intersecting a scan line. I still need to track/correlate them in order to calculate distance through parallaxing. Below are two samples, with the camera moved horizontally about 1.5 inches. They have roughly the same signature, although you can see some anomalies.

[Image: 160frames.jpg]

[Image: 157frames.jpg]
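For the track/correlate step, a first pass could be as simple as nearest-neighbour matching of edge positions between the two frames. A rough C++ sketch, illustrative only:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// For each edge x-position found in the first frame, pick the closest
// edge in the second frame, rejecting matches that moved implausibly far.
std::vector<std::pair<float, float>> matchEdges(
    const std::vector<float>& edges1,
    const std::vector<float>& edges2,
    float maxShiftPx = 40.0f)  // assumed limit on plausible pixel shift
{
    std::vector<std::pair<float, float>> matches;
    for (float x1 : edges1) {
        float bestShift = 1e9f;
        float bestX2 = 0.0f;
        for (float x2 : edges2) {
            float shift = std::fabs(x1 - x2);
            if (shift < bestShift) { bestShift = shift; bestX2 = x2; }
        }
        if (bestShift <= maxShiftPx)
            matches.push_back({x1, bestX2});  // plausible correspondence
    }
    return matches;
}
```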

Sorry for the lengthy post, just happened to have some data handy. :P

GroG

 


Big Face:
That looks cool. I want one! :) I was thinking about getting a wireless webcam and mounting it on my robot. I've not looked into it much, but I assume it would be possible to receive the images via my wireless router and write software to analyse them. I've used MATLAB at uni to program edge detection, but I'm sure I could write my own program to do it. A project for the future, I think!!
Zanthess:

Wow, that is really awesome!

My robot was giving me false readings too, so what I did was make it pause 1/10th of a second and take a second reading if the first came up positive. Seemed to fix the issue for me.
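In code it's just something like this; the sensor read and delay functions are hypothetical placeholders:

```cpp
bool readObstacleSensor();      // placeholder: true when an obstacle is seen
void delayMilliseconds(int ms); // placeholder: timer or busy-wait delay

// Only trust an obstacle reading if it shows up twice, 1/10th of a
// second apart, filtering out one-off false positives.
bool obstacleConfirmed()
{
    if (!readObstacleSensor())
        return false;            // first reading clear: no obstacle
    delayMilliseconds(100);
    return readObstacleSensor(); // confirmed only if it's still there
}
```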

Mr Clean:

When do we get to see a video now that you have the compass calibrated?!

Also, where did you get the components for Crum's SketchUp? Like the ultrasonic?

Big Face:
New video added now that the compass has been calibrated. Much better!!
Mr Clean:
Nice!!  Seems like it's working like a charm.  :D