# Occupancy Grid Programming Help

With much help and inspiration from Gareth and Grog (and others), I have a working lidar (wiicamera and laser) system. Currently, I am using a very low-power laser (while checking the mail every day for the good laser to show up) and getting about 16" of range. For now, during testing, this is just fine. Everything is just at a smaller scale, but proportionally, everything is equal.

I am taking one distance measurement every degree in a 180 degree sweep. This array of 180 distances is sent off to Processing via BT. Thank you, rik, for help with the Processing code -- an oldie but a goodie, dusted off and still working great. I start with a small circle in the center of the screen; this shows the location of the camera. From this point, we have 180 imaginary "rays" going out 1 degree apart. On each of these rays we place a dot. The dot's distance from the center is the same as the corresponding distance as read by the lidar. We draw all the dots. We connect each one with a line, and we get this:

Anywhere the line changes from blue to red (or vice versa), there has been a change in distance (between two adjacent points) and that change is above a given threshold. We can consider this an "edge". From one "edge" to another can be considered a "path", in that we would never drive into the corner or "edge" of anything. Zip through all the arrays of distances and angles and paths and "paths that are wide enough to fit through" and you end up with Walter picking the path that is furthest away and one that he can physically fit through. We could also use this same data to find the longest "blind corner", which would be the one he would be most "curious" about. Or we could go to the closest one to "investigate", etc. Basically, we end up with a bunch of arrays of data; we can go through them in different ways with different criteria and choose a given path among the many available ones.
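The edge-finding part of that is a simple scan over adjacent distances. A minimal sketch in plain Java (the class name and the threshold value are made up; tune the threshold to your scale):

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch of the edge idea described above: an "edge" is anywhere
// the distance jumps by more than a threshold between two adjacent rays.
public class EdgeFinder {
    static final double EDGE_THRESHOLD = 4.0; // inches; a made-up value

    // Returns the indices (degrees) where the distance jumps by more than
    // the threshold between two adjacent rays -- the "edges".
    static List<Integer> findEdges(double[] dist) {
        List<Integer> edges = new ArrayList<>();
        for (int i = 0; i + 1 < dist.length; i++) {
            if (Math.abs(dist[i + 1] - dist[i]) > EDGE_THRESHOLD) {
                edges.add(i + 1);
            }
        }
        return edges;
    }

    public static void main(String[] args) {
        // Fake scan: a wall at 10" with a deeper gap (16") from 60 to 120 degrees.
        double[] d = new double[180];
        for (int i = 0; i < 180; i++) d[i] = (i >= 60 && i < 120) ? 16 : 10;
        System.out.println(findEdges(d)); // edges at 60 and 120
    }
}
```

From there, each pair of consecutive edges bounds a candidate "path", and you can filter those by width.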

Whew... Man Chris, you gunna ask a question any time soon?

I want to start exploring the idea of occupancy grids. I am gaining much inspiration from this.

Now, I have got the very basic idea here. If you look at the picture above, you see basically a half circle. This circle juts in and out, but it does start at the left and continue to the right, and it is a continuous line. How do I code the simple idea that everything inside the circle is clear, and everything outside the circle we don't know about yet? I can draw a grid, I can associate an array with it -- no problem. But how do I say, "If there is a piece of a line or a dot or pixel at these coordinates, change the corresponding array location to 1"? Or, "Draw an imaginary line between every point and the center, and for any part of the grid that touches that line, change the array value to 1"? More so, I would also need to move the center dot as Walter moves to a new location on the grid. Actually, now that I think about it, that's a simple thing to do in Processing. I got that one. Focus.
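For the first part of that question -- turning one reading into a marked array location -- a minimal sketch in plain Java (the grid size, cell size, and robot position are all assumptions for illustration):

```java
// Minimal sketch: turn one lidar reading (angle in degrees, distance in
// inches) into a grid cell and mark it occupied. Grid size, cell size and
// the robot's position are assumptions, not anyone's actual code.
public class GridMark {
    static final int GRID_W = 40, GRID_H = 40;
    static final double CELL_SIZE = 1.0;        // inches per cell
    static int[][] grid = new int[GRID_W][GRID_H];
    static int robotX = GRID_W / 2, robotY = 0; // robot at bottom centre

    static void markHit(double angleDeg, double distInches) {
        double rad = Math.toRadians(angleDeg);
        int cx = robotX + (int) Math.round(Math.cos(rad) * distInches / CELL_SIZE);
        int cy = robotY + (int) Math.round(Math.sin(rad) * distInches / CELL_SIZE);
        if (cx >= 0 && cx < GRID_W && cy >= 0 && cy < GRID_H) {
            grid[cx][cy] = 1; // something there
        }
    }

    public static void main(String[] args) {
        markHit(90, 10);                  // straight ahead, 10" out
        System.out.println(grid[20][10]); // 1
    }
}
```

The second part (marking every cell the ray passes through as clear) is the same conversion applied repeatedly while stepping along the ray from the center out to the measured distance.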

I think this would be similar to a video game and how characters would interact with walls and objects. Maybe I should draw a line from the center to each point --draw each ray-- Man, I got nuthin' here... I don't even know how to begin thinking about the problem...

Go easy on me.


Oh well, I succumbed to this problem last year... Never got an idea of how to actually do it; it involves some math I never knew (or did I?). Talked to some guys about it, programmer guys who do that for a living, but to no avail. If you do manage to do it, please share it -- my robot really needs to know where to go... Did I mention I wanted to do this on a microcontroller? I mean just the mapping part: scanning, displaying and deciding where the objects are... I know there is not enough room for a large map inside a microcontroller, but I was thinking of working with smaller blocks of a bigger map. Oh guys, please help here!

Dude, do you wanna see my code for Crum's map plotting? Or would it be more helpful to explain, as best I can, how I did it? Because the code's a bit of a mess! Let me know and I'll help you out as much as I can with the mapping part. You're on the right track with the idea of stepping along a line from the centre point to the end point of each measurement and updating each cell along that line. Just to mention briefly, occupancy grids generally assign a probability to each cell. So a grid will be initialised with each cell having a probability of 0.5 (no knowledge of what is there). The way I did it to start with is: each time a measurement is taken, for any cell that can be shown to have no object present, subtract 0.1 from the cell's probability. When an object is detected in a cell, add 0.1. After several measurements through a cell, the probability will tend towards 0 or 1. A better way to do it is to use recursive Bayesian estimation, but I couldn't get my head around that properly! :) Anyway, let me know how I can help.
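A minimal sketch of that +/-0.1 scheme in plain Java (class and method names are made up, not the actual Crum code; the only numbers it relies on are the 0.5 start and 0.1 steps described above):

```java
// Sketch of the +/-0.1 scheme described above. Cells start at 0.5 (unknown);
// each "free" observation subtracts 0.1 and each "hit" adds 0.1, with the
// result clamped to the [0, 1] probability range.
public class ProbGrid {
    static double update(double p, boolean hit) {
        double q = p + (hit ? 0.1 : -0.1);
        return Math.max(0.0, Math.min(1.0, q));
    }

    public static void main(String[] args) {
        double p = 0.5;
        for (int i = 0; i < 8; i++) p = update(p, true); // eight hits in a row
        System.out.println(p); // tends towards 1 (clamped)
    }
}
```

One isolated noisy reading only moves a cell by 0.1, so it takes several consistent contradicting measurements to flip a cell's belief, which is exactly the robustness being described.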

I'm curious, for the purposes of maintaining an occupancy grid, can you abstract a group of data points into a blob the size of your robot? If the idea is to help a robot (Walter, in this case) navigate, then it seems like you could speed processing and reduce data by building a grid of roughly Walter-sized squares. If any of the data points within a square are showing an object, you can't go there.

Forgive me, these concepts are pretty new to me. I'll have to give your post on Crum a good read through.

If you use larger cells within an occupancy grid, it will certainly speed up processing and reduce memory requirements. However, the resolution suffers and you may find that objects, or even more annoyingly spaces or gaps between objects, are missed. A robot-sized gap may exist between two cells, but if each of the cells contains an object near its outermost edge, the robot-sized gap disappears and becomes a solid object.

"Simultaneous Localization And Mapping" is what it's called. Not just by me, but also by people who have done it before you. Find those clever bastards and tell them to get their shit inside a prop board for you.

Or show us your printouts and tell us where you got stuck. I know you have a desk full of them!

a few things that come to mind...probably obvious to some, and not helpful, but sometimes worth pointing out the obvious

when I think of an occupancy grid, as you point out, I think of a 2 dimensional array, which does fit the model of an area in front of you.

as each vector or ray goes further away from the origin, there will be spaces in that grid you simply will not see.  not because vision is blocked, but because a fixed increment of 1 degree will land behind or in front of an object, and only as you get closer will they come into view.   however, you would think 1 degree would give pretty good resolution until you get too far out.

when imagining how hard drives work, you picture a similar map: the further away from the center you go, the more loss you have, aka "sector overhead"

I think your main question is: "But how do I say, 'If there is a piece of a line or a dot or pixel at these coordinates, change the corresponding array location to 1'?" Aren't you answering the question in the question? Are you wondering how to compute the x,y coordinates for a blip found? Such as: object found at a distance of 30" at 57 degrees; therefore, using vector math, that is grid position x,y; therefore place a 1 in the array.
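The arithmetic for that exact example, assuming 0 degrees points along the x axis and angles are measured counter-clockwise:

```java
// The vector math for the example above: an object at 30" and 57 degrees.
// Coordinate convention (0 degrees = +x axis) is an assumption.
public class PolarToXY {
    public static void main(String[] args) {
        double dist = 30.0, angleDeg = 57.0;
        double x = dist * Math.cos(Math.toRadians(angleDeg));
        double y = dist * Math.sin(Math.toRadians(angleDeg));
        System.out.println("x ~= " + Math.round(x * 10) / 10.0); // 16.3
        System.out.println("y ~= " + Math.round(y * 10) / 10.0); // 25.2
    }
}
```

Divide x and y by your cell size, round, and that is the array index to set to 1.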

Or am I missing the whole point of your question?

First and foremost, Hello to Big Face!! So good to see you again, my friend. And yes, I would LOVE to get a look at your code!

I can agree with almost everything said already. But my original question still stands... However, I gave it some thought and I think I can ask it in a better way...

If I were making a button, I would draw a rectangle and put some words in it. From there, the logic would be: if mousex > recx and mousex < recx + recwidth and mousey > recy and mousey < recy + recheight, and the mouse button is clicked, then you hit the button.
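That button test as a plain Java function, using made-up names for the rectangle parameters:

```java
// The rectangle hit test described above, written as a reusable function.
// Parameter names (rx, ry, rw, rh) are made up for illustration.
public class ButtonHit {
    static boolean hit(int mx, int my, int rx, int ry, int rw, int rh) {
        return mx > rx && mx < rx + rw && my > ry && my < ry + rh;
    }

    public static void main(String[] args) {
        System.out.println(hit(15, 15, 10, 10, 20, 20)); // true: inside the button
        System.out.println(hit(5, 15, 10, 10, 20, 20));  // false: left of the button
    }
}
```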

Would this sorta thing need to be done with my sonar line? I just can't seem to get my head around the fact that I basically need to check every pixel on the screen (or every array location) over and over and over and just keep comparing them to the data I got. Isn't there a simpler/faster way? Or is this just what computers do?

Oh yeah, I totally know about the system of numbering the squares --and using the numbers to know how far away from "target" we are. For now, gotta keep it simple --If I can make a system with just a bunch of 1's and 0's for now. Just red/blue or black/white squares. Something there or something not there. Keep it binary, keep it simple --we can get complicated later.

I understand and agree with the mouse logic analogy, but not sure how it relates to your problem.

your picture appears to assume solid objects behind each hit, which may or may not be true. Instead of an entire line behind each hit, how about just placing a red dot at each hit? Then as you progress around the 180 degrees, it would draw the outline of the room or the objects in front of you, from your point of view.

obviously, you won't have knowledge of what's behind each blip until you move elsewhere. Then when you take two steps forward and scan again, a little more of the picture gets filled in, because now you see a little more of what's behind there.

I'll shut up after this, :-)  because it looks like BigFace has done what you want to do.

You're not simply looking for these conversions again, are you?

I think rik has answered your question with his picture. Having read up a lot about occupancy grids, the one bit of info that is always missing is how to convert a range measurement to cells in the grid. I couldn't think of any other way to do it than to check every cell that is on a line of measurement and update its value, then repeat for the next reading, and so on. As for working out which cells a measurement passes through, it's trig all the way, which I think is your main question. You know the direction the robot is facing with respect to a datum, say 'North', and you know the angle of the measurement with respect to the robot's position. It's a case of using trig to find every cell that that line of measurement passes through, empty space and object included. I don't think there is any way to do it other than to update every cell a measurement passes through, each time a measurement is taken.
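One way to sketch that "update every cell the measurement passes through" idea in plain Java: step along the ray in half-cell increments, marking cells before the hit as free and the cell at the measured distance as occupied. The grid, robot position, and 0/1 encoding are assumptions; a Bresenham or DDA traversal would visit each cell exactly once and be faster, but the fixed-step version is easier to follow:

```java
// Sketch of tracing one range measurement through the grid: free space along
// the ray, occupied cell at the end. Here 0 = free/unknown and 1 = occupied,
// a simplification of the probability scheme discussed elsewhere in the thread.
public class RayTrace {
    static final int W = 40, H = 40;
    static int[][] grid = new int[W][H];
    static double robotX = 20.0, robotY = 0.0; // robot position, in cell units

    static void traceReading(double angleDeg, double distCells) {
        double rad = Math.toRadians(angleDeg);
        double dx = Math.cos(rad), dy = Math.sin(rad);
        // Half-cell steps from the robot out to (but not including) the hit.
        for (double r = 0; r < distCells; r += 0.5) {
            set((int) Math.round(robotX + dx * r), (int) Math.round(robotY + dy * r), 0);
        }
        // The cell at the measured distance is the hit itself.
        set((int) Math.round(robotX + dx * distCells),
            (int) Math.round(robotY + dy * distCells), 1);
    }

    static void set(int x, int y, int v) {
        if (x >= 0 && x < W && y >= 0 && y < H) grid[x][y] = v;
    }

    public static void main(String[] args) {
        traceReading(90, 12);             // object dead ahead at 12 cells
        System.out.println(grid[20][12]); // 1
        System.out.println(grid[20][6]);  // 0
    }
}
```

If the robot is not facing the datum direction, add its heading to `angleDeg` before converting, which is the trig mentioned above.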

Going back to what I mentioned about using probabilities in the cells of the occupancy grid: this is different to assigning a cell a number relating to distance from an object. My first occupancy grid code did what you are suggesting and assigned a cell a value of 1 or 0 depending on whether an object was detected or not. It drew a map and was a very good starting place.

However, as the robot moved around and took measurements of the same object from different angles, lots of cells were switching between 1 and 0, probably due to measurement errors, odometry errors or rounding errors with the trigonometry. Not a big deal, as the map still looked OK. But each time a cell switched its value, the information contained within that cell from it being measured previously from a different angle was lost. Which is the correct reading: the first one that said there was an object present, or the subsequent reading that contradicted the first?

This is the idea of using probabilities: any information gained by measuring a cell is retained in the occupancy grid. Even doing it the simple way I suggested, adding or subtracting 0.1 each reading, helps to reduce the effect of false or noisy readings. Does this make sense? I know it may be something you look at more in the future, but it is worth being aware of this approach. Hope this is helpful anyway, sorry for the lengthy post!
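For reference, the recursive Bayesian estimation mentioned earlier is usually done by storing each cell's belief as log-odds, which turns the recursive update into a plain addition. A sketch, with made-up sensor-model probabilities:

```java
// Sketch of the log-odds form of the recursive Bayesian update. Storing
// l = log(p / (1 - p)) per cell means each observation just adds a constant;
// the sensor model numbers (0.7 for a hit, 0.4 for a miss) are made up.
public class LogOdds {
    static double logOdds(double p) { return Math.log(p / (1 - p)); }
    static double prob(double l)    { return 1 - 1 / (1 + Math.exp(l)); }

    static final double L_HIT  = logOdds(0.7); // evidence an object is there
    static final double L_MISS = logOdds(0.4); // evidence the cell is empty

    public static void main(String[] args) {
        double l = logOdds(0.5);             // start unknown (l = 0)
        l += L_HIT;                          // cell observed occupied
        l += L_HIT;                          // ...twice
        l += L_MISS;                         // one contradicting reading
        System.out.println(prob(l) > 0.5);   // true: belief stays above 0.5
    }
}
```

Like the +/-0.1 scheme, a single contradicting reading nudges the belief rather than overwriting it, so earlier measurements are never thrown away.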

If you need a bit of assistance with the trig involved in working out which cells lie under a measurement, just shout. I haven't looked at it for a while, so I may need to refresh my memory, which would be good as well!