# Occupancy Grid Programming Help

With much help and inspiration from Gareth and Grog (and others) I have a working lidar (Wii camera and laser) system. Currently, I am using a very low-power laser (while checking the mail every day for the good laser to show up) and getting about 16" of range. For now, during testing, this is just fine. Everything is just at a smaller scale, but proportionally, everything is equal.

I am taking one distance measurement every degree in a 180 degree sweep. This array of 180 distances is sent off to Processing via BT. Thank you rik for help with the Processing code --an oldie but a goodie, dusted off and still working great. I start with a small circle in the center of the screen; this shows the location of the camera. From this point, we have 180 imaginary "rays" going out 1 degree apart. On each of these rays we place a dot. The dot's distance from the center is the same as the corresponding distance as read by the lidar. We draw all the dots. We connect each one with a line, and we get this:
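The dot placement above is just a polar-to-screen conversion. Here is a minimal sketch of that math; the names (`cx`, `cy`, `angleDeg`, `dist`) are my own, not from the original Processing code, and the Y flip assumes screen coordinates that grow downward:

```java
// Convert one lidar reading (angle in degrees, distance) into the
// screen coordinates of its dot, relative to the center circle.
public class PolarPlot {
    // Angle 0..179 across the sweep; distance in whatever units you draw in.
    public static double toX(double cx, double angleDeg, double dist) {
        return cx + dist * Math.cos(Math.toRadians(angleDeg));
    }

    public static double toY(double cy, double angleDeg, double dist) {
        // Screen Y grows downward, so subtract to sweep "up" from the center.
        return cy - dist * Math.sin(Math.toRadians(angleDeg));
    }
}
```

Loop this over all 180 readings and draw a line from each dot to the next to get the picture described above.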

Anywhere the line changes from blue to red (or vice versa) there has been a change in distance (between 2 adjacent points) and that change is above a given threshold. We can consider this an "edge". From one "edge" to another can be considered a "path", in that we would never drive into the corner or "edge" of anything. Zip through all the arrays of distances and angles and paths and "paths that are wide enough to fit through" and you end up with Walter picking the path that is furthest away and one that he can physically fit through. We could also use this same data to find the longest "blind corner", which would be the one he would be most "curious" about. Or we could go to the closest one to "investigate", etc., etc. Basically, we end up with a bunch of arrays of data; we can go through them in different ways with different criteria and choose a given path among many available ones.
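The edge test described above boils down to comparing adjacent readings against a threshold. A minimal sketch, with the array name and threshold value as assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// Find "edges": any jump between two adjacent distance readings
// bigger than the threshold. Paths then run between consecutive edges.
public class EdgeFinder {
    // Returns each index i where the jump from reading i to i+1
    // exceeds the threshold.
    public static List<Integer> findEdges(int[] dist, int threshold) {
        List<Integer> edges = new ArrayList<>();
        for (int i = 0; i + 1 < dist.length; i++) {
            if (Math.abs(dist[i + 1] - dist[i]) > threshold) {
                edges.add(i);
            }
        }
        return edges;
    }
}
```

From there, the gaps between consecutive edge indices are the candidate "paths" to score by width and distance.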

Whew... Man Chris, you gunna ask a question any time soon?

I want to start exploring the idea of occupancy grids. I am gaining much inspiration from this.

Now, I have got the very basic idea here: if you look at the picture above, you see basically a 1/2 circle. This circle juts in and out, but it does start at the left and continue to the right, and it is a continuous line. How do I code the simple idea that everything in the circle is clear, and everything out of the circle we don't know about yet? I can draw a grid, I can associate an array with it --no problem. But how do I say, "If there is a piece of a line or a dot or pixel at these coordinates, change the corresponding array location to 1"? Or, "draw an imaginary line between every point and the center, and any part of the grid that touches that line, change the array value to 1"? What's more, I would also need to move the center dot as Walter moves to a new location on the grid. Actually, now that I think about it, that's a simple thing to do in Processing. I got that one. Focus.
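For the "imaginary line between every point and the center" version of the question, one simple approach is to walk from the center to the measured point in small steps and flip every grid cell the walk passes through. This is a sketch under my own assumptions (grid indexed `[row][col]`, a `cellSize` in the same units as the coordinates, and 1 as the "touched" value):

```java
// Walk from (x0, y0) to (x1, y1) in half-cell steps and mark every
// grid cell the walk passes through.
public class RayMarker {
    public static void markRay(int[][] grid, double cellSize,
                               double x0, double y0, double x1, double y1) {
        double dx = x1 - x0, dy = y1 - y0;
        // Half-cell steps are small enough that we can't skip over a cell.
        int steps = (int) Math.ceil(Math.hypot(dx, dy) / (cellSize / 2));
        if (steps < 1) steps = 1;
        for (int s = 0; s <= steps; s++) {
            double t = (double) s / steps;
            int col = (int) ((x0 + t * dx) / cellSize);
            int row = (int) ((y0 + t * dy) / cellSize);
            if (row >= 0 && row < grid.length && col >= 0 && col < grid[0].length) {
                grid[row][col] = 1;
            }
        }
    }
}
```

Call it once per ray, with the center as the start point and the dot as the end point.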

I think this would be similar to a video game and how characters would interact with walls and objects. Maybe I should draw a line from the center to each point --draw each ray-- Man, I got nuthin' here... I don't even know how to begin thinking about the problem...

Go easy on me.


@rik --Awesome picture by the way...

I have explained this to a lot of newbies but alas, I didn't seem to get it this time...

Take a look at your picture, rik --My first thought was, "of course the point at the end of the ray (@ 3') would trigger the square below it. Now, what about all the squares that are touched by the ray itself?"  --Now that I think about it, you don't have to do anything with them! They are already open!!! This is like asking, "how do I tell my robot to NOT do something..." Well, it won't do anything by default, unless there is code telling it to do otherwise. In this case, there is the simple fact that we are NOT looking for open space (we assume everything is open to start with) and we are looking to ELIMINATE open space.

This is not an issue of in front of the line or behind the line. --Known and unknown-- Instead, we are only worried about the actual line itself. The line (the obstacle or outside wall or whatever) is the only known we have, and that known only exists ON THAT LINE! The obstacle could be paper thin, and when we get around this barrier, we could find a ton of open space behind it. Who cares? The only thing we can know --or need to know-- is just that line. As we move around and take more measurements, the "line" will gain "depth" because it will be supplemented by many other lines.

@Big Face

I like your thoughts on each grid space constantly getting flipped on and off as we remeasure it. I use a similar system (to solve this problem) for my navigation. I start with a direction we WANT to go and then run it through a bunch of tests. During these tests, ways we CAN go get eliminated or "make the cut". Basically, the same chunk of data gets a couple of chances to make the cut, and in the end we get the most "trustworthy" data we can get. I would assume the grid system would work the same. Each square gets run through some yea/nay tests, with those tests being affected by the "score" the square already has. If the square has gotten an OK the last 4 times it was checked, and thus has a high score, it will take a lot more nays before we can change it. In the end, the goal is to have the "most trustworthy" or "most likely to be correct" data. On the other hand, as with everything, I will give it a shot with a simple 1 or 0, see how sloppy it is, and add more time/code/precision as we need it.
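The scoring idea above can be sketched as a clamped counter per cell: yeas push it up, nays push it down, and the occupied/open call only flips once the count crosses a threshold. The bounds and threshold values here are assumptions, just to show the shape:

```java
// Per-cell evidence counter: occupied votes add, open votes subtract,
// and the score is clamped so stale evidence can't pile up forever.
public class CellScore {
    public static final int MAX = 5, MIN = -5, THRESH = 3;

    // Returns the new score after one yea (occupied) or nay (open) vote.
    public static int vote(int score, boolean occupiedVote) {
        int s = score + (occupiedVote ? 1 : -1);
        return Math.max(MIN, Math.min(MAX, s));
    }

    // A cell only counts as occupied once enough votes agree.
    public static boolean isOccupied(int score) {
        return score >= THRESH;
    }
}
```

A cell that has scored high takes several nays before `isOccupied` flips back, which is exactly the "it will take a lot more nays before we can change it" behavior.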

Paul you got it. Single dots.

Simple when you think about it... If the dot is in the square, that square gets the nod. As I said above, we are only worried about the data right on that line, or in this case, the very end of each of the "rays". Forget about connecting the dots --the grid will tell the tale in the end.
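The dots-only version is tiny: take the end point of one ray, work out which square it lands in by dividing by the cell size, and mark just that square. Everything along the ray is left alone (open by default, per the discussion above). Cell size and grid dimensions are assumptions:

```java
// Mark only the square containing one measured point.
public class DotMarker {
    public static void markDot(int[][] grid, double cellSize, double x, double y) {
        int col = (int) (x / cellSize);
        int row = (int) (y / cellSize);
        if (row >= 0 && row < grid.length && col >= 0 && col < grid[0].length) {
            grid[row][col] = 1; // 1 = object
        }
    }
}
```

Run this once per reading, per sweep, and the grid fills itself in --no connecting the dots required.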

KISS, Chris...

That picture of yours up there ^^ is going on a tee-shirt.

"rik is my boink filter" on the back.

Dude, I reckon you've got it sussed. It will be good to see Walter's first occupancy grid map! It's nice to see the map emerge as more and more data is collected. I'm looking forward to seeing what you can do with this! I'll try and dig out Crum's mapping code when I get a chance and send it to you.

So far.

• Hooked up a keyboard to the Propeller
• Propeller hooked up to Bluetooth
• Able to send "arrow keys" to Processing

In Processing

• Made a big double-decker array [50][100]  // [Y][X]
• Started an image buffer at 1000x500
• Drew a bunch of 10x10 boxes
• Seeded currentX, currentY, and currentDirection for a start position
• Figured out the array code:
  • 0 = open
  • 1 = object
  • 2 = been-there path
  • 10 = Walter's current position, pointing north
  • 11 = Walter's current position, pointing east
  • 12 = Walter's current position, pointing south
  • 13 = Walter's current position, pointing west
• Able to "drive" Walter's position around on this grid, mark where we have been, and know which way we are pointing
• Path is stored in the array
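The bookkeeping in that list can be sketched as a small class: the [50][100] array, the 0/1/2 codes, and 10..13 for Walter's cell plus heading. The start position and the exact step logic are my assumptions about how the arrow keys move him:

```java
// Grid bookkeeping: Walter's position and heading live in the array
// itself, using the codes from the list above.
public class WalterGrid {
    public static final int OPEN = 0, OBJECT = 1, PATH = 2;
    public static final int NORTH = 10, EAST = 11, SOUTH = 12, WEST = 13;

    public int[][] grid = new int[50][100]; // [Y][X]
    public int curY = 25, curX = 50, heading = NORTH; // assumed start

    public WalterGrid() { grid[curY][curX] = heading; }

    // Move one cell in the current heading, leaving a "been there"
    // mark behind. Moves are 90-degree only, as insisted below.
    public void forward() {
        int ny = curY, nx = curX;
        if (heading == NORTH) ny--;
        else if (heading == EAST) nx++;
        else if (heading == SOUTH) ny++;
        else nx--;
        if (ny < 0 || ny >= 50 || nx < 0 || nx >= 100) return; // stay on the grid
        grid[curY][curX] = PATH;    // mark where we have been
        curY = ny; curX = nx;
        grid[curY][curX] = heading; // current position + direction
    }

    // Turning just changes the stored code, not the position.
    public void turnRight() {
        heading = (heading == WEST) ? NORTH : heading + 1;
        grid[curY][curX] = heading;
    }
}
```

The nice side effect is that the path and the current pose come back for free whenever you redraw the grid.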

Enough for tonight. Video tomorrow maybe.

Man, I'm tired... Arrays fill your brain up. Double-decker arrays fill your brain twice as fast...

Everything MUST be done at a 90 degree angle. If we/I don't stick to this 90 degree theory, my head will explode.

What if instead of a grid system you used a pattern of circles and rays to make a sector map like a disc drive? Then your mapping points would align more gracefully to your measurements. ; j
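For what it's worth, the sector-map indexing is just two integer divisions: each reading's distance picks a ring and its angle picks a sector, so every lidar measurement maps straight onto one cell. Ring depth and sector width here are made-up values:

```java
// Polar "sector map" indexing: distance -> ring, angle -> sector.
public class SectorMap {
    public static final double RING_DEPTH = 2.0; // inches per ring (assumed)
    public static final double SECTOR_DEG = 5.0; // degrees per sector (assumed)

    public static int ring(double dist) { return (int) (dist / RING_DEPTH); }

    public static int sector(double angleDeg) { return (int) (angleDeg / SECTOR_DEG); }
}
```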

BOOOM!

(wipes bits of grey matter from sleeve)

I am fighting tooth and nail to get through 90 degree angles... You want concentric circles? Yup, my head officially exploded.

45 next right?

Yeah, 45s are the next logical step... That said, I am on step 13 --that is step 87. I hope to drive around via RC tonight and get the side sensors going. Will be a good test of basic data transfer and getting that data into the main array.