
Navigation with (2) sonars Update 2 (with data)

*****Update 2.16.10*****

Got some data to start thinking about, guys. My thoughts of getting nice, even packs of 4 or 5 are a little out the window. To be honest, I tried to forget that I could see where the robot was facing and tried to make a decision about where to go based on just the readings. I couldn't. Please note that I was seeing the readings on an LCD, so left was above right rather than in a straight line. --Now that I think of it, I have no idea why I didn't line them up, as it is a 16x2 display. Huh.

All readings were taken L to R.

Now, here's what I did: I added some scraps of drywall that I had around the kitchen table's base. I didn't want any sensor holes, just smooth walls, to keep this as basic as I could. So here we go. The data is under each photo, and the three readings were taken at 1000, 750 and 500.

Navi_Pictures_001.jpg

@1000: L = 00000000, R = 11100000
@750:  L = 00000111, R = 11110000
@500:  L = 00011111, R = 11111111

Navi_Pictures_002.jpg

@1000: L = 00000000, R = 00000000
@750:  L = 00000000, R = 00000111
@500:  L = 11111000, R = 00001111

Navi_Pictures_003.jpg

@1000: L = 00000000, R = 01111001
@750:  L = 00000000, R = 11111101
@500:  L = 10000001, R = 11111101

Navi_Pictures_004.jpg

@1000: L = 00000000, R = 11000000
@750:  L = 00000000, R = 11000011
@500:  L = 00001111, R = 11111110

*****Original Post*****

Need some brainstorming here, folks...

Walter is just friggin' big, and because of his size he needs some "side checks" --i.e. one sonar is just not enough to keep an eye on his total width. In the past, I have used one center-mounted sonar with one Sharp on each side. The sonar was in charge while the Sharps would catch the walls. Basically, the Sharps would catch a wall if Walter was coming up on it at a shallow angle or was traveling along a wall (not actually following it). This system worked well (except for the God-awful electrical noise from the Sharps), with the sonar being a code priority and the Sharps "knowing their place" in terms of when they could tell the code to change course.

Now:

I have 2 sonars, one on each of the front two corners. Each sonar can pivot from almost 90 degrees to the side to slightly past center (i.e. the left sensor can see straight out to the left side and, when pointed forward, can see slightly past the centerline --a little to the right).

Question:

I need some brainstorming here. I have tried about 9 different versions of navigation code with different levels of success and a lot of failures. I have tried:

If one side is closer than the other, the closer side is the priority --listen to that sensor.

If one sensor is looking to the side, let it scootch off the wall, and the forward sensor is the priority for obstacle avoidance.

If one side sees something, set a bit = 1 (to remember something WAS there) and then factor that in with what the other side is seeing.

Average all numbers on one side and compare to the average of all numbers on the other side.

Find a wall, keep that side's sensor fixed on the wall, and sweep the other side's sensor for obstacles.

Simply avoid whatever either sensor sees, no matter what, unless both are triggered. Then stop, scan R to L, find the furthest distance, turn to that heading, rescan to see if there is a wide enough path for the bot to fit, turn to the middle of this "path", and go.

etc. etc. etc. etc. etc.

 

I'm looking for BIG ideas here, folks. No code, just a 30,000' view of the problem. Brainstorm, brainstorm, brainstorm...

 

*****Update*****

 I think I got it, guys... Check my work, would ya?

We have this:

0211001638.jpg

It breaks down like this: 3 distance rings with 8 segments each, left and right. Each point on the arc gets a bit --6 bytes, 48 bits total. The danger limit is proportional to the servo position, so as the sonar swings more to the side the danger limit is decreased, and as it swings to the forward position, it is increased. At each point, if the range is > the danger limit, that bit turns into a one.
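Here's a minimal sketch of how one ring might get packed into a byte, assuming the eight sweep readings are already sitting in an array. The names and the percentage table are made up, and it's C just for illustration --Walter's real code lives on a picaxe:

#include <stdint.h>

#define SEGMENTS 8

/* Danger limit scales with servo position: smallest out at the side,
   full value dead ahead. These percentages are made-up placeholders. */
static const int danger_pct[SEGMENTS] = { 40, 50, 60, 70, 80, 90, 100, 100 };

/* Pack one ring (one side) into a byte: bit = 1 where range > danger. */
uint8_t pack_ring(const int range[SEGMENTS], int ring_limit)
{
    uint8_t bits = 0;
    for (int seg = 0; seg < SEGMENTS; seg++) {
        int danger = ring_limit * danger_pct[seg] / 100;
        if (range[seg] > danger)
            bits |= (uint8_t)(1u << seg);    /* safe: set the bit */
    }
    return bits;
}

Call it three times per side with the three ring limits (1000, 750, 500) and you have the 6 bytes.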

Reading the data:

Find the start and end of each "pack" of ones. Each pack is then checked for width: to make the cut, a pack must be wide enough for the bot to get through, and a pack farther out can be narrower in bits, since each bit covers more real-world width at distance. From there, I have all the available paths I could use. Choosing a path could be anything --for instance, the widest pack that is farthest out wins --or let's say I am in my docking mode... I have found the beacon; is there a path wide enough in that direction to travel that way? Or maybe I get one of those pyro-thingies that detect body heat... Same thing: there is a human and I should follow it; is the direction of the sensor within one of the "paths" available? Etc. Etc.
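In case it helps, the pack-finding might look something like this in C; min_bits (the narrowest acceptable pack for this ring) is a hypothetical input:

#include <stdint.h>

/* Scan one ring byte for "packs" of ones; return the start bit of the
   widest pack at least min_bits wide, or -1 if none makes the cut. */
int widest_pack(uint8_t ring, int min_bits)
{
    int best_start = -1, best_len = 0, start = -1;
    for (int i = 0; i <= 8; i++) {            /* i == 8 closes a trailing pack */
        int one = (i < 8) && ((ring >> i) & 1);
        if (one && start < 0) {
            start = i;                        /* a pack begins */
        } else if (!one && start >= 0) {
            int len = i - start;              /* a pack just ended */
            if (len >= min_bits && len > best_len) {
                best_len = len;
                best_start = start;
            }
            start = -1;
        }
    }
    return best_start;
}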


Can you describe a little how you would like Walter to move about? Just random movement while avoiding obstacles and walls? Should he search for some object, human, light/dark etc? Is he mapping the area so you can tell him to go to a specific place?

A really easy algorithm could be:

// distances in cm; turn_left() etc. stand in for the drive routines
if (left < 80 && right < 80) { turn_left(); }  // blocked on both sides
else if (left < 40) { turn_right(); }          // too close on the left
else if (right < 40) { turn_left(); }          // too close on the right
else { go_forward(); }                         // all clear: keep going

OK, here are some thoughts. Establish the principal behaviors you want. Such as:

  • In the absence of any obstacles or other commands, move towards the most open space
  • Set a variable like sensor_priority, and one sensor will always have priority 
  • If sonar1_range < sonar2_range, then sensor_priority = 2
  • Anything closer than danger_distance immediately assigns priority to that sensor
  • In the event of a conflict between sonar1 and sonar2, stop, scan both sensors and decide
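A rough sketch of that scheme in C --the names are mine, and I'm assuming "priority" normally follows the more open side unless something is inside danger_distance:

/* Returns which sonar has priority: 1 or 2, or 0 for "stop, scan both
   sensors and decide". danger_cm is the immediate-danger threshold. */
int pick_priority(int sonar1_cm, int sonar2_cm, int danger_cm)
{
    if (sonar1_cm < danger_cm && sonar2_cm < danger_cm)
        return 0;                            /* conflict: stop and rescan */
    if (sonar1_cm < danger_cm) return 1;     /* immediate danger wins     */
    if (sonar2_cm < danger_cm) return 2;
    return (sonar1_cm < sonar2_cm) ? 2 : 1;  /* else follow the open side */
}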

You might have other modes, like the one you described using one side-turned sensor to keep a minimum distance from a wall, while the other sensor scans for obstacles. So depending on the situation, you might have general navigation code, wall following code, etc.

Another random thought is that with two sensors you have the possibility of binocular vision, which allows for excellent depth perception in humans and other animals with forward-facing eyes. I don't have a solid idea how to take advantage of that, but it is a germ of an idea. Maybe complex object detection or recognition may be possible.

Consider posting some video showing Walter in action. Maybe it will inspire some good ideas in others.

Your sensors are providing instant information: range measurements that are only valid in that moment. It's momentary data. How much of that do you wish to store for future retrieval? Or do you have some purist wish for your bot to "live in the moment"? (Now where did I read that before?)

In other words, does Walter "have a plan"? A floor plan of the premises for starters.

For a bot living "in the moment", you might consider how long one moment is. Maybe it is as long as one full sweep of the sensor. How about storing all the data from one sweep in a single number? An average could work. But perhaps also a number indicating how much floor space in front of the sensor is free of obstacles. I'm sure I read that before, when discussing surface area calculation. Can't remember where exactly (my brain at the moment is filling up with snot).

Scan 100 degrees in steps of 10 degrees. Each scan showing more than <some distance> scores one point. When a new scan drops below <some distance>, reset the score to zero and remember the old score somewhere. Each new score is compared to the old one. Only the best score is reported back to the main program.

That kind of code will hopefully tell you how wide a space is. Problem is, that algorithm is designed for one forward-facing sensor. You might have to combine the readings from both sensors into one sweep.
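For what it's worth, that scoring loop in bare-bones C (11 readings covering 0 to 100 degrees; clear_cm stands in for <some distance>):

/* Count consecutive clear readings across one sweep and report only
   the best run back to the main program. */
int best_run(const int range_cm[11], int clear_cm)
{
    int score = 0, best = 0;
    for (int i = 0; i < 11; i++) {
        if (range_cm[i] > clear_cm) {
            score++;                  /* one point per clear step  */
            if (score > best)
                best = score;         /* remember the best so far  */
        } else {
            score = 0;                /* obstacle: reset the score */
        }
    }
    return best;                      /* widest clear arc, in steps */
}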

OOOH, that may be the best sniffing idea so far. >snorrrrth< Full stop.

To be honest, I have forgotten a very basic principle and have broken a very important rule here. Actually, more like a cardinal sin... I went straight to the full code, it didn't work, and then I came here and said "I wrote a bunch of shit and it didn't work, tell me why."

I am correcting the problem now. I discovered/remembered/realized that I am on step 487 when I really need to be on step one. I.e., right now I have the robot in RC mode with the sonars constantly firing and displaying the values on an LCD. I can also manually position the sonar servos via RC, and that number also appears on the LCD. For the last couple of days, I have been driving around to different positions, checking the data, and recording it along with the servo position and a quick picture of the bot. I am dumping all of this info into some rik-style charts to start getting an idea of what different situations "look like" in data form. I can say this first off: my initial "guesses" on limits and "danger levels" were way off.

rik:

I'll say, my friend, you've got something there... I think I might re-examine my original "may I go" code. Based on the fastest speed of the servos and the slowest speed of my motors, I could really only get 1 or 2 sonar sweeps in a reasonable amount of forward travel. Not a lot of data there, or time to figure out what it all means. With my "may I go" code, it is a system of narrowing down the situation:

Regular sonar sweep.

If wide open --it's wide open, just go.

As things get closer, more cluttered, or narrower, speed is decreased or a full stop-and-assess is triggered.

Way close on one or more sensors triggers a full stop, an edge find, a calculation to see if edge-to-edge is wide enough to get through, and then an encoder turn to the middle of this path, etc. etc.
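In C-flavored pseudocode, the narrowing might boil down to something like this (all thresholds are placeholders I made up, not Walter's real numbers):

enum action { STOP_AND_ASSESS, CREEP, HALF_SPEED, FULL_SPEED };

/* Narrow down the situation from the nearest reading in the sweep. */
enum action may_i_go(int nearest_cm)
{
    if (nearest_cm > 150) return FULL_SPEED;   /* wide open: just go */
    if (nearest_cm > 80)  return HALF_SPEED;   /* cluttered: slow up */
    if (nearest_cm > 40)  return CREEP;        /* narrow: crawl      */
    return STOP_AND_ASSESS;   /* way close: stop, edge find, do the math */
}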

 

Really, when I start thinking about this rationally, a "just keep going and avoid" robot is not what I want. In actuality, I want Walter to stop a LOT! I was so into this navigation code that I forgot that Walter has a kick-ass head, can talk and sing, etc. Why would I not want to use this? Instead of a nifty-smooth path around a corner, why wouldn't I want him to stop and look around? (Pretending to look for a new route (with some spoken comment) while the navigation's "may I go" has a chance to do the math on the new path.) Besides, if the eventual goal is to leave Walter on all the time (with a self-dock and charge), I simply can't have him driving around all the time like a "Start Here" bot.

 

Thanks again, guys --I got my head going the right way now. Just some more basic tests and I'll get there.

 

Your sin didn't leak into LMR, so you're off the hook. You asked for broad-view brainstorming and you got it. And it kinda helped.

I like your approach with the data gathering. Figures...

I also like the idea for coding different navigation modes based on the immediacy of any perceived dangers. But most of all, I like that you are rethinking Walter's behaviour. That is really Big Idea Creativity going on!

Walter is not gerbil-sized, so he should not run around like one. He's more like a big friendly dog, so he should walk around gently and with great consideration. He's got a very polite English voice, so he should act like, errrhm, never mind...

With every high-level decision you will have Walter make, please sprinkle in some random data. Nobody likes a totally predictable personality.

And ask your "voice talent" for some more quotes: "may I go?", "I think I will", "what's over here?", "looks like a doorway", "I'd better share my data with master Chris, so he can make a neat rik-chart".

Hey, CtC. If anyone deserves some leeway on asking for help, it's you. Walter may be the most complex and well-considered long-term project on LMR. TOBI comes close, but you've made so much progress. Plus, your videos are great. "Ting!"

Are you using EEPROM memory at all? It occurs to me that you have a very basic decision to make. Is it more interesting/useful to make Walter capable of navigating a general, unknown environment, or to know lots of detailed data about a known environment (your basement/house)?

There has been some very advanced mapping/navigating work done by some at this site. Should Walter have some detailed map data for things in your house that are VERY unlikely to change, like the walls and floor plan? It's a lot of data, but you have a pretty advanced platform, and you can set aside memory to store it.

Think of how a blind person can navigate around their own home. Walter isn't blind, but his processing of the data he can get from sonar is seriously limited compared to what a sighted person can do. Think of sonar as a walking stick for a blind man, rather than vision. He can see out a limited distance and determine when something is in the way. It's hard to interpret what is in the way, just that something is there. This analogy may help you think through some of the programming challenges.

Walter is a great piece of work. I can't wait to see how you tackle his latest stage of development. You are an inspiration to many.

Update
Jackfruit

Good to see the bits are rubbing off on you.

What you just designed there is a binary map --a bitmap, in radial coordinates. Zeroes code for danger, ones code for safety.

The danger limit for each measurement is "arbitrary", as far as the interpreting algorithm is concerned. More on that later. (No wait, you have three limits. Must discuss why later.)

Building a bitmap for later interpretation is brilliant. Because it allows you to use one and only one piec'o'code (poc or algorithm) to build up a generic map that can be used in different poc's for decision making. That's exactly why bots make maps, I suppose. No bot would let its sensors decide where to go. Not even a start-here-bot (would be fun though).

Like you write: "Choosing a path could be anything". Therein lies the brilliance. Separating eyes from brains. You now have the logical architecture for giving your sensors a separate picaxe. Just send the bitmaps down the wire to the main brain.

---

Ever heard of the unit "Light Year"? Sure you have. Germans take this principle a bit further: they speak of "Gehminuten" (walking minutes) and "Wagenstunden" (wagon hours). I now introduce "Walter Seconds": the distance Walter will cover in a single second. I further propose that a smart danger limit would be expressed in this new unit, the Ws. It is a fluid unit too: its real-world value (expressed in meters or feet) would vary with Walter's current velocity (in meters per second). Let's assume an arbitrary danger limit of 3 Ws. Three seconds in which Walter can take a high-level decision and execute it. "Emergency brake now" would be the highest-priority decision to take.

As Walter travels at varying speeds through its (his, pardon me) environment, he would automatically adapt his danger limits to his speed. Or at least the limits for his forward-looking sector. The sectors to the sides are hardly influenced by the forward-pointing velocity vector. Or are they....
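The arithmetic is tiny --a sketch, where the 3 Ws limit comes from the example above and the floor value is my own assumption:

/* Forward danger limit in cm, scaled to the current speed:
   3 Ws = three seconds of travel at the current velocity. */
int danger_limit_cm(int speed_cm_per_s)
{
    int limit = 3 * speed_cm_per_s;
    return (limit < 30) ? 30 : limit;   /* keep a floor even at a crawl */
}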

I know I am inundating you with thoughts here. Cherry pick all you want.

It occurred to me that a single-byte bitmap leaves little to interpret or decide upon. You could "pre-decide" all actions based on every possible bit pattern. There are only 256 patterns to consider. Store them in eeprom for later retrieval.

You'd have to pre-calculate each decision with a separate program and store the results in memory, which eliminates most of the advantage of pre-storing them. Unless it's speed you're worried about: I imagine an eeprom byte retrieval is 10 or 100 times faster than any data-interpretation poc.

Different behaviour modes would be represented in different parts of eeprom. For instance, in free ranging mode:
00000000 = full stop, turn around
11111111 = all clear, increase velocity
00001111 = obstacle ahead, turn towards clearance

However, in maze solving mode (wall following strategy):
00000000 = turn slightly, rescan
11111111 = fuck, where did the wall go?
00001111 = wall to one side, steer towards it
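As a sketch (a plain array stands in for the eeprom here, and the block offsets are made up):

#include <stdint.h>

static uint8_t eeprom[512];                  /* stand-in for the real eeprom */

enum { FREE_RANGE = 0, WALL_FOLLOW = 256 };  /* one 256-entry block per mode */

/* Look up the pre-decided action for a bit pattern in the current mode. */
uint8_t next_action(uint8_t bitmap, int mode_base)
{
    return eeprom[mode_base + bitmap];       /* pattern 00001111 -> entry 15 */
}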

Dunno, just goofing around with your ideas...