Let's Make Robots!

Rudimentary LDR eye

Old idea rekindled

This idea about very basic robot vision haunted me ever since I first saw AmandaLDR. Even before it was named after a canine. It's somewhat inspired by the evolution of the first eyes in nature: just a bunch of light sensitive cells on your rump.

Chris' ramblings about Walter the other day inspired me to make this vague idea more defined. There are still many open questions, and I do not have the time at the moment to pursue this in my lab. So I invite you all to give this some thought and perhaps a little experimentation. Please share!

Situation

This is set in The Carpenter's house, but it could easily be anyone's residence. Or office. Or maze. It is the imaginary "Green Room" which I think Chris has. He does not. Probably. Probably not. But the colour works for the nifty diagram.

eye_ldr_floorplan.png

 

Consider a large window in the North wall, a smaller one in the West wall. In the East is a door to another room. In the South the door through which Walter just entered. He does not smell a Wumpus. That sensor is not invented yet.

Somewhere on the top of his form, Walter has eight light sensitive cells. Engineered, rather than evolved. These are humble (cheap) LDRs, somewhat encased in a light obscuring material. Each cell "looks" in a different direction. Let's number these cells 0 through 7 clockwise. (Start to count at zero if you need to prove you're a nerd. Rotate stuff clockwise to prove you're not a fundamentalist nerd.)

Each cell receives a different amount of ambient light. And thus, each LDR presents a different voltage to the 8-bit ADC input it is hooked up to. The "brain" receives eight byte values.

Beware: Assumptions make for ugly offspring!

Assumption #1

Each room in the house has a unique lighting situation. Recognising this situation equals identifying the room.

This octagonal LDR setup will produce a data pattern based on the distribution of light around the robot. Here it is in another diagram. The cells are now rearranged in a straight line.

 

eye_ldr_graphs.png

The top diagram shows the values at 8 bit resolution. Values range from 0 to 255 (decimal). Cell #0 reads 171 out of 255, which translates to 10101011 (binary). All eight values together form a data pattern. We are now going to program some very basic pattern recognition.

Assumption #2

The eight values in such a data pattern can be combined into a memory address. Simply by stringing all the bits along into one 8x8=64 bit long word. At each such address the brain holds information about location. Or it will store some, while it is learning.

For example, the combination of all eight values above forms a veeeeeery long address. This memory address holds data that has meaning to Walter, along the lines of "you're in the green room facing North".
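To make the stringing-together concrete, here is a quick sketch in Python rather than Picaxe basic. The eight readings are made-up example values, not measurements:

```python
# Eight hypothetical 8-bit LDR readings, cells 0-7 clockwise.
readings = [171, 86, 117, 140, 147, 180, 82, 220]

# String all eight bytes together into one 8x8 = 64 bit pattern/address.
pattern = 0
for value in readings:
    pattern = (pattern << 8) | value

print(f"{pattern:064b}")   # the veeeeery long address, as 64 bits
```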

The veeeeery long, 64 bit, address is a bit of a problem. We are programming a very stupid computer. We simply cannot juggle words that long. And we do not have enough memory to store even one byte in 2^64 different places: 2^64 bytes adds up to 16 Exabytes of memory. Besides, it would store the same old info in many, many places anyway. That is wasteful, of memory space and of learning time.

So we need to dumb it down a few orders of magnitude. We must make the cells "less precise" about the light levels they see. I propose to scale the values 0-255 down to 0-3. That is a factor of 64, which saves us 6 bits per cell, resulting in a total of 16 bits in the pattern. That is a normal word value in a Picaxe and sums up to 2^16 = 65536 addresses: 64 KiloBytes of memory space if we store one byte per address. That would easily fit in an EEPROM, for example.
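The same scaling, sketched in Python on the made-up readings from before: dividing by 64 is a right shift by 6 bits, which leaves values 0-3, and eight 2-bit values pack neatly into one 16-bit word.

```python
readings = [171, 86, 117, 140, 147, 180, 82, 220]

# Scale 0-255 down to 0-3: dividing by 64 is a right shift by 6 bits.
coarse = [value >> 6 for value in readings]   # e.g. 171 -> 2

# Pack the eight 2-bit values into a single 16-bit memory address.
address = 0
for value in coarse:
    address = (address << 2) | value

print(hex(address))   # fits in one Picaxe word, 0-65535
```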

The second diagram demonstrates this low resolution data pattern being derived from the first one.

Assumption #3

An oldy: Teach your robot, rather than programming it.

Let Walter roam the house, avoiding collisions as if it were a Start Here bot. Each time it enters a room it does not yet recognise (or no longer recognises), it will ask "BLEEP?". You will have to tell it somehow (suggestions welcome) where it is. Walter will store this new info into the appropriate, empty, memory nook.

Next time it enters the same room, it will hopefully receive the very same pattern through its eye. This is one reason to dumb down the patterns: a slight variation in lighting conditions (a thin cloud drifting over the house, for example) will not upset the patterns too much.

Or better: the dumber patterns are not as sensitive to variations.

At first the bot would need to learn a lot. Many empty memory nooks and crannies to be filled. "Need more input, Stephanie." Just leave Walter alone in a room for 24 hours and let him soak up all the patterns it can get its eye on. Daytime and night time patterns. With and without people in the room.

He would not need to "BLEEP?" for a location, because he is under orders not to move. All patterns will register the same room as the location to recognise next time around. Walter needs to soak up each and every room. Well actually, Walter need not be attached to this part of his brain during this educational tour around the premises. He could just be stumbling through the yard, old school style.
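The teach/recall idea boils down to very little code. A toy sketch in Python, with a dict standing in for the EEPROM; the address and room code here are invented for the example:

```python
# A toy learn/recall sketch. The dict stands in for the 64 KB EEPROM
# (address -> room code). Address 0x96A7 and room code 3 are made up.
eeprom = {}
UNKNOWN = 0   # 0 = empty memory nook: time to ask "BLEEP?"

def learn(address, room_code):
    """Store the room code at the address dictated by the eye pattern."""
    eeprom[address] = room_code

def recall(address):
    """Look up the room code; 0 means we have to ask where we are."""
    return eeprom.get(address, UNKNOWN)

learn(0x96A7, 3)   # teacher answers "green room" (code 3) after a BLEEP?
recall(0x96A7)     # next visit: recognised as room 3
recall(0x1234)     # unseen pattern: 0, so BLEEP? again
```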

Assumption #4

If only I could send a part of my brain some place else to soak up interesting stuff while I were getting drunk in the yard....

 


 

Update 10 April

Oddbot and mintvelt are both suggesting the same thing. I did not mention it in the original post, because it was already too long.

They speak of smarter ways to produce a more distinctive data pattern. Ideally, each room would produce only one pattern. (That is what bar codes are all about: the pattern is then called a code.) But natural variations in light will probably cause many patterns per room.

Think of fingerprints. I could scan my right thumb a million times and the flatbed scanner would produce a million different bitmap files. Here is an imaginary example.

 

fingerprint_scan.jpg

I could dumb it down the way I propose above. And I would end up with a much coarser bitmap. Here is the same image in a 64x64 pixel version.

fingerprint_bitmap.gif

This 64x64 monochrome bitmap can hold 2^4096 different combinations of pixels. That is a lot "dumber" than the 480x480 version above. But it's still not very efficient: my one thumb could still produce a zillion different patterns in an algorithm like that, each of them differing from the next by only one pixel. One of those patterns would look like this.

fingerprint_bitmap_rotated.gif

It's exactly the same print, but rotated 90 degrees. To a computer it holds very different data, but to us it is still the same old information: my thumb print.

Now, if I were to follow each line in the pattern and note all the crosses and junctions and endpoints, I would end up with this "roadmap".

fingerprint_graph.png

This roadmap would hardly ever change. This kind of map is the pattern the police store in their fingerprint databases. It makes it much easier to search for matching patterns. Rotating it would not change the data.

This roadmap version holds a lot less data, but the information in it is still very distinctive. Compare:

480x480 =  230400 pixels or bits
64x64 = 4096 pixels or bits
roadmap = 20 vectors, some 64 points

Taking room prints

The same principle can be applied to the lighting situation in a room. The Green Room could be described as:
"big window, corner, doorway, dark corner, doorway, corner, small window, light corner".
I could draw up a translation table that says "big window" = 3, "dark corner" = 1, etcetera.
The pattern would then read as:
"2, 3, 1, 1, 2, 2, 2, 1".
(I bolded the first two values for recognition.)

And that is exactly what my aneurysm proposes to do. But it is still a bitmap. Rotating the bitmap would produce different data for the exact same room. The above pattern, after 90 degrees of rotation, would be:
"1, 1, 2, 2, 2, 1, 2, 3".

This is totally different data to a computer, but it holds the same information to us. If we turned the pattern "right side up" before feeding it into the computer, we could help it greatly in searching for matching patterns in its "room print database".

So which side is the right one? Without any external source of orientation (like a compass), we can only go by the available data. I propose to use the brightest light source out of the eight detected levels. Either do this in hardware (put the eye on a servo) or in software (shift the values until the highest value sits up front in the series). Both example patterns would turn into
"3, 1, 1, 2, 2, 2, 1, 2".

When you consistently use the brightest/highest value as your point of orientation, you will get consistent pattern comparisons. The hardware will probably give slightly better results, because the servo can turn anywhere with high angular resolution, zooming in on the brightest light source. The software can merely rearrange the values in 45 degree steps (360 degrees divided over eight cells).

How about that weather, huh?

All that will hopefully account for the rotation/orientation issue. How about varying light levels? Both Oddbot and Mintvelt suggest looking at differences between values, rather than looking at absolute values. Differences themselves can be calculated absolutely or relatively.

The reasoning is like this: no matter how bright the weather outside, the big window will always be the brightest looking wall in the room (in daytime anyway). Two suggestions:

A) bring the values down so that the lowest value is always zero. Subtract the lowest value from all values (absolute differences remain). The dark/bright relations will remain intact. Fewer different patterns, same information. Good!

B) bring the values up until the brightest value is maxed out. Multiply all values by 255 and divide by the brightest value (relative differences remain).
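Both options, sketched in Python on a made-up set of readings:

```python
readings = [220, 86, 117, 140, 147, 180, 82, 171]

# A) floor to zero: absolute differences between cells stay intact
floor = min(readings)
option_a = [value - floor for value in readings]   # darkest cell becomes 0

# B) stretch to the top: relative differences between cells stay intact
peak = max(readings)
option_b = [value * 255 // peak for value in readings]   # brightest becomes 255

print(option_a)
print(option_b)
```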

I hope that helped...

Would it be possible to put a small color sample on the wall or floor near the doorway of each room for him to see as he enters? Red = kitchen, blue = bathroom, etc? Maybe a barcode or RFID?

here

You will like reading it.

You will be, like, reading it.

First off, a picaxe can do all kinds of math involving decimals -it's just the answer that gets spit out as a whole number. Here's what we got:

Set your test rig up in the first room and have it take a reading every hour for a day, maybe 10 readings. Average them, or not -whatever- and stick them in your eeprom. Just like this:

writei2c sequential address (room number,b0,b1,b2,b3,b4,b5,b6,b7)

You can write to sequential addresses, it does not matter. The room number is a fixed number I pre-assign to each room, and b0-b7 are the LDR readings. Let's now assume you have 50 readings in packs of 10... 5 rooms.

Now walter heads into a new room, and does the "look for the brightest, start there" thing and ends up with another set of b8-b15.

Now you simply read back each eeprom address and compare b0 to b8, b1 to b9, b2 to b10 and so on. As we compare we just use:

How much different is b0 from b8, + or -? The room number (which could be an eeprom address or scratchpad number, to save variables) gets those "points" -the points being the difference between the current reading and the prerecorded one. No negative numbers here, just the difference as a whole number. Continue this process with the remaining variables in each pre-recorded eeprom data address.

Now this could also be done in two categories: one set of points for each pack of 10, to then be averaged together for a "room total", or each of the 10 entries in a pack could win within that room number and then compete with the winners from the other "room packs".

Another way would be to assign scratchpad or "reserved" eeprom numbers. Let's say we have pre-recorded the readings starting at address #11. Now, after walter has taken his "which room am I in" sweep, we read #11-#21 and rewrite them into #'s 1-10. As we go through our "assign points" system, we can again rewrite the winners. 10 turns into 1 winner written at address #1. In the end we have #'s 1-5 representing the winners from each room, taken from each room's original pack of ten. Again and again we are trimming down the choices, but doing it based on scratchpad or eeprom address #'s instead of using 482 different variables that we don't have.

I am rambling here, but the point is: we simply compare and assign points to samples of each room based on how close they are to what we are reading now. Room with the most points wins.

Now don't get me wrong, we are talking about a metric shit-load of "if x>y then... if y>x then..." but who cares? We don't have any pauses, PWM's or servos to deal with here, so let's just overclock the picaxe up to 16MHz (or one of those new fancy X2's up to 64MHz) and fly through these calculations. Badda boom, badda bing. Simple.

 

Christ, I hope this makes sense tomorrow... this is 5 pints of Guinness logic here, yo.

About the retrieving aneurysm.

We have the actual measured pattern from an unknown room stored in eight bytes b8 - b15. Let's call the whole set "snapshot".

Now we start reading in eeprom (or whatever the technology of choice). Start at the first archived pattern and work your way through all of them. Along the way you assign points (I'd say "evaluate fitness") for each candidate. Always noting the highest ranking pattern as you go. After you've gone through the entire archive, the pattern that holds the title wins the prize. The winner gets to call the room we're in.

Am I right so far? Because it's getting tricky from here on. This is where I sprinkle doubt:

The fitness evaluation assures that we need to archive only a limited set of patterns per room. You propose ten per room. Let's go with that number. The evaluation is an aneurysm in itself. You propose to calculate absolute differences between corresponding bytes. Here is a numerical example assuming a matching pattern.

snapshot b8 - b15     255  90 120 160 160 200  99 212
archived pattern      255  99 135 162 170 208  95 198
differences (abs)       0   9  15   2  10   8   4  14
total of those differences = 62
max difference in the set = 15

Did you also just notice that we do not need to store the first, brightest, LDR reading?! It will always be 255. Predictably so; we made it that way. Therefore no memory required! We can predict its value both in the snapshot and in the archive. The difference will always be zero. I'll keep it in the examples for now.

I wonder how this rating (either the total of 62 or the max of 15) would be able to distinguish this room from archived patterns that are not for this room. I could imagine an archived pattern from a different room that matches the snapshot with a similar, or even lower, amount of error.

snapshot b8 - b15     255  90 120 160 160 200  99 212
different pattern     255  91 121 161 159 201  47 211
differences (abs)       0   1   1   1   1   1  52   1
total of those differences = 58
max difference in the set = 52

This comparison gives a wrong match (a false positive) when we use the "total of differences" as the rating to rank our candidates by: the wrong room even scores a lower total. But for the "max difference in the set" an equally misleading candidate exists as well, I'm afraid.
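For the record, here is that ranking redone in a few lines of Python, re-checking the arithmetic on the two example comparisons above:

```python
def fitness(snapshot, candidate):
    """Sum of absolute differences: lower total = better match."""
    return sum(abs(a - b) for a, b in zip(snapshot, candidate))

snapshot   = [255, 90, 120, 160, 160, 200, 99, 212]
same_room  = [255, 99, 135, 162, 170, 208, 95, 198]
other_room = [255, 91, 121, 161, 159, 201, 47, 211]

print(fitness(snapshot, same_room))    # 62
print(fitness(snapshot, other_room))   # 58: the wrong room ranks better!
```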

heard about that somewhere....

Rik jokingly mentioned braille in the shout box. Although it has nothing to do with LDRs, it is a brilliant idea. Small speed bumps in each doorway to identify it. They could be small enough that a person wouldn't even know they were there, but the vibrations as the robot ran over them could be picked up with a mic.

You guys have me convinced... I'm gunna give it a go...

Proposal (for you to tell me is all wrong):

Forget Walter for now, this will be done on a breadboard.

One sensor, probably just a lightly "shielded" LDR on a 360 degree servo going into an adc input (the LDR shooting upward at a slight angle)

This servo rig will be placed in the center of the room, assuming Walter could do the same with sonar.

Spin one sweep, stopping and taking a reading at the (8) locations, noting the brightest

Spin back to brightest location -now calling this "start" or "0"

2nd spin, (8) stops starting at "0", (8) readings into scratchpad memory

Readback values, do the "average out" math (*255 / highest)

Store to eeprom

***Repeat for different rooms***

 

At this point, I think I could try to figure out some code to start interpreting these numbers. However, I figure we are having so much fun with this conversation, and I know how much rik likes to make charts, so I will just come back with a bunch of data for you guys. --How does all this sound to you folks? Pretty good start for a proof of concept?

If you can get hold of eight LDRs it will be easier and quicker to rotate the bits.

Like the idea! A lot. Even most of the details.

I would like to see this 360 degree servo. Home brew?

I like the simplicity (code wise) of sweeping twice: once to search for the brightest spot and again to take the final readings in the correct sequence.

The eight resulting readings stored temporarily in scratchpad. Hmmmkay. Average out math. A challenge for a picaxe: it does not do fractions/decimals, only integers. Bytes need to be processed as words (two bytes each), then reduced back to bytes. Or better, to nybbles or even half nybbles.

(A nybble is of course half a byte. Half a nybble would likely be called a crymble.)

Before storing to eeprom comes an important step: deciding where in the eeprom to store anything. Just storing at the next available address is not very smart. Well, not as smart as I am anyway. Perhaps it is smarter. In which case, how would I know?

My plan hinges on the idea of storing "You are in the green room" (coded into a single byte) at the location dictated by the eye. This is where we need to string those crymbles together into a 16 bit address.

I'll see if I can come up with some picaxe basic that can do that.

I love it that you're picking up this wild goose chase! Here's my contribution in prettified sample code. It works in my simulator. Mind the bitwise shifters: they require an X1 or X2 Picaxe.

#picaxe 28x1

' Rudimentary LDR eye
' http://letsmakerobots.com/node/6461

symbol pattern = w4

' some fake readings from eight different directions all around
' starting with brightest measurement
symbol cell0 = 220  ' brightest reading
symbol cell1 =  86
symbol cell2 = 117
symbol cell3 = 140
symbol cell4 = 147
symbol cell5 = 180
symbol cell6 =  82
symbol cell7 = 171

'symbol scale = 255 / cell0   ' this would be 1.159 if picaxe did decimals

' better ramp up the scale to word size for now
symbol scale = 65535 / cell0 ' 297

' scale up the values, normalizing according to brightest
' undo ramping up
b0 = cell0 * scale / 256  ' = 255
b1 = cell1 * scale / 256  ' =  99
b2 = cell2 * scale / 256  ' = 135
b3 = cell3 * scale / 256  ' = 162
b4 = cell4 * scale / 256  ' = 170
b5 = cell5 * scale / 256  ' = 208
b6 = cell6 * scale / 256  ' =  95
b7 = cell7 * scale / 256  ' = 198


pause 1000

' shift the reading 6 bit positions to the right
b0 = b0 >> 6 ' 11111111 >> 00000011
b1 = b1 >> 6 ' 01100011 >> 00000001
b2 = b2 >> 6 ' 10000111 >> 00000010
b3 = b3 >> 6 ' 10100010 >> 00000010
b4 = b4 >> 6 ' 10101010 >> 00000010
b5 = b5 >> 6 ' 11010000 >> 00000011
b6 = b6 >> 6 ' 01011111 >> 00000001
b7 = b7 >> 6 ' 11000110 >> 00000011


pause 1000

' repeat for each LDR:
'  put the 2 remaining bits from each byte in pattern word
'  then shift word to left by 2 positions, making room for next two bits
pattern = 0             '               00
pattern = pattern | b0  '               11
pattern = pattern << 2  '             11
pattern = pattern | b1  '             1101
pattern = pattern << 2  '           1101
pattern = pattern | b2  '           110110
pattern = pattern << 2  '         110110
pattern = pattern | b3  '         11011010
pattern = pattern << 2  '       11011010
pattern = pattern | b4  '       1101101010
pattern = pattern << 2  '     1101101010
pattern = pattern | b5  '     110110101011
pattern = pattern << 2  '   110110101011
pattern = pattern | b6  '   11011010101101
pattern = pattern << 2  ' 11011010101101
pattern = pattern | b7  ' 1101101010110111 aka 55991 aka $DAB7


pause 1000
 

Don't forget to store your room identification code (room number) in eeprom at that address: 55991.