
Analog_Binary system with associative memory

Demonstrates non-numerical associative memory and learning.


Greetings! This first post is an attempt to introduce one of the projects I've been working on lately.

If you enlarge the photo, you'll notice I've tagged the chips for identification, which also hints at what is being built. The base and wheels of the robot are not pictured; this is just the experimental board for a portion of the memory system and one of the A/D converter chips used with the CdS pair for detecting light levels.
There is no MCU or processor in this project; it's more of a memory-processing architecture. The RAM is a simple 1024 x 4-bit chip, of which there will be several. One of the ideas behind the design is that each sensor or sensory array has its own memory-processing circuitry, with additional provisions for memory match detection, novelty, attention, and so on. The basic rule of this type of architecture is separate memory regions for all inputs and outputs. There is also memory for "higher cognition" and for further sensory I/O processing as it relates to the overall state of the machine.
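To make the "separate memory regions" rule a little more concrete, here is a rough software analogy (my own illustrative names, not the actual chip layout): each sensor or output gets its own 1024 x 4-bit store plus a mismatch flag standing in for the match-detection / novelty / attention circuitry on that region.

# Illustrative sketch only: each input or output gets its own memory region.
import random

def new_region():
    return {
        "ram": [random.randint(0, 0xF) for _ in range(1024)],  # 1024 x 4-bit words
        "mismatch": False,  # stands in for the match-detection circuitry on that region
    }

regions = {
    "sonar_ranger": new_region(),      # primary distance sensor
    "ir_obstacles": new_region(),      # obstacle detector array
    "light_cds_pair": new_region(),    # CdS light-level inputs
    "motor_output": new_region(),      # output-side memory
    "higher_cognition": new_region(),  # processing over the overall machine state
}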
The primary difference between this machine and other small-scale robots is the use of distributed functions and associative memory at several "Orders" of processing, as I like to call them. Orders can be thought of as levels or layers of processing. Theoretically, several orders are possible within a particular machine, but in practical terms three is a realistic goal.
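One way to picture the chaining between orders, purely as a software sketch with assumed bit widths (not a finished circuit): the 4-bit outputs of two first-order regions are concatenated to form the address of a second-order region.

# Speculative illustration of chained "orders" of memory processing.
import random

order1_sonar = [random.randint(0, 0xF) for _ in range(1024)]
order1_light = [random.randint(0, 0xF) for _ in range(1024)]
order2 = [random.randint(0, 0xF) for _ in range(256)]  # addressed by two 4-bit first-order outputs

def second_order_read(sonar_addr, light_addr):
    a1 = order1_sonar[sonar_addr & 0x3FF]   # first-order output (a vector a')
    a2 = order1_light[light_addr & 0x3FF]
    return order2[(a1 << 4) | a2]           # lower-order outputs form the higher-order address

print(second_order_read(37, 512))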

The associative memory scheme is key to the robot's operation. Consider the address input to a RAM chip as vector A, the data input as vector b, and the data output as vector a'. In a very simple example, a sensory input pattern on vector A, provided by, say, an analog-to-digital converter, activates particular address lines, which in turn output an initially random 4-bit data pattern as vector a'. The RAM defaults to read mode and will output whatever contents happen to be stored at the addressed location whenever a valid address pattern (vector A) is present. The output vector a' can therefore be considered associated with vector A, the address input pattern.

In this example we can use four infrared obstacle detectors (RB-Cyt-75 or equivalent) connected to the data input port, representing vector b. The detectors would be positioned to best detect obstacles in the robot's path (naturally). The output data on vector a' and the input data from vector b are constantly compared by additional circuitry. When a mismatch between the two vectors is detected, the RAM switches to write mode and writes the data on vector b into the register currently addressed by the primary input sensor, the A/D converter. Let's say the input to the A/D converter is an ultrasonic ranging sensor; the binary output from the converter is then a representation of the distance between the robot and an obstacle.

I never make direct connections from memory devices to motor controllers, as I believe there should be at least several layers of "decision making" between the primary memory processes and any physical output: motors, end effectors, etc. But for the sake of this article we will allow motor control from output vector a', possibly with the aid of some minor steering logic. To continue: after the data pattern of vector b is saved in RAM, the RAM switches back to read mode and now outputs the vector b pattern on vector a', which is connected to a decoder and the motor control. The motors, via the decoder, will adjust to steer away from the obstacles. The RAM stays in read mode as long as vector a' and vector b match. In essence, the robot is learning to associate the ultrasonic ranging data with the obstacle-detection data.
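Here is a minimal software sketch of that read/compare/write-on-mismatch cycle. The real thing is discrete logic, and the names, bit widths, and steering rule below are my own illustrative assumptions:

# Vector A  : 10-bit address from the ultrasonic ranger's A/D converter
# Vector b  : 4-bit pattern from the IR obstacle detectors
# Vector a' : 4-bit pattern read out of RAM at address A
import random

RAM_WORDS = 1024
ram = [random.randint(0, 0xF) for _ in range(RAM_WORDS)]  # initially random contents

def steer(a_prime):
    """Toy stand-in for the decoder + motor control driven by vector a'."""
    left, right = bool(a_prime & 0b0001), bool(a_prime & 0b1000)
    if left and not right:
        return "turn right"
    if right and not left:
        return "turn left"
    return "go straight"

def memory_cycle(range_reading, obstacle_bits):
    """One cycle: read a' at address A, compare with b, write b on mismatch."""
    address = range_reading & 0x3FF      # vector A (10 address lines)
    a_prime = ram[address]               # read mode: output current contents
    if a_prime != obstacle_bits:         # mismatch detected
        ram[address] = obstacle_bits     # write mode: store vector b
        a_prime = obstacle_bits          # output now reflects the new association
    return steer(a_prime)

# A close range reading arrives while obstacles are seen on the left
print(memory_cycle(range_reading=37, obstacle_bits=0b0001))
# Later, the same range reading alone recalls the stored association
print(steer(ram[37 & 0x3FF]))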

More information will be included as the project advances...I'll also add plenty of photos and schematics when I can.
Later I will describe some of the analog functions and how the system integrates it all into a working robot.



 


At this stage, and in this particular model, no "punishment" is really required. Everything is learned by association: if you have a set of obstacle detectors that are pre-wired to perform certain behaviors, like turning the motors left or right, then those behaviors will become associated with another sensing function if it is activated at roughly the same time, for example if the sonar is picking up close-range readings while the obstacle detectors are being triggered. It's actually a simple classical conditioning example. But because of the possible "depths" the Orders are capable of, it will be interesting to see which behaviors are learned with which sensor activity. After some experience, the machine will learn to associate many environmental conditions with the correct behaviors. So you're correct in assuming that survival = success; there's just no "pain" involved, so to speak.
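As a rough software sketch of that conditioning (illustrative names and values only, not the actual circuitry): a close sonar reading that co-occurs with a pre-wired obstacle behavior gets written into the sonar region's memory, so later the sonar reading alone recalls the same behavior.

import random

ram = [random.randint(0, 0xF) for _ in range(1024)]  # sonar-region memory
TURN_LEFT = 0b1000  # behavior pattern the obstacle detectors are pre-wired to drive

def experience(sonar_reading, obstacle_pattern):
    """Sonar and obstacle detectors active at roughly the same time."""
    addr = sonar_reading & 0x3FF
    if ram[addr] != obstacle_pattern:   # mismatch -> write the association
        ram[addr] = obstacle_pattern

def recall(sonar_reading):
    """Sonar alone: read out whatever behavior pattern has been associated."""
    return ram[sonar_reading & 0x3FF]

experience(sonar_reading=42, obstacle_pattern=TURN_LEFT)  # conditioning trial
print(recall(42) == TURN_LEFT)                            # True: learned response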

Thanks for the interest. I'll post as much as I can with the time I have.

 

Lol, in a neurological sense, punishment is a way of learning, just like with Pavlov's dogs.

"punishment is just a cruel reward, reward is just an awesome punishment"

-oz

You're right on the money, Mmlad; classical conditioning is part of what's going on here. Which oz is that a quote from? Or am I reading it wrong? ;))

Lol, I'm oz! That's a shortcut for my pseudo-Latin name, Ozmus Ferous.

...

Read it again a couple of times... I'll answer all your questions.

Thanks for the interest ;))

 

Finally! You have posted it!

Now this is what I call "advanced", not like my neural nets, which are made just for small personal research purposes (much like Grey Walter's tortoises with two neurons). I don't really think Grey Walter's neurons simulate real ones; mine are built using some of the McCulloch-Pitts principles, but with my own circuit and my own philosophy. It's so simple that it's only as smart as an ant: my neurons simulate a real neuron but don't really do that much (must be because I need billions of them).

What neural nets did you use? If you made your own, do you have some schematics for them?

And what memory did you use? I only used some capacitors and transistors, connected uniquely for the overall neuron (including the memory).

And lastly, how do you "punish" it?