Let's Make Robots!

OK, finally signed up and a few questions...

Hi everyone.

After many days of looking around this site, I have finally signed up and hope to start my new robot very soon.

However, I would like to ask: is it the case that imagination really is the limit, or does the technology hold things back?

I ask this as I would eventually like to create a robot with advanced AI that can "learn".

I was also wondering if anyone has connected two or more MCUs to provide extra power, and maybe run larger, more complex code, for these reasons?

I welcome any advice.

Thank you,

Regards,

Steve

Combining multiple CPUs is just a matter of choosing a protocol so they can talk to each other.
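As a minimal sketch of what such a protocol could look like in C: the 4-byte frame layout (start byte, command, value, XOR checksum) and the function names below are made up for illustration, not taken from any particular library.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical 4-byte frame: [0xAA start][command][value][checksum].
   The checksum is the XOR of command and value, so the receiving
   MCU can reject corrupted frames. */
#define FRAME_START 0xAA

size_t build_frame(uint8_t cmd, uint8_t value, uint8_t out[4]) {
    out[0] = FRAME_START;
    out[1] = cmd;
    out[2] = value;
    out[3] = cmd ^ value;          /* simple XOR checksum */
    return 4;
}

/* Returns 1 and fills cmd/value if the frame is valid, 0 otherwise. */
int parse_frame(const uint8_t in[4], uint8_t *cmd, uint8_t *value) {
    if (in[0] != FRAME_START) return 0;
    if ((uint8_t)(in[1] ^ in[2]) != in[3]) return 0;
    *cmd = in[1];
    *value = in[2];
    return 1;
}
```

One side would push the four bytes out over UART/SPI/I2C; the other side runs `parse_frame` on what it receives. The transport hardly matters as long as both micros agree on the frame.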

Regarding AI: what is the definition of AI? Is a mapping algorithm AI? Learning what the environment looks like, storing that map in memory, and later using it to navigate around: is that already AI? I would say yes.

AI is also when you use a light sensor combined with, for example, a heat sensor. Learning that "cold" light (LED light, daylight, etc.) is safe and can be approached, but "hot" light (a flame) is not safe and should be avoided. This is simple AI too.

As a programmer you should be able to write this down in code. Just start simple with only one CPU and then expand it step by step.
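That hot/cold light rule really is only a few lines of C. In this sketch the sensor scales (a 10-bit ADC brightness and a temperature in degrees Celsius) and the thresholds are assumptions for illustration, not calibrated values.

```c
/* Decide what to do about a light source, given brightness and
   temperature readings. Scales and thresholds are illustrative. */
typedef enum { IGNORE, APPROACH, AVOID } light_action;

#define LIGHT_MIN  300   /* below this: no strong light source present */
#define HOT_LIMIT   45   /* above this °C: treat the source as a flame */

light_action classify_light(int brightness, int temperature_c) {
    if (brightness < LIGHT_MIN) return IGNORE;    /* nothing to react to  */
    if (temperature_c > HOT_LIMIT) return AVOID;  /* "hot" light: flame   */
    return APPROACH;                              /* "cold": LED/daylight */
}
```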

PS: While writing this some ideas popped up. Thanks for asking here and inspiring me ;-) 

I don't think we'll see a PIC-based machine passing a Turing Test any time soon. ;-)

Even on a computer, human-like AI has trouble passing that test. But our robots have basic insect-like AI and we want them to grow to animal-like AI, say dog intelligence. A dog knows its way around the house, can play fetch and can feed itself. I say this is the next step for our basic insect-intelligence robots. After we are comfortable with dog-like intelligence, we should attempt human-like intelligence. Meanwhile the technology will catch up to allow us to do so; right now it's a bit limited, unless we shell out a large sum of cash.

Dog-like AI can be done with microcontrollers. For speech recognition there are modules like the VRbot, and for text-to-speech there are modules like the SpeakJet, so the interface with the user is easy. The hardest parts are the mapping and the image processing. For image processing we can get capable cameras that have built-in image analysis and return the coordinates and size of color blobs. Good enough for fetching stuff. The only hard part I see is the actual mapping, because it needs a large storage space. But with the addition of a uSD card shield, that can be done too. So what keeps us from doing it? Coding, that's the problem. We, the hobby robot builders, are not computer programmers who are familiar with algorithms. Well, most of us. Some builders have the necessary experience, so we need their help to climb this wall in front of us. But I would say that even for PC programmers, writing efficient mapping code for microcontrollers is difficult, because of how little RAM is available. PC algorithms assume a large RAM space, so they can afford to go wild. So we need a smart guy who can work with limited resources.
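As a toy example of using those blob coordinates for fetching, the steering decision can be very simple. The frame width and deadband below are assumed values, not taken from any specific camera module.

```c
/* Steer toward a tracked color blob, e.g. a ball to fetch.
   Assumes a camera frame 160 pixels wide and blob_x giving the
   blob centre in pixels; both numbers are illustrative. */
typedef enum { TURN_LEFT, GO_STRAIGHT, TURN_RIGHT } steer_cmd;

#define FRAME_WIDTH 160
#define DEADBAND     10   /* pixels of tolerance around the centre */

steer_cmd steer_to_blob(int blob_x) {
    int centre = FRAME_WIDTH / 2;
    if (blob_x < centre - DEADBAND) return TURN_LEFT;
    if (blob_x > centre + DEADBAND) return TURN_RIGHT;
    return GO_STRAIGHT;
}
```

Combined with the blob size (which grows as the robot gets closer), this is enough of a control loop to drive up to a ball.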

Here is a real case. My robot fits in a 20x20 cm square. All the measurements the IR and US sensors return are in cm. I can generate a list of coordinates for the obstacles detected in a scan. Because of EEPROM space constraints, I can store a map with a cell size similar to my robot's size. Each cell is represented by a byte in a 2D array. We can use some tricks to pack 2 cells into a byte, perhaps even 4 cells if necessary. To have an accurate map, we can encode different cell information in the value of that byte. For instance, a cell might have a wall on the north side but otherwise be empty; we can store the value 200 in that byte, 201 for an east wall, 202 for a south wall, etc. A cell that is completely blocked can be 255. Say we have a couch that occupies 6 cells: for the cells on the edge that the robot can measure we can encode the value 250, but for the cells closer to the wall we can write 255. Makes sense?

Similarly, we can write 100 for a cell that contains the leg of a table or a chair. The robot might go between the legs, but say the chair occupies 2 cells. Instead of blocking access, the robot can decode the 100 value and approach with caution, measuring the distance between the legs, centering itself and passing with care, then updating its new position on the map. In the case of chairs, it is also possible that a human moved them around and they are actually occupying other cells. In that case the robot can mark their new position on the map and again proceed with care. Why go between the chair or table legs? Say we want to play ball fetch: the ball will roll there for sure, so the robot should be able to follow.

So, storing the map is easy: a 2D array. Setting rules on what to do based on the encoded value is not so easy, but doable.
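A sketch of that byte-per-cell encoding in C, using the code values proposed above, plus the two-cells-per-byte packing trick. Note that packing leaves only 16 distinct codes per cell, so a reduced code table would be needed in that mode; the helper names here are made up for illustration.

```c
#include <stdint.h>

/* Cell codes, following the values proposed in the comment above. */
enum {
    CELL_FREE    = 0,
    CELL_CAUTION = 100,   /* table/chair legs: pass with care      */
    CELL_WALL_N  = 200,   /* empty cell with a wall on one side    */
    CELL_WALL_E  = 201,
    CELL_WALL_S  = 202,
    CELL_WALL_W  = 203,
    CELL_EDGE    = 250,   /* measurable edge of large furniture    */
    CELL_BLOCKED = 255
};

/* Packing two cells per byte halves EEPROM use, but leaves only
   16 possible codes per cell (0..15), so the full code table above
   would have to be remapped to 4-bit values first. */
uint8_t pack_pair(uint8_t a, uint8_t b) {
    return (uint8_t)((a << 4) | (b & 0x0F));
}
uint8_t unpack_hi(uint8_t p) { return p >> 4; }
uint8_t unpack_lo(uint8_t p) { return p & 0x0F; }

/* One possible traversal rule: edges and fully blocked cells are
   off-limits; everything else is at least approachable. */
int cell_passable(uint8_t code) {
    return code != CELL_BLOCKED && code != CELL_EDGE;
}
```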

Now here is my problem. As I said in the beginning, the sensors return distances in cm, but my map cells are 20x20 cm. How do I compare the sensor readings with the map? How do I make decisions on how to update the map or correct the robot's position on the map? I use IR and US sensors on a panning head, and also a compass and encoders to measure the traveled distance. The robot travels from the center of one cell to the center of the next cell and makes 90-degree turns, except for the "proceed with care" situations. Say we eliminate that special case for now to keep things simple. How do I use the sensor readings to verify the position on the map? This is where I need help.
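One possible starting point, under the center-to-center, 90-degree motion model described above: quantize each range reading to cells, predict which map cell the obstacle should fall in, and check that cell against the stored map. This is only a consistency check, not a full localization answer; the heading convention, rounding, and function names are all assumptions for illustration.

```c
#include <stdint.h>

#define CELL_CM 20   /* cell size matches the robot's 20x20 cm footprint */

/* Robot pose: current cell plus a compass heading quantized to
   0=N, 1=E, 2=S, 3=W, matching the 90-degree-turn motion model. */
typedef struct { int cx, cy, heading; } pose;

/* Convert a range reading (cm), taken straight ahead from the cell
   centre, into the map cell the obstacle falls in. Adding half a
   cell before dividing rounds to the nearest cell boundary. */
void reading_to_cell(const pose *p, int range_cm, int *ox, int *oy) {
    int cells = (range_cm + CELL_CM / 2) / CELL_CM;
    static const int dx[4] = { 0, 1, 0, -1 };  /* N, E, S, W */
    static const int dy[4] = { -1, 0, 1, 0 };
    *ox = p->cx + dx[p->heading] * cells;
    *oy = p->cy + dy[p->heading] * cells;
}

/* Consistency check: the predicted obstacle cell should be marked
   occupied (nonzero, in this sketch) on the stored w-by-h map. If a
   scan's predictions repeatedly miss, odometry has likely drifted. */
int reading_matches_map(const uint8_t *map, int w, int h, int ox, int oy) {
    if (ox < 0 || oy < 0 || ox >= w || oy >= h) return 0;
    return map[oy * w + ox] != 0;
}
```

Running this over a whole panning scan gives a hit/miss score for the assumed pose; trying the neighboring cells and headings and keeping the best-scoring pose is a crude, RAM-cheap form of map matching.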

They're kind of a cheat, but they work. "NorthStar" I think is the system that my Rovio is supposed to use (not that I can get it on my network...)

Yeah, thanks, but that still does not answer my software problem. For now, the map is given beforehand; in the future, the robot will build the map by itself. If I always rely on external beacons, the robot will only work where those beacons are installed. I am trying to avoid that. I could build a beacon system easily with the WiiCam board that I got from CtC. Another thought I had for beacons was a combination of IR and US. I can do it, but like I said, I would not be able to take the robot out of that room and have it still work. Sure, with a Kinect it would be easier, but I'm trying to use microcontrollers only, even if it's not going to be fast or precise, or if it will require a few microcontrollers to work. So, any ideas?

I guess you're more of a hardware guy, just like me, so you come up with hardware solutions...

OK, some very interesting comments I must admit.

Getting back to my original question regarding the AI: I am talking about very basic "AI" as opposed to advanced stuff.

Also, I will be using the PIC32 along with C; the reason being, I have already ordered my PIC32 board, and I am a C programmer.

I would just like to ask whether it is possible to combine multiple PIC32 MCUs, and also, is anyone aware of being able to use either C++ or C# with such 32-bit MCUs?

One last note: I have heard of people building robots using Intel Atom CPUs onboard.

Are we at the stage where we can obtain an old CPU, get its datasheet, and set it up to run C or C++ code instead of a PIC-based system?

If this is the case, then I can't wait to get into this properly.

Thank you again for the comments,

Regards,

Steve

You can definitely combine multiple PIC32 micros. There are many, many ways of doing this, but I'd suggest you look at the integrated USART modules for some easy micro-to-micro comms.
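On the receiving side, a USART RX interrupt typically hands you one byte at a time, so a small state machine reassembles frames as bytes arrive. This is a portable sketch, not PIC32 register code; the 4-byte frame format (0xAA start, command, value, XOR checksum) is made up for illustration.

```c
#include <stdint.h>

/* Byte-at-a-time receiver for a hypothetical 4-byte frame
   [0xAA][cmd][value][cmd ^ value]. Call rx_feed() with each byte
   received (e.g. from the USART RX interrupt); it returns 1 only
   when a complete frame with a valid checksum has been assembled. */
typedef struct { uint8_t buf[4]; int n; } rx_state;

int rx_feed(rx_state *s, uint8_t byte, uint8_t *cmd, uint8_t *value) {
    if (s->n == 0 && byte != 0xAA) return 0;  /* wait for start byte  */
    s->buf[s->n++] = byte;
    if (s->n < 4) return 0;                   /* frame not complete   */
    s->n = 0;                                 /* reset for next frame */
    if ((uint8_t)(s->buf[1] ^ s->buf[2]) != s->buf[3]) return 0;
    *cmd = s->buf[1];
    *value = s->buf[2];
    return 1;
}
```

The same routine works whichever micro is on the receiving end, which is the point of agreeing on a shared frame format.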

I don't know of any C++ or C# compilers for PIC micros, but I suspect that you'd be better off just using one of the standard C compilers anyway.

While you may be able to get the datasheet on a CPU, they need a lot of extra peripheral circuitry to be useful. Anyone building a robot with something like an Intel Atom is almost certainly using a small form factor motherboard with all that peripheral stuff already on it. They're also likely to have a Windows/Linux/other operating system loaded on it to run their robot program, rather than programming the whole thing directly themselves.
If you really need the processing power there are some good single board computers out there - Raspberry Pi, BeagleBoard, Chumby to name a few.