Let's Make Robots!

CDR (Compact Disc Robot) UPDATE - Video

Explores via Ultrasound, Plays Zelda's Lullaby ( kinda ) when stuck

Hello LMRtians!  I finally got my bootloader code working.  I got my package in the mail from Sparkfun this morning.  I added a ball caster to the rear for better motion.  I used the bootloader to change the pins for the motors and speaker ( so they wouldn't conflict with the pins the bootloader needs ).  I adjusted the motor times for the new CD base.  I'm really happy with the turn times now.  I'm getting pretty close to 90 degree turns and 180 degree U-turns.

I still need to add a reset button to the board.  I'm pretty unsatisfied with the delay in reading the ultrasound from the eyes.  CDR gets too close to blocks all too often.  I'm thinking of rewriting the whole thing and moving to the Sharp IR eyes.  

Nevertheless, I hope the video links OK.  I've been promising to do one for a while now.

Silly thing is, I'm happier about the success of the bootloader than anything.  I reprogrammed the bot in-circuit faster than with my regular programmer!  In 733+, I guess I'd say \/\/()()+!


Nice robot! Simple and practical. I have built a similar robot, but I had to add a second CD because the motors would come off too easily.

I don't understand why you complain about Ping measurement delays. According to the datasheet, the trigger time is 5 µs, the burst time is 200 µs, the return time after the burst can take up to 18.4 ms, and you need to wait 200 µs between measurements. So in total you can get measurements very fast when objects are close, and up to about 19 ms when there are no objects. A Sharp IR sensor needs 39 ms from the start of one measurement to the start of the next for good results.
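Adding up those datasheet figures gives a concrete worst case. Here is a quick sketch of the arithmetic (the figures are the ones quoted above; the variable names are mine):

```python
# Worst-case time for one Ping))) measurement cycle, using the
# datasheet figures quoted above (all in microseconds).
TRIGGER_US = 5        # trigger pulse
BURST_US = 200        # ultrasonic burst
MAX_ECHO_US = 18_400  # echo return when nothing is in range
HOLDOFF_US = 200      # required wait between measurements

worst_case_us = TRIGGER_US + BURST_US + MAX_ECHO_US + HOLDOFF_US
print(worst_case_us / 1000.0)  # about 18.8 ms per reading, worst case
```

So even with nothing in range, the Ping completes a cycle in under 19 ms, roughly twice the update rate of the Sharp's 39 ms cycle.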

However, walls or boxes at an angle to the Ping sensor can't be detected, because the sound is bounced away. I had to add bumper sensors to my robot to avoid crashing into walls. If you use the Sharp sensor, no bumper sensors will be needed, because the light is reflected in all directions.

If you want to see my robot (actually I should say my daughter's) take a look here: http://letsmakerobots.com/node/22467 and after I added the bumper sensors, see the video at the bottom of this page: http://letsmakerobots.com/node/22414

Thanks for looking!

Looking back, I should have said I'm unhappy with the speed of my software in reading the Ping.  It was the first time I've used an ultrasonic sensor.  Once I got it working, I moved on without trying to improve it.  Really... pretty much all of this is a first.  Like you say, if I want to keep the Ping, I should either add a bumper or change the code to have the sensor do a continuous sweep ( 10-2 or 11-1 o'clock ).  I will probably move to the Sharp.  I want to play more with A/D.  I also want to add on to the functionality.  I will need some interrupts to add some extra servos and other peripherals.
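For what it's worth, the continuous-sweep idea boils down to bouncing the sensor servo back and forth across that arc. A sketch in Python for clarity (the angles, step size, and function name are made up; on the actual MCU this would drive the servo and trigger a reading at each step):

```python
# Hypothetical sweep pattern: bounce the sensor between the
# "10 o'clock" and "2 o'clock" positions (about 60 degrees either
# side of straight ahead), one reading per step.
def sweep_angles(left=-60, right=60, step=15):
    """Return one full left-right-left pass of servo angles."""
    forward = list(range(left, right + 1, step))
    return forward + forward[-2::-1]

print(sweep_angles())
```

A sweep like this costs nothing extra in hardware; it just trades a little forward speed for earlier warning of obstacles off to the sides.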

The base is very floppy with a single CD.  After a number of bumps, it ceases to track straight.  

I took a look at your uBotino.  Very nice performance!  I almost wish our playroom wasn't carpeted.  The toys make a great obstacle course.  

Now take a look at a similar robot that uses a Sharp IR sensor. See how well it avoids obstacles without the need for bumpers. The sensor can also be used for mapping, because the beam is just a tiny spot, not a cone like the US (ultrasonic) sensor's. The US sensor is good for seeing objects from a greater distance; I would use both sensors when scanning. One observation: on this robot the Sharp sensor is mounted horizontally, but you should mount it vertically if you want to take a measurement at every degree of the servo's rotation. It doesn't matter how you mount the US sensor, but it needs to be higher than 10cm (4") off the ground so it doesn't pick up reflections from the carpet.

I was so taken with the aesthetic of "eyes", I never would have thought to do a vertical mount.  Like all such things, once someone points it out to you, it makes perfect sense.  2 servos would give vertical and horizontal freedom.  You'd have to take the angles into account, but that could provide more of a contour mapping.
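Taking the angles into account on a two-servo mount is just a spherical-to-Cartesian conversion. A minimal sketch (the function name and frame conventions are mine, not from any post here):

```python
import math

# Convert a hypothetical pan/tilt sensor reading into an x/y/z point
# relative to the servo pivot. pan=0, tilt=0 means straight ahead.
def reading_to_point(pan_deg, tilt_deg, dist_cm):
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    x = dist_cm * math.cos(tilt) * math.cos(pan)  # forward
    y = dist_cm * math.cos(tilt) * math.sin(pan)  # left/right
    z = dist_cm * math.sin(tilt)                  # up/down
    return (x, y, z)
```

Sweeping both servos and collecting these points is exactly the contour map: each (pan, tilt, distance) triple becomes one point on the surface of whatever the beam hits.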

I was just checking out this one: http://letsmakerobots.com/node/23341 Now that is a serious mapping project.  Great entry too.

 

So far all the decent mapping solutions I've seen are done on the PC. I still want to do at least some of those features on the microcontroller. A separate one if needed, perhaps with an SD flash card attached. There is info about the Poor Man's Laser sensor, which is basically a long-range Sharp IR on a servo. Now the mapping code has to be written...

I was thinking along the same lines.  The bot could take all of the raw data points.  Maybe have it upload them to a "home base" PC with XBee when full or when it needs results.  The PC could crunch the data and inform the bot about relevant things it needs to know, and it could assign new missions.  I don't think I have the appetite for the mapping code right now.  It feels too much like everyday work at the office.

When I hit the lottery, this could lead to a swarm of little mappers, coordinating efforts (pardon the pun)...

What does it matter if the Sharp sensor is mounted vertically or horizontally? I've seen both, but I don't see that one would be better than the other. I have one mounted horizontally and am taking readings about every 1 degree, and it seems to work fine (I'm working on a mapping scheme). Am I missing something? Either way the light is focused at a point, reflects, then the sensor reads it.

Well, it's in the datasheet. If they wrote it that way, they might know something... But I think it's because of how the range is determined. I mean, it's less error prone if you are moving the sensor in a plane perpendicular to the reading plane.

Regarding the mapping scheme, are you using Arduino? If so, I would be interested in having a look at your code. I am having trouble writing mine because I don't have enough programming experience to express my thoughts in code. Right now I am trying to write the best pathfinding solution for a set map. I am looking at 2 ways: A* and wavefront. The problem I have with A* is the need to keep 2 lists that can grow to more than half the map size, which is big in my case. Wavefront works more easily, but the guy who wrote the C code I found rescans and recalculates the path at every cell, which I don't like. I would calculate the path once, then scan as I drive and update only if necessary. But I start with a known map.

To do SLAM we need a faster scanning sensor system, one capable of longer range and less prone to errors. That would be a LIDAR (laser) system. On the PC we can easily add more cells to the map (an array of bytes, let's say), but on the microcontroller the memory is limited, so I would segment a big map into sections, all saved on an SD card.

Because of memory considerations, I think there should be a dedicated microcontroller in the system that does only mapping, and the "brain" asks it for help only when needed. For instance, the Mapper would control a servo and read a sensor (or sensors) to get distance data (360 degrees if possible), determine the current position on the map, maintain a coordinate system for places on the map, and calculate the path from the current location to a new one. The Brain would hold a list of places of interest (coordinates on the map), so the robot could get commands like "go from the living room to the kitchen and back", asking the Mapper for directions (pathfinding) as needed.
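For reference, the wavefront approach is just a breadth-first flood outward from the goal, followed by a downhill walk from the start. A minimal sketch in Python (the grid encoding, 0 = free / 1 = obstacle, and all names are my own assumptions, not the C code being discussed):

```python
from collections import deque

def wavefront(grid, start, goal):
    """Flood distances from goal over a 2D grid, then walk downhill
    from start. Returns the path as a list of (row, col), or None."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:                      # breadth-first flood from the goal
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and dist[nr][nc] is None):
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    if dist[start[0]][start[1]] is None:
        return None                   # no route to the goal
    path, (r, c) = [start], start
    while (r, c) != goal:             # follow strictly decreasing distances
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and dist[nr][nc] is not None
                    and dist[nr][nc] < dist[r][c]):
                r, c = nr, nc
                break
        path.append((r, c))
    return path
```

Note the memory profile: unlike A*, there are no open/closed lists to manage, just the one distance grid, which is part of why wavefront is attractive on a small microcontroller.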

Yeah, I saw in the datasheet that the sensor should be mounted vertically; I just wonder why... no matter though.

I'm not using an Arduino; I'm using a Freescale MCU that, as of now, talks to my laptop serially (eventually wirelessly). Each time I complete one scan with the servo/Sharp, the MCU sends the distances and angles from that scan to the laptop. On the laptop I use Matlab to extract lines out of the scan data (using an algorithm called split-and-merge). This is as far as I've gotten so far (I haven't built the robot yet, so it's just stationary). I too plan on starting with a known map; also, the robot will know roughly where in the map it is when it starts. My plan is to use the lines from the scan data, compare them to the known map (also estimating position change with odometry), and get accurate positioning.
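For anyone curious, the split half of split-and-merge is short enough to sketch. This is a generic illustration in Python, not the poster's Matlab code; the threshold value and all names are invented:

```python
import math

def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    return num / math.hypot(bx - ax, by - ay)

def split(points, threshold=0.1):
    """Recursively split a scan (list of (x, y) points) at the point
    farthest from the chord between its endpoints. Returns the
    breakpoints delimiting the extracted line segments."""
    a, b = points[0], points[-1]
    idx, dmax = 0, 0.0
    for i, p in enumerate(points[1:-1], 1):
        d = point_line_dist(p, a, b)
        if d > dmax:
            idx, dmax = i, d
    if dmax <= threshold or len(points) < 3:
        return [a, b]                 # all points close enough: one segment
    left = split(points[:idx + 1], threshold)
    right = split(points[idx:], threshold)
    return left[:-1] + right          # shared breakpoint appears once
```

The merge pass would then re-join adjacent segments whose fitted lines are nearly collinear, which is what cleans up over-splitting from scan noise.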

I did a similar bot, called it CD Cart, and used a Sharp IR for eyes; it worked pretty well.  I never did get around to replacing the Sharp with a Ping, so maybe that was a good thing.  I can't really tell from the video, do you have a caster on the back?