At the risk of sounding pompous, I think that you're talking about a different cause-and-effect. Your niece experiments with expressions and gauges their effect by the reactions of the "audience". I'd guess that this robot is not assessing the impact the expressions have on people; it's just comparing the expression to a pre-existing expression "template".
So the robot has an ideal template of what "happy" should look like (for example), then twitches the face-expression actuators and examines whether the new expression is closer to, or further from, the template. Successive iterative twitching of more actuators will bring the overall expression closer to the template. They are probably using Albert Einstein because they already have a lot of archival info on his various expressions, and so it's easier to get a decent start on the matching.
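If I had to guess at the mechanics, it'd be something like this greedy hill-climbing sketch. To be clear, all the names, numbers, and the five-actuator "face" here are mine, not from the article: an expression is just a vector of actuator positions, and any random twitch that moves it closer to the template gets kept.

```python
import random

TEMPLATE = [0.8, 0.2, 0.9, 0.1, 0.5]  # made-up "happy" actuator targets

def distance(expr, template):
    """Sum of absolute differences between actuator positions."""
    return sum(abs(a - b) for a, b in zip(expr, template))

def evolve_expression(template, steps=2000, seed=0):
    rng = random.Random(seed)
    expr = [rng.random() for _ in template]  # start from a random face
    for _ in range(steps):
        i = rng.randrange(len(expr))            # pick one actuator
        candidate = expr[:]
        candidate[i] += rng.uniform(-0.1, 0.1)  # small random twitch
        candidate[i] = min(1.0, max(0.0, candidate[i]))
        if distance(candidate, template) < distance(expr, template):
            expr = candidate                    # keep twitches that help
    return expr

final = evolve_expression(TEMPLATE)
print(distance(final, TEMPLATE))  # far smaller than the starting distance
```

That's "evolving" in the loosest sense: no model of the audience, just a scoreboard against the template.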
"learning" in this case is probably not the best word to use - "evolving" facial expressions might be better.
Agreed. "learning" is pushing it a little bit ;)
The way I understood the project, though, was that the robot attempts a face. If the face is close to what the human wanted to see, the robot is "rewarded". In that case the robot is not comparing its current expression to a template; it's waiting for the human to say "yes, that's it, good job".
Admittedly, I could be misunderstanding the writeup.
The writeup doesn't go into much detail and it's dumbed down for us plebeians, but I'd think that if a computer was programmed to maximise pattern matching, then "giving a reward" to the computer could be grading the match on a scale of 0 to 1.
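For what it's worth, here's one trivial way that 0-to-1 grading could work. This is my own guess, assuming actuator positions bounded in [0, 1] and a sum-of-differences distance, neither of which is stated in the writeup:

```python
def match_reward(expr, template):
    """Grade a template match on a 0-to-1 scale (illustrative only)."""
    max_dist = len(template)  # worst case when actuators are in [0, 1]
    dist = sum(abs(a - b) for a, b in zip(expr, template))
    return 1.0 - dist / max_dist  # 1.0 = perfect match, 0.0 = worst

print(match_reward([0.8, 0.2], [0.8, 0.2]))  # perfect match → 1.0
```

So "reward" and "match score" could be the same number wearing two hats.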
I know that thing is supposed to be Einstein... but it really looks more like a lousy-makeup-aged Helen Hunt to me. Creepy.
When I first saw this (yesterday, I think) I wasn't really impressed. They describe the 'bot moving its motors randomly and "stumbling upon" certain combinations of motor positions that make it resemble what a human might look like when expressing some emotion. When said random emotion appears, the 'bot is rewarded (somehow; I doubt it eats cookies). I took that to mean the robot isn't learning a damn thing; it's just a human sitting there occasionally poking the "save" button. Not exactly "teaching itself" to do anything.
Then I thought of my little niece (she'll be one year old next week!). One of her seemingly favorite games is to make faces at people until they react. It's quite amusing, as she is capable of contorting her face into looks that I can't make. There's no "save" button for her, and we don't (always) hand her a cookie for making faces either, but a reaction from us causes her to (likely subconsciously) save that face for later use. If it's a positive reaction from us, the face gets saved for happy purposes; a negative reaction gets saved for unhappy purposes, etc. Effectively the same thing this robot is doing.
By abstracting the save button into a reward button and programming the 'bot to increase its likelihood of remembering in relation to rewards, maybe this bot really is "learning", not unlike a human baby.