The minds of robots
The Sage Center sponsored a talk this afternoon by Daniela Rus entitled “Do Robots Have a Mind?”. Dr. Rus is a Professor in the Electrical Engineering and Computer Science (EECS) department at MIT and a Co-Director of the Center for Robotics at the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Her talk focused mostly on the many recent advances in robotic reasoning and distributed sensing coming out of her lab, organized largely by modality. She began with visual work on the dynamic color-correction of underwater images based on object distance, then segued into movement processing, highlighting the role of proprioception and adaptive algorithms in traversing unknown environments. Other topics included self-assembling micro robots, traffic optimization based on distributed data collection, and bovine virtual fencing.
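For the curious, the underwater color-correction idea can be sketched very roughly: water absorbs red light much faster than blue, so if you know (or estimate) the distance to an object, you can boost each color channel accordingly. The snippet below is only a generic illustration of that principle, using a simple exponential attenuation model with made-up coefficients; it is not the method presented in the talk.

```python
import numpy as np

# Hypothetical per-channel attenuation coefficients in 1/m (R, G, B).
# Red is absorbed fastest in water, blue slowest; real values depend on
# the water body and are not taken from the talk.
ATTENUATION = np.array([0.60, 0.15, 0.08])

def correct_color(image, distance):
    """Undo exponential attenuation: observed = true * exp(-k * d).

    image    -- H x W x 3 float array with values in [0, 1]
    distance -- H x W array of per-pixel object distances in meters
    """
    # exp(k * d) inverts the assumed Beer-Lambert falloff per channel.
    gain = np.exp(distance[..., None] * ATTENUATION)
    return np.clip(image * gain, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((4, 4, 3))      # stand-in for an underwater frame
    dist = np.full((4, 4), 2.5)      # pretend every pixel is 2.5 m away
    print(correct_color(img, dist))
```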
I was left with two impressions at the end of her talk.
First (and most important) is that there is some amazing work going on in robotics right now. The reduction in size and cost of powerful microcontrollers has enabled a revolution in robotic information processing. The emergence of more effective sensing and communications systems has complemented this, yielding more data about the robot’s environment than ever before. The fact that a single modular self-disassembling brick has more computing power than the mainframes of yesteryear speaks volumes about the potential power of these little buggers.
My second thought was that we are moving no closer to recreating the human brain in silicon. An unspoken implication of Rus’s talk was that the big advances were more or less unimodal: there were advances in robotic vision or in movement, but not so much in tying the two together to deal with highly varied environments. The closest thing I heard was a description of the DARPA car challenge, in which autonomous vehicles have to navigate without human input.
At the end of her talk, Dr. Rus answered the question of whether robots have a brain with an unqualified ‘yes’. I definitely agree with her, and it is important to recognize that these brains are becoming more complex every year. Still, in my mind the next big step in robotics, the transition from procedural instruction to abstract reasoning, is going to be an order of magnitude more difficult than previous hurdles. The fact that we have absolutely no idea how to program a computer to reason abstractly says a lot about how long it will take to give a robot this ability.