Lawrence alumnus discusses robots

Anne Aaker

In his Feb. 23 presentation, “Robust State Estimation for Intelligent Physically-Embodied Systems,” Paul E. Rybski of the Robotics Institute at Carnegie Mellon University illustrated how robots could work for humans in everyday life, as well as the challenges involved and some of the solutions his research has produced.
According to Rybski, a Lawrence alumnus, state estimation is the process of extracting information from “noisy” sensor data. That information is what a robot needs in order to orient itself in its current environment.
The main challenge, Rybski said, was figuring out how to extract that information from the data. The project thus began with the goal of creating a robot with basic information-processing capabilities.
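For readers curious what interpreting noisy sensor data can look like, here is a minimal sketch in Python, assuming a one-dimensional rangefinder; it is illustrative only, not Rybski’s actual code. A simple filter fuses repeated noisy distance readings into one steadier estimate:

```python
import random

def filter_readings(readings, process_var=1e-4, sensor_var=0.25):
    """Fuse noisy 1-D sensor readings into a steadier state estimate."""
    estimate, error = readings[0], 1.0       # initial guess and its uncertainty
    for z in readings:
        error += process_var                 # predict: uncertainty grows over time
        gain = error / (error + sensor_var)  # trust the reading in proportion to error
        estimate += gain * (z - estimate)    # correct: nudge toward the measurement
        error *= 1 - gain                    # uncertainty shrinks after each update
    return estimate

# A robot 2.0 m from a wall, with a rangefinder that is noisy by about +/- 0.5 m.
readings = [2.0 + random.gauss(0, 0.5) for _ in range(50)]
print(f"last raw reading: {readings[-1]:.2f} m, filtered: {filter_readings(readings):.2f} m")
```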
The problems surrounding the creation of robots are numerous, Rybski explained. The team he worked with wanted to figure out how to get robots to work together in groups, and started with a set of obstacles to address: a robot’s limited sensing abilities, low computational power, poor sense of direction, and low capacity for communication.
To address these problems, Rybski and his colleagues tried a “virtual place sensor,” which alerted the robot when it had crossed its own path and thereby improved its sense of direction.
To illustrate this, Rybski showed the audience a slide of a square. When the robot’s operators tried to drive it around the square, it wandered all over the place and crossed its own path several times.
With a virtual place sensor, the result was far better: the robot crossed its own path less often and traced a shape actually comparable to a square. Without the virtual sensors there is no correction factor; with them, Rybski said, the robot can reconstruct its path and travel with fewer errors.
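As a rough illustration of the idea (a hypothetical sketch, not the actual sensor), a program can detect such crossings by checking whether any two segments of the recorded path intersect:

```python
def ccw(a, b, c):
    """True if points a, b, c make a counter-clockwise turn."""
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, p3, p4):
    """Standard orientation test: does segment p1-p2 cross segment p3-p4?"""
    return ccw(p1, p3, p4) != ccw(p2, p3, p4) and ccw(p1, p2, p3) != ccw(p1, p2, p4)

def path_crossings(path):
    """Count how often a piecewise-linear path intersects itself."""
    crossings = 0
    for i in range(len(path) - 1):
        for j in range(i + 2, len(path) - 1):  # skip adjacent segments
            if segments_cross(path[i], path[i + 1], path[j], path[j + 1]):
                crossings += 1
    return crossings

# A drifting attempt at a square: the last leg cuts back across the first one.
drifted = [(0, 0), (1, 0.1), (1.1, 1), (0.2, 1.2), (0.5, -0.5)]
print(path_crossings(drifted))  # -> 1
```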
Next, Rybski introduced the “RoboCuppers.” These robots, which look like little dogs, are able to play soccer with each other without the aid of any human control.
RoboCuppers are impressive, but Rybski said there are still several problems to address. The robots have cameras with a limited field of vision, must independently figure out where they are, and have “nondeterministic actions”: their actions may have several possible results rather than just one. For example, when a robot goes to kick the ball, it might end up just flopping it around.
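A toy sketch of what “nondeterministic” means here, with the outcomes and probabilities made up for illustration: the same kick command can produce several different results.

```python
import random

def kick():
    """One command, several possible outcomes (hypothetical probabilities)."""
    return random.choices(
        ["ball travels forward", "ball glances sideways", "ball just flops around"],
        weights=[0.6, 0.25, 0.15],
    )[0]

print([kick() for _ in range(5)])  # the robot cannot assume any single outcome
```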
The limited-field-of-vision cameras, Rybski said, are “like looking through a cardboard tube.” They cause tunnel vision, which makes it harder for the robots to estimate where they are, where their teammates are, where the ball is, and where any other robots might be.
Because of this, and the robots’ other problems, tracking is very difficult. For example, a robot might think it is in the top right corner of the playing field when really it is in upper midfield.
To fix this problem, the researchers developed multi-object, multi-hypothesis tracking. This gives the robot some information about where it is and allows it to evaluate the “uncertainty of objects,” that is, to decide which target to go to based on which object has the least expected uncertainty.
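In that spirit, here is a hypothetical sketch, not the team’s actual system: each object keeps several position hypotheses alive at once, and the robot heads for the object whose hypotheses agree most.

```python
import statistics

# Hypothetical tracked objects, each with several (x, y) position hypotheses.
hypotheses = {
    "ball":     [(3.0, 1.0), (3.1, 1.1), (2.9, 0.9)],  # tight cluster: low uncertainty
    "teammate": [(5.0, 2.0), (1.0, 4.0), (4.0, 0.5)],  # spread out: high uncertainty
}

def uncertainty(points):
    """Spread of the hypotheses, here just the summed variance in x and y."""
    xs, ys = zip(*points)
    return statistics.pvariance(xs) + statistics.pvariance(ys)

# Go to the target whose position the robot is most certain about.
target = min(hypotheses, key=lambda name: uncertainty(hypotheses[name]))
print(target)  # -> ball
```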
Next, Rybski introduced the use of robots in everyday life, such as in the office. One of his projects, an “intelligent office assistant” known as CAMEO, can observe and log meetings and keep the user’s calendar and schedule in order.
Rybski said that if you were to go to a meeting, you would want to take your CAMEO with you. “It would want to know the information given at the meeting,” he said.
The difficulty came in getting the robot to assess the different environments it encountered. To help achieve this, the robot was given a set of actions to recognize: standing, sitting, fidgeting and walking. It was also able to map motion in the 3-D world onto a 2-D cylindrical projection.
This gave the robot good sensory recognition of its current environment: it could tell what was going on in the room.
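A toy sketch of the cylindrical idea (illustrative only, not the project’s code): points in 3-D space around a vertical axis are flattened to an (angle, height) pair, so motion around the robot becomes motion across a 2-D plane, which is much cheaper to classify.

```python
import math

def cylindrical_projection(points_3d, center=(0.0, 0.0)):
    """Map 3-D points around a vertical axis onto flat (angle, height) pairs."""
    projected = []
    for x, y, z in points_3d:
        theta = math.atan2(y - center[1], x - center[0])  # angle around the axis
        projected.append((theta, z))                      # drop the radial distance
    return projected

# A hypothetical person's head and feet, seen from a sensor at the origin.
person = [(2.0, 1.0, 1.7), (2.0, 1.0, 0.0)]
print(cylindrical_projection(person))
```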
To help the robot define its world, Rybski’s team programmed it with the “FOCUS” algorithm, which classifies objects by structure and use. This type of object recognition, Rybski said, is exemplified in the sentence, “A chair is where a person sits down.” He explained that this keeps the robot from getting confused when people sit on tables; in that case, a table is essentially turned into a chair.
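As a toy illustration of classification by use (not the actual FOCUS implementation, and with the labels and uses invented here), a rule in that spirit might look like this:

```python
def classify_by_use(structural_label, observed_use):
    """Let observed human use override a purely shape-based label."""
    if observed_use == "person sits on it":
        return "chair"            # whatever people sit on functions as a chair
    if observed_use == "person places items on it":
        return "table"
    return structural_label       # no observed use: fall back on structure

print(classify_by_use("table", "person sits on it"))          # -> chair
print(classify_by_use("table", "person places items on it"))  # -> table
```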
Rybski’s work with robots is a complicated mathematical and scientific undertaking. Technology keeps moving forward, and someday we might all have our own personal robots, with advances that could reach into the medical field and perhaps even our personal lives.