Building Artificial Creatures
One of the problems the Artificial Life community is concerned with is the modeling and building of artificial animals, or so-called "Animats". The goals of this research are twofold. The scientific goal is to better understand animal behavior and animal intelligence. The more practical goal is to develop tools and techniques, inspired by biology, for building autonomous adaptive "agents" that behave in complex environments.
The talk discusses the state of the art in Animat research and presents a number of concrete examples. The examples are drawn from the domains of human-computer interaction, computer animation, virtual reality and entertainment. Other speakers will discuss applications in the area of robotics.
The talk will discuss in more detail an interactive installation called "ALIVE" (which stands for Artificial Life Interactive Video Environment), which Professor Sandy Pentland, our students and I recently built. This installation will be premiered in August at the Siggraph-93 conference in Anaheim, California. It brings together the latest technological breakthroughs in Vision-Based Gesture Recognition, Physically-Based Modelling and Artificial Life. The ALIVE system allows a user to experience a physically-based computer graphics environment and to interact with the artificial animals that inhabit this environment using simple and natural gestures.
More specifically, chromakeying technology is used to overlay the image of the user on top of a real-time interactive computer animation. The composited image is displayed on a large (10' by 10') screen facing the user; the resulting effect is that of looking into a "magical mirror". Using natural gestures interpreted by a vision-based pattern recognition system, the user can interact and communicate with the animated creatures in the mirror and thereby affect their behavior.
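The chromakey compositing step can be sketched as follows. This is a minimal illustration of the general technique, not the actual ALIVE implementation; the key color and threshold are illustrative assumptions.

```python
def chromakey_composite(user_frame, animation_frame, key=(0, 0, 255), threshold=60):
    """Per pixel: keep the animation wherever the camera pixel matches the
    key (background) color, otherwise overlay the user's pixel on top.
    Key color and threshold are illustrative, not the real ALIVE values."""
    out = []
    for user_row, anim_row in zip(user_frame, animation_frame):
        row = []
        for up, ap in zip(user_row, anim_row):
            # Euclidean distance from the key color decides foreground/background.
            dist = sum((a - b) ** 2 for a, b in zip(up, key)) ** 0.5
            row.append(up if dist > threshold else ap)
        row and out.append(row)
    return out

# Tiny 1x2 frame: first pixel is key-colored background, second is the user.
user = [[(0, 0, 255), (200, 150, 100)]]
anim = [[(10, 10, 10), (10, 10, 10)]]
composite = chromakey_composite(user, anim)
```

The same per-pixel test is what dedicated chromakey hardware performs in real time, which is what makes the "magical mirror" responsive.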
The animated creatures of the ALIVE system are constructed using a toolkit and set of algorithms which make it possible to create "autonomous goal-seeking agents" by specifying their sensors, motivations and repertoire of behaviors. For example, for the ALIVE creatures, the sensor data include the gestures made by the user and the positions of the user's hands, as well as the position and behavior of other creatures in the world. Motivations (or goals) include the desire to stay close to other creatures, fear of unknown things or people, curiosity, etc. Examples of behaviors are: move towards the user, move away from the user, track the user's hand, etc.
Given this information, the toolkit produces creatures which autonomously decide what action to take next based on their current sensor data and motivational state. The model also incorporates a learning algorithm which allows the creatures to learn from experience and improve their goal-seeking behavior over time.
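The decision loop described above can be sketched as a creature that scores each behavior against its current sensors and motivations, picks the highest-scoring one, and adjusts learned weights from experience. The behavior set, motivation values and the simple weight-update rule below are illustrative assumptions, not the actual ALIVE toolkit or its API.

```python
class Creature:
    """A minimal goal-seeking agent: sensors + motivations -> behavior choice.
    All names and numbers here are hypothetical, for illustration only."""

    def __init__(self):
        self.motivations = {"curiosity": 0.7, "fear": 0.3}
        # Each behavior computes its relevance from sensor data and motivations.
        self.behaviors = {
            "approach_user": lambda s, m: m["curiosity"] * (1.0 if s["user_present"] else 0.0),
            "flee_user":     lambda s, m: m["fear"] * (1.0 if s["user_present"] else 0.0),
            "wander":        lambda s, m: 0.2,
        }
        # Learned weights, adjusted from experience over time.
        self.weights = {name: 1.0 for name in self.behaviors}

    def select_action(self, sensors):
        """Pick the behavior with the highest weighted relevance right now."""
        scores = {name: self.weights[name] * relevance(sensors, self.motivations)
                  for name, relevance in self.behaviors.items()}
        return max(scores, key=scores.get)

    def learn(self, behavior, reward, rate=0.1):
        """Reinforce (or weaken) a behavior based on how well it served the goals."""
        self.weights[behavior] = max(0.0, self.weights[behavior] + rate * reward)

creature = Creature()
action = creature.select_action({"user_present": True})  # curiosity dominates here
creature.learn(action, reward=-1.0)  # that choice went badly; weaken it slightly
```

Repeating the select/learn cycle is one simple way a creature's goal-seeking behavior could improve with experience, in the spirit of the learning algorithm mentioned above.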