Ars Electronica 1994
Festival-Program 1994
Architexture
computer-generated pneumatic biogrids

Supreme Particles

GOAL
Through the architectural and technological arrangement, the observer will be confronted with a variable pneumatic-acoustic screen. Both visual and acoustic perception will be addressed. The screen itself behaves as if it were intelligent, i.e. it possesses a past and a future.
KEYWORDS
MIDI, Digital Signal Processing (DSP), 3D sound, soundmorphing, soundmapping, Fourier Transformation, filters, virtual reality (VR), multimedia, Solaris (S. Lem), plasma, genetic algorithms, organic changes
DESCRIPTION
Architexture is an interactive visual/audio/spatial installation with realtime computer images and realtime sound processing.
STRUCTURE
A variable pneumatic-acoustic screen is located in the middle of the room, i.e. a variable rubber skin is stretched over a loudspeaker matrix which can be controlled via a pneumatic air intake. A figure is located on the floor in front of the projection screen: a bullseye, a circle – the center of the action and interaction.
INTERACTION
1. The first image and the pneumatic screen: the observer who steps into the cross of the figure on the floor is scanned with the aid of an infrared camera and thereby becomes part of the pneumatic sculpture. The data on his or her image and movement coordinates are reproduced on the screen as acoustic and topographic information, i.e. the space in front of the screen will be analyzed with an infrared camera and a directional microphone according to the following criteria:
  • the observer's relative spatial changes,

  • history of the spatial changes,

  • sounds produced by the observer,

  • history of the sounds.
Afterwards, this information will be analyzed by a control computer and converted into impulses which control the wall via the pneumatic system and emit organically interpolated sound through the loudspeaker matrix (history-buffered soundmapping). At the same time, an image generated by the computer in realtime will be projected onto the screen, and this image will depend on the events in the room (organic, plasmatic reflection). The installation ARCHITEXTURE therefore unites image, movement, sound and space to create an organic sculpture.
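A minimal sketch (in Python) of how such history-buffered soundmapping could work; the matrix size, the decay constants, the normalized observer coordinates and the `update` function are assumptions for illustration, not the installation's actual control software.

```python
import collections
import math

HISTORY = 64           # frames of observer history kept by the control computer
GRID_W, GRID_H = 8, 6  # assumed size of the pneumatic loudspeaker matrix

history = collections.deque(maxlen=HISTORY)  # (x, y, loudness) per analysed frame

def update(observer_xy, loudness):
    """Store one analysed camera/microphone frame and derive a pressure
    value for every cell of the matrix (history-buffered soundmapping)."""
    history.append((observer_xy[0], observer_xy[1], loudness))
    pressures = [[0.0] * GRID_W for _ in range(GRID_H)]
    for age, (x, y, amp) in enumerate(reversed(history)):
        weight = math.exp(-age / 16.0)  # older frames fade, giving the wall a past
        for row in range(GRID_H):
            for col in range(GRID_W):
                # distance of this matrix cell from the observer (coordinates in 0..1)
                d = math.hypot(col / (GRID_W - 1) - x, row / (GRID_H - 1) - y)
                pressures[row][col] += weight * amp * math.exp(-4.0 * d)
    return pressures  # would be sent to the pneumatic valves / loudspeakers

# example: an observer near the centre of the figure, making a quiet sound
print(update((0.5, 0.5), 0.2)[3][4])
```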

2. Movement, language & sound: The observer now stands in front of his or her own image, reproduced at the resolution of the variable screen. In order to recognize him or herself, he or she stretches out an arm, turns around, bends over – the projected image reacts accordingly. Furthermore, audible language is added to the body language. A microphone registers all spoken words and sounds in the room; they are broken up into sound units and stored. The stored sentence and word fragments are then interpolated by the computer into an artificial language melody. This simulated language is reproduced via a 3D sound system.
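A toy sketch of the described fragment storage and interpolation; representing sound units as short sample lists and cross-fading between randomly chosen fragments is an illustrative assumption, not the installation's actual analysis chain.

```python
import random

fragments = []  # stored sound units (here simply short lists of samples)

def store_fragment(samples):
    """Keep one spoken sound unit for later recombination."""
    fragments.append(list(samples))

def artificial_melody(units=4, steps=8):
    """Chain randomly chosen fragments and cross-fade between neighbours,
    yielding an interpolated, artificial language melody."""
    chosen = [random.choice(fragments) for _ in range(units)]
    melody = []
    for a, b in zip(chosen, chosen[1:]):
        n = min(len(a), len(b))
        for i in range(steps):
            t = i / (steps - 1)
            # sample-wise linear interpolation between two stored fragments
            melody.append([(1 - t) * a[j] + t * b[j] for j in range(n)])
    return melody

store_fragment([0.0, 0.5, 0.2])   # "word" fragment picked up by the microphone
store_fragment([0.3, -0.1, 0.4])
print(len(artificial_melody()))   # 24 interpolated frames
```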

Finally, the artificial being of the pneumatic sculpture will awaken from its imitative passivity and stop following the example of its model. Almost as if the dimensions had shifted, the recipient becomes a reflection; the pneumatic reflection moves independently, "speaks" to the observer, requests him or her to repeat. Suddenly, the installation does not react solely to the recipient; it turns around and "plays" with him or her, makes the observer its equal, a variable reflection.
INVENTION
The (software) implementation of a brain-like structure along the temporal axis will free the interaction between observer and computer from its usual 1:1 ratio. The program creates a kind of history in order to draw conclusions from the system's past for its future behavior (extrapolation). This brain has been set up to be "destructive" in its own way, i.e. the computer is able to forget and to replace superfluous information with meaningful information.
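A small sketch of such a forgetting, extrapolating history; the decay constant, the weight threshold and the linear extrapolation are illustrative assumptions rather than the actual program.

```python
class ForgetfulHistory:
    """Keeps a weighted past, extrapolates a future value and forgets."""

    def __init__(self, decay=0.9, min_weight=0.05):
        self.decay = decay
        self.min_weight = min_weight
        self.samples = []  # (value, weight) pairs

    def observe(self, value):
        # older entries lose weight; negligible ones are dropped ("forgotten")
        self.samples = [(v, w * self.decay) for v, w in self.samples
                        if w * self.decay >= self.min_weight]
        self.samples.append((value, 1.0))

    def extrapolate(self):
        # project the remembered trend one step into the future
        if len(self.samples) < 2:
            return self.samples[-1][0] if self.samples else 0.0
        (v0, _), (v1, _) = self.samples[-2], self.samples[-1]
        return v1 + (v1 - v0)

h = ForgetfulHistory()
for x in (0.1, 0.2, 0.4):
    h.observe(x)
print(h.extrapolate())  # ~0.6: the system's guess at its own future
```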
SOFTWARE
All functions are designed to produce organic behavior, which occurs between audio data, digital images and three-dimensional coordinates. In doing so, all modules are linked, i.e. functions can, for example, process audio samples and 3D objects simultaneously. The two-dimensional video originals are transformed into three-dimensional space according to the rules defined by the software. A space-time dimension, a topography of the image so to speak, is created according to this principle: The sequence of similar images in time is no longer of importance, but the number of potential metamorphoses in the space-time component inside the image is. This enrichment of the images by a further dimension, which is linked to an arbitrary dilation of time, makes it possible to free the actual image data from their static state and transfer them to states of greater or lesser complexity. The additional inheritance of external information (e.g. from the audio range into the visual range) allows the control of the flow of information through parameters that are foreign to the image.
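A compact sketch of lifting a 2D image into a 3D topography and letting an image-foreign parameter (here an audio amplitude) deform it; the particular mapping is an assumption chosen only to make the principle concrete.

```python
def image_to_topography(pixels, audio_amplitude, time_dilation=1.0):
    """Lift a 2D grey-value image into a 3D point grid ("topography of the
    image") and let an image-foreign parameter deform it."""
    points = []
    for y, row in enumerate(pixels):
        for x, grey in enumerate(row):
            # brightness becomes height; the audio amplitude scales the relief
            z = grey * audio_amplitude * time_dilation
            points.append((x, y, z))
    return points

frame = [[0.1, 0.8], [0.4, 0.2]]  # toy 2x2 grey-scale frame
print(image_to_topography(frame, audio_amplitude=0.7))
```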
SOFTWARE METHODS
Transition from order to chaos / movement:
  • gravitation, molecular dynamics

  • Random Walk, Drunken Fly, Worms

  • chemical diffusion processes, attachment processes

  • life algorithms

  • translation from 2D to 3D (DwarfMorph)

  • sound-specific parameters

  • change of image elements in range of colors
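As an illustration of this first group, a plain 3D random walk that drifts an ordered point grid toward disorder; the Drunken Fly and Worms variants would add momentum or chained segments. Step count and step size are arbitrary.

```python
import random

def drunken_walk(points, steps=100, step_size=0.02):
    """Drift an ordered point set into disorder by a 3D random walk."""
    pts = [list(p) for p in points]
    for _ in range(steps):
        for p in pts:
            for axis in range(3):
                p[axis] += random.uniform(-step_size, step_size)
    return pts

grid = [(x, y, 0.0) for x in range(4) for y in range(4)]  # ordered start
print(drunken_walk(grid, steps=10)[0])
```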
Creation of sub-patterns / substructures:
  • particularization of images into 2D objects

  • particularization of images into 3D objects

  • conversion of images into grids (texture mapping)

  • change of material and texture parameters

  • paint effects – 2D/3D warping (distortion)
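A sketch of the particularization idea: an image is cut into tiles that become independent objects, each remembering its texture coordinates for later mapping; the data layout is an assumption for illustration.

```python
def particularize(pixels, tile=2):
    """Split an image into small tiles that can be moved as independent
    2D/3D objects, each keeping its texture coordinates."""
    h, w = len(pixels), len(pixels[0])
    objects = []
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            patch = [row[tx:tx + tile] for row in pixels[ty:ty + tile]]
            objects.append({
                "position": [tx, ty, 0.0],  # free to drift in 3D space
                "uv": (tx / w, ty / h),     # where it came from in the image
                "texture": patch,
            })
    return objects

image = [[(x + y) % 2 for x in range(4)] for y in range(4)]
print(len(particularize(image)))  # 4 tiles of 2x2 pixels
```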
Application of Digital Signal Processing (DSP):
  • to 3D bodies, to 2D images

  • to sound

  • to spatial coordinates / movements in time
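A sketch of this uniform use of DSP: the same one-pole low-pass filter smooths an audio signal, an image scanline or a movement path, since all three are treated as plain sequences of numbers.

```python
def low_pass(samples, alpha=0.2):
    """One-pole low-pass filter; the identical routine smooths audio samples,
    a scanline of image intensities, or a path of spatial coordinates."""
    out, state = [], samples[0]
    for s in samples:
        state += alpha * (s - state)
        out.append(state)
    return out

print(low_pass([0, 1, 0, 1, 0, 1]))    # audio-like signal
print(low_pass([0.0, 3.0, 0.5, 2.5]))  # jittery movement path, smoothed
```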
Sound-specific controls:
  • control of molecular movements through sound

  • mapping of audio signals onto 3D objects

  • generation of forms and behavioral patterns, independent of frequencies, amplitudes

  • sound-controlled Digital Image Processing
Sound-specific parameters:
  • amplitude

  • frequency / frequency bands

  • generating curves as timelines

  • deviations, average, minimum, maximum
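A sketch of reducing one analysis window to such parameters; frequency-band energies, which would come from a Fourier transform, are omitted here for brevity.

```python
import statistics

def sound_parameters(samples):
    """Reduce one analysis window to simple control parameters."""
    amps = [abs(s) for s in samples]
    return {
        "amplitude": max(amps),
        "average": statistics.mean(samples),
        "deviation": statistics.pstdev(samples),
        "minimum": min(samples),
        "maximum": max(samples),
    }

print(sound_parameters([0.0, 0.4, -0.3, 0.7, -0.2]))
```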
Sound generation:
  • transfer of 3D coordinates to MIDI parameters (tone pitch, volume)

  • transfer of 3D coordinates to 3D loudspeaker matrix

  • conversion of 3D coordinates to generating curves / envelopes

  • scaling of movement paths
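A sketch of the first item, mapping a 3D position to MIDI tone pitch and volume (velocity); the room dimensions and the particular mapping are assumptions for illustration.

```python
def coords_to_midi(x, y, z, room=(5.0, 5.0, 3.0)):
    """Map a 3D position to a MIDI note number and velocity (0-127)."""
    note = int(round(z / room[2] * 127))  # height -> tone pitch
    velocity = int(round(min(1.0, (x / room[0] + y / room[1]) / 2) * 127))
    return max(0, min(127, note)), max(0, min(127, velocity))

print(coords_to_midi(2.5, 1.0, 1.5))  # e.g. (64, 44)
```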
Metamorphosis / interpolation:
  • of three-dimensional forms

  • of sounds (soundmorph)

  • of two-dimensional images (morph)

  • through fractal algorithms

  • through gravitation, magnetism
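Finally, a sketch of the elementary interpolation behind these metamorphoses: applied to two point sets it morphs three-dimensional forms, applied to two spectra it would give a simple soundmorph; gravitational, magnetic or fractal variants would replace the linear blend.

```python
def morph(form_a, form_b, t):
    """Interpolate two forms (equal-length 3D point lists) at time t in [0, 1]."""
    return [tuple((1 - t) * a + t * b for a, b in zip(pa, pb))
            for pa, pb in zip(form_a, form_b)]

form_start = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
form_end   = [(0.2, 0.1, 0.5), (0.9, 0.3, 0.4), (0.1, 0.8, 0.6)]
print(morph(form_start, form_end, 0.5))  # halfway state of the metamorphosis
```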
SPONSORS
ArSciMed, Paris
boso, manufacturer of medical apparatuses, Jungingen
Silicon Graphics GmbH
Städelschule – Institut für Neue Medien, Frankfurt
Steinberg Research, Hamburg
X94, Akademie der Künste, Berlin