Mysteries of information


Florian Rötzer

I. BEING WIRED

"We can understand society only through the study of information and its possibilities of communication; it is up to the information from man to machine, from machine to man, and from machine to machine to play an ever more important role."

Norbert Wiener: Mensch und Menschmaschine
There is more and more information circulating around this world or stored in archives. In the world of digitalized information there are four imperatives: Always create new information! Gather, compile, and distribute all the information that is accessible! Gain access to all kinds of information! Information has to be readily available and able to be manipulated immediately! "Being wired" is the prerequisite for diving into the stream of on-line information of all kinds, for floating around in it, for finding one's way into the niches of remote and closed-off memory banks, for making everything accessible to the public and for being part of the virtual worlds. With utmost resolve, networking is advancing on all levels. One just has to be part of it. Those who are not hooked up are forgotten, left behind, lost forever in a black hole or condemned to a life on the margin of the information society. They will be allowed to watch from the sidelines as the rest of the people build highways, squares, and cities in a virtual world that is gradually being colonized.

The cells of this wide-spread and decentralized network compensate for distance through an increasing number of interfaces for input and output in real time; they end up being sensorimotor "bodies" and "brains" acting in an electronic environment. There is a growing necessity for humans to link up physically to the network, so that their biological bodies become part of it through bio-neurotechnological interfaces that enable them to keep pace with the speed and complexity of the streams of information running through networks and satellites. The future "homo-cyber-sapiens" will be a cyborg, for there are no signs that the human brain will grow any further, i.e. become more complex and more intelligent at processing information, as it did in the biological evolution of homo sapiens. Only through biotechnological interfaces between brain and computer, e.g. by implanting chips in the brain or hooking the brain up to technical systems, can the human brain improve its processing and storage capacities and gain more effective access to technical sensors and motor extensions, thus preventing, for the time being, human beings from being overtaken by autonomous and intelligent robots.

According to Luc Steels, the director of the robotics institute of the University of Brussels, only a direct link to cyberspace could advance human evolution further:

"People might be equipped with cameras to enhance their range of vision or to give them the ability to directly control robots and machines. If we could set up direct links between the brain and the electronic information highway, we would have the remarkable opportunity to have our brain access enormous amounts of information and effect actions by way of remote-controlled electronic devices. This idea is still so remote that we have a hard time picturing its consequences. Will we be able to "read" electronic mail or "send" messages to other brains without resorting to our normal sensory apparatus or perhaps even without resorting to language?"

Hans Moravec has identified a similar evolutionary pressure on people who become increasingly absorbed in cyberspace and have to adapt to this new and rapidly changing environment. Apart from possible genetic manipulations of those areas of the genome that control the morphogenesis of the brain, in the coming decades the focus will be on perfecting an interface that, thanks to a direct link-up, will largely do away with sensory and motor systems. This may mean that the cognitive structures of a human being could be transferred to a computer inside a robot whose hardware would constitute a nodal point in the network:

"Imagine a tank in which a brain is kept alive and is connected to a range of artificial bodies in remote places and simulated bodies in virtual realities through amazing electronic links. Although the brain can be kept alive beyond its natural life span in an optimal material environment, it is unlikely that a biological brain that was designed to last the life span of a human being will continue working effectively indefinitely. Why not use advanced neurological electronic links, which serve as a bridge to the outside world, in order to replace the gray matter once it has begun to fail? Our degenerating brain can be replaced "bit by bit" by sophisticated electronic equivalents, which will help to make our personality and our thoughts clearer than ever before. But after some time not a single trace of our original body and brain will be left.

The objective of the information society was established during the Second World War and the post-war years with the advent of computers, cybernetics, genetics, information and game theories and their biotechnological implications. The objective is to largely replace the transport of material things and messages between two places in space by way of transmitting information via senders/receivers, which are part of a network and spread across a given space, in order to achieve simultaneousness. This requires the ability to describe and encode everything as information and processing of information. The sphere of influence of a human being goes as far as his sensory and motor systems will allow him to go. A robot controlled in real time through telepresence and telerobotics is an extension of the body, which does not stop at the skin and is no longer organic. The objective in relation to man is to remove his cognitive system of gathering, processing, and producing information from his biological hardware and turn it into software that will run on any other hardware as well. If the cognitive system could be described as digitalized information, then the requirement would be met for people not only to send and receive messages, but to actually become messages themselves which could be encoded, transmitted, decoded, and, above all, implemented in different – but not identical – hardware. For the paradigm of information, functionalism is a central dogma. This functionalism has a long history and goes back to the discussion of the possibilities of transubstantiation in the Eucharist. Back then people seriously discussed how something like the body of Christ, as identical information, could be given a new shape, in this case the shape of the eucharistic Host; this argument gave rise to Protestantism and to the Thirty Years' War.

Norbert Wiener, the prophet of digitalizing the body, says quite apodictically, "One thing is clear: the physical identity of an individual being is not based on the identity of the substance it is made of." Wiener draws the following conclusion from the permanent changes occurring in the body's cell structure and metabolism: "The biological individuality of an organism seems to consist in a certain continuity of the transformations and in the memory that the organism has of its past developments. … The individuality of a body can be compared to the individuality of a flame rather than that of a stone, to the individuality of a shape rather than that of a particle of matter. This shape can be transmitted, changed or doubled. … The fact that we cannot telegraph the pattern of a person from one place to another is probably due to technical difficulties and particularly to the difficulty of keeping an organism alive during such a comprehensive reconstruction, but not because it is impossible to do." (1) The aim of the project, based on the theorem of DNA as a code and on the molecular-genetic description of the biological communication system, is to turn man into a message that can be freely transmitted, implemented, and changed.
II. THERE IS NO OBJECTIVE INFORMATION
Information technologies, it is often said, are an extension of the brain, which is seen as a computer. This is why computer networks form a global superbrain, which can process information in different places simultaneously. Human brains, "electronic brains", virtual agents, and other creatures of Artificial Life living in a digital technosphere are part of this superbrain.

However, the input and output channels of our brain have only a limited rate of processing information. The capacity of the cells determines how fast and how much information can be recognized, processed, and passed on. Stimuli have to reach a certain level before they are "noticed" by the cells, which react with a binary "all-or-nothing" decision. Once a certain stimulus frequency has been reached, further stimulation is no longer registered, and the cell adapts to a constant stimulus intensity. The cell's reactions will slow down unless there are new stimuli or a change in stimulus intensity. The fastest process in the neurons is the action potential, which lasts for about a millisecond; after it, the cell can react again only after three to four milliseconds. At the very threshold of perception the receivable signals are selected, but there are also "gates" or filters narrowing down or selecting the amount of possible information. First of all, recognition and perception are tantamount to a reduction of complexity, to the search for compressed algorithms and for continuity. Recognizing means suppressing, overlooking, neglecting, removing, shortening, compressing information. This process resembles selection in biological evolution, because it starts from redundancy and diversity, which are then pared down. According to recent theories of neuronal Darwinism, this process apparently repeats itself on higher levels, where various groups of neurons produce different versions of the sensory data and match them against other data. Eventually, one version prevails in the internal selection, while the others are suppressed.
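
The remark that recognition amounts to a search for compressed algorithms can be loosely illustrated with an ordinary compression routine. The following is only a minimal sketch of that analogy (it is not a model the essay itself proposes): a highly regular pattern can be reduced to a small fraction of its size, while patternless noise resists any such reduction.

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size: a crude measure of redundancy."""
    return len(zlib.compress(data, 9)) / len(data)

# A highly regular "stimulus": one short motif repeated over and over.
regular = b"brain" * 2000

# A patternless "stimulus": the same number of random bytes.
noise = os.urandom(len(regular))

print(f"regular pattern: {compression_ratio(regular):.3f}")  # a tiny fraction
print(f"random noise:    {compression_ratio(noise):.3f}")    # close to 1.0
```

In this reading, a percept that can be captured by a short description carries little that is new for the receiver, whereas noise cannot be shortened at all – which is precisely why, as the next paragraph argues, information content depends on what the receiver expects.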

Critics and enthusiasts of the information society alike deal only with the amount of "external" information that is available and that can be absorbed. Similarly, information theory is primarily concerned with the amount and transmission reliability of information originating from a sender. This information can be carried on a channel as a reproducible chain of symbols, which may also be total nonsense. The information content of a message is measured not only by the number of binary symbols necessary to encode it, but above all by the probability of their sequence, which depends on the knowledge and expectations of the receiver. There is no objective information. Someone who does not understand English but is familiar with the Latin alphabet will be able to discern the individual letters in the word "brain", but not their meaning or the probability of their sequence. For someone who cannot write and is unaware that "brain" is a word at all, it will be nothing but a sequence of graphic symbols, a pattern that might have been created at random. Yet it is in just such random patterns, which they often produced themselves, that people have looked for secret messages in an incomprehensible language to guide their decisions. One could actually say that our brain tends to "see" regularities in such random patterns rather than in patterns based on a schema, as Stephen Jay Gould pointed out. (2) Behind the idea of the information society we still find the naive notion that a larger amount of data about a given fact will increase knowledge; i.e. that we recognize a picture better the higher its resolution and the more details we receive through our sensory channels. But for this kind of faithful representation there seems to be only a lower threshold, insofar as the resolution of the environment merely has to correspond to the resolution of the sensory organs in order to produce an image that is familiar and expected. Sensory illusions, moreover, tell us that the brain continuously fills in insufficient information, which gives us reason to believe that every perception is in large part produced by superimposing simulated information on received data. This includes the simulation of simultaneity, which the brain constantly produces to create the subjective impression that there is no time gap between the initial stimulation of the sensors and conscious perception. The stimulus enters consciousness half a second later because of the time needed to transmit and process it, but usually our mind makes us believe it happened simultaneously.
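
The dependence of information content on the knowledge and expectations of the receiver is what Shannon's measure of surprise captures. A minimal sketch, assuming two hypothetical receiver models (the letter probabilities below are rough illustrative values, not data from the text), computes how many bits the word "brain" is "worth" to each:

```python
import math

def information_bits(word, probabilities):
    """Sum of -log2 p(letter): the surprise of the word for a given receiver model."""
    return sum(-math.log2(probabilities[ch]) for ch in word)

# Receiver A knows only the Latin alphabet: all 26 letters are equally probable.
uniform = {ch: 1 / 26 for ch in "brain"}

# Receiver B expects roughly English-like letter frequencies
# (illustrative values, not normalized over the whole alphabet).
english_like = {"b": 0.015, "r": 0.060, "a": 0.082, "i": 0.070, "n": 0.067}

print(f"alphabet-only receiver: {information_bits('brain', uniform):.1f} bits")
print(f"English-like receiver:  {information_bits('brain', english_like):.1f} bits")
```

The same five letters thus carry a different quantity of information for each receiver; for a receiver with no alphabet at all, the measure is not even defined – there is no objective information.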

Even far away from civilization and data networks we have always plunged ourselves into streams of information, but only bits of it have trickled into our consciousness, which manages to process only a few bits per second. Our sensory organs transmit millions of bits per second to the brain. There they are processed simultaneously by a hundred or even a thousand billion neurons, each having up to 10,000 synapses; some neurons in the cortex are believed to have as many as 200,000. So there are about a million billion synapses between neurons – gigantic compared to the global telephone network, which has only about a billion lines, and those are not nearly as densely interconnected.
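
The orders of magnitude invoked here can be checked with a back-of-envelope calculation; the sketch below simply restates the essay's own round figures, which are estimates rather than measurements:

```python
neurons           = 1e11   # "a hundred ... billion neurons" (the lower estimate)
synapses_per_cell = 1e4    # "up to 10,000 synapses" per neuron
telephone_lines   = 1e9    # "about a billion lines" in the global telephone network

synapses = neurons * synapses_per_cell
print(f"synapses in the brain: {synapses:.0e}")                    # ~1e15, "a million billion"
print(f"ratio to phone lines:  {synapses / telephone_lines:.0e}")  # ~1e6
```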

Subconsciously, several hundred billion bits are presumably processed per second – far more than a supercomputer can manage. It therefore seems as if the data streams entering the brain every single second were multiplied, because the several hundred million sensory receptors that transmit external stimuli to the brain face billions of interconnected synapses. The stimuli are fed into these synapses, which form a complex network for the purpose of internal communication. Indeed, the number of sensory receptors has not increased dramatically in the course of evolution, nor has the transmission rate of the neuronal network been accelerated. What has increased is the number of neurons in the brain, i.e. the secondary and tertiary processing areas and the internal communication – "Instead of getting a clearer picture of the environment by way of increasing the capacity of the sensory organs, the brain, in its evolution, has taken the opposite course, i.e. to enhance the internal evaluation system enormously and to make it more effective." (3)
III. THE RECEIVER CREATES INFORMATION
This is an amazing fact in view of the present euphoria about the expansion of information highways and the fear of being crushed by ever faster and ever-expanding data streams. More complex perception and a better understanding were brought about in biological evolution not through the reception of more detailed primary information, but through a mounting complexity of evaluation and processing, through an extreme reduction of the amount of data on the conscious level, through the invention of sleep and possibly also through dreaming, which only mammals can do and which enables the brain to independently process the available information "off-line". In doing so, the brain matches new data against the data already stored, strengthens links, and erases superfluous information.

Of course, a neurobiologist would add that it is not true anyway that the brain absorbs data first and then tries to make sense of it; rather, the changes in the environment of the organism – or within its body – picked up by the sensory receptors are preselected by the organism and, roughly speaking, evaluated according to mood or pain. The paradox of conscious perception seems to be that we believe we see our environment through our eyes, i.e. as through a window or a camera lens, yet the image generated by the brain does not correspond to the images picked up by the receptors of the retina, because those are polymorphous – mere shifting distributions of light and color that can be interpreted in different ways, just like a hologram that has to be tilted several ways before it makes sense.

Talking about information from the point of view of brain research is only meaningful in relation to a "receiver". This receiver is not at all passive: it searches for changes that are, in its view, important in order to create information, while constant stimulus constellations are neglected and substituted with simulations so as not to overburden the "random-access memory". In this way the pseudo-receiver generates messages independently of incoming "broadcasts", which may not be targeted at it at all, or at anything else for that matter, and may consist of "random" data – whatever that means – or noise. A "sender" encoding messages for a receiver is no more necessary than a faithful representation. In fact, not a single bit of information has to be transmitted to the receiver, because even the lack of information may be valuable and complex information within a context created by the receiver's expectations. When the brain's sensors have not transmitted any relevant data over a period of time, e.g. in the case of sensory deprivation or a flood of stimuli, the brain begins to create its own information, called hallucinations. But if the brain has no data to compare them with, it will not be able to recognize them as hallucinations. So information is created by a context and by a receiver extracting it from polyvalent data by means of internal schemata. These schemata have proved viable both ontogenetically and phylogenetically through evolutionary selection, although they need not be identical with external reality.

Data is not information. Data consists of symbols or functions that the receiver knows how to decode, or to which the receiver assigns codes; by way of these codes the receiver can pick up data through the proper input channels and extract and process information. The interpretation of data does not necessarily involve semantics; the code does not even have to be decoded correctly. In fact, it is not even necessary for the chains of data picked up by the receiver to be "actually" encoded at all. All situations that the receiver picks up and reacts to are data and thus information, because to the receiver they are meaningful enough to trigger some kind of action, which will then be "evaluated" by the environment or by internal "authorities".

Looking at the neuronal model, one could say that in the brain information is transmitted back and forth by means of electric impulses. Perhaps the neuronal language has not been deciphered yet, but so far the assumption has been that there is an unspecific "lingua franca" or universal code, which passes stimuli on through electrophysiological volleys of impulses and emits activating and deactivating signals. The sensory organs consist of cells that are specifically tuned to certain stimuli or to their absence, but they translate these stimuli into an unspecific, neutral code. An outside observer sees only the sequence of impulses and cannot tell which signals are being transmitted, e.g. whether they are visual, tactile or auditory. This is quite similar to the way a computer processes a digital code: its sequence of symbols does not make any sense either. Without knowing the purpose, the program, and the context, one cannot objectively tell whether an isolated bit pattern is part of a word-processing program, whether it encodes a sound or a command for controlling a robot, or whether it contains any information at all. In this respect brain and computer are similar: to an outside observer who does not know what information is being processed, these sequences of signals make no sense. Contextual information in the brain means that the information of the neuronal universal language is defined not by the bit codes but by the target location of the information. The very same electric impulses, which can be applied to nerve cells by means of electrodes, trigger different sensations, perceptions, and motor commands depending on where they are applied. In addition, a neuron in such a gigantic network does not "know" which data it receives from which neuron, because the dendritic branches and the axons overlap. (4)
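
The point that an isolated bit pattern means nothing without the program and context that read it can be shown directly. The sketch below is only an illustration of that point, not a model of the neuronal code: the same eight bytes are read as text, as an integer, and as a floating-point number.

```python
import struct

pattern = b"INFORMAT"  # eight arbitrary bytes

# The same bit pattern under three different "programs":
as_text    = pattern.decode("ascii")                   # a fragment of a word
as_integer = int.from_bytes(pattern, byteorder="big")  # a 64-bit unsigned number
as_float   = struct.unpack(">d", pattern)[0]           # an IEEE-754 double

print(as_text)
print(as_integer)
print(as_float)
```

Nothing in the bytes themselves decides among these readings; it is the context – here the decoding routine, in the brain the target location – that turns the pattern into one kind of information or another.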
IV. COMMENTS ON GENETIC INFORMATION
Information has become a widely accepted basic concept not only because of the early comparison of the computer (the "electronic brain") with the brain and because of information theory, but primarily because the discovery of the genetic code made the production, storage, transmission, and reproduction of information a fundamental process of life. (5) According to the genetic paradigm, life itself in its molecular form is tantamount to the processing of information, which is universally encoded. As with computers and brains there is also a universal language. Life is controlled by the information defined by the four building blocks of the genetic molecular language, DNA. According to the genetic dogma it is translated into RNA and eventually into proteins, but never the other way around. That is why the genome, the DNA database, is said to contain the blueprint for a molecular machine that makes an almost identical copy and reproduces itself for the sole purpose of passing this blueprint on. The genome, it is said, is the complete set of instructions for building an organism, which at first consists of identical cells that only later become differentiated. Bacteria are the simplest life-forms: they encode their genetic information in four million nucleotides or bits, human beings in more than a billion. But – and this is a consolation for those surfing through the information in the networks – most of the genome of an organism, which is seen as the "sender" of biological information, seems to consist of redundancy, garbage, nonsense, stuttering, remnants or scrap. The functions of many sequences are unknown, with the "meaningful" information sticking out like islands in a sea of scrap. That is why a few deviations in the genome may cause catastrophic changes in the organism, while most deviations have no effect whatsoever (I suppose the same will be true of the hundreds of television channels that everyone will be able to receive). It is not true that the total information stored in a genome, which according to information theory can be measured by sequencing, gives an insight into the complexity of the product. Amphibians, for example, have more bases, i.e. more information, in their genomes than mammals, which contradicts the conventional hierarchy of complexity. The ultimate dream behind the genome project is to fully understand genetic information. So will it be possible to reconstruct a three-dimensional functioning organism once we have grasped the genetic text and its functions? Does the genome really store the entire information on the morphological structure, the functions, and the behavior of an organism? Will it be possible to reproduce a dinosaur from the information contained in the genome, just as a complex pattern can be generated from an algorithm on the hardware of a computer?
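
The one-way translation the genetic dogma describes – from DNA to RNA to protein, never back – can be caricatured in a few lines. The codon table below is only a tiny excerpt of the standard one, and the sketch deliberately ignores everything the following paragraphs say about context, regulation, and morphogenesis:

```python
# A tiny excerpt of the standard codon table (RNA triplet -> amino acid).
CODON_TABLE = {
    "AUG": "Met",  # start codon
    "UUU": "Phe", "GGC": "Gly", "GAA": "Glu",
    "UAA": "STOP",
}

def transcribe(dna: str) -> str:
    """DNA coding strand -> messenger RNA: thymine (T) is replaced by uracil (U)."""
    return dna.replace("T", "U")

def translate(rna: str) -> list:
    """Read the RNA three letters at a time until a stop codon appears."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino_acid = CODON_TABLE.get(rna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

dna = "ATGTTTGGCGAATAA"
print(translate(transcribe(dna)))  # ['Met', 'Phe', 'Gly', 'Glu']
```

There is no inverse function here: from the finished protein the DNA cannot be recovered, which is the asymmetry the dogma asserts. Whether such a linear scheme really captures what the genome "says" is exactly what the following paragraphs call into question.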

As we know, it is mathematically impossible for a genome to contain all the information related to the growth, position, and function of every single cell and of the proteins produced in it. The genome stores data and can therefore not be compared to a blueprint or program for building a three-dimensional structure. The instructions contained in a gene do not fully encode the information contained in a cell, a hand or any other part of the body; if they did, we would be able to extract all possible functions from genetic information. As with the digital code of a computer or the neuronal universal language of the brain, identical sequences of symbols of the genetic code may have different meanings, just as different sequences may lead to the very same results. Moreover, sequences of genes can have controlling and monitoring functions, e.g. they indicate the time at which certain other genes have to be activated, but they may also regulate the production of proteins. Ninety-nine per cent of the genes of humans and chimpanzees are identical; how is it, then, that the two are so different? Each cell of an organism, apart from the gametes, has an identical set of genetic information; why, then, are there so many different kinds of cells in an organism? Is the genome really a program that contains morphogenes, i.e. genes carrying information on the position of a cell? In morphogenesis, as in the brain, context plays a crucial role, because it is context that turns the data in the genome into information, into a message. (6) The genome, the "sender", creates the framework and part of the basis for the self-organization of a complex structure, but it does not generate that structure directly. There is no linear, and possibly no direct, mapping between the genome and the organism. The phenomenon of convergence shows that different causes may have the same effects: the fact that insects and bats have wings and that fish and whales have fins is not necessarily based on similar genetic information. The development of tadpoles, for example, depends on the temperature of the water (I suppose this is why they require more genes than we humans do). Axolotls never grow up in their normal environment, although the relevant genetic information does exist. With some animals it is the context that determines what sex they will be and what shape their bodies will take. The eggs of most animals begin growing without any interference from the genome; only at a later phase of development do the genes set in and begin to control the process. "In a number of experiments," according to Cohen/Stewart (7), "the original nucleus of a zygote was replaced with the nucleus of another kind, but the development continued as if this had never happened. Only when the embryo had reached a certain degree of organization did its cells begin to activate specific genetic sequences in their own nucleus. The genetic content of a cell includes many possible patterns of behavior, but it is the context that determines which potentials will be realized." Cohen/Stewart justify their attack on the concept of biological information primarily by means of the following scenario: one could imagine that a protein stemming from the mother triggers a certain development; but if this protein did not exist in the environment of the egg, an entirely different kind of organism, even an entirely different phenotype, could develop.
Even if this is only a possibility at best, the scenario highlights the fragility of the concepts of generating and mapping on which the paradigm of biological information is based. Information means that a sender encodes a message which triggers a certain behavioral pattern in the receiver decoding it. But if the same information, e.g. the very same genome, can have different effects, then – as was discussed above in relation to the brain – we can no longer speak of information as such, because a set of data then acquires meaning only through a receiver or a context on which it depends. So even if we isolate certain genes and find that they have certain effects, this clear-cut attribution will probably fail for more complex organizational principles, which under normal conditions are regulated by the interaction of several genes. And even if this – leaving aside the problem of how the process gets started – could be explained by the concepts of self-organization, synergetics or emergence, i.e. by the genome creating through its activities an environment to which it then reacts with specific activities (which genes are activated or deactivated, which remain inactive), the causality assumed by the genetic paradigm would be reversed at least in part. If you take a hydra and cut it in half, you will find that the head end grows a foot and vice versa. The cells concerned come from the same regions, so their genes cannot "know" in advance what they are supposed to turn into. One explanation for this kind of behavior is that local concentrations of chemical "signals" tell the cells "what to do". This means that the genes are not the blueprint; it is the chemical environment they are in that gives off the signals and regulates the genetic information, which by itself would be meaningless. The genes could thus be seen as a framework for an information-generating process that is just as selfish as Richard Dawkins' selfish gene. So we are going around in circles, trying to determine the sender, the receiver, and the information-generating authority. It becomes obvious that the simple schema of the communication of information only applies if the context is determined externally, i.e. by an observer.

From all this one might draw the conclusion that this information madness, which goads us into expanding the number of channels and "senders" on penalty of doom, is just that – sheer madness. It is the "receivers" that decide for themselves what is information and what is not. True, even if information is produced by a context and a receiver looking for information, this happens on the basis of a framework and a diversity out of which information emerges through selection and elimination. More crucial, however, is the fact that the "circuits" of the brain and the information are mutually dependent. Once these circuits or channels have been established, it has catastrophic effects if they are destroyed or if the expected signals (= information) can no longer be received because they have become too distorted. Moreover, natural and social environments – the "senders" – can transmit signals that the receiver has to take note of if it wants to survive, no matter how independently it processes them. The receiver does not have to interpret them correctly, but it does have to react adequately. Again, it is the context surrounding the receiver that decides what an adequate reaction is.
V. A LOOK AT ZERO-INFORMATION
It is assumed that synchronized activity in many cell groups and areas could be important for the brain's evaluation mechanisms, because it furthers the transition from a state of no meaning to a state of meaning. (8) When this happens, one parameter prevails: at first it consists of the activities of individual cells, but then, through interaction, it "enslaves" the rest. (9) This means that cells in an asynchronous state, and their versions, will not prevail; if this happens repeatedly, they may lose their functions, become inactive, be assigned other functions, or their synapses may disappear. This could also serve as an uncomfortable, Darwinistic model for the dynamics of a society that is constantly created by communication. But communication is all about senders and the generation of information. Since there is no information without physical carriers, our future will depend on how well these carriers and channels can be protected against attack, for the communications of a society and the control of machines and systems are increasingly routed through computer networks. On February 1 of this year a group with the paradigmatic name "Keine Verbindung e. V." (No Connection) entered the Frankfurt airport and destroyed fiber-optic cables with bolt cutters, causing immense communication difficulties in the area. The only message they left behind was their name; there was no letter claiming responsibility or explaining the motives for their action. But it is safe to assume that this operation was just another episode in the fight against the airport that has been going on for years. Planting bombs at computer centers and infecting computers with viruses serve much the same purpose: to blast holes into computer networks and to destroy information indiscriminately.

If the media are the message, then it is the media that today's terrorist movements want to destroy, and that is their only message. Most attacks – like the recent poison-gas attacks on the Tokyo subway and then in Yokohama, or the bombings of the World Trade Center in New York and the office building in Oklahoma – and military operations, as in Beirut, Kabul, Sarajevo, and Grozny, are focused on cities and metropolises, on dense areas that cannot be protected and where a maximum of spectacular damage can be caused with a minimum of resources. In this way they indirectly favor a further decentralization of institutions and social life through the expansion of networks. So even if these attacks are not really aimed at a clear goal but at society at large, they do not destroy information. Attacks on networks are more radical, because they are aimed directly at information and tear black holes into communication; they are based on the idea that the loss of information is the biggest possible blow against the empire of global information, i.e. the empire of indifference. Perhaps creating such a small vacuum in a saturated solution of information, as Jean Baudrillard puts it, is the only – albeit fatalistic – hope left that something can still be done, even if without any specific purpose in mind.


(1)
Norbert Wiener: Mensch und Menschmaschine. Berlin 1958, pp. 88–90.

(2)
Stephen Jay Gould: Erleuchtung durch den Glühwurm, in: idem, Bravo, Brontosaurus. Hamburg 1994.

(3)
Gerhard Roth: Erkenntnis und Realität, in: S. J. Schmidt (ed.): Der Diskurs des radikalen Konstruktivismus. Frankfurt/M. 1987, p. 246.

(4)
For further details on information and the brain see the books by Gerald Edelman: Unser Gehirn – ein dynamisches System. Munich 1993, and: Göttliche Luft, vernichtendes Feuer. Wie der Geist im Gehirn entsteht. Munich 1995.

(5)
Cf. Bernd-Olaf Küppers: Der Ursprung der biologischen Information. Munich 1990.

(6)
Cf. Jack Cohen/Ian Stewart: Chaos – Anti-Chaos. Ein Ausblick auf die Wissenschaft des 21. Jahrhunderts. Berlin 1994.

(7)
Ibid., p. 379.

(8)
Wolf Singer: Hirnentwicklung und Umwelt, in: Gehirn und Kognition. Heidelberg 1990.

(9)
Hermann Haken: Die Selbstorganisation der Information in biologischen Systemen aus der Sicht der Synergetik, in: B. O. Küppers (ed.): Ordnung aus dem Chaos. Munich 1987.