Ars Electronica 1990
Festival-Program 1990


Virtual Worlds: The Emperor's New Bodies

Peter Weibel

Perception here,
and there the Object.

G. W. F. Hegel
In the 18th century, at the dawn of the machine revolution, a strange story took place.

A magician, an extremely adept watchmaker, had constructed an automaton. He had executed this machine to such perfection, its movements so smooth and natural, that the public could not tell the two apart once both appeared on stage. To crown the spectacle, the master felt compelled to "mechanise" his own movements, even his complete bearing, lest the spectators, in their increasing unease as to who or what was "real", should actually take the man for the machine, and vice versa. (1)
This story provides a simple illustration of the difficult relationship between machine and body, man and machine. It also addresses another problem with technology: that its perfection might one day eliminate the difference between man and machine. Will there some day be computer-robots, intelligent machines that perfectly simulate man? Clearly any thing, any piece of "equipment" – Heidegger uses the term Zeug – is basically amorphous. Material objects also give away something about their producer. First of all, any man-made object inherently displays its maker's properties, simply by being made by man. Secondly, such objects repeat human behaviour through the purposeful delegation of human properties onto them. It is obvious that machines are constructed to enhance, take over or replace human functions. Freud gives an exact description of their functioning as artificial limbs in his Civilisation and Its Discontents. We construct machines to satisfy human needs, therefore any machine will display anthropomorphous properties. The point is that on account of this desired anthropomorphism the machine will be perfected to such an extent as to be able to replace man, which is in turn lamented. Stupidly so, as the aim of such anthropomorphisation must lie in the perfect simulation and eventual substitution of man.

In the cockpit of an airplane in blind approach on autopilot, i.e. steered by a machine, the instruments are nevertheless still monitored by human hand. It is foreseeable, however, that even these instruments might soon be controlled by a machine that would read them, react, reprogramme them, and so on. Such an intelligent device, capable of real-time reaction, could replace man. Such a blind cockpit, an independent, self-sufficient flying machine, would be an automaton, taking off, flying, and touching down automatically and on its own. Of course such autopilots are devoid of consciousness; nevertheless, the more perfect the machine, the less need there is for man. Lewis Mumford already foresaw this when he wrote in Technics and Civilization: "The machine eliminates human performance, which amounts to paralysis."

As this parable shows, the more perfect the machine, the more it will exceed man in its very perfection, for we must always ask: perfection of what? Of human properties, to be precise. We want machines because they perform more reliably, longer, more strongly, and more exactly than human beings. The machine perfects human properties to such an extent as to replace man, or part of his activity. However, such perfection, with simulation outdoing man, will lead to a reverse situation in which humans simulate the machine. In their perfect simulation and anthropomorphism, things become independent, self-sufficient, autonomous. Just as goods behave as if they were imbued with a life of their own, machines behave like Golems, as if they had their own spirit and mind. Because of this new sovereignty machines step into a new relation to man, and man to machine, occasionally becoming their slave.

Concerning the Spirit of machines,
and machines of the mind
In his utopian novel Erewhon (1872), its title a near-reversal of the word "nowhere", Samuel Butler already recognised these problematic relationships between man and machine. "Is it man's eyes, or is it the big seeing-engine which has revealed to us the existence of worlds beyond worlds into infinity? … And take man's vaunted power of calculation – have we not engines which can do all manner of sums more quickly and correctly than we can? … In fact, wherever precision is required man flies to the machine at once, as far preferable to himself … May not man himself become a sort of parasite upon the machines?"

Because of their very precision man seems almost to take refuge in the machine. The machine is preferable to man in many respects. In the end man becomes the machine's parasite. Out of these considerations Butler develops an evolutionary theory of the machine: machines themselves develop through evolution, much as Darwin's species develop through survival of the fittest. These proposals, whose author has been largely forgotten, are of particular relevance today in view of the work of Gotthard Günther, Hans Moravec, Gerald M. Edelman, Daniel Hillis, and others.
G. M. Edelman, winner of the 1972 Nobel Prize for medicine, devised a new theory of the functioning of the brain and of neuronal systems in his book Neural Darwinism: the theory of neuronal group selection. As the title already implies, this amounts to a qualified application of Darwin's evolutionary theory to the nervous system. (2)

According to this theory the nervous system of each individual operates as a selective system, corresponding to the selective mechanisms of nature but employing a different machinery. The categorisation of the various stimuli reaching the senses is shown to be a dynamic process of re-categorisation.

Heuristic grounds have led Edelman to design an automaton that integrates parts of this theory of selection into the physical structure of an operational, self-organising network. This perceptive automaton he aptly names Darwin II. Along interconnections (synapses) within the network groups signal their activities to other groups. Parallel networks with several sub-networks in parallel operation are also possible. The second network is named "Wallace," after another main figure in evolutionary theory.

The Darwin network reacts primarily to individual stimuli, making individual selections in its categorisation, whereas Wallace would react to objects as part of a group, employing statistical means in its categorisation. Together, they constitute a classifying couple.
Whilst Edelman's approach to the problem is based on an examination of real effects, the computer scientist Daniel Hillis uses simulation. In 1983 his enterprise Thinking Machines Corporation constructed the parallel computer "Connection Machine" where "thousands of programmes compete in a sort of evolutionary process" (Hillis) in order to find the best solution to a given problem. A kind of umpire-programme chooses the most suitable version of software during the process. These selected variants meet in a second round. Through this principle of "survival of the fittest" the programmes develop themselves ever further – following a Darwinist principle of evolution – in order to eventually "perform in exactly the manner we wish them to" (Hillis).
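The selection principle Hillis describes can be sketched in a few lines of Python. This is only a minimal illustration of "survival of the fittest" among candidate solutions – not Hillis's actual Connection Machine software; the bit-string target, population size and mutation rate are arbitrary choices made for the example.

```python
import random

random.seed(0)
TARGET = [1] * 20                      # stand-in for "the given problem"

def fitness(genome):
    # The "umpire": scores how well a candidate solves the problem.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Each bit flips with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

# A population of random candidate "programmes".
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]

for generation in range(200):
    # Survival of the fittest: the better half goes into the next round.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(25)]

best = max(population, key=fitness)
```

Because the fittest survivors are carried over unchanged, the best score never decreases, and random mutation plus selection steadily drives the population toward a solution of the problem.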

Edelman's idea of a dynamic re-categorisation of sensory stimuli had already been suggested in broad outline in 1949 by the Canadian neuropsychologist Donald Hebb in his book The Organization of Behavior: "The more active the two neurons" – i.e. the greater the number of signals exchanged between them, the more they stimulate each other – "the stronger will the connection between them become." This would mean that our brain slightly alters the wiring of its physical structure with each new experience. (3)
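Hebb's rule can be stated in one line: the change in a connection weight is proportional to the product of pre- and postsynaptic activity. A toy sketch, with invented activity traces and an arbitrary learning rate, shows how correlated firing alone strengthens a connection:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.01                                 # learning rate (arbitrary)

# Simulated activity of neuron pairs over 1000 moments in time:
pre = rng.random(1000)
post_correlated = 0.8 * pre + 0.2 * rng.random(1000)  # fires with "pre"
post_unrelated = rng.random(1000)                     # fires independently

def hebb(pre, post):
    # Hebb's rule: delta_w = eta * pre * post, accumulated over time.
    w = 0.0
    for x, y in zip(pre, post):
        w += eta * x * y
    return w

w_strong = hebb(pre, post_correlated)
w_weak = hebb(pre, post_unrelated)
# The jointly active pair ends up with the stronger connection:
# "neurons that fire together wire together."
```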

Ralph Linsker of the IBM Watson Research Lab has demonstrated this ability of a neural network to shape its connections in response to experience by simulating such a network. (4)

I think that the computer is a spiritual machine.

Umberto Eco
Linsker is only one of many, many scientists who are trying to get at the complex functioning of the human brain by means of "connectionism": neural networks which learn by themselves, form associations, and can sensibly complete incomplete patterns. (5) Terry Sejnowski, whose NETtalk computer, working with a huge number of interconnected artificial neurons, is learning how to read a written text aloud, says that neural network theory in fact "provides a new language by which scientists from various fields can talk about the brain and the spirit."

Both sides – neuroscientists applying findings from computer technology, and computer scientists drawing on theories from neuroscientific research – have thus formulated a new theory that has created a new generation of artificial brain, of computer, which I would like to call the Hypermaton (instead of automaton). Amongst these neural network revolutionaries we also have to count Jim Anderson, who began his research twenty years ago, and John Hopfield, who first popularised the term neural network by applying it to the construction of machines. Others are the neurobiologist Gary Lynch, the philosopher Patricia Churchland, the cognitive scientists George Lakoff and Geoffrey Hinton, and particularly David Rumelhart and Jay McClelland, who together edited the standard multi-volume work on neural networks, in which they also devised new models for such networks and the new mathematics necessary for their formation. (6)
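The pattern-completing behaviour attributed to these networks can itself be sketched in a few lines. The following is a minimal Hopfield-style associative memory – the eight-unit pattern and the corruption are invented for illustration – which stores one pattern in Hebbian weights and then recovers it from an incomplete probe:

```python
import numpy as np

# One 8-unit pattern stored in a Hopfield-style associative memory.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])

# Hebbian outer-product learning; no self-connections.
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0)

# An incomplete version of the pattern: two units corrupted.
probe = pattern.copy()
probe[1] = 1
probe[3] = 1

# Recall: each unit repeatedly takes the sign of its weighted input.
for _ in range(5):
    probe = np.where(W @ probe >= 0, 1, -1)

# probe has now fallen back into the stored pattern.
```

The network "sensibly completes" the damaged input because the stored pattern is an attractor of the update dynamics: any probe that overlaps it sufficiently is pulled back onto it.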
Following research into artificial intelligence, we have thus begun to establish a science of postbiological, artificial life. This science is trying to find "the ghost in the machine": to discover the origins of the spontaneous formation of molecules and networks of nerves; of how we see, learn, talk, think, perceive, and recognise; how the seemingly blind principle of natural selection could bring forth such variety and beauty in life; and how we may simulate and artificially recreate such evolution.

Ch. G. Langton, editor of the book Artificial Life (1989), is convinced that "an era of evolution is drawing to a close and another one is beginning." The process of evolution has led – in us – to "watches" which understand what makes them "tick," which are beginning to tinker around with their own mechanisms, and which will soon have mastered the "clockwork" technology necessary to construct watches of their own design. The Blind Watchmaker has produced seeing watches, and these "watches" have seen enough to become watchmakers themselves. Their vision, however, is extremely limited, so much so that perhaps they should be referred to as nearsighted watchmakers.

The process of biological evolution has yielded genotypes that code for phenotypes capable of manipulating their own genotypes directly: copying them, altering them, or creating new ones altogether in the case of Artificial Life. "By the middle of this century, mankind had acquired the power to extinguish life on Earth. By the middle of the next century, he will be able to create it." (7)

The mutual manipulation and creation of genotype and phenotype closely follows my proposition of the mutual simulation of man and machine as a natural result of evolution.

Such a perspective would further reiterate Butler's assessment of the parasitical symbiosis between man and machine, or rather the elimination of man by the machine.
Hans Moravec, director of the Mobile Robot Laboratory at Carnegie Mellon University, envisages such a radical scenario for the "future of machine/man intelligence" in his book Mind Children. (8)

In chapter four he asks almost the same questions as Butler when he opens with the sentences: "What happens when ever-cheaper machines can replace humans in any situation? Indeed, what will I do when a computer can write this book, or do my research, better than I?" (p. 100). His answer is similar to Butler's: intelligent machines threaten our existence. "We will simply be outclassed." Over the next century machines will become as complex as we are, and we will be proud to see them proclaim themselves our descendants. An indication of the current complexity of machines already lies in the term user-friendly: too complicated for our simple minds to operate, they have to be designed to be user-friendly, i.e. their complexity threshold must be lowered. In a competitive spiral over billions of years our genes have tricked each other, and have now devised a new secret weapon: the intelligent machine. These "children of our minds" will one day break free from us and start lives of their own. The beginning of this final phase lies at the start of the industrial revolution two hundred years ago, when artificial substitutes for human bodily functions came into use and machines became indispensable in transport, production and so on. The computing power of machines, developed only recently, has multiplied a thousandfold every twenty years and has brought us close to an era in which no fundamental human physical or mental function will lack its artificial counterpart. As the epitome of this development the intelligent robot will construct and improve itself, without us and without the genes that make us up. In the evolutionary competition DNA will have lost out. Such a genetic takeover by the machine will radically alter our culture. (A. G. Cairns-Smith, Seven Clues to the Origin of Life, 1985.)

Although we are still living organisms completely defined by our genes, we can already function within our culture only by relying on information that is not handed down by our genes from generation to generation, but is produced and stored outside them. The next step will be that we as human beings are no longer necessary to the machine, nor – one day – to the world. Without our help intelligent machines will then be capable of their own upkeep, development and reproduction; our culture will progress independently of human biology: the genetic takeover will be complete. The foundation of a post-biological world would ensue, dominated by self-developing, learning and thinking machines ignorant of the limitations of the mortal human body. Cybernetics, artificial intelligence and robotics are only the first indicators of such a third era in evolution: intelligent robots after animal and human life. According to Moravec, our own future survival and that of our civilisation already depend on the rapid development of such machines, particularly for space research and colonisation. Perhaps these intelligent robots will render quite unnecessary our own sojourns in space, which may be so much more difficult and billions more expensive. And one day they will emigrate into the universe, leaving us behind in a cloud of dust.

Similar thoughts are espoused by K. Eric Drexler in his book Engines of Creation (Anchor/Doubleday): microbots, self-reproducing robots of microscopic size based on integrated-circuit miniaturisation technology, partially adapted to genetic mechanisms, would have an eternal life-span and take on quite specific tasks. Moravec has set out a chart mapping computational power (the speed at which calculations are carried out) and computational capacity (storage capability) across the evolution of the computing machine.
This impressive chart must not, however, delude us about the formal limitations expressed in the parable at our beginning, in the famous Church-Turing hypothesis, and in Gödel's and Turing's results.

We know that Gödel formally demonstrated that not all true propositions of a formal system can actually be proven within that system; they remain formally undecidable. From this we might derive that the computer as a formal system – anything that can be formalised may be mechanised – cannot solve all the equations of this world. It follows that not everything can be calculated, formalised, and mechanised within the formal system of the computer. Gödel himself took an ambivalent stance towards this conclusion, nevertheless leaning towards a nonalgorithmic interpretation of the nature of human thought: thought is not mechanical, therefore the mind will always be superior to the machine. He does, however, emphasise that the analogy between computer and mind – that both operate on digital principles – has to be accepted. Taken further, this proposition would limit the extent to which human capabilities can be simulated by machines. Gödel's findings are carried further by the Church-Turing hypothesis, which narrows computability down to those functions calculable on a Turing machine; as Church showed, this is exactly the class of functions he calls recursive functions. Only what can be calculated can be expressed in formulae and thus mechanised; and calculable is only what can, in effect, be calculated recursively. This yields a descending hierarchy of possibilities for a digitalisation of the mind.
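The class of recursive (computable) functions invoked here has sharp internal boundaries of its own. A standard illustration is the Ackermann function: it is perfectly calculable in Church's sense and can be written as a short program, yet it grows faster than any primitive recursive function – a small taste of the hierarchy behind the Church-Turing thesis:

```python
import sys
sys.setrecursionlimit(100_000)

def ackermann(m: int, n: int) -> int:
    # Total and computable (general recursive), but not primitive
    # recursive: its growth outruns every function built from
    # bounded loops alone.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Small arguments are tame; ackermann(4, 2) already has 19,729 digits.
print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
```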

Applied to our problem, the quest would have to be for a similar theorem limiting simulation. Can simulations be reduced to effectively calculable functions in the manner of the Turing machine? Does the digital dream of a purely numerical depiction and calculation of all processes of human life, or at least of the human brain, simply end at the formal limits set by Gödel's, Turing's, and Church's findings? If the computer cannot solve all the functions of the world and every mathematical equation, how should it perfectly simulate the mind?

Gödel, of course, has placed the importance of his theorem for such questions firmly in perspective, leaving a Platonic way out.

In his famous 1950 essay "Computing Machinery and Intelligence" Turing asked the question "Can machines think?", surprisingly answering it affirmatively with the operative argument now known as the Turing test. (9) He stipulated that a computer thinks when its answers become indistinguishable from those of a real human being. A person poses questions through a partition, not knowing beforehand which of the written answers appearing on a screen are given by a computer and which by another person. The computer wins the test if that person is unable to detect which answers came from the computer and which from the human respondent. In the mid-sixties K. M. Colby simulated a psychoanalyst by computer so successfully that many patients preferred the machine.
Going back to our initial parable: simulation is successful when it removes the difference between man and machine. It does not automatically follow that this renders the former superfluous; rather, it would no longer make sense to differentiate between the two, since in a truly perfect and comprehensive simulation of man by machine they would in fact operate alike. We would no longer know whether we were dealing with a machine or a human being. There will be no difference, therefore we shall no longer make any; it will be pointless to speak of man versus machine. Even Gödel envisaged a nondigital computer that might one day defy any limiting theorem; only then will we no longer recognise the computer as such. Man can then either operate below machine capacity, as in the fable, in order to regain his identity, or he can begin to simulate the computer: man in perfect simulation, superseding the perfection of his own creation, which improved on the simulation of his own kind. Husserl would call this the transcendent in immanence. This is why I said at the beginning of this essay that the simulation of man by the machine will reach a height of perfection that will lead mankind to try to attain precisely such perfection through simulation of the machine. When man begins to simulate properties of his own products, there is a danger that the social characteristics of these man-made products will be mistaken for properties naturally inherent in them. Marx called this process reification, the objectification of a subject and its state of being. This tendency to universally objectify one's existence, treating all human intercourse and activities as commodified goods, has its origins in barter trade and finds its extension in the mechanised world. Goods, with their fetish character, represent the prototype of objectification.
"The mysterious quality inherent in goods lies simply in the fact that the objectified characteristics of the products of man's work are a reflection of the social characteristics of human labour, thrown back at man as the socially natural properties of these goods." Marx continues about man-made products in a commodified world: "Here the products of the human mind seem imbued with a life of their own, independent figures relating to each other and to mankind. This is what I call the fetishism which the products of labour acquire as soon as they become a marketable commodity; this fetishism is thus inseparable from production." (10)
What Marx said about consumer goods applies, of course, even more to robots. Machines, even more than goods, reduce humans to marketable commodities; machines are imbued with a double fetishism, as goods themselves and through reification – the omnipresent fetishism of the motor car, the television set, the computer. Robots are precisely such products of the human mind with a life of their own, independent creatures. Do intelligent machines, with their double appeal to consumer fetishism, then represent the culmination of alienation? Certainly intelligent robots epitomise Hegel's "alienated spirit." However, he also writes in his Phenomenology of Spirit (1807), from which AI would have much to learn: "the existence of this world and the reality of consciousness rely on that movement which it extorts from its personality, creating its own world so alien that it must now be reappropriated. But the renunciation of being is in itself the creation of truth through which that truth may be acquired." (11) The realm of reality can only be created through self-extortion and subjective alienation. Moravec thus quite rightly calls robot machines, as products of the self-alienated mind, "mind children." "Although springing forth from individuality," the real world is "like an alien entity to the conscious." But this is how "the coming into being of the real world" takes place. (12) Machines and tools, simulations of human organs and activities extorted from man himself, contribute to the construction and evolution of the world.

According to Hegel, the simulation of simulation – a kind of recursive cycle in which man simulates those products that simulate him – provides a model for the creation of reality. It is thus primarily simulation which calls into question Darwin's theory of evolution in its tautological essence, contained in the dictum of the survival of the fittest. That term is determined by survival in an Aristotelian sense, an obscure entelechy. However, in evolution it is not fact that arises from fact; rather, models transform into facts, which then become simulated models, providing once more the source for facts. In truth evolution consists of a fully interactive network of mutual simulations, an existence perforated by simulation. Simulative qualities are already part of nature, and mimicry, as an instance of adjustment to a dynamically changing environment, is clear evidence of such a state. The meaning of the term mimicry has to be reconsidered in this context. A plant producing yellow dots on its leaves to repel insects, which have been taught by experience and genetic information that such dots signal a poisonous substance, represents a successful instance of simulation helping survival. If these insects after a while detect the simulation and readapt (obtaining new genetic information) so as to settle on the plant nevertheless (now perhaps themselves acquiring yellow dots as protection from other insects), then the plant will again be compelled to change. This describes a chain of adaptations to a series of dynamic simulations. Survival of the fittest therefore amounts to survival of the fittest simulation.
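The adaptation chain just described – bluff, detection, new bluff – can be sketched as a toy loop. The signal names and the strict alternation of moves are invented for illustration; real co-evolution is of course stochastic on both sides:

```python
import random

random.seed(1)

SIGNALS = ["plain", "yellow-dots", "red-stripes", "blue-rings"]

plant_signal = "yellow-dots"    # the plant's current warning bluff
insects_know = {"plain"}        # signals the insects no longer fear

history = []
for generation in range(5):
    if plant_signal in insects_know:
        # Bluff detected: the plant must evolve a new display.
        plant_signal = random.choice(
            [s for s in SIGNALS if s not in insects_know])
    else:
        # The insects eventually learn that this signal is a bluff.
        insects_know.add(plant_signal)
    history.append(plant_signal)

# After a few rounds every available bluff has been seen through,
# and the arms race would have to invent signals beyond SIGNALS.
```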

This in essence is the gist of our initial parable. The joint evolution of man and machine represents a new evolutionary phase, in which the existing model is restructured through a re-accentuation and redefinition of its pre-eminent elements. Such mutual evolution of man and machine through simulation will, of course, result in the formation of a totally artificial, simulated world. (Viz. also Jean Baudrillard, the "penseur" of simulation.) Survival of the fittest simulation also entails survival of the simulation of the fittest: man designing reading, interpreting, learning machines from which he in turn will learn. This structure can counteract the digital dreamers' numerical fetishism, which is comparable (in a Hegelian sense) to consumer fetishism. Even in nature the fabric of reality is perforated by the spirit of alienation, where mere animals create tools of externalisation and thus their own truth. The machine revolution only renders obvious the fact that reality has always depended on artificial, virtual construction.
One much ridiculed thinker in the Hegelian tradition, Gotthard Günther, published his first revolutionary work, The Consciousness of Machines, in 1957; it has assumed new relevance for our discussion. (13) (This followed his exposé of the "basics of a new theory of thought in Hegel's logic" in his 1933 dissertation.) Dismissing naive linear Pythagoreanism ("All is number"), he developed a kenogrammatics (kenos, Greek for "empty") based on the premise of the void as the basic structural component of mathematics and logic, a place that can be taken up by any random value. (The Arabic sifr – German Ziffer, English cipher – means empty, zero.) From this he derived a complex non-linear Pythagoreanism, an arithmetical theory in which numbers do not progress in linear fashion but may branch off the line at random. In his theory of polycontexturality he also had to give up two-valued logic. Such a many-valued logic of course rejects the whole concept of thinking in the alternatives of true or false. The bivalent logic of existence is given up in favour of a many-valued logic that guarantees open options in formalism, catering for a constant extension of complexity. Thus many-valued, non-linear logic, together with the structural context of the theory of polycontexturality – within which bivalent logic may remain locally valid – serves to explain the infinite variety of material qualities in this world. Quantity transformed into quality denies any model of the universe consisting of a single closed, unified contexture. In 1957 Günther was thus already anticipating the later ideas of a plurality of parallel universes in quantum mechanics (viz. David Deutsch). It is this polyvalent logic of reflection, pitted against the purely linear, bivalently formalised and mechanised digital computer, that anticipated the development of parallel computers and neural networks.
Günther's polycontextual, polylinear and polyvalent logic could be of benefit in the organisation and conceptualisation of further such networks.

Günther also has an answer to the problem posed by our parable. Following Hegel, man and machine may always be differentiated, because the subject changes with the evolution of the machine: in its expression in the machine it is split in two, losing its former identity. By depositing a merely mechanised, formalised form of its consciousness in the artifact, the subject advances into hitherto unattained depths or heights of awareness. The human spirit will always remain superior to the machine (viz. also Gödel), as the simulating machine compels man to increased reflection in this evolutionary game of simulation. From this ensues the self-regulatory and self-reflective progress of matter towards higher planes of complexity, liberating human consciousness from the imprisonment in its own subjectivity so aptly lamented in 1904 by Karl Heim and again by G. Günther in his Weltbild mit Zukunft (Global Image for the Future). Such a reformed subject will possess a sovereignty unconstrained by biology or by the problem of locating identity between flesh and spirit – a subject close to the observer of quantum mechanics, a phantom, of course, if seen in relation to history.
Moravec quite rightly accepts humans improved by genetic engineering only as second-class robots; instead he is looking for a subject possessing the advantages of the machine without a corresponding loss of personal identity. Already a large number of people survive with the aid of artificial organs and limbs, machines that support the body, and one day such surrogates, or simulations, will be better suited for survival than the originals. Moravec then asks: why not replace the lot and simply transplant the brain into a specialised robot? Such an instance is illustrated in Piet Hoenderdos' film "Victim of the Brain", where the protagonist's brain is removed, stored outside the body, and a cloned version of it implanted into a computer. The subject can now switch between his two exterior brains. This would, however, not free the brain from the constraints of its limited intelligence. The question is thus no longer whether machines can think or whether we can transplant the brain into the computer much like a kidney; it must rather concern the extent of the spirit's independence from its physical basis, the brain. Could we extract the spirit from our brain?

An initial step lies, of course, in giving up the idea of subjective identification with the physical basis of the body, transcending the old conflict between spirit and flesh, mind and body. Moravec proposes "pattern-identity": the essence of a person lies in the patterns and processes taking place in mind and body, equated with software rather than with the hardware, the machine, which merely supports and contains these processes.
Nor does the body-in-prosthesis, the surrogate body, provide an answer to the real problem of the phenomenon of consciousness, namely that the living spirit is a dynamic system amounting to more than the mere sum of its components. Such a state of virtuality will have to be examined further on.
One of the key problems lies, of course, in the physical nature of consciousness. This can best be illuminated through quantum mechanics, which enables us to reconsider the problems of body and spirit, of human identity and awareness, and also to think about the theorems limiting formal systems and the capacity of the mind. It is quantum mechanics that most acutely threatens the premise of the digital dream, the wish to express and calculate everything in numbers. G. Günther, by extending the theory of numbers, tried to banish this danger, at the same time preformulating quantum-mechanical conditions, albeit in a traditional dialectical idiom.

Should quantum theory really be a universal physical theory, then spirit and brain are undoubtedly quantum-mechanical phenomena. A leading advocate of this opinion is the eminent physicist and mathematician Roger Penrose, who, together with Stephen Hawking, developed substantial parts of such a new cosmology. Starting out in opposition to the theses that "everything is a digital computer" and "everything can be modeled exactly through digital calculation," he sensed the illegitimacy of the underlying argument declaring the human brain and spirit to be nothing but a digital computer. He also had to contest the notion, traditionally arising from the above, of the insignificance of hardware for mental phenomena.

Instead, the evolution of the brain is seen as an exploitation of quantum-mechanical effects, and consciousness itself as a quantum-mechanical phenomenon. (14) Although conceding the algorithmic nature of some of the brain's activities, he finds himself unable to imagine that the complex algorithms of the human brain are simply the result of a "natural selection" of algorithms. Penrose deduces that even quantum mechanics is insufficient to describe the activities and structure of the human spirit, which would in fact require laws more fundamental than quantum mechanics. Therefore there seem to be facets of the human spirit that can never be simulated by a machine. (15)

If the brain is not a digital computer, could it be a quantum computer? Let us try to transfer the game of survival by simulation onto the computer, as David Deutsch did in 1985. (16) The concept of a quantum computer is based on principles of the Turing machine. No one has as yet managed to build a quantum computer, nor do we know if this could be possible, but there are some remarkable preconditions.

As the Turing machine is a serial computer limited not only by the halting problem but also by complexity theory, we might hope that such limitations could be remedied by the addition of a few parallel computers, and to some extent they can be. A quantum computer, however, with a complexity theory of its own differing from that of the Turing machine, could avoid such limitations. The question arising in the simulation of a universal quantum computer by a universal Turing machine is whether the quantum computer can actually calculate functions that the Turing machine cannot, which would invalidate the above-mentioned Church-Turing thesis. Deutsch has nevertheless demonstrated that the class of functions calculable on a quantum computer is exactly the class of Church's recursive functions calculable on the Turing machine. Yet there are tasks beyond the mere calculation of functions. In quantum parallelism, for example, the number of tasks that can be performed at the same time is no longer limited, the advantage being that whilst any classical computer or Turing machine programme could run on the quantum computer, by no means could any quantum programme operate on a Turing machine.
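The cost of imitating quantum parallelism on a classical, serial machine can be made concrete with a toy state-vector sketch. The function below is purely illustrative (the names are not drawn from Deutsch's paper): it merely shows that n ideal qubits already demand 2**n complex amplitudes, which is where a classical simulation runs into the complexity limits mentioned above.

```python
# Toy illustration: a classical description of n qubits needs 2**n amplitudes.
# Applying a Hadamard gate to each qubit of |00...0> yields a uniform
# superposition over all 2**n basis states.
import math

def hadamard_all(n):
    """Return the state vector of n qubits after a Hadamard on each,
    starting from |00...0>: every amplitude equals 1/sqrt(2**n)."""
    dim = 2 ** n
    amp = 1.0 / math.sqrt(dim)
    return [amp] * dim

state = hadamard_all(10)                     # 10 qubits -> 1024 amplitudes
print(len(state))                            # 1024
print(round(sum(a * a for a in state), 6))   # probabilities sum to 1.0
```

Already at a few dozen qubits the classical bookkeeping becomes astronomically large, while the quantum computer carries all branches "in parallel."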

Deutsch does believe that quantum computers will be constructed one day, and their existence will provide powerful proof for the interpretation of quantum mechanics as an infinity of parallel universes. A quantum computer's behaviour can be expressed in terms of its delegation of sub-operations to copies of itself in other universes.

The Oxford philosopher Michael Lockwood has advanced Penrose's idea that quantum correlations occurring over wide distances could be responsible for the unity and globality of the states of awareness in the human brain (as highly coherent quantum states). In his book Mind, Brain and the Quantum he defines "the compound I," (17) using the physicist H. Fröhlich's 1968 discovery that the quantum-mechanical phenomenon of Bose condensation can be applied to biological systems. Condensed Bose states can be seen as responsible for the coherence of biological systems, and as useful for the amplification of weak signals and the codification of information in minute space.

I. N. Marshall in 1989 espoused the thesis that Bose condensed states provide the physical basis for mental states such as the unity of consciousness. (18) Lockwood then deduces that the singularity of the human mind derives from precisely such Bose condensed states, should the brain really operate as a quantum computer.
The real threat to the digital dream emanates from the role of the observer in quantum mechanics and its indeterminacy principle. What happens when we observe a physical system? The contention offered in conventional interpretations of the "problem of measurement" is that the act of observation influences the observed system at the moment of observation, that there is an interactive relationship between observer and object. Another interpretation tells us that we invariably lose something in observation. Yet unobserved events occur all the time, and the indeterminacy principle quite clearly causes a loss of realisation in observation: there is no certainty as to which possibility the next successive moment will choose, as the paradoxical thought experiment of "Schrödinger's cat" illustrates.

Lockwood has elaborated on this indeterminism to ask if the "ghost" is to be found in "the machine" of the body, or if it in fact needs no machine, no body, or no specific machine or body. Or is the spirit perhaps pure software, pure mathematical abstraction, with or without optional hardware? Neither question could be answered positively; instead Lockwood concentrated on setting up a new interdependence, based in quantum mechanics, between spirit and brain, and between consciousness and the physical world. At the heart of the quantum-mechanical "observation and measurement problem" lies the question of "how consciousness (specifically the consciousness of the observer) fits into, or maps onto, the physical world."

The physical state of the observing brain undergoes a stream of observant experiences, i.e. a stream of consciousness designated by and emanating from the brain, which yet at the same time has to participate in the properties of the set of observables selected. Only properties shared between the brain and the chosen set of observables may be designated as conscious observation. Lockwood's theorem can be taken to mean that something in the physical quantum-mechanical state of the observed entity has to correspond to the quantum-mechanical state in some part of the brain of the observer in order to be registered consciously. Very simply, then, something approaching spirit or a state of consciousness has to be already inherent in objects or machines. Consciousness, observable and observer simulate each other and transfer properties onto each other. Thus the quantum-mechanical formulation of the measuring problem in terms of the observer's participation in the system under observation must be a question of conscious projection, looking for those components of human consciousness inherent in the "consciousness" of the very objects themselves. Human interference in the world cannot go against the will of the objects therein, and as the world is only the one we can recognise, we only select from it objects which we can detect with our senses; objects that are detected by certain properties of our senses and that must therefore correspond reciprocally to those properties. Goethe wrote that "the eye is of a sunny disposition," and it is this that we term anthropomorphous, a "theory of naturalised cognition" to use W. v. O. Quine's words.
I am indebted to the great chaos scientist Otto E. Rössler for pointing out to me that as early as 1763 Roger Joseph Boscovich formulated such a theorem in all its consequences in his Theoria philosophiae naturalis. (19) Boscovich defines his law of a sole driving force as a common principle of co-variance, according to which the universe has to be described in relation to the observer, and that even motions within the observer contribute to its transformation.

The Boscovich curve illustrating his thesis depicts an asymptotic branch, according to which our universe would be a self-contained, closed, cosmic system. This would mean that no point outside that universe could come into contact with us, which opens up the possibility of infinite space filled with cosmic systems that cannot interfere with each other. Not even a ghost wandering around such a system could recognise any universe other than the one in which he exists. This actually amounts to a premonition of the quantum-mechanical "many worlds" interpretation, where space is infinite, but can only be recognised as finite. Thus writes Boscovich in his supplement II, "Of Time and Space, as we know them": "We cannot obtain an absolute knowledge of local modes of existence; nor yet of absolute distances of magnitudes." (20)

Should the universe commence to revolve in another direction, or contract or expand, we would be unaware of it. His early theory of relativity also already encompasses the "measurement problem": "what has been said with regard to measurement of space, without difficulty can be applied to time; in this also we have no definite constant measurement." (21)

Consciousness cannot simply be subtracted from the world of matter; even Kant's absolute terminology – a priori, beyond our experience of time and space – is being put into perspective, "as we know it." Consciousness itself is no absolute category a priori. Boscovich defines this mutual relationship between awareness and the physical world with the complicated idea of "compenetration" and the coexistence of points of matter in time. Consciousness derives from the compenetration of matter and spirit, as their designated process. His doctrine of impenetrability has acquired somewhat infamous renown. "Matter is composed of perfectly indivisible, nonextended, discrete points," which he axiomatically qualifies in that "two points of matter cannot be at the same point in space at the same point in time." His critics tend to overlook what he says later: "To the infinite number of possible points of matter there will correspond an infinite number of possible modes of existence. But also to any one point of matter there will correspond the infinite possible modes of existing, which are all the possible positions of that point." Thus "any point of matter has its own imaginary space, immovable, infinite and continuous."

"Every point of matter is possessed of the whole of imaginary space and time; the nature of compenetration." (22)
This imaginary space is virtual space; virtual reality takes place in Boscovich space. If indeed two bodies cannot be perceived sensually at the same point in space at the same point in time in the real world, this may nevertheless be achieved in virtual space. With the aid of data gloves and data visors we can superimpose imaginary space onto real space; a computer-generated sphere could occupy the same space as other objects. Virtual reality is a journey into imaginary Boscovich space, where the real and the possible are contrived in coexistence, in compenetration. Its fascination lies in the simulated defiance of all classical laws of nature, of the tyranny of here and now: space and time conquered. Traditional spatial concepts disintegrate when I can see my own hand in simulated space, when I can observe real and imaginary objects react to my actions. This kind of space, where the present and the absent may exist equally, is pictorial space which, for the first time, I can actually penetrate. I have entered the picture through closed-circuit television installations; Jackson Pollock had claimed to have entered his own pictorial space with his subjectivity. Here, however, the visual space of the spectator and the pictorial space of the image intermingle and collaborate, as anything the spectator does in pictorial space he also does in his real environment. The virtual environment is not the real world, not reality, but a representation of the real as artificial reality, where wish fulfillment still corresponds to reality, where interior and exterior, imagination and reality, I and other are all bridged. Myron W. Krueger defined "artificial reality" as an environment controlled by computers that register our needs and react to them. (23) Virtual worlds embody the pure essence of omnipotent experience and the pleasure principle. This is the space of the psychotic who stage-manages reality in hallucinatory wish-fulfillment, uttering the battle cry "VR everywhere."
Freud, describing the aims of technology in his Civilisation and Its Discontents as the creation of substitute organs and limbs that make of man the God of prosthesis, actually illustrates the sorts of fantasies of omnipotence that simulation technology makes possible: fantasies that we can forget the trouble in life, the opposition of the object. Cyberspace is the name for such a psychotic environment, where the boundaries between wish and reality are blurred. At its worst the VR movement will remain an infantile toy; at its best a tool in space-travel technology, where teleportation, beaming people from one star to another, could be rendered from mere science fiction via concepts of VR into reality. (24)
"In the beginning there was the number" (25) must, so to speak, initiate any digital dream. The first digital thinker was, of course, Pythagoras, who first set up the philosophical conception according to which numbers are the omnipresent structure behind all phenomena, and numerical relations the foundation of harmony. Plato similarly preached digital harmony, leaving an indelible mark on western civilisation. Our yearning for perfect harmony led us to the golden section, the divine measure of proportion in the arts and architecture of antiquity, and in its renaissance, which bore in Leonardo da Vinci yet another digital dreamer. It was the French philosopher Descartes who first formulated the digital dream as science in his pretension to elevate the mathematical method into the universal scientific methodology: mathesis universalis. The digital dream lies in the claim of Pythagorean/Platonic metaphysics that the entire world can be depicted in numbers and numerical relations. It is interesting to think that a simulated Descartes could just as logically deduce his existence as could the real Descartes. Simulation threatening the digital dream is in itself a manifestation of that dream.

The end of the digital dream is the comprehensive translation of the world into number, in fact a "mathematisation" of the world. The astronomer Johannes Kepler, who lived and worked in Linz and who is the subject of this year's Ars Electronica 90, was such a digital dreamer too: in publishing his Harmonices Mundi in 1619 he became a key exponent of digital harmony, the harmony of the world based in number.

The philosopher and mathematician Leibniz achieved a decisive breakthrough a century later when he developed the theory of binary numbers, the binary code: the depiction of all numbers in just two digits, zero and one, void and material presence, to be or not to be. What seemed at the time a mere curiosity became the central basis of modern computer technology. In fact, by setting up the facility for the depiction of all numbers in just the two digits 0 and 1, Leibniz formed the basis for the technological realisation of the digital dream. He had tried to replace logical deduction by calculation, i.e. logic by mathematics, amounting to the displacement of thought by a machine that would automatically provide proof with the aid of those two digits. Two centuries later Leibniz's discovery was transformed into an algebraic switching circuit, a logical network based on electric currents, technical machines where the digits "0" and "1" indicate the absence or presence of a flow of electrical current: in short, digital technology, electronic calculating machines, digital computers. The calculator has always been the companion of digital dreamers, and it was a close associate of Kepler, Wilhelm Schickard of Tübingen, who invented the first known calculating machine. Thus the computer represents the current peak in the embodiment of the digital dream, which would like to see the world as a cosmos of numbers, to be simulated and reconstructed from the laws of number.
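Leibniz's reduction can be sketched in a few lines of code. The function below is merely an illustration of the principle that any natural number is expressible in the two digits 0 and 1, by repeated division by two:

```python
# Leibniz's insight, sketched: every natural number can be written
# using only the digits 0 and 1.
def to_binary(n):
    """Return the binary digits of a non-negative integer as a string."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))   # remainder is the next binary digit
        n //= 2
    return "".join(reversed(digits))

print(to_binary(1990))  # '11111000110'
```

The same principle, realised electrically as the presence or absence of a current, is what the essay describes as the basis of digital technology.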

Digital harmonies, calculators, virtual machinery all emanate from one and the same human dream, to transform nature into a humane environment that can be controlled by man with the aid of number and its law – to tame the terror of the elements, to be able to predict and contain the forces of nature. Here lies the base for the gradual creation of a new world by man alone, an artificial reality seemingly more favourable than (hostile) nature. Attempts at anticipating such artificial realities in computer-controlled machine worlds that react intelligently to our needs will provide the focus for the 1990 Ars Electronica.

Data glove, data suit, data visor, data banks are all indications of a new world. Data-ism for Dada-ism. The digital credo, beginning with Pythagoras, has for the time being doubtless reached its height of perfection in today's computer technology.

In the land of hypermedia and hypermata, virtual machines represent a new generation of automaton. Human interaction with three-dimensional cybermodels in a near-world (virtual world) is probably an improved form of man-machine interaction and simulation, so far the most perfect simulation. The anthropomorphisation of the object has attained new perfection, as has its independence and self-dependence as an intelligent virtual machine. Heidegger would, of course, see his worst fears of technology displacing nature and the corporeal body confirmed in VR.

The body, doubled and part-imagined in Virtual Reality (VR) as the most recent possibility in its technological transformation, may indeed represent its deposition, yet entailing also its improvement; I may now comport myself without danger to limb and body in zones perilous to the natural body. The I, the state of conscious awareness, will need less of the physical body; VR will drain the conscious mind, the "I," of limb and nature. Through its technological deterritorialisation in VR the subjective has been raised into a new category of res extensa, of points in space and time, now im-materialising in the virtual infinite.

Consciousness, in the course of evolution through survival of the fittest, has created simulation, and through the simulation of survival ever more complex models and media, the legendary ghost in the machine creating ever improved machines for itself. Consciousness as the driving force behind evolution also creates the simulation of consciousness. Reality is perforated with simulation, with strategies of semblance and deceit, founded precisely in those mechanisms of selection I have described when I cited mimicry as an instance of adaptive strategy. It is in such simulations that the "ghost in the machine" of Lockwood's Bose condensations is to be located.
This argument would actually be corroborated by Ch. G. Langton's definition of virtuality in the book Artificial Life that appeared under his editorship. To him "virtual particles" are the real molecules of life, as their characteristics appear neither in the system nor in the particles themselves, but only in mutual interaction. A system becomes virtual when its part-components and its entirety display their marked characteristics not in isolation, but only at the moment of their mating (compenetration, as Boscovich would say). Such virtual systems are non-linear and dynamic, alive. The spirit is a virtual system in the machine of the physical body, body operating in spirit, and spirit in body. Now we may understand what the attempt to surgically remove the spirit from the body would entail: it is impossible – according to Boscovich ("every point of matter is possessed of the whole of imaginary space"), according to quantum theory, and according to virtuality. At this point it would perhaps be fitting to qualify my stipulations: simulation corresponds more closely to mechanical systems, whereas virtuality corresponds to dynamic, non-linear systems. In actual fact we were talking about virtuality when we discussed simulation in the context of evolutionary theory; the essence of simulation is virtuality. Thus a clock is basically a mechanical system that nevertheless displays hints of virtual characteristics. In its functional essence a clock will only exist in the action of its movement powered by some external source of energy, but a hand remains a hand no matter whether the clock lies "dead" or "lives." Equally the body in prosthesis in its classic function would represent such a mechanical system in its essentially unchanging nature; it and its component parts do not lose their identity in a split expression, remaining forever the base sum of its parts.

The computer displays a number of virtual characteristics. As a digital automaton it embodies nature translated into a different language, which then gradually introduces us into the state of virtuality. This machine is a-changing: its hardware, its body, has changed and will continue to change. Nevertheless, its defining essence, the binary code, will remain fixed. Unlike in the clock, however, it is the programme that is more important in the computer, its language, the algorithm, more important than the messenger, the body, the machine itself. The computer evidently contains more of a "spirit" than the clock. The body may become its own clone to the extent of its binary self-codification and decodification, decoding at a distance. Perhaps the body is the quantum computer in whose construction we have so far failed. After all, the body, just like the quantum computer which sends copies of itself into other universes, now sends copies of itself into other, virtual, worlds. The computer is in fact a simulative prosthetic body hinting at potential virtuality in the coexistence of limb and spirit. However, as long as it remains just a body in prosthesis, a mechanical body, it will lack true life force, lacking in virtuality.

We have thus reached a situation with computers on the one hand that simulate in essence the "spirit" of the brain, and robots that simulate the "life" of the body. Will it be possible to mix the two, unite body and spirit? Well, yes, albeit only through virtuality.

Virtual machines may be seen as occupying a station along the way from the "thinking" to the "living" machine. Not only would the living machine have to be virtual; should there forever be a difference between man and machine, it would also be immune against simulation. But if everything could be artificially calculated, depicted and reconstructed in binary code, as stipulated in the digital dream, then everything could be simulated.
I have argued primarily in terms of quantum physics that the digital dream cannot hold universally true. My main argument must, however, be the theory of simulation itself. AIDS has demonstrated that the perfect virus is the one that is immune against simulation. Thesis # 1: The highest level of simulation lies in attaining immunity from simulation itself. (A copy without original, a clone without body.) This used to be expressed in the principium individuationis. So how could a "living" machine, which would have to perfectly simulate man from a digital basis, be effected, when we take it that man represents the end of an evolutionary chain of survival of the fittest simulation? Applying thesis # 1, man would be immune from full simulation; he cannot be comprehensively simulated by a (digital) machine. Secondly, I put it that life is a condition of virtuality. Virtuality, however, is defined not as a property inherent in the very objects, machines, parts, or systems, which themselves can be simulated, but rather as a property pertaining only to the act of correlation of all particles. By definition precisely this correlation cannot be simulated. Because of virtuality not everything can be simulated, least of all simulated digitally. So far the virus is the best virtual machine, or, as William S. Burroughs says, "language is a virus of outer space." Language would therefore provide an instance of a virtual system in our context. On the one hand it seems to function like a mechanical clockwork movement: a determined system consisting of 26 elements (letters), embedded in a determined algorithmic structure (grammar), which some hold to be nothing more than a programmed succession of variants, combinations and permutations. Yet the literary output of mankind over the last 2000 years could never have been achieved in this manner, not even in eternity.
The production of a mass of sensible text by means beyond the mechanical capacity of language as a combination of text elements invalidates the view of language as a kind of system of natural selection of algorithms. More than a mere mechanical system it manages to set up combinations of its elements more speedily and more sensibly than any mechanics. Is this the spiritual quality of the mind? This elusive spirit is not to be found in the machine, nor in the machinery of language, nor in grammar, but in that part of the brain where these predetermined and finite elements and algorithms are transformed into an infinite, undetermined succession of sensible sentences. Such an essence of virtuality originates only in the dynamic play of the elements of the mechanical system that is language, embedded in a nonmechanistic brain – it is then that language "lives."
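The combinatorial point can be made concrete with a back-of-the-envelope calculation; the figures below (line length, typing rate) are rough assumptions chosen purely for illustration:

```python
# Even a single 40-character line over a 26-letter alphabet admits an
# astronomical number of possible letter sequences.
line_length = 40
possibilities = 26 ** line_length
print(possibilities > 10 ** 56)   # True: roughly 4e56 sequences

# Suppose a blind mechanism had produced one line per nanosecond
# for 2000 years (assumed figures, for illustration only):
seconds_in_2000_years = 2000 * 365 * 24 * 3600      # about 6.3e10
lines_produced = seconds_in_2000_years * 10 ** 9    # about 6.3e19
print(lines_produced < possibilities)               # True, by ~37 orders of magnitude
```

Blind combination thus cannot account for the mass of sensible text; the selection of meaningful sequences must come from elsewhere, which is the essay's point about the non-mechanical residue in language.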
After cybernetics, AI, and robotics, virtual machines are the latest expression of the digital dream, terminating it at the same time. A computer such as Terry Sejnowski's NETtalk, which teaches itself to read aloud a written text, is uncannily close to a talking person; the simulation (of neural networks) seems perfect, as does the result. Will virtual machines become the main protagonists in a global process that reduces man to a mere spectator and parasite? A perfect technical mimesis or simulation so far advanced that it would replace the real world by an artificial creation where man would tend to self-abstraction as a mere observer. We have seen the consequences of such perfect simulation in our parable: man as simulator of the machine, as empty torso, easy prey to myth and other such power-crazed programmers of reality who hold forth the promise once more of totality and authenticity.
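NETtalk itself is a multi-layer network trained by backpropagation on letter-to-sound data; as a minimal, purely illustrative sketch of the underlying principle of error-driven learning (not Sejnowski's architecture), a single artificial neuron can teach itself a simple input-output mapping:

```python
# A toy perceptron: one artificial neuron adjusting its weights from its
# errors until it reproduces a target mapping (here, logical OR).
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection weights
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # error-driven update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(samples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(*x) for x, _ in samples])  # [0, 1, 1, 1]
```

The network is not told the rule; it extracts it from examples, which is what makes such simulations of learning seem "uncannily" lifelike.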

However, these controlled, calculated and designed worlds are called virtual worlds not because they imitate nature, but because they digitally simulate an image of delusion. They are simulations, computer or cybermodels of imaginary worlds, which comply with the laws of logic and physics and yet seemingly defy these laws in the creation of imaginary space where anything is possible. Virtual worlds are illusionary worlds, three-dimensional near-worlds based on digital technology. Virtual artificial realities do represent alternative realities: information space containing imaginary objects in dimensions of space and time that can be manipulated directly or from a distance. Objects in virtual reality react to man; they can be manipulated by the spectator. At the flick of the spectator's head, objects depicted in digital simulation may change their proportion or perspective, man in fact integrating with the fiction of his imagination as conjured up digitally by the computer. It is this alternative reality that makes virtual worlds more than merely the simulations of artificial digital truths.

Because the spectator himself is an emphatic part of the image in such an artificial reality, empowered with the illusion of his own body acting as clone in front of his field of vision, and because he may yet simultaneously control the imaginary objects from outside the virtual world, he puts into perspective the universality of the digital dream, as, naturally, the spectator as the creator of such virtual worlds cannot himself be digitalised. It would be pointless to employ a machine as manipulator in the virtual world, as anything appearing in its data visor would be digital simulation, no matter whether the object existed externally in front of the visor or was generated internally through it. For the machine, both real and generated objects appear alike in the visor.

Virtuality, where simulation, imagination and reality are mutually transgressive, is psychotic space, and yet it remains residually non-digitalisable. The role of the spectator as the bridge between the real and the simulated represents the quantum-mechanical constraints on the digital dream; virtual worlds exist at the borderline between digital dream and quantum mechanics, evoking an environment controlled and created by computer but reacting to human needs and ideas. Now, if all were calculable, it would follow that all must be predetermined. Thus we arrive at the following alternatives: the simulation of imagination by virtual machines can mean either its determination, or determination opened up to the imagination. Chaos theory and quantum physics seem to suggest an indeterminable spiritual cosmos. The digital arts emanating from the cosmology of number are also a link between digital finality and infinite imagination, defending the impossibility of simulating man. They would serve not to denigrate the digital, but to research and appropriate it so that we may express ourselves through it. Artistic creativity supported by machine would therefore not represent a contradiction in terms, just as postbiological life wouldn't. Both are far removed from something like a quality of the spiritual. Digitalised artistic creativity in an expert system for the creation of art will one day be possible, and such an algorithm will produce works of art equal to "real" art, which in fact only reiterates the invalidity of art so far: mechanical and lacking in spiritual dimension. This calls for the remedy of an aesthetic of the virtual; mechanised creativity and the automaton will rid us of a lot of dirt. Technology as the enlightenment of man researching himself?

Virtual machines provide the spirit with new bodies, packaging it in tele-bodies and tele-organs, setting the scene for what Moravec has called "ejecting the spirit from the body." The emperor, the spirit of the mind, is now fitted out with new bodies, neither by transplant, nor by genetic engineering or robotics, but by equipping it with new artificial "organs-in-prosthesis," namely with virtual machines such as the data glove. These tele-organs make man into the Freudian god of prosthesis, or tele-deity, a god of tele-presence instead of omnipresence. Virtual machines create the tele-body and thus represent the emperor's, the spirit's, new bodies.


In Jean Baudrillard, Das Ding und das Ich (Vienna: Europaverlag, 1974).

Gerald M. Edelman, Neural Darwinism (Oxford: Oxford University Press, 1989).

D. O. Hebb, The Organization of Behavior (New York: Wiley, 1949).

Ralph Linsker, "Self-Organization in a Perceptual Network," Computer 21, no. 3 (March 1988).

See William F. [ ? ], ed., Menschliches Denken, Künstliche Intelligenz. Von der Gehirnforschung zur nächsten Computergeneration (Munich: Droemer Knaur, 1990).

D. Rumelhart and J. McClelland, eds., Parallel Distributed Processing: Explorations in the Microstructure of Cognition, 3 vols. (Cambridge: MIT Press, 1986).

Ch. G. Langton, ed., Artificial Life (Reading, Mass.: Addison-Wesley, 1989), p. 43.

Hans P. Moravec, Mind Children: The Future of Robot and Human Intelligence (Cambridge: Harvard University Press, 1988).

A. M. Turing, "Computing Machinery and Intelligence," Mind 59, no. 236 (1950).

Karl Marx, Das Kapital (Berlin: Dietz, 1926), pp. 86–87.

G. W. F. Hegel, Phänomenologie des Geistes (Frankfurt: Suhrkamp, 1986), p. 363.

Ibid., p. 365.

Gotthard Günther, Das Bewusstsein der Maschinen (Baden-Baden: Agis, 1957).

Roger Penrose, "Minds, Machines and Mathematics," in C. Blakemore and S. Greenfield, eds., Mindwaves (Oxford: B. Blackwell, 1987), pp. 259–276.

Roger Penrose, The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics (Oxford: Oxford University Press, 1989).

David Deutsch, "Quantum Theory, the Church-Turing Principle and the Universal Quantum Computer," Proceedings of the Royal Society of London A 400 (1985), pp. 97–117.

Michael Lockwood, Mind, Brain and the Quantum (Oxford: B. Blackwell, 1989).

I. N. Marshall, "Consciousness and Bose-Einstein Condensates," New Ideas in Psychology 7 (1989), pp. 73–83.

R. J. Boscovich, A Theory of Natural Philosophy (Cambridge: MIT Press, 1966). See also Otto E. Rössler's paper "Boscovich Covariance."

Boscovich, p. 203.

Ibid., p. 204.

Ibid., p. 199.

Myron W. Krueger, Artificial Reality (Reading, Mass.: Addison-Wesley, 1983).

Two anthologies, Bruce Sterling, ed., Mirrorshades (London: Paladin, 1988), and Rudy Rucker, P. L. Wilson, and R. A. Wilson, eds., Semiotext(e) SF (New York: Semiotext(e), 1989), offer prominent examples of the new science-fiction movement of cyberspace and cyberpunk.

Cited in Bernhard Mitterauer, Architektonik. Entwurf einer Metaphysik der Machbarkeit (Vienna: Brandstätter, 1989).

See the important issue "Machines virtuelles," Traverses 44–45 (September 1988), Centre Georges Pompidou, Paris.