ON BUILDING A GLOBAL PRISON FOR OURSELVES

by Stephen L. Talbott

A program is an elaborate logical structure that we typically map to some structured aspect of our lives. The software engineer today can perform this mapping with remarkable subtlety and sophistication. What is not so often noticed is the extent to which, once the program is up and running, an active mapping in the reverse direction occurs: we end up adapting part of our lives to the program's logical structure.

This is worrisome. The program just is what it is, and remains content to run ceaselessly in the same logical grooves. The essence of life, on the other hand, is continually to transform itself from within, outgrowing and shedding old skins, and occasionally undergoing radical metamorphosis. Even the rigid shell of the acorn must crack and give way to the incarnating vision of an oak stirring within the tender seedling.

Intelligent Systems Demand Universal Rationalization
----------------------------------------------------

No computer program absolutely compels our submission to its structuring of our activity. Nor does it prevent us from subsequently modifying the program logic. But, as the Year 2000 Problem reminds us, even seemingly minor aspects of software, once well-entrenched, can be hard to change later. Meanwhile, digital logic extends its grid-like filaments ever more finely through society.

Of course, other technologies also present us with effects that are difficult to reverse. (Just think of the network of roads and highways.) But intelligent systems are distinctive in their ravenous urge toward aggrandizement. They always exhibit certain absolute and universal pretensions. They "hunger" for mutual articulation of their parts at the most global level possible. It's the very nature of formal reason and logic to demand global consistency -- it's that or else let the different subsystems fall into a grating disharmony.
In the end, your toaster, stereo, computer, camera, and automobile will all be articulated into a single, intercommunicating system.

This need for universal rationalization is why, as Jacques Ellul points out, it's harder to introduce robots to an existing factory than to organize a new factory around them. (1) In the same vein, historians Martin Campbell-Kelly and William Aspray remark that, whereas older office appliances (typewriters, cash registers, filing systems) could be "dropped into" an existing office without changing the underlying organization, the matter was altogether different with the punched-card machine and the computer: "the business's entire information system had to be reconfigured around the machinery." (2) Likewise, small wholesalers are being required, sometimes at prohibitive expense, to tie into the computer networks and databases of the retail chains.

Thus, the web is spun ever more finely, and extends its filaments outward. *In its own terms*, this process knows no natural stopping place.

It's hard to open a paper or magazine today without finding testimony to the demand for universal, single-system rationalization of the social processes we have adapted to digital logic. For example, the April 26 *Economist* carries a story about Proton, Belgium's ambitious experiment to establish an electronic-purse smart card. The author of the story writes,

   Proton ... faces complaints about parochialism. Its chip can hold
   just one currency. Mondex and Visa Cash, a rival run by a huge
   bank consortium, can store value in up to five different
   currencies ... Then there is inter-operability. If it is to become
   a global force, Proton must link its international members through
   an integrated payments system that allows, say, a Belgian customer
   to spend his e-cash in Switzerland .... No matter how impressive
   Proton's technology, ceding independence may be its only hope of
   plugging into a global network.
Walter Wriston, the former chairman and CEO of Citicorp/Citibank, opines that

   "Open" has become the pretty girl at the party. Nobody quite knows
   what open means, but the world is demanding that systems talk to
   each other. (3)

We can, I think, rightly believe that the weaving of planetary society into a global unity is an inescapable necessity for our day. But the unity of a logical system and the unity of an organism are very different things, and it is not at all clear that the social organism can remain healthy once it is bound securely by the iron logic of technology.

Complex Technological Systems Are Brittle
-----------------------------------------

Every rationalized system erected upon logic becomes brittle just to the extent its drive toward interdependence and universality is fulfilled. Think of a web of lies, where the overriding aim is to maintain logical consistency as proof against detection of wrongdoing. The more elaborate the web, the greater the likelihood of a fatal inconsistency, and the greater the difficulty in patching over rents in the fabric.

A similar limitation has afflicted expert systems, which, despite their early promise, eventually succumbed to their own brittleness. As the dean of software engineering, Frederick Brooks, Jr., put it, a point came where the developers of expert systems suffered a "rude awakening":

   Somewhere in the neighborhood of 2,500 to 3,000 rules, the rule
   bases become crashingly difficult to maintain as the world
   changes. (4)

We need to make a crucial distinction, however. There is no necessary brittleness in logically elaborated systems as such. A computer can take a consistent system of logical axioms and mechanically adduce valid propositions without end. The result may not be very interesting, but maintaining consistency is no great challenge. The brittleness arises when we try to correlate the logical system with reality. The failure of complex expert systems occurs, Brooks noted, *as the world changes*.
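The maintenance trap Brooks describes can be seen in miniature. What follows is my own toy sketch in Python, not anything drawn from the essay or from a real expert system; the rules, field names, and duty categories are invented for illustration. While the world matches the snapshot encoded in the rules, the system looks competent; when a new kind of case appears that no condition anticipated, the rule base simply has no opinion, and every repair must be made by hand against the whole web of existing rules.

```python
# Toy illustration (hypothetical rules, not a real tariff system):
# a miniature rule base whose "knowledge" is a snapshot of the world.

def classify_shipment(item):
    """Classify a shipment using fixed if-then rules.

    Each rule encodes a fact about the world as it stood when the
    rules were written; the first matching rule wins.
    """
    rules = [
        (lambda i: i["origin"] == "domestic", "duty-free"),
        (lambda i: i["value"] < 50, "low-duty"),
        (lambda i: i["category"] == "electronics", "standard-duty"),
    ]
    for condition, verdict in rules:
        if condition(item):
            return verdict
    return "unclassified"  # the rule base has no opinion

# While reality matches the rules' assumptions, the system looks smart:
print(classify_shipment(
    {"origin": "domestic", "value": 500, "category": "books"}))
# -> duty-free

# Then the world changes: a category ("digital goods") appears that no
# condition anticipated. The system fails silently, and fixing it means
# hunting down every rule the change touches -- each revision risking a
# contradiction with another rule.
print(classify_shipment(
    {"origin": "foreign", "value": 100, "category": "digital goods"}))
# -> unclassified
```

Multiply these three rules by a thousand, with rules firing on each other's conclusions rather than on raw inputs, and Brooks's "crashingly difficult to maintain" is the predictable result.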
In other words, the more faithfully we extract the logical structure from a real-world situation and impress this structure upon computational machinery -- that is, the more detailed and successful the mapping from world to machine -- the more hopeless the task of adapting the machine to changes. Or, looking at it the other way around: the more we adapt ourselves to all the ramifications of a logical system, the more that system will bind and chafe us as we attempt to grow. (5)

This helps to clarify the common notion that computers introduce a radical new principle of flexibility into otherwise rigidly constrained, mechanical processes. Two things need saying here.

First, it is not true that there is a new principle. Old-style machines, from the loom to the automobile, always tended to gain a more finely calculated and adjustable range of capabilities as their designs grew in maturity and complexity. Computers simply continue a movement in the same direction -- and accelerate it remarkably. But they remain in the same game.

Second, the "flexibility" available in this game is merely an ever closer mapping between the machine and a particular bit of the world that is itself conceived as a fixed and determinate machine. This flexibility is, in other words, exactly what made expert systems increasingly successful up to a point -- and then hopelessly brittle as the world changed.

*This sort* of success, built upon generalization and abstraction (which is the only sort of success open to the computer), cannot keep up with the qualitatively determined transformation that constitutes the progressive inner realization and outer manifestation of a self or community of selves. Such transformation arises from the meanings in our lives, and these meanings -- qualitatively lived and deepening realities, as opposed to static linguistic products -- are not programmable. (6)

A Tightening Web
----------------

That the machine-world *has* been tightening about us like a skin is a fact that strangely escapes our attention. Yet the individual who spends many of the productive hours of his day sitting in front of a screen, expressing himself through mouse and keyboard, occupies the terminus of a remarkable evolutionary development. For the first time in history we have accepted, without any clear limit, the mechanical mediation of our interactions with the world.

It is as if the 19th-century factory had contracted more and more about the individual worker, becoming uncannily subtle and transparent, until the worker, who at first had rebelled within himself at the machine's inhumane depredations, finally yielded himself to what he could no longer even see -- and pronounced it enjoyable.

That we have not yet awakened to the issues this raises is evidenced by the entrenched conviction that all worries about software can be addressed through improvements to the software. If we could just make the intelligent machinery around us more flexible, more perfectly adapted to our needs of the moment.... But this, as we have seen, is the Great Technological Deceit; as long as we are in its grip, our plight can only worsen. The greatest potential for disaster exists, after all, where the machinery has grown to fit us like a skin. The closer the fit, the more radically it constrains us. No straitjacket would prove more effective than my own skin, if it did not grow organically from within.

This is not to say that we should refuse to improve our software. But it *is* to say that until we recognize the huge challenge on our side as we confront this software -- a challenge that becomes more acute with every technical upgrade -- we will be losing ground in the essential struggle. The growing from within, the enlarging or shedding of skins, the unexpected metamorphosis -- these must be accomplished by *us*.
The effort becomes greater in direct proportion to the degree our inner movements have been smoothly and unconsciously "entrained" by the efficient software with which we interact.

I have spoken at various times about particular dangers of this entrainment in relation, for example, to banking software, decision support systems, crime management systems, email, and various other applications. But the fact appears to be that most people have a difficult time actually sensing any of the more profound risks posed by the increasingly ubiquitous and invisible threads of digital logic that shape our lives.

---------------

NOTES

1. Ellul, Jacques, *The Technological Bluff*, transl. by Geoffrey W. Bromiley (Grand Rapids, Mich.: Eerdmans, 1990), p. 316.

2. Campbell-Kelly, Martin and William Aspray, *Computer: A History of the Information Machine* (New York: HarperCollins, 1996), pp. 152, 175-80.

3. *Wired*, October 1996.

4. *Communications of the ACM*, March 1996.

5. The problem is not restricted to expert systems, but applies to every attempt to map a purely formal system (such as a computer) to the world. For some preparatory circling around these issues, see the chapter, "Can We Transcend Computation?" (http://www.ora.com/people/staff/stevet/fdnc/ch23.html), in Talbott, Stephen L., *The Future Does Not Compute: Transcending the Machines in Our Midst*.

6. Again, see the chapter referred to in the previous note.

----------------
This text appeared in issue #48 of NETFUTURE