swarm – Artificial Intelligence – Ars Electronica Festival 2017 – https://ars.electronica.art/ai/en

Spaxels Research Initiative – https://ars.electronica.art/ai/en/spaxels-research-initiative/ – Thu, 17 Aug 2017

Ars Electronica Futurelab (AT)

The Spaxels Research Initiative (SRI) is a loose association of partners in industry and research whose shared interest in swarms has brought them together. Each one—NTT, Audi, Autodesk, Tangible Bits, et al.—pursues this project in accordance with its own particular aims.

The first order of business is to hack our way through the definitional jungle—after all, swarm is a term that is often interpreted very broadly and freely. What the Spaxels Research Initiative (SRI) is actually concerned with is coordinated, autonomous and semi-autonomous robotic vehicles. The term swarm here is not restricted to emergent swarm behavior; it also encompasses formations such as centrally coordinated fleets.

At the top of Ars Electronica’s agenda here is illuminating the interplay of human beings (and society) with (future) mobile swarms. To this end, we have constructed a series of prototypical arrays—the Spaxels, for instance—as a means of exploring the topic “Swarm and Human, Swarm and Society.”

The discussion of Isaac Asimov’s Three Laws of Robotics, and the attempts to expand on them, may suffice for the individual case of how a single robot must behave towards a human being. But how should a diverse assortment of numerous mobile robots communicating with each other in a network behave? Does this call for something like Swarm Laws to govern the interaction between a mechanical swarm and human(ity)?

As with the Spaxels in the entertainment field, coordinated robotic vehicles are increasingly being deployed in the public sphere—for instance, cars are beginning to join together to form “thinking organisms.”

But what does the encounter with swarms mean for the individual and for our society? Is there a “common framework” for swarms and their deployment in all their various manifestations that all the stakeholders can share here?

Is artificial intelligence implemented in dispersed fashion among the members of coordinated systems a solution? Or is that the problem itself?

How does humankind live in and communicate with an environment filled with vehicles that are potentially more intelligent—and certainly better networked—than we are? And what must this environment be capable of doing?

These are all huge questions, none of which can be answered at a conference. But answering questions is also not the point of a conference. Rather, at the top of our agenda is a determination of where we stand now. What are the positions of the participants in this discussion and what are their perspectives? With this as our point of departure, we will embark on a search for the common challenge that everyone in this field is, of necessity, actively facing. Accordingly, kicking off this conference will be a process of exchange in which the partners sketch their respective positions.

Sunday, Sept. 10, 2017

1:15 PM–1:35 PM Horst Hörtner (AT), Senior Director Ars Electronica Futurelab
Introduction to the Spaxels Research Initiative
1:35 PM–1:50 PM Shingo Kinoshita (JP), Executive Research Engineer Supervisor at NTT,
Swarms as a Communication Medium
1:50 PM–2:05 PM Isabelle Borgert (DE), Connected Car & In-Car Technology, Audi AG,
Swarm Intelligence: What Cars and Bees Have in Common
2:05 PM–2:20 PM Hiroshi Ishii (JP/US), Co-Director MIT Medialab
Tangible Bits
2:20 PM–2:35 PM Philipp Müller (AT/US), Program Manager AEC EMEA, Autodesk
Education Experiences, Future of Making with Swarms
2:35 PM–2:50 PM Sepp Hochreiter (AT), Head of Institute of Bioinformatics, Johannes Kepler University Linz

This event is realized in the framework of the European Digital Art and Science Network and co-funded by the Creative Europe program of the European Union.

Reading Plan – https://ars.electronica.art/ai/en/reading-plan/ – Tue, 08 Aug 2017

Lien-Cheng Wang (TW)

Reading Plan is an interactive artwork consisting of 23 automatic page-turning machines. When visitors enter the exhibition room, the machines begin turning pages automatically and read their contents aloud in the voices of elementary school students. The machines are a metaphor for a Taiwanese classroom.

In 2016 in Taiwan there was an average of 23 students per primary school class.

“When people go to school in Taiwan, they don’t have much power to decide what they want to read and study. It is like being controlled by a huge invisible gear. The authorities’ education policy prioritizes industry value and competitiveness. The government wants to promote a money-making machine rather than self-exploration and humanistic thinking. This is a complete realization of dogmatic rules and state apparatus.” (Lien-Cheng Wang)

The machines read an extract from The Analects of Confucius—a book that has influenced the ethics, philosophy, and morality of Asian countries for thousands of years. The content reads: “The Master said, ‘Is it not pleasant to learn with a constant perseverance and application?’ ‘Is it not delightful to have friends coming from distant quarters?’ ‘Is he not a man of complete virtue, who feels no discomposure though men may take no note of him?’” The essence of the book is a metaphor of ancient China, which sought to control the surrounding countries for thousands of years. Reading Plan creates a space for discussion of localization, education, thought, and the state apparatus.

Credits

Supported by the Department of Cultural Affairs, Taipei City Government

Pool of Fingerprints – https://ars.electronica.art/ai/en/pool-of-fingerprints/ – Tue, 08 Aug 2017

Euclid (Masahiko Sato, Takashi Kiriyama) (JP)

Pool of Fingerprints consists of a large display surface and a fingerprint scanner. The display surface is populated with fingerprints swimming like a school of fish. The visitor can release his or her own fingerprint and watch it swim with others.

When a visitor places a finger on the scanner, a scanned image of the fingerprint appears on the display. A moment later, the fingerprint swims away to join the other fingerprints. When the visitor later returns and scans the same finger, the fingerprint released earlier responds and swims back to the visitor. It then gradually disappears, as if merging back into the visitor’s fingertip.
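The release-and-recall interaction described above can be sketched as a small piece of state: on a first scan a print is released into the pool, and on a repeat scan of the same finger it is recalled and removed. This is only an illustrative sketch, not the artists’ actual implementation; the class name, the string keys standing in for fingerprint features, and the `scan` method are all hypothetical.

```python
# Hypothetical sketch of the Pool of Fingerprints interaction loop.
# A print is identified by a feature key (here just a string); real
# fingerprint matching would be far more involved.

class FingerprintPool:
    def __init__(self):
        # key -> fingerprint image currently swimming in the pool
        self.swimming = {}

    def scan(self, key, image):
        if key in self.swimming:
            # Repeat scan: the print released earlier swims back
            # and "merges" into the fingertip (leaves the pool).
            return ("recall", self.swimming.pop(key))
        # First scan: release the print to swim with the others.
        self.swimming[key] = image
        return ("release", image)

pool = FingerprintPool()
print(pool.scan("visitor-1", "img-a"))  # ('release', 'img-a')
print(pool.scan("visitor-1", "img-a"))  # ('recall', 'img-a')
```

The only design point the sketch captures is that the installation is stateful per finger: the same scan action means "release" or "recall" depending on whether that print is already in the pool.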

Credits

Supported by NEC Corporation and Samsung Japan
