The new Ars Electronica Center

Compass – Navigating the Future

Press release as PDF
Compass – Navigating the Future / Exhibitions
Photos on Flickr
Opening video on YouTube
Hyperlapse video of the reconstruction on YouTube

(Linz, May 27, 2019) New exhibitions, new laboratories, new educational formats, a new self-conception: with “Compass – Navigating the Future,” the Ars Electronica Center in Linz is beginning its next chapter.
Once a telescope that offered a glimpse into the future, the Ars Electronica Center has now transformed itself into a compass, a companion on a journey through the systems of the 21st century that we humans have created. The new Museum of the Future has received 4 million euros in investment: 2.5 million from the city of Linz, and 1.5 million from Ars Electronica itself.

The reinvention of the Ars Electronica Center
“We have redesigned all the exhibitions, converted one entire floor into a laboratory, and newly conceived all our guided tours and workshops and our school program,” says Gerfried Stocker, Artistic Director of Ars Electronica. The new Ars Electronica Center offers a wealth of interactive scenarios, artistic works, scientific research projects, information stations, workshops, and laboratories, all of which explore current developments in the fields of artificial intelligence, neuroscience, neuro-bionics, robotics, prosthetics, autonomous mobility, genetic engineering, and biotechnology. But the focus rests on the effects of one technology in particular: “Artificial intelligence is currently sparking a revolution whose significance to our lives cannot be overestimated,” says Gerfried Stocker. “Whether in business and industry, in science, art, or politics, modern applications like machine learning will come into common use everywhere, bringing about fundamental changes.” And so we see it’s high time we came to grips with this new game-changer.

Focus on artificial intelligence
“Nowhere else can you get an overview of AI as comprehensive as ours,” says Gerfried Stocker. The exhibition “Understanding AI” shows how neural networks are structured and lets visitors try their hand at training neural networks themselves at interactive stations. In the new “Machine Learning Studio,” anyone can experiment with the practical applications of AI: building and testing self-driving cars or programming robots for face recognition. The exhibition “Neuro-Bionics” imagines the quantum leap that melding AI with connectome research might bring about. The exhibition “Global Shift” reveals the role that neural networks play in the scientific exploration of our planet and how they help us tackle challenges like climate change.
A visit to the new Ars Electronica Center should convey to everyone a fundamental idea of what AI is and what its practical applications are capable of: “We want to make our visitors AI-savvy,” says Gerfried Stocker about the new center’s mission.

New Ars Electronica Center important to Linz’s image
“Linz is home to voestalpine, not to mention several high-tech companies, the JKU, the University of Art, a lively startup scene, and, of course, Ars Electronica itself. Linz is a city of innovation. New technologies are not only born in Linz, they are also co-developed here, challenged, and put to the test. Because being a city of innovation also means keeping in mind that technology should make a more livable and sustainable future possible. The new Ars Electronica Center embraces this positive yet critical approach; it explores the potential of new technologies, artificial intelligence above all, but it also shows that what we make of these technologies is up to us,” says Mayor Klaus Luger about the reinvention of the Ars Electronica Center.

“Linz is a UNESCO City of Media Arts. Anyone walking through the new Ars Electronica Center can see why. Every tour becomes an inspiring and fascinating excursion into the fields and industries of the future, inspiring us to think further, go further—whetting our appetite for the future. And that’s only made stronger by the art that can be seen everywhere here, placed on an equal footing with science and business. Anyone who wants to know which technologies are driving the biggest changes today, and how they work—anyone who wants to get involved and help shape our future—has got to visit the new Ars Electronica Center,” says Doris Lang-Mayerhofer, Linz City Councilor for Culture and Chairwoman of the Ars Electronica Advisory Board.

A multi-week inaugural program
The new Museum of the Future will open with a program that spans several weeks. The first official opening event will kick off this evening, Monday, May 27, 2019, at 7 p.m. It will be followed by the inaugural presentation of the new “Ars Electronica Labs,” the new “Machine Learning Studio” and the exhibitions “Understanding AI” and “Global Shift.”

On Thursday, May 30, 2019, the new Ars Electronica Center invites you to its Open House. Short presentations, workshops, and lectures will be on offer, and admission is free of charge.
Then come the opening weekend festivities, from Friday, May 31 through Sunday, June 2.

Each weekend will have a different theme, continuing all the way through the weekend of Sunday, July 21, 2019. The Ars Electronica team and numerous experts from the fields of art and science will be taking visitors on special tours of the exhibitions, holding workshops in the new labs and providing insights into current trends in their disciplines:

THU June 6 to SUN June 9:
Artificial intelligence: the revolution behind the hype
THU June 13 to SUN June 16:
Setting out into a new world: the digital geography of the 21st century
THU June 20 to SUN June 23:
Not just for fun: play and research
THU June 27 to SUN June 30:
Improving our bodies: are cyborgs superior?
THU July 4 to SUN July 7:
The human brain: evolution’s perfect achievement?
THU July 11 to SUN July 14:
50 years since the moon landing: reinventing the future
SUN July 21:
The 50th anniversary of the moon landing

The second inaugural event for the new Ars Electronica Center will take place on Monday, June 24, 2019, at 7 p.m. An additional three new exhibitions will be unveiled that evening: “AI x Music,” “Mirages & Miracles,” and the Children’s Research Laboratory.

Compass – Navigating the Future
Human history is inextricably tied to technology. The most recent chapter in our history has been dominated by the digital revolution, which has exponentially accelerated the development of new technologies.
And the result is in every way extraordinary. We have created a world that consists of both real and virtual spaces which seamlessly merge and inevitably overlap. A world in which we are no longer the sole protagonists. Starting with the production lines in our factories, we share more and more of this new world with increasingly intelligent man-made systems.

This trend has recently seen an enormous boost from neural networks, which are triggering a revolution in computer science: software is no longer written entirely by humans but instead derives its behavior from raw data. “The repercussions of this ‘summer of AI’ will be massive,” says Gerfried Stocker. “Now that we’ve taught our machines to see, hear and feel with sensors, we are using artificial neural networks to digitize the processes of thinking and decision-making.”

And as this technology becomes ever more efficient and powerful, the role we humans play in actively shaping our future becomes more important, not less. “No matter how intelligent machines and programs one day become, we will always be the ones who determine whether they are the cause of our problems or part of their solution,” says Gerfried Stocker. To look as far ahead as possible, you need a telescope. But to decide which direction you want to travel, you need a compass.

The Ars Electronica Labs
A laboratory represents the intersection of creativity, technology, society, and science. It is a place where one discovers and shapes the world. And in such a place, cooperation that transcends the boundaries of disciplines and industries is crucial—research thrives on exchange. This is exactly what the Ars Electronica Labs are meant to convey.

At the Bio Lab
Technology is emblematic of our domination of the planet. But for a long time now, the use of technology has meant far more than simply reshaping the face of the earth in accordance with our own desires. Increasingly, we are delving deeper, directly into the creation and formation of life. On the one hand, this opens up new possibilities for treating and preventing diseases. On the other, it raises profound ethical questions: in stem cell research, for example, or in tissue engineering, the artificial production of biological tissue.
In his long-term project “_zusammen_ziehen” (“_moving_together”) for the Bio Lab, Manuel Selg of the Upper Austrian University of Applied Sciences shows how muscle tissue can be artificially grown. Visitors can watch as progenitor muscle cells from mice and rats develop into muscle tissue.

At the new Ars Electronica Center Bio Lab, visitors can isolate their own DNA and cultivate cell cultures. For the first time, visitors can even try out the much-publicized “CRISPR/Cas9” gene scissors and see firsthand how easily and selectively this tool can insert, remove, and switch off individual genes.

At the Material Lab
In this era of climate change, the increasing scarcity of resources, and poor working conditions, sustainable and compassionate methods of production have become an increasingly important topic. Materials research occupies the center of attention. Newly invented materials possess new mechanical, physical and chemical properties, leading to groundbreaking practical applications in the forward-looking fields of energy generation, climate and environmental protection, mobility, health and medicine, safety, and communication. Innovative materials drive the creation of sustainable products; they increase efficacy as well as efficiency, thus boosting competitiveness and ecological sustainability.

Speaking to our tendency to go perhaps too far in modifying our environment and in our creation of new materials, the sculpture series “Modified Paradise” (AnotherFarm) consists of animals sculpted from a special silk that glows under UV light. This silk is produced by caterpillars whose DNA was modified to include genes from bioluminescent jellyfish and coral. The sculptures in “Modified Paradise” resemble cats and chickens, animals we humans have domesticated for thousands of years: livestock and pets we keep, breed, and now genetically modify.

In the Second Body Lab
We humans have always crafted technologies to make ourselves stronger, faster, and healthier and our lives simpler, easier, and more worth living. For a long time, this confined itself to the realm of mere tools; but in the recent past, technology has been increasingly making its way into our bodies. The Second Body Lab at the new Ars Electronica Center demonstrates some examples of these new human-machine interfaces.

“The Alternative Limb Project” (Sophie de Oliveira Barata) treats prostheses as extensions of not only our bodies, but also our personalities. The production of these works of art has combined new technologies with traditional craftsmanship. The Second Body Lab features three leg prostheses of this kind as well as three custom-made artificial fingers.
The “IKO Prosthetic System” (Carlos Arturo Torres Tovar) takes a different approach. This easy-to-use prosthetics system employs classic LEGO bricks as a basis and aims to help physically handicapped children establish trust in their bodies. The non-goal-oriented creative process of building with LEGO engenders an environment conducive to social and emotional skill development and provides insight into creative problem-solving.

The “Unicorn Brain Interface” (g.tec medical engineering) is a portable EEG headset that records activity in the cerebral cortex via eight specially crafted electrodes. This forward-looking neurotechnology lets users write words and sentences on the computer, draw, and control household appliances, prostheses and robots, all using only their thoughts. The “Unicorn Brain Interface” also creates new ways for scientists and artists alike to record and analyze EEG signals and to design their own programs, applications or artistic installations using open programming interfaces (APIs).
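How such an interface digests brain signals can be illustrated with a few lines of code. The sketch below is not the Unicorn’s own software; it uses synthetic numbers in place of real recordings and simply estimates the strength of the alpha rhythm (8–13 Hz) on eight channels with NumPy and SciPy, the kind of feature a thought-controlled application might act on.

```python
# Minimal sketch of one building block of an EEG-based interface:
# estimating alpha-band (8-13 Hz) power on an 8-channel signal.
# The data here is synthetic; a real headset such as the Unicorn
# would supply the samples through its own API instead.
import numpy as np
from scipy.signal import welch

FS = 250          # assumed sampling rate in Hz
N_CHANNELS = 8    # the headset described above uses 8 electrodes

# Fake a 2-second buffer: noise plus a 10 Hz rhythm on channel 0.
t = np.arange(0, 2.0, 1.0 / FS)
eeg = np.random.randn(N_CHANNELS, t.size) * 5.0
eeg[0] += 20.0 * np.sin(2 * np.pi * 10 * t)

def band_power(signal, fs, lo, hi):
    """Average spectral power of `signal` between lo and hi Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

alpha = [band_power(eeg[ch], FS, 8, 13) for ch in range(N_CHANNELS)]
print("alpha power per channel:", np.round(alpha, 2))
print("strongest alpha rhythm on channel", int(np.argmax(alpha)))
```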

In the CitizenLab
Many of the social, ecological, economic, and political upheavals of our modern era can be traced back to rapid technological progress. Every day we read about the manifold challenges we face in our desire to ensure a future worth living in. But what can we do if we don’t happen to be politicians, researchers, or CEOs? The “CitizenLab” at the new Ars Electronica Center shows how each and every one of us can play an active role in shaping our world.

Even Greta Thunberg started small. On August 20, 2018, the first school day after the holidays, she stood up alone before the Swedish Parliament in Stockholm and held up a sign that said “Skolstrejk för klimatet” (“School strike for the climate”). Today, she is one of the most influential teenagers in the world. The 16-year-old activist for climate protection has inspired students around the world with her commitment and has sparked a global movement: “Fridays For Future.”

In the same way, “Bellingcat” stands for responsibility, commitment, and interconnection. This online platform aggregates research by citizen journalists who document criminal activities. In addition to publishing and distributing articles and reports, Bellingcat also offers instructions and guides for all citizen journalists who want to help in gathering facts and evidence that will assist in bringing perpetrators of war crimes and organized crime to justice.

Understanding artificial intelligence
Everyone is talking about artificial intelligence (AI). Whether it’s self-driving cars, the Facebook news feed, virtual assistants (like Apple’s Siri, Microsoft Cortana, Amazon Alexa, and Google Assistant), the latest iPhone’s FaceID, Google Translate, or even medical diagnoses—the practical application of AI underlies them all. In its exhibition “Understanding Artificial Intelligence,” the new Ars Electronica Center describes how these systems work.

As soon as they enter, visitors are recorded and photographed by an oversized “eye.” A “generative adversarial network”—GAN for short—uses these images to create portraits that can be seen on screens a few meters away. These are “fake faces,” artificial faces created from thousands of different images, that look completely real.
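The trick behind these portraits can be sketched in a few lines. The toy example below is not the installation’s network; it trains a miniature GAN on a one-dimensional Gaussian instead of face images, but the adversarial back-and-forth between generator and discriminator is the same idea (PyTorch, invented numbers throughout).

```python
# Toy sketch of the adversarial idea behind "fake faces": a generator
# learns to produce samples that a discriminator can no longer tell
# apart from real data. Real installations train on face images; this
# illustration uses a simple 1-D Gaussian so it runs in seconds.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n):
    # "Real" data: samples from a Gaussian with mean 4, std 1.5
    return torch.randn(n, 1) * 1.5 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss = nn.BCELoss()

for step in range(3000):
    # 1) Train the discriminator to label real samples 1 and fakes 0.
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()
    d_loss = loss(D(real), torch.ones(64, 1)) + loss(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to make the discriminator say "real".
    fake = G(torch.randn(64, 8))
    g_loss = loss(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

samples = G(torch.randn(1000, 8))
print(f"generated mean {samples.mean().item():.2f}, std {samples.std().item():.2f} (target: 4.0, 1.5)")
```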

It’s these “deep fakes” that keep AI in the (negative) headlines. Just a few days ago, researchers successfully created deep-fake videos using only still images. Albert Einstein, Marilyn Monroe, Salvador Dalí, even the Mona Lisa—the new Ars Electronica Center shows them all alive and kicking in these perfectly faked videos.

Another application of AI is similarly polarizing: while some see facial recognition as a welcome tool for greater security, others see it as the instrument of a total surveillance state.
How do these systems manage to recognize faces, let alone create them?

Just like our brain, artificial neural networks must first perceive what is going on around them. The new Ars Electronica Center shows how perception and the brain work together in humans—and how we can reproduce this in machines through the use of sensors. Data is collected, transmitted, and then evaluated. For artificial neural networks, this means searching datasets for patterns that help them conclude which of several possible actions will lead to success.

In order to make this understandable in a playful way, the Ars Electronica Futurelab has created interactive stations where visitors can observe neural networks learning and “thinking”:
Should a mouse be afraid of an elephant? Afraid of a trout? Of a cat? By answering these questions, visitors help train an artificial neural network. Each of their answers provides the system with one more clue as to which combinations of weight, size, speed, claws, and teeth ought to make a mouse turn and run for its life.
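Conceptually, each visitor answer is one small training step. The sketch below is a loose illustration of that loop, not the Futurelab’s code: a hand-rolled logistic model over five invented animal features is nudged a little with every answer until “cat” reliably means “run.”

```python
# Sketch of what happens at the training station: every visitor answer
# ("yes, the mouse should flee" / "no, it shouldn't") nudges the weights
# of a tiny model that maps animal features to a flee/stay decision.
# Features and example answers below are invented for illustration.
import numpy as np

weights = np.zeros(5)   # weight, size, speed, claws, teeth
bias = 0.0
LR = 0.5

def predict(features):
    """Probability that the mouse should run away."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

def learn_from_answer(features, should_flee):
    """One visitor answer = one gradient step for the model."""
    global weights, bias
    error = predict(features) - float(should_flee)
    weights -= LR * error * features
    bias -= LR * error

# Hypothetical animals: [weight, size, speed, claws, teeth], scaled 0..1
elephant = np.array([1.0, 1.0, 0.3, 0.2, 0.4])
trout    = np.array([0.1, 0.2, 0.4, 0.0, 0.2])
cat      = np.array([0.2, 0.3, 0.8, 0.9, 0.8])

for _ in range(200):                       # many visitors answering
    learn_from_answer(elephant, False)     # big but harmless to a mouse
    learn_from_answer(trout, False)
    learn_from_answer(cat, True)           # claws + teeth + fast = run!

for name, animal in [("elephant", elephant), ("trout", trout), ("cat", cat)]:
    print(f"{name}: flee probability {predict(animal):.2f}")
```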

A wide-ranging installation by the Ars Electronica Futurelab shows how a trained “convolutional neural network” (CNN) works and what exactly it sees. The experiment’s setup is simple: A camera photographs a given object, and its image is sent to the CNN. Eleven large screens display the visual input as it is processed. Layer by layer, the network learns to recognize certain patterns in the image. The first layers capture simple lines, colors and curves; subsequent layers focus on increasingly complex patterns. The more abstract the original image seems to us, the clearer it becomes to the CNN.
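For readers who want to reproduce the layer-by-layer view at home, the sketch below shows one common way to capture intermediate feature maps with forward hooks in PyTorch. It uses an untrained ResNet-18 and a random image so it runs offline; the installation’s own network and imagery will differ.

```python
# Sketch of how one can peek at what a convolutional network "sees":
# forward hooks capture the intermediate feature maps, layer by layer,
# much like the eleven screens in the installation.
import torch
from torchvision.models import resnet18   # torchvision >= 0.13

model = resnet18(weights=None).eval()      # random weights, runs offline
activations = {}

def grab(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Watch the four main convolutional stages of the network.
for name in ["layer1", "layer2", "layer3", "layer4"]:
    getattr(model, name).register_forward_hook(grab(name))

image = torch.rand(1, 3, 224, 224)         # stand-in for the camera picture
with torch.no_grad():
    model(image)

for name, fmap in activations.items():
    print(f"{name}: {fmap.shape[1]} feature maps of size {tuple(fmap.shape[2:])}")
# Early stages keep high resolution with few channels (edges, colours);
# later stages trade resolution for many, more abstract channels.
```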
The development of artificial neural networks like this was inspired by the human brain. Scientists have been studying the brain for centuries, developing ever more sophisticated techniques. Using animations and informational text and graphics, the new Ars Electronica Center surveys all that we have so far discovered about the brain, its structure, and its workings. And with the “Cardiff Brain Scan” (Cardiff University), Ars Electronica shows us the impressive images we can now create from research data using methods such as “cinematic rendering.”

Neuro-Bionics
Even if artificial neural networks do not work like a human brain does, many examples of them (like “machine learning,” which is currently enjoying some popularity) borrow to some degree from human physiology. A comparatively new approach in AI research takes this one step further: biological nervous systems are digitally simulated, then transferred to robots.

With “Open Worm” (OpenWorm Foundation), the new Ars Electronica Center offers an example of this fusion of neuroscience and AI. At the heart of this project is Caenorhabditis elegans, a millimeter-long roundworm whose connectome consists of 302 neurons which form approximately 5000 synapses. This simple network of nerve cells was digitized and fed into the microcontroller of a robot worm. The result: Without any additional programming, this robot can navigate its environment and search for virtual food.
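The principle, wiring instead of programming, can be illustrated on a much smaller scale. The sketch below invents a six-neuron “connectome” (nothing like the real 302-neuron one) in which two food sensors feed interneurons that drive two wheel motors, so the simulated robot turns toward the stronger smell without any steering code.

```python
# Sketch of the principle behind the robot worm: behaviour comes from a
# wiring diagram, not from task-specific code. The weights below are
# invented for illustration only.
import numpy as np

# Connection weights: rows = from-neuron, columns = to-neuron.
# Order: [sensor_L, sensor_R, inter_1, inter_2, motor_L, motor_R]
W = np.array([
    [0, 0, 1.0, 0.0, 0.0, 0.0],   # left sensor excites interneuron 1
    [0, 0, 0.0, 1.0, 0.0, 0.0],   # right sensor excites interneuron 2
    [0, 0, 0.0, 0.0, 0.0, 1.2],   # inter 1 drives the RIGHT motor ...
    [0, 0, 0.0, 0.0, 1.2, 0.0],   # ... so the robot turns toward food
    [0, 0, 0.0, 0.0, 0.0, 0.0],
    [0, 0, 0.0, 0.0, 0.0, 0.0],
])

def step(activity, food_left, food_right, leak=0.5):
    """One update: leaky integration of sensor input and synaptic input."""
    external = np.array([food_left, food_right, 0, 0, 0, 0], dtype=float)
    return np.tanh(leak * activity + W.T @ activity + external)

activity = np.zeros(6)
for _ in range(20):
    # Virtual food smells stronger on the left side of the robot.
    activity = step(activity, food_left=1.0, food_right=0.1)

left_wheel, right_wheel = activity[4], activity[5]
print(f"wheel speeds: left={left_wheel:.2f}, right={right_wheel:.2f}")
if right_wheel > left_wheel:
    print("right wheel spins faster, so the robot turns toward the food on its left")
```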

Although this amazing experiment could be the next step towards faster-learning and more intelligent AI systems, it does not necessarily imply the impending arrival of a “strong AI.” The capacity and versatility of the human brain remain unmatched. It took 650 million years of evolution to create its unique complexity. A complexity that is fed by roughly 86 billion neurons forming about 100 trillion synapses—which we still mostly do not understand.

At the Machine Learning Studio
“Machine learning” is currently the most popular practical application of AI. This term refers to neural networks that independently acquire knowledge and can generalize that knowledge. They do not simply memorize information; instead, they recognize patterns and principles in datasets and apply these findings to further data. The range of possible applications for machine learning is very broad. Such applications include spam filters, speech and text recognition, search functions, automatic recommendation services, and image and face recognition. In the new Ars Electronica Center’s “Machine Learning Studio,” visitors can try out some of these applications for themselves and learn how they work.
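One of the applications listed above is enough to show the pattern-then-generalize idea in practice. The sketch below builds a toy spam filter with scikit-learn on six invented messages; the trained model then labels sentences it has never seen.

```python
# Minimal sketch of "recognize patterns, then generalize": a toy spam
# filter. The handful of training messages is invented; real systems
# learn from millions of examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now",              # spam
    "limited offer, click here",         # spam
    "free money, act now",               # spam
    "meeting moved to 3 pm",             # not spam
    "lunch tomorrow?",                   # not spam
    "slides for the workshop attached",  # not spam
]
labels = ["spam", "spam", "spam", "ham", "ham", "ham"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)   # word counts as features
model = MultinomialNB().fit(X, labels)

# The model has never seen these sentences, but the learned word
# patterns let it generalize anyway.
new = ["click now to win money", "workshop slides for tomorrow"]
print(dict(zip(new, model.predict(vectorizer.transform(new)))))
```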

Supported by our “tech trainers,” visitors can tinker with self-driving cars and try out the test track. “Donkey Cars” provide the wheels, each equipped with a camera and a Raspberry Pi. With the help of machine learning and the right training, a car like this soon becomes a self-driving car on its way to mastering the test track.
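The learning recipe behind such a car, often called behavioral cloning, can be sketched briefly. The code below is not the Donkey Car project’s own software: random tensors stand in for recorded camera frames, and a small PyTorch network learns to imitate the steering a human driver used.

```python
# Sketch of the learning idea behind a Donkey Car style vehicle (not the
# actual Donkey Car software): record camera frames plus the steering a
# human used, then train a small network to imitate that steering.
import torch
import torch.nn as nn

model = nn.Sequential(                       # camera frame -> steering angle
    nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1), nn.Tanh(),             # angle in [-1, 1]
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-ins for a recorded training lap: 64 frames and the steering used.
frames = torch.rand(64, 3, 120, 160)
human_steering = torch.rand(64, 1) * 2 - 1

for epoch in range(20):                      # imitate the human driver
    predicted = model(frames)
    loss = loss_fn(predicted, human_steering)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

# On the track, each new camera frame is turned into a steering command.
new_frame = torch.rand(1, 3, 120, 160)
print(f"steering command: {model(new_frame).item():+.2f}")
```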

Right next door is the “Dobot Assembly Line,” which is all about automation and robotics. Here, visitors can program and control a DOBOT Magician. This versatile educational robot arm can pick up and set down objects of all kinds, draw with a pen, 3D-print, or do laser engraving.

In the “Machine Learning Studio,” tech trainers also repair and maintain the prototypes and installations from all the new Ars Electronica Center’s exhibition areas—not in workshops normally accessible only to employees, but in front of and with the help of visitors.

AI and creativity
Deep sadness and heady jubilation, grace and awkwardness—we humans can bestow all of these expressive, magical qualities and many more upon a simple wooden doll. But can machines do that? What if this time it’s modern industrial robots that pull the strings and breathe life into a marionette? And to take it one step further: what if the choreography itself is developed by an intelligent system? Who is the artist in that case? The machine or the programmers? And who holds the copyrights to their works?
With "Pinocchio," the Ars Electronica Futurelab shows that today, AI applications are good for more than simply optimizing production facilities and driving cars. More and more they are encroaching on a realm previously reserved for us humans alone: art.

Based on our experiences, our brains construct a very specific picture of reality. We see the world not as it is, but as we want it to be. Or as we must see it. So does the "ShadowGAN" from the Ars Electronica Futurelab. This "conditional generative adversarial network"—cGAN for short—has been trained to discern mountain landscapes in everything and everyone. No matter what input the neural network receives, the output always looks like forests, rocks, and snow-capped peaks. But is this an act of creativity or just an unavoidable interpretation?

AI and prejudice
Under the heading “interpretation”: Can AI systems be objective as well? Can they operate free of prejudice and make fair decisions?
These days, machine learning can perform impressive tasks. But it is not infallible. The error rate of these systems depends largely on the quantity and quality of the data on which they are trained. And we humans are the ones who compile these datasets. If we make errors in doing so, the neural network in question will also fail to work correctly. In "imageNet fails," we see that such errors sometimes merely reveal features in the data that we humans do not even notice, but that an AI system absolutely does. Doubtless even worse is the case where a dataset reflects the prejudices of those who compiled it—in such a case, an artificial neural network will in turn produce discriminatory results.
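The effect of a lopsided dataset is easy to reproduce. In the sketch below, entirely synthetic data, two groups follow different patterns, but only a handful of examples from the second group make it into the training set; the resulting model works well for the majority group and barely better than chance for the minority.

```python
# Sketch of the point about data quality: a model trained on a lopsided
# dataset works well for the overrepresented group and poorly for the
# rest. Both "groups" here are synthetic, with deliberately different
# patterns, so the imbalance is the only thing being demonstrated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, rule):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(int) if rule == "A" else (X[:, 1] > 0).astype(int)
    return X, y

# Training data: 950 examples follow pattern A, only 50 follow pattern B.
XA, yA = make_group(950, "A")
XB, yB = make_group(50, "B")
model = LogisticRegression().fit(np.vstack([XA, XB]), np.concatenate([yA, yB]))

# Evaluate on fresh, equally sized samples from both groups.
for rule in ["A", "B"]:
    X_test, y_test = make_group(1000, rule)
    print(f"accuracy on group {rule}: {model.score(X_test, y_test):.2f}")
# Typical output: well above 0.9 for group A, close to chance for group B.
```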

However, if AI systems can "adopt" our prejudices, then they can also be trained in our chosen values. Countless online forums today are forced to deal with abusive users, or trolls. Finding and deleting their discriminatory posts requires around-the-clock diligence and a lot of effort. In the future, neural networks might perform this work. In collaboration with Austrian media outlets, the Ars Electronica Futurelab has developed an interactive station where visitors can moderate anonymized comments from online forums. In the process, the "troll detection AI" learns what content violates our laws and ethics and what does not. After adequate training, Austrian online media can put this system to use.
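The underlying mechanics can be sketched independently of the Futurelab’s actual system: a classifier that is updated one labeled comment at a time, exactly the rhythm of visitors moderating posts at the station. The comments below are invented placeholders.

```python
# Sketch of the moderation idea: every comment a visitor labels is one
# more training example for a classifier that flags abusive posts. The
# comments are invented placeholders, not real forum data, and this is
# not the Futurelab's actual system.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer()
classifier = SGDClassifier()
CLASSES = ["ok", "abusive"]

# A stream of (comment, visitor label) pairs, as at the interactive station.
moderated_stream = [
    ("great article, thanks for the background", "ok"),
    ("you people are idiots and should shut up", "abusive"),
    ("interesting point about the new exhibition", "ok"),
    ("go back to where you came from", "abusive"),
    ("I disagree, but the data seems solid", "ok"),
    ("nobody wants trash like you in this forum", "abusive"),
] * 10   # repeat so the tiny toy model sees enough examples

for comment, label in moderated_stream:
    features = vectorizer.transform([comment])
    classifier.partial_fit(features, [label], classes=CLASSES)  # one step per comment

new_comments = ["thanks, learned a lot today", "shut up, you idiots"]
print(dict(zip(new_comments, classifier.predict(vectorizer.transform(new_comments)))))
```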

The project "Anatomy of an AI" (Vladan Joler [RS], Kate Crawford [AU]) uses the Amazon Echo Dot to demonstrate the complexity and depth of the ethical issues surrounding AI systems. An enormous diagram details the complex process of developing, manufacturing, selling, buying, using, and disposing of this digital assistant, a process that circles the world and reaches deep into the realms of labor, capital, politics, and nature.

Global Shift – Life in the Anthropocene
If Earth had existed for just 24 hours, Homo sapiens would have appeared only 3.6 seconds ago. We would have been farming and raising cattle for 0.2 seconds, and the industrial age would have started 0.002 seconds ago. Two thousandths of a second were enough for us to become the decisive factor in the biological, geological and atmospheric changes taking place on this planet. We therefore refer to this most recent chapter in the Earth’s history as the human age, the “Anthropocene.” Under the heading “Global Shift,” the new Ars Electronica Center investigates the reality of our modern lives.
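The compressed timescale follows from a simple proportion, and readers can recompute it themselves. The figures below assume an Earth age of about 4.6 billion years, roughly 200,000 years of Homo sapiens, 10,000 years of farming, and 250 years of industrialization, which lands in the same order of magnitude as the numbers quoted above.

```python
# The 24-hour analogy is simple proportional arithmetic. The exact
# results depend on which dates you assume; the values below are
# rounded, commonly cited figures.
EARTH_AGE_YEARS = 4.6e9
DAY_SECONDS = 24 * 60 * 60

def on_24h_clock(years_ago):
    """How many seconds before 'midnight' an event falls on a 24-hour Earth."""
    return years_ago / EARTH_AGE_YEARS * DAY_SECONDS

for label, years in [("Homo sapiens appears", 2.0e5),
                     ("farming and cattle raising", 1.0e4),
                     ("industrial age", 250)]:
    print(f"{label}: {on_24h_clock(years):.4f} seconds ago")
# Roughly 3.8 s, 0.19 s and 0.005 s -- the same order of magnitude as
# the figures quoted in the text.
```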

Almost exactly 50 years after researchers in the USA created the very first computer network—the “Advanced Research Projects Agency Network”—we all find ourselves to be digital citizens. Each and every one of us spends a large amount of time in digital spaces, where we work, shop, educate ourselves about the world, and exchange ideas. Because we have left the conception of these digital spaces and infrastructures almost exclusively in the hands of technology companies, when it comes to rules and responsibilities, we’ve fallen far behind. One consequence of this is that we are only allowed to use many internet services if we first agree to release all our data.

One example: cloud services. Microsoft, Google, and Amazon are among the largest providers; their servers host many thousands of websites and apps, which query our data and store it in the cloud. The Wi-Fi network at the new Ars Electronica Center blocks all Amazon Web Services (AWS) servers. As visitors try—in vain—to use apps on their smartphones, it becomes clear how many of them are hosted on AWS and how dominant a handful of technology giants are in this day and age.

The truism “Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret” also happens to be the name given to the results of a project conducted by the New York Times. Using a middle-school teacher as an example, the Times showed how easy it is to correlate data with specific individuals despite its having been anonymized. The data needed to make this happen was purchased completely legally from a data broker.

But even outside the purely digital realm, technology has a decisive influence on our lives. Technology is the main reason we live better, healthier, and longer than any generation before us; and it helps us better understand ourselves, our planet—and beyond.

Our most recently established permanently occupied outpost is currently orbiting the Earth at an altitude of about 400 kilometers, our robots are rolling around on Mars, and our probes are pushing out into interstellar space. But it is also the case that over 6000 tons of scrap and garbage are circling our planet. And they are causing more and more problems. Almost half of this scrap is in low orbit, which is also the altitude where our satellites fly. “Orbits” (Quadrature), on display at the new Ars Electronica Center, is an artistic project that transforms the trajectories of 17,000 spaceborne objects into pleasing patterns.

Under the heading “Earth observation”: Using satellites, we can create a round-the-clock picture of what is happening on the Earth’s surface, in the oceans, and in the atmosphere. AI systems are used to evaluate the data obtained. They search through huge amounts of data for patterns that would otherwise remain hidden from us humans. The insights gained from this data demonstrate ever more clearly that technological progress and the unending growth of our economy have their price.

One example: the transportation of goods across the globe. Ships, mankind’s largest mobile structures, handle more than 90 percent of worldwide goods transport. One in every three of these voyages has a port in the EU as its point of departure or destination. The new Ars Electronica Center displays the scope of global maritime trade on an interactive map. And the trend is upward: climate change, exacerbated in no small part by the seafaring giants that burn crude oil for fuel, is melting the polar caps and will soon open a route through the Arctic that is navigable year-round. As a result, many journeys will get shorter, products cheaper, and the profits of shipping companies and enterprises even higher.

But it is not only the supposedly perpetual ice of the polar caps that is melting; the glaciers are also disappearing. This is on display in the High Tauern mountain range, home to the largest and oldest national park in Austria, a country known for its winter sports. In 1856, the Pasterze Glacier at the foot of the Grossglockner covered an area of more than 30 km²; by 2050, it will have almost completely disappeared. In “Global Retreat,” the new Ars Electronica Center showcases an interactive projection on an elevation model of the High Tauern that demonstrates the speed and finality of glacier melt.
The steadily rising global average temperatures are caused by the ever-increasing concentration of CO2 in our atmosphere, which exceeds all measurements from the past 800,000 years. With their “climate spirals,” researchers at the University of Melbourne illustrate how rapidly this interaction is progressing.
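How such a spiral is constructed is straightforward to sketch: months run around a circle, the anomaly sets the radius, and each year adds another ring, so warming appears as an outward drift. The plot below uses invented anomalies with matplotlib, not the Melbourne researchers’ data.

```python
# Sketch of how a "climate spiral" is drawn: months run around the
# circle, the temperature anomaly sets the radius, and each year adds
# one ring, so warming shows up as an outward drift. The anomalies here
# are synthetic; the real spirals use observational records.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1950, 2020)
months = np.arange(12)
theta = 2 * np.pi * months / 12

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for i, year in enumerate(years):
    trend = 0.012 * i                         # slow warming trend
    wobble = 0.1 * np.sin(theta + i)          # month-to-month variation
    anomaly = 0.2 + trend + wobble            # offset keeps radii positive
    color = plt.cm.plasma(i / len(years))
    ax.plot(np.append(theta, theta[0]), np.append(anomaly, anomaly[0]),
            color=color, linewidth=0.8)

ax.set_xticks(theta)
ax.set_xticklabels(["J", "F", "M", "A", "M", "J", "J", "A", "S", "O", "N", "D"])
ax.set_yticklabels([])
ax.set_title("Synthetic climate spiral: later years drift outward")
plt.show()
```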

The new Ars Electronica Center’s “Global Shift” exhibition shows how powerfully we transform our world in accordance with our wishes, and the problems that arise as a result. It is up to us to determine which direction we will go, how much value we place on economic growth, the importance we give to terrestrial biodiversity, and the role we assign our technology.

http://www.flickr.com/photos/arselectronica/47949285867/
Compass – Navigating the Future / Photo credit: vog.photo / Print version

http://www.flickr.com/photos/arselectronica/47957268871/
Pinocchio / Photo credit: vog.photo / Print version

http://www.flickr.com/photos/arselectronica/47942936206/
Ars Electronica Labs / Photo credit: vog.photo / Print version

http://www.flickr.com/photos/arselectronica/47957240838/
ORBIT – A Journey Around Earth in Real Time / Seán Doran (UK) / Photo credit: vog.photo / Print version

http://www.flickr.com/photos/arselectronica/47957231617/
Open Worm / OpenWorm Foundation (INT) / Photo credit: vog.photo / Print version

http://www.flickr.com/photos/arselectronica/47949662298/
SEER: Simulative Emotional Expression Robot / Takayuki Todo (JP) / Photo credit: vog.photo / Print version

http://www.flickr.com/photos/arselectronica/47936983177/
Machine Learning Studio / Photo credit: vog.photo / Print version

http://www.flickr.com/photos/arselectronica/47937003076/
Remains / Quayola / Photo credit: vog.photo / Print version