neural network – Artificial Intelligence | Ars Electronica Festival 2017
https://ars.electronica.art/ai/en

Closed Loop
https://ars.electronica.art/ai/en/closed-loop/

Jake Elwes (UK)

Artificial intelligence and machine learning are fast becoming part of everyday life. Based on AI models currently used in, among other things, content moderation and surveillance, the artworks explore the “latent space” of the AI as it processes and imagines the world for itself, dreaming in the areas between and beyond what it has learnt from us.

Collaborative project with Roland Arnoldt

In Closed Loop two artificial intelligence models converse with each other—one with words the other with images—in a never-ending feedback loop. The words of one describe the images of the other, which then seeks to describe the words with a fresh image. The neural networks become lost in their own nuances, sparking and branching off each other as they converse.
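The structure of such a loop can be sketched in a few lines of Python. This is a minimal illustration only; caption_model and image_model are hypothetical stand-ins for whatever image-captioning and text-to-image networks the artwork actually pairs.

# Minimal sketch of the Closed Loop feedback structure (illustrative only).
# caption_model and image_model are hypothetical stand-ins for an
# image-captioning network and a text-to-image generator.
def closed_loop(seed_image, caption_model, image_model, steps=1000):
    image = seed_image
    for _ in range(steps):
        caption = caption_model.describe(image)   # one model answers in words
        image = image_model.generate(caption)     # the other answers in images
        yield caption, image

Each pass feeds one model’s output straight into the other’s input, so small idiosyncrasies compound over successive iterations, which is where the nuances the networks become lost in come from.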

Credits

www.jakeelwes.com
Special thanks to Anh Nguyen et al. at Evolving-AI for their work on GANs

Blade Runner—Autoencoded
https://ars.electronica.art/ai/en/bladerunner-autoencoded/

Terence Broad (UK)

Blade Runner—Autoencoded is a film made by training an autoencoder—a type of generative neural network—to recreate frames from the 1982 film Blade Runner. The autoencoder learns to model all frames by trying to copy them through a very narrow information bottleneck, optimized to produce reconstructions as similar as possible to the original images.
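As a rough illustration of the technique (not Broad’s actual model), a convolutional autoencoder with a narrow bottleneck might look like the PyTorch sketch below; the layer sizes and the pixel-wise MSE loss are assumptions made for the example.

# Illustrative convolutional autoencoder (assumed architecture, not
# Broad's actual model): 64x64 RGB frames are squeezed through a
# narrow latent bottleneck and reconstructed.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    def __init__(self, latent_dim=200):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # -> 64x16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),                   # the bottleneck
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training minimizes the difference between each frame and its copy.
model = FrameAutoencoder()
frames = torch.rand(8, 3, 64, 64)           # a stand-in batch of frames
loss = nn.MSELoss()(model(frames), frames)  # reconstruction error

Frames the model has seen often reconstruct sharply; rare, fast-moving frames come back blurred, which produces exactly the dreamlike quality described next.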

The resulting sequence is very dreamlike, drifting in and out of recognition, from static scenes that the model remembers well to fleeting sequences (usually with a lot of movement) that the model barely comprehends.

The film Blade Runner is adapted from Philip K. Dick’s novel Do Androids Dream of Electric Sheep?. Set in a post-apocalyptic, dystopian future, it follows Rick Deckard, a bounty hunter who makes a living hunting down and killing replicants: artificial humans so well engineered that they are physically indistinguishable from human beings.

By reinterpreting Blade Runner through the autoencoder’s memory of the film, Blade Runner—Autoencoded seeks to emphasize the ambiguous boundary in the film between replicant and human, or, in the case of the reconstructed film, between our memory of the film and the neural network’s. By examining this imperfect reconstruction, the gaze of a disembodied machine, it becomes easier to acknowledge the flaws in our own internal representation of the world, and easier to imagine the potential of other, substantially different systems with internal representations of their own.

Credits

Carried out on the MSci Creative Computing course at the Department of Computing, Goldsmiths, University of London, under the supervision of Mick Grierson.

Hades
https://ars.electronica.art/ai/en/hades/

Markus Decker (AT), Pamela Neuwirth (AT)

Rigor and experience, says science, and triumphs. Today we write MATERIAL and ENERGY in capital letters; EVOLUTION has also long since suspended fate. Hades brings the light of the souls out of the underworld and transposes their radiance into chemical luminescence:

Light, as a reference to soul and consciousness, glows in a gelatin cube, at the same time serving as a source of information. While the light glows, people’s assumptions about the world are synthesized in an artificial neural network (ANN) and modified into a machine discourse. Mold (life) slowly grows over the fluorescent gelatin until the light is extinguished and the metaphysical discussion ends.

Credits

Supported and produced by Us(c)hi Reiter, servus.at
Translation: Aileen Derieg
FIFO programming: Oliver Frommel
Supported by Kunstuniversität Linz

Thanks to Free/Libre Open Source Software, http://fsfe.org/

Partly funded by the Bundeskanzleramt Kunst & Kultur as part of the servus.at annual program 2017 and by Linz Kultur

[{Ghost}]
https://ars.electronica.art/ai/en/ghost/

Kunsthaus Graz (AT), Tristan Schulze (DE)

Do you have any idea what an artificial intelligence might be thinking right now? What responsibilities should machines take on? Imagine a world full of intelligent machines. What role would mankind play in this possible future world? The project [{Ghost}] invites us to explore this exciting yet disturbing question.

[{Ghost}] is an artificial neural network, an artificial intelligence that inhabits two different art institutions. It is shaped by online text information derived from both the Kunsthaus Graz and the Ars Electronica Center, but mainly by the participation of their human audiences.

A web app allows dialogue with the public, while [{Ghost}] is also connected to the media façades of the Kunsthaus Graz and the Ars Electronica Center. These façades project the AI’s current status and development into public space in the form of visual patterns and brief info texts.

A collaboration between Ars Electronica Center and Kunsthaus Graz

Project team: Tristan Schulze (artist and designer), Elisabeth Schlögl (assistant curator, Kunsthaus Graz), Barbara Steiner (director, Kunsthaus Graz)

Hybrid Art – cellF
https://ars.electronica.art/ai/en/hybrid-art-cellf/

Guy Ben-Ary (AU), Douglas Bakkum (US), Mike Edel (AU), Andrew Fitch (AU), Stuart Hodgetts (AU), Darren Moore (AU), Nathan Thompson (AU)

There is a surprising similarity in the way neural networks and analogue synthesizers work: both receive signals and process them through components to generate data or sound.

cellF combines these two systems. The “brain” of this new creation consists of a biological neural network grown in a petri dish, which controls analogue modular synthesizers in real time. The living part of this completely autonomous, analogue instrument is composed of nerve cells. These were derived from Guy Ben-Ary’s fibroblasts (connective-tissue cells), which were reprogrammed into stem cells. Ben-Ary then developed these stem cells further into neural stem cells, which under certain laboratory conditions can differentiate into nerve cells and form a neural network – Ben-Ary’s “external brain.”

The activity of this brain can be influenced by input from other (human) musicians and is made audible through the analogue synthesizer. Human and instrument become a unit – a “cybernetic rock star” from the petri dish.
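As a purely illustrative analogy in Python (cellF’s real interface is analogue hardware, and every name, threshold, and range here is hypothetical), the signal flow from dish to synthesizer might be sketched as follows.

# Purely illustrative sketch of the dish-to-synth signal flow.
# cellF's actual interface is analogue hardware; the threshold, electrode
# count, and CV range below are hypothetical.
import numpy as np

def spikes_to_control(voltages, threshold=-50e-6, cv_range=(0.0, 5.0)):
    """Map multi-electrode recordings to synth control voltages."""
    # Count samples below threshold per electrode as a crude spike proxy.
    spike_counts = (voltages < threshold).sum(axis=1)
    normalized = spike_counts / max(spike_counts.max(), 1)   # scale to 0..1
    lo, hi = cv_range
    return lo + normalized * (hi - lo)                       # one CV per electrode

# e.g. 60 electrodes, 1000 samples of microvolt-scale recordings
cv_out = spikes_to_control(np.random.randn(60, 1000) * 20e-6)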

The project will be presented during the Ars Electronica Festival at POSTCITY.

Machine Learning Porn
https://ars.electronica.art/ai/en/machine-learning-porn/

Jake Elwes (UK)

Artificial intelligence and machine learning are fast becoming part of everyday life. Based on AI models currently used in, among other things, content moderation and surveillance, the artworks explore the “latent space” of the AI as it processes and imagines the world for itself, dreaming in the areas between and beyond what it has learnt from us.

In Machine Learning Porn a neural network has been trained using an explicit-content model built for finding pornography in search engines. The network is then reverse engineered to generate new “pornography” from scratch: an AI daydreaming of sex.
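One common way to “reverse engineer” a classifier in this spirit is activation maximization: gradient ascent on the input pixels until the network’s score for a chosen class is as high as possible. The PyTorch sketch below shows the core idea with a hypothetical classifier; the artwork’s actual pipeline is more sophisticated.

# Illustrative activation maximization (not the artwork's actual pipeline):
# optimize input pixels so a trained classifier's target-class score rises.
# classifier is a hypothetical torch.nn.Module returning (1, num_classes) logits.
import torch

def dream_image(classifier, target_class, steps=200, lr=0.05):
    image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
    optimizer = torch.optim.Adam([image], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        score = classifier(image)[0, target_class]
        (-score).backward()          # gradient ascent on the class score
        optimizer.step()
        with torch.no_grad():
            image.clamp_(0, 1)       # keep pixels in a valid range
    return image.detach()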

Credits

www.jakeelwes.com
Special thanks to Gabriel Goh for inspiration.

Learning to See: Hello, World!
https://ars.electronica.art/ai/en/learning-to-see/

Memo Akten (TR/UK)

A deep neural network opening its eyes for the first time, and trying to understand what it sees.

Originally inspired by the neural networks of our own brains, deep-learning artificial-intelligence algorithms have been around for decades, but they have recently seen a huge rise in popularity. This is often attributed to recent increases in computing power and the availability of extensive training data. However, progress is undeniably fueled by multi-billion-dollar investments from the purveyors of mass surveillance: Internet companies whose business models rely on targeted, psychographic advertising, and government organizations waging their War on Terror. Their aim is to automate the understanding of big data: text, images, and sounds. But what does it mean to “understand”? What does it mean to “learn” or to “see”?

Learning to See is an ongoing series of works that use state-of-the-art machine-learning algorithms as a means of reflecting on ourselves and how we make sense of the world. The picture we see in our conscious minds is not a direct representation of the outside world, or of what our senses deliver, but a representation of a simulated world, reconstructed according to our expectations and prior beliefs. The work is part of a broader line of inquiry about self-affirming cognitive biases, our inability to see the world from others’ points of view, and the resulting social polarization.

Experts Tour: The Neural Aesthetic
https://ars.electronica.art/ai/en/expertstour-neuralaesthetic/

Gene Kogan will introduce the field of machine learning and its existing and speculative implications for new media and art in general. He will discuss applications of neural networks and associated algorithms to the production of images, sounds, and texts, showing examples of contemporary works that use these capabilities. Gene Kogan will also present two of his own works at the intersection of machine learning and generative art.

SAT Sept. 9, 2017, 3:00 PM-4:30 PM

Info

Meeting Point: WE GUIDE YOU Meeting Point, POSTCITY
Duration: 90 minutes
Language: English
Price: € 16 / € 12 reduced
