Blade Runner—Autoencoded https://ars.electronica.art/ai/en/bladerunner-autoencoded/

Terence Broad (UK)

Blade Runner—Autoencoded is a film made by training an autoencoder, a type of generative neural network, to recreate frames from the 1982 film Blade Runner. The autoencoder learns to model every frame by trying to copy it through a very narrow information bottleneck, and is optimized to produce images that are as similar as possible to the originals.
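
As an illustration of the general technique, the sketch below trains a small convolutional autoencoder to reconstruct image frames (PyTorch). It is not the model used for the film: the layer sizes, the 64×64 frame resolution, the 200-dimensional bottleneck and the plain mean-squared-error loss are all assumptions made for this example.

```python
# Minimal convolutional autoencoder sketch (PyTorch). Illustrative only;
# architecture, resolution and loss are assumptions, not the artist's model.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    def __init__(self, bottleneck_dim=200):
        super().__init__()
        # Encoder: compress a 3x64x64 frame into a narrow latent vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, bottleneck_dim),    # the information bottleneck
        )
        # Decoder: expand the latent vector back into a full frame.
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 64 * 16 * 16),
            nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = FrameAutoencoder()
frames = torch.rand(8, 3, 64, 64)                       # stand-in batch of film frames
loss = nn.functional.mse_loss(model(frames), frames)    # reconstruction objective
loss.backward()
```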

The resulting sequence is very dreamlike, drifting in and out of recognition: from static scenes that the model remembers well to fleeting, movement-heavy sequences that the model barely comprehends.

The film Blade Runner is adapted from Philip K. Dick's novel Do Androids Dream of Electric Sheep? Set in a post-apocalyptic dystopian future, it follows Rick Deckard, a bounty hunter who makes a living hunting down and killing replicants: artificial humans so well engineered that they are physically indistinguishable from human beings.

By reinterpreting Blade Runner through the autoencoder's memory of the film, Blade Runner—Autoencoded seeks to emphasize the film's ambiguous boundary between replicant and human, or, in the case of the reconstructed film, between our memory of the film and the neural network's. Examining this imperfect reconstruction, the gaze of a disembodied machine, makes it easier to acknowledge the flaws in our own internal representation of the world and to imagine the potential of other, substantially different systems with internal representations of their own.

Credits

Carried out on the MSci Creative Computing course at the Department of Computing, Goldsmiths, University of London, under the supervision of Mick Grierson.

Grasping https://ars.electronica.art/ai/en/grasping/

Dr. Manuela Macedonia (IT/AT)

For adults, learning a foreign language is difficult and often meets with little success. This series of scientific experiments, conducted by Dr. Manuela Macedonia and her staff at Johannes Kepler University Linz in cooperation with the Ars Electronica Center and the Catholic University of the Sacred Heart, Milan, investigates language learning in a virtual setting.

Users train in a virtual-reality environment, learning foreign-language vocabulary, for example, using procedures based on the principles of learning psychology and neuroscience. The training is ubiquitous, that is, independent of a particular time and place, and it is personalized.

In Grasping, the second experiment in this series, participants are immersed in a 3D underwater realm in Deep Space 8K at the Ars Electronica Center. Test subjects see virtually projected everyday objects and touch them with their hands—that is, they literally grasp them (in both senses of the word). This specific action supports the brain in memorizing the foreign language’s term for the object. This series of experiments is intended to make a long-term contribution to developing learning environments for mobile devices.

Credits

Joint research project by the Ars Electronica Center, Johannes Kepler University Linz, and the Catholic University of the Sacred Heart, Milan: “Intelligent Machines that Make Humans Learn Foreign Languages”

Johannes Kepler University Linz: Dr. Manuela Macedonia, Michael Holoubek
University of Vienna: Mag. Astrid Elisabeth Lehner, Bakk.
Catholic University of the Sacred Heart, Milan: Dr. Claudia Repetto
Ars Electronica Center: Mag. Erika Jungreithmayr
Ars Electronica Futurelab: Clemens F. Scharfen
Ars Electronica Museum Technology: Thomas Kollmann, Florian Wanninger
Ars Electronica Solutions: DI. Mag. Ali Nikrang, Poorya Piroozan, MSc.

Hybrid Art – cellF https://ars.electronica.art/ai/en/hybrid-art-cellf/

Guy Ben-Ary (AU), Douglas Bakkum (US), Mike Edel (AU), Andrew Fitch (AU), Stuart Hodgetts (AU), Darren Moore (AU), Nathan Thompson (AU)

There is a surprising similarity in the way neural networks and analogue synthesizers work: both receive signals and process them through components to generate data or sound.

cellF combines these two systems. The “brain” of this new creation consists of a biological neural network grown in a petri dish, which controls analogue modular synthesizers in real time. The living part of this completely autonomous, fully analogue instrument is composed of nerve cells. These were derived from Guy Ben-Ary’s fibroblasts (connective-tissue cells), which were reprogrammed into stem cells. Ben-Ary then developed these stem cells into neural stem cells, which under certain laboratory conditions can differentiate into nerve cells and form a neural network – Ben-Ary’s “external brain.”

The activity of this brain can be influenced by the input from other, human musicians and made audible through the analogue synthesizer. Human and instrument become a unit – a “cybernetic rock star” from the petri dish.

The project will be presented during the Ars Electronica Festival at POSTCITY.
