Artificial Intelligence – Ars Electronica Festival 2017
https://ars.electronica.art/ai/en

Closed Loop
https://ars.electronica.art/ai/en/closed-loop/

Jake Elwes (UK)

Artificial intelligence and machine learning are fast becoming part of everyday life. Based on AI models currently used, among other things, in content moderation and surveillance, the artworks explore the “latent space” of the AI as it processes and imagines the world for itself, dreaming in the areas between and beyond what it has learnt from us.

Collaborative project with Roland Arnoldt

In Closed Loop, two artificial intelligence models converse with each other—one with words, the other with images—in a never-ending feedback loop. The words of one describe the images of the other, which in turn tries to depict those words in a fresh image. The neural networks become lost in their own nuances, sparking and branching off each other as they converse.
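
The structure of that conversation is easy to sketch in code. The snippet below is not the artist's pipeline (the credits point to Anh Nguyen et al.'s generative networks); it only illustrates the caption-to-image-to-caption loop, assuming two off-the-shelf Hugging Face models as stand-ins: an image captioner and a text-to-image generator.

```python
import torch
from transformers import pipeline
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in models: a ViT-GPT2 captioner ("words") and Stable Diffusion ("images").
captioner = pipeline("image-to-text",
                     model="nlpconnect/vit-gpt2-image-captioning",
                     device=0 if device == "cuda" else -1)
generator = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5").to(device)   # or any comparable text-to-image checkpoint

text = "a room full of people watching a screen"    # arbitrary seed phrase
for step in range(10):                               # endless loop in the installation
    image = generator(text, num_inference_steps=25).images[0]
    image.save(f"loop_{step:03d}.png")
    text = captioner(image)[0]["generated_text"]     # one model describes the other's image
    print(step, text)
```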

Credits

www.jakeelwes.com
Special thanks to Anh Nguyen et al. at Evolving-AI for their work on GANs

Silences (Active Images)
https://ars.electronica.art/ai/en/active-images/

Lohner Carlson (DE/US)

Lohner Carlson have been pursuing the notion of the Active Image since the late 1980s, when their initial collaboration with John Cage inspired them to expand the found object and the notion of silence into the medium of film. As a result, Active Images investigate the nature of photography and the moving image.

The viewer’s “real” time perception collides with filmed “realtime” in an experimental combustion of long-term visual loops with seemingly coincidental and minimalist changes, thereby allowing the temporal and spatial dimensions to transform into hypnotic, rhythmic, visual-music structures.

In order to present their digital media work adequately, Lohner Carlson, together with Videri, have developed a combined hardware, software, content and exhibition platform named Active Image technology, which for the first time allows complete digital uniqueness and originality, accountability, transactability and security of the media artwork.

Aesthetically, images shown on the Active Image digital canvas rival the saturation and tranquility of an analog picture or painting. In the near future this new presentation form will be available to all media artists. At Ars Electronica 2017 the technology will be shown for the first time, featuring artworks by Lohner Carlson and Arotin & Serghei.

Credits

Lohner Carlson are Henning Lohner (DE/US), Van Carlson (US) and Max Carlson (US)

Machine Learning Porn
https://ars.electronica.art/ai/en/machine-learning-porn/

Jake Elwes (UK)

Artificial intelligence and machine learning are fast becoming part of everyday life. Based on AI models currently used, among other things, in content moderation and surveillance, the artworks explore the “latent space” of the AI as it processes and imagines the world for itself, dreaming in the areas between and beyond what it has learnt from us.

In Machine Learning Porn, a neural network trained as an explicit-content filter (of the kind used to detect pornography in search engines) is reverse engineered to generate new “pornography” from scratch: an AI daydreaming of sex.
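
One common way to “reverse engineer” a classifier in this sense is activation maximization: start from noise and follow the gradient that makes the network more confident in a chosen output. The sketch below is not the artist’s pipeline; it uses an ImageNet classifier as a stand-in for the explicit-content model and a hypothetical target class, purely to show the mechanism.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Stand-in classifier (the artwork targets an explicit-content model instead).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

target_class = 309                 # hypothetical class index to maximize
img = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    logits = model(normalize(img.clamp(0, 1)))
    loss = -logits[0, target_class]            # gradient ascent on the chosen logit
    loss = loss + 1e-4 * img.abs().mean()      # mild regularization against raw noise
    loss.backward()
    optimizer.step()

result = img.detach().clamp(0, 1)  # an image the classifier finds maximally "target-like"
```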

Credits

www.jakeelwes.com
Special thanks to Gabriel Goh for inspiration.

hananona
https://ars.electronica.art/ai/en/hananona/

STAIR Lab. (JP) collaborating with Surface & Architecture Inc., Kyoko Kunoh, Tomohiro Akagawa, Tanoshim Inc., mokha Inc. and Tokyo Studio Co., Ltd. (JP)

The latest AI research makes it possible to teach computers the names of things by showing them many examples. The key is a large amount of training data and deep-learning software. By leveraging this, the artists have developed an AI capable of classifying 406 kinds of flowers, trained on over 300,000 flower pictures.
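
Under stated assumptions (PyTorch, an ImageNet-pretrained backbone and a hypothetical flowers/<species>/<image>.jpg folder layout), a classifier of this kind can be sketched with standard transfer learning; the lab’s actual training setup is not documented here.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 406  # flower species, per the project description

# Hypothetical dataset layout: flowers/<species_name>/<image>.jpg
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("flowers", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Transfer learning: keep the ImageNet backbone, replace the final layer
# with a 406-way flower classifier and train (at least) that layer.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:               # a single pass, for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```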

hananona is an interactive work that visualizes how an AI classifies a flower. When it sees a flower, it identifies its name and places it on a visual “flower map”—a visualization of the inside of the AI brain. The map is made up of image clusters, each grouping flower photos the model has learned to assign to the same class; by exploring them, users can see how the AI organizes the flowers.
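
The “flower map” can be read as a low-dimensional layout of the network’s internal features. As a rough sketch (not the installation’s actual visualization), one can extract penultimate-layer embeddings and project them to two dimensions, so that photos the model treats as similar land close together:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from sklearn.manifold import TSNE

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("flowers", transform=transform)  # hypothetical path
loader = torch.utils.data.DataLoader(dataset, batch_size=32)

# Feature extractor: a backbone with its classifier head removed. A model
# fine-tuned on the flowers (as sketched above) would give tighter clusters.
resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
backbone = nn.Sequential(*list(resnet.children())[:-1])

features, labels = [], []
with torch.no_grad():
    for images, targets in loader:
        features.append(backbone(images).flatten(1))   # (batch, 2048) embeddings
        labels.append(targets)

features = torch.cat(features).numpy()
labels = torch.cat(labels).numpy()
coords = TSNE(n_components=2).fit_transform(features)  # 2-D "flower map" layout
# Plotting coords coloured by label groups photos of the same species together.
```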

Users are encouraged to challenge hananona with their own flower photos, or with other material such as pictures, paintings and flower-like objects, so that they can observe how the AI reacts to different levels of abstraction.

Credits

STAIR Lab., Chiba Institute of Technology

Creative direction, design: Surface & Architecture Inc.

Art direction: Kyoko Kunoh
Interaction design, programming: Tomohiro Akagawa
Programming: Tanoshim Inc.
Server programming: mokha Inc.
Furniture production, site setup: Tokyo Studio Co., Ltd.

Fight
https://ars.electronica.art/ai/en/fight/

We see things not as they are, but as we are.

Memo Akten (TR/UK)

Fight is a virtual-reality artwork in which the viewer’s two eyes are presented with radically different images, resulting in a phenomenon known as binocular rivalry. Presented with rival signals, the conscious mind “sees” an unstable, irregular, animated patchwork of the two images, with swipes and transitions. The nature of these irregularities and instabilities depends on the viewer’s physiology.
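
The rivalry itself needs no elaborate machinery. As a hedged illustration (not the artist’s production setup, and with placeholder file names), the trick reduces to a side-by-side stereo frame whose two halves carry unrelated images, so that a headset delivers rival signals to the two eyes:

```python
from PIL import Image

# Placeholder inputs; in the artwork each eye receives a radically different signal.
left_eye = Image.open("left.jpg").resize((960, 1080))
right_eye = Image.open("right.jpg").resize((960, 1080))

# Side-by-side stereo layout, as consumed by many simple VR video players:
# the headset shows the left half only to the left eye and the right half
# only to the right eye, producing binocular rivalry when they disagree.
frame = Image.new("RGB", (1920, 1080))
frame.paste(left_eye, (0, 0))
frame.paste(right_eye, (960, 0))
frame.save("stereo_frame.jpg")
```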

The act of looking around allows the viewer to probe which sections of the signals become dominant or suppressed—a reminder that seeing (and, in broader terms, perception in general) is an active process, driven by movement, expectations and intent. The picture one sees in one’s conscious mind is not a direct representation of the outside world, or of what the senses deliver, but a simulated world, reconstructed according to one’s own expectations and prior beliefs.

Even though everybody is presented with exactly the same images in this work, everyone’s conscious visual experience will be different. Nobody can see what someone else sees, and everybody sees something other than what is actually presented. Nobody is able to see the entirety of the “reality” before them. The work is part of a broader line of inquiry into self-affirming human biases, the inability to see the world from others’ points of view, and the resulting social polarization.

Credits

Commissioned by STRP
Score: Rutger Zuydervelt (Machinefabriek)
Producer: Juliette Bibasse
Assistant: Rob Homewood
