human – Artificial Intelligence – Ars Electronica Festival 2017
https://ars.electronica.art/ai/en

Until I Die
https://ars.electronica.art/ai/en/until-i-die/

::vtol:: (RU)

The large-scale project Until I Die is a hybrid installation that uses the artist’s blood, extracted and accumulated over a long period of time. The blood is used to generate electricity for a small sound synthesizer.

It is one of the most significant and complex works created by ::vtol:: in recent years, touching on many topics relating to hybrid art: alternative sources of energy, unification of the human body and machine, using the body as a resource. In general, this project is an attempt to create a technical-biological clone of the artist, using his own life energy to compose electronic music.

Natural History of the Enigma
https://ars.electronica.art/ai/en/enigma/

Eduardo Kac (US)

The central work in the Natural History of the Enigma series is a “plantimal,” a new life form that Eduardo Kac has created and calls “Edunia”: a genetically engineered flower that is a hybrid of the artist and a petunia.

Edunia expresses Kac’s DNA exclusively in its red veins. The gene that Kac has selected is responsible for the identification of foreign bodies. In this work, it is precisely what identifies and rejects the Other that the artist integrates into the Other, thus creating a new kind of self that is partly flower and partly human.

Recomposition of Human Presence: Waves, Material, and Intelligence
https://ars.electronica.art/ai/en/recomposition-of-human-presence/

From Human Society towards Digital Nature and Computational Incubated Diversity

Digital Nature Group at the University of Tsukuba, Pixie Dust Technologies Inc. (JP)

How can we redefine our human presence? The Digital Nature Group researches the relationship between waves, material and intelligence in computational environments, with the aim of building feedback loops between human intelligence and machine intelligence. From the viewpoint of computer science, they are prototyping systems that combine wave engineering, organic and meta-materials, digital fabrication and deep learning in order to discover the new ecosystem of the digital age.

The Digital Nature Group consists of over forty people, including students, researchers and their professor, all interested in wave engineering, machine learning and materials research. They promote research and development not only for academia but also for use in society.

In their prototype series, they are developing software that uses deep learning to generate alternative clothing designs in the manner of famous designers, forming creative loops between ordinary designers and machine intelligence; they are developing automated wheelchairs and prosthetic body aids; and they are building loops that couple the spatial recognition of machine intelligence with the relationship between light, sound and the human body. All of these projects rest on the link between digital-fabrication, wave-engineering and machine-learning technology.

What you see in such prototypes is a direction that differs from modern standardized social forms, mass-production formats and mass-communication styles. They define their view of the world as computationally incubated diversity, tackling the expansion of the body, the expansion of the production process, audiovisual communication through holographic wave engineering for individual communication, and machine intelligence. They are trying to use these emerging technologies to figure out the ecosystem of the digital age; this is what they keep in mind while combining art, science and technology and thereby trying to solve real social problems. The technology meme known as the technium that arises here appears distinctly Japanese in style, with its own cultural perspective.

Credits

Yoichi Ochiai, Atsushi Shinoda, Akira Ishii, Keisuke Kawahara, Amy Koike, Junjian Zhang, Kazuki Takazawa, Kensuke Abe, Kotaro Omomo, Natsumi Kato, Ryota Kawamura, Satoshi Hashizume, Ooi Chun Wei, Yaohao Chen, Hiroki Hasada, Keita Kanai, Mose Sakashita, Naoya Muramatsu, Shingo Uzawa, Yuki Koyama, Yuta Sato, Chihiro Murakami, Ippei Suzuki, Kenta Yamamoto, Shinji Sakamoto, Ayaka Ebisu, Daitetsu Sato, Hiroyuki Osone, Kubokawa Kazuyoshi, Riku Iwasaki, Tatsuya Minagawa, Taisuke Ohshima, Akira Hashimoto, Wataru Kaji, Yuta Ito, Kazuki Otao, Kengo Tanaka, Kohei Ogawa, Kent Kishima, Shinnosuke Ando, Shouki Imai, Yusuke Tanemura

All projects are supervised by Prof. Yoichi Ochiai.

Supported by: Digital Nature Group, University of Tsukuba, Pixie Dust Technologies Inc.

DeepWear

Natsumi Kato (JP), Hiroyuki Osone (JP)

We present DeepWear, a method that uses deep learning for clothes design. The DeepWear system uses a DCGAN to generate garment images, and designers make clothes by drawing inspiration from those images.
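The description names DCGAN as the generative model but gives no implementation details. As a minimal sketch, here is a DCGAN-style generator in PyTorch; the architecture, latent size and image resolution are assumptions for illustration, not the DeepWear code.

```python
# A minimal DCGAN generator sketch (assumed sizes, not the DeepWear model).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a latent vector z to a 64x64 RGB image, DCGAN-style."""
    def __init__(self, z_dim=100, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0, bias=False),    # 1x1 -> 4x4
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False), # 8x8
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False), # 16x16
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),     # 32x32
            nn.BatchNorm2d(feat), nn.ReLU(True),
            nn.ConvTranspose2d(feat, 3, 4, 2, 1, bias=False),            # 64x64
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

# Sample a batch of candidate garment images for a designer to browse.
g = Generator()
images = g(torch.randn(16, 100, 1, 1))  # shape: (16, 3, 64, 64)
```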

Coded Skeleton

Taisuke Ohshima (JP), Miyu Iwafune (JP)

Coded Skeleton is a material that transforms into preprogrammed motions driven by simple linear actuators. This property is provided by a 3D-printable geometric structure. The motion is designed with original software that generates a 3D-printable structure that is flexible only in the designed motion but stiff against all other deformations, a property we call “isolated flexibility.” It realizes precisely controllable elastic motion from simple linear actuators, and the design system that has been developed enables us to design the motion of the Coded Skeleton.
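Purely as a conceptual illustration of “isolated flexibility” (not the Coded Skeleton software), one can model a structure whose linearized stiffness matrix has one very soft eigenmode, the designed motion, while all other modes stay stiff. The degrees of freedom and stiffness values below are arbitrary:

```python
# Conceptual sketch: one compliant mode, all other deformations stiff.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical designed motion: a unit displacement pattern over 6 DOFs.
designed_mode = rng.standard_normal(6)
designed_mode /= np.linalg.norm(designed_mode)

# Orthonormal basis whose first column spans the designed motion.
basis, _ = np.linalg.qr(np.column_stack([designed_mode, rng.standard_normal((6, 5))]))

# Soft (k=1) along the designed mode, stiff (k=1000) everywhere else.
K = basis @ np.diag([1.0] + [1000.0] * 5) @ basis.T

# A force along the designed mode produces a large displacement;
# the same force magnitude in any other direction barely moves it.
print(np.linalg.norm(np.linalg.solve(K, designed_mode)))  # ~1.0   (compliant)
print(np.linalg.norm(np.linalg.solve(K, basis[:, 1])))    # ~0.001 (stiff)
```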

Stimulated Percussions

Ayaka Ebisu (JP), Yuta Sato (JP)

Electrical stimulation turns muscles into machines: the body, controlled by a program, produces rhythms. This is a new method for musical performance that lets even beginners beat out rhythms, making it easy to play different rhythms simultaneously with the right hand and the left hand.
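The description does not specify how the stimulation is driven. A rough sketch, assuming the program steps through one on/off grid per hand and fires a stimulation pulse on active steps; send_pulse is a hypothetical stand-in for the actual EMS hardware interface:

```python
# Hedged sketch: two independent rhythm grids, one per hand/channel.
import time

def send_pulse(channel: str) -> None:
    # Placeholder for real electrical-muscle-stimulation output.
    print(f"{time.monotonic():8.3f}s  pulse -> {channel}")

def play(patterns: dict[str, list[int]], step_s: float = 0.25, steps: int = 16) -> None:
    """patterns maps a channel ('left'/'right') to an on/off step grid."""
    for step in range(steps):
        for channel, grid in patterns.items():
            if grid[step % len(grid)]:
                send_pulse(channel)
        time.sleep(step_s)

# Right hand on every quarter note, left hand on an off-beat pattern.
play({
    "right": [1, 0, 0, 0] * 4,
    "left":  [0, 0, 1, 0, 0, 1, 0, 0] * 2,
})
```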

Live Jacket

Yoichi Ochiai (JP), HAKUHODO Inc. (JP), Go inc. (JP), Kenta Suzuki (JP), Shinji Sakamoto (JP)

Our Live Jacket demonstration allows visitors to wear a jacket with built-in speakers and to listen to music over the whole body. There are 22 built-in speakers which play music from every part of the jacket, so visitors can experience wrap-around sound. In addition, the sounds change depending on the movement of the person wearing it.
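How the movement-to-sound mapping works is not described. As an assumed sketch, motion energy from a wearable accelerometer could modulate per-speaker gain across the 22 channels; the sensor reader and gain curve below are placeholders, not the Live Jacket implementation:

```python
# Hedged sketch: map wearer motion to gains for 22 jacket speakers.
import math
import random

NUM_SPEAKERS = 22

def read_accelerometer() -> tuple[float, float, float]:
    # Placeholder: a real build would read the jacket's IMU here.
    return (random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 9.8))

def motion_energy(sample: tuple[float, float, float]) -> float:
    x, y, z = sample
    # Deviation of acceleration magnitude from gravity ~ movement intensity.
    return abs(math.sqrt(x * x + y * y + z * z) - 9.8)

def speaker_gains(energy: float) -> list[float]:
    # More movement -> higher gains, clamped to 1.0 per channel.
    base = min(energy / 5.0, 1.0)
    return [min(base * (1.0 + 0.05 * i), 1.0) for i in range(NUM_SPEAKERS)]

gains = speaker_gains(motion_energy(read_accelerometer()))
print([f"{g:.2f}" for g in gains])
```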

Immersive Light Field

Kazuki Otao (JP)

This head-mounted display (HMD) system makes it possible to project images directly into human pupils and to see the environment through an HMD. This system provides an unprecedentedly wide angle and shows the possibility of metamaterials that have properties that do not exist in nature.

Printed Absorbent

Kohei Ogawa (JP), Hiroki Hasada (JP), Kensuke Abe (JP), Kenta Yamamoto (JP)

In this work, we fabricated a structure that induces capillary action, and the plants on display are grown by this structure. Look forward to seeing how they grow.
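For context, the capillary rise such a structure exploits follows Jurin's law, h = 2γ·cos(θ)/(ρgr). A quick calculation with illustrative numbers for water (the work's actual channel geometry is not given):

```python
# Jurin's law with assumed values: water in a 0.2 mm radius channel.
import math

gamma = 0.0728             # surface tension of water, N/m (20 degC)
theta = math.radians(0)    # contact angle, assume a fully wetting surface
rho, g = 1000.0, 9.81      # water density kg/m^3, gravity m/s^2
r = 0.2e-3                 # channel radius, m

h = 2 * gamma * math.cos(theta) / (rho * g * r)
print(f"capillary rise: {h * 100:.1f} cm")  # ~7.4 cm
```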

Telewheelchair

Satoshi Hashizume (JP), Kazuki Takazawa (JP), Ippei Suzuki (JP)

This telepresence system enables remote care by equipping a wheelchair with functions such as object recognition.
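A minimal sketch of such a control loop, assuming a frame-by-frame detector and a motor interface; detect_objects and set_wheel_speed are hypothetical stand-ins, not the Telewheelchair API:

```python
# Hedged sketch: stop the wheelchair when an object is detected nearby.
import cv2  # pip install opencv-python

def detect_objects(frame):
    """Placeholder: a real system would run a trained detector here."""
    return []  # list of (label, bounding_box, distance_m)

def set_wheel_speed(left: float, right: float) -> None:
    print(f"wheels: left={left:.1f} right={right:.1f}")  # placeholder motor API

cap = cv2.VideoCapture(0)
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Stop if anything detected within one meter, else creep forward.
        if any(dist < 1.0 for _, _, dist in detect_objects(frame)):
            set_wheel_speed(0.0, 0.0)
        else:
            set_wheel_speed(0.3, 0.3)
finally:
    cap.release()
```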

Hybrid Art – cellF
https://ars.electronica.art/ai/en/hybrid-art-cellf/

Guy Ben-Ary (AU), Douglas Bakkum (US), Mike Edel (AU), Andrew Fitch (AU), Stuart Hodgetts (AU), Darren Moore (AU), Nathan Thompson (AU)

There is a surprising similarity in the way neural networks and analogue synthesizers work: both receive signals and process them through components to generate data or sound.

cellF combines these two systems. The “brain” of this new creation consists of a biological neural network grown in a petri dish, which controls analogue modular synthesizers in real time. The living part of this completely autonomous and analogue instrument is composed of nerve cells. These were taken from Guy Ben-Ary’s fibroblasts (cells in connective tissue), which were programmed back into stem cells. Guy Ben-Ary then artificially further developed these stem cells into neural stem cells, which can become differentiated into nerve cells under certain conditions in the laboratory and form a neural network – Ben-Ary’s “external brain.”

The activity of this brain can be influenced by the input from other, human musicians and made audible through the analogue synthesizer. Human and instrument become a unit – a “cybernetic rock star” from the petri dish.
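The text gives no implementation details of the coupling. Purely as a conceptual sketch, spike events from a multi-electrode array could be binned into per-electrode firing rates, each driving one control voltage on the analogue synthesizer; the binning scheme, rate ceiling and voltage range below are assumptions, not the cellF hardware:

```python
# Hedged sketch: multi-electrode spike events -> synthesizer control voltages.
from collections import defaultdict

def firing_rates(spikes, bin_s=0.1):
    """spikes: sorted list of (timestamp_s, electrode_id). Returns Hz per electrode."""
    if not spikes:
        return {}
    counts: dict[int, int] = defaultdict(int)
    duration = max(spikes[-1][0] - spikes[0][0], bin_s)
    for _, electrode in spikes:
        counts[electrode] += 1
    return {e: n / duration for e, n in counts.items()}

def to_control_voltage(rate_hz: float, max_rate: float = 50.0) -> float:
    """Map a firing rate onto a 0-5 V control-voltage range."""
    return 5.0 * min(rate_hz / max_rate, 1.0)

spikes = [(0.00, 3), (0.02, 3), (0.05, 7), (0.08, 3), (0.09, 7)]
for electrode, rate in firing_rates(spikes).items():
    print(f"electrode {electrode}: {rate:5.1f} Hz -> {to_control_voltage(rate):.2f} V")
```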

The project will be presented during the Ars Electronica Festival in the POSTCITY.

Hello Machine—Hello Human
https://ars.electronica.art/ai/en/hello-machine-hello-human/

Rachel Hanlon (AU)

Hello . . . ? Can you talk to me . . . ? When technologies reach obsolescence our relationship with them changes, but what never changes is our need to reach out to others, connect and share. But what if no one is on the other end of the line? Who is there to hear us?

AI has made sure there always is! A “speech race” is upon us. First we had interactive voice response systems; now, with natural-language interface systems, we have our new “weavers of speech.” These modern-day “voices with a smile” are changing the way we communicate with our phones. Siri, Alexa, Bixby, Cortana and Google Assistant (shall we call her GAbby?) are all vying for your attention, but what will our budding relationships with these Boy/Girl Fridays blossom into? Hello Machine—Hello Human touches on the playful moments shared between human and machine, and seeks to connect with you by inverting this relationship, asking what you can do for her.

Hello Machines are situated across the globe in ever-changing locations and time zones. Picking up the receiver rings the other Hello Machines, creating space for spontaneous voice visiting. They let the viewer interact with reanimated, technically obsolete telephone systems through present-day advances in telephony. Their aim is to open up a dialog between the technologies’ original ideas and meanings and the “thingness” these devices now possess, by unraveling the historical and societal content that contains traces of our identity.

Credits

This project has been assisted by the Australian Government through the Australia Council, its arts funding and advisory body.

Hello Machine—Hello Human was developed within the Ars Electronica Futurelab, and forms part of Rachel Hanlon’s PhD Research through Deakin University, Australia.

iOTA
https://ars.electronica.art/ai/en/iota/

OUCHHH X AUDIOFIL feat. MASOM (TR/CA)

Can machines totally replace humans, or is there a need for just the right combination of human and artificial intelligence—hybrid intelligence?

OUCHHH collaborates with AudioFil, Kıvanç Tatar and Philippe Pasquier on an audio-visual performance with artificial intelligence. We will turn our real-time onstage performance into a human-machine collaboration by adding MASOM (an artificial-intelligence system that makes music) to the new version of iOTA. MASOM was developed by Kıvanç Tatar and Philippe Pasquier, and for this piece it is trained on previous compositions by Mehmet Ünal of AudioFIL.
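MASOM's internals are not described here; its name suggests a musical agent built around self-organizing maps (SOMs) that cluster audio features of a training corpus. As a rough, assumed illustration only, here is a minimal SOM training loop in NumPy; the grid size, feature dimensionality and data are placeholders, and the agent's feature extraction and playback logic are omitted:

```python
# Hedged sketch: a minimal self-organizing map over audio feature vectors.
import numpy as np

rng = np.random.default_rng(1)

def train_som(features, grid=(8, 8), epochs=20, lr=0.5, sigma=2.0):
    """features: (n_samples, n_dims) audio feature vectors (e.g. MFCC frames)."""
    h, w = grid
    weights = rng.standard_normal((h, w, features.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for epoch in range(epochs):
        decay = np.exp(-epoch / epochs)  # shrink learning rate and neighbourhood
        for x in features:
            # Best-matching unit: grid cell whose weights are closest to x.
            bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
            # Pull the BMU and its neighbours toward the sample.
            dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
            g = np.exp(-dist2 / (2 * (sigma * decay) ** 2))
            weights += (lr * decay) * g[..., None] * (x - weights)
    return weights

som = train_som(rng.standard_normal((200, 13)))  # 200 fake 13-dim MFCC frames
print(som.shape)  # (8, 8, 13)
```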

In mathematics, iOTA (i) denotes an imaginary unit or number; it can also stand for the inclusion map of one space into another. Light is the single element that can be perceived by the eye. iOTA is an LED installation inspired by the physics of light and research into the origins of geometry. Depending on the focus of the observer, the nature of light and its different phenomena can be seen beyond the perceptivity of the human mind; the work attempts to translate them into a unified, non-spatial form.

iOTA was presented on the 126 m² LED screen at the Zorlu Performing Arts Center. The installation was part of the Sonar +D showcase at Sonar Istanbul Festival 2017 and Digi.logue.

Credits

Producer: Ouchhh Studio
New media artists and directors: Ferdi Alıcı, Eylul Duranagac (OUCHHH)
Creative coder and AI artists: Kıvanç Tatar and Philippe Pasquier (MASOM)
Sound design and music: Mehmet Ünal from AudioFIL
