synthesizer – Artificial Intelligence – Ars Electronica Festival 2017
https://ars.electronica.art/ai/en

cellF
https://ars.electronica.art/ai/en/cellf/

Guy Ben-Ary (AU), Nathan Thompson (AU), Andrew Fitch (AU), Darren Moore (AU), Stuart Hodgetts (AU), Mike Edel (AU), Douglas Bakkum (US)

cellF is Guy Ben-Ary’s self-portrait but also the world’s first neural synthesizer. cellF’s “brain” is made of a living neural network that grows in a Petri dish and controls analog synthesizers that work in synergy with the neural network in real time.

Ben-Ary had a biopsy taken from his arm; he then cultivated his skin cells and, using iPS technology, transformed them into stem cells, which were differentiated into neural networks grown over a multi-electrode-array (MEA) dish to become “Guy’s external brain.” The MEA dish consists of a grid of 8 x 8 electrodes, which can record the electric signals the neurons produce and send stimulation back to the neurons: a read-and-write interface to the “brain.” Human musicians are invited to play with cellF. The human-made music is fed to the neurons as stimulation, and the neurons respond by controlling the synthesizers. Together they perform live, reflexive and improvised sound pieces that are not entirely human. The sound is spatialized across sixteen speakers, and the spatialization reflects the pockets of activity within the MEA dish. Walking around the space offers the sensation of walking through Guy’s external brain.
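The signal flow described above can be sketched roughly as follows. This is a minimal illustration under assumed interfaces, not the actual cellF software: electrode activity from the 8 x 8 MEA is thresholded into a spike map, the map is folded into sixteen “pockets” that set the speaker levels, and per-row averages stand in for the control values sent to the analog synthesizers. All names and thresholds are hypothetical placeholders.

```python
import numpy as np

# Sketch of the cellF signal flow as described in the text (not the real code):
# 8 x 8 MEA frame -> spike map -> 16 speaker levels + synth control values.

N_ROWS, N_COLS = 8, 8        # MEA grid described in the text
N_SPEAKERS = 16              # speakers surrounding the audience
SPIKE_THRESHOLD = 5.0        # arbitrary amplitude threshold (placeholder)

def spikes_from_frame(frame: np.ndarray) -> np.ndarray:
    """Binary spike map: which of the 64 electrodes crossed the threshold."""
    return (np.abs(frame) > SPIKE_THRESHOLD).astype(float)

def speaker_levels(spikes: np.ndarray) -> np.ndarray:
    """Fold the 8x8 spike map into 16 levels, one per 2x2 'pocket' of the dish."""
    pockets = spikes.reshape(4, 2, 4, 2).sum(axis=(1, 3))  # 4x4 pockets of 2x2
    return (pockets / 4.0).flatten()                       # 16 values in [0, 1]

def control_values(spikes: np.ndarray) -> np.ndarray:
    """One control value per electrode row, e.g. to drive the analog synths."""
    return spikes.mean(axis=1)

if __name__ == "__main__":
    frame = np.random.randn(N_ROWS, N_COLS) * 4.0   # stand-in for real MEA data
    s = spikes_from_frame(frame)
    print("speaker levels:", np.round(speaker_levels(s), 2))
    print("row control values:", np.round(control_values(s), 2))
```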

cellF was initiated and spearheaded by the artist Guy Ben-Ary, but it is the result of a collaboration involving Ben-Ary together with the designer and new-media artist Nathan Thompson, electrical engineer and synthesizer builder Dr. Andrew Fitch, musician Dr. Darren Moore, neuroscientist Dr. Stuart Hodgetts, stem-cell scientist Dr. Michael Edel and neuro-engineer Dr. Douglas Bakkum. Each contributor played an important role in shaping the final outcome.

Credits

The project is supported by the Australia Council for the Arts and the Department of Culture and the Arts WA.

The project is hosted by SymbioticA @ the University of Western Australia.

I’m Humanity
https://ars.electronica.art/ai/en/im-humanity/

Etsuko Yakushimaru (JP)

The project I’m Humanity is based on the concept of “post-humanity music” and explores how new music will be transmitted, recorded, mutated, and diffused: whether sung or played, passed on by word of mouth, written as scores, or carried by radio, records and CDs, or cloud computing.

Music travels through space and time, undergoing mutations on its way. The close connection between music and media resembles that between transmission and recording, and can be thought of as that between genes and DNA. As a musician, Yakushimaru has worked in a variety of genres from pop to experimental music and has created various types of artwork: drawings, installations, pieces that make use of satellite and biometric data, a song-generating robot, original instruments, and more.

In I’m Humanity, Yakushimaru makes pop music with the use of the nucleic acid sequence of Synechococcus, a type of cyanobacteria. The musical information is converted into a genetic code, which was used to create a long DNA sequence comprising three connected nucleic acid sequences. The DNA was artificially synthesized and incorporated into the chromosome of the microorganism. This genetically modified microorganism with music in its DNA is able to continuously self-replicate. So even if humanity as we know it becomes extinct, it will live on, waiting for the music within it to be decoded and played by the species that replaces humanity.

When thinking about the lifespan of recording media: CDs are said to last for decades and acid-free paper for centuries. In comparison, the lifespan of DNA as a recording medium is, physicochemically speaking, on the order of 500 thousand years. Because its lifespan is so long, DNA has great potential as a recording medium.

Biotechnical procedures

In our DNA, which consists of four kinds of nucleotides (A, C, G and T), each amino acid is encoded by a nucleotide triplet (codon). The rules for this translation are summarized in the codon table. Based on this codon table used in living cells, a cipher was created to convert music chords into genetic code. The main chord progression of I’m Humanity was converted into the following 276 nucleotides:

I’m Humanity: 276 bp; A 22; T 101; G 57; C 96 (GC content = 55.4%)

GGTCTTCCCCATGGTCTTCCCCATGGTCTTCCCCATGGTCTTCCCCATGGTCTTCCCCAT
GGTCTTCCCCATGGTCTTCCCCATGGTCTTCCCCATTCTTCTGGAGGATCTTCTGGAGGA
TCTTCTTTGGGTTCTTCTGGAGGCGGTCTTCCCCATGGTCTTCCCCATCTTCTTCTTCTT
GGTGGTGGTGGTATTCTTCTTCTCGGTGGTCCCACTGGTCTTCCCCATGGTCTTCCCCAT
GGTCTTCCCCATGGTCTTCCCCATGGTCTTCCCCAT
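As a rough illustration of the conversion described above, the sketch below pairs a purely hypothetical chord-to-codon cipher (the actual mapping used by Yakushimaru is not published here; the placeholder chords are chosen only so that the example reproduces the repeating 12-mer visible in the sequence) with a composition check of the printed sequence, which reproduces the figures given above.

```python
from collections import Counter

# Hypothetical cipher in the spirit of the codon table: one nucleotide triplet
# per chord symbol. These assignments are placeholders, not the real mapping.
CHORD_TO_CODON = {"C": "GGT", "Dm": "CTT", "Em": "CCC", "F": "CAT"}

def encode(chords):
    """Concatenate one codon per chord into a DNA string."""
    return "".join(CHORD_TO_CODON[c] for c in chords)

print(encode(["C", "Dm", "Em", "F"]))   # -> GGTCTTCCCCAT

# The published 276-nucleotide sequence of "I'm Humanity" (line breaks removed).
SEQ = (
    "GGTCTTCCCCATGGTCTTCCCCATGGTCTTCCCCATGGTCTTCCCCATGGTCTTCCCCAT"
    "GGTCTTCCCCATGGTCTTCCCCATGGTCTTCCCCATTCTTCTGGAGGATCTTCTGGAGGA"
    "TCTTCTTTGGGTTCTTCTGGAGGCGGTCTTCCCCATGGTCTTCCCCATCTTCTTCTTCTT"
    "GGTGGTGGTGGTATTCTTCTTCTCGGTGGTCCCACTGGTCTTCCCCATGGTCTTCCCCAT"
    "GGTCTTCCCCATGGTCTTCCCCATGGTCTTCCCCAT"
)

counts = Counter(SEQ)
gc = 100.0 * (counts["G"] + counts["C"]) / len(SEQ)
print(f"{len(SEQ)} bp; A {counts['A']}; T {counts['T']}; G {counts['G']}; "
      f"C {counts['C']} (GC content = {gc:.1f}%)")
# -> 276 bp; A 22; T 101; G 57; C 96 (GC content = 55.4%)
```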

The genetic code was artificially synthesized by a DNA synthesizer and inserted into a vector, designated pSyn_1. The inserted DNA fragment encoding the music chords was introduced into the genome of a host cell (a cyanobacterium, Synechococcus elongatus PCC 7942) by homologous recombination. The music chords in the Synechococcus genome can be reproduced indefinitely along with cell division.

“I’m Humanity” genetically-modified microorganism

Etsuko Yakushimaru with “I’m Humanity” in culture

On the other hand, it is not rare for nucleic acid sequences to mutate, and this naturally changes the genetic information they carry. In that respect, the mutation of nucleic acid sequences is strikingly similar to the history of the diffusion of music, in which mutation, alongside the transmission of information, has also played an important role.

In the lyrics of I’m Humanity, the microorganism I’m Humanity sings “Stop the evolution―don’t stop it.” Although mutation spurs evolution, it also means the changing of a species. Perhaps I’m Humanity is caught between its own evolution and its fear that evolving could mean the loss of the nucleic acid sequences carrying its musical information, which would make it impossible for I’m Humanity to sing the song anymore.

A transposon (a gene that moves within the genome and causes mutation) based on the DNA of Synechococcus was planted in the score of “I’m Humanity.” In this performance, that segment was played in an arrangement that makes it seem as if an actual mutation were taking place.

I’m Humanity became the first song in human history to be released in the three formats of “digital music distribution,” “CD,” and “genetically modified microorganism.” This song, produced with the use of biotechnology, was distributed as pop music and was also featured on the Apple Music start page.

Credits

I’m Humanity produced and directed by Etsuko Yakushimaru

Lyrics: Tica Alpha (a.k.a Etsuko Yakushimaru)
Music: Tica Alpha (a.k.a Etsuko Yakushimaru)
Genetic Codes: Etsuko Yakushimaru
Art Direction & Drawing & Design: Etsuko Yakushimaru
(C) 2016 Yakushimaru Etsuko

Musical arrangement: Etsuko Yakushimaru, Motoki Yamaguchi
Vocal & Chorus & Programming & dimtakt: Etsuko Yakushimaru
Drums & Programming: Motoki Yamaguchi
Recording & Mixing Engineer: Yujiro Yonetsu
Mastering Engineer: Shigeo Miyamoto
Technical Support: Satoshi Hanada
Photograph & Movie: MIRAI seisaku / Photograph (Compact Disc): Satomi Haraguchi
Label: MIRAI records
(P) MIRAI records
Support & Thanks: KENPOKU ART 2016, METI (Ministry of Economy, Trade and Industry), National Institute of Technology and Evaluation (NITE), Satoshi Hanada, Tokyo Metropolitan University, FabCafe MTRL, Yamaguchi Center for Arts and Media [YCAM]
*Apple Music is a trademark of Apple Inc., registered in the U.S. and other countries.

About the artist

Etsuko Yakushimaru (JP) is an artist, musician, producer, lyricist, composer, arranger, and vocalist. She is broadly active from pop music to experimental music and art, and consistently independent in her wide-ranging activities, which also include drawing, installation art, media art, poetry and other literature, and recitation. She has produced numerous projects and artists, including her band, Soutaiseiriron. While appearing in the music charts with many hit songs, she has also created a project that involved the use of satellite data, biological data and biotechnology, a song-generating robot powered by artificial intelligence and her own voice, an independently developed VR system, and original electronic musical instruments. Major recent activities include exhibitions at the Mori Art Museum, Toyota Municipal Museum of Art, KENPOKU ART 2016, and the Yamaguchi Center for Arts and Media [YCAM]. Her Tensei Jingle and Flying Tentacles albums, both released in 2016, received praise from figures including Ryuichi Sakamoto, Jeff Mills, Fennesz, Penguin Cafe, Kiyoshi Kurosawa and Toh EnJoe.

Read more: starts-prize.aec.at.

This project is presented in the framework of the STARTS Prize 2017. STARTS Prize received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 732019.


_nybble_
https://ars.electronica.art/ai/en/nybble/

Alex Augier (FR)

_nybble_ is an audiovisual, formal and spatial performance in which the media fluctuate between minimal and organic digital aesthetics, two poles on the same continuum. The aesthetic fluctuation is produced by generative visuals in which various forces impose both natural and geometric movements on a particle system.
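A rough sketch of that idea, not Alex Augier’s actual software: the example below blends an “organic,” noise-like force with a “geometric” attraction toward a circle and lets the blend fluctuate slowly over time. All parameters are illustrative assumptions.

```python
import numpy as np

# Particle system whose motion fluctuates between two poles:
# blend = 0 -> purely organic (noise-like) motion, blend = 1 -> purely geometric.

N = 2000
rng = np.random.default_rng()
pos = rng.uniform(-1, 1, (N, 2))
vel = np.zeros((N, 2))

def geometric_force(p, radius=0.8):
    """Pull each particle toward the nearest point on a circle of given radius."""
    r = np.linalg.norm(p, axis=1, keepdims=True) + 1e-9
    return p / r * radius - p

def organic_force():
    """Random, noise-like drift (a stand-in for e.g. curl noise)."""
    return rng.normal(scale=0.05, size=(N, 2))

def step(pos, vel, blend, dt=0.016):
    f = blend * geometric_force(pos) + (1.0 - blend) * organic_force()
    vel = 0.95 * vel + f * dt          # damping keeps the system stable
    return pos + vel * dt, vel

for frame in range(300):
    blend = 0.5 * (1 + np.sin(frame / 50.0))   # slow fluctuation between poles
    pos, vel = step(pos, vel, blend)

print("mean distance from centre:", np.linalg.norm(pos, axis=1).mean())
```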

The modular synthesizer keeps the musician at the heart of the proposal and controls the musical fluctuation. The stage design allows the audiovisual medium to deploy in space via a specific structure composed of four transparent screens and four points of sound diffusion. It offers the audience a quadraphonic and quadrascopic image for a total synaesthetic experience.

Credits

Co-production: Arcadi (Paris/FR), Stereolux (Nantes/FR)
Support: La Muse en Circuit (Alfortville/FR)

Digital Musics & Sound Art – Acoustic Additive Synthesizer
https://ars.electronica.art/ai/en/acoustic-additive-synthesizer/

Krzysztof Cybulski (PL)

The Acoustic Additive Synthesizer (AAS) is an interactive object and instrument, which is based on the principles of a pipe organ. Pitch and volume, however, are controlled here by a computer. Each of the seven pipes has a motorized piston, which changes the pitch of the sound continuously, and a dedicated motorized air valve, which changes the volume of the sound.
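The relationship between piston position, pipe length and pitch can be illustrated with a simple open-pipe approximation (f ≈ c / 2L). This is an assumption made only for this sketch; the actual AAS mechanics, pipe dimensions and calibration are Cybulski’s own and are not described here, and all names and ranges below are placeholders.

```python
# Open-pipe approximation: fundamental frequency f = c / (2 * L).
SPEED_OF_SOUND = 343.0        # m/s at room temperature

# Assumed usable range of effective pipe lengths (placeholder values).
MIN_LEN, MAX_LEN = 0.2, 0.8   # metres

def pipe_length_for_frequency(freq_hz: float) -> float:
    """Effective acoustic length of an open pipe sounding the target pitch."""
    return SPEED_OF_SOUND / (2.0 * freq_hz)

def piston_position(freq_hz: float) -> float:
    """Normalized 0..1 piston position: pushing the piston in shortens the pipe."""
    length = min(max(pipe_length_for_frequency(freq_hz), MIN_LEN), MAX_LEN)
    return (MAX_LEN - length) / (MAX_LEN - MIN_LEN)

def valve_opening(volume: float) -> float:
    """Map a 0..1 volume value directly to a 0..1 air-valve opening."""
    return min(max(volume, 0.0), 1.0)

# Example: drive the seven pipes as a harmonic series on 220 Hz,
# with the level of each partial decreasing as 1/n.
for n in range(1, 8):
    f = 220.0 * n
    print(f"pipe {n}: piston {piston_position(f):.2f}, valve {valve_opening(1.0 / n):.2f}")
```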

To interact with the AAS, you simply have to speak (or sing) into a microphone. The machine “listens” and repeats the sounds in the form of organic, quasi-synthetic sounds. It doesn’t necessarily resynthesize comprehensible speech, but the correlation between input and output is obvious. The AAS is a versatile “performer” furnished with a rich sound palette and an idiosyncratic personality.
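A minimal sketch of this “listen and repeat” behaviour, assuming a straightforward FFT peak-picking analysis rather than whatever analysis the real AAS software performs: the seven strongest spectral peaks of an input block become target pitches and levels for the seven pipes.

```python
import numpy as np

SAMPLE_RATE = 44100
N_PIPES = 7

def analyse(block: np.ndarray):
    """Return up to seven (frequency, level) pairs from one audio block.

    A real analysis would avoid picking adjacent FFT bins of the same peak;
    this simplification is enough to show the idea.
    """
    window = np.hanning(len(block))
    spectrum = np.abs(np.fft.rfft(block * window))
    freqs = np.fft.rfftfreq(len(block), 1.0 / SAMPLE_RATE)
    peak_bins = np.argsort(spectrum)[-N_PIPES:][::-1]      # strongest bins first
    levels = spectrum[peak_bins] / (spectrum[peak_bins].max() + 1e-9)
    return list(zip(freqs[peak_bins], levels))

if __name__ == "__main__":
    # Stand-in for a microphone block: a voice-like mix of a few harmonics.
    t = np.arange(2048) / SAMPLE_RATE
    block = sum(a * np.sin(2 * np.pi * f * t)
                for f, a in [(180.0, 1.0), (360.0, 0.5), (540.0, 0.3)])
    for pipe, (freq, level) in enumerate(analyse(block)):
        print(f"pipe {pipe}: {freq:6.1f} Hz, level {level:.2f}")
```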

Instructions for use: Use the microphone to control the AAS with your own voice.
