Terence Broad (UK)

Blade Runner—Autoencoded is a film made by training an autoencoder (a type of generative neural network) to recreate frames from the 1982 film Blade Runner. The autoencoder learns to model every frame by trying to copy it through a very narrow information bottleneck, optimizing its reconstructions to be as similar as possible to the original images.
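To make the idea concrete, here is a minimal sketch of a frame autoencoder in PyTorch. It is an illustrative example only, not the project's actual architecture or training objective: the frame size (64×64), latent size, layer shapes, and the simple pixel-wise MSE loss are all assumptions chosen for brevity.

```python
# Illustrative sketch (not the artist's actual model): a convolutional
# autoencoder that squeezes each frame through a narrow bottleneck and is
# trained to reconstruct its own input.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    def __init__(self, latent_dim=200):
        super().__init__()
        # Encoder: 3x64x64 frame -> small latent vector (the bottleneck)
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),    # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
        # Decoder: latent vector -> reconstructed 3x64x64 frame
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = FrameAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(frames):
    """frames: batch of video frames, shape (N, 3, 64, 64), values in [0, 1]."""
    reconstruction = model(frames)
    loss = loss_fn(reconstruction, frames)  # target is the input frame itself
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After training on every frame of the film, running each frame back through the model and re-assembling the outputs in order yields the "autoencoded" version: scenes the bottleneck captures well come back sharply, while fast-moving passages return blurred and half-remembered.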

The resulting sequence is very dreamlike, drifting in and out of recognition: from static scenes that the model remembers well to fleeting sequences, usually with a lot of movement, that the model barely comprehends.

The film Blade Runner is adapted from Philip K. Dick's novel Do Androids Dream of Electric Sheep?. Set in a post-apocalyptic dystopian future, it follows Rick Deckard, a bounty hunter who makes a living hunting down and killing replicants: artificial humans so well engineered that they are physically indistinguishable from human beings.

By reinterpreting Blade Runner through the autoencoder's memory of the film, Blade Runner—Autoencoded seeks to emphasize the ambiguous boundary in the film between replicant and human, or, in the case of the reconstructed film, between our memory of the film and the neural network's. By examining this imperfect reconstruction, the gaze of a disembodied machine, it becomes easier to acknowledge the flaws in our own internal representation of the world, and to imagine the potential of other, substantially different systems with their own internal representations.

Credits

Carried out on the MSci Creative Computing course at the Department of Computing, Goldsmiths, University of London, under the supervision of Mick Grierson.