Digital Prometheus: artist Refik Anadol imbues artificial intelligence with creativity

Since graduating with an MFA from UCLA’s Design Media Arts program in 2014, artist Refik Anadol has become a global sensation, known for exhibitions that harness advanced artificial intelligence and machine learning algorithms to create mind-blowing multi-sensory experiences.

His work, however, is much more than a fascinating feast for the eyes and ears; it addresses the challenges and opportunities that ubiquitous computing has imposed on humanity.

On April 19, Anadol’s latest piece, “Moment of Reflection,” will debut on campus, where he is also a lecturer in UCLA’s Department of Design Media Arts. It was in this department that he learned from innovative teachers like Christian Moeller, Casey Reas, Jennifer Steinkamp and Victoria Vesna, all of whom use digital technology to reshape the practice of art and design.

“Using the data is a scientific approach to something very emotional and spiritual,” Anadol said. “I think it’s very important for artists and creators to find that ‘human’ in the non-human.”

Ahead of “Moment of Reflection,” Anadol shares stories of inspiration and challenges about four of his pieces.

“WDCH Dreams” – Projection Show at the Walt Disney Concert Hall (2018)


To make a building like the Walt Disney Concert Hall “dream,” Anadol worked with Google Arts & Culture and researcher Parag Mital. They applied artificial intelligence to nearly 45 terabytes of digital archives from the Los Angeles Philharmonic, including 587,763 image files, 1,880 video files, 1,483 metadata files and 17,773 audio files.

Anadol: It was 2012, when I had just moved into my home in Venice, and at 2 a.m. I rented a car and drove to downtown Los Angeles because I wanted to see the Walt Disney Concert Hall. Frank Gehry has been my hero since I studied architecture, and I’m still inspired by architecture as a canvas.

Ever since I watched “Blade Runner,” I’ve dreamed that Los Angeles would be the place to reflect my imagination of an optimistic future. But when I went downtown at 2 a.m., it was the opposite. There were no humans, no cars and no lights. The building was dark, and that night I remember thinking – in a very offbeat state – that I could take this building and one day it could learn and dream.

I emailed Frank Gehry and I emailed the LA Philharmonic. I didn’t know them, but I was trying to connect with them and ask whether this could happen one day. And of course, there was no answer.

A year later, I was at a Microsoft Research conference, giving a talk in front of Bill Gates to present my idea. I said: “I want this building to make people dream and hallucinate one day.” I received the award that day.

Two days later, I came back to my apartment in LA. I had received a response from Frank Gehry and a response from the LA Philharmonic. So in 2014, just as I graduated from UCLA, we did a projection show with Esa-Pekka Salonen (the former LA Phil music director) inside Disney Hall. But they found no reason to do an outdoor projection show until the Phil’s centenary year in 2018.

Thanks to Frank Gehry and the LA Philharmonic, we got 100 years of every piece of data they had recorded – every sound, image, video, text, poster and piece of sheet music – and we had one of Google’s most advanced algorithms to analyze everything.

Frank Gehry means compound curvature, right? There is nothing symmetrical. There is nothing flat. Everything is equally bright and shiny. Where you project the images, for example, is so mathematically important for the audience to get the best effect.

We had 14 kilometers of fiber cables. We designed everything from scratch. When I say “we”, I mean my team, which is 15 people speaking 15 languages, representing 10 countries. And half of our studio is UCLA Bruins.

This project transformed our studio. In the near future, architecture will have an AI connection, and if done purposefully, buildings will be able to remember and dream. This is no longer a science fiction idea. This project sparked people’s imaginations.

“A significant moment for humanity” – NFT project for St. Jude Children’s Research Hospital (2022)


This collection of NFTs, or non-fungible tokens, is based on data collected during the first all-civilian spaceflight, Inspiration4. The artwork uses data shared by the NASA-funded Translational Research Institute for Space Health, or TRISH. Other collaborators were Baylor College of Medicine and SpaceX. They recorded data about the astronauts’ bodies and the spaceflight, including heart rate, brain activity, ultrasound data, temperature, cabin pressure and more.

Anadol: I started learning about blockchain in 2014. The pandemic, which confined us to our homes, made all of humanity focus on digital art, even though digital art itself was not the most appreciated thing in the art world. There were always skeptical thoughts, like “I prefer sculpture” or “I prefer painting.” For some reason, everyone thought that if software made it, it wasn’t art.

With technology, there are pros and cons, like fire. We have AI and blockchain, so I challenged myself to ask, “What else can we do with this?” It put me in a deeply positive mindset: fundraising and drawing attention to the lack of funding for St. Jude Children’s Research Hospital, or raising awareness of the complexity of AI.

“Machine Hallucination: NYC” (2019)

“Machine Hallucination: NYC” took a vast trove of data – over 113 million photographic memories of New York City found in publicly available social media posts – and turned it into a 30-minute work of experimental cinema presented in 16K resolution at Artechouse in New York.

Anadol: I’m very lucky to have been one of the first artists to work with Google’s AI team in 2016. I got to work with some of the most cutting-edge scientists, hardware, software and AI on this video.

The questions, then, were: “Can an AI learn? Can it dream?”

Dreaming, learning and remembering are very cognitive processes for humans. But I think when we apply this idea to AI, we have limitations. We only used collective memories from public data, because data is very important in machines. Ethically, we do not use any personal data. We only use things that are public, like our collective memories of nature, space and urban culture.

Additionally, we trained the AI ethically. We only showed what the AI learned; we never showed what is real. This became very exciting ethical AI research.

It created the feeling of being in the mind of an AI dreaming of New York.

I think this project got so much attention because it was a new idea. It was a new feeling. It spoke about AI, and it spoke about the experience of being in a physical environment. It speculated on the future of architecture. It involved AI and questions like, “OK, if we have this data, who else gets it, right? We make art with it. But what else could be done with it?”

“Quantum Memories” at the National Gallery of Victoria in Melbourne, Australia (2020)


Drawing on approximately 200 million nature and landscape images, “Quantum Memories” uses publicly available Google AI quantum computing research data and algorithms to explore the possibility of a parallel world, as theorized in quantum physics. The artwork was different for each person who experienced it: it tracked the movements of each audience member in real time, simulating how their viewing positions became entangled with the visible results of the ever-changing artwork.

Anadol: The project started in 2019. Working with the quantum computing team at Google AI, we were able to create something that would represent physicist Hugh Everett’s “many worlds” theory, which says that every subatomic calculation can open a new dimension.

How the hell can humans perceive subatomically, right? We need machines to understand life and record who we are and our memories. We need telescopes. We need microscopes. So we also need another machine to see these alternate dimensions.

The question was, “Can we work with this quantum computer and its data to simulate alternate realities?” We worked with the Google team, who found this fascinating, to create “Quantum Memories” – a unique AI model that can look at this data and simulate AI dreams. We generated alternate dimensional projections of nature.

What I would like people to know about this is the amount of work behind it all, and that there was great teamwork, experimentation and failure. It was a series of iterations drawing from computer graphics, neuroscience, philosophy, music and nature. It spans many disciplines.

It’s a very UCLA mindset, isn’t it? We research and find new ways to make sense of the arts in the age of artificial intelligence.
