Synthetic Self

Synthetic Self creates a lifelike video call interface that responds to audience questions and statements delivered as typed text or speech. Here are three examples:

Installation: A cozy chair is set up in front of a screen on which a person waits idly. When you sit down they perk up, explaining they are a synthetic copy of the artist M Eifler. There is something not quite right about the face and voice. Maybe you ask them why the artist wanted to copy themselves. In response they tell you a story about M's struggles with intermittent non-verbal episodes or flip through their database of drawings and show you a page from M’s journal.

Performance: On stage you see a person with a microphone and another projected on a large screen. Together they are giving a presentation on the lived experience of chronic illness. They joke and tell stories together and the virtual performer shows images related to the stories. Near the end of the performance the virtual performer comes out as an AI copy.

Virtual Performance: You join a Zoom performance. When you arrive you see two video performers who look eerily similar. One is labeled M Eifler and the other Synthetic Self. Both performers speak to and interact with the audience, answering questions and talking to each other. All the little ways the living performer and the synthetic one differ are quickly apparent, but later, long after the performance is over, they begin to bleed together in your memory.

Prosthetic Memory

Prosthetic Memory is an ongoing experiment in self-augmentation combining handmade journals, daily videos, and a custom AI. With the journal lying flat on a desk, a camera captures its pages. The AI compares this feed to its model, determines which page is showing, and projects an associated video memory onto the desk nearby (a rough sketch of this loop follows the questions below). Created to replace the artist's long-term memory, which was damaged by a childhood brain injury, Prosthetic Memory explores questions like:

- When memories are external, mediated, and public, what is the difference between how they are experienced by their creator and by an audience?

- Do our assumptions about, fears of, and uses for AI change when data and machine learning models are created by individuals and families at a personal, rather than corporate, scale?

- Given our increasing exposure to algorithmic interventions, how do our identities and perceptions shift when we see ourselves and others through that lens?
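The sketch below illustrates the kind of camera-to-projection loop described above: a live frame of the open journal is matched against reference photos of each page, and the best match names the video that would be projected beside it. The ORB feature-matching approach, page filenames, and video paths are assumptions for the sake of the sketch, not the artist's actual model.

```python
# Minimal sketch of the page-matching loop, assuming OpenCV ORB features
# compared against pre-captured reference photos of each journal page.
# File names and the page-to-video mapping are hypothetical.
import cv2

PAGES = {                      # hypothetical page -> video mapping
    "page_01.jpg": "memory_01.mp4",
    "page_02.jpg": "memory_02.mp4",
}

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Pre-compute descriptors for each reference page image.
refs = {}
for page, video in PAGES.items():
    img = cv2.imread(page, cv2.IMREAD_GRAYSCALE)
    _, desc = orb.detectAndCompute(img, None)
    refs[page] = (desc, video)

def best_matching_page(frame_bgr):
    """Return (page, video) whose reference image best matches the camera frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, desc = orb.detectAndCompute(gray, None)
    if desc is None:
        return None
    scores = {
        page: len(matcher.match(desc, ref_desc))
        for page, (ref_desc, _) in refs.items()
        if ref_desc is not None
    }
    if not scores:
        return None
    page = max(scores, key=scores.get)
    return page, refs[page][1]

cap = cv2.VideoCapture(0)          # camera aimed at the open journal
ok, frame = cap.read()
if ok and (match := best_matching_page(frame)):
    page, video = match
    print(f"Detected {page}: would project {video} beside the journal")
cap.release()
```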

The first iteration of Prosthetic Memory, captured in the documentation provided, created a bridge between the physical and virtual components of the memory. In upcoming iterations an age-malleable recreation (deepfake) of the artist will appear on a screen above the notebook, reading from its pages and, using NLP and sentiment analysis, pointing the user to other pages and videos with similar events or feelings.
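As a minimal sketch of that planned "similar events or feelings" lookup, the snippet below embeds journal-page text with a sentence-embedding model and suggests the closest pages. The library choice is a stand-in for whatever NLP and sentiment pipeline the project actually uses, and the page texts are invented placeholders.

```python
# Hedged sketch: pages (assumed already transcribed to text) are embedded,
# and the pages closest to the currently open one are suggested.
from sentence_transformers import SentenceTransformer, util

pages = {   # invented placeholder texts
    "page_01": "First day back at the studio after the long hospital stay.",
    "page_02": "Could not speak today; drew the garden instead.",
    "page_03": "Walked to the ocean and felt calm for the first time in weeks.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
names = list(pages)
embeddings = model.encode([pages[n] for n in names], convert_to_tensor=True)

def similar_pages(current: str, top_k: int = 2):
    """Suggest other pages whose text is closest in embedding space."""
    query = embeddings[names.index(current)]
    scores = util.cos_sim(query, embeddings)[0]
    ranked = sorted(zip(names, scores.tolist()), key=lambda x: -x[1])
    return [(n, round(s, 2)) for n, s in ranked if n != current][:top_k]

print(similar_pages("page_02"))
```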

Version 1.0 was first exhibited at the Recoding CripTech show at SOMArts in 2020 and awarded a STARTS Prize 2020 Honorary Mention.

While AI was a hot topic in the art world and beyond as we slid into a new decade, it was often conceptualized as distant, strange, and uncontrollable. Prosthetic Memory makes it extremely intimate. It is, to quote mathemusician Vi Hart, "a chunk of myself I've externalized and personified as 'other' in order to experiment with my perspective and my desires."

AI Acne

MATERIAL

AI Acne is a set of twenty-two 8.5 x 11 inch watercolor paintings on hydrophobic graph paper. The paintings have transparent overlays printed with green ink. Each green circle in the overlays marks a location where a widely used face detection algorithm reported 12% confidence that a whole face is present. Although humans easily recognize faces in these paintings, 12% was the highest confidence the algorithm reported.
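The specific detector behind those confidence scores isn't named here, so the sketch below uses MediaPipe's face detector as a stand-in: it accepts low-confidence guesses and draws a green circle at each candidate face, roughly mirroring the overlays described above. The painting filename is hypothetical.

```python
# Sketch of a low-confidence detection overlay; MediaPipe stands in for the
# unnamed detector, and the input file is a hypothetical scan of a painting.
import cv2
import mediapipe as mp

painting = cv2.imread("ai_acne_painting_01.jpg")     # scanned watercolor
h, w = painting.shape[:2]

with mp.solutions.face_detection.FaceDetection(
    model_selection=1,
    min_detection_confidence=0.1,   # accept weak guesses, like the ~12% reported
) as detector:
    results = detector.process(cv2.cvtColor(painting, cv2.COLOR_BGR2RGB))

for det in results.detections or []:
    score = det.score[0]
    box = det.location_data.relative_bounding_box
    # Centre of the detected region, converted from relative to pixel coords.
    cx = int((box.xmin + box.width / 2) * w)
    cy = int((box.ymin + box.height / 2) * h)
    radius = int(max(box.width * w, box.height * h) / 2)
    cv2.circle(painting, (cx, cy), radius, (0, 255, 0), 2)   # green circle
    print(f"face candidate at ({cx}, {cy}) with confidence {score:.0%}")

cv2.imwrite("ai_acne_overlay_01.png", painting)
```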

CONCEPTUAL

More often than not, when I'm interacting with friends, co-workers, family, content creators, or politicians, it's through screens. This is a simple reality of the late 2010s, of my geographical and social position, of my race and gender and class and age. What am I seeing when a face is reduced to a mathematical representation on a screen? What is the facial detection algorithm seeing? How is representation different for humans and computers?

EXHIBITIONS

The piece was first exhibited at SYSTEM FAILURE at Minnesota Street Projects in San Francisco in 2019, then again at Beyond Embodiment at the Brand Library & Art Center in Glendale, CA in 2020.

Masking Machine

Masking Machine is a wearable AR apparatus and performance which brings audience, artist, and algorithm face to face. It was inspired by a photography series of the same name.

Masking Machine, documented here, was performed at the YBCA Bay Area Now 8 opening.

MASKING MACHINE (PHOTOGRAPHY SERIES)

The Masking Machine is a still color digital photography series using machine learning to create masks of the artist's face. Each image begins life as a simple selfie, the most quotidian, maligned, and rampant photography of our time. Next the images are fed into a landmark detection algorithm, which finds the positions of the artist's facial features. Using this data the artist applies a layer of digital makeup, contact lenses, or glasses. Once saved, the images are fed back into the landmark detection algorithm and the process begins again, over and over, building up eerie, distorted masks of the artist's face.
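A minimal sketch of that recursive detect-decorate-refeed loop is below, assuming MediaPipe FaceMesh as the landmark detector and flat colored dots in place of the hand-applied digital makeup; the filenames are made up and the actual detector and makeup layers are the artist's own.

```python
# Sketch of the recursive loop: detect landmarks, paint on them, save,
# and feed the result back into the detector until it loses the face.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True)

def decorate(image):
    """Detect landmarks and paint a dot on each one ('digital makeup')."""
    h, w = image.shape[:2]
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return image, False                      # detector lost the face
    for lm in results.multi_face_landmarks[0].landmark:
        cv2.circle(image, (int(lm.x * w), int(lm.y * h)), 2, (180, 105, 255), -1)
    return image, True

image = cv2.imread("selfie.jpg")                 # the starting selfie
for i in range(10):                              # each pass thickens the mask
    image, found = decorate(image)
    cv2.imwrite(f"mask_pass_{i:02d}.png", image)
    if not found:                                # stop once the algorithm fails
        break
face_mesh.close()
```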

Whether working in a darkroom, coaxing chemical systems like wet plate collodion or cyanotype, or working with a black box, bartering with algorithms like landmark detection or style transfer, what is photography if not scientifically processed image-making through a machine? And when done recursively, feeding the same image through that process again and again, the machine itself also becomes visible in the images. It is like repeatedly taking a pinhole photo of a pinhole photo until nothing remains but the artifacts of the photographic process itself made visible. Our time is caked with filtered images, enlarged eyes, and whitened teeth, all enabled by nearly invisible photographic algorithms. If we are to live with them we also need to recognize and understand them, and Masking Machine is here to make them seen.

Update: Masking Machine was printed and debuted at the 2020 show Recoding CripTech at SOMArts.

Performing with Machines

In 2016 BlinkPop began experimenting with collaging machine learning outputs to create performance masks. They used style transfer to mix images of their own face with the data-converted styles of various artists, and played both with using the masks to augment performances of spoken poetry and with intentionally interfering with the computer's ability to apply the masks.
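A hedged sketch of that style-transfer step follows, using TensorFlow Hub's arbitrary image stylization model as a stand-in for whatever network was used in 2016; the model URL is the published Hub address for that stand-in, and the filenames are assumptions.

```python
# Sketch: blend a photo of the performer's face with another artist's visual
# style using a pre-trained stylization model. Filenames are hypothetical.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

def load(path, max_dim=512):
    """Read an image and scale it to a float32 batch in [0, 1]."""
    img = tf.image.decode_image(tf.io.read_file(path), channels=3,
                                expand_animations=False)
    img = tf.image.convert_image_dtype(img, tf.float32)
    img = tf.image.resize(img, (max_dim, max_dim), preserve_aspect_ratio=True)
    return img[tf.newaxis, ...]

model = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")
face = load("performer_face.jpg")
style = load("reference_painting.jpg")

stylized = model(tf.constant(face), tf.constant(style))[0]   # the mask image
tf.keras.utils.save_img("mask.png", np.squeeze(stylized.numpy()))
```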

Visually Similar

This piece was the artist's first piece utilizing AI. Each collage was constructed from images Google's Visually Similar algorithm considered similar to the previous collage. The first collage was seeded with an image of Swell/Say (2007).
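That chaining process might be approximated programmatically as below, using Google Cloud Vision's web detection (which returns a list of visually similar images) as a stand-in for the Google Images feature the collages presumably drew on; the seed filename and number of rounds are assumptions.

```python
# Rough stand-in for the chain: each round's similar-image results seed the
# next query. Paths and round count are hypothetical.
import urllib.request
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def similar_image_urls(image_bytes, limit=8):
    """Return URLs Vision considers visually similar to the given image."""
    response = client.web_detection(image=vision.Image(content=image_bytes))
    return [img.url for img in response.web_detection.visually_similar_images][:limit]

# Seed with a photo of Swell/Say (2007), then chain the results.
with open("swell_say_2007.jpg", "rb") as f:
    current = f.read()

for round_number in range(3):
    urls = similar_image_urls(current)
    print(f"collage {round_number + 1} sources: {urls}")
    if not urls:
        break
    current = urllib.request.urlopen(urls[0]).read()   # seed the next round
```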