by Lauren McCarthy

I’m waiting, eyes glued to the dark room on my screen, headphones in my ears, listening for the sound of him. I hear a knock. I scramble to toggle the lock switch and type out a message to him, knowing every second I take adds to the delay between us. I hear the buzz of the door unlocking. He walks in and looks around, slightly confused, as the lights illuminate around him. “Hi Michael*, nice to meet you, I’m LAUREN,” my robotic voice greets him. “Uhhh hi,” he answers, with an awkwardness shared by most people when they first get me in their homes.

LAUREN devices used during performance. Photo by Lauren McCarthy

For a while this past year, every box delivered from Amazon came wrapped in blue Amazon Echo tape. Even if we had not encountered an Alexa in the wild, we were made to feel that everyone was using them. Well-designed websites offer a collection of smart devices that will ship overnight in perfectly fitted packaging—every detail of these devices feels right. But we are being sold devices that outfit our homes with surveillance cameras, sensors, and automation, offering us convenience at the cost of privacy and control over our lives and homes.

“Hey LAUREN, do you remember if I took my pill?” she asks. I quickly start scanning through the footage, jumping to different moments when she might have taken it. Relying on this mix of memory and video data feels dubious and I suddenly realize there could be consequences to getting this answer wrong. I tell her I don’t think so, but I’m not 100 percent sure. I’m very aware how much more confident I’d feel if I were an algorithm.

Since January 2017, I have been attempting to become a human version of Amazon Alexa, a voice-activated AI system for people in their own homes. The project is called LAUREN. Anyone can visit get-lauren.com to sign up.

The process begins with the installation of a series of custom networked devices that include cameras, microphones, switches, door locks, faucets, and other electronics. For three days, I remotely watch over the person 24/7 and control all aspects of their home. I attempt to be better than an AI, because I can understand them as a person and anticipate their needs.

Sometimes this means following them from room to room, turning on lights ahead of their steps. Other times their needs are more obscure, and I order special deliveries to their home or contact their friends through Facebook to arrange a visit or a text message.

They are usually expecting more technical tricks at first, a real sci-fi experience. But much like smart homes of today, I mostly do what they could do themselves—things like flipping switches, playing music, and looking up answers.

Screenshot of the LAUREN website by Lauren McCarthy

The LAUREN experience is ultimately about presence. Despite my lack of tricks, they are very aware that I am there, and aware of their own presence too. I’m trying to highlight this trade we are making for these surveillance smart devices. We give access to all our data and live camera feeds—for what? I hope that by being a real person on the end of that, I am offering something more than an Alexa AI at least.

Late Saturday night, I’m adjusting the lighting, queuing up some music, “setting the mood” as Jen has requested. Normally when guests visit, I have some basic interaction with them when they arrive, but this one is arriving and already they are wrapped up in a moment together. It doesn’t seem right to intrude. I feel I shouldn’t be there, but it is my job, so I watch and don’t watch at the same time. Finally, it is 2:00 a.m. and I can’t stay awake any longer. “Good night, you two,” I say, to let them know I’m turning off now. Her guest looks up, shocked, suddenly notices all the cameras for the first time, realizes the music has been tracking their rhythm a little too perfectly. “It’s LAUREN, remember, I told you about her?” Jen reminds him. With one more slow, strange look around, he shrugs and picks up where they left off as I shut my computer.

We are meant to think smart home devices are about utility, but the space they invade is personal. The home is the place where we are first watched over, first socialized, first cared for. How does it feel to have this role assumed by artificial intelligence? Our home is the first site of cultural education; it’s where we learn to be a person. By allowing these devices in, we outsource the formation of our identity to a virtual assistant whose values are programmed by a small, homogenous group of developers. They may not share the values or cultural reference points that we want to embed in our family’s home. Women, long seen as the keepers of the home domain, as complicated as that notion is, are now further subjugated. Their control is undermined by the smart home “assisting” and shaping each activity.

I’m watching him watch TV and worrying I’m not fulfilling his desires, but also hesitant to act in case I annoy him. It’s surprising to me that I’m just as shy as a smart home as I am as a person. It’s such a safe situation. In some cases they never even meet me, and a crew installs and de-installs the system while I sit miles away. I could literally say and do anything. I’m performing a character—I’m playing a smart home—yet I’m still unable to escape being me.

Do you know any real person who has anything like Alexa or Siri’s personality? AI assistants lack the flaws and inconsistencies of human personalities. There is much further they could go if we allowed them to engage in a more human way. Right now, virtual assistants are designed to accommodate very common and universal needs. Imagine if they instead attended to the very particular, obscure desires and needs of individuals. They could probe beyond what we expect of these technologies, into the types of help we might feel able and comfortable to ask only of or through technology.

While designing the project, I spent a lot of time thinking about the question, “if I were an AI, what would I be like?” I tried to create an entity that felt human, but could also function like a system. Rather than speaking to people directly, I created a synthesized voice based on my own, so that I could more easily fade into the home and not feel like a person they felt obliged to constantly engage.

It’s 1:00 p.m. in Amsterdam now, while it’s 4:00 a.m. in LA where I am. I woke up with them a few hours ago and am struggling to stay awake as I help them cook lunch. The time, language, and culture differences create a clear sense of distance, yet our interactions are real-time. It makes me aware how we weren’t built to have relationships with interfaces between us.

In LAUREN, I am wrestling for control with artificial intelligence. The participants are also negotiating boundaries and poking at the system. The point of this project is not to impose a point of view, but to give viewers a space to form their own. Immersed in the system in the comfort of their homes, people are able to engage with the tensions. Some moments are awkward and confusing, others are hopeful and intimate. Together, we have a conversation about letting AI into our data, our decision making, and our private spaces.


Names have been changed to protect privacy.
This essay was originally written for and published in Immerse.


Lauren McCarthy is an artist based in Los Angeles whose work examines how issues of surveillance, automation, and network culture affect our social relationships. She is the creator of p5.js, an open source platform for creating artwork online. Lauren has exhibited at Ars Electronica, Conflux Festival, SIGGRAPH, LACMA, Onassis Cultural Center, IDFA DocLab, and the Japan Media Arts Festival, and worked on installations for the London Eye and the US Holocaust Memorial Museum. She holds an MFA from UCLA and a BS in Computer Science and a BS in Art and Design from MIT. She is an Assistant Professor at UCLA Design Media Arts. She is a Sundance Institute Fellow and was previously a resident at CMU STUDIO for Creative Inquiry, Eyebeam, Autodesk, NYU ITP, and Ars Electronica / QUT TRANSMIT3.