*Documentation under construction
First prototype
Rehearsals
Board with history of the research
Click here to access.

Some of my experiments at ITP, Tisch, New York University
*Documentation under construction
Click here to access.
My final project was creating a web-based live performance in which the user of a website can watch the performer and move them by turning on LEDs located on their hands.
Find the code here. The base code is almost ready; it probably needs one last round of debugging. My plan is to keep working on the project next semester.
For the streaming part, I will use my cellphone’s internet connection, tethered to my laptop via USB, so that I can connect the GoPro on my chest to the computer over Wi-Fi.
Create a web-based live performance in which the user of a website can watch the performer and move them with simple commands. The initial idea is to light up LEDs located on the performer’s hands, suggesting how they will move through space and what they will touch.
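To make the pipeline concrete, below is a minimal sketch of how a click on the website could reach the LEDs, assuming a Node.js relay using Socket.IO and the serialport package, with a microcontroller driving the LEDs over serial. The “led” event name, the serial path, and the one-byte protocol are placeholders for illustration, not the project’s actual code.

// Hypothetical relay: a web client emits an "led" event, and the server
// forwards a one-byte command to the microcontroller that drives the LEDs.
const http = require('http');
const { Server } = require('socket.io');
const { SerialPort } = require('serialport');

const server = http.createServer();
const io = new Server(server, { cors: { origin: '*' } });

// Serial connection to the board wired to the performer's hand LEDs (path is a placeholder).
const port = new SerialPort({ path: '/dev/ttyUSB0', baudRate: 9600 });

io.on('connection', (socket) => {
  // The web page would emit, e.g., socket.emit('led', 'left') when a button is pressed.
  socket.on('led', (hand) => {
    port.write(hand === 'left' ? 'L' : 'R'); // the board reads one byte and lights the matching LED
  });
});

server.listen(3000);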
I’m interested in thinking about how repetitive interactions with new digital technology become a way for it to start inhabiting our subconscious. One of the embodied results of this process is the subtle, penetrating, and gradual takeover of the everyday choreographies of our hand gestures. Thus, I’m imagining an experience that creates an intimate microcosm of this idea by giving the user control of another human’s hand.
This semester, Name and I have the goal of creating a web-based remote experience to be shown as an installation activated from at least two different locations simultaneously. We have been working on top of a WebRTC example by Lisa Jamhoury to develop a p5.js sketch that allows remote users to communicate over the web through body movements and a microphone.
The code allows two users in different locations to interact on the same p5.js canvas using PoseNet via ml5. Through body gestures, each user controls a minimal bubble-body avatar with a digital heart positioned in the middle of its chest. In the first part of our piece, the user has to answer the question “How is your heart today?” with a color, which defines the color of their heart and avatar while inside the experience. Once the edge of one user’s body overlaps with the other’s, their avatars merge into one whose color is a mix of the two. We are now working on opening an audio tunnel between the users so they can speak to and hear each other.
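To illustrate the merging behavior, here is a minimal p5.js sketch under simplified assumptions: the two avatars are plain circles, one follows the mouse and the other stays fixed, whereas in the actual piece both positions come from PoseNet keypoints shared over the WebRTC connection.

// Simplified stand-in for the bubble-body avatars, not the project's actual code.
let myColor, partnerColor;

function setup() {
  createCanvas(640, 480);
  noStroke();
  myColor = color(255, 80, 120);      // answer to "How is your heart today?"
  partnerColor = color(80, 120, 255); // the remote user's chosen color
}

function draw() {
  background(20);
  // One avatar follows the mouse; the other sits at the center of the canvas.
  const me = { x: mouseX, y: mouseY, r: 80 };
  const partner = { x: width / 2, y: height / 2, r: 80 };

  const touching = dist(me.x, me.y, partner.x, partner.y) < me.r + partner.r;

  if (touching) {
    // Edges overlap: merge into a single avatar with the two colors mixed.
    fill(lerpColor(myColor, partnerColor, 0.5));
    ellipse((me.x + partner.x) / 2, (me.y + partner.y) / 2, me.r * 2.5);
  } else {
    fill(myColor);
    ellipse(me.x, me.y, me.r * 2);
    fill(partnerColor);
    ellipse(partner.x, partner.y, partner.r * 2);
  }
}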
Find the code here.
This semester I’ve been working with Name on the creation of a web-based remote experience for the class The Body Everywhere and Here. We have been working on top of the WebRTC-Simple-Peer-Example by Lisa Jamhoury to develop the idea of an installation that allows remote users to communicate.
The code allows two users in different locations to interact on the same p5.js canvas using PoseNet via ml5. Through body gestures, each user controls a minimal bubble-body avatar with a digital heart positioned in the middle of its chest. In the first part of our piece, the user has to answer the question “How is your heart today?” with a color, which defines the color of their heart and avatar while inside the experience. Once the edge of one user’s body overlaps with the other’s, their avatars merge into one whose color is a mix of the two.
Find the code of the project on my GitHub.
This prototype was made in collaboration with Esther. We intended to create a speculative chat room imagining a future where people could truly hold a conversation using only memes.
The main technical challenge was turning the original example by Shawn, in which you send text messages, into an image-based chat. In the final result, people can copy the “Image Address” of a web-based meme, paste it into the chat’s text box, and click send; the meme then appears above the text box, right below the last meme that was sent.
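A stripped-down version of the client-side change could look like the sketch below, assuming a Socket.IO connection; the “chat message” event name and the element ids are illustrative assumptions, not our exact code.

// Hypothetical client-side sketch of the image chat (element ids and event name are assumptions).
const socket = io(); // provided by the /socket.io/socket.io.js script tag

const form = document.getElementById('form');
const input = document.getElementById('input');
const messages = document.getElementById('messages');

form.addEventListener('submit', (e) => {
  e.preventDefault();
  if (input.value) {
    socket.emit('chat message', input.value); // the pasted image address
    input.value = '';
  }
});

// Instead of appending text, build an <img> element from the received URL.
socket.on('chat message', (url) => {
  const img = document.createElement('img');
  img.src = url;
  img.style.maxWidth = '300px';
  messages.appendChild(img);
  window.scrollTo(0, document.body.scrollHeight);
});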
In the back end, we are running a Node.js server hosted on a Virtual Private Server (VPS) provided by DigitalOcean. Node.js “is an open-source server environment designed to build scalable network applications.” It is “perfect for data-intensive real-time applications that run across distributed devices.” Thus, Node lets us support WebSockets on our own server and handles the protocols necessary to make socket event-style programming work.
A WebSocket is “a persistent connection between the client and server,” meaning that it “enables real-time, bi-directional communication between web clients and servers.” To work with WebSockets in JavaScript we are using Socket.IO, a library for real-time web applications. It has two parts: a client-side library that runs in the browser, and a server-side library for Node.js.
We are also running Express, which is a Node module for building HTTP (Hypertext Transfer Protocol) servers. For it to work the way we are coding it, all the public content of the website (the HTML, CSS, images, videos, etc. that make up the interface of the webpage) needs to be inside a folder called public in the same directory as the file server.js.
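Putting those pieces together, a stripped-down server.js could look roughly like this; it assumes express and socket.io are installed and reuses the hypothetical “chat message” event from the client sketch above, so it is a sketch of the setup rather than our exact code.

// server.js — minimal sketch of the setup described above.
const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
const server = http.createServer(app);
const io = new Server(server);

// Everything inside ./public (HTML, CSS, images, videos...) is served as-is.
app.use(express.static('public'));

io.on('connection', (socket) => {
  // Relay each incoming message (an image URL) to every connected client.
  socket.on('chat message', (url) => {
    io.emit('chat message', url);
  });
});

server.listen(process.env.PORT || 3000);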
As part of the workflow we are using Fetch to upload our files to the VPS. Fetch lets us connect to the server via SSH (Secure Shell) to transfer the files.
This week I created a prototype for a web-based interactive self-portrait. The page shows a video performance that I made in 2014, autoplaying muted. The video can be paused or played with the “Breath/Dance” button at the bottom left of the page. I also added some text describing things that I love and, as a title, the thing that was most present in my mind while I was designing the website: the uncontrolled fires in the Amazon Rainforest and the Pantanal in Brazil, and their connections to and disparities with the fires in California.
The code was based on this example by w3schools and can be found below.
<!DOCTYPE html>
<html>
<head>
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <style>
    * {
      box-sizing: border-box;
    }

    body {
      margin: 0;
      font-family: Arial;
      font-size: 17px;
    }

    /* Full-screen background video */
    #myVideo {
      position: fixed;
      right: 0;
      bottom: 0;
      min-width: 100%;
      min-height: 100%;
    }

    /* Overlay holding the title, text, and button */
    .content {
      position: fixed;
      bottom: 0;
      background: rgba(0, 0, 0, 0.5);
      color: #dc1616;
      width: 100%;
      padding: 20px;
    }

    #myBtn {
      width: 200px;
      font-size: 18px;
      padding: 10px;
      border: none;
      background: #000;
      color: rgb(255, 0, 0);
      cursor: pointer;
    }

    #myBtn:hover {
      background: #ddd;
      color: black;
    }
  </style>
</head>
<body>

  <video autoplay muted loop id="myVideo">
    <source src="loginvideo.mp4" type="video/mp4">
  </video>

  <div class="content">
    <h1>the forest is on fire</h1>
    <p>I love cold waterfalls. I love swimming. I love a queer dancefloor. I love living close to a park. I love biking while listening to music. I love the silence even though it doesn't exist. I love invisible machines. I love walking with my feet on the ground. I love the Fall. I love dancing. Sometimes I forget. Sometimes I make poetry. The fire won't burn the memories.</p>
    <button id="myBtn" onclick="myFunction()">Breath</button>
  </div>

  <script>
    var video = document.getElementById("myVideo");
    var btn = document.getElementById("myBtn");

    // Toggle the video; the button label names the next action: "Breath" pauses, "Dance" plays.
    function myFunction() {
      if (video.paused) {
        video.play();
        btn.innerHTML = "Breath";
      } else {
        video.pause();
        btn.innerHTML = "Dance";
      }
    }
  </script>

</body>
</html>
BeeMe is a web-based live performance made as a social experiment by the MIT Media Lab on Halloween night in 2018. The experiment let a real actor be controlled by an online audience in real time, deciding the future of a story shown at different locations on the MIT campus.
The event followed a dystopian story of “an evil AI by the name of Zookd, who had accidentally been released online. Internet users had to coordinate at scale and collectively help the actor (also a character in the story) to defeat Zookd.” It lasted about two hours, got huge media attention before and after the event, and drew an audience of more than one thousand people, which actually made the online platform crash during the event.
Participants could control the actor through a web browser in two ways. One was by writing and submitting custom commands, such as “make coffee,” “open the door,” “run away,” and so on. The second was by voting those commands up or down. Once a command was voted to the top, the actor would presumably do that very thing. The documentary below reveals that a person IRL was responsible for taking the top-voted actions or the most frequent custom commands, filtering them, and relaying them to the actor over an audio link.
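As a toy illustration of that voting mechanism (my reading of it, not BeeMe’s actual code), the tallying could be as simple as:

// Each submitted command accumulates up/down votes; the top-voted one
// is what would be relayed to the actor.
const votes = new Map();

function vote(command, delta) {
  votes.set(command, (votes.get(command) || 0) + delta);
}

function topCommand() {
  let best = null;
  for (const [command, score] of votes) {
    if (!best || score > best.score) best = { command, score };
  }
  return best;
}

vote('make coffee', +1);
vote('open the door', +1);
vote('make coffee', +1);
vote('run away', -1);

console.log(topCommand()); // { command: 'make coffee', score: 2 }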
I find it a very interesting performance and immersive social-game experiment, both for the uniqueness of the partially undetermined experience of the actors who performed it and for the audience’s experience. First, because I had never come across anything that lets users control real people over the internet while developing a story of such complexity; the only other examples I can think of right now are the live porn industry and web chats where you can choose or “give orders” to performers or strippers while tipping them. Second, because the performance does not just happen online and live; it takes into consideration the web and its specific potentialities as a medium. And third, because the story being told is also related to the medium, since it deals with AI and dystopian futures, which are very trendy right now.
It is also interesting to notice that in December of the same year as the BeeMe experiment, Netflix released Black Mirror: Bandersnatch, the platform’s first interactive film. It differs from BeeMe for several reasons, probably the most obvious being that BeeMe’s performance happened in real time IRL, while in Bandersnatch users choose between pre-recorded and edited footage.
It’s interesting to think about how the team had to design the narrative so that they had only partial control over it, yet still enough control to keep it engaging for the audience and bring it to an interesting ending. I also appreciate observing their setup on the performers’ bodies and how they streamed it. I guess one way of improving the experiment and making it more complex and interesting would be creating an AI algorithm able to understand the inputs from the audience, calculate the most frequent and accurate ones, and give commands to the actors, instead of that having to be mediated by a human.
For this week’s assignment, I improved last week’s avatar animation following Matt’s tutorials on Blend Space 1D, Animation Blueprints, and State Machines in Unreal Engine 4. After completing it, the character that I downloaded from Mixamo has specific ways of idling while standing, walking, running, jumping, and falling. It’s interesting to observe how each detail added to the animation composes their identity and sculpts their personality, or at least the way I imagine their personality based on how they look and move.
For me, it’s getting harder to understand how the State Machine works with all of its Blueprint details, so I decided to start an online course about Blueprints in UE to see if I can gain more autonomy for future projects in UE4 and other software that uses similar visual scripting systems.