live web – final project proposal

Goal

Create a web-based live performance in which the user of a website can watch a performer and move them through simple commands. The initial idea is to light up LEDs placed on the performer’s hand, suggesting how they should move through space and what they should touch.

Background/past experiments

I’m interested in thinking about how repetitive interactions with new digital technologies act as a tool for them to start inhabiting our subconscious. One embodied result of this process is the subtle, penetrating, and gradual takeover of the everyday choreographies of our hand gestures. Thus, I’m imagining an experience that creates an intimate microcosmos around this idea by giving the user control of another human’s hand.

Sketch/first explorations
first sketches
hand movement exploration
Challenges/Questions
  • How can the performer move freely through a large space, given that they need an Arduino on their body connected to a computer that is connected to Wi-Fi? (See the sketch after this list for one possible relay setup.)
  • Is it necessary to create a narrative, or is simply lighting up the LEDs on the performer’s hand and moving them through space enough?
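
As a starting point for the first question, here is a minimal sketch of one possible relay: a web user’s command travels over a WebSocket to a small Node.js bridge on the computer near the performer, which forwards it to the Arduino over serial. Everything here is an assumption for illustration (the ‘command’ event name, the LED codes, the serial port path), using the socket.io and serialport (v10+) packages rather than anything already decided for the project.

// bridge.js -- hypothetical relay running on the computer connected to the Arduino
const { SerialPort } = require('serialport');   // serialport v10+ API
const io = require('socket.io')(8080);          // standalone Socket.IO server for web users

// Placeholder port path and baud rate for the Arduino connection
const arduino = new SerialPort({ path: '/dev/tty.usbmodem14101', baudRate: 9600 });

io.on('connection', (socket) => {
  // A web user sends e.g. 'left', 'right' or 'touch'; forward one byte per command
  socket.on('command', (cmd) => {
    const codes = { left: 'L', right: 'R', touch: 'T' };
    if (codes[cmd]) arduino.write(codes[cmd]);  // the Arduino lights the matching LED
  });
});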

body and bust 3D-scan

Full-body and bust 3D scans made with the itSeez3D app and a Structure Sensor on an iPad
Results after resizing the full-body object in Blender and cleaning it up in Wrap3
texture after the cleanup process in Wrap3
Some animation experiments after uploading the character to Mixamo
first explorations with 3D modeling/sculpting in Blender on top of my scanned bust

the collective sonic web heart prototype

This semester, Name and I have the goal of creating a web-based remote experience to be shown as an installation activated from at least two different locations simultaneously. We have been working on top of a WebRTC example by Lisa Jamhoury to develop a p5.js sketch that allows remote users to communicate over the web through body movements and a microphone.

The code allows two users in different locations to interact on the same p5.js canvas using PoseNet via ml5. Each user controls a minimal bubble-body avatar through body gestures, with a digital heart positioned in the middle of their chest. In the first part of our piece, the user answers the question “How is your heart today?” with a color that defines the color of their heart and avatar throughout the experience. Once the edge of one user’s body overlaps with the other’s, their avatars merge into one, with the two colors mixed. Now we are working on opening an audio tunnel between the users so they can speak to and listen to each other.
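
To illustrate how the heart is anchored to the chest, here is a minimal sketch (not our actual code) that reads PoseNet keypoints via ml5 and draws a placeholder heart at the midpoint between the shoulders:

let video;
let pose; // latest pose detected by PoseNet

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  // Load PoseNet on top of the webcam feed and listen for results
  const poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', (results) => {
    if (results.length > 0) pose = results[0].pose;
  });
}

function draw() {
  image(video, 0, 0, width, height);
  if (!pose) return;
  // Approximate the chest as the midpoint between the two shoulders
  const heartX = (pose.leftShoulder.x + pose.rightShoulder.x) / 2;
  const heartY = (pose.leftShoulder.y + pose.rightShoulder.y) / 2 + 30; // slightly below the shoulder line
  noStroke();
  fill(220, 40, 80); // placeholder heart color
  ellipse(heartX, heartY, 30, 30);
}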

Early experiments
first prototype as an installation
Video showing the first user-testing session with the video-projection and audio prototype
code / technical details / challenges

Find the code here.

  • In this version, we used two distinct WebRTC libraries: peer_client.js to send the position of the user’s body and their color selection, and SimpleSimplePeer.js, created by Shawn Van Every, to send the audio data once the users are merged.
  • The first data we send is the user’s color selection for their heart/avatar and the position of their body. We wrapped both into a single JSON object so the data is sent only once (see the sketch after this list).
  • To detect the position of the body and of the heart, we are using PoseNet on top of ml5.
  • Sending the audio has been our main challenge at this point. Considering the amount of time we had to work on this project this week, we decided to stick with p5.js and add the sound channel to Lisa’s code. At first we tried to use only SimplePeer.js to send the audio data through ‘sendData’, but it didn’t work, so we reached out to Shawn for advice. He introduced us to his new library, SimpleSimplePeer.js, which makes SimplePeer much easier to use within p5.js.
  • We were able to open an audio channel between users with this library when it was triggered by the mouseClicked function, but when we tried to trigger it from the if statement in our code, it didn’t work properly: an audio channel opens, but it’s very noisy and has no definition, as if the connection weren’t working properly.
  • From my point of view, having two libraries sending data to each other is not a great idea; it might be more efficient to send all the data through one library and the same server. It might also be better to move away from p5.js and build on Shawn’s examples from the Live Web class to develop the work further.
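
To illustrate the single JSON payload mentioned in the list above, here is a minimal sketch; sendToPeer(), onPeerData(), and drawPartnerAvatar() are hypothetical names standing in for the actual calls in Lisa’s peer_client.js example.

// Wrap the color choice and the PoseNet keypoints into one object per frame
function sendMyState(pose, heartColor) {
  const payload = {
    color: heartColor, // the user's answer to "How is your heart today?"
    keypoints: pose.keypoints.map((k) => ({
      part: k.part, // 'leftShoulder', 'rightWrist', ...
      x: k.position.x,
      y: k.position.y,
    })),
  };
  sendToPeer(JSON.stringify(payload)); // hypothetical send call
}

// On the receiving side, unpack the same object
function onPeerData(data) {
  const partner = JSON.parse(data);
  drawPartnerAvatar(partner.keypoints, partner.color); // hypothetical draw call
}
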
thoughts and feedback after the first user test
  • We need to design a soundscape for both parts of the experience: first, when the users are not connected, and second, when they connect and can hear each other.
  • It should be more challenging for people to merge or connect the avatars. Maybe using the heart position works better than the body edge, combined with requiring the hearts to stay in the same position for a set amount of time before the bodies merge.
  • When the bodies merge, we need more obvious and “explosive” feedback, such as changing the background color or adding a heartbeat effect.
  • The movement is still finicky and too shaky. We should find a way to smooth it (see the sketch after this list) and design a more defined avatar that is still somewhat abstract, which suits the concept of the project.
  • It might be a good idea to work with a Kinect instead of PoseNet for more precise results.
  • All the mouse interactions need to be replaced by body-tracking inputs from the webcam, so the user can, for example, choose their color with their hand position instead of mouse-clicking.
  • The camera angle and the window width and height should be the same for both remote installations. The camera angle works better when it is as close as possible to the user’s gaze.
  • The work might also be interesting at a larger scale, for example as a projection-mapping intervention on a building facade, with a station for pedestrians to activate it.
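
On the shakiness mentioned above, one simple option would be to low-pass filter the incoming keypoints with p5’s lerp(); this is a minimal sketch, assuming the smoothed positions are kept between frames:

let smoothed = [];     // smoothed keypoint positions, kept across frames
const SMOOTHING = 0.2; // 0 = frozen, 1 = raw (no smoothing)

function smoothKeypoints(keypoints) {
  keypoints.forEach((k, i) => {
    if (!smoothed[i]) {
      // First frame: start at the raw position
      smoothed[i] = { x: k.position.x, y: k.position.y };
    } else {
      // Move only a fraction of the way toward each new reading
      smoothed[i].x = lerp(smoothed[i].x, k.position.x, SMOOTHING);
      smoothed[i].y = lerp(smoothed[i].y, k.position.y, SMOOTHING);
    }
  });
  return smoothed;
}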

the live web collective sonic heart

This semester I’ve been working with Name on the creation of a web-based remote experience for the class The Body Everywhere and Here. We have been working on top of Lisa Jamhoury’s WebRTC-Simple-Peer-Example to develop the idea of an installation that allows remote users to communicate.

The code allows two users in different locations to interact on the same p5.js canvas using PoseNet via ml5. Each user controls a minimal bubble-body avatar through body gestures, with a digital heart positioned in the middle of their chest. In the first part of our piece, the user answers the question “How is your heart today?” with a color that defines the color of their heart and avatar throughout the experience. Once the edge of one user’s body overlaps with the other’s, their avatars merge into one, with the two colors mixed.

Goals for Live Web’s midterm project

installation sketch for each of the two users
  • Add a sound layer to the code so that when the avatars merge, the users can hear each other’s microphones and communicate over the peer connection;
  • Create a soundscape for the installation while the avatars are not merged;
  • Improve the interface/user experience for choosing the heart’s color and for restarting the experience;
  • Host the website on my ITP-Digital Ocean public server;
  • Experiment with video projection, user-test, and document it.

live web – the meme chat room

The 2058 meme chat room interface screenshot

Find the code for the project on my GitHub.

This prototype was made in collaboration with Esther. We intended to create a speculative chat room imagining a future where people would truly be able to carry on a conversation using only memes.

The main technical challenge in the code was turning Shawn’s original example, in which you send text messages, into an image-based chat. In the final result, people can copy the “Image Address” of a meme from the web, paste it into the chat’s text box, and click send; the meme then appears above the text box, below the last meme that was sent.
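
A minimal sketch of that client-side flow (not the exact code, and with hypothetical element IDs), assuming the Socket.IO client script is already loaded on the page:

// Assumes the page has <input id="memeUrl">, <button id="send"> and <div id="chat">
const socket = io();

// Send the pasted image address instead of a text message
document.getElementById('send').addEventListener('click', () => {
  const url = document.getElementById('memeUrl').value;
  if (url) socket.emit('meme', url);
});

// When any user sends a meme, append it as an <img> under the last one
socket.on('meme', (url) => {
  const img = document.createElement('img');
  img.src = url;
  img.style.maxWidth = '300px';
  document.getElementById('chat').appendChild(img);
});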

user-interaction example recording
Workflow and technologies used

On the back end, we are running a Node.js server hosted on a Virtual Private Server (VPS) provided by Digital Ocean. Node.js “is an open-source server environment designed to build scalable network applications” and is “perfect for data-intensive real-time applications that run across distributed devices.” Node lets us support WebSockets on our own server and handles the protocols necessary to make socket, event-style programming work.

A WebSocket is “a persistent connection between the client and server,” meaning that it “enables real-time, bi-directional communication between web clients and servers.” To work with WebSockets in JavaScript, we are using Socket.IO, a library for real-time web applications. It has two parts: a client-side library that runs in the browser and a server-side library for Node.js.
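
The server-side half of that pattern, as a sketch: each event a client emits is re-broadcast to every connected client (the server variable is the HTTP server created in the Express sketch further below).

const io = require('socket.io')(server); // attach Socket.IO to the HTTP server

io.on('connection', (socket) => {
  console.log('a user connected:', socket.id);

  // Re-broadcast every incoming meme URL to all clients, sender included
  socket.on('meme', (url) => {
    io.emit('meme', url);
  });

  socket.on('disconnect', () => console.log('a user left:', socket.id));
});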

We are also running Express, a Node module for building HTTP (Hypertext Transfer Protocol) servers. For it to work the way we are coding it, all the public content of the website (the HTML, CSS, images, videos, etc. that make up the interface of the webpage) needs to be inside a folder called public, in the same directory as the server.js file.
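
Put together, the setup described above looks roughly like this sketch of server.js (the port number is a placeholder):

const express = require('express');
const http = require('http');

const app = express();

// Serve everything inside ./public (HTML, CSS, images, client JS) as static files
app.use(express.static('public'));

// Build the HTTP server from the Express app; Socket.IO attaches to it as shown above
const server = http.createServer(app);
const io = require('socket.io')(server);

server.listen(8080, () => {
  console.log('listening on port 8080');
});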

As part of the workflow, we are using Fetch to upload our files to the VPS. Fetch lets us connect to the server over SSH (Secure Shell) to transfer the files.

live web – self-portrait

result of the web self-portrait

This week I created a prototype for a web-based interactive self-portrait. The web page shows a video performance that I made in 2014, autoplaying and muted. The video can be paused or played through the “Breath/Dance” button at the bottom left of the page. I also added some text describing things that I love and, as a title, the thing that was most present in my mind while I was designing the website: the uncontrolled fires in the Amazon Rainforest and the Pantanal in Brazil, and their connections to and differences from the fires in California.

The code was based on this example by w3schools and can be found below.

<!DOCTYPE html>

<html>
<head>
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <style>
    * {
      box-sizing: border-box;
    }

    body {
      margin: 0;
      font-family: Arial;
      font-size: 17px;
    }

    #myVideo {
      position: fixed;
      right: 0;
      bottom: 0;
      min-width: 100%;
      min-height: 100%;
    }

    .content {
      position: fixed;
      bottom: 0;
      background: rgba(0, 0, 0, 0.5);
      color: #dc1616;
      width: 100%;
      padding: 20px;
    }

    #myBtn {
      width: 200px;
      font-size: 18px;
      padding: 10px;
      border: none;
      background: #000;
      color: rgb(255, 0, 0);
      cursor: pointer;
    }

    #myBtn:hover {
      background: #ddd;
      color: black;
    }
  </style>
</head>

<body>

  <video autoplay muted loop id="myVideo">
    <source src="loginvideo.mp4" type="video/mp4">
  </video>

  <div class="content">
    <h1>the forest is on fire</h1>
    <p>I love cold waterfalls. I love swimming. I love a queer dancefloor. I love living close to a park. I love biking
      while listening to music. I love the silence even though it doesn't exist. I love invisible machines. I love walking with my feet on the ground. I love the Fall. I love dancing. Sometimes I forget. Sometimes I make poetry. The fire won't burn the memories. </p>
    <button id="myBtn" onclick="myFunction()">Breath</button>
  </div>

  <script>
    var video = document.getElementById("myVideo");
    var btn = document.getElementById("myBtn");

    // Toggle the video between playing and paused; the button label names the next action
    function myFunction() {
      if (video.paused) {
        video.play();
        btn.innerHTML = "Breath";
      } else {
        video.pause();
        btn.innerHTML = "Dance";
      }
    }

  </script>
</body>
</html>

live web – The BeeMe Experiment

Welcome page of BeeMe’s website

BeeMe is a web-based live performance created as a social experiment by the MIT Media Lab on Halloween night in 2018. The experiment let a real actor be controlled by an online audience in real time, deciding the future of a story staged at different locations on the MIT campus.

Audience view during the experiment

The event followed a dystopian story of “an evil AI by the name of Zookd, who had accidentally been released online. Internet users had to coordinate at scale and collectively help the actor (also a character in the story) to defeat Zookd.” It lasted about two hours, received huge media attention before and after the event, and drew an audience of more than one thousand people, which actually made the online platform crash during the event.

Participants could control the actor through a web browser in two ways. One was by writing and submitting custom commands, such as “make coffee,” “open the door,” or “run away.” The other was by voting those commands up or down. Once a command was voted to the top, the actor would presumably do that very thing. The documentary below reveals that a person on site was responsible for taking the most-voted actions or the most frequent custom commands, filtering them, and relaying them to the actor over an audio link.

Documentary telling the story behind the BeeMe experiment

I find it a very interesting performance and immersive social-game experiment, both for the uniqueness of the partially undetermined experience of the actors who performed it and for the audience’s experience. First, because I had never come across something that allows users to control real people over the internet while developing a story of such complexity; the only other examples I can think of at the moment are the live porn industry and web chats where you can choose or “give orders” to performers or strippers while tipping them. Second, because the performance doesn’t just happen online and live; it takes into consideration the web and its specific potentialities as a medium. And third, because the story being told is itself related to the medium, since it deals with AI and dystopian futures, which are very trendy right now.

BeeMe’s platform interface

It is also interesting to note that in December of the same year as the BeeMe experiment, Netflix released Black Mirror: Bandersnatch, the platform’s first interactive film. It differs from BeeMe for several reasons, the most obvious probably being that BeeMe’s performance happened in real time, in real life, while in Bandersnatch users choose between pre-recorded and edited footage.

It’s interesting to think about how the team had to design the narrative so that they had only partial control over it, yet still enough control to keep it engaging for the audience and bring it to an interesting ending. I also appreciated observing the setup on the performers’ bodies and how they streamed it. I guess one way of making the experiment more complex and interesting would be an AI algorithm capable of understanding the inputs from the audience, calculating the most frequent and accurate ones, and giving commands to the actors, instead of that being mediated by a human.

animation blueprints for performative avatars

For this week’s assignment, I improved last week’s avatar animation following Matt’s tutorials on Blend Space 1D, Animation Blueprints, and State Machines in Unreal Engine 4. After completing it, the character I downloaded from Mixamo has specific ways of idling while standing, walking, running, jumping, and falling. It’s interesting to observe how each detail added to the animation composes the character’s identity and sculpts their personality, or at least the personality I imagine for them based on how they look and move.

It’s getting harder for me to understand how the State Machine works with all its Blueprint details, so I decided to start taking an online course on Blueprints in Unreal Engine to see if I can gain more autonomy for future projects in UE4 and in other software that uses similar visual scripting systems.

the collective heart permeable body

Find the code here.

This week, Name and I worked on top of Lisa’s example for the class The Body Everywhere and Here to create a WebRTC sketch in p5.js with PoseNet and ml5. The sketch is a continuation of last week’s assignment.

The code allows two users in different locations to interact on the same canvas, each controlling a minimal bubble-body avatar with a digital heart positioned in the middle of their chest. In the first part of our piece, the user answers the question “How is your heart today?” with a color that defines the color of their heart and avatar throughout the experience. Once the edge of one user’s body overlaps with the other’s, their avatars merge into one, with the two colors mixed.

people interaction test video

Performative avatars – skeleton animations

This week I learned from Matt’s tutorials how to download characters and animations from Mixamo and use them in Unreal Engine. So far I haven’t had any considerable technical challenges completing the assignment; however, when I tried to figure out on my own how to add an animation to the character’s walking style, I wasn’t successful. Honestly, I’m feeling much more comfortable with the UE4 interface than I did in my past experiences with Unity, for example. I’m looking forward to learning more about the Blueprint logic, since I haven’t worked with it before and it seems quite challenging to understand.

Result after doing the tutorials