the collective sonic web heart prototype

This semester, Name and I set out to create a web-based remote experience, shown as an installation activated from at least two different locations simultaneously. We have been building on top of a WebRTC example by Lisa Jamhoury to develop a p5.js sketch that lets remote users communicate over the web through body movements and their microphones.

The code allows two users in different locations to interact on the same p5.js canvas using PoseNet via ml5.js. Each user controls a minimal bubble-body avatar, with a digital heart positioned in the middle of its chest, through body gestures. In the first part of the piece, the user answers the question “How is your heart today?” with a color, which defines the color of their heart and avatar inside the experience. Once the edge of one user’s body overlaps the edge of the other’s, their avatars merge into one, with the two colors mixed. We are now working to open an audio tunnel between users so they can speak to and hear each other.
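The merged-avatar color could be produced by blending the two users’ heart colors. A minimal sketch of that idea, using a hypothetical helper on plain `[r, g, b]` arrays (inside the actual p5.js sketch, `lerpColor()` would do the same job):

```javascript
// Hypothetical helper: mix two users' heart colors when their avatars merge.
// Colors are [r, g, b] arrays; amount = 0.5 gives an even blend.
function mixHeartColors(colorA, colorB, amount = 0.5) {
  return colorA.map((c, i) => Math.round(c + (colorB[i] - c) * amount));
}

// Example: a red heart merging with a blue one yields purple.
mixHeartColors([255, 0, 0], [0, 0, 255]); // → [128, 0, 128]
```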

Early experiments
first prototype as an installation
Video showing the first user-testing session with the video-projection and audio prototype
code / technical details / challenges

Find the code here.

  • In this version, we used two distinct WebRTC libraries: peer_client.js to send the position of the user’s body and their color selection, and SimpleSimplePeer.js, created by Shawn Van Every, to send the audio once the users are merged.
  • The first data we send is the user’s color selection for their heart/avatar along with the position of their body. We wrapped everything into a single JSON message so the data is sent only once.
  • To detect the position of the body and of the heart, we are using PoseNet on top of ml5.js.
  • Sending the audio has been our main challenge so far. Given the time we had to work on the project this week, we decided to stick with p5.js and add the sound channel to Lisa’s code. At first we tried to use only SimplePeer.js to send the audio data through ‘sendData’, but it didn’t work, so we reached out to Shawn for advice. He introduced us to his newest library, SimpleSimplePeer.js, which makes SimplePeer much easier to use within p5.js.
  • With this library we were able to open an audio channel between users, triggered by the mouseClicked function, but when we moved the call into the if statement in our code, it stopped working properly. An audio channel opens, but it is very noisy and has no definition, as if the connection were not fully established.
  • In my view, sending data through two different libraries is not a great idea; it would be more efficient to send everything with one library over the same server connection. It might also be better to move away from p5.js and build on Shawn’s examples from the Live Web class to develop the work further.
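The JSON bundling described above might look like the sketch below. The field names and helper functions are hypothetical; in the actual code, the serialized string would be handed to peer_client.js’s send call:

```javascript
// Hypothetical packing of the heart color + PoseNet keypoints into one JSON
// message, so color and body position travel together in a single send.
function packBodyData(heartColor, keypoints) {
  return JSON.stringify({
    color: heartColor, // user's heart/avatar color as [r, g, b]
    pose: keypoints.map(k => ({
      part: k.part,        // PoseNet keypoint name, e.g. "nose"
      x: k.position.x,
      y: k.position.y,
    })),
  });
}

// The receiving side parses the message back into an object.
function unpackBodyData(message) {
  return JSON.parse(message);
}
```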
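One possible cause of the noisy channel is that the call sits inside draw(), so it is re-initiated on every frame while the merge condition holds, unlike the one-shot mouseClicked trigger. A minimal guard (a sketch; the actual SimpleSimplePeer call is passed in as a callback, since its method names are not shown here) would start the call only once:

```javascript
// One-shot guard: open the audio call the first time the avatars merge,
// instead of re-triggering it on every draw() frame.
let audioStarted = false;

function maybeOpenAudio(merged, startCall) {
  if (merged && !audioStarted) {
    audioStarted = true;
    startCall(); // e.g. the SimpleSimplePeer media-call setup
  }
  return audioStarted;
}
```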
thoughts and feedback after the first user-testing
  • We need to design a soundscape for both parts of the experience: first when the user is not connected, and second when they connect and can hear each other.
  • It should be more challenging for people to merge or connect the avatars. Using the heart position may work better than the body edge, together with a time frame during which the hearts must stay in the same position before the bodies merge.
  • When the bodies are merging, we need more obvious, “explosive” feedback, such as changing the background color or a heart-beating effect.
  • The movement is still finicky and too shaky. We should find a way to smooth it and design a more defined avatar that is still somewhat abstract, which suits the concept of the project.
  • It might be a good idea to work with a Kinect instead of PoseNet to get more precise results.
  • All the mouse interactions need to be replaced by body tracking inputs with the webcam, so the user can choose their color with their hand position for example, instead of mouse-clicking.
  • The camera angle and the window width and height should be the same for both remote installations. The camera angle works best when it is as close as possible to the user’s gaze.
  • The work might also be interesting at large scale, as a projection-mapping intervention on building facades, for example, with a station for pedestrians to activate it.
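The shakiness of the tracked keypoints could be reduced with simple low-pass smoothing: each frame, blend the new PoseNet position toward the previous one instead of jumping to it. A minimal sketch (the helper name and `alpha` default are assumptions):

```javascript
// Low-pass smoothing for shaky PoseNet keypoints. `alpha` near 0 is
// smoother but laggier; near 1 follows the raw data closely.
function smoothPoint(prev, next, alpha = 0.2) {
  if (!prev) return { ...next }; // no history yet: start at the raw point
  return {
    x: prev.x + (next.x - prev.x) * alpha,
    y: prev.y + (next.y - prev.y) * alpha,
  };
}
```

Inside draw(), the smoothed point would be stored and fed back in as `prev` on the next frame.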
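The “hold before merging” idea could be a dwell timer on the two heart positions: merge only once the hearts have stayed close for a few seconds. A sketch under assumed thresholds (the 40-pixel radius and 2-second hold are placeholders to tune):

```javascript
// Dwell timer: returns an update function that reports true only once the
// two hearts have stayed within `radius` pixels of each other for `holdMs`
// milliseconds. Moving apart resets the timer.
function makeMergeTimer(radius = 40, holdMs = 2000) {
  let since = null; // timestamp when the hearts first came close
  return function update(heartA, heartB, now) {
    const d = Math.hypot(heartA.x - heartB.x, heartA.y - heartB.y);
    if (d > radius) {
      since = null; // hearts drifted apart: reset
      return false;
    }
    if (since === null) since = now;
    return now - since >= holdMs;
  };
}
```

In the sketch, `update` would be called each frame with the two heart keypoints and `millis()`.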
