Performative avatars – skeleton animations

This week I learned from Matt’s tutorials how to download characters and animations from Mixamo and use them in Unreal Engine. So far I haven’t faced any major technical challenges completing the assignment. However, when I tried to figure out on my own how to change the character’s walking animation, I wasn’t successful. Honestly, I’m feeling much more comfortable with the UE4 interface than I did in my past experiences with Unity, for example. I’m looking forward to learning more about the Blueprints logic, since I haven’t worked with it before and it still seems quite challenging to understand.

Result after doing the tutorials

The collective heart mouse interaction

Try the sketch here and access the code here.

For this sketch, I worked together with Name on top of Lisa’s WebRTC example. We were tasked with creating a two-player mouse sketch over a WebRTC peer connection that encourages the users to be co-present in some way.

How does the collective heart sketch work?

Both users, accessing the page simultaneously, answer the question “What’s the color of your heart today?” at the bottom left of the page by selecting a color for their mouse-heart. Then, when the two hearts meet, they start to beat and the background color of the page becomes a mixture of the two heart colors.
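As a rough illustration of that logic (hypothetical variable names, simplified and without the WebRTC plumbing of our actual sketch), the color mixing can be done with p5’s lerpColor() once the two heart positions are close enough:

// Minimal sketch of the heart-meeting logic (hypothetical names, not our actual code).
// myHeart follows the local mouse; partnerHeart would normally arrive over the WebRTC data channel.
let myColor, partnerColor;

function setup() {
  createCanvas(400, 400);
  myColor = color(255, 0, 100);      // chosen from the color picker
  partnerColor = color(0, 150, 255); // received from the other peer
}

function draw() {
  const myHeart = createVector(mouseX, mouseY);
  const partnerHeart = createVector(width / 2, height / 2); // placeholder for the remote position

  // When the two hearts meet, the background becomes a mixture of the two heart colors
  if (dist(myHeart.x, myHeart.y, partnerHeart.x, partnerHeart.y) < 40) {
    background(lerpColor(myColor, partnerColor, 0.5));
  } else {
    background(240);
  }

  // Stand-ins for the heart drawings
  fill(myColor);
  circle(myHeart.x, myHeart.y, 30);
  fill(partnerColor);
  circle(partnerHeart.x, partnerHeart.y, 30);
}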

This example could be improved so that each user’s real heart position, instead of the mouse position, controls the cute heart in the browser. Other layers could also be added when the hearts meet, such as opening the microphones so people can listen to each other.

The design of the heart was inspired by the aesthetics of the project Luv ’til it Hurts.

Performative Avatars – Self Portraits

This week we were tasked with creating two self-portraits using different avatar creation apps. I see self-portraits as a way of revisiting my identity and the projections I have over it, and at the same time as an opportunity to rethink and recreate it.

Avakin Life Self-portrait

Avakin Life is a 3D virtual world for mobile and PC where you can design your own avatar, live chat with other users, travel to different locations, and join events. The avatar customization is gender-binary, as in most of these apps. Although there are endless options for every detail you can change, most of them have to be paid for with coins that you can buy or earn while interacting with Avakin’s universe.

I ended up choosing a face cover, which, in a strange way, made me identify much more with the character. I usually find it difficult to build a face that looks like mine for my avatars, and it made a lot of sense to be interacting with a face cover during COVID. So for this avatar, what builds my personality is the outfit and the rose that he carries, which makes me feel like there is a narrative, a touch of poetry, in this character. Who is he going to meet to give the rose to? Where did he find it?

customization process
Facemoji Self-portrait

“Facemoji Keyboard is an emoji-centric keyboard app that offers speech recognition and emoji prediction features, as well as a wide variety of stickers, GIFs and customizable keyboards”. 

This was the least gender-binary app that I found. In most of the other apps, the user’s first task is to choose a gender, which I find really problematic for the non-binary, genderqueer, and gender-fluid community. Fortunately, in Facemoji you start choosing your characteristics right away, regardless of your gender identity, which made it possible for me to build a very cute avatar wearing some make-up.

This is a mobile app in which you only design the head of your avatar; you can then embody it as an augmented reality filter using your camera, and it reacts to your facial expressions and movements. I had a lot of fun trying it, and there are plenty of accessories that give the avatar more personality and uniqueness.

customization process

the collective web tentacular body

This is a work in progress of a relational nomadic sound sculpture that I’m currently calling the tentacular collective web body.

Process of creation of the installation and first interactive experiments

As a way of structuring the conceptual scope of this piece in a text, I’m designing a zine. I imagine this text working as a basis for creating a learning environment around the artwork, such as a workshop, where thoughts can emerge collectively while people dive into how the internet works, not just in terms of its hardware infrastructure but also in terms of its ecological impacts. Hopefully, we will surf together with the idea that a complex web created by non-human forces exists, existed before the internet was invented, and will continue to exist after its extinction.

I’m planning to finish this work by the end of 2020.

The web Firebird

the spirit of the fractal tree

The web Firebird is a live performance experiment in coding, choreography, and augmented reality designed to be shared through cyberspace. Find its code here.

This project was created during Spring 2020 at ITP NYU as part of the classes Nature of Code (Daniel Shiffman), Choreographic Interventions (Mimi Yin), and Experiments in Augmented Reality (Irene Alvarado).

screenshot of the performance streamed from my rooftop

Past experiments

The Firebird is a mythology that I started creating during my first semester at ITP, first as a physical computing project, where I imagined an experimental performance art setup in which the performer could control audiovisual elements of the space using only gestures and wearable technology.

During my second semester, as my Nature of Code midterm, I created the first version of the Firebird for the browser. The piece is a net-art interactive visual poem, which can be found here, as part of the online residency Jururu Club, one of the Brazilian pavilions of the fourth edition of The Wrong Biennale.

The story of the Firebird

The Firebird used to live in a wet forest with their community. One day, without warning, their house is set on fire by someone who couldn’t understand the way they were living, and so the Firebird flies away from home. The Firebird migrates to the city, in the north, alone, and finds a place to sleep where no human could find them. The thing is that they were carrying an invisible fire seed inside their belly. So in the place where they sleep, they dream for so long that the invisible seed inside their belly finds time to germinate and turns into a tree.

example of an audience view of the final part of the performance
References

The story of the Fire Bird is based mainly on the environmental disasters caused by fires in the Amazon Rainforest in 2019, with the aim of keeping their memory alive and constantly questioning the actions that led to the growth of the fires that year, such as the considerable increase in deforestation encouraged by macropolitical representatives. I’m interested here in how interspecies life is affected by human culture and its catastrophic ecosystem design during the Anthropocene.

The Firebird (1910) is also a ballet and orchestral concert work by the Russian composer Igor Stravinsky.

How does it work?

For this project, I worked with ml5.js to load PoseNet along with p5.js. PoseNet is a machine learning model that allows real-time human pose estimation from images captured with a regular digital camera, such as a webcam.

I’m focusing this post on the coding part of the work. However, there are other important layers to this project, such as developing the choreography and the character design so that I gradually feel more comfortable and clearer about how to embody it. This part takes time, a lot of rehearsal, and experimentation with the code, which is basically the component that animates this creature through my body, giving room for them to exist.

early experiments with PoseNet for this project

I’m choreographing the code using the millis() function in the draw loop, which allows me to add or take away functions/effects over time using conditional statements.

function draw() {
  let now = millis();
  // console.log(now);
  // Scale everything based on ratio between
  // window width and video width
  scale(width / video.width, width / video.width);
  // Shift everything up 50 pixels
  translate(0, -50);

  // Draw the video WITHOUT resizing it
  image(video, 0, 0);
  // textSize(128);
  // text(width/height, width/2, height/2);
  if (pose) {

    if (now >= 355000) {
      fireParticle(0, 11);
      birdMask();
      fractalTree();

    } else if (now >= 320000) { //night
      background(0);
      fireParticle(0, 17);
      birdMask();
      fractalTree();

    } else if (now >= 290000) { //fractal tree whole body on fire

      fireParticle(0, 17);
      birdMask();
      fractalTree();

    } else if (now >= 260000) { //fractal tree on fire
      fireParticle(0, 11);
      birdMask();
      fractalTree();

    } else if (now >= 230000) { //line tree
      fireParticle(0, 5);
      birdMask();
      fractalTree();

    } else if (now >= 140000) { //seed appears
      fireParticle(0, 5);
      birdMask();
      ellipsoidSeed();

    } else if (now >= 100000) { //fly to the city 
      fireParticle(0, 5); //fire face
      birdMask();
      fireParticle(9, 11);

    } else if (now >= 75000) { //nest in fire, get out of nest
      fireParticle(0, 5); //fire face
      birdMask();
      drawNest();
      fireParticle(9, 11);

    } else if (now >= 15000) { //drawing nest
      birdMask();
      drawNest();

    } else { //start
      birdMask();

    }
  }
}

So far, I’ve been presenting this work in Zoom meetings, sharing my screen with the audience, who see my browser in fullscreen, in present mode, on the p5.js online editor. That’s why I’m scaling everything in my draw loop based on the ratio between the window width and the video width. I feel that this is not the best solution, because it makes it harder to reason about the proportions of the other elements, but a better one will probably come once I find the best setup to stream the performance at the highest possible quality.

function birdMask();
function birdMask() {
  push();
  imageMode(CENTER);
  let dF = dist(pose.rightEye.x, pose.rightEye.y, pose.leftEye.x, pose.leftEye.y);
  translate(pose.nose.x, pose.nose.y - dF);
  scale(dF / 50);
  image(img, 0, 0, 180, 120);
  pop();
}

The bird mask is a .png image with a transparent background that I designed in Illustrator and loaded into the code. It’s positioned according to the position of my nose and scaled in relation to the distance between my two eye pose points.

function drawNest();
function drawNest() {
  image(pg, 0, 0, width, height);

  leftWristX = pose.leftWrist.x;
  leftWristY = pose.leftWrist.y;
  rightWristX = pose.rightWrist.x;
  rightWristY = pose.rightWrist.y;

  pg.stroke(255);
  pg.strokeWeight(2);
  pg.line(leftWristX, leftWristY, pleftWristX, pleftWristY);
  pg.line(rightWristX, rightWristY, prightWristX, prightWristY);

  pleftWristX = leftWristX;
  pleftWristY = leftWristY;
  prightWristX = rightWristX;
  prightWristY = rightWristY;
}

In this part of the code, I’m drawing lines using the last two positions of my wrists. This is probably the ugliest part of my code, because the first time I draw a line, I haven’t yet defined values for pleftWristX, pleftWristY, prightWristX, and prightWristY, and that is flagged as an error in my code. So far I haven’t found a better solution, the aesthetic result is exactly what I want, and I have received some specific positive feedback about it. Maybe I should store the last two poses in an array and then read them from there?
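One possible small fix, sketched below assuming the previous-position variables stay as globals: only draw the lines once a previous position actually exists, and just record the current one on the very first frame.

// Possible guard inside drawNest(), assuming pleftWristX etc. remain globals:
// skip the lines on the first frame, when no previous position exists yet.
if (pleftWristX !== undefined && prightWristX !== undefined) {
  pg.line(leftWristX, leftWristY, pleftWristX, pleftWristY);
  pg.line(rightWristX, rightWristY, prightWristX, prightWristY);
}
// Always store the current positions for the next frame
pleftWristX = leftWristX;
pleftWristY = leftWristY;
prightWristX = rightWristX;
prightWristY = rightWristY;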

I discovered this effect earlier in the semester, when creating this Electronic Rituals assignment.

function fractalTree();
function fractalTree() {
  let nY = pose.nose.y;
  //console.log(nY);
  theta = map(nY, height / 4, height - 400, 0, PI / 2);

  // Start the tree from the middle point between my Ankles
  let rootX = (pose.rightAnkle.x + pose.leftAnkle.x) / 2;
  let rootY = (pose.rightAnkle.y + pose.leftAnkle.y) / 2;

  // Record the current root y position in the rolling history
  let y = rootY;
  yRoot.push(y);
  // Only keep most recent 60 data points
  if (yRoot.length > 60) yRoot.shift();

  // Calculate the average y position
  let avgY = 0;
  // Add up all the y-values
  for (let y of yRoot) {
    avgY += y;
  }
  // Divide by the number of y-values
  avgY /= yRoot.length;

  // Record the current root x position in the rolling history
  let x = rootX;
  xRoot.push(x);
  // Only keep most recent 60 data points
  if (xRoot.length > 60) xRoot.shift();

  // Calculate the average x position
  let avgX = 0;
  // Add up all the x-values
  for (let x of xRoot) {
    avgX += x;
  }
  // Divide by the number of x-values
  avgX /= xRoot.length;

  //position the tree between the ankles  
  translate(avgX, avgY);
  stroke(255);
  if (pose.nose.y > height / 4) {
    branch(120);
  } else {
    strokeWeight(1.8);
    line(0, 0, 0, -120 * 2.9);
  }
}

function branch(len) {
  // Each branch will be 2/3rds the size of the previous one

  //float sw = map(len,2,120,1,10);
  //strokeWeight(sw);
  strokeWeight(2);

  line(0, 0, 0, -len);
  // Move to the end of that line
  translate(0, -len);

  len *= 0.66;
  // All recursive functions must have an exit condition!!!!
  // Here, ours is when the length of the branch is 2 pixels or less
  if (len > 2) {
    push(); // Save the current state of transformation (i.e. where are we now)
    rotate(theta); // Rotate by theta
    branch(len); // Ok, now call myself to draw two new branches!!
    pop(); // Whenever we get back here, we "pop" in order to restore the previous matrix state

    // Repeat the same thing, only branch off to the "left" this time!
    push();
    rotate(-theta);
    branch(len);
    pop();
  }
}

The fractal tree was created by Daniel Shiffman in the video tutorial below.

Fractal trees tutorial by Dan Shiffman

In my code, I’m positioning the bottom of the tree between my ankles and using a technique that I learned from Mimi Yin to smooth its movement. The thing is that PoseNet generates a noisy, shaky effect because it produces a slightly different position estimate every frame. So, to create a more stable visual effect when positioning the tree relative to my ankles, I’m calculating the average of the last 60 x and y PoseNet data points and using the result to position the tree – translate(avgX, avgY).

The angle of the branches is mapped to the position of my nose on the canvas. I also created a conditional statement that only draws the tree if pose.nose.y is greater than a certain value; otherwise, the code draws a line with the same strokeWeight as the tree’s lines. That allows me to dance with a simple line and generate a more surprising effect when opening the tree.

The challenge here is that I always have to adjust where the tree starts being drawn and how much I want it to fractalize according to the position of my nose. That varies with spatial factors of the performance, such as my distance from the camera and its angle. This part is a bit confusing, since I’m doing some calculations to scale everything in the draw loop to fit fullscreen mode perfectly when presenting, and that makes it tricky to imagine the exact proportions of the canvas in relation to my body in space.

function fireParticle(w, z);
function fireParticle(w, z) {
  for (let i = 0; i < 2; i++) {
    for (let j = w; j < z; j++) {
      let x = pose.keypoints[j].position.x;
      let y = pose.keypoints[j].position.y;
      let p = new Particle(x, y);
      particles.push(p);
    }
  }
  for (let i = particles.length - 1; i >= 0; i--) {
    particles[i].update();
    particles[i].show();
    if (particles[i].finished()) {
      // remove this particle
      particles.splice(i, 1);
    }
  }
}

class Particle {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.vx = random(-1, 1);
    this.vy = random(-5, -1);
    this.alpha = 255;
    this.distEye = dist(pose.rightEye.x, pose.rightEye.y, pose.leftEye.x, pose.leftEye.y);
    this.triangleSide = this.distEye / 3; //20
  }

  finished() {
    return this.alpha < 0;
  }

  update() {
    this.x += this.vx;
    this.y += this.vy;
    this.alpha -= 5;
  }

  show() {

    stroke(255, this.alpha);
    strokeWeight(this.triangleSide / 50);
    fill(255, 30, 0, this.alpha);
    let theta = map(this.y, 0, height, 0, TWO_PI * 50);
    push();
    translate(this.x, this.y);
    rotate(theta);
    triangle(-this.triangleSide / 2, this.triangleSide * sqrt(3) / 4, 0, -this.triangleSide * sqrt(3) / 4, this.triangleSide / 2, this.triangleSide * sqrt(3) / 4);
    pop();

  }
}

The fire effect is a particle system that I created by customizing the code from this tutorial by Daniel Shiffman. The particle, in my case, is an orange triangle that rotates according to its y position. The fireParticle function lets me attach the particle system to any data points provided by PoseNet, by indicating the first and last indices of the 17 pose keypoints that I want to use as the basis for the fire effect. That allows me to easily play with different fire effects depending on the moment of the narrative that I am in.

function ellipsoidSeed();
function ellipsoidSeed() {
  let d = dist(pose.leftWrist.x, pose.leftWrist.y, pose.rightWrist.x, pose.rightWrist.y);

  graphics.background(0, 0, 0, 0);
  //graphics.clear();

  graphics.fill(255, 255, 255, 0);
  graphics.strokeWeight(0.5);
  graphics.stroke(255, 255, 255);
  graphics.push();
  graphics.translate(0, 0, 2 * d);
  graphics.rotateY(angle);
  graphics.ellipsoid(50, 40, 50);
  graphics.pop();

  angle += 0.01;

  fill(0, 255, 0);

  image(graphics, ((pose.leftWrist.x + pose.rightWrist.x)) / 2 - width / 2, pose.leftWrist.y - height / 2);

}

The x position of the rotating ellipsoid seed is at the midpoint between the leftWrist and rightWrist points, and the y position follows the height of the leftWrist. I’m working with the createGraphics() function, which allows me to overlay a WEBGL canvas on my 2D sketch. I’m then calculating the distance between my left and right wrists and applying it to the z position of this canvas, which creates the illusion that I’m changing the size of the ellipsoid while I move it back and forth; that is what allows me to invite the audience to enter the ellipsoid, positioning it right at the limit of the z-axis. The challenge here is that I need to adjust these relations every time the canvas changes size, or according to the size of the space where I perform, or how far I’m going to be from the webcam.
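The function above relies on a graphics buffer and an angle variable defined elsewhere in the sketch; below is a minimal sketch of how they might be set up (an assumption on my part, not the exact original setup).

// Assumed setup for the WEBGL overlay used by ellipsoidSeed() (not the exact original code)
let graphics;
let angle = 0;

function setup() {
  createCanvas(640, 480);
  // A separate WEBGL buffer that gets drawn on top of the 2D canvas with image()
  graphics = createGraphics(width, height, WEBGL);
}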

the audience is invited to see how the seed is structured from the inside

final considerations and next steps

This process has been really rewarding for me because I feel that a lot of my past experiences and the new things that I’ve learned at ITP are being combined in a way that fills me with joy and good challenges. I’ve been feeling very encouraged to keep going with it after the amount of positive feedback that I’ve received when sharing it in three different classes and talking to friends who are part of the dance community in Brazil. There is a lot of room for future collaborations within the scope of this project. The COVID-19 context forced me to start thinking about the online as a space to present live performance, and I want to keep going with this idea.

On one side, I envision a physical setup, with a more computer-vision-focused approach, where the audience would wear augmented reality headsets and be all around me while I dance and guide them through the world of the Firebird. I still want to try it. On the other side, I really appreciate that I’m creating this project with relatively accessible equipment and only open-source libraries and software. That adds a lot to my concept and will allow me to teach these techniques in more contexts in the future, especially in the Global South, where the Firebird comes from.

I’ve been using a song called Cravo e Canela, from the album Araçá Azul, released in 1972 by Caetano Veloso. I feel that this is a great starting point and reference, but the project needs an original soundtrack, maybe with gesture-reactive sound, graphics, and spatial sound effects.

The body/choreography needs to develop further, and the design of the Firebird’s AR costume, as well as the interactivity of each visual graphic, can be more elaborate, although I like finding simplicity in complexity in this project. I’m creating a live web-based performance and designing a character, but I’m also embodying a human-computer interface experiment and exploring future possibilities for controlling computer graphics with new machine learning APIs. That is somehow part of the conceptual development of the choreography and points me towards a more accessible aesthetic rather than a virtuosic one.

what does the spirit of the fractal tree have to tell me?

Nature of Code – Neural Networks

hypothetical machine learning system against deforestation

As an exercise in designing a fictional machine learning system, I’d like to address the problem of deforestation in protected areas worldwide. My idea is to create an AI system connected to real-time satellite images that monitors these areas and, whenever one of them is under attack, makes an online public announcement and sends a warning to the institution responsible for protecting the area. The system would need humans to react properly to each alert: investigating and properly punishing the people responsible for the deforestation, and traveling to the area under attack to stop it as soon as possible.

Amazon Deforestation Patterns. Source: NASA’s Earth Observatory

The inputs of the system are real-time satellite images and properly updated geolocation data of protected regions worldwide. The outputs are public online journals documenting every attack and, when discovered, who is responsible for it; data visualization graphics showing the curves of illegal deforestation by region; and alerts containing the specific geolocation of where the deforestation is happening, as close as possible to when the attack happens. The alerts should be directed to the institutions responsible for each ecosystem’s protection. The training data consists of satellite images of areas that are being deforested, especially in the initial phase, and of areas that are ecologically safe and stable.

The only privacy concern I have about the data used in this project relates to the communities living in protected areas, mainly indigenous and traditional communities, which are indeed strongly affected by deforestation. Whenever a member or any practice of a community is captured by the satellite cameras, it should never be made public without the consent and agreement of that community. The location of these communities should also be kept secret, since they commonly suffer attacks from organizations and groups interested in their lands.

*It was pretty helpful to read some parts of the book “A People’s Guide to AI” to create this example.

performative experiments with machine learning

Trying Google’s Teachable Machine

The other day I started imagining playing with the illusion of manipulating augmented reality objects in live performances in the browser, so I thought it could be interesting to teach the machine which direction my palm was facing. There are, in fact, very detailed machine learning hand-tracking models available, but I felt it could be relevant to try this with image classification rather than pose estimation. I realized that an interesting way of making the system more precise when working with body poses and image classification is to play with colors, so I ended up painting my hand with 4 different colors, and the model worked pretty well.

Final experiment with 4 different colors
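For reference, a Teachable Machine image model can be loaded into a p5.js sketch through ml5.js roughly like this (a minimal sketch; the model URL is a placeholder, not my actual model):

// Minimal sketch of loading a Teachable Machine image model with ml5.js
// (the model URL is a placeholder, not my actual trained model).
let classifier;
let video;
let label = "waiting...";
const modelURL = "https://teachablemachine.withgoogle.com/models/XXXXXXXX/";

function preload() {
  classifier = ml5.imageClassifier(modelURL + "model.json");
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  classifyVideo();
}

function classifyVideo() {
  classifier.classify(video, gotResult);
}

function gotResult(error, results) {
  if (!error) {
    label = results[0].label; // e.g. which direction the painted palm is facing
  }
  classifyVideo(); // keep classifying the video feed
}

function draw() {
  image(video, 0, 0);
  fill(255);
  textSize(32);
  text(label, 10, height - 20);
}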

My second experiment was playing with pose estimation and Teachable Machine. I created three classes, each one a gesture made with a different position of my arms: Seed, Fire, and Tree. I realized that the model is actually not very precise if the angle of the camera changes or if you are not at the same distance from the camera, especially for the Fire and Tree gestures, which are quite similar.

Since my goal is to create different visual effects activated by each of the gestures in p5.js, I trained a pose estimation model following the tutorial “Pose Classification with PoseNet” by The Coding Train. It basically has three phases that allow you to train a pose classification model with PoseNet and ml5.js, each one a separate sketch: Data Collection, Model Training, and Model Deployment, which makes the process pretty accessible <3.
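As a rough sketch of the deployment phase, following the structure of that tutorial (the label names, options, and file paths here are my assumptions, not my exact setup): the 17 PoseNet keypoints are flattened into 34 numbers and fed to an ml5 neural network that returns one of the three gesture labels.

// Rough sketch of the deployment phase (label names, options and paths are assumptions).
// Assumes a global `pose` updated by PoseNet, as in the draw loop shown earlier.
let brain;
let poseLabel = "";

function setup() {
  createCanvas(640, 480);
  const options = {
    inputs: 34, // 17 keypoints * (x, y)
    outputs: 3, // Seed, Fire, Tree
    task: "classification",
  };
  brain = ml5.neuralNetwork(options);
  const modelDetails = {
    model: "model/model.json",
    metadata: "model/model_meta.json",
    weights: "model/model.weights.bin",
  };
  brain.load(modelDetails, classifyPose); // assumed paths to the trained model files
}

function classifyPose() {
  if (pose) {
    // Flatten the keypoints into [x0, y0, x1, y1, ...]
    const inputs = [];
    for (const keypoint of pose.keypoints) {
      inputs.push(keypoint.position.x);
      inputs.push(keypoint.position.y);
    }
    brain.classify(inputs, gotResult);
  } else {
    setTimeout(classifyPose, 100); // wait until PoseNet has detected a pose
  }
}

function gotResult(error, results) {
  if (results && results[0].confidence > 0.75) {
    poseLabel = results[0].label; // "Seed", "Fire" or "Tree" – could switch the visual effect here
  }
  classifyPose(); // keep classifying
}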

Find some results below and the final code here. The next step is to think further about the audiovisuals for each gesture and to consider working with regression instead of classification.

Seed:Fire:Tree
first sketch : in and outputs for machines during their childhood era
weight

input a collection of questions output a spectrum input an infinite spectral gender classification spreadsheet output a collection of gentle questions made without any pretension input something that a machine will never learn output something that exists but has no name yet input smell output a thing that existed but will never be named input a thing that will die hidden because it’s deepest desire is to not be classified output the feeling of touching the floor with the palm of a human hand input an array containing x ghosts of extinct species that died in sacrifice for the progress of the most computational expensive question that has ever been made output time to breathe slowly for a long time

Nature of Code – Simulation Project

The Web Fire Bird
The Story of the Fire Bird

The Fire Bird is a mythology that I started creating during my first semester at ITP, first as a physical computing project. Now I’m creating a net-art version of the narrative. The story of the Fire Bird is divided into three base movements:

1 – The Fire Bird flies away from home because their house was set on fire by someone who couldn’t understand the way the Fire Bird was living;

2 – The Fire Bird flies to the city, alone, carrying a fire seed inside their belly and finds a place to sleep;

3 – The Fire Bird dreams for so long that the invisible seed that was inside of their belly grows and turns into a tree.

For this sketch, I’m focusing on the part of the narrative in which they fly away from home to the city.

References

The story of the Fire Bird is based mainly on the environmental disasters caused by fires in the Amazon Rainforest in 2019, with the aim of keeping their memory alive and constantly questioning the actions that led to the growth of the fires that year, such as the considerable increase in deforestation. I’m interested here in how other species are affected by human culture and its catastrophes during the Anthropocene.

This work is also strongly influenced by things I learned while working together with indigenous communities in Brazil, especially the philosophy of the indigenous Guarani people, which holds that words have a soul. That’s why their word for “throat” is ahy’o, but also ñe’e raity, which literally means “nest of the word-soul”. The Guarani are mostly based in the southeast region of Brazil and some areas of Paraguay, and are constantly struggling to preserve their culture and the territory they have left.

Based on the Guarani idea of the throat as a nest for word-souls, the Brazilian psychoanalyst and philosopher Suely Rolnik, with whom I took classes in the past, describes a “familiar-strange” body-mind state in which we sometimes find ourselves, especially when we are about to germinate the seed of a “new world”, a “world-to-come”, after untying the knots that we carry in our nest-throats. She explains how the creation of these new worlds has its own time and how the nests need to be cared for while the colonial-capitalistic system tries to kill this process.

Screenshot of the net art work

I understand the Web Fire Bird as a simulation of this concept and psychological state as well, as a visual poem or a digital metaphor. After being affected by a colonial trauma, the bird flies away from home carrying the seed of new worlds in their throat. That’s why their landscape is composed of words that are constantly changing their combinations in order to tell their own story. The landscape is under transformation and construction, like the course of a river, trying to stay alive while constantly organizing itself in search of a configuration – or a place to germinate.

Code

Find the p5.js sketch editable here and the present mode here.

The code was created based on examples by Daniel Shiffman in the context of the Nature of Code classes and with his help during office hours. I basically mixed three examples, changing details and the design: the “Vehicle” class seeking a target, “Particle Systems” with gravity applied to them, and “Flow Fields” (changing every time the mouse is clicked).

Result

The result is my first net art piece! : ) It is an interactive generative poem.

During its process of creation, I was invited by the curator Guilherme Brandão to be part of the online residency Jururu Club, which is one of the Brazilian pavilions of the fourth edition of The Wrong Biennale – “a global online & offline event aiming to nurture digital culture by engaging with a vast selection of curated artworks”.

For Jururu Club, I created a Portuguese version of it.

Thanks to Dan for all the help and to Gabriel for giving me a web-space to share my work : )

For future versions of this work, I plan to enrich the interactions and walk towards a more gaming atmosphere.

Choreographic Interventions: Circular Pathways

This assignment was made in partnership with Sarah Yasmine, from Columbia University. We created a 90-second choreography interacting with circular pathways coded in p5.js to be projected on the floor while we perform. Find the code here.
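As a minimal illustration of the general idea (my own simplification, not the actual projection code), a point tracing a circular pathway on the canvas could look like this:

// Minimal illustration of a circular pathway (not the actual projection code)
let angle = 0;

function setup() {
  createCanvas(600, 600);
  background(0); // dark background so only the pathway shows up in the floor projection
}

function draw() {
  const radius = 200;
  const x = width / 2 + radius * cos(angle);
  const y = height / 2 + radius * sin(angle);
  noStroke();
  fill(255);
  circle(x, y, 10); // the dot leaves a circular trace because the background is never redrawn
  angle += 0.02;
}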

Choreography design

Nature of Code – Particles Systems

The web Fire Bird

For this assignment, I’ve started creating a web version of the Fire Bird mythology that I’ve been working on since last semester. I want to find an interesting and compelling way of telling the story in the web browser using text, visual effects, and simple interactivity. I’m creating this as an experimental art piece for the browser, but it can also be seen as a first step towards designing a game based on this character’s story. Find the code here.

The Triangles represent fire seeds that are thrown by the Fire Bird

The story of the Fire Bird is based on recent environmental disasters caused by fires in the Amazon Rainforest and has three movements: the Fire Bird flies away from home because their house was set on fire by someone who couldn’t understand the way the Fire Bird was living; the Fire Bird flies to the city, alone, carrying a fire seed inside their belly and finds a place to sleep; the Fire Bird dreams for so long that the invisible seed inside their belly grows and turns into a tree. For this sketch, I’m focusing on the part of the narrative in which they fly away from home to the city.

The Fire Bird is moved by the mouse position in the canvas

The code has an image that moves and changes direction according to the mouse position (the cursor is hidden with noCursor();) using a “Vehicle” class. When you click the mouse, a rotating triangle flies away from the center of the image and is affected by a gravity force until it disappears past the bottom of the canvas. The “ParticleSystem” class is used to push a new particle into an empty array (particles) and remove it after the particle is “dead”, as well as to apply the force to all particles. The death is caused by a lifespan variable inside the Particle class, which acts as a countdown timer: it has an initial value, and when it reaches 0, the particle is dead. Another important point is that I’m connecting the position of the image and the position from where the triangles are thrown with the line “system.addParticle(v.position.x, v.position.y);”. The addParticle function is inside the ParticleSystem class, and its x and y arguments feed the position vector in the Particle class constructor, which itself is being affected by the Vehicle class.
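A condensed, self-contained sketch of the same mechanics (simplified, without the full Vehicle and ParticleSystem classes): a seeker that follows the hidden cursor and throws gravity-affected, short-lived particles on click.

// Condensed sketch of the same idea: a seeker that follows the hidden cursor
// and throws gravity-affected particles on click (simplified, without the full classes).
let seeker;
let particles = [];

function setup() {
  createCanvas(600, 400);
  noCursor();
  seeker = createVector(width / 2, height / 2);
}

function draw() {
  background(0);

  // Very simple "seek": move a fraction of the way toward the mouse each frame
  const target = createVector(mouseX, mouseY);
  seeker.add(p5.Vector.sub(target, seeker).mult(0.05));
  fill(255, 100, 0);
  circle(seeker.x, seeker.y, 24); // stand-in for the Fire Bird image

  // Update particles: gravity plus a lifespan countdown
  for (let i = particles.length - 1; i >= 0; i--) {
    const p = particles[i];
    p.vel.add(0, 0.1); // gravity
    p.pos.add(p.vel);
    p.lifespan -= 2;
    fill(255, 30, 0, p.lifespan);
    triangle(p.pos.x - 5, p.pos.y + 5, p.pos.x, p.pos.y - 5, p.pos.x + 5, p.pos.y + 5);
    if (p.lifespan < 0) particles.splice(i, 1); // the particle is "dead"
  }
}

function mousePressed() {
  // New particles are thrown from the seeker's current position
  particles.push({
    pos: seeker.copy(),
    vel: createVector(random(-1, 1), random(-4, -2)),
    lifespan: 255,
  });
}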

The next step is creating the landscape for the sketch and finding an interesting design for writing the story of the Fire Bird as part of it. I’ve been thinking about drawing the landscape with moving text… I have also started to try a different behavior for the seed when it hits the ground (which still has to be created) in this code.