Nature of Code – Neural Networks

hypothetical machine learning system against deforestation

As an exercise in designing a fictional machine learning system, I’d like to address the problem of deforestation in protected areas worldwide. My idea is to create an AI system connected to real-time satellite imagery that monitors these areas and, whenever one of them is under attack, publishes an online public announcement and sends a warning to the institution responsible for protecting the area. The system still needs humans to react properly to each alert: investigating the people responsible for the deforestation, punishing them accordingly, and traveling to the area under attack to stop it as soon as possible.

Amazon Deforestation Patterns. Source: NASA’s Earth Observatory

The inputs of the system are real-time satellite images and regularly updated geolocation data for protected regions worldwide. The outputs are: online public journals documenting every attack and the party responsible for it (when discovered); data visualizations showing the curves of illegal deforestation across different regions; and alerts containing the specific geolocation of where the deforestation is happening, issued as close as possible to the moment of the attack. Each alert should be directed to the institutions responsible for that ecosystem’s protection. The training data consists of satellite images of areas under deforestation, especially in its initial phase, and of areas that are ecologically safe and stable.
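To make the idea a bit more concrete, here is a minimal sketch of the alert loop in JavaScript. Everything in it is hypothetical: `classifyTile`, `fetchLatestTiles`, and `notifyInstitution` are placeholders standing in for the real classifier, satellite feed, and alerting channel such a system would need.

```js
// Hypothetical monitoring loop for the fictional deforestation-alert system.
// classifyTile, fetchLatestTiles and notifyInstitution are placeholders.

// Placeholder: a trained image classifier would go here.
function classifyTile(tile) {
  // Pretend confidence that the tile shows early-stage deforestation.
  return { label: 'deforestation', confidence: Math.random() };
}

// Placeholder: would pull fresh imagery tiles for all protected regions.
function fetchLatestTiles() {
  return [{ region: 'example-protected-area', lat: -3.47, lon: -62.21 }];
}

// Placeholder: would message the responsible institution and the public journal.
function notifyInstitution(alert) {
  console.log('ALERT →', JSON.stringify(alert));
}

const CONFIDENCE_THRESHOLD = 0.9; // only alert on high-confidence detections

function monitorOnce() {
  for (const tile of fetchLatestTiles()) {
    const result = classifyTile(tile);
    if (result.label === 'deforestation' && result.confidence > CONFIDENCE_THRESHOLD) {
      notifyInstitution({
        region: tile.region,
        location: { lat: tile.lat, lon: tile.lon },
        detectedAt: new Date().toISOString(),
      });
    }
  }
}

// Re-check as often as the satellite feed updates (the interval is arbitrary here).
setInterval(monitorOnce, 60 * 1000);
```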

The only privacy concern I have with the data used in this project relates to communities living in protected areas, mainly indigenous and traditional communities, which are indeed strongly affected by deforestation. Whenever a member or any practice of such a community is captured by the satellite cameras, the footage should never be made public without the consent and agreement of the community. The location of these communities should also be kept secret, since they commonly suffer attacks from organizations and groups interested in their lands.

*It was pretty helpful to read some parts of the book “A People’s Guide to AI” to create this example.

performative experiments with machine learning

Trying Google’s Teachable Machine

The other day I started imagining playing with the illusion of manipulating augmented-reality objects in live browser performances, so I thought it could be interesting to teach the machine which direction my palm was facing. There are, in fact, very detailed machine learning hand-tracking models available, but I felt it could be relevant to try this with plain image classification rather than pose estimation. I realized that an interesting way of making the system more precise when working with body pose and image classification is to play with colors, so I ended up painting my hand with 4 different colors, and the model worked pretty well.

Final experiment with 4 different colors
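For anyone who wants to try something similar, an exported Teachable Machine image model can be loaded straight into a p5.js sketch with ml5.js. Below is a minimal sketch; the model URL is a placeholder for whatever link Teachable Machine gives you on export.

```js
// Minimal p5.js + ml5.js sketch that classifies the webcam feed with a
// Teachable Machine image model. Replace MODEL_URL with your export link.
const MODEL_URL = 'https://teachablemachine.withgoogle.com/models/YOUR_MODEL/';

let classifier;
let video;
let label = 'waiting...';

function preload() {
  classifier = ml5.imageClassifier(MODEL_URL + 'model.json');
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  classifyVideo(); // start the classification loop
}

function classifyVideo() {
  classifier.classify(video, gotResult);
}

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  label = results[0].label; // most confident class, e.g. a palm direction
  classifyVideo(); // classify the next frame
}

function draw() {
  image(video, 0, 0, width, height);
  fill(255);
  textSize(32);
  text(label, 10, height - 10);
}
```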

My second experiment was playing with pose estimation in Teachable Machine. I created three classes, each one a gesture made with a different position of my arms: Seed, Fire, and Tree. I realized that the model is actually not so precise if the angle of the camera changes or if you are not at the same distance from the camera, especially for the Fire and Tree gestures, which are kind of similar.

Since my goal is to create different visual effects activated by each of the gestures in p5.js, I trained a pose classification model following the tutorial “Pose Classification with PoseNet” by The Coding Train. It shows how to train a pose classification model with PoseNet and ml5.js in three phases, each one its own sketch: Data Collection, Model Training, and Model Deployment, which makes the process pretty accessible <3.
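As a rough illustration of how those three phases fit together, here is a condensed sketch in the spirit of the tutorial. The tutorial itself keeps the phases in separate sketches; the keyboard shortcut and epoch count here are my own choices.

```js
// Condensed sketch of the three phases (collection, training, deployment),
// combined into one file for illustration.
let video, poseNet, pose, brain;
let state = 'collecting'; // 'collecting' → 'training' → 'classifying'
let targetLabel = 'Seed'; // label assumed for the poses being recorded

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', gotPoses);

  // 17 keypoints × (x, y) = 34 inputs; 3 gesture classes as outputs.
  brain = ml5.neuralNetwork({ inputs: 34, outputs: 3, task: 'classification' });
}

function gotPoses(poses) {
  if (poses.length === 0) return;
  pose = poses[0].pose;
  const inputs = pose.keypoints.flatMap((k) => [k.position.x, k.position.y]);

  if (state === 'collecting') {
    brain.addData(inputs, [targetLabel]); // phase 1: data collection
  } else if (state === 'classifying') {
    brain.classify(inputs, gotResult); // phase 3: deployment
  }
}

function keyPressed() {
  if (key === 't') {
    state = 'training'; // phase 2: model training
    brain.normalizeData();
    brain.train({ epochs: 50 }, () => {
      console.log('training finished');
      state = 'classifying';
    });
  }
  // other keys could switch targetLabel between Seed, Fire and Tree
}

function gotResult(error, results) {
  if (error) return console.error(error);
  console.log(results[0].label, results[0].confidence);
}

function draw() {
  image(video, 0, 0, width, height);
}
```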

Find some results below and the final code here. The next step is to think further about the audiovisual identity of each gesture and to consider working with regression instead of classification.

Seed : Fire : Tree
first sketch : inputs and outputs for machines during their childhood era
weight

input a collection of questions
output a spectrum
input an infinite spectral gender classification spreadsheet
output a collection of gentle questions made without any pretension
input something that a machine will never learn
output something that exists but has no name yet
input smell
output a thing that existed but will never be named
input a thing that will die hidden because its deepest desire is to not be classified
output the feeling of touching the floor with the palm of a human hand
input an array containing x ghosts of extinct species that died in sacrifice for the progress of the most computationally expensive question that has ever been made
output time to breathe slowly for a long time
