A machine learning project developed within the five days of the Choreographic Coding Lab (CCL), a lab organised by Fiber, Motion Bank and ICK Amsterdam, which together provided us with a research environment at the intersection of dance, choreography, digital tools, data and code.
Conceptually, our project started with the idea of generating a non-human, virtual performer to accompany a human, physical performer in a short improvisational performance. We were interested in exploring the spaces in-between the human and the non-human, the physical and the digital, the static and the dynamic. We were also intrigued by the bias embedded in the training process of machine learning projects.
To create our generative performer, we first considered deep learning methods (probably an RNN/LSTM model), but due to lack of time (training alone would have exceeded the four remaining days of the lab) we decided to use Wekinator, a free, open-source machine learning tool, for supervised learning.
We used skeleton tracking with a Microsoft Kinect V2 (coded in Processing) to track the human body in 3D space, and we sent the 72 tracked values (24 body joints × 3 axes: x, y, z) as OSC messages to Wekinator, which mapped them to 82 outputs: 72 joint coordinates, mirroring the inputs, plus 10 features for sonification. The 72 body-related outputs drove the skeleton-like generated performer (also coded in Processing); the remaining 10 outputs controlled 10 sound parameters in Max/MSP.
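The flattening of joints into Wekinator's input vector can be sketched as follows. This is a minimal illustration, not our actual Processing sketch; the joint values are placeholders, and the `/wek/inputs` address is Wekinator's documented default input message rather than something specific to our setup:

```python
# Minimal sketch of flattening Kinect joints into Wekinator's input vector.
# Joint coordinates here are dummy values; in the real setup they come
# from the Kinect V2 skeleton tracker.

NUM_JOINTS = 24          # joints tracked in our setup
AXES = ("x", "y", "z")   # 3 axes per joint -> 72 values per frame

def flatten_skeleton(joints):
    """Turn a list of (x, y, z) joint tuples into one flat float list."""
    values = []
    for joint in joints:
        for axis_value in joint:
            values.append(float(axis_value))
    return values

# Fake skeleton frame: 24 joints, each with dummy x/y/z coordinates.
frame = [(i * 0.1, i * 0.2, i * 0.3) for i in range(NUM_JOINTS)]
inputs = flatten_skeleton(frame)

assert len(inputs) == NUM_JOINTS * len(AXES)  # 72 floats per frame
osc_address = "/wek/inputs"  # Wekinator's default input message address
```

In practice the flat list would be sent each frame as a single OSC message, with Wekinator replying on its output message with the 82 mapped values.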
Our first trial consisted of training the model on 10 static body postures: for each posture, the human body coordinates were used unchanged as the coordinates of the generated skeleton. We expected the virtual performer to mimic the human on these 10 postures but to improvise in the spaces in-between them. Contrary to our expectations, the trained performer was completely obedient and copied the human throughout her improvisations. The reason was obvious in retrospect: we had naively trained the model to be obedient. Since every training example mapped a human posture to the identical skeleton posture, the model had no option of doing anything other than what the human did, and this overfitted outcome was inevitable.
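Why identity training produces pure mimicry can be seen in a toy one-dimensional stand-in for the mapping. This is a hypothetical sketch: Wekinator's continuous-output models are not simple interpolators, but any smooth regressor fitted on identical input/output pairs behaves the same way here.

```python
# Toy 1-D stand-in for the posture-to-posture regression.

def interpolate(pairs, x):
    """Piecewise-linear interpolation through sorted (input, output) pairs."""
    pairs = sorted(pairs)
    if x <= pairs[0][0]:
        return pairs[0][1]
    for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
        if x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return pairs[-1][1]

# First trial, 1-D analogue: every "human posture" value is mapped
# to the identical "skeleton posture" value.
identity_pairs = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]

# Anywhere between the trained postures, the output equals the input:
# the virtual performer can only copy the human.
assert abs(interpolate(identity_pairs, 0.5) - 0.5) < 1e-9
assert abs(interpolate(identity_pairs, 1.7) - 1.7) < 1e-9
```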
After these observations, we moved on to our second training. We trained the model on the same postures, but this time only 5 of the 10 postures were mapped identically for the human and the virtual performer; the other 5 human postures were linked to different skeleton postures. This way, the generated performer learned to mimic the human on the first 5 postures but did not learn to mimic her constantly. The result was absurd and, given our initial intentions, very satisfying: the virtual performer actually improvised in a hybrid human-non-human way, and these generated improvisations could be seen as danced transitions and bridges between static endings.
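The effect of the mixed second training can be shown with the same kind of toy interpolation, again a hypothetical one-dimensional sketch rather than the real 72-dimensional model: once some postures map to different targets, the regions between the trained postures no longer reproduce the input.

```python
def interpolate(pairs, x):
    """Piecewise-linear interpolation through sorted (input, output) pairs."""
    pairs = sorted(pairs)
    if x <= pairs[0][0]:
        return pairs[0][1]
    for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
        if x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return pairs[-1][1]

# Second trial, 1-D analogue: some human postures keep their identical
# skeleton posture, others are linked to a different one.
mixed_pairs = [(0.0, 0.0), (1.0, 4.0), (2.0, 2.0)]

# At the trained postures the mapping is exact...
assert interpolate(mixed_pairs, 0.0) == 0.0
assert interpolate(mixed_pairs, 1.0) == 4.0
# ...but in between, the virtual performer diverges from the human,
# producing the hybrid transitions we observed.
assert interpolate(mixed_pairs, 0.5) == 2.0
```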
This collaboration, within the fruitful CCL environment, gave birth to an experimental performance that will hopefully continue to evolve in future meetings across the countries where we currently reside.
Developed together with Giovanni Muzio, Zeger de Vos and Elizabeth Walton.