My right hand controls the pitch, while the left hand changes the waveform (sine, saw, square, triangle). MaxMSP also receives the user's position in Unity's environment, which could affect further audio parameters at a later stage. This is just an experiment with basic communication between Unity and MaxMSP, using movement input to change audio parameters; a sketch of that link follows below. The next steps are to apply this technique to additive synthesis and to a real-time performance. I could also expand the inputs to take into account the user's head rotation and height, as well as the user's position in the scene.
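As a minimal sketch of what the Unity-to-MaxMSP link could look like: the script below sends the two hand positions as OSC messages over UDP, which a [udpreceive 7400] object in Max can unpack with [route /pitch /waveform]. The port, the /pitch and /waveform addresses, the hand-height mappings, and the field names are all my assumptions for illustration, not the project's actual patch.

```csharp
using System;
using System.Net.Sockets;
using System.Text;
using UnityEngine;

// Sketch: sends hand positions to MaxMSP as OSC messages over UDP.
// In Max, [udpreceive 7400] -> [route /pitch /waveform] would receive them.
public class MaxOscSender : MonoBehaviour
{
    [SerializeField] private string maxHost = "127.0.0.1"; // assumes Max runs locally
    [SerializeField] private int maxPort = 7400;           // must match [udpreceive 7400]
    [SerializeField] private Transform rightHand;          // controls pitch
    [SerializeField] private Transform leftHand;           // selects waveform

    private UdpClient udp;

    private void Start()
    {
        udp = new UdpClient();
        udp.Connect(maxHost, maxPort);
    }

    private void Update()
    {
        // Map right-hand height to a normalized pitch value; the 0..2 m range is a guess.
        float pitch = Mathf.InverseLerp(0f, 2f, rightHand.position.y);
        Send("/pitch", pitch);

        // Map left-hand height to one of four waveforms: 0 sine, 1 saw, 2 square, 3 triangle.
        int waveform = Mathf.Clamp(Mathf.FloorToInt(leftHand.position.y * 2f), 0, 3);
        Send("/waveform", (float)waveform);
    }

    // Encodes a single-float OSC message: padded address, ",f" type tag, big-endian float.
    private void Send(string address, float value)
    {
        byte[] addr = OscPad(Encoding.ASCII.GetBytes(address));
        byte[] tags = OscPad(Encoding.ASCII.GetBytes(",f"));
        byte[] val = BitConverter.GetBytes(value);
        if (BitConverter.IsLittleEndian) Array.Reverse(val); // OSC uses big-endian floats

        byte[] packet = new byte[addr.Length + tags.Length + val.Length];
        Buffer.BlockCopy(addr, 0, packet, 0, addr.Length);
        Buffer.BlockCopy(tags, 0, packet, addr.Length, tags.Length);
        Buffer.BlockCopy(val, 0, packet, addr.Length + tags.Length, val.Length);
        udp.Send(packet, packet.Length);
    }

    // OSC strings are null-terminated and padded to a multiple of 4 bytes.
    private static byte[] OscPad(byte[] s)
    {
        int padded = (s.Length / 4 + 1) * 4;
        byte[] outBytes = new byte[padded];
        Buffer.BlockCopy(s, 0, outBytes, 0, s.Length);
        return outBytes;
    }

    private void OnDestroy() => udp?.Close();
}
```

On the Max side, the routed /pitch value could drive the frequency of an oscillator and /waveform could switch between [cycle~], [saw~], [rect~], and [tri~], though any mapping would do.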
Reference: the work on additive synthesis in the MUST1002 channel.