Recap From Day 064
Today, we’ll look at another method that has been designed to handle expressive variations in gesture execution. In other words, it can interpret the intention behind expressive changes in a gesture and use this information in real-time performance, for instance to control sound-synthesis parameters.
Working with time
Gesture Variation Follower
The method is the Gesture Variation Follower (GVF). As we said, the idea is to follow, or rather extract, the variations in gesture execution instead of treating them as noise. More precisely, if we slow down while executing the gesture, GVF is able to estimate the decrease in speed dynamically.
Similarly, if we perform a gesture larger than the one recorded, GVF is able to estimate the increase in relative size dynamically. These variations are themselves dynamic: we could start a gesture faster than the recorded template and then finish it slower.
This is a very important feature because it means that the extracted variations are not relative to the gesture as a whole, but to a gesture that is continuously changing, and consequently they can be used as continuous controls while the gesture is being executed.
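GVF is built on particle filtering: each particle carries a hypothesis about where we are in the template and how the gesture is being varied, and the filter continuously re-weights those hypotheses against the incoming data. Below is a simplified, hypothetical sketch of that idea in Python (not the actual GVF implementation): each particle holds a phase, a relative speed, and a relative scale, and we track a live 1-D gesture that is performed larger and slower than the recorded template. All names and noise parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Recorded template: a 1-D gesture (e.g. one coordinate over time).
T = 100
template = np.sin(np.linspace(0, np.pi, T))

# Live gesture (hypothetical): same shape, performed 1.5x larger and slower.
live = 1.5 * np.sin(np.linspace(0, np.pi, int(T * 1.3)))

N = 500  # number of particles
# Each particle is a hypothesis: [phase in [0, 1], relative speed, relative scale].
particles = np.column_stack([
    rng.uniform(0.0, 0.05, N),   # start near the beginning of the template
    rng.normal(1.0, 0.2, N),     # speed relative to the template
    rng.normal(1.0, 0.3, N),     # size relative to the template
])
weights = np.full(N, 1.0 / N)

def template_value(phase):
    """Read the template at a fractional phase via linear interpolation."""
    idx = np.clip(phase, 0.0, 1.0) * (T - 1)
    return np.interp(idx, np.arange(T), template)

sigma = 0.1  # assumed observation-noise level

for obs in live:
    # Propagate: advance each particle's phase by its speed, diffuse speed/scale.
    particles[:, 0] += particles[:, 1] / T + rng.normal(0, 0.005, N)
    particles[:, 1] += rng.normal(0, 0.02, N)
    particles[:, 2] += rng.normal(0, 0.02, N)

    # Weight: compare the observation to the scaled template at each hypothesis.
    pred = particles[:, 2] * template_value(particles[:, 0])
    weights *= np.exp(-0.5 * ((obs - pred) / sigma) ** 2)
    weights += 1e-300          # guard against total weight collapse
    weights /= weights.sum()

    # Resample when the effective sample size drops too low.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles = particles[idx]
        weights.fill(1.0 / N)

# The weighted means are the continuous estimates usable as control signals.
speed_est = float(weights @ particles[:, 1])
scale_est = float(weights @ particles[:, 2])
print(f"estimated relative speed ~ {speed_est:.2f}")
print(f"estimated relative scale ~ {scale_est:.2f}")
```

Because these estimates are updated at every incoming sample, they behave like the continuous controls described above: you could map the speed estimate to a playback rate and the scale estimate to a filter cutoff while the gesture is still unfolding.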
Below is a video demonstration of GVF.
That’s all for day 065. I hope you found this informative. Thank you for taking time out of your schedule and allowing me to be your guide on this journey. And until next time, be legendary.