100 Days Of ML Code — Day 073

Recap From Day 072

On Day 072, we looked at the first part of designing custom algorithms for music. You can catch up using the link below: 100 Days Of ML Code — Day 072 (medium.com)

Today, we’ll continue from where we left off in Day 072.

Working with time

Designing custom algorithms for music

The two models we have seen so far are derived from more conventional methods; as such, they can be seen as hacks of these classical models. For instance, gesture follower is based on a Hidden Markov Model (HMM). As we have seen previously, an HMM is a method that is particularly good at modelling a temporal sequence of events, such as words in a sentence or states in a gesture. The example given was drawing a circle gesture: we start at the bottom of the circle, then move toward the left, to the top, to the right, and back to the bottom. This gesture can be modelled by an HMM with four hidden states: bottom, left, top, right. Then we can spot the circle gesture when the hand passes from the bottom to the left, to the top, and to the right position.
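To make the four-state idea concrete, here is a small sketch of such an HMM with Viterbi decoding of a noisy position sequence. All probabilities, the observation model, and the test sequence are made up for illustration; this is not the course's actual model.

```python
import numpy as np

# Hypothetical 4-state HMM for a circle gesture. Transitions favour
# staying in place or moving to the next position around the circle
# (bottom -> left -> top -> right -> bottom).
states = ["bottom", "left", "top", "right"]
A = np.array([
    [0.6, 0.4, 0.0, 0.0],  # bottom: stay, or move to left
    [0.0, 0.6, 0.4, 0.0],  # left:   stay, or move to top
    [0.0, 0.0, 0.6, 0.4],  # top:    stay, or move to right
    [0.4, 0.0, 0.0, 0.6],  # right:  stay, or wrap back to bottom
])

# Each observation is a noisy position reading; B[s, o] is the chance
# of reading o while in hidden state s (mostly correct, sometimes not).
B = np.full((4, 4), 0.1) + 0.6 * np.eye(4)

def viterbi(obs, A, B, pi):
    """Most likely hidden-state path for a sequence of observation indices."""
    n, T = A.shape[0], len(obs)
    delta = np.zeros((T, n))           # best path score ending in each state
    psi = np.zeros((T, n), dtype=int)  # best predecessor of each state
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A   # score of each transition
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    path = [int(delta[-1].argmax())]         # backtrack from the best end state
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

pi = np.array([0.7, 0.1, 0.1, 0.1])  # gestures tend to start at the bottom
obs = [0, 0, 1, 1, 2, 3, 0]          # noisy readings going around the circle
print([states[s] for s in viterbi(obs, A, B, pi)])
```

The zero entries in `A` encode the reasoning in the text: jumping from the bottom directly to the top is ruled out, so the decoder prefers paths that pass through left on the way up.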

In an HMM we can define transitions that are more likely than others; for instance, moving from the bottom to the left is more likely than moving from the bottom directly to the top. In gesture follower, the HMM is configured such that each state is no longer a coarse position like bottom, left, top and right, but an individual sample of the recorded gesture template. The HMM is then able to say that we are at the first sample of the circle, moving to the second, to the third, and so on. If the gesture is performed faster, the HMM can spot that we started at the first sample of the circle and then moved to the third, then the fifth, and so on. By changing the granularity of the states of the general-purpose HMM, gesture follower transforms the model into a real-time classifier with the ability to estimate progress within the gesture template.
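The idea above can be sketched in a few lines: each template sample becomes one state in a left-to-right HMM, "stay"/"step"/"skip" transitions absorb speed variation, and the most likely state index directly gives progress through the gesture. The transition weights, the Gaussian observation model, and the 1-D "gesture" are all assumptions for illustration, not the original implementation.

```python
import numpy as np

def follow(template, live, sigma=0.1):
    """Forward-filter a live signal against a template; return progress in [0, 1]."""
    n = len(template)
    alpha = np.zeros(n)
    alpha[0] = 1.0                      # assume the gesture starts at sample 0
    stay, step, skip = 0.3, 0.5, 0.2    # transition weights (assumed values)
    progress = []
    for x in live:
        # Left-to-right transitions: stay on a sample, advance one, or skip one.
        pred = stay * alpha
        pred[1:] += step * alpha[:-1]
        pred[2:] += skip * alpha[:-2]
        # Gaussian likelihood of the live value under each template sample.
        lik = np.exp(-0.5 * ((x - template) / sigma) ** 2)
        alpha = pred * lik
        alpha /= alpha.sum()
        # Most likely sample index, normalised to progress through the gesture.
        progress.append(alpha.argmax() / (n - 1))
    return progress

template = np.linspace(0.0, 1.0, 11)   # a simple 1-D "gesture" template
print(follow(template, template))       # progress rises to 1.0 by the end
print(follow(template, template[::2]))  # same gesture performed twice as fast
```

The skip transition is what lets the faster performance be followed: the filter can jump over template samples rather than losing the gesture.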

Now let’s inspect how gesture variation follower (GVF) has also been designed by taking a general-purpose algorithm and adapting, or hacking, it in order to fulfill a musical objective. GVF is based on a method used for tracking called particle filtering. Tracking is the task of estimating, over time, the position of an object in a scene: for instance, a car in video captured by a CCTV camera, or each finger captured by a depth camera, and so on. Particle filtering is a widely used method for object or human tracking because it is a fairly generic method that does not rely on many hypotheses. More precisely, particle filtering has two critical features. We will look at those features in day 074.
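As a taste of the machinery GVF builds on, here is a minimal bootstrap particle filter tracking a drifting 1-D position. This is a generic textbook filter, not GVF itself, and the noise levels and particle count are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=2000, motion_noise=0.3, obs_noise=0.5):
    """Track a 1-D position from noisy observations with a bootstrap filter."""
    particles = rng.normal(0.0, 1.0, n_particles)   # initial position guesses
    estimates = []
    for z in observations:
        # 1. Predict: move each particle under a random-walk motion model.
        particles = particles + rng.normal(0.0, motion_noise, n_particles)
        # 2. Weight: score particles by how well they explain the observation.
        weights = np.exp(-0.5 * ((z - particles) / obs_noise) ** 2)
        weights /= weights.sum()
        # 3. Estimate: weighted mean of the particle positions.
        estimates.append(float(np.sum(weights * particles)))
        # 4. Resample: duplicate likely particles, discard unlikely ones.
        particles = rng.choice(particles, size=n_particles, p=weights)
    return estimates

true_path = np.linspace(0.0, 5.0, 20)           # object drifts from 0 to 5
obs = true_path + rng.normal(0.0, 0.5, 20)      # noisy sensor readings
est = particle_filter(obs)
print(est[-1])                                   # tracks the object toward 5
```

The appeal for tracking is visible even in this toy: nothing in the filter assumes linearity or a particular noise shape, only that we can simulate motion and score observations, which is why it transfers to gesture data.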

That’s all for day 073. I hope you found this informative. Thank you for taking time out of your schedule and allowing me to be your guide on this journey. And until next time, be legendary.

References

https://www.kadenze.com/courses/machine-learning-for-musicians-and-artists-v/sessions/working-with-time