## Recap From Day 069

On Day 069, we looked at sonic interaction with GVF. You can catch up using the link below.
**100 Days Of ML Code — Day 069**
*Recap From Day 068* — medium.com

Today, we’ll look at what these models have in common.

## Working with time

### What these models have in common

We have seen how we can do real-time classification with the Gesture Follower and how we can learn more about how a gesture is performed with the Gesture Variation Follower. Both of these models have similar names because they come from the same family of models. Let’s see what they have in common that also sets them apart from the other methods we saw earlier. Knowing this will help us choose the most appropriate method in practice.

To start with, both models are temporal models, that is to say, they take into account the temporal trajectory of the gesture’s execution. What happens now depends on what happened before. Or, from the method’s perspective, what has been estimated before has an influence on what is being estimated now. After all, a gesture is a continuous physical phenomenon: there is no discontinuity in its execution. If we trace a circle with our hand, starting from the bottom and moving towards the left, our hand cannot disappear at some point along that circle and reappear slightly later at another position on the path. There are no holes in the gesture trajectory, so its temporal structure is important.

Dynamic time warping also considers the temporal structure of a gesture, and it is thanks to this feature that DTW is able to segment a continuous gesture stream into previously recorded gestures. However, for both GF and GVF, the temporal structure is what builds the interaction. With GVF, we can perform a gesture and play with its temporal structure: freezing in the middle, going backwards, freezing again, then going forward faster than the original, for instance. Playing with the temporal structure is a key feature of these models. Other methods such as nearest neighbour, DTW or Naive Bayes cannot offer such control.
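As a rough illustration of how DTW respects temporal order, here is a textbook implementation of the DTW distance between two 1-D sequences (the toy gestures below are invented for illustration; real gesture data would typically be multi-dimensional):

```python
import numpy as np

def dtw_distance(a, b):
    """Minimal dynamic time warping distance between two 1-D sequences.

    The cumulative cost matrix respects temporal order: cell (i, j)
    can only be reached from (i-1, j), (i, j-1) or (i-1, j-1),
    so the alignment never jumps around in time.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

slow = np.sin(np.linspace(0, np.pi, 60))   # same shape, performed slowly
fast = np.sin(np.linspace(0, np.pi, 30))   # same shape, performed quickly
other = np.linspace(0, 1, 30)              # a different gesture

print(dtw_distance(slow, fast) < dtw_distance(slow, other))  # True
```

Because the warping path can stretch or compress time, the slow and fast performances of the same shape stay close under DTW, while a different gesture of the same length does not. This is exactly the property that lets DTW spot a known gesture inside a continuous stream.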

Another common point between the two methods we saw previously is that they are probabilistic, and more precisely Bayesian. We have already seen an example of a Bayesian method: Naive Bayes, a classification method in which the classification decision is formulated in terms of probabilities. Naive Bayes relies on Bayes’ rule. Bayes’ rule gives us a way to update our belief about what we are looking for, for instance a gesture class, given what we can observe. Indeed, this probability is often very hard to calculate directly. Bayes’ rule tells us that we can compute this belief from a few simpler probabilities that are easier to obtain: the prior probability of what we are looking for, regardless of any observation, and the probability of what we observed given that we know what we are looking for.
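In symbols, Bayes’ rule says P(class | observation) = P(observation | class) · P(class) / P(observation). A tiny numeric sketch, with two made-up gesture classes and invented probabilities, shows the update in action:

```python
# Hypothetical numbers: two gesture classes, "circle" and "swipe"
prior = {"circle": 0.5, "swipe": 0.5}         # P(class), before observing anything
likelihood = {"circle": 0.08, "swipe": 0.01}  # P(observation | class)

# Evidence: P(observation), summed over all classes
evidence = sum(prior[c] * likelihood[c] for c in prior)

# Bayes' rule: P(class | observation) = P(obs | class) * P(class) / P(obs)
posterior = {c: prior[c] * likelihood[c] / evidence for c in prior}

print(posterior["circle"])  # 0.888..., the observation favours "circle"
```

Starting from equal priors, the observation, which is eight times more likely under “circle” than under “swipe”, shifts our belief towards “circle” accordingly.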

That’s all for day 070. I hope you found this informative. Thank you for taking time out of your schedule and allowing me to be your guide on this journey. And until next time, be legendary.
