Recap From Day 044
In day 044, we wrapped up Features: How fast to send them. We learned that one of the rules of thumb we can usually count on in machine learning is that more training data tends to be better: more training examples usually tell us more about the learning problem, giving us better coverage of our feature space.
Today, we’ll start looking at working with time.
Working with time
Up until day 044, we've been using classification and regression to make sense of something happening at a particular moment in time. If we're using the game controller below as an example, we're asking our model to compute its outputs based on what we're doing at any given instant: what position our hand is in, perhaps how fast our hand is moving.
Even when we use windowed features, for instance to deal with audio, we've been using windows to capture something meaningful about the data that has just been measured, such as the audio that has just been played.
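To make the idea of windowed features concrete, here is a minimal sketch of computing one feature (root-mean-square energy) per window of a signal. The signal, window size, and hop size are made up for illustration; real audio pipelines would use a proper library, but the windowing idea is the same.

```python
# Minimal sketch: turning a raw signal into windowed features.
# The toy signal and window settings below are illustrative only.

def windowed_rms(signal, window_size, hop):
    """Compute the root-mean-square of each (possibly overlapping) window."""
    features = []
    for start in range(0, len(signal) - window_size + 1, hop):
        window = signal[start:start + window_size]
        rms = (sum(x * x for x in window) / window_size) ** 0.5
        features.append(rms)
    return features

# A toy "audio" signal: quiet at first, then loud.
signal = [0.1] * 8 + [0.9] * 8
print(windowed_rms(signal, window_size=4, hop=4))
```

Each number in the output summarizes what just happened in one short stretch of the signal, which is exactly what a classifier fed on windowed features sees.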
But what if we want to build models that are sensitive to how things are changing over time? If we are working with gestural inputs, we might want to build a classifier to recognize whether we've just drawn a clockwise circle or a counterclockwise circle in the air using our game controller above. We might even want to know whether we've drawn the circle quickly or slowly.
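As a taste of why the ordering of samples matters, here is a hedged sketch of one simple way to tell a clockwise circle from a counterclockwise one: the signed area of the traced path (the shoelace formula). The circular trajectories are synthetic stand-ins for controller positions, not real gesture data.

```python
# Sketch: distinguishing clockwise from counterclockwise circles by the
# signed area of the trajectory. The trajectories here are synthetic.
import math

def signed_area(points):
    """Shoelace formula: positive for counterclockwise paths, negative for
    clockwise ones (in a standard y-up coordinate system)."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        area += x1 * y2 - x2 * y1
    return area / 2.0

def circle(direction, n=32):
    """A toy circular gesture: direction +1 is counterclockwise, -1 clockwise."""
    return [(math.cos(direction * 2 * math.pi * i / n),
             math.sin(direction * 2 * math.pi * i / n)) for i in range(n)]

print("ccw" if signed_area(circle(+1)) > 0 else "cw")  # → ccw
print("ccw" if signed_area(circle(-1)) > 0 else "cw")  # → cw
```

Notice that a bag of positions alone could never make this distinction: both gestures visit exactly the same points. Only the order in which they are visited, that is, the behavior over time, separates the two classes.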
If we are working with audio input, we might want to build a classifier to recognize whether we've just played the first bar of one melody or of a different one. So, how can we do this? In the coming days, we'll be discussing a few different strategies for modeling how things change over time.
That’s all for day 045. That was fast, right? Don’t worry, there’s more to come. I hope you found this informative. Thank you for taking time out of your schedule and allowing me to be your guide on this journey. And until next time, be legendary.