100 Days Of ML Code — Day 044

Jehoshaphat I. Abu · Aug 22, 2018 · 2 min read

Recap From Day 043

In day 043, we learned that the easiest way to use machine learning to control a real-time system is to use our classifier or regression models to compute a new set of output values every time we receive a new input feature vector. That means that the faster we send features, the faster our outputs can change, but also the more computation we have to do.
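That per-input update loop can be sketched in a few lines. This is a minimal illustration, not the course's actual code: it assumes a scikit-learn regression model, and the `get_features` / `set_outputs` callbacks are hypothetical stand-ins for whatever sensor input and sound/visual output your system uses.

```python
import time
import numpy as np
from sklearn.linear_model import LinearRegression

# Train a toy model mapping 3 input features to 2 output values.
rng = np.random.default_rng(0)
X_train = rng.random((200, 3))
y_train = X_train @ rng.random((3, 2))  # arbitrary mapping, just for the demo
model = LinearRegression().fit(X_train, y_train)

def control_loop(get_features, set_outputs, rate_hz=30, duration_s=1.0):
    """Compute a fresh set of outputs every time a new feature vector arrives."""
    interval = 1.0 / rate_hz
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        features = get_features()               # e.g. current sensor readings
        outputs = model.predict([features])[0]  # one prediction per frame
        set_outputs(outputs)                    # e.g. synth or motor parameters
        time.sleep(interval)                    # higher rate_hz -> more compute
```

Raising `rate_hz` makes the outputs respond faster, at the cost of running the model more often.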

Today we will continue from where we left off in day 043.

Features: How fast to send them? Continued

We’ve seen how the rate at which we send features impacts our use of a trained model in real-time in day 043. But what about the rate at which we send features when we are recording training examples?

One of the rules of thumb we can usually count on in machine learning is that more training data tends to be better. Having more training examples usually tells us more about the learning problem, giving us better coverage of our feature space.

Looking at our musical genre classifier from some days ago, it's going to do much better if we give it examples from one hundred (100) or five thousand (5,000) songs per genre, as opposed to giving it maybe just two from each genre.

There's an important exception to this rule, though. If we increase the size of a training set only by adding examples that are nearly identical to each other, our classifier isn't going to be able to use those examples to really improve its learning. We're not telling it anything new about the learning problem.

On the other hand, adding more examples, even if they're virtually the same, can really slow down our training process. So one of the things we might want to do if we're building a real-time system is send features at a relatively slow rate when we're recording training examples.
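One simple way to act on this idea is to filter out near-duplicate feature vectors as we record. The sketch below is a hypothetical helper, not part of any particular library; the `min_distance` threshold is an assumption you would tune for your own feature scale.

```python
import numpy as np

class TrainingRecorder:
    """Record (features, label) pairs, skipping near-duplicate feature vectors.

    Feature vectors closer than `min_distance` (Euclidean) to the previously
    recorded one are dropped: near-identical examples teach the model little
    but still slow down training.
    """

    def __init__(self, min_distance=0.05):
        self.min_distance = min_distance
        self.examples = []
        self._last = None

    def offer(self, features, label):
        features = np.asarray(features, dtype=float)
        if self._last is not None and \
                np.linalg.norm(features - self._last) < self.min_distance:
            return False  # too similar to the last recorded example; skip it
        self.examples.append((features, label))
        self._last = features
        return True
```

Lowering the feature send rate during recording has a similar effect: fewer, more varied examples reach the training set.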

So as we record our training examples, each feature vector that gets recorded into a new training example should tell us something different about what motion we should be making, or what position we should be in, paired with, for instance, what sound should be happening.

[Source](https://cdn.hashnode.com/res/hashnode/image/upload/v1632827116700/dGhGq-Mqo.png)

Once we’re running our trained models in performance and want our system to feel very responsive, we can send at a much faster rate.
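The two modes suggest two different send rates. The numbers below are illustrative assumptions, not values from the course:

```python
# Hypothetical rate settings: slow while recording training data,
# fast once the trained model is running in performance.
RECORD_RATE_HZ = 5    # fewer, more varied examples; faster training
PERFORM_RATE_HZ = 60  # responsive output from the trained model

def send_interval(mode):
    """Seconds between feature sends for a given mode ('record' or 'perform')."""
    rate = RECORD_RATE_HZ if mode == "record" else PERFORM_RATE_HZ
    return 1.0 / rate
```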

Awesome. That’s all for day 044. I hope you found this informative. Thank you for taking time out of your schedule and allowing me to be your guide on this journey. And until next time, be legendary.

Reference

https://www.kadenze.com/courses/machine-learning-for-musicians-and-artists-v/sessions/sensors-and-features-generating-useful-inputs-for-machine-learning
