100 Days Of ML Code — Day 025

Recap from Day 024

In Day 024, we looked at how machine learning in the arts is different. We learned that when you are using machine learning to build new real-time interactions, you need things to run in real time. You probably won’t be satisfied with a gesturally controlled instrument that takes 20 seconds to decide what sound to play for the current action, much less 20 hours.

Today, we’ll continue from where we stopped yesterday. “… Because your ultimate goal is to build a model that’s useful to you, it’s fine for you to make any changes to the data if they result in a more useful model.

You can often add more training examples in order to correct your model if it’s learned the wrong thing, or you could delete examples if you find that this helps. You might even completely change the learning problem, changing from five gesture classes to ten, or changing your ideas about the sounds you want a new instrument to make.

For lots of creative applications, making changes to the data is often the most efficient and easiest-to-understand way for you to improve your models, bringing them more into line with your goals for whatever it is you’re building. In human-computer interaction, this approach to improving a model by making changes to the data is sometimes called interactive machine learning, and there are other application contexts beyond the arts where this also makes sense to do.
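The idea above can be sketched in a few lines. The snippet below is a hypothetical illustration (not from the course): the gesture names, feature values, and the choice of a nearest-neighbour classifier are all assumptions, but they show the interactive-machine-learning loop of correcting a model by adding training examples and retraining, rather than changing the algorithm.

```python
# Hypothetical sketch of interactive machine learning: when a model
# "learned the wrong thing", we add corrective examples and retrain.
# Gesture labels and 2-D "sensor" features are invented for illustration.
from sklearn.neighbors import KNeighborsClassifier

# Initial training set: two gesture classes.
X = [[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]]
y = ["wave", "wave", "punch", "punch"]

model = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(model.predict([[0.45, 0.55]]))  # ambiguous region: result may be wrong

# If that prediction doesn't match our creative intent, we don't tune
# the algorithm -- we add more examples of the behaviour we want:
X += [[0.45, 0.55], [0.5, 0.5]]
y += ["wave", "wave"]
model = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(model.predict([[0.45, 0.55]]))  # now classified as "wave"
```

The same loop works in the other direction (deleting unhelpful examples) or at a larger scale (redefining the classes entirely, e.g. going from five gestures to ten and recollecting data).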

You made it to the end of day 025. I hope you found this informative. Thank you for taking time out of your schedule and allowing me to be your guide on this journey.

Reference

https://www.kadenze.com/courses/machine-learning-for-musicians-and-artists-v/sessions/developing-a-practice-with-machine-learning-wrap-up