100 Days Of ML Code — Day 029

Jehoshaphat I. Abu
Aug 7, 2018

3 min read

Recap From Day 028

In day 028, we learned about generating useful inputs for machine learning, with a focus on features: how to build, select, and process them in order to get good results from our learning algorithms.

Today, we’ll continue from where we left off in day 028.

What Makes A Good Feature?

Regardless of the learning algorithm we’re using, there are a few properties we want to look for when choosing our features.

First, each feature should be relevant to the learning problem. We’ve seen that irrelevant features can hurt a model’s accuracy, especially for the nearest neighbor algorithm but also for certain other algorithms. Even in the best case, irrelevant features slow down training and increase our computational storage and processing requirements, so we try to avoid them.

[Source](https://cdn.hashnode.com/res/hashnode/image/upload/v1632827262125/CCedpBPna.jpeg)
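To see why an irrelevant feature hurts nearest neighbor in particular, here is a minimal numpy sketch (the points and values are made up for illustration): one relevant feature puts the query nearest to its true classmate, but appending a single noisy, uninformative feature flips which neighbor is closest under Euclidean distance.

```python
import numpy as np

# One relevant feature: query a is near b (same class), far from c.
a = np.array([1.0])   # query example
b = np.array([1.2])   # same class, nearby on the relevant feature
c = np.array([3.0])   # different class, far away

d_ab = np.linalg.norm(a - b)   # 0.2
d_ac = np.linalg.norm(a - c)   # 2.0 -> nearest neighbor is b (correct)

# Append one irrelevant feature with large, uninformative values.
a2 = np.array([1.0, 9.0])
b2 = np.array([1.2, 0.0])
c2 = np.array([3.0, 8.5])

d_ab2 = np.linalg.norm(a2 - b2)  # ~9.00
d_ac2 = np.linalg.norm(a2 - c2)  # ~2.06 -> nearest neighbor flips to c (wrong)

print(d_ab < d_ac, d_ab2 > d_ac2)  # True True
```

The irrelevant dimension dominates the distance computation, which is exactly why nearest neighbor is so sensitive to feature choice.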

Second, ideally we’d like our feature measurements to have as little noise as possible. In other words, if the thing we’re measuring is in one state at one point in time, and in the same state at another point in time, we’d like our feature measurements for those two times to be identical, or at least very similar. As we go on in this series, you’ll see some techniques you might use to reduce noise.

[Source](https://cdn.hashnode.com/res/hashnode/image/upload/v1632827264063/xF3pV2OH7.html)
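One simple noise-reduction technique (a sketch, not from the course itself) is a moving average: if repeated measurements of the same underlying state jitter because of sensor noise, averaging over a small window makes nearby readings much more similar to each other.

```python
import numpy as np

# Simulated sensor: the true state never changes, but each reading is noisy.
rng = np.random.default_rng(0)
true_value = 5.0
readings = true_value + rng.normal(0.0, 0.5, size=50)

# Smooth with a simple moving average over a 10-sample window.
window = 10
smoothed = np.convolve(readings, np.ones(window) / window, mode="valid")

# The smoothed feature varies far less around the true state than the raw one.
print(readings.std() > smoothed.std())  # True
```

In practice the window size trades off noise suppression against responsiveness to genuine changes in the underlying state.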

Third, we don’t want too many features. Believe it or not, more features can often make learning more difficult, even if all of those features are relevant and noise-free. As a general rule, the more features you have, the more training examples you need in order for a learning algorithm to learn effectively in that high-dimensional space. This is known as the curse of dimensionality.

[Source](https://cdn.hashnode.com/res/hashnode/image/upload/v1632827265852/7WA-PET8t.html)

Following the links below, you can read more about the curse of dimensionality:

- The Curse of Dimensionality, followed by an intro to dimensionality reduction (medium.freecodecamp.org)
- Curse of dimensionality (en.wikipedia.org): "The curse of dimensionality refers to various phenomena that arise when analyzing and organizing data in…"
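One concrete face of the curse, which you can check yourself in a few lines (this sketch uses random points, so the exact numbers are illustrative): as dimensionality grows, pairwise distances between random points concentrate, so the gap between a query's nearest and farthest neighbors shrinks and "nearest" becomes less meaningful.

```python
import numpy as np

rng = np.random.default_rng(42)

def distance_contrast(dim, n=200):
    """Relative gap between farthest and nearest neighbor of a random query."""
    points = rng.random((n, dim))
    query = rng.random(dim)
    dists = np.linalg.norm(points - query, axis=1)
    return (dists.max() - dists.min()) / dists.min()

low = distance_contrast(2)      # large contrast: neighbors are distinguishable
high = distance_contrast(1000)  # small contrast: all points look equidistant

print(low > high)  # True
```

This is one reason nearest-neighbor methods, in particular, need many more training examples as the feature count grows.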

Fourth, at the same time, we want our features as a whole to provide us with enough information that examples that are close in feature space tend to be similar to each other. We’d like to be able to assume that examples that are very close are likely to be members of the same class, or likely to result in very similar regression output values.

[Source](https://cdn.hashnode.com/res/hashnode/image/upload/v1632827269543/E1LOeqaGR.html)

You can see an illustration above where, on the left, we have a problem with one feature that will be impossible to learn accurately with any classifier. Here, we definitely can’t say that examples that are close to each other tend to be members of the same class. But if we add one more complementary feature, as you can see on the right in the illustration below, we’d be able to build a very good classifier.

[Source](https://www.kadenze.com/courses/machine-learning-for-musicians-and-artists-v/sessions/sensors-and-features-generating-useful-inputs-for-machine-learning)
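The situation in the illustration can be sketched numerically (the coordinates below are made up for illustration): on feature 1 alone the two classes occupy the same range, so no threshold separates them, but a complementary feature 2 separates them perfectly.

```python
import numpy as np

# Each row is an example: [feature_1, feature_2].
class_a = np.array([[0.1, 0.9], [0.3, 0.8], [0.2, 1.0]])
class_b = np.array([[0.1, 0.1], [0.3, 0.2], [0.2, 0.0]])

# Feature 1 alone: the class ranges overlap completely, so no single
# threshold on this feature can separate the classes.
overlap = (class_a[:, 0].min() <= class_b[:, 0].max()
           and class_b[:, 0].min() <= class_a[:, 0].max())

# Feature 2: class A sits entirely above 0.5 and class B entirely below,
# so a simple threshold classifier on this feature is perfect.
separable = class_a[:, 1].min() > 0.5 > class_b[:, 1].max()

print(overlap, separable)  # True True
```

Neither feature is "good" in isolation here; it is the pair together that makes close-in-feature-space examples share a class.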

Therefore, even though we might talk about the goodness of each feature individually, it is important to keep in mind that we rely on a whole set of features when it comes time to actually build a model from data.

Amazing to know that you’re still here. We’ve come to the end of day 029. I hope you found this informative. Thank you for taking time out of your schedule and allowing me to be your guide on this journey.

Reference

https://www.kadenze.com/courses/machine-learning-for-musicians-and-artists-v/sessions/sensors-and-features-generating-useful-inputs-for-machine-learning
