Six Things I've Learned from Studying Machine Learning


Machine learning. Artificial intelligence. Neural networks. It’s all exciting stuff, and I’ve been studying it for my master’s degree. But outside of the classroom, outside of computers, what has it taught me about life?

You learn more from being wrong than being right.

This is especially true with neural networks. When a machine learning algorithm correctly identifies a solution, most of the time no adjustments to any of the parameters (called “weights”) are made. But if the algorithm is wrong, these weights are nudged so that the next time the algorithm guesses at a solution, it’s closer to the right answer.

In fact, the more wrong you are, the more you learn.

How much those weights change depends partly on how wrong the answer is. The more wrong the answer, the more the weights change - and, in essence, the more the machine learns.
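To make that concrete, here's a tiny toy sketch (not any particular library's algorithm - the function and parameter names are made up for illustration) where the size of each weight nudge is proportional to the error:

```python
# Illustrative error-driven learning: the weight update is proportional
# to how wrong the guess was.
def update_weight(weight, x, target, learning_rate=0.1):
    prediction = weight * x
    error = target - prediction                # how wrong we were
    return weight + learning_rate * error * x  # bigger error -> bigger nudge

w = 0.0                     # start completely wrong
for _ in range(50):
    w = update_weight(w, x=1.0, target=2.0)
# w has now been nudged close to 2.0 - early (very wrong) steps
# moved it a lot, later (nearly right) steps barely moved it at all.
```

Notice that when the prediction is exactly right, the error is zero and the weight doesn't change at all - which is the "no adjustments when correct" behavior described above.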

People start learning with different assumptions.

I’ve mentioned these weights - what are they? Simplified, they’re coefficients: the numbers that multiply a variable in an equation (e.g., in “2x,” the “2” is the coefficient). The first time a machine learning algorithm runs (especially in the case of neural networks), these weights are randomly assigned.

This means the algorithm might get a different answer each time - just like people might in real life.
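Here's a toy sketch of that (the setup and numbers are invented purely for illustration): two “learners” trained the same way on the same problem, differing only in their random starting weight, end up with different answers after the same few steps:

```python
import random

def train(seed, steps=10):
    rng = random.Random(seed)
    w = rng.uniform(-1, 1)        # random starting assumption
    for _ in range(steps):
        error = 2.0 - w * 1.0     # target output is 2.0 for input 1.0
        w += 0.1 * error          # nudge toward the answer
    return w

# Different seeds (starting assumptions) -> different answers after
# the same amount of learning.
a = train(seed=1)
b = train(seed=2)
```

Given enough steps both learners do converge to the same answer - but partway through, where they started still shows.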

When learning, you can get stuck in a local minimum.

If you’re not familiar with machine learning algorithms, this might not make sense at first. Essentially, to find an even better answer, sometimes you have to accept a worse answer first. But you don’t know ahead of time whether that worse answer will lead to a better one. So some people might conclude that their ideas are not only the best but the only answer.
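Here's a toy illustration of getting stuck (the loss function and the greedy search are made up for this post, not a real training algorithm): a learner that only ever accepts improvements settles into a shallow valley and never reaches the deeper one, because getting there would mean getting worse first:

```python
def loss(x):
    # Two valleys: a shallow one near x = -1 and a deeper one near x = 2.
    return (x + 1) ** 2 * (x - 2) ** 2 - 0.5 * x

def greedy_descent(x, step=0.01, iters=2000):
    for _ in range(iters):
        for candidate in (x - step, x + step):
            if loss(candidate) < loss(x):   # only ever accept improvements
                x = candidate
    return x

stuck = greedy_descent(-1.5)   # starts in the shallow valley...
# ...and stays near x = -1, even though the valley near x = 2 is deeper.
```

From the learner's point of view, the answer near -1 looks like the best possible one - every small change makes things worse - which is exactly the trap described above.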

Memorizing is not necessarily a good thing.

Memorizing a solution (often called “overfitting” in the world of AI) means that if you come across something you didn’t memorize, you’re going to have a lot of trouble figuring it out. It’s better to generalize (not the stereotyping-people kind of generalization - more like understanding the problem and how to solve it) than to simply memorize the answers.

Think of it as memorizing the solutions to homework. Come test day, you might see some of the same questions you had on the homework - those you’ll get right. But you won’t be able to answer the unseen questions!
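As a toy contrast (purely illustrative, not a real ML setup): a “memorizer” that stores homework answers verbatim aces the repeated questions but is lost on new ones, while a “generalizer” that learned the underlying rule handles both:

```python
# "Homework": a few examples of doubling a number.
homework = {1: 2, 3: 6, 5: 10}

def memorizer(x):
    # Stores answers verbatim; knows nothing about unseen questions.
    return homework.get(x)  # returns None for anything not memorized

def generalizer(x):
    # Learned the underlying rule instead of the specific answers.
    return 2 * x

# On a homework question, both get it right.
# On a new question (4), only the generalizer has an answer.
```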

The relationship between “emotion” and “mood.”

Okay, at first this doesn’t make sense in the traditional machine learning sense, but it comes into play with sentiment analysis and natural language processing (NLP). Sorry for the big words!

I used to think “emotion” and “mood” were interchangeable. I was always confused when asked how I felt, and then subsequently asked what my mood was. After watching several videos on NLP, I came to realize that mood is long term while emotion is short term.

That is, emotion is to weather as mood is to climate. (Or emotion is to mood as weather is to climate, depending on how you like your comparisons to go.)


I didn’t intend to learn any life lessons coming into school. I didn’t intend not to either. It just sort of happened.

Have you learned any life lessons in unexpected or nontraditional ways?
