How Linear Algebra and Machine Learning Help You Binge-Watch TV

Unbeknownst to most, people actually do make use of math after high school, likely every day, for hours at a time. Perhaps not in the way our grade school math teachers threatened us: we may not be writing out formulas and equations, but the technology we use is driven by math. Yes, computers compute. They deliver amazing things that make our day-to-day lives easier, and they make it possible for software engineers to tackle the difficult job of answering the mystery of what in the world you might binge-watch next.

For a lot of people, math is a big scary monster. We’ve avoided it by not letting a foot hang off the edge of the bed while we sleep and by checking for it in the closet at night. “Mathphobes” are usually created in grade school by a teacher who taught memorization of formulas rather than the theory behind them and their potential applications. Now that we are far away from number two pencils and Scantrons, we can look back at math with more intrigue as it is applied to things we enjoy. Exhibit A: finding new movies and TV shows you actually want to watch.

Firstly, as most people can guess, algorithms are what streaming providers use to serve up recommendations on what to watch next, and machine learning is one of the methods used to accomplish just that. To get a little more technical: machine learning is a method of data analysis that automates analytical model building. It is a branch of AI (artificial intelligence) based on the idea that systems can learn from data, identify patterns, and make decisions with minimal human intervention. Simply put, we are able to teach machines to learn from experience. Rather than having an editorial team curate recommendations by hand, machine learning can automate that process quickly.
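As a toy illustration of “learning from experience” (the viewing history below is invented, and real recommendation systems are far more sophisticated than this), a program can look at which genres you tend to finish and use that pattern to decide what to suggest next, with no human in the loop:

```python
from collections import Counter

# Invented viewing history: (genre, whether the viewer finished it)
history = [
    ("sci-fi", True), ("sci-fi", True), ("sci-fi", False),
    ("romance", False), ("romance", False),
    ("documentary", True),
]

# "Learn" a simple pattern from the data: how often each genre gets finished
finished = Counter(genre for genre, done in history if done)
watched = Counter(genre for genre, _ in history)
finish_rate = {genre: finished[genre] / watched[genre] for genre in watched}

# Make a decision automatically: recommend from the best-performing genre
best_genre = max(finish_rate, key=finish_rate.get)
print(f"Recommend more {best_genre}")  # documentary (finished 1 of 1)
```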

Math, and more specifically linear algebra, is part of a software engineer’s “toolbox,” offering helpful techniques for manipulating groups of numbers and ultimately helping generate relevant recommendations for movies and TV shows.

Fundamental concepts of linear algebra used by machine learning engineers include (but are not limited to) vectors, matrices, the dot product, and eigenvalues and eigenvectors.

At their core, vectors and matrices store attributes. A vector might hold things like the title of a film or show, its release year, its genre, and the number of likes your favorite movie has received. A matrix can store the same information; the difference is that a matrix can hold attributes for any number of films, whereas a vector holds them for just one. You can think of vectors and matrices as spreadsheets full of attributes that can interact with other spreadsheets.
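As a rough sketch (the attributes and numbers here are made up), a movie’s attributes can be packed into a NumPy vector, and several movies stacked into a matrix, one row per film:

```python
import numpy as np

# One movie as a vector of made-up numeric attributes:
# [release year, genre code, number of likes]
inception = np.array([2010, 1, 950])

# A matrix stores the same attributes for many movies, one row per film
movies = np.array([
    [2010, 1, 950],   # Inception
    [1999, 1, 875],   # The Matrix
    [2004, 2, 640],   # Eternal Sunshine of the Spotless Mind
])

print(movies.shape)  # (3, 3): three movies, three attributes each
```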

Vectors can talk to (interact with) one another, and the dot product is one operation you can perform with them. This circles back to “Mathland”: vectors can be multiplied by one another (see the example below).
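For instance, with made-up attribute scores, two movies might be described by the vectors a = (4, 2, 7) and b = (5, 1, 6). Their dot product is a · b = (4 × 5) + (2 × 1) + (7 × 6) = 20 + 2 + 42 = 64.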

(The dot between the two vectors, denoting that they are being multiplied by one another, is where the name “dot product” comes from. Simply put: fancy multiplication.)

As we progressed through grade school math we learned that multiplication can also be denoted by a dot rather than an ×, hence the name: the dot product is a way of multiplying vectors. Once we bravely do the computing, we get a single number that tells us how closely two vectors point in the same direction. You can also picture each vector as a point on a map, like looking up an address in a Thomas Guide: vectors that land near each other describe movies with similar attributes. In our example, it gives us a measure of how similar two movies are. Rather than taking a risk on something a friend recommends, with the help of math you can discover things you would actually enjoy.
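As a rough sketch (the attribute scores are made up, and real systems use far richer features), here is how that comparison might look in NumPy, scaling the dot product by the vectors’ lengths so that bigger numbers don’t automatically win:

```python
import numpy as np

def similarity(a, b):
    """Cosine similarity: the dot product of two vectors,
    scaled by their lengths, giving a score between -1 and 1."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Made-up attribute scores (action, comedy, romance) for three titles
inception = np.array([9.0, 2.0, 1.0])
the_matrix = np.array([8.0, 1.0, 2.0])
notting_hill = np.array([1.0, 7.0, 9.0])

print(similarity(inception, the_matrix))    # close to 1: very similar movies
print(similarity(inception, notting_hill))  # much lower: not so similar
```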

Vectors can also interact with matrices: multiplying a matrix by a vector is one of the workhorse operations in machine learning, and the two only “play nice” (multiply together) when their dimensions line up. Matrices also have handy properties of their own. One of them is the trace: the sum of the entries along the diagonal of a square matrix. Alone, the trace operation is boring, but it offers a simpler notation and is used as an element in other key matrix operations.
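A minimal sketch of both ideas, again with made-up numbers: a matrix of movie attributes multiplied by a vector describing one viewer’s tastes, plus the trace of that matrix.

```python
import numpy as np

# A made-up matrix of attribute scores (action, comedy, romance), one row per movie
movies = np.array([
    [9.0, 2.0, 1.0],
    [8.0, 1.0, 2.0],
    [1.0, 7.0, 9.0],
])

# A made-up vector of how much one viewer cares about each attribute
taste = np.array([0.7, 0.1, 0.2])

# Matrix-vector multiplication: one score per movie for this viewer.
# The shapes have to line up: (3 x 3) times a length-3 vector gives 3 scores.
scores = movies @ taste
print(scores)

# The trace: the sum of the diagonal entries of a square matrix
print(np.trace(movies))  # 9.0 + 1.0 + 9.0 = 19.0
```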

Lastly, there are eigenvalues and eigenvectors. The prefix eigen comes from the German word eigen, meaning “proper” or “characteristic.” Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. Machine learning involves large amounts of data, and this concept lets engineers reduce the data down to what is needed without losing its essential structure, which can be applied to data about movies and TV shows. It is similar to giving someone the long story short without losing the meat and potatoes of it.
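To make that concrete, here is a rough sketch (a bare-bones version of principal component analysis, with made-up numbers) that uses the eigenvectors of a data set’s covariance matrix to squeeze three attributes per movie down to one number, while keeping most of what makes the movies different from one another:

```python
import numpy as np

# Made-up attribute scores (action, comedy, romance), one row per movie
movies = np.array([
    [9.0, 2.0, 1.0],
    [8.0, 1.0, 2.0],
    [1.0, 7.0, 9.0],
    [2.0, 8.0, 8.0],
])

# Center the data and compute its covariance matrix
centered = movies - movies.mean(axis=0)
cov = np.cov(centered, rowvar=False)

# Eigenvalues say how much variation each eigenvector (direction) captures
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# Keep only the direction with the largest eigenvalue
top_direction = eigenvectors[:, np.argmax(eigenvalues)]

# Each movie is now summarized by a single number instead of three
summary = centered @ top_direction
print(summary)
```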

Revisiting basic linear algebra concepts as they’re applied to an everyday activity like watching movies and TV shows might excite more people than the slope of a mountain you may drive up one day. The concepts of linear algebra are crucial for understanding the theory behind machine learning. Vectors and matrices house the data we use to draw comparisons or make predictions. The dot product is the way they often interact with one another, giving us our results, while eigenvalues and eigenvectors let us manipulate data without losing the meat and potatoes of it. These concepts give us a better understanding of how these algorithms work. Although there are other parts of linear algebra used in machine learning, these basic concepts can help us appreciate the next show we find to binge-watch a little more.