It’s been a while since my last article (work got in the way), but with summer approaching, Lenny and I are going full speed. Expect articles on interesting machine learning algorithms (e.g. recurrent and convolutional neural networks), our recent work using machine learning to analyze fMRI scans, and new research papers from institutions/companies like Google’s DeepMind that we’ll be learning about on our trip to ICML 2016 next month. Keep your eyes peeled for some more philosophical write-ups on AI, too!


Today I want to build on my very first article, which was about logistic regression. In particular, I want to discuss the statistical/probabilistic interpretation of logistic regression, which I felt was missing from the explanations and lectures in certain online courses like Andrew Ng’s Machine Learning course (which is still wonderful). I will discuss the intuition behind the logistic regression model formulated in that previous article.
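To anchor the discussion, here is a minimal sketch of the probabilistic reading of logistic regression: the model treats its output as P(y = 1 | x), obtained by passing a linear combination of the features through the sigmoid function. The weights below are made up purely for illustration; `predict_proba` is a hypothetical helper name, not from the original article.

```python
import math

def sigmoid(z):
    # Squashes any real number into (0, 1), so the output can be read as a probability.
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(x, w, b):
    # Logistic regression models P(y = 1 | x) as sigmoid(w . x + b).
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return sigmoid(z)

# Illustrative (made-up) weights and input.
w, b = [2.0, -1.0], 0.5
p = predict_proba([1.0, 3.0], w, b)
print(round(p, 4))  # the probability the model assigns to class 1
```

Because the sigmoid maps the whole real line into (0, 1), the raw linear score can be any magnitude while the output still behaves like a probability, which is the crux of the statistical interpretation discussed below.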

UPDATE: The work presented in this article was part of my submission for my school mathematics coursework. Since I’ve submitted it and don’t want to be caught plagiarizing myself (heh), I’ve replaced the rest of the article with images of each page of the PDF. You can skip the intro and the conclusion, as well as the visualization-through-programming section.

Once I get my IB results — around July — I’ll put the post back up.