Welcome to the 100 Days of Machine Learning Code challenge. By taking this challenge you will push your skills to their peak. It will give you a head start in the machine learning field, you will notice how much knowledge you gain every day, and you will be able to see your progress.

In this guide, we’re going on a journey of 100 days. Each day contains a fun snippet of ML code, and for every day of the challenge we will reveal which materials you need to study and which projects you should practice. The whole challenge is educational and free, and very soon we will release 100 Days of ML Code, the book.

You don’t need a fancy Ph.D. in math. You don’t need to be the world’s best programmer. And you certainly don’t need to pay $10,000 for an expensive bootcamp.

Whether your goal is to become a data scientist, use ML algorithms as a developer, or add cutting-edge skills to your business analysis toolbox, you can pick up applied machine learning skills much faster than you might think. And if you don’t have an interest in this field of AI, that’s not a problem or an excuse: find something you care about, something that doesn’t feel like work, something to which you can dedicate your entire life. Les Brown once said,

“To achieve anything worthwhile in your life, you got to be hungry.”

Be hungry for success, follow your dreams and watch them become your reality.

Four Reasons to Take This Challenge

1. Are you a self-starter?

Do you like to learn with hands-on projects? Are you driven and self-motivated? Can you commit to goals and see them through? If so, you’ll love studying machine learning. You’ll get to solve interesting challenges, tinker with fascinating algorithms, and build an incredibly valuable career skill.

2. Are you tired of seeing expensive courses and bootcamps?

We are too… That’s why we put together this guide of completely free resources anyone can use to learn machine learning. The truth is that most paid courses out there recycle the same content that’s already available online for free. We’ll pull back the curtains and reveal where to find them for yourself.

3. Do you worry about materials going out of date?

Machine learning is a rapidly evolving field. That makes it exciting to learn, but materials can become outdated quickly. We’re going to update this page regularly with the best resources for learning machine learning.

We’ve got a lot of great stuff you’ll like, so let’s dive right in!

4. What if you need more than a plan?

As I built this challenge to be full of things that potentially interest everyone and are useful for everyone at the same time, there are some students who need more than a challenge to achieve more; they need a push toward glory. If you are one of these awesome students, you can send me an email via the contact-us page, and I will be glad to help you anytime.

Before starting the challenge

To be fair, we are not the ones who started this initiative, so we have to give those great people some credit.

(Feb 5th, 2020) Many told me that most days need more than a day to complete, so I am taking steps to simplify them.

(Feb 13th, 2020) Formatted the first week of the challenge. Added more concepts for learners to grasp.

(Feb 19th, 2020) Formatted the first week of the challenge. Made it more beautiful for learners.

(Feb 25th, 2020) Formatted the second week of the challenge. Added more concepts for learners to grasp.

(Mar 3rd, 2020) Formatted the second week of the challenge. Made it more beautiful for learners.

(Mar 10th, 2020) Added days 16 and 17 to the challenge. Hope you have fun with them.

One final word: you might find many versions of the same challenge, all different in their structure and methodologies. You are free to choose any path to follow, but the important thing is to stick with it until you finish it.

Days of the Challenge

For each day of the challenge, you need to dedicate at least one hour to the machine learning field.

After finishing each day’s progress, please support us by sharing your experience on Twitter with #100DaysOfMLCode and #DataIsUtopia.

Day 1: Introduction to machine learning

By starting Day 1, you are taking on the challenge and committing yourself to finish it, and I hope you will do that.

On this day you have to read about machine learning, set up everything needed in your environment to complete this challenge, and choose the projects you will work on; we will give you some suggestions.

What you need to learn is the following:

Personal Thoughts: I hope this will be exciting for you. It will help you learn machine learning more effectively. After this day you will know the starting point and the types of machine learning, and you will face fewer errors caused by environment setup.

Day 2: Introducing the Data You Will Deal with Throughout the Challenge.

New day means a new learning task in our challenge 😄.

Today we are going to learn what data is, what its types are, and how to understand data. These are important questions, and you need good answers for them. So we will find answers to these questions today, by learning everything about data: the data types we can face in real-life problems and how to understand that data.

What we are going to learn today is:

Personal Thoughts: Understanding data is one step toward becoming a better machine learning engineer. You need the skills to understand the ideas inside the data, and very soon you will also have the skills to manipulate the data and extract your answers from it. But for now, you need to understand the problems you are going to face throughout the challenge.

Day 3: Linear Algebra

Today we are going to do the dirty work. After understanding the nature of both the problem and the field we are dealing with, and before learning how to prepare data for your machine learning and do the cleaning and preparation for a selected problem, we need to review and learn some concepts to fully understand the picture.

So, first we will start with linear algebra: since we will have to deal with data, we need to understand how data operations work and how machine learning interprets them.

For fast review of linear algebra, you can view:

If you feel you need a full course, that is too long to finish in one day, but you can visit:

Personal Thoughts: The concepts of Linear Algebra are very important and crucial for understanding the theory behind Machine Learning, especially for Deep Learning, and that’s why I consider it one of the building blocks of both machine learning and deep learning. They give you better intuition for how algorithms really work under the hood, which enables you to make better decisions, and then you can enhance any model you need.
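To make this concrete, here is a minimal NumPy sketch of the basic operations you will keep meeting throughout the challenge (the arrays here are made up purely for illustration):

```python
import numpy as np

# A tiny "dataset": 3 samples, 2 features each (made-up numbers).
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
w = np.array([0.5, -1.0])   # a weight vector, as used by linear models

# Matrix-vector product: one prediction per sample in a single operation.
predictions = X @ w

# Dot product, norm, and transpose -- the bread and butter of ML math.
dot = np.dot(w, w)
norm = np.linalg.norm(w)
Xt_X = X.T @ X              # the Gram matrix, used e.g. in the normal equations
```

Once these operations feel natural, the equations in ML papers start reading like code.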

Day 4: Probability and Statistics at a Glance.

After finishing linear algebra, we need to take on another crucial building block of machine learning, which is statistics, and to understand statistics you need to understand probability. So, we are going to cut through both of these hard topics. I know you would prefer something more delicious, but trust me, these are important for becoming a great machine learning engineer.

Personal Thoughts: I think of statistics as a collection of tools you can use to get answers to important questions about data. Statistics is generally considered a prerequisite to the field of applied machine learning. We need statistics to help transform observations into information and to answer questions about samples of observations.

Note: for those who want to watch Khan Academy, you can contact me and I will send you selected parts based on your knowledge and expertise.

Day 5: Steps toward knowledge, Clean and Visualize Data.

Today we are going to do some data-science work: cleaning and visualizing data. So, what we are going to learn today is:

Read Chapter 5 of Machine Learning Pipeline (free book). Clean and prepare one of the following datasets: the Titanic dataset or the Iris dataset.



Personal Thoughts: This will definitely be exciting for you; data preparation and cleaning is a satisfying kind of task. I can’t lie to you, it might be hard and you may face some errors and obstacles, but do not worry: everyone faces such errors. Just keep going and search for an answer.
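As a tiny taste of the cleaning step, here is a minimal sketch of two typical operations, mean imputation and min-max scaling. The toy column below is made up, standing in for something like the “Age” column of the Titanic dataset:

```python
import numpy as np

# A toy feature column with a missing value (np.nan), standing in for
# something like the "Age" column of the Titanic dataset.
ages = np.array([22.0, 38.0, np.nan, 35.0])

# Step 1: impute missing values with the mean of the observed values.
mean_age = np.nanmean(ages)
cleaned = np.where(np.isnan(ages), mean_age, ages)

# Step 2: min-max scale the column into the [0, 1] range.
scaled = (cleaned - cleaned.min()) / (cleaned.max() - cleaned.min())
```

On a real dataset you would apply the same ideas column by column, choosing an imputation strategy per column.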

Day 6: The Most Famous Algorithm, The Gradient Descent

Today we will study one of the most important tools in machine learning, and especially in deep learning: gradient descent. Learning and implementing this algorithm is one step toward becoming a great machine learning engineer.

It’s up to you to finish more or less than this progress, but it is recommended to finish the following items from the course list:

Personal Thoughts: If you asked me which one algorithm is used in almost every machine learning model, my answer would be gradient descent. There are a few variations of the algorithm, but this, essentially, is how any ML model learns. Without it, ML wouldn’t be where it is right now.

Also, taking this algorithm from theory to an implementation from scratch is very valuable for fully grasping the needed knowledge; besides, the concepts are really interesting, showing how this algorithm emulates the learning process.
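As a taste of what you will implement, here is a minimal sketch of gradient descent minimizing a simple one-dimensional function, f(x) = (x − 3)², whose minimum is at x = 3 (the function, learning rate, and step count are made up for illustration):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient: x <- x - lr * grad(x)."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is f'(x) = 2 * (x - 3).
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

The same loop, applied to the gradient of a model’s loss function with respect to its weights, is what trains most ML models.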

Day 7: Your First Algorithm, Linear Regression

Today we are going to study and implement your first machine learning algorithm from scratch: linear regression. You will build it without using any library except NumPy for array operations; after building it, you will use it to predict on simple data.

So, today we will take the linear regression algorithm from theory to implementation. We will do the following list:

Personal Thoughts: The built-in tools or models we use are also just algorithms; they are not rocket science. Learning ML becomes more fun when these algorithms are implemented from scratch; it is really a great experience.

And as I said, taking an algorithm from theory to an implementation from scratch is very valuable for fully grasping the needed knowledge; besides, this algorithm is used in many real-world cases, so it is useful to understand and implement it.
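A minimal from-scratch sketch of what today’s implementation might look like, using only NumPy as the text suggests (the toy data, learning rate, and epoch count are made up for illustration):

```python
import numpy as np

def fit_linear_regression(X, y, lr=0.01, epochs=2000):
    """Fit y ~ X @ w + b by gradient descent on the mean squared error."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        error = X @ w + b - y              # residuals
        w -= lr * (2 / n) * (X.T @ error)  # gradient of the MSE w.r.t. w
        b -= lr * (2 / n) * error.sum()    # gradient of the MSE w.r.t. b
    return w, b

# Toy data generated from y = 2x + 1 (made up for illustration).
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])
w, b = fit_linear_regression(X, y)
```

Notice that the heart of the loop is exactly the gradient descent update from Day 6, applied to the MSE loss.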

Day 8: Be Selective, with Data Sampling

After finishing your first machine learning model, you need to understand an important concept: data sampling. Go and read Chapter 6: Data Resampling from Machine Learning Pipeline (free book). You will learn how to divide your data into a train/test split, and why this is important for your model validation and evaluation.

Personal Thoughts: First, the course will give a good experience and that’s due to the programming exercises. Second, reading the book will make sure that you understand all the knowledge and gain the experience in this area.
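A train/test split is simple enough to sketch from scratch. A minimal version using NumPy (the function name and the 25% test ratio here are just illustrative):

```python
import numpy as np

def train_test_split(X, y, test_ratio=0.25, seed=0):
    """Shuffle the indices, then cut them into a test part and a train part."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_ratio)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return X[train_idx], X[test_idx], y[train_idx], y[test_idx]

# Ten toy samples with two features each.
X = np.arange(20).reshape(10, 2)
y = np.arange(10)
X_train, X_test, y_train, y_test = train_test_split(X, y)
```

The key point is that the test rows are held out: the model never sees them during training, so the test error estimates how the model generalizes.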

Day 9: From Linear to Multiple Linear Regression.

After learning simple linear regression, how to sample from data, and how to divide it into train/test sets, we can now move to a more complex structure: multiple linear regression. Instead of dealing with simple one-dimensional input, we will move to complex data with multi-dimensional input.

Personal Thoughts: Going from simple to multiple linear regression will strengthen your understanding of this family of models (linear models). As we continue the challenge we will see more linear models, so you will be somewhat confident, as you will truly understand linear models and how they work.
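With multi-dimensional input, the same idea carries over. One compact way to fit it is the least-squares solution; a minimal sketch using NumPy’s solver, on made-up data generated from known weights:

```python
import numpy as np

# Toy data with two input features, generated from y = 1*x1 + 2*x2 + 3.
X = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [2.0, 3.0]])
y = X @ np.array([1.0, 2.0]) + 3.0

# Append a column of ones so the intercept is learned like any other weight,
# then solve the least-squares problem X_aug @ theta ~= y directly.
X_aug = np.hstack([X, np.ones((len(X), 1))])
theta, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
w, b = theta[:2], theta[2]
```

Gradient descent from Day 7 would reach the same answer; the closed-form solve is just a convenient shortcut for small problems.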

Day 10: From Regression to Classification, with Logistic Regression

Today’s progress goes beyond the regression problem: we will start a new type of machine learning model, classification algorithms. So today we are going to transform linear regression into logistic regression.

Personal Thoughts: Of course this day is different, as we were used to regression algorithms and now we are on classification. Also, there are lots and lots of math equations and code implementation. But as I told you, you are now an expert in linear models and you truly understand how they work; you only need to generalize the idea to classification too.
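The “transformation” is small in code: pass the linear model’s output through a sigmoid and train on the log loss instead of the MSE. A minimal from-scratch sketch (toy data and hyperparameters made up for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic_regression(X, y, lr=0.5, epochs=3000):
    """Fit P(y=1|x) = sigmoid(X @ w + b) by gradient descent on the log loss."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - y)) / n   # gradient of the log loss w.r.t. w
        b -= lr * (p - y).sum() / n     # gradient of the log loss w.r.t. b
    return w, b

# Linearly separable toy data: class 1 iff the feature is above 2.5.
X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
w, b = fit_logistic_regression(X, y)
preds = (sigmoid(X @ w + b) >= 0.5).astype(float)
```

Compare this with the Day 7 loop: the gradient has the same shape, only the prediction function and loss changed.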

Day 11: Set your Decision, with Decision Trees

You can consider decision trees as a flowchart-like structure in which each internal node represents a “test” on an attribute (e.g. whether a coin flip comes up heads or tails), each branch represents the outcome of the test, and each leaf node represents a class label.

Personal Thoughts: Understanding and implementing decision trees is important, as you can see the true difference between how linear models work and how decision trees work. Later you will see an upgrade to decision trees that makes this type of model very awesome.
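To make the flowchart picture concrete, here is a minimal sketch of the simplest possible tree: a single internal node, often called a decision stump (the threshold search and the toy data are made up for illustration):

```python
import numpy as np

def fit_stump(x, y):
    """One internal node of a decision tree: a threshold 'test' on a feature.

    Samples go left or right of the threshold, and each side predicts the
    majority class label of the training samples that landed there.
    """
    best = None
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        left_label = int(round(left.mean()))              # majority of 0/1 labels
        right_label = int(round(right.mean())) if len(right) else left_label
        errors = (left != left_label).sum() + (right != right_label).sum()
        if best is None or errors < best[0]:
            best = (errors, t, left_label, right_label)
    return best[1:]   # (threshold, left_label, right_label)

# Toy 1-D data: class 0 below, class 1 above (made-up numbers).
x = np.array([1.0, 2.0, 3.0, 6.0, 7.0, 8.0])
y = np.array([0, 0, 0, 1, 1, 1])
threshold, left_label, right_label = fit_stump(x, y)
```

A full decision tree just applies this split search recursively to each resulting branch until the leaves are pure enough.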

Day 12: Improve Your Model Generalization with Regularization

Today we will learn and implement a new machine learning model: ridge regression. We will also learn new techniques, the L2 norm and the L1 norm; these techniques are important for reducing overfitting and stabilizing your machine learning model.

Personal Thoughts: Regularization is a great tool for making a model dumber (i.e., putting some constraints on it). If there is noise in the training data, the estimated coefficients won’t generalize well to future data. This is where regularization comes in: it can be motivated as a technique to improve the generalizability of a learned model.
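Ridge regression has a neat closed form: it is ordinary least squares with an L2 penalty λ‖w‖² added to the loss, which turns the normal equation into (XᵀX + λI)w = Xᵀy. A minimal sketch (toy data and a hand-picked λ, both made up for illustration):

```python
import numpy as np

def fit_ridge(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam * I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Toy data roughly following y = 3x, with a little noise baked in.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([3.1, 5.9, 9.2, 11.8])

w_ols = fit_ridge(X, y, lam=0.0)     # lam = 0 recovers ordinary least squares
w_ridge = fit_ridge(X, y, lam=10.0)  # a larger lam shrinks the weights
```

The shrinkage you can observe here (the ridge coefficient is smaller than the OLS one) is exactly the constraint that fights overfitting.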

Day 13: For a Learning Model, You Need a Learning Curve

Today’s progress is to learn and implement learning curves. You will learn some very important terms today.

We will do the following list:

Personal Thoughts: Learning curves are very important for model diagnosis, and they are really helpful in determining whether your machine learning model is overfitting or underfitting.
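To see what a learning curve actually computes, here is a minimal sketch: record training and validation error for a plain least-squares model at growing training-set sizes (the synthetic data and the size grid are made up for illustration):

```python
import numpy as np

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Synthetic regression data: 3 features, known weights, a little noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)
X_train, y_train = X[:150], y[:150]
X_val, y_val = X[150:], y[150:]

train_sizes = [10, 50, 150]
train_errors, val_errors = [], []
for n in train_sizes:
    # Fit on the first n training samples, then score on train and validation.
    w, *_ = np.linalg.lstsq(X_train[:n], y_train[:n], rcond=None)
    train_errors.append(mse(w, X_train[:n], y_train[:n]))
    val_errors.append(mse(w, X_val, y_val))
# Plotting train_errors and val_errors against train_sizes gives the curve.
```

A large, persistent gap between the two curves signals overfitting; two high curves that meet early signal underfitting.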

Day 14: Is It a Holiday? No, It Is a Revision Day.

After two hard weeks, we cannot simply take a rest. We are just going to warm up with some mathematical foundations that are needed later. So, today’s progress is the following:

Personal Thoughts: Having a good conceptual grounding in mathematics is very important in ML. Even though you need some rest, believe me, you need to understand these concepts; they will make it easier to understand the next algorithms in machine learning. So, happy revision day.

Day 15: Support Vector Machines

Today’s Progress: reviewed linear algebra and started with Support Vector Machines (SVMs). A Support Vector Machine is a discriminative classifier formally defined by a separating hyperplane. In other words, given labeled training data (supervised learning), the algorithm outputs an optimal hyperplane which categorizes new examples.

Personal Thoughts: The support vector machine is a pretty amazing and very powerful algorithm that goes beyond the linearity of data, finding complex patterns using the kernel trick, and it is a good classifier because of the margin.
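To get a feel for the margin idea before the kernel trick, here is a minimal sketch of a linear SVM trained with subgradient descent on the hinge loss (the toy data and hyperparameters are made up for illustration; a real SVM solver would use a dedicated optimizer):

```python
import numpy as np

def fit_linear_svm(X, y, lr=0.05, lam=0.01, epochs=2000):
    """Subgradient descent on the regularized hinge loss:
    lam * ||w||^2 + mean(max(0, 1 - y * (X @ w + b))), labels y in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1   # samples inside the margin, or misclassified
        grad_w = 2 * lam * w - (X[viol] * y[viol, None]).sum(axis=0) / n
        grad_b = -y[viol].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Linearly separable toy data with labels in {-1, +1}.
X = np.array([[1.0], [2.0], [4.0], [5.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
w, b = fit_linear_svm(X, y)
preds = np.sign(X @ w + b)
```

Only the samples on or inside the margin contribute to the gradient; those are the support vectors that give the algorithm its name.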

Day 16: Build a Non-Naive Classifier with Naive Bayes

Naive Bayes classifiers are a collection of classification algorithms based on Bayes’ Theorem. It is not a single algorithm but a family of algorithms where all of them share a common principle, i.e. every pair of features being classified is independent of each other.

Personal Thoughts: This is one of the important ML algorithms to study, as it is based on a probabilistic assumption about the data. And somehow it works very well with text data.
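The independence principle makes the implementation surprisingly short. Here is a minimal sketch of a Gaussian Naive Bayes classifier (the toy numbers and the small variance floor are made up for illustration):

```python
import numpy as np

def fit_gaussian_nb(X, y):
    """Per class, store the prior and the per-feature mean/variance.

    'Naive' means each feature is treated as independent given the class.
    """
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (len(Xc) / len(X), Xc.mean(axis=0), Xc.var(axis=0) + 1e-9)
    return params

def predict_gaussian_nb(params, x):
    """Pick the class with the highest log posterior (log prior + sum of log Gaussians)."""
    best_class, best_score = None, -np.inf
    for c, (prior, mu, var) in params.items():
        log_likelihood = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        score = np.log(prior) + log_likelihood
        if score > best_score:
            best_class, best_score = c, score
    return best_class

# Two well-separated 1-D classes (toy numbers).
X = np.array([[1.0], [1.2], [0.8], [5.0], [5.2], [4.8]])
y = np.array([0, 0, 0, 1, 1, 1])
params = fit_gaussian_nb(X, y)
```

For text data, you would swap the Gaussian likelihood for a multinomial one over word counts; the Bayes-rule skeleton stays the same.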

Day 17: From Trees to Forests

The random forest is a classification algorithm consisting of many decision trees. It uses bagging and feature randomness when building each individual tree to try to create an uncorrelated forest of trees whose prediction by committee is more accurate than that of any individual tree.

Personal Thoughts: Fortunately, with libraries such as Scikit-Learn, it’s now easy to implement hundreds of machine learning algorithms in Python. It’s so easy that we often don’t need any underlying knowledge of how the model works in order to use it. While knowing all the details is not necessary, it’s still helpful to have an idea of how a machine learning model works under the hood. That’s why I always recommend building the ML algorithms from scratch.
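To see both ingredients (bagging and feature randomness) at work, here is a minimal from-scratch sketch of a “mini forest”. For brevity each tree is just a one-split stump, and the toy data and tree count are made up for illustration:

```python
import numpy as np

def fit_stump(x, y):
    """Best single-threshold split of a 1-D feature (majority label per side)."""
    best = None
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        left_label = int(round(left.mean()))
        right_label = int(round(right.mean())) if len(right) else left_label
        err = (left != left_label).sum() + (right != right_label).sum()
        if best is None or err < best[0]:
            best = (err, t, left_label, right_label)
    return best[1:]

def fit_forest(X, y, n_trees=25, seed=0):
    """Bagging + feature randomness: each 'tree' (a stump here, for brevity)
    sees a bootstrap sample of the rows and one randomly chosen feature."""
    rng = np.random.default_rng(seed)
    trees = []
    for _ in range(n_trees):
        rows = rng.integers(0, len(X), size=len(X))   # bootstrap sample
        feat = rng.integers(0, X.shape[1])            # random feature choice
        t, ll, rl = fit_stump(X[rows, feat], y[rows])
        trees.append((feat, t, ll, rl))
    return trees

def predict_forest(trees, X):
    """Majority vote over all the trees' predictions."""
    votes = np.array([[ll if xi[f] <= t else rl for (f, t, ll, rl) in trees]
                      for xi in X])
    return (votes.mean(axis=1) >= 0.5).astype(int)

# Toy data: both features carry the class signal.
X = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
              [5.0, 5.1], [5.2, 4.9], [4.8, 5.0]])
y = np.array([0, 0, 0, 1, 1, 1])
forest = fit_forest(X, y)
accuracy = float((predict_forest(forest, X) == y).mean())
```

A real random forest grows full trees and samples a feature subset at every split, but the committee-of-decorrelated-trees idea is exactly this.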

Day 18: Distinguish between Boosting vs Bagging

Today’s progress stacks on top of yesterday’s: we will learn about different types of ensemble methods, such as boosting and bagging, which will open up new types of learning algorithms to study.

Personal Thoughts: Understanding the difference between ensemble methods is needed; you saw what a huge step random forests were compared to decision trees, so what about the other types?
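The key contrast: bagging trains each learner independently on a bootstrap sample, while boosting trains them sequentially on re-weighted data that emphasizes past mistakes. A minimal AdaBoost-style sketch with decision stumps (the toy data and round count are made up for illustration):

```python
import numpy as np

def fit_weighted_stump(x, y, w):
    """Best threshold and polarity for labels in {-1, +1} under sample weights w."""
    best = None
    for t in np.unique(x):
        for polarity in (1, -1):
            pred = np.where(x <= t, -polarity, polarity)
            err = w[pred != y].sum()
            if best is None or err < best[0]:
                best = (err, t, polarity)
    return best

def adaboost(x, y, n_rounds=5):
    """Boosting: each round refits on RE-WEIGHTED data, focusing on past mistakes.
    (Contrast with bagging, where each learner sees an independent bootstrap sample.)"""
    n = len(x)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(n_rounds):
        err, t, polarity = fit_weighted_stump(x, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # this stump's vote strength
        pred = np.where(x <= t, -polarity, polarity)
        w = w * np.exp(-alpha * y * pred)       # up-weight the mistakes
        w = w / w.sum()
        ensemble.append((alpha, t, polarity))
    return ensemble

def predict_boosted(ensemble, x):
    score = sum(alpha * np.where(x <= t, -polarity, polarity)
                for alpha, t, polarity in ensemble)
    return np.sign(score)

# A pattern no single threshold can classify: -, -, +, +, -, -.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([-1, -1, 1, 1, -1, -1])
ens = adaboost(x, y)
accuracy = float((predict_boosted(ens, x) == y).mean())
```

No individual stump can fit this pattern, yet the weighted committee can; that sequential error-correction is what distinguishes boosting from bagging.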

Day 19

Learned about different types of naive Bayes classifiers. Also started the lectures by Bloomberg. The first one in the playlist was Black Box Machine Learning. It gives a whole overview of prediction functions, feature extraction, learning algorithms, performance evaluation, cross-validation, sample bias, nonstationarity, overfitting, and hyperparameter tuning.

Day 20

Using the Scikit-Learn library, implemented the SVM algorithm along with a kernel function, which maps our data points into a higher dimension to find the optimal hyperplane.

Day 21

Completed the whole of Week 1 and Week 2 in a single day. Learned logistic regression as a neural network.

Day 22

Completed Course 1 of the deep learning specialization. Implemented a neural net in Python.

Day 23

Started Lecture 1 of 18 of Caltech’s Machine Learning Course – CS 156 by Professor Yaser Abu-Mostafa. It was basically an introduction to the upcoming lectures. He also explained Perceptron Algorithm.

Day 24

Completed the Week 1 of Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization.

Day 25

Watched some tutorials on how to do web scraping using Beautiful Soup in order to collect data for building a model.

Day 26

Lecture 2 of 18 of Caltech’s Machine Learning Course – CS 156 by Professor Yaser Abu-Mostafa. Learned about Hoeffding Inequality.

Day 27

Day 28

Lecture 3 of the Bloomberg ML course introduced some of the core concepts, like input space, action space, outcome space, prediction functions, loss functions, and hypothesis spaces.

Day 29

Check the code here.

Final words…

As we said earlier, we will keep updating this page as we can. You can support us by commenting with your feedback; if there’s something missing, do not hesitate to tell us.

So, we are always here to help you; just say the word and we will be here for you.

Do not forget to check our books, and we are pleased to hear your feedback.