If you don’t know what machine learning is, just know this from Francois Chollet (creator of Keras)’s book “Deep Learning with Python”:

Classical programming vs. Machine learning

After attending the AI Frontiers conference at the beginning of this year, I was amazed, fascinated, and befuddled by what machine learning, deep learning, and all of the associated buzzwords actually mean at an implementation level. I wanted to learn more. So, on a whim, I downloaded the TWiML podcast to listen to during my commute and happened to catch an interview with Siraj Raval. Next thing you know, I had checked out Siraj’s YouTube channel and followed him on Twitter. On Twitter, he kept talking about big news coming in a few days, and it turned out he was co-creating Udacity’s Deep Learning Foundation course (a MOOC). I was excited by Siraj’s and Mat’s intro video, and I immediately signed up and waited with trepidation.

The good part about the course was that there was a weekly schedule of lessons and projects. As I keep saying to friends and colleagues, nothing in the modern world ever gets done without a deadline (don’t tell your boss that).

The bad part was that the course was literally being built while we were enrolled, so every week we would see a mad rush by the instructors to write and create the content for the upcoming week. That was okay by me: getting introduced to a topic that has only become feasible in recent years, made accessible to people who don’t have a Ph.D. in machine learning, was exciting, and I was grateful.

In the first few weeks, we dove into Anaconda (I’ve been doing Python for 10 years and had never heard of it), Jupyter notebooks (again, I had never paid attention to or used them before), and started learning about perceptrons and neural networks. I was lost in those first few weeks. The course was advertised as 3 hours/week, which was clearly insufficient; I had to spend ~15 hours/week to catch up on the course and make sense of it all.
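To give a flavor of where the course starts, here is a minimal perceptron sketch in NumPy, learning the logical AND function with the classic update rule. This is my own illustration, not course code, and the learning rate and epoch count are arbitrary:

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([0, 0, 0, 1])                      # AND labels

weights = np.zeros(2)
bias = 0.0
lr = 0.1  # learning rate (arbitrary choice)

for epoch in range(20):
    for xi, target in zip(X, y):
        # Step activation: fire if the weighted sum crosses the threshold.
        prediction = int(np.dot(weights, xi) + bias > 0)
        error = target - prediction
        weights += lr * error * xi  # nudge weights toward the target
        bias += lr * error

print(weights, bias)  # a separating line for AND, e.g. [0.2, 0.1], -0.2
```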

Like a tortoise, I slowly caught up, and by reading Andrew Trask’s brilliant introductory book, the course’s prescribed textbook, I started understanding a little. We started off mostly with supervised learning, where we provide the training data set along with the expected output. The lessons shifted into higher gear with convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The way I understood it: CNNs are useful for working on a full input such as an individual image, because several layers extract and condense patterns into a compact representation of the whole input. RNNs are useful for sequences with dependencies, such as text, where a sentence can depend on a previous sentence.
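To make that contrast concrete, here are two minimal model definitions using the Keras API. These are my own sketches, not course code, and the layer sizes and input shapes are arbitrary:

```python
from tensorflow.keras import layers, models

# CNN: stacked convolutions extract and condense spatial patterns from a
# full 28x28 grayscale image into a compact representation.
cnn = models.Sequential([
    layers.Conv2D(16, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])

# RNN: an LSTM reads a sequence of 100 word indices one step at a time,
# carrying state forward so later steps can depend on earlier ones.
rnn = models.Sequential([
    layers.Embedding(input_dim=10000, output_dim=32, input_length=100),
    layers.LSTM(64),
    layers.Dense(1, activation='sigmoid'),
])
```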

Whenever motivation was low, Siraj’s videos kept the enthusiasm and fascination flowing!

The projects also kept me going throughout the course, because that’s where the understanding is really put to the test. Since I was taking copious notes during the course, I was forced to pay attention to the details, and that helped a lot during the projects.

The lessons, combined with the great idea of a forum and dedicated forum mentors who guide you through questions about both lessons and projects, made for a perfect learning environment. I can’t thank the forum mentors enough.

The last topic of the course was generative adversarial networks (GANs), a type of unsupervised learning and a relatively recent concept: the paper came out in 2014! GANs apply game theory to make two neural networks compete with each other. The generator creates new patterns, and the discriminator (trained on real data) decides whether they are realistic enough, forcing the generator to produce realistic data after sufficient training.
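Here is a minimal sketch of that generator/discriminator wiring in Keras. This is my own illustration, not the course’s code; the network sizes are arbitrary and the shapes assume 28x28 grayscale images:

```python
import numpy as np
from tensorflow.keras import layers, models

latent_dim = 100  # size of the random noise vector (arbitrary)

# Generator: maps random noise to a fake image.
generator = models.Sequential([
    layers.Dense(128, activation='relu', input_shape=(latent_dim,)),
    layers.Dense(28 * 28, activation='tanh'),
    layers.Reshape((28, 28, 1)),
])

# Discriminator: trained on real data, judges real vs. generated.
discriminator = models.Sequential([
    layers.Flatten(input_shape=(28, 28, 1)),
    layers.Dense(128, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])
discriminator.compile(optimizer='adam', loss='binary_crossentropy')

# Combined model: freeze the discriminator so that training it only
# updates the generator, pushing it to produce images labeled "real".
discriminator.trainable = False
gan = models.Sequential([generator, discriminator])
gan.compile(optimizer='adam', loss='binary_crossentropy')

def train_step(real_images, batch_size=32):
    """One adversarial round; real_images would come from the dataset."""
    noise = np.random.normal(size=(batch_size, latent_dim))
    fake_images = generator.predict(noise, verbose=0)
    # The discriminator learns to separate real (1) from fake (0)...
    discriminator.train_on_batch(real_images, np.ones((batch_size, 1)))
    discriminator.train_on_batch(fake_images, np.zeros((batch_size, 1)))
    # ...while the generator learns to fool it.
    gan.train_on_batch(noise, np.ones((batch_size, 1)))
```

The trick is the compile order: the discriminator is compiled before being frozen, so it still learns on its own batches, while the combined model only updates the generator.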

Unfortunately, life happened, and I was delayed by a month in starting the last project, so it took immense effort to get back into the groove. The project was to generate faces! Imagine that! That invigorated me, and I was so glad to finally see this screen:

[Screenshot of the final project screen]

There were plenty of other concepts we learned along the way, such as autoencoders and reinforcement learning; it would take an entire article to list them all.

I’m thankful to Udacity for this course. I could see that not all students were satisfied with it, but it was oh so worth it for me. Getting introduced to data science, machine learning, and deep learning in a few months has been a gruelling and happy experience.

I signed off in the course Slack community with this:

[Screenshot of my sign-off message]