This course provides a practical introduction to the rapidly growing field of machine learning: training predictive models that generalize to new data. We start with linear and logistic regression and implement gradient descent, the core engine for training, for these algorithms. With these key building blocks, we work our way up to widely used neural network architectures, focusing on intuition and on implementation with TensorFlow/Keras. While the course centers on neural networks, we also cover key ideas in unsupervised learning and nonparametric modeling.
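To give a flavor of what "implementing gradient descent" means in the first weeks, here is a minimal NumPy sketch for simple linear regression; the toy data, learning rate, and iteration count are illustrative choices, not assignment code:

```python
import numpy as np

# Toy data: y = 2x + 1 plus a little noise (illustrative values only)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
y = 2.0 * X + 1.0 + rng.normal(scale=0.1, size=100)

# Parameters: weight w and bias b, initialized to zero
w, b = 0.0, 0.0
lr = 0.1  # learning rate

for _ in range(500):
    y_hat = w * X + b               # model prediction
    error = y_hat - y
    grad_w = 2 * np.mean(error * X)  # d(MSE)/dw
    grad_b = 2 * np.mean(error)      # d(MSE)/db
    w -= lr * grad_w                 # step against the gradient
    b -= lr * grad_b

print(w, b)  # approaches the true slope (2) and intercept (1)
```

The same loop structure — predict, measure error, compute gradients, update parameters — reappears throughout the course, from logistic regression to deep networks.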
Along the way, short weekly coding assignments will connect the lectures with concrete data and real applications. A more open-ended final project will tie together crucial concepts in experimental design and analysis with modeling and training.
This class meets for one 90-minute class period each week.
All materials for this course are posted on GitHub in the form of Jupyter notebooks.
Course Prerequisites
Core data science courses: research design, storing and retrieving data, exploring and analyzing data.
Undergraduate-level probability and statistics. Linear algebra is recommended.
Programming Prerequisites
Python (v3).
Jupyter and JupyterLab notebooks. You can install them on your computer using pip or Anaconda. More information here.
Git(Hub), including clone/commit/push from the command line. You can sign up for an account here.
OS
Mac, Windows, and Linux are all acceptable.
Textbook
Assignments
Final Project
Week | Lecture | Lecture Materials | Readings | Deadlines (Sunday of the week, 11:59 pm PT) |
---|---|---|---|---|
**Supervised and Unsupervised Learning** | | | | |
May 02-08 | Introduction and Framing | Week 01 | RM (1) | |
May 09-15 | Linear Regression - part 1 | Week 02 | RM (10), feature scaling, more math (1) | Assignment 1 |
May 16-22 | Linear Regression - part 2 | Week 03 | RM (4, 2), Ilin et al. (2021) | Assignment 2 |
May 23-29 | Logistic Regression - part 1 | Week 04 | RM (3, 6 (p. 211-219)), more math (2) | Assignment 3; group, question, and dataset for final project |
May 30 - June 05 | Logistic Regression - part 2 | Week 05 | RM (3, 6 (p. 211-219)), more intuition | Assignment 4 |
June 06-12 | Feedforward Neural Networks | Week 06 | RM (12, 13, 14), activation functions, regularization | Assignment 5 |
June 13-19 | KNN, Decision Trees, and Ensembles | Week 07 | RM (3, 7) | Assignment 6 |
June 20-26 | K-Means and PCA | Week 08 | RM (11) | Assignment 7 |
June 27 - July 03 | Sequence modeling and embeddings; project: baseline presentation | Week 09 | RM (8, 16) | Assignment 8; baseline presentation: slides |
July 04-10 | Convolutional Neural Networks | Week 10 | RM (15), 1D CNN intuition, Yoon Kim (2014) | Assignment 9 |
July 11-17 | Network Architecture and Debugging ML Algorithms | Week 11 | Andrew Ng's advice for applying ML | Assignment 10 |
July 18-24 | Fairness in ML | Week 12 | Suresh and Guttag (2021) | |
July 25-31 | Advanced Topics: RNNs, Transformers, BERT | Week 13 | Raschka et al., ch. 16 (2022) | |
Aug 01-07 | Project: final presentation | | | Final presentation: slides and code |
For the final project you will form a group (3-4 people is ideal; 2-5 people are allowed; one-person groups are not allowed). Grades will be calibrated by group size. Your group can only include members from the section in which you are enrolled.
Do not just re-run an existing code repository; at a minimum, you must demonstrate the ability to perform thoughtful data preprocessing and analysis (e.g., data cleaning, model training, hyperparameter selection, model evaluation).
The topic of your project is completely flexible (see some project ideas below).
Deadlines to remember:
A few project ideas:
Baseline presentation. Your slides should include:
Final presentation. Your slides should include:
Note: ISVC does not allow multiple resubmissions unless I reopen it for you. I recommend uploading your work to ISVC only when you are confident you have a final version. If you need to make changes in ISVC, please add your name and section number here. I will check this document one day before each deadline.
Component | Weight |
---|---|
Participation | 5% |
Assignments | 65% |
Final project | 30% |
Integrating a diverse set of experiences is important for a more comprehensive understanding of machine learning. I will make an effort to read papers and hear from a diverse group of practitioners; still, limits exist on this diversity in the field of machine learning. I acknowledge that there may be both overt and covert biases in the material due to the lens with which it was created. I would like to nurture a learning environment that supports a diversity of thoughts, perspectives, and experiences, and honors your identities (including race, gender, class, sexuality, religion, ability, veteran status, etc.) in the spirit of the UC Berkeley Principles of Community.
To help accomplish this, please contact me or submit anonymous feedback through I School channels if you have any suggestions to improve the quality of the course. If you have a name and/or set of pronouns that you prefer I use, please let me know. If something was said in class (by anyone) or you experience anything that makes you feel uncomfortable, please talk to me about it. If you feel like your performance in the class is being impacted by experiences outside of class, please don't hesitate to come and talk with me. I want to be a resource for you. Also, anonymous feedback is always an option, and may lead me to make a general announcement to the class, if necessary, to address your concerns.
As a participant in teamwork and course discussions, you should also strive to honor the diversity of your classmates.
If you prefer to speak with someone outside of the course, MICS Academic Director Lisa Ho, I School Assistant Dean of Academic Programs Catherine Cronquist Browning, and the UC Berkeley Office for Graduate Diversity are excellent resources. Also see the following link.