This course provides a practical introduction to the rapidly growing field of machine learning: training predictive models that generalize to new data. We start with linear and logistic regression and implement gradient descent, the core engine for training, for these algorithms. With these key building blocks, we work our way up to widely used neural network architectures, focusing on intuition and on implementation with TensorFlow/Keras. While the course centers on neural networks, we also cover key ideas in unsupervised learning and nonparametric modeling.
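To make the core idea concrete, here is a minimal sketch of gradient descent fitting a one-variable linear regression with NumPy. This is an illustration only, not course material; the toy data, learning rate, and iteration count are arbitrary choices for the example:

```python
import numpy as np

# Toy data: y = 2x + 1 plus a little noise
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
y = 2.0 * X + 1.0 + rng.normal(scale=0.05, size=100)

# Parameters of the linear model y_hat = w*x + b
w, b = 0.0, 0.0
lr = 0.1  # learning rate

for _ in range(500):
    y_hat = w * X + b
    error = y_hat - y
    # Gradients of the mean squared error with respect to w and b
    grad_w = 2.0 * np.mean(error * X)
    grad_b = 2.0 * np.mean(error)
    # Step downhill along the gradient
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # both parameters approach the true values 2.0 and 1.0
```

The same loop, with different gradients, is what trains the neural networks covered later in the course; TensorFlow/Keras automates the gradient computation and the update step.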
Along the way, weekly short coding assignments and a midterm exam will connect lectures with concrete data and real applications. A more open-ended final project will tie together crucial concepts in experimental design and analysis with models and training.
This class meets for one 90-minute class period each week.
All materials for this course are posted on bCourses.
Course Prerequisites
MIDS 1A. Fundamentals of Linear Algebra
DS 201. Research Design and Applications for Data and Analysis
DS 203. Statistics for Data Science
Programming Prerequisites
Python (v3).
Jupyter and JupyterLab notebooks. You can install them on your computer using pip or Anaconda. More information here.
Git(Hub), including clone/commit/push from the command line. You can sign up for an account here.
If you have an Apple-silicon (M1) Mac, this .sh script will install everything for you (credit goes to one of my former students, Kevin Stallone).
OS
Mac, Windows, and Linux are all acceptable.
(My preferred) Textbook
Assignments
Midterm exam
Final Project
Week | Lecture | Live Session Materials | Readings | Deadlines (Sunday of the week, 11:59 pm PT) |
---|---|---|---|---|
Supervised and Unsupervised Learning | | | | |
Jan 06-12 | Introduction and Framing | File 1 | RM 1 (pp. 1-17) | Assignment 0 |
Jan 13-19 | Linear Regression: Gradient Descent; Intro to TensorFlow | File 1, File 2 | RM 2 (pp. 36-52), RM 10 (pp. 315-345), RM 13 (pp. 425-462), feature scaling, more math (1) | Assignment 1 |
Jan 20-26 | Feature Engineering | | RM 4 (pp. 109-127), Ilin et al. (2021) | Assignment 2 |
Jan 27 - Feb 02 | Logistic Regression: Binary | | RM 3, RM 6 (pp. 211-219), more math (2) | Assignment 3; group, question, and dataset for final project |
Feb 03-09 | Logistic Regression: Multiclass | | RM 3, RM 6 (pp. 211-219), more intuition | Assignment 4 |
Feb 10-16 | Feedforward Neural Networks | | RM 12, 13, 14; activation functions; regularization | Assignment 5 |
Feb 17-23 | KNN, Decision Trees, and Ensembles | | RM 3, 7; Psaltos et al. (2022) | Assignment 6; midterm exam |
Feb 24 - Mar 02 | Unsupervised Learning: K-Means and PCA; Project: baseline presentation | | RM 11 | Assignment 7; baseline presentation: slides |
Mar 03-09 | Embeddings for Text | | RM 8, 16 | Assignment 8 |
Mar 10-16 | Convolutional Neural Networks | | RM 15; 1D CNN intuition; Yoon Kim (2014) | Assignment 9 |
Mar 17-23 | Network Architecture and Debugging ML Algorithms | | Andrew Ng's advice for Applying ML | Assignment 10 |
Mar 24-30 | Spring Break | | | |
Mar 31 - Apr 06 | Fairness in ML | | Suresh and Guttag (2021) | |
Apr 07-13 | Advanced Topics: RNN/LSTMs, Transformers, BERT | | Raschka et al., ch. 16 (2022) | |
Apr 14-20 | Project: final presentation | | | Final presentation: slides and code |
How do I take the exam?
What is the best way to prepare for the exam?
Can I use ChatGPT?
How can I see my grade?
What is the best way to access the exam solutions?
For the final project you will form a group (at most 4 projects in total, i.e., roughly 4 students per team). Your group can only include members from the section in which you are enrolled.
Grades will be calibrated by group size and individual contributions.
Do not just re-run an existing code repository; at the minimum, you must demonstrate the ability to perform thoughtful data preprocessing and analysis (e.g., data cleaning, model training, hyperparameter selection, model evaluation).
The topic of your project is entirely up to you (see some project ideas below).
Deadlines to remember:
A few project ideas (from my Summer 2022 students):
Baseline presentation. Your slides should include:
Final presentation. Your slides should include:
Participation | 5% |
Assignments | 45% |
Midterm | 20% |
Final project | 30% |
Integrating a diverse set of experiences is important for a more comprehensive understanding of machine learning. I will make an effort to read papers from, and hear from, a diverse group of practitioners; still, limits exist on this diversity in the field of machine learning. I acknowledge that there may be both overt and covert biases in the material due to the lens through which it was created. I would like to nurture a learning environment that supports a diversity of thoughts, perspectives, and experiences, and honors your identities (including race, gender, class, sexuality, religion, ability, veteran status, etc.) in the spirit of the UC Berkeley Principles of Community.
To help accomplish this, please contact me or submit anonymous feedback through I School channels if you have any suggestions to improve the quality of the course. If you have a name and/or set of pronouns that you prefer I use, please let me know. If something was said in class (by anyone) or you experience anything that makes you feel uncomfortable, please talk to me about it. If you feel like your performance in the class is being impacted by experiences outside of class, please don’t hesitate to come and talk with me. I want to be a resource for you. Also, anonymous feedback is always an option, and may lead to me to make a general announcement to the class, if necessary, to address your concerns.
As a participant in teamwork and course discussions, you should also strive to honor the diversity of your classmates.
If you prefer to speak with someone outside of the course, Assistant Dean of Student Experience Shirley Salanio (shirley@ischool.berkeley.edu), Director of Admissions and Diversity Recruitment Roxanne Pifer (rpifer@ischool.berkeley.edu), and the UC Berkeley Office for Graduate Diversity are excellent resources. Also see the following link.
All MIDS students must be familiar with and abide by the provisions of the “Student Code of Conduct” including those provisions relating to Academic Misconduct. All forms of academic misconduct, including cheating, fabrication, plagiarism or facilitating academic dishonesty will not be tolerated (see the UC Berkeley Honor Code and the UC Berkeley Student Code of Conduct).
We encourage studying in groups of two to four people. This applies to working on homework, discussing live sessions, and studying for the midterm exam. However, students must always adhere to the UC Berkeley Code of Conduct and the UC Berkeley Honor Code. In particular, all materials that are turned in for credit or evaluation must be written solely by the submitting student or group. Similarly, you may consult books, publications, or online resources to help you study. In the end, you must always credit and acknowledge all consulted sources in your submission (including other persons, books, resources, etc.).
The use of any text- and code-generating software (e.g., ChatGPT, Claude) to produce text and code for assignments, the midterm exam, or the final project is strictly prohibited and will be considered plagiarism. However, you may use these tools for learning, studying, and enhancing your understanding of course material.