Don't just read notes. Your brain doesn't work passively.
Use them to complement your lecture time and perhaps reflect on what you've learned.
Regression is used when you want to learn the relationship between two variables, e.g. median income and acceptance rate. You learn a function f that performs the mapping x ↦ y (here, income to acceptance rate).
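As a minimal sketch of that idea (hypothetical income/acceptance-rate numbers; numpy least squares standing in for whatever fitting method the course uses):

```python
import numpy as np

# Hypothetical data: median income (x, in $10k) vs. acceptance rate (y).
x = np.array([3.0, 4.5, 6.0, 7.5, 9.0])
y = np.array([0.80, 0.65, 0.50, 0.40, 0.30])

# Learn f(x) = w*x + b by least squares: f performs the mapping x -> y.
A = np.column_stack([x, np.ones_like(x)])
w, b = np.linalg.lstsq(A, y, rcond=None)[0]

# Predict the acceptance rate for a new income value.
f = lambda x_new: w * x_new + b
print(round(f(5.0), 3))
```

The learned f is continuous, which is what distinguishes regression from the classification setting below.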
A classification problem similarly learns the relationship between two variables, except that the output is discrete, e.g. tumor size and whether the tumor is cancerous or not: y ∈ {0, 1} (binary classification).
The classification can be binary or K-ary, a "K-class classification": y ∈ {1, ..., K}.
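A toy version of the binary tumor example (made-up sizes and labels; a nearest-centroid rule standing in for a real classifier — the same code generalizes to K classes by just having more labels):

```python
import numpy as np

# Hypothetical data: tumor size (cm) with discrete labels y in {0, 1}
# (0 = benign, 1 = malignant) -- a binary classification problem.
sizes  = np.array([0.5, 1.0, 1.2, 2.8, 3.5, 4.0])
labels = np.array([0,   0,   0,   1,   1,   1])

# Minimal classifier: predict the label of the nearest class centroid.
# For K-class classification the labels would range over {1, ..., K}.
centroids = {k: sizes[labels == k].mean() for k in np.unique(labels)}

def classify(size):
    return min(centroids, key=lambda k: abs(size - centroids[k]))

print(classify(0.8), classify(3.0))  # small tumor -> 0, large tumor -> 1
```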
With regression plots, we can only feasibly plot in 3D, and sometimes the right amount of variables is much higher. To predict something well, you need a lot of disparate info/vars.
This motivated the Support Vector Machine (SVM), which can work with an n-dimensional (even effectively infinite-dimensional) feature vector. This is accomplished using kernels (in an ML context), which compute inner products in the high-dimensional space without ever constructing it explicitly. Now, we can use a long list of information about a patient to generate diagnoses.
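A sketch of the kernel idea, using a kernel perceptron rather than a full SVM (a real SVM also maximizes the margin, which this omits): the RBF kernel implicitly maps inputs into a very high-dimensional space, so a linear separator there can fit XOR-style data that no line separates in 2D.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    # RBF kernel: an inner product in an implicit infinite-dim space.
    return np.exp(-gamma * np.sum((a - b) ** 2))

# XOR-like toy data: not linearly separable in the original 2D space.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, 1, 1, -1])

# Kernel perceptron: the weights live on training points (dual form),
# so we only ever evaluate the kernel, never the high-dim mapping.
alpha = np.zeros(len(X))
for _ in range(20):
    for i in range(len(X)):
        score = sum(alpha[j] * y[j] * rbf(X[j], X[i]) for j in range(len(X)))
        if np.sign(score) != y[i]:
            alpha[i] += 1

preds = [np.sign(sum(alpha[j] * y[j] * rbf(X[j], x) for j in range(len(X))))
         for x in X]
print(preds)  # matches y: the kernelized model fits XOR
```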
Supervised learning labels the input information with the "correct" output so that the model knows what the right answer is.
For example, diffusion models are provided with noisy data and are told exactly what the original data was (because we possess the information on exactly how much noise has been added). That way, the model can see how close it was to the real answer.
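The diffusion setup can be sketched like this (toy numpy example; in a real diffusion model a neural network would produce the noise prediction, and training minimizes the same kind of loss): we corrupt the data with noise we generated ourselves, so the exact noise is a free supervised label.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Clean" data we pretend is a flattened image.
x0 = rng.normal(size=8)

# Forward process: corrupt it with noise WE generated, so the exact
# noise (the supervised label) is known.
noise = rng.normal(size=8)
x_noisy = x0 + 0.5 * noise

# A stand-in for a model's noise prediction; training would minimize
# this MSE between the prediction and the true noise.
predicted_noise = np.zeros(8)
loss = np.mean((predicted_noise - noise) ** 2)

# A perfect prediction would drive the loss to exactly zero.
perfect_loss = np.mean((noise - noise) ** 2)
print(loss, perfect_loss)
```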
The CMU self-driving project from the '80s (ALVINN) took in a video frame from the front of the car and predicted the steering amount a human would apply. The labels were generated by a human driving along the road at that very time, so the model learned what the human would do in each situation.
This is very similar to my project on teleoperation, which lets the human provide a trajectory they think is appropriate ("supervised learning from expert demonstrations"). The imitation learning model then has a benchmark for what a good solution might be (the label).
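That setup (often called behavior cloning) can be sketched as plain supervised regression on expert state-action pairs (all numbers hypothetical; a linear policy stands in for a real network):

```python
import numpy as np

# Hypothetical expert demonstrations: state = a road-curvature reading,
# action = the steering angle the human expert applied in that state.
states  = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
actions = np.array([-0.9, -0.4, 0.0, 0.5, 0.95])  # labels from the expert

# Behavior cloning as supervised regression: fit action = w*state + b.
A = np.column_stack([states, np.ones_like(states)])
w, b = np.linalg.lstsq(A, actions, rcond=None)[0]

# The learned policy imitates the expert on a new, unseen state.
policy = lambda s: w * s + b
print(round(policy(0.25), 2))
```

The "label" here is exactly the expert's action, which is what makes imitation learning a supervised problem.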