8.2 Classical ML: What it does (1)
- Difference between ML and Deep Learning
- Classical ML: We need three things
- Input data points (e.g., sound files of people speaking, images)
- Examples of the expected output (e.g., human-generated transcripts of sound files, tags for images such as “dog,” “cat,” etc.)
- Can be a binary, categorical, or continuous variable
- A way to measure whether the algorithm (model!) is doing a good job
- e.g., training error rate
- Needed to compare the algorithm’s (model’s!) current output with the expected output
- Measurement = feedback signal to adjust the algorithm (model!) → this adjustment step is what we call learning
- Manual adjustment in the lab: resampling & cross-validation
- ML model: transforms input data into meaningful outputs, a process that is “learned” from exposure to known examples of inputs and outputs
- Central problem in ML (and deep learning): learn useful representations of the input data at hand – representations that get us closer to the expected output (see the footnote on “representations”)
- Further reading: Chollet and Allaire (2018, Ch. 1.1.3)
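The three ingredients above (input data points, expected outputs, and an error measurement) can be sketched in a few lines. This is a minimal illustration, not material from the course: the data, the trivial nearest-centroid “model,” and the seed are all made up for the example; the training error rate is the feedback signal the notes describe.

```python
import numpy as np

# Hypothetical data: 2-D input points with binary class labels (expected outputs)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# A deliberately simple "model": classify each point by its nearest class centroid
def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# The measurement / feedback signal: training error rate
# (fraction of known examples the model currently misclassifies)
centroids = fit_centroids(X, y)
train_error = np.mean(predict(centroids, X) != y)
print(f"training error rate: {train_error:.2f}")
```

In practice the training error rate is an optimistic measure; that is why the lab uses resampling and cross-validation, which estimate the error on data the model was not fit to.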
References
Chollet, François, and J. J. Allaire. 2018. Deep Learning with R. 1st ed. Manning Publications.
What’s a representation? A different way to look at (to represent or encode) data. For example, a color image can be encoded in RGB format (red-green-blue) or in HSV format (hue-saturation-value): these are two different representations of the same data. Some (classification) tasks that are difficult with one representation can become easy with another (e.g., “select all red pixels in the image” is simpler in RGB than in HSV).
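The RGB/HSV point in the footnote can be made concrete with Python’s standard-library `colorsys` module. The pixel value and the thresholds below are hypothetical, chosen only to illustrate that the same data admits two representations and that a task can be phrased differently in each.

```python
import colorsys

# One image pixel, represented two ways
rgb = (0.9, 0.1, 0.1)            # mostly-red pixel, channels in [0, 1]
hsv = colorsys.rgb_to_hsv(*rgb)  # the same pixel as (hue, saturation, value)

# "Select red pixels" in RGB: a simple per-channel threshold
def is_red_rgb(r, g, b):
    return r > 0.6 and g < 0.4 and b < 0.4

# The same task in HSV: reason about hue angles (red sits near hue 0 or 1)
def is_red_hsv(h, s, v):
    return (h < 0.05 or h > 0.95) and s > 0.5

print(is_red_rgb(*rgb), is_red_hsv(*hsv))  # → True True
```

Both tests detect the red pixel, but the RGB version is a plain channel threshold, while the HSV version has to handle hue wrapping around 0 – the footnote’s point that a task can be easier in one representation than another.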