The Supervised Machine Learning book

An upcoming textbook

When we developed the course Statistical Machine Learning for engineering students at Uppsala University, we found no appropriate textbook, so we ended up writing our own. It will be published by Cambridge University Press in 2021.

Andreas Lindholm, Niklas Wahlström, Fredrik Lindsten, and Thomas B. Schön

A draft of the book is available below. A PDF of the book will remain freely available after its publication.

Latest draft of the book (older versions >>)

Table of Contents

  1. Introduction (only partly in draft)
  2. Supervised machine learning: a first approach
    • The supervised learning problem
    • A distance-based method: k-NN
    • A rule-based method: decision trees
  3. Basic parametric models for regression and classification
    • Linear regression
    • Classification and logistic regression
    • Polynomial regression and regularization
    • Nonlinear regression and generalized linear models (only partly in draft)
  4. Understanding, evaluating and improving the performance
    • Expected new data error: performance in production
    • Estimating the expected new data error
    • The training error–generalization gap decomposition
    • The bias-variance decomposition
    • Evaluation for imbalanced and asymmetric classification problems
  5. Learning parametric models
    • Loss functions
    • Regularization
    • Parameter optimization
    • Optimization with large datasets
  6. Neural networks and deep learning
    • Neural networks
    • Convolutional neural networks
    • Training a neural network
  7. Ensemble methods: Bagging and boosting
    • Bagging
    • Random forests
    • Boosting and AdaBoost
    • Gradient boosting
  8. Nonlinear input transformations and kernels
    • Creating features by nonlinear input transformations
    • Kernel ridge regression
    • Support vector regression
    • Kernel theory
    • Support vector classification
  9. The Bayesian approach and Gaussian processes
    • The Bayesian idea
    • Bayesian linear regression
    • The Gaussian process
    • Practical usage of the Gaussian process
  10. User aspects of machine learning
    • Defining the machine learning problem
    • Improving a machine learning model
    • What if we cannot collect more data?
    • Practical data issues
    • Can I trust my machine learning model? (not in draft yet)
    • Ethics in machine learning (not in draft yet)
  11. Generative models and learning from unlabeled data
    • The Gaussian mixture model and the LDA & QDA classifiers
    • The Gaussian mixture model when some or all labels are missing
    • More unsupervised methods: k-means and PCA

Exercise material

Exercise material will eventually be added to this page. In the meantime, you may have a look at the material for our course at Uppsala University.

Report mistakes and give feedback

(A free GitHub account is required)