Filename | Demos featured in | Description
---|---|---
ml_regression_ARD | demo_regression_ARD | Fits a linear model via Automatic Relevance Determination / Sparse Bayesian Learning, using the fixed-point method of MacKay
ml_regression_bagging | demo_regression_bagging | Fits a regression algorithm on bootstrap samples of the dataset and predicts based on the average over all bootstrapped models
ml_regression_basis | demo_regression_regression_CV | Fits a linear regression model under a change of basis
ml_regression_GAM | demo_regression_GAM | Fits a Generalized Additive Model to the dataset, using cubic splines, polynomial regression, or linear regression to fit each component function
ml_regression_Huber | demo_regression_outliers | Fits a linear regression model by minimizing the Huber loss function at a specified epsilon
ml_regression_kernel | demo_regression_kernel | Fits a linear regression model by finding the weights that minimize the squared loss of a kernelized representation of the dataset
ml_regression_KNN | demo_regression_nonparam | Fits a regression model by predicting each response as the average of the k nearest neighbours to each example in the training set
ml_regression_L1 | demo_regression_ARD, demo_regression_outliers | Fits a linear regression model by minimizing the sum of absolute errors (L1 loss)
ml_regression_L2 | demo_regression_ARD, demo_regression_bagging, demo_regression_basis, demo_regression_NB, demo_regression_outliers, demo_regression_regression_CV | Fits a linear regression model by minimizing the sum of squared errors (L2 loss)
ml_regression_local | demo_regression_nonparam | Fits a local regression model around each point in the training set, using a specified weighting function for the k nearest neighbours of that point
ml_regression_mean | demo_regression_bagging, demo_regression_regressOnOne | Fits a baseline regression model which always predicts the mean of y
ml_regression_MLP | demo_regression__MLP | Fits a neural network with a specified architecture and sigmoid or hyperbolic tangent activation functions, with an identity transform as the final activation
ml_regression_NB | demo_regression_NB | Fits a linear regression model by minimizing the Naive Bayes squared loss
ml_regression_NW | demo_regression_nonparam | Fits a Nadaraya-Watson (locally weighted, KNN-like) kernel regression model by estimating y as a locally weighted average, where the weighting function is a kernel
ml_regression_regressOnOne | demo_regression_regressOnOne | Fits a regression model by minimizing the squared error for a single feature in the dataset
ml_regression_student | demo_regression_outliers | Fits a linear regression model using Student's t loss function, with optional polynomial basis, L2 regularization, or weights on training examples
ml_regression_stump | demo_regression_tree | Finds the optimal binary split for a training set and fits a model on each side
ml_regression_SVR | demo_regression_SVR | Fits a support-vector regression model using an epsilon-insensitive loss function
ml_regression_totalL2 | None | Fits a linear regression model that allows for errors in both the dependent and independent variables
ml_regression_tree | demo_regression_tree | Trains a regression tree by partitioning the feature space into a set of rectangular regions and fitting, in each region, the constant or linear model that minimizes the sum of squared errors
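The core of ml_regression_L2 — least-squares linear regression — can be sketched in a few lines. This is a minimal NumPy illustration of the technique, not the module's actual API; the function names here are placeholders.

```python
import numpy as np

def fit_least_squares(X, y):
    """Solve the normal equations (X^T X) w = X^T y for the weights w."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def predict(X, w):
    """Predict responses as the linear combination X w."""
    return X @ w
```

For ill-conditioned or rank-deficient `X`, `np.linalg.lstsq` is the more robust choice, but solving the normal equations directly matches the "minimize the sum of squared errors" description above.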
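The KNN idea behind ml_regression_KNN — predict each response as the average over the k nearest training examples — can be sketched as follows. Again a hedged illustration: the distance metric (Euclidean) and function name are assumptions, not taken from the module itself.

```python
import numpy as np

def knn_regress(X_train, y_train, X_test, k=3):
    """Predict each test response as the mean y of its k nearest neighbours."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)  # Euclidean distances
        nearest = np.argsort(dists)[:k]              # indices of k closest points
        preds.append(y_train[nearest].mean())
    return np.array(preds)
```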
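ml_regression_NW's locally weighted average can likewise be sketched with a Gaussian kernel as the weighting function. The kernel choice and bandwidth `sigma` are illustrative assumptions; the actual module may use a different kernel.

```python
import numpy as np

def gaussian_kernel(d, sigma=1.0):
    """Gaussian weight as a function of distance d."""
    return np.exp(-d**2 / (2 * sigma**2))

def nw_regress(X_train, y_train, X_test, sigma=1.0):
    """Nadaraya-Watson estimate: kernel-weighted average of training responses."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        w = gaussian_kernel(d, sigma)
        preds.append(w @ y_train / w.sum())  # normalized weighted average
    return np.array(preds)
```

Unlike plain KNN, every training point contributes, with influence decaying smoothly with distance rather than cutting off at the k-th neighbour.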