Trainers¶
Trainers offer one of the simplest interfaces in Shark, but they also form a very diverse group of algorithms covering a range of settings and applications. Each trainer is a one-step solver for a specific machine learning problem: usually, it adapts the parameters of a model such that the model represents the solution to some machine learning or data processing problem on a given data set.
Some trainers are very simple, for example those that train a linear model to normalize the components of all examples in a data set to unit variance. Others are quite complex, for example the trainers for some multi-class support vector machines. In most cases, however, trainers compute analytical solutions to relatively simple problems that are not posed as iterative optimization problems.
The base class ‘AbstractTrainer<ModelT, LabelTypeT>’¶
AbstractTrainer is the base interface for trainers for supervised learning problems. It is templatized with respect to the type of model it trains and the type of label in the data set. The trainer then defines the following types in its public interface:
Types | Description |
---|---|
ModelType | The type of model the trainer optimizes |
InputType | The type of inputs the model takes |
LabelType | The type of the labels in the data set |
A trainer offers the following methods:
Method | Description |
---|---|
train(ModelType&, LabeledData<InputType, LabelType>) | Solves the problem and sets the model parameters |
std::string name() | Returns the trainer's name |
Usage of trainers is equally straightforward:
MyModel model;
MyTrainer trainer;
MyDataset data;
trainer.train(model, data); // model now represents the solution to the problem.
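As a concrete instance, the following sketch trains a linear discriminant classifier using the LDA trainer listed further below. It is only a sketch: it assumes a classification data set of type ClassificationDataset (i.e. LabeledData<RealVector, unsigned int>) obtained from a hypothetical helper loadMyData(), and the header paths reflect a recent Shark version and may differ in yours:

#include <shark/Algorithms/Trainers/LDA.h>
#include <shark/Models/LinearClassifier.h>
#include <shark/Data/Dataset.h>

using namespace shark;

ClassificationDataset data = loadMyData(); // hypothetical helper returning LabeledData<RealVector, unsigned int>
LinearClassifier<> classifier;             // the model to be adapted
LDA trainer;                               // the trainer solving the LDA problem
trainer.train(classifier, data);           // classifier now represents the LDA solution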
The base class ‘AbstractUnsupervisedTrainer<ModelT>’¶
AbstractUnsupervisedTrainer is the base interface for trainers for unsupervised learning problems. It only needs to know about the model type and defines the following types in its public interface:
Types | Description |
---|---|
ModelType | Type of model which the trainer optimizes |
InputType | Type of inputs the model takes |
These trainers also offer the following methods:
Method | Description |
---|---|
train(ModelType&, UnlabeledData<InputType>) | Solves the problem and stores the solution in the model |
std::string name() | Returns the name of the trainer |
List of trainers¶
We first list the unsupervised trainers in Shark. Many of these operate on models for data normalization.
Trainer | Model | Description |
---|---|---|
NormalizeComponentsUnitInterval | Normalizer | Trains a linear model to normalize the components of data to the unit interval. |
NormalizeComponentsUnitVariance | Normalizer | Trains a linear model to normalize the components of data to unit variance. |
NormalizeComponentsWhitening | LinearModel | Trains a linear model to whiten the data (uncorrelate all components, and normalize to unit variance). |
PCA | LinearModel | Trains a linear model for a principal component analysis, see the PCA tutorial. |
NormalizeKernelUnitVariance | ScaledKernel | Trains the scaling factor of a ScaledKernel such that the data has unit variance in its induced feature space. Note how this trainer operates on a kernel rather than a (linear) model. |
OneClassSvmTrainer | KernelExpansion | Trains a one-class SVM, which can be used for outlier detection. |
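Once trained, such a normalization model is applied to the data like any other Shark model. Continuing the Normalizer sketch from above, and assuming the free functions transform (for unlabeled data) and transformInputs (for labeled data) from Shark's data library:

data = transform(data, normalizer);  // apply the normalizer to every input element
// for a labeled set: labeledData = transformInputs(labeledData, normalizer);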
List of some supervised trainers:
Trainer | Model | Description |
---|---|---|
FisherLDA | LinearModel | Performs Fisher's Linear Discriminant Analysis. |
KernelMeanClassifier | KernelExpansion | Computes the class means in the kernel induced feature space and generates a classifier which assigns the points to the class of the nearest mean. |
LDA | LinearClassifier | Performs Linear Discriminant Analysis, see the LDA tutorial. |
LinearRegression | LinearModel | Fits a linear regression model to the labels using least squares. |
OptimizationTrainer | all | Combines the elements of a given learning problem (optimizer, model, error function, and stopping criterion) into a trainer. |
Perceptron | KernelExpansion | Kernelized perceptron; tries to find a separating hyperplane of the data in the feature space induced by the kernel. |
RFTrainer | RFClassifier | Implements a random forest of decision trees, see the random forest tutorial. |
AbstractSvmTrainer | KernelExpansion | Base class for all support vector machine trainers. |
MissingFeatureSvmTrainer | MissingFeaturesKernelExpansion | Trainer for binary SVMs supporting missing features. |
CSvmTrainer | KernelExpansion | Trainer for binary and multiclass SVMs, with one-norm regularization, see the SVM introduction. |
EpsilonSvmTrainer | KernelExpansion | Trains an epsilon-SVM for regression. |
RegularizationNetworkTrainer | KernelExpansion | Trains a Gaussian Process model / regularization network. |
AbstractLinearSvmTrainer | LinearModel | Base class for all linear SVM trainers. |
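To illustrate one of the kernel-based trainers from the table above, the following sketch trains a binary C-SVM with a Gaussian kernel. Treat it as an assumption-laden sketch rather than the authoritative recipe from the SVM introduction: the kernel bandwidth and regularization constant are arbitrary, and depending on the Shark version the trained model is a KernelExpansion or a KernelClassifier (a KernelExpansion combined with a decision rule):

#include <shark/Algorithms/Trainers/CSvmTrainer.h>
#include <shark/Models/Kernels/GaussianRbfKernel.h>
#include <shark/Data/Dataset.h>

using namespace shark;

ClassificationDataset training = loadMyData();        // hypothetical helper returning labeled training data
GaussianRbfKernel<> kernel(0.5);                      // Gaussian kernel, bandwidth parameter gamma = 0.5
KernelClassifier<RealVector> svm;                     // kernel expansion plus decision rule (newer Shark versions)
CSvmTrainer<RealVector> trainer(&kernel, 1.0, true);  // regularization C = 1.0, trained with offset/bias term
trainer.train(svm, training);                         // svm now represents the trained C-SVM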