This page lists all tutorials available for the Shark machine learning library. Together with a comprehensive set of example and test programs, they are excellent starting points for developing and evaluating machine learning software using Shark. Also see the guide to the different documentation pages available here.
We first show how to set up either a traditional Makefile or a CMake file for your application program. Then we move on to a simple Hello-World example of what linear binary classification can look like in Shark. The third tutorial illustrates the model-error-optimizer triad often encountered in Shark through a simple regression task.
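To give a concrete impression of the CMake-based setup, here is a minimal sketch of a CMakeLists.txt for an application linking against Shark. The project and target names (`MyApp`, `my_app`, `main.cpp`) are illustrative placeholders, and the sketch assumes Shark was installed to a location where `find_package` can discover its CMake configuration; consult the build-setup tutorial itself for the authoritative details.

```cmake
cmake_minimum_required(VERSION 3.5)
project(MyApp)  # hypothetical project name

# Locate an installed Shark (which in turn depends on Boost).
find_package(Shark REQUIRED)
include(${SHARK_USE_FILE})

# Build the hypothetical application and link it against Shark.
add_executable(my_app main.cpp)
target_link_libraries(my_app ${SHARK_LIBRARIES})
```

A traditional Makefile achieves the same by passing the Shark and Boost include paths to the compiler and linking with the Shark library.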
Many generic concepts that Shark implements span across the whole library or are relevant in many different application scenarios. We collect a number of such generic tutorials here which explain these concepts in detail. We believe that these are useful if you want to thoroughly familiarize yourself with Shark. If you are looking for a quick introduction on how to set up a specific algorithm, take a look at the component specific tutorials further down the page.
Before we can introduce the main interface classes of Shark, we first need to define a few basic terms, as well as the design goals, more precisely.
The main interfaces, or concepts, in Shark are the base classes from which nearly all components are derived. These tutorials serve as a specification of the interfaces as well as their behavior, and are written in a more formal language.
Since many machine learning algorithms work on real-world datasets, we extensively cover Shark’s Data class as well as common operations on datasets:
The following tutorials cover a selection of the algorithms implemented in Shark. Note that this is only the tip of the iceberg: the library provides many more machine learning algorithms and tools.
Let’s start with some classical methods:
Kernel methods – support vector machine training and model selection:
Direct search methods – the covariance matrix adaptation evolution strategy:
Finally, we present functionality which is not a machine learning facility itself, but a necessary or helpful tool.
First of all, there is Shark’s own solver for quadratic programs:
We give an introduction to Shark’s usage of the Boost uBLAS library for “all things linear algebra”:
For convenience, Shark provides a statistics class wrapper, generic support for serialization, and the well-known factory method pattern:
Note that Shark follows a
If you contribute to Shark, you might also find these documents helpful: