This class implements the resilient backpropagation (Rprop) algorithm with weight-backtracking.
#include <shark/Algorithms/GradientDescent/Rprop.h>
Public Member Functions

SHARK_EXPORT_SYMBOL RpropPlus ()
std::string name () const
    From INameable: return the class name.
SHARK_EXPORT_SYMBOL void init (ObjectiveFunctionType const &objectiveFunction, SearchPointType const &startingPoint)
    Initializes the optimizer using a predefined starting point.
SHARK_EXPORT_SYMBOL void init (ObjectiveFunctionType const &objectiveFunction, SearchPointType const &startingPoint, double initDelta)
SHARK_EXPORT_SYMBOL void step (ObjectiveFunctionType const &objectiveFunction)
    Carry out one step of the optimizer for the supplied objective function.
SHARK_EXPORT_SYMBOL void read (InArchive &archive)
    Read the component from the supplied archive.
SHARK_EXPORT_SYMBOL void write (OutArchive &archive) const
    Write the component to the supplied archive.

Public Member Functions inherited from shark::RpropMinus

SHARK_EXPORT_SYMBOL RpropMinus ()
void setEtaMinus (double etaMinus)
    Set the decrease factor.
void setEtaPlus (double etaPlus)
    Set the increase factor.
void setMaxDelta (double d)
    Set the upper limit on the update.
void setMinDelta (double d)
    Set the lower limit on the update.
double maxDelta () const
    Return the maximal step size component.
RealVector const & derivative () const
    Returns the derivative at the current point; can be used for stopping criteria.

Public Member Functions inherited from shark::AbstractSingleObjectiveOptimizer< RealVector >

std::size_t numInitPoints () const
    By default, most single-objective optimizers require only a single point.
virtual void init (ObjectiveFunctionType const &function, std::vector< SearchPointType > const &initPoints)
    Initialize the optimizer for the supplied objective function using a set of initialization points.
virtual const SolutionType & solution () const
    Returns the current solution of the optimizer.

Public Member Functions inherited from shark::AbstractOptimizer< RealVector, double, SingleObjectiveResultSet< RealVector > >

const Features & features () const
virtual void updateFeatures ()
bool requiresValue () const
bool requiresFirstDerivative () const
bool requiresSecondDerivative () const
bool canSolveConstrained () const
bool requiresClosestFeasible () const
virtual ~AbstractOptimizer ()
virtual void init (ObjectiveFunctionType const &function)
    Initialize the optimizer for the supplied objective function.

Public Member Functions inherited from shark::INameable

virtual ~INameable ()

Public Member Functions inherited from shark::ISerializable

virtual ~ISerializable ()
    Virtual destructor.
void load (InArchive &archive, unsigned int version)
    Versioned loading of components; calls read(...).
void save (OutArchive &archive, unsigned int version) const
    Versioned storing of components; calls write(...).
BOOST_SERIALIZATION_SPLIT_MEMBER ()
Protected Attributes

RealVector m_deltaw
    The final update values for all weights.

Protected Attributes inherited from shark::RpropMinus

ObjectiveFunctionType::FirstOrderDerivative m_derivative
double m_increaseFactor
    The increase factor \( \eta^+ \), set to 1.2 by default.
double m_decreaseFactor
    The decrease factor \( \eta^- \), set to 0.5 by default.
double m_maxDelta
    The upper limit of the increments \( \Delta w_i^{(t)} \), set to 1e100 by default.
double m_minDelta
    The lower limit of the increments \( \Delta w_i^{(t)} \), set to 0.0 by default.
size_t m_parameterSize
RealVector m_oldDerivative
    The last error gradient.
RealVector m_delta
    The absolute update values (increments) for all weights.

Protected Attributes inherited from shark::AbstractSingleObjectiveOptimizer< RealVector >

SolutionType m_best
    Current solution of the optimizer.

Protected Attributes inherited from shark::AbstractOptimizer< RealVector, double, SingleObjectiveResultSet< RealVector > >

Features m_features
Additional Inherited Members

Public Types inherited from shark::AbstractSingleObjectiveOptimizer< RealVector >

typedef base_type::SearchPointType SearchPointType
typedef base_type::SolutionType SolutionType
typedef base_type::ResultType ResultType
typedef base_type::ObjectiveFunctionType ObjectiveFunctionType

Public Types inherited from shark::AbstractOptimizer< RealVector, double, SingleObjectiveResultSet< RealVector > >

enum Feature
    Models features that the optimizer requires from the objective function.
typedef RealVector SearchPointType
typedef double ResultType
typedef SingleObjectiveResultSet< RealVector > SolutionType
typedef AbstractObjectiveFunction< RealVector, ResultType > ObjectiveFunctionType
typedef TypedFlags< Feature > Features
typedef TypedFeatureNotAvailableException< Feature > FeatureNotAvailableException

Protected Member Functions inherited from shark::AbstractOptimizer< RealVector, double, SingleObjectiveResultSet< RealVector > >

void checkFeatures (ObjectiveFunctionType const &objectiveFunction)
    Convenience function that checks whether the features of the supplied objective function match the required features of the optimizer.
Detailed Description

This class implements the resilient backpropagation (Rprop) algorithm with weight-backtracking.
The Rprop algorithm is an improvement over algorithms with adaptive learning rates (such as the Adaptive Backpropagation algorithm by Silva and Almeida; see AdpBP.h for a description of how such algorithms work) in that the increments used to update the weights are independent of the absolute values of the partial derivatives. This makes sense because large flat regions of the search space (plateaus) produce small absolute partial derivatives, so the increments would be chosen small, although large increments are needed to cross the plateau. Conversely, the absolute partial derivatives are very large on the slopes of narrow canyons, which would lead to large increments that skip over the minimum at the bottom of the canyon, where small increments would be the better choice.
Therefore, the Rprop algorithm uses only the signs of the partial derivatives, not their absolute values, to adapt the parameters.
Instead of individual learning rates, it uses the parameter \(\Delta_i^{(t)}\) for weight \(w_i,\ i = 1, \dots, n\) in iteration \(t\), which is adapted before the weights are changed:
\( \Delta_i^{(t)} = \begin{cases} \min\left( \eta^+ \cdot \Delta_i^{(t-1)}, \Delta_{max} \right), & \mbox{if } \frac{\partial E^{(t-1)}}{\partial w_i} \cdot \frac{\partial E^{(t)}}{\partial w_i} > 0 \\ \max\left( \eta^- \cdot \Delta_i^{(t-1)}, \Delta_{min} \right), & \mbox{if } \frac{\partial E^{(t-1)}}{\partial w_i} \cdot \frac{\partial E^{(t)}}{\partial w_i} < 0 \\ \Delta_i^{(t-1)}, & \mbox{otherwise} \end{cases} \)
The parameters \(\eta^+ > 1\) and \(0 < \eta^- < 1\) control the speed of the adaptation. To stabilize the increments, they are restricted to the interval \([\Delta_{min}, \Delta_{max}]\).
After the adaptation of the \(\Delta_i\), the weight updates are calculated as
\( \Delta w_i^{(t)} := - \mbox{sign} \left( \frac{\partial E^{(t)}}{\partial w_i}\right) \cdot \Delta_i^{(t)} \)
Furthermore, weight-backtracking takes place to increase the stability of the method: if \(\frac{\partial E^{(t-1)}}{\partial w_i} \cdot \frac{\partial E^{(t)}}{\partial w_i} < 0\), then \(\Delta w_i^{(t)} := - \Delta w_i^{(t-1)}; \frac{\partial E^{(t)}}{\partial w_i} := 0\). Assigning zero to the partial derivative of the error freezes the adaptation of the increment in the next iteration.
For further information about the algorithm, please refer to:
Martin Riedmiller and Heinrich Braun, "A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm," in Proceedings of the IEEE International Conference on Neural Networks, pp. 586-591, IEEE Press, 1993.
Member Function Documentation

SHARK_EXPORT_SYMBOL shark::RpropPlus::RpropPlus ()

SHARK_EXPORT_SYMBOL void shark::RpropPlus::init (ObjectiveFunctionType const &objectiveFunction, SearchPointType const &startingPoint) [virtual]

Initializes the optimizer using a predefined starting point.

Reimplemented from shark::RpropMinus.
Reimplemented in shark::IRpropPlusFull, and shark::IRpropPlus.

SHARK_EXPORT_SYMBOL void shark::RpropPlus::init (ObjectiveFunctionType const &objectiveFunction, SearchPointType const &startingPoint, double initDelta) [virtual]

Reimplemented from shark::RpropMinus.
Reimplemented in shark::IRpropPlusFull, and shark::IRpropPlus.

std::string shark::RpropPlus::name () const [inline, virtual]

From INameable: return the class name.

Reimplemented from shark::RpropMinus.
Reimplemented in shark::IRpropPlusFull, and shark::IRpropPlus.

Definition at line 267 of file Rprop.h.

References shark::RpropMinus::init(), shark::RpropMinus::read(), SHARK_EXPORT_SYMBOL, shark::RpropMinus::step(), and shark::RpropMinus::write().

SHARK_EXPORT_SYMBOL void shark::RpropPlus::read (InArchive &archive) [virtual]

Read the component from the supplied archive.

Parameters:
    [in,out]  archive  The archive to read from.

Reimplemented from shark::RpropMinus.
Reimplemented in shark::IRpropPlusFull, and shark::IRpropPlus.

SHARK_EXPORT_SYMBOL void shark::RpropPlus::step (ObjectiveFunctionType const &objectiveFunction) [virtual]

Carry out one step of the optimizer for the supplied objective function.

Parameters:
    [in]  function  The objective function on which the optimization step is carried out.

Reimplemented from shark::RpropMinus.
Reimplemented in shark::IRpropPlusFull, and shark::IRpropPlus.

SHARK_EXPORT_SYMBOL void shark::RpropPlus::write (OutArchive &archive) const [virtual]

Write the component to the supplied archive.

Parameters:
    [in,out]  archive  The archive to write to.

Reimplemented from shark::RpropMinus.
Reimplemented in shark::IRpropPlusFull, and shark::IRpropPlus.

Member Data Documentation

RealVector shark::RpropPlus::m_deltaw [protected]

The final update values for all weights.