Interface | Description |
---|---|
Optimizer | Optimization technique used by the training algorithm to tune the network's weight parameters. A sketch of such an interface follows this table. |
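This page does not reproduce the Optimizer interface itself, so the following is a minimal sketch of what an optimizer interface of this kind typically looks like. The method name `calculateDeltaWeight` and its signature are assumptions for illustration only, not the package's documented API.

```java
/**
 * Illustrative sketch only: a minimal optimizer interface of the kind
 * described above. The method name and signature are hypothetical and
 * may differ from the package's real Optimizer interface.
 */
public interface Optimizer {

    /**
     * Computes the weight change (delta) for one parameter from the
     * gradient of the error with respect to that parameter.
     */
    float calculateDeltaWeight(float gradient, int... index);
}
```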
Class | Description |
---|---|
AbstractOptimizer | Skeletal implementation of the Optimizer interface that minimizes the effort needed to implement specific optimizers. |
AdaDeltaOptimizer | Implementation of the ADADELTA optimizer, a modification of AdaGrad that uses only a limited window of previous gradients. |
AdaGradOptimizer | Implementation of the ADAGRAD optimizer, which uses the sum of squared previous gradients to adjust a global learning rate for each weight. |
AdamOptimizer | Implementation of the Adam optimizer, a variation of RMSProp that includes a momentum-like factor. |
LearningRateDecay | Learning rate decay: gradually lowers the learning rate as training progresses (see https://www.coursera.org/learn/deep-neural-network/lecture/hjgIA/learning-rate-decay). |
MomentumOptimizer | Momentum optimization adds a momentum parameter to basic Stochastic Gradient Descent, which can accelerate the process. |
RmsPropOptimizer | A variation of the AdaDelta optimizer. |
SgdOptimizer | Basic Stochastic Gradient Descent optimization algorithm, which iteratively changes weights towards the values that minimize the error. The update rules behind several of these classes are illustrated in the sketch after this table. |
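The class descriptions above are terse, so the following self-contained snippet spells out the textbook update rules behind SgdOptimizer, MomentumOptimizer, AdamOptimizer, and LearningRateDecay. It is a standalone illustration of the mathematics, not this library's actual implementation; all variable names and hyperparameter values are illustrative.

```java
/**
 * Standalone illustration of the update rules behind some of the
 * optimizer classes listed above (textbook formulations, not the
 * package's actual code).
 */
public class OptimizerUpdateRulesDemo {

    public static void main(String[] args) {
        float lr = 0.01f;          // learning rate
        float grad = 0.5f;         // example gradient dE/dw for one weight
        float w = 1.0f;            // example weight value

        // SgdOptimizer: w <- w - lr * grad
        float wSgd = w - lr * grad;

        // MomentumOptimizer: a velocity term accumulates past gradients,
        // which can accelerate descent along consistent directions.
        float momentum = 0.9f;
        float velocity = 0.0f;                       // persisted between steps
        velocity = momentum * velocity - lr * grad;
        float wMomentum = w + velocity;

        // AdamOptimizer: RMSProp-style per-weight scaling plus a
        // momentum-like first moment, with bias correction.
        float beta1 = 0.9f, beta2 = 0.999f, eps = 1e-8f;
        int t = 1;                                   // step counter
        float m = 0.0f, v = 0.0f;                    // persisted between steps
        m = beta1 * m + (1 - beta1) * grad;
        v = beta2 * v + (1 - beta2) * grad * grad;
        float mHat = m / (1 - (float) Math.pow(beta1, t));
        float vHat = v / (1 - (float) Math.pow(beta2, t));
        float wAdam = w - lr * mHat / ((float) Math.sqrt(vHat) + eps);

        // LearningRateDecay: shrink the learning rate as epochs progress,
        // e.g. lr_t = lr0 / (1 + decayRate * epoch)
        float decayRate = 0.01f;
        int epoch = 10;
        float decayedLr = lr / (1 + decayRate * epoch);

        System.out.printf("SGD: %f, Momentum: %f, Adam: %f, decayed lr: %f%n",
                wSgd, wMomentum, wAdam, decayedLr);
    }
}
```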
Enum | Description |
---|---|
OptimizerType | Supported types of optimization methods used by the back-propagation training algorithm. |
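As a usage illustration, the sketch below shows how a caller might branch on such an enum. The constant names are assumptions based on the classes listed above, not necessarily the actual constants declared by OptimizerType.

```java
/**
 * Hypothetical usage sketch: mapping optimizer-type constants to short
 * descriptions taken from the class table above. The nested enum is a
 * stand-in, not the package's real OptimizerType.
 */
public class OptimizerTypeSketch {

    enum OptimizerType { SGD, MOMENTUM, ADAGRAD, ADADELTA, RMS_PROP, ADAM }

    /** Returns a human-readable note for the chosen optimizer type. */
    static String describe(OptimizerType type) {
        switch (type) {
            case SGD:      return "plain stochastic gradient descent";
            case MOMENTUM: return "SGD with a momentum term";
            case ADAGRAD:  return "per-weight learning rate from the sum of squared gradients";
            case ADADELTA: return "AdaGrad variant using a limited window of past gradients";
            case RMS_PROP: return "variation of AdaDelta";
            case ADAM:     return "RMSProp variant with a momentum-like factor";
            default:       return "unknown";
        }
    }

    public static void main(String[] args) {
        System.out.println(describe(OptimizerType.ADAM));
    }
}
```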