Intro to Optimizers
Optimizers adjust your model's learnable parameters, such as its weights and biases, during training. You can use one of the premade optimizers, extend an existing optimizer, or create a custom Optimizer of your own.
Note: You can also import optimizers using their aliases.
Usage
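A minimal sketch of how an optimizer might be imported and created; the package name comes from these docs, but the export names and alias re-exports are assumptions that may differ in your version of toynn:

```ts
// Hypothetical import sketch: the export names and the alias re-export
// are assumptions, not the confirmed toynn API.
import { GradientDescent, GD } from "toynn";

// Create an optimizer instance; the alias binding (GD) would refer to the
// same class if the library re-exports each optimizer under its alias.
const opt = new GradientDescent();

// The optimizer is then handed to your model's train routine; the exact
// call shape depends on your toynn version.
```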
Premade Optimizers
The following optimizers come packaged with toynn:
GradientDescent
Alias: GD
Usage
The alpha property is public, so you can set it directly. Alpha is the learning rate of your model: it controls how much the weights and biases are adjusted on each update.
Setting a lower alpha value is recommended, as in the sketch below.
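A minimal sketch of setting alpha; the import path is assumed, and 0.01 is an illustrative value rather than a library default:

```ts
import { GradientDescent } from "toynn"; // import path assumed

const optimizer = new GradientDescent();

// alpha is the learning rate: gradient descent updates each parameter as
//   w = w - alpha * dLoss/dw
// so a small alpha keeps each adjustment small and training stable.
optimizer.alpha = 0.01; // illustrative value, not a documented default
```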
Note: All other optimizers extend the GradientDescent optimizer.
Methods
process
This is called by the train function before each epoch. Its main purpose is to let the optimizer arrange the data to suit its own requirements; see the interface sketch after the Properties section below.
optimize
This function is called by train to optimize the parameters. The X and Y passed in are single items taken from the dataset arrays.
Properties
steps
Returns the list of steps used to optimize the parameters.
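Putting the descriptions above together, one plausible shape for this part of the class is sketched below; all of the types here are assumptions inferred from the prose, not the confirmed toynn signatures:

```ts
// A plausible shape for the optimizer surface described above.
// Every type in this interface is an assumption.
interface OptimizerLike {
  /** Learning rate; public and settable. */
  alpha: number;

  /** Called by train before each epoch so the optimizer can arrange
   *  the dataset (e.g. shuffle it). Return type is a guess; it may
   *  instead return the arranged data. */
  process(X: number[][], Y: number[][]): void;

  /** Called by train once per sample; x and y are single items taken
   *  from the X and Y arrays. */
  optimize(x: number[], y: number[]): void;

  /** List of steps used during optimization; element type unknown. */
  readonly steps: unknown[];
}
```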
StochasticGradientDescent
Alias: SGD
This optimizer extends the GradientDescent optimizer.
It works like the GD optimizer, the only difference being that the dataset is shuffled randomly before each epoch's optimization pass.
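For illustration, the per-epoch shuffle could look like the standalone Fisher-Yates sketch below; this is a generic illustration of the idea, not toynn's internal code:

```ts
// Generic Fisher-Yates shuffle illustrating what SGD's per-epoch
// "arrange the data" step does: X and Y are shuffled with the same
// permutation so each sample stays paired with its label.
function shuffleInPlace<T, U>(X: T[], Y: U[]): void {
  for (let i = X.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [X[i], X[j]] = [X[j], X[i]];
    [Y[i], Y[j]] = [Y[j], Y[i]];
  }
}
```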
RMSProp
This optimizer extends the GradientDescent optimizer.
It works like the GD optimizer, except that each parameter's rate of change is adapted based on the current gradient and a running history of past gradients kept per layer.
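The standard RMSProp update keeps an exponentially decaying average of squared gradients and divides each step by its square root. A generic standalone sketch of that rule is below; this is not toynn's internal code, and the decay and epsilon values are conventional defaults:

```ts
// Generic RMSProp step for a single parameter vector.
// cache holds the running average of squared gradients (the "history")
// and is updated in place alongside the parameters.
function rmspropStep(
  w: number[],      // parameters (updated in place)
  grad: number[],   // gradient of the loss w.r.t. w
  cache: number[],  // running average of squared gradients
  alpha = 0.001,    // learning rate
  decay = 0.9,      // history decay rate
  eps = 1e-8        // avoids division by zero
): void {
  for (let i = 0; i < w.length; i++) {
    cache[i] = decay * cache[i] + (1 - decay) * grad[i] * grad[i];
    w[i] -= (alpha * grad[i]) / (Math.sqrt(cache[i]) + eps);
  }
}
```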
Custom Optimizer
You can extend the Optimizer class to create a custom optimizer. The GradientDescent optimizer extends this class.
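A minimal sketch of a custom optimizer, assuming toynn exports an Optimizer base class with the process and optimize hooks described above; the import path and method signatures are assumptions:

```ts
// Hypothetical sketch: the import path, base-class contract, and method
// signatures are assumptions based on the descriptions in these docs.
import { Optimizer } from "toynn";

class MyOptimizer extends Optimizer {
  alpha = 0.01;

  // Called by train before each epoch: arrange the data as needed.
  process(X: number[][], Y: number[][]): void {
    // e.g. shuffle or re-order the dataset here
  }

  // Called by train once per sample (x, y) to adjust the parameters.
  optimize(x: number[], y: number[]): void {
    // compute gradients and apply a custom update rule here
  }
}
```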
References
Some of the functionality is implemented using awesome resources from around the internet.