Class GradientDescentOptimizer

Namespace
NeuralNetworks.Optimizers
Assembly
NeuralNetworks.dll

Implements the classic "Stochastic" Gradient Descent (SGD) optimizer for neural network training. Updates each parameter by subtracting its gradient scaled by the configured learning rate.

public class GradientDescentOptimizer : Optimizer
Inheritance
Optimizer → GradientDescentOptimizer

Remarks

This optimizer supports parameter updates for 1D, 2D, and 4D float arrays.

No actual "stochastic" aspect is implemented here; the class name reflects this ("GradientDescentOptimizer" rather than "StochasticGradientDescentOptimizer").
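Because the update is purely element-wise, parameters of any rank can be treated as a flat span of floats. The following self-contained sketch (illustrative only, not the library's actual code; the flattening approach via MemoryMarshal is an assumption) applies the same update to a 2D array viewed as a span — 1D and 4D arrays can be handled identically:

```csharp
using System;
using System.Runtime.InteropServices;

public static class FlattenSketch
{
    public static void Main()
    {
        float[,] weights = { { 1f, 2f }, { 3f, 4f } };
        float[,] grads   = { { 1f, 1f }, { 1f, 1f } };

        // View the 2D arrays as flat spans; the element-wise update
        // does not care about the original shape.
        Span<float> w = MemoryMarshal.CreateSpan(ref weights[0, 0], weights.Length);
        ReadOnlySpan<float> g = MemoryMarshal.CreateSpan(ref grads[0, 0], grads.Length);

        const float learningRate = 0.5f;
        for (int i = 0; i < w.Length; i++)
            w[i] -= learningRate * g[i];

        Console.WriteLine(weights[1, 1]); // 3.5
    }
}
```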

Constructors

GradientDescentOptimizer(LearningRate)

Initializes a new GradientDescentOptimizer that updates parameters using the specified learning rate.

public GradientDescentOptimizer(LearningRate learningRate)

Parameters

learningRate LearningRate

The learning rate (or learning rate schedule) used to scale gradients during parameter updates.


Methods

ToString()

Returns a string representation of the optimizer, including the learning rate.

public override string ToString()

Returns

string

A string describing the optimizer and its learning rate.
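Putting the constructor and ToString() together, a self-contained sketch using stand-in stub types (the real LearningRate and base Optimizer live in NeuralNetworks.dll and may differ; the exact output format is not specified by these docs) might look like:

```csharp
using System;

// Stub so this sketch compiles on its own; the real LearningRate type
// and its constructor signature are assumptions here.
public class LearningRate
{
    public float Value { get; }
    public LearningRate(float value) => Value = value;
    public override string ToString() => Value.ToString();
}

public class GradientDescentOptimizer
{
    readonly LearningRate _learningRate;

    public GradientDescentOptimizer(LearningRate learningRate) =>
        _learningRate = learningRate;

    // Mirrors the documented behavior: describe the optimizer and its learning rate.
    public override string ToString() =>
        $"GradientDescentOptimizer(learningRate: {_learningRate})";

    public static void Main()
    {
        var optimizer = new GradientDescentOptimizer(new LearningRate(0.01f));
        Console.WriteLine(optimizer); // e.g. GradientDescentOptimizer(learningRate: 0.01)
    }
}
```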

Update(object, Span<float>, ReadOnlySpan<float>)

Updates the parameter values in-place by applying the provided gradients using the current learning rate.

protected override void Update(object paramsKey, Span<float> paramsToUpdate, ReadOnlySpan<float> paramGradients)

Parameters

paramsKey object

Not used by this optimizer; plain gradient descent keeps no per-parameter state that would need to be looked up by key.

paramsToUpdate Span<float>

A span representing the parameter values to be updated. The values are modified in-place.

paramGradients ReadOnlySpan<float>

A read-only span containing the gradients to apply to the parameter values. Each element corresponds to the respective parameter in paramsToUpdate.

Remarks

This method performs a plain gradient descent update: the current learning rate is obtained from the associated learning rate schedule, and each parameter is decremented element-wise by the learning rate times its corresponding gradient.
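The documented behavior can be sketched as follows (a minimal stand-alone version, not the actual library implementation; a constant learning rate stands in for the schedule lookup, and the unused key parameter is omitted):

```csharp
using System;

public static class UpdateSketch
{
    // Element-wise SGD step over spans, mirroring the documented Update signature.
    public static void Update(Span<float> paramsToUpdate,
                              ReadOnlySpan<float> paramGradients,
                              float learningRate)
    {
        for (int i = 0; i < paramsToUpdate.Length; i++)
            paramsToUpdate[i] -= learningRate * paramGradients[i];
    }

    public static void Main()
    {
        float[] weights = { 0.5f, -0.25f };
        float[] grads   = { 1.0f, 2.0f };
        Update(weights, grads, 0.1f); // weights become approximately { 0.4, -0.45 }
        Console.WriteLine(string.Join(", ", weights));
    }
}
```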