squlearn.optimizers.Adam

class squlearn.optimizers.Adam(options: dict = None, callback=<function default_callback>)

sQUlearn’s implementation of the ADAM optimizer.

Possible options that can be set in the options dictionary are:

  • tol (float): Tolerance for the termination of the optimization (default: 1e-6)

  • lr (float, list, np.ndarray, callable): Learning rate. If a float, the learning rate is constant; if a list or np.ndarray, the learning rate is taken from the list or array; if a callable, the learning rate is obtained by evaluating the function. (default: 0.05)

  • beta_1 (float): Decay rate for the first moment estimate (default: 0.9)

  • beta_2 (float): Decay rate for the second moment estimate (default: 0.99)

  • regularization (float): Small value to avoid division by zero (default: 1e-8)

  • num_average (int): Number of gradients to average (default: 1)

  • maxiter (int): Maximum number of iterations per fit run (default: 100)

  • maxiter_total (int): Maximum number of iterations in total (default: maxiter)

  • log_file (str): File to log the optimization (default: None)

  • skip_fun (bool): If True, the function evaluation is skipped (default: False)

  • eps (float): Step size for finite differences (default: 0.01)

Parameters:

options (dict) – Options for the ADAM optimizer.
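
A minimal construction sketch; the option keys are those listed above, and the values shown simply restate the documented defaults rather than recommended settings:

  from squlearn.optimizers import Adam

  # Option keys are taken from the list above; the values restate the
  # documented defaults and can be overridden as needed.
  adam = Adam(options={"lr": 0.05, "beta_1": 0.9, "beta_2": 0.99,
                       "maxiter": 100, "tol": 1e-6})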

minimize(fun: callable, x0: ndarray, grad: callable = None, bounds=None) → OptimizerResult

Minimizes a given function using the ADAM optimizer.

Parameters:
  • fun (callable) – Function to minimize.

  • x0 (numpy.ndarray) – Initial guess.

  • grad (callable) – Gradient of the function to minimize.

  • bounds (sequence) – Bounds for the parameters.

Returns:

Result of the optimization in OptimizerResult format.

Return type:

OptimizerResult
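
A usage sketch for minimize, assuming a simple two-parameter quadratic objective with an analytic gradient; reading result.x follows the usual OptimizerResult convention and is an assumption, not something stated on this page:

  import numpy as np
  from squlearn.optimizers import Adam

  # Quadratic test function with its minimum at (1, -2).
  def fun(x):
      return float((x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2)

  # Analytic gradient of the quadratic above.
  def grad(x):
      return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] + 2.0)])

  adam = Adam(options={"lr": 0.1, "maxiter": 200})
  result = adam.minimize(fun, x0=np.array([0.0, 0.0]), grad=grad)
  print(result.x)  # assumed attribute name; should approach [1.0, -2.0]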

reset()

Resets the optimizer to its initial state.

set_callback(callback)

Sets the callback function.
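
A sketch of attaching a callback; because this page does not document the callback signature, a variadic function is used so no particular arguments are assumed:

  from squlearn.optimizers import Adam

  adam = Adam(options={"maxiter": 50})

  # Variadic callback: prints whatever the optimizer passes at each call,
  # without assuming a specific argument list.
  def log_progress(*args, **kwargs):
      print("optimizer update:", args, kwargs)

  adam.set_callback(log_progress)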

step(**kwargs)

Performs one update step.

Parameters:
  • x – Current value.

  • grad – Precomputed gradient.

Returns:

Updated x
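
A sketch of driving the optimizer manually with step, feeding it gradients computed outside the class; the keyword names x and grad follow the parameter list above, and reset (documented earlier) is used before reusing the instance:

  import numpy as np
  from squlearn.optimizers import Adam

  adam = Adam(options={"lr": 0.05})

  x = np.array([0.0, 0.0])
  for _ in range(100):
      g = 2.0 * (x - 1.0)          # gradient of ||x - 1||^2, precomputed here
      x = adam.step(x=x, grad=g)   # returns the updated parameter vector

  adam.reset()  # clear the internal moment estimates before a fresh run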