squlearn.qnn.loss.ODELoss
- class squlearn.qnn.loss.ODELoss(ODE_functional=None, symbols_involved_in_ODE=None, initial_values: ndarray = None, eta=1.0, boundary_handling='pinned')
Squared loss for regression of Ordinary Differential Equations (ODEs).
Implements an ODE Loss based on Ref. [1].
- Parameters:
ODE_functional (sympy.Expr) – Functional representation of the ODE (homogeneous differential equation). Must be a sympy expression, and symbols_involved_in_ODE must be provided.
symbols_involved_in_ODE (list) – List of sympy symbols involved in the ODE functional. The list must be ordered as [x, f, dfdx], where each element is a sympy symbol corresponding to the independent variable (x), the dependent variable (f), and the first derivative of the dependent variable (dfdx), respectively. There are no requirements for the symbols beyond the correct order, for example, [t, y, dydt].
initial_values (np.ndarray) – Initial values of the ODE. The length of the array must match the order of the ODE.
boundary_handling (str) – Method for handling the boundary conditions. Options are 'pinned' and 'floating':
'pinned': An extra term is added to the loss function to enforce the initial values of the ODE. This term is weighted by the eta parameter. The loss function is given by \(L = \sum_{i=0}^{n} L_{\theta_i}\left( \dot{f}, f, x \right) + \eta \cdot (f(x_0) - f_0)^2\), with \(f(x) = QNN(x, \theta)\).
'floating': (NOT IMPLEMENTED) An extra "floating" term is added to the trial QNN function to be optimized. The loss function is given by \(L = \sum_{i=0}^{n} L_{\theta_i}\left( \dot{f}, f, x \right)\), with \(f(x) = QNN(x, \theta) + f_b\), and \(f_b = QNN(x_0, \theta) - f_0\).
eta (float) – Weight of the initial-value term in the loss function for the 'pinned' boundary handling method.
Examples
1. Implements a loss function for the ODE \(\cos(t) y + \frac{dy(t)}{dt} = 0\) with initial value \(y(0) = 0.1\):

    import sympy as sp
    from squlearn.qnn.loss import ODELoss

    t, y, dydt = sp.symbols("t y dydt")
    eq = sp.cos(t) * y + dydt
    initial_values = [0.1]
    loss_ODE = ODELoss(
        eq,
        symbols_involved_in_ODE=[t, y, dydt],
        initial_values=initial_values,
        boundary_handling="pinned",
    )

2. Implements a loss function for the ODE \(\frac{df(x)}{dx} - \cos(f(x)) = 0\) with initial value \(f(0) = 0\):

    x, f, dfdx = sp.symbols("x f dfdx")
    eq = dfdx - sp.cos(f)
    initial_values = [0]
    loss_ODE = ODELoss(
        eq,
        symbols_involved_in_ODE=[x, f, dfdx],
        initial_values=initial_values,
        boundary_handling="pinned",
        eta=1.2,
    )
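The constructed loss can then be passed to a QNN model for training. Below is a minimal sketch assuming the squlearn QNNRegressor workflow; the encoding circuit, observable, optimizer, and collocation grid are illustrative choices, not requirements of ODELoss:

    import numpy as np
    from squlearn import Executor
    from squlearn.encoding_circuit import ChebyshevRx
    from squlearn.observables import SummedPaulis
    from squlearn.optimizers import Adam
    from squlearn.qnn import QNNRegressor

    num_qubits = 4
    pqc = ChebyshevRx(num_qubits, num_features=1)  # illustrative encoding circuit
    op = SummedPaulis(num_qubits)                  # illustrative observable

    reg = QNNRegressor(pqc, op, Executor(), loss_ODE, Adam(options={"maxiter": 100}))

    # Collocation points on which the ODE residual is minimized; the labels
    # are placeholders required by the sklearn-style fit interface.
    x_train = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
    reg.fit(x_train, np.zeros(len(x_train)))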
References
[1]: O. Kyriienko et al., “Solving nonlinear differential equations with differentiable quantum circuits”, arXiv:2011.10395 (2021).
- gradient(value_dict: dict, **kwargs) → ndarray | tuple[ndarray, ndarray]
Returns the gradient of the squared loss.
Calculates the gradient of the squared loss from the values in value_dict as
\[\begin{split}\begin{align} \frac{\partial \mathcal{L}_{\vec{\theta}}}{\partial \theta_i} &= \sum_{j=1}^N 2 \left(F[ \ddot f_{\vec{\theta}}, \dot f_{\vec{\theta}}, f_{\vec{\theta}}, x]_j \right) \frac{\partial}{\partial \theta_i} \left(F[ \ddot f_{\vec{\theta}}, \dot f_{\vec{\theta}}, f_{\vec{\theta}}, x]_j \right) \\ &\quad + 2 \eta(f_{\vec{\theta}}(0)-u_0) \left. \frac{\partial f_{\vec{\theta}}(x)} {\partial \theta_i} \right|_{x=0} + 2 \eta(\dot f_{\vec{\theta}}(0)- \dot u_0) \left. \frac{\partial \dot f_{\vec{\theta}}(x)}{\partial \theta_i} \right|_{x=0} \\ &= \sum_{j=1}^N 2 \left(F[ \ddot f_{\vec{\theta}}, \dot f_{\vec{\theta}}, f_{\vec{\theta}}, x]_j\right) \left( \frac{\partial F_j}{\partial f_{\vec{\theta}}} \frac{\partial f_{\vec{\theta}}}{\partial \theta_i} + \frac{\partial F_j}{\partial \dot f_{\vec{\theta}}}\frac{\partial \dot f_{\vec{\theta}}}{\partial \theta_i} + \frac{\partial F_j}{\partial \ddot f_{\vec{\theta}}}\frac{\partial \ddot f_{\vec{\theta}}}{\partial \theta_i}\right)\\ &\quad + 2 \eta(f_{\vec{\theta}}(0)-u_0) \left. \frac{\partial f_{\vec{\theta}}(x)} {\partial \theta_i} \right|_{x=0} + 2 \eta(\dot f_{\vec{\theta}}(0)- \dot u_0) \left. \frac{\partial \dot f_{\vec{\theta}}(x)}{\partial \theta_i} \right|_{x=0} \end{align}\end{split}\]
- Parameters:
value_dict (dict) – Contains calculated values of the model
ground_truth (np.ndarray) – The true values \(f_{ref}\left(x_i\right)\)
weights (np.ndarray) – Weight for each data point, if None all data points count the same
multiple_output (bool) – True if the QNN has multiple outputs
- Returns:
Gradient values
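For intuition, the functional derivatives \(\partial F/\partial f\) and \(\partial F/\partial \dot f\) entering the chain rule above can be obtained symbolically. A minimal sympy illustration for the functional of example 2; this mirrors the math, not necessarily squlearn's internal implementation:

    import sympy as sp

    x, f, dfdx = sp.symbols("x f dfdx")
    F = dfdx - sp.cos(f)  # ODE functional of example 2

    # Partial derivatives of the functional as they appear in the chain rule:
    dF_df = sp.diff(F, f)         # sin(f)
    dF_ddfdx = sp.diff(F, dfdx)   # 1
    print(dF_df, dF_ddfdx)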
- set_opt_param_op(opt_param_op: bool = True)
Sets the opt_param_op flag.
- Parameters:
opt_param_op (bool) – True if the operator has trainable parameters
- value(value_dict: dict, **kwargs) → float
Calculates the value of the squared loss function for the ODE as
\[\begin{align} \mathcal{L}_{\vec{\theta}} [ \ddot f, \dot f, f, x] &= \sum_j^N \left(F\left( \ddot f_{\vec{\theta}}, \dot f_{\vec{\theta}}, f_{\vec{\theta}}, x\right)_j\right)^2 + \eta\left(f_{\vec{\theta}}(0) - u_0\right)^2 + \eta\left(\dot f_{\vec{\theta}}(0) - \dot u_0\right)^2 \end{align}\]
- Parameters:
value_dict (dict) – Contains calculated values of the model
ground_truth (np.ndarray) – The true values \(f_{ref}\left(x_i\right)\)
weights (np.ndarray) – Weight for each data point, if None all data points count the same
- Returns:
Loss value
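As a concrete check of the formula, here is a small numpy sketch evaluating the pinned loss of example 2 (\(\frac{df}{dx} - \cos(f) = 0\), \(f(0) = 0\)) with an arbitrary trial function standing in for the QNN output; a hypothetical stand-in, not squlearn code:

    import numpy as np

    eta = 1.2
    x = np.linspace(0.0, 1.0, 20)

    # Stand-in trial function and its derivative; in practice these are the
    # QNN output f(x, theta) and its derivative taken from value_dict.
    f = np.sin(x)
    dfdx = np.cos(x)

    residual = dfdx - np.cos(f)  # F(df/dx, f, x) at each collocation point
    loss = np.sum(residual**2) + eta * (f[0] - 0.0) ** 2
    print(loss)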
- variance(value_dict: dict, **kwargs) → float
Calculates and returns the variance of the loss value.