ode_loss

These are the interfaces to pygom.loss.BaseLoss.

The loss functions “implement” the class BaseLoss, in so far as Python has such a notion; each overrides the method _setLossType.

class pygom.loss.ode_loss.SquareLoss(theta, ode, x0, t0, t, y, state_name, state_weight=None, target_param=None, target_state=None)

The square loss function

class pygom.loss.ode_loss.NormalLoss(theta, ode, x0, t0, t, y, state_name, sigma=None, target_param=None, target_state=None)

Realizations from a Normal distribution

class pygom.loss.ode_loss.PoissonLoss(theta, ode, x0, t0, t, y, state_name, target_param=None, target_state=None)

Realizations from a Poisson distribution
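
As a rough sketch of how these classes might be constructed (the SIR model, parameter values, and observed state ‘I’ below are illustrative assumptions, not prescribed by this module):

>>> import numpy as np
>>> from pygom import SquareLoss, common_models
>>> ode = common_models.SIR({'beta': 0.5, 'gamma': 1.0/3.0})
>>> x0 = [0.95, 0.05, 0.0]        # initial values of the states (S, I, R)
>>> t0 = 0.0                      # initial time
>>> t = np.linspace(1, 20, 20)    # time points of the observations
>>> ode.initial_values = (x0, t0)
>>> sol = ode.integrate(t)        # synthetic data; row 0 is the state at t0
>>> y = sol[1:, 1]                # "observations" of the I state at times t
>>> theta = [0.4, 0.3]            # initial guess of (beta, gamma)
>>> obj = SquareLoss(theta, ode, x0, t0, t, y, 'I')

NormalLoss is constructed the same way (with sigma in place of state_weight), as is PoissonLoss (which takes neither).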

calculations

The base class in which all of the calculations are implemented.

Its purpose is to place everything required to estimate the parameters of an ODE model under square loss in a single module, with a focus on the standard local method, which means obtaining the gradient and Hessian.

class pygom.loss.base_loss.BaseLoss(theta, ode, x0, t0, t, y, state_name, state_weight=None, target_param=None, target_state=None)

This contains the base class that stores all of the information of an ODE.

Parameters:
theta: array like

input value of the parameters

ode: DeterministicOde

the ode class in this package

x0: array like

initial values of the states

t0: numeric

initial time

t: array like

time points where observations were made

y: array like

observations

state_name: str

the state which the observations came from

state_weight: array like

weight for the observations

target_param: str or array like

parameters that are not fixed

target_state: str or array like

states that are not fixed, applicable only when the initial values are also of interest
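
For example, a sketch reusing the SquareLoss setup above (BaseLoss itself is usually reached through a subclass such as SquareLoss), in which only beta is estimated and the initial value of I is also treated as unknown:

>>> obj_iv = SquareLoss(theta=[0.4], ode=ode, x0=x0, t0=t0, t=t, y=y,
...                     state_name='I',
...                     target_param=['beta'],   # gamma remains fixed
...                     target_state=['I'])      # I(t0) is estimated as well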

adjoint(theta=None, full_output=False)

Obtain the gradient given input parameters using the adjoint method. Values of the state variables are found using a univariate spline interpolation between two observed time points, where the internal knots are explicitly defined.

Parameters:
theta: array like

input value of the parameters

full_output: bool

if True, also output the full set of adjoint values (over time)

Returns:
grad: numpy.ndarray

array of gradient

infodict : dict, only returned if full_output=True

Dictionary containing additional output information

key                meaning
‘resid’            residuals given theta
‘diff_loss’        derivative of the loss function
‘gradVec’          gradient vectors
‘adjVec’           adjoint vectors
‘interpolateInfo’  info from integration over the interpolating points
‘solInterpolate’   solution from the integration over the interpolating points
‘tInterpolate’     interpolating time points

See also

sensitivity()
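
A sketch of calling adjoint(), assuming the obj instance from the construction example above:

>>> grad = obj.adjoint()                      # gradient at the stored theta
>>> grad, info = obj.adjoint(theta=[0.45, 0.35], full_output=True)
>>> info['tInterpolate'][:5]                  # the interpolating time points
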
cost(theta=None)

Find the cost/loss given time points and the corresponding observations.

Parameters:
theta: array like

input value of the parameters

Returns:
numeric

sum of the residuals squared

See also

diff_loss()

Notes

Only works with a single target (state)
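
A sketch, again assuming the obj instance from the construction example; for SquareLoss the value returned is the sum of squared residuals:

>>> c = obj.cost()              # cost at the internally stored theta
>>> c = obj.cost([0.4, 0.3])    # cost at an explicit (beta, gamma)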

costIV(theta=None)

Find the cost/loss given the parameters. The input theta here is assumed to include both the parameters and the initial values.

Parameters:
theta: array like

parameters and guess of initial values of the states

Returns:
numeric

sum of the residuals squared

See also

residualIV()

diff_loss(theta=None)

Find the derivative of the loss function given time points and the corresponding observations, with initial conditions

Parameters:
theta: array like

input value of the parameters

Returns:
numpy.ndarray

an array of the derivatives of the loss function

See also

cost()

diff_lossIV(theta=None)

Find the derivative of the loss function w.r.t. the parameters given time points and the corresponding observations, with initial conditions.

Parameters:
theta: array like

parameters and initial values of the states

Returns:
numpy.ndarray

an array of results

See also

costIV(), diff_loss()

fisher_information(theta=None, full_output=False, method=None)

Obtain the Fisher information

Parameters:
theta: array like

input value of the parameters

full_output: bool

if additional output is required

method: str, optional

what method to use in the integrator

Returns:
I: numpy.ndarray

\(I(\theta)\) of the objective function

infodict : dict, only returned if full_output=True

Dictionary containing additional output information

key      meaning
‘state’  intermediate values for the state (original ode)
‘sens’   intermediate values for the sensitivities by state, parameters, i.e. \(x_{(i-1)p + j}\) is the element for state \(i\) and parameter \(j\) with a total of \(p\) parameters
‘resid’  residuals given theta
‘info’   output from the integration

See also

sensitivity(), jtj()
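
A sketch of obtaining the Fisher information for the obj instance from the construction example:

>>> FI = obj.fisher_information()                  # at the stored theta
>>> FI, info = obj.fisher_information(full_output=True)
>>> info['resid']                                  # residuals given theta
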
fit(x, lb=None, ub=None, A=None, b=None, disp=False, full_output=False)

Find the estimates given the data and an initial guess \(x\). Note that there is no guarantee that the estimation procedure is successful. It is recommended to at least supply box constraints, i.e. lower and upper bounds

Parameters:
x: array like

an initial guess

lb: array like

the lower bound elementwise \(lb_{i} \leq x_{i}\)

ub: array like

upper bound elementwise \(x_{i} \leq ub_{i}\)

A: array like

matrix \(A\) for the inequality \(Ax \leq b\)

b: array like

vector \(b\) for the inequality \(Ax \leq b\)

Returns:
xhat: numpy.ndarray

estimated value
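
A sketch with box constraints, as recommended above (the bounds themselves are illustrative assumptions):

>>> xhat = obj.fit(x=[0.4, 0.3], lb=[1e-6, 1e-6], ub=[2.0, 2.0])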

gradient(theta=None, full_output=False)

Returns the gradient calculated by solving the forward sensitivity equation. Identical to sensitivity(), but without the choice of integration method.

See also

sensitivity()

hessian(theta=None, full_output=False, method=None)

Obtain the Hessian using forward-forward sensitivities.

Parameters:
theta: array like

input value of the parameters

full_output: bool

if additional output is required

method: str, optional

what method to use in the integrator

Returns:
Hessian: numpy.ndarray

Hessian of the objective function

infodict : dict, only returned if full_output=True

Dictionary containing additional output information

key      meaning
‘state’  intermediate values for the state (original ode)
‘sens’   intermediate values for the sensitivities by state, parameters, i.e. \(x_{(i-1)p + j}\) is the element for state \(i\) and parameter \(j\) with a total of \(p\) parameters
‘hess’   intermediate values for the Hessian by state, parameter, parameter, i.e. \(x_{(i-1)p^{2} + j + k}\) is the element for state \(i\), parameter \(j\) and parameter \(k\)
‘resid’  residuals given theta
‘info’   output from the integration

See also

sensitivity()
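
Since cost() and the gradient methods follow the fun/jac calling convention of scipy.optimize, the object can also be handed to an external optimizer directly; a sketch (the choice of method and bounds are assumptions):

>>> from scipy.optimize import minimize
>>> res = minimize(fun=obj.cost,
...                jac=obj.sensitivity,
...                x0=[0.4, 0.3],
...                bounds=[(1e-6, 2.0), (1e-6, 2.0)],
...                method='L-BFGS-B')
>>> res.x                                     # estimated (beta, gamma)
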
jac(theta=None, sens_output=False, full_output=False, method=None)

Obtain the Jacobian of the objective function given input parameters, using the forward sensitivity method.

Parameters:
theta: array like, optional

input value of the parameters

sens_output: bool, optional

whether the full set of sensitivities is required; full_output overrides this option when True

full_output: bool, optional

if additional output is required

method: str, optional

Choice between lsoda, vode and dopri5, the three integrators provided by SciPy. Defaults to lsoda.

Returns:
grad: numpy.ndarray

Jacobian of the objective function

infodict : dict, only returned if full_output=True

Dictionary containing additional output information

key          meaning
‘sens’       intermediate values over the original ode and all the sensitivities, by state, parameters
‘resid’      residuals given theta
‘diff_loss’  derivative of the loss function

See also

sensitivity()

jacIV(theta=None, sens_output=False, full_output=False, method=None)

Obtain the Jacobian of the objective function given input parameters which include the current guess of the initial values, using the forward sensitivity method.

Parameters:
theta: array like, optional

input value of the parameters

sens_output: bool, optional

whether the full set of sensitivities is required; full_output overrides this option when True

full_output: bool, optional

if additional output is required

method: str, optional

Choice between lsoda, vode and dopri5, the three integrators provided by SciPy. Defaults to lsoda.

Returns:
grad: numpy.ndarray

Jacobian of the objective function

infodict : dict, only returned if full_output=True

Dictionary containing additional output information

key      meaning
‘sens’   intermediate values over the original ode and all the sensitivities, by state, parameters
‘resid’  residuals given theta
‘info’   output from the integration

See also

sensitivityIV()

jtj(theta=None, full_output=False, method=None)

Obtain the approximation to the Hessian using the inner product of the Jacobian.

Parameters:
theta: array like

input value of the parameters

full_output: bool

if additional output is required

method: str, optional

what method to use in the integrator

Returns:
jtj: numpy.ndarray

\(J^{\top}J\) of the objective function

infodict : dict, only returned if full_output=True

Dictionary containing additional output information

key      meaning
‘state’  intermediate values for the state (original ode)
‘sens’   intermediate values for the sensitivities by state, parameters, i.e. \(x_{(i-1)p + j}\) is the element for state \(i\) and parameter \(j\) with a total of \(p\) parameters
‘resid’  residuals given theta
‘info’   output from the integration

See also

sensitivity()

plot()

Plots the solution of all the states and the observed y values

residual(theta=None)

Find the residuals given time points and the corresponding observations, with initial conditions

Parameters:
theta: array like

input value of the parameters

Returns:
numpy.ndarray

an array of residuals

See also

cost()

Notes

Makes a direct call to the initialized loss object, which has a method called residual

residualIV(theta=None)

Find the residuals given time points and the corresponding observations, with initial conditions.

Parameters:
theta: array like

parameters and initial values of the states

Returns:
numpy.ndarray

an array of residuals

See also

costIV(), residual()

Notes

Makes a direct call to residual() using the initialized information
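
For the *IV variants, theta is the target parameters followed by the guessed initial values of the target states; a sketch, assuming the obj_iv instance from the BaseLoss example (target_param=['beta'], target_state=['I']):

>>> theta_iv = [0.4, 0.06]          # beta, then the guess of I(t0)
>>> r = obj_iv.residualIV(theta_iv)
>>> c = obj_iv.costIV(theta_iv)     # for SquareLoss, equals (r**2).sum()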

sens_to_grad(sens, diff_loss)

Convert the forward sensitivities to the gradient.

Parameters:
sens: numpy.ndarray

forward sensitivities

diff_loss: array like

derivative of the loss function

Returns:
g: numpy.ndarray

gradient of the loss function

sens_to_jtj(sens, resid=None)

Convert the forward sensitivities to \(J^{\top}J\), where \(J\) is the Jacobian; this is the approximation to the Hessian.

Parameters:
sens: numpy.ndarray

forward sensitivities

resid: numpy.ndarray, optional

the residuals corresponding to the input sens

Returns:
JTJ: numpy.ndarray

An approximation to the Hessian using the inner product of the Jacobian

sensitivity(theta=None, full_output=False, method=None)

Obtain the gradient given input parameters, using the forward sensitivity method.

Parameters:
theta: array like

input value of the parameters

full_output: bool

if additional output is required

method: str, optional

what method to use in the integrator

Returns:
grad: numpy.ndarray

array of gradient

infodict : dict, only returned if full_output=True

Dictionary containing additional output information. Same output as jac()

Notes

It calculates the gradient by calling jac()

sensitivityIV(theta=None, full_output=False, method=None)

Obtain the gradient given input parameters (which include the current guess of the initial conditions), using the forward sensitivity method.

Parameters:
theta: array like, optional

input value of the parameters

full_output: bool, optional

if additional output is required

method: str, optional

what method to use in the integrator

Returns:
grad: numpy.ndarray

array of gradient

infodict : dict, only returned if full_output=True

Dictionary containing additional output information

key      meaning
‘sens’   intermediate values over the original ode and all the sensitivities, by state, parameters
‘resid’  residuals given theta
‘info’   output from the integration

Notes

It calculates the gradient by calling jacIV()

thetaCallBack(x)

Print x, the parameters

thetaCallBack2(x, f)

Print x and f, where x is the parameter of interest and f is the value of the objective function at x

Parameters:
x: array like

parameters

f: numeric

f(x), the value of the objective function