get_init

pygom.loss.get_init.cost_grad_interpolant(ode, spline_list, t, theta)

Returns the cost (sum of squared residuals) and its gradient, comparing the first derivative of the interpolant against the ode function.

Parameters:
ode: :class:`.DeterministicOde`

an ode object

spline_list: list

list of scipy.interpolate.UnivariateSpline

t: array like

time

theta: array like

parameter values

Returns:
cost: double

sum of squared residuals

g: numpy.ndarray

gradient of the sum of squared residuals
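As a sketch of the idea (not pygom's actual implementation), the cost compares the derivative of a fitted spline against the ode function; here a hypothetical one-state exponential-decay model dx/dt = -theta*x stands in for the ode object:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical one-state model dx/dt = -theta * x (not part of pygom)
theta = 0.5
t = np.linspace(0, 5, 50)
x = np.exp(-theta * t)                 # exact solution sampled at t

spline = UnivariateSpline(t, x, s=0)   # interpolant through the samples
dxdt_spline = spline.derivative()(t)   # first derivative of the interpolant
dxdt_ode = -theta * spline(t)          # ode function evaluated on the spline

residual = dxdt_spline - dxdt_ode      # pointwise residuals
cost = float(np.sum(residual ** 2))    # sum of squared residuals
```

When theta matches the data, the spline derivative tracks the ode function closely and the cost is near zero.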

pygom.loss.get_init.cost_interpolant(ode, spline_list, t, theta, vec=True, aggregate=True)

Returns the cost (sum of squared residuals) between the first derivative of the interpolant and the ode function.

Parameters:
ode: :class:`.DeterministicOde`

an ode object

spline_list: list

list of scipy.interpolate.UnivariateSpline

t: array like

time

theta: array like

parameter values

vec: bool, optional

whether the output matrix should be flattened into a vector

aggregate: bool, optional

sum the vector/matrix

Returns:
cost: double

sum of squared residuals

pygom.loss.get_init.cost_sample(ode, fxApprox, xApprox, t, theta, vec=True, aggregate=True)

Returns the cost (sum of squared residuals) between the approximated first derivative and the ode function, using samples at time points t.

Parameters:
ode: :class:`.DeterministicOde`

an ode object

fxApprox: list

list of approximated values for the first derivative

xApprox: list

list of approximated values for the states

t: array like

time

theta: array like

parameter values

vec: bool, optional

whether the output matrix should be flattened into a vector.

aggregate: bool or str, optional

sum the vector/matrix. If equal to 'int', Simpson's rule is applied to the samples. This also changes the behaviour of vec: True outputs a vector whose elements are the values of the integrand on each dimension of the ode, while False returns the sum of that vector, a scalar.

Returns:
r: array like

the cost, or the residuals if vec is True

See also

residual_sample()
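To illustrate what aggregate='int' means, here is a minimal composite Simpson's rule over evenly spaced samples of a toy squared-residual profile (a sketch of the aggregation step only, not pygom's code):

```python
import numpy as np

def simpson(y, t):
    """Composite Simpson's rule over an odd number of evenly spaced points."""
    h = t[1] - t[0]
    return h / 3 * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum())

t = np.linspace(0.0, 1.0, 101)     # 101 points -> 100 (even) intervals
sq_resid = (t - 0.5) ** 2          # a toy squared-residual profile
integral = simpson(sq_resid, t)    # analogue of aggregate='int'
```

Simpson's rule is exact for polynomials up to degree three, so here the result matches the analytic integral 1/12.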
pygom.loss.get_init.get_init(y, t, ode, theta=None, full_output=False)

Get an initial guess of theta given the observations y and the corresponding time points t.

Parameters:
y: array like

observed values

t: array like

time

ode: :class:`.DeterministicOde`

an ode object

theta: array like

parameter value

full_output: bool, optional

True if the optimization result should be returned. Defaults to False.

Returns:
theta: array like

a guess of the parameters
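The idea behind such an initial guess can be sketched without pygom: fit a spline to the observations, then minimize the squared mismatch between the spline derivative and the ode function over theta. The model dx/dt = -theta*x and the optimizer choice below are illustrative assumptions, not pygom's internals:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import minimize_scalar

# Synthetic observations from dx/dt = -theta * x with true theta = 0.7
true_theta = 0.7
t = np.linspace(0, 4, 60)
y = np.exp(-true_theta * t)

spline = UnivariateSpline(t, y, s=0)
x = spline(t)                       # smoothed states
dxdt = spline.derivative()(t)       # their first derivative

def cost(theta):
    # sum of squared residuals between spline derivative and ode function
    return np.sum((dxdt + theta * x) ** 2)

res = minimize_scalar(cost, bounds=(0.0, 5.0), method='bounded')
theta0 = res.x                      # initial guess for the parameter
```

Because no ode integration is required, this is much cheaper than a full trajectory fit, which is what makes it useful as a starting point.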

pygom.loss.get_init.grad_sample(ode, fxApprox, xApprox, t, theta, vec=False, output_residual=False)

Returns the gradient of the objective value using the state values of the interpolant, given samples at time points t. Note that the parameters here are chosen to match those of cost_sample() for convenience.

Parameters:
ode: :class:`.DeterministicOde`

an ode object

fxApprox: list

list of approximated values for the first derivative

xApprox: list

list of approximated values for the states

t: array like

time

theta: array like

parameter values

vec: bool, optional

whether the output matrix should be flattened into a vector

output_residual: bool, optional

if True, then the residuals will be returned as an additional argument

Returns:
g: numpy.ndarray

gradient of the objective function

See also

jac_sample()
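For a toy linear model the gradient has a closed form, which can be checked against a finite difference. This is a hypothetical sketch of the quantity grad_sample computes, not its implementation:

```python
import numpy as np

# Toy sketch for dx/dt = -theta * x: the residual is
# r(theta) = fxApprox + theta * xApprox, cost = sum(r**2),
# so d(cost)/d(theta) = 2 * sum(r * xApprox).
theta = 0.8
x_approx = np.array([1.0, 0.6, 0.35])        # approximated state values
fx_approx = np.array([-0.82, -0.46, -0.30])  # approximated derivatives

resid = fx_approx + theta * x_approx
grad = 2.0 * np.sum(resid * x_approx)        # analytic gradient

# finite-difference check of the gradient
eps = 1e-6
cost = lambda th: np.sum((fx_approx + th * x_approx) ** 2)
fd = (cost(theta + eps) - cost(theta - eps)) / (2 * eps)
```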
pygom.loss.get_init.interpolate(solution, t, s=0)

Interpolate the solution of the ode at the given time points, using univariate splines with a suitable smoothing factor.

Parameters:
solution: :class:`numpy.ndarray`

f(t) of the ode, with rows corresponding to time points

t: array like

time

s: float, optional

smoothing factor, greater than or equal to zero

Returns:
splineList: list

list of scipy.interpolate.UnivariateSpline
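A minimal sketch of this pattern, assuming a two-state solution array with rows as time points (the states here are illustrative, not from pygom):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# One UnivariateSpline per state (column) of the solution
t = np.linspace(0, 2 * np.pi, 40)
solution = np.column_stack([np.sin(t), np.cos(t)])   # two states

spline_list = [UnivariateSpline(t, solution[:, i], s=0)
               for i in range(solution.shape[1])]

# each spline can be evaluated (or differentiated) at arbitrary times
mid = [float(sp(np.pi)) for sp in spline_list]
```

With s=0 the splines interpolate the samples exactly; larger s trades fidelity for smoothness, which helps when the observations are noisy.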

pygom.loss.get_init.jac_sample(ode, fxApprox, xApprox, t, theta, vec=True)

Returns the Jacobian of the objective value using the state values of the interpolant, given samples at time points t. Note that the parameters here are chosen to match those of cost_sample() for convenience.

Parameters:
ode: :class:`.DeterministicOde`

an ode object

fxApprox: list

list of approximated values for the first derivative

xApprox: list

list of approximated values for the states

t: array like

time

theta: array like

parameter values

vec: bool, optional

whether the output matrix should be flattened into a vector

Returns:
r: array like

the residuals

See also

cost_sample()
pygom.loss.get_init.residual_interpolant(ode, spline_list, t, theta, vec=True)

Returns the residuals between the first derivative of the interpolant and the ode function.

Parameters:
ode: :class:`.DeterministicOde`

an ode object

spline_list: list

list of scipy.interpolate.UnivariateSpline

t: array like

time

theta: array like

parameter values

vec: bool, optional

whether the output matrix should be flattened into a vector

Returns:
r: array like

the residuals

pygom.loss.get_init.residual_sample(ode, fxApprox, xApprox, t, theta, vec=True)

Returns the residuals between the first derivative of the interpolant and the ode function, using samples at time points t.

Parameters:
ode: :class:`.DeterministicOde`

an ode object

fxApprox: list

list of approximated values for the first derivative

xApprox: list

list of approximated values for the states

t: array like

time

theta: array like

parameter values

vec: bool, optional

whether the output matrix should be flattened into a vector

Returns:
r: array like

the residuals

See also

cost_sample()
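The vec option across these functions can be pictured with a toy residual matrix (the numbers below are illustrative, not produced by pygom):

```python
import numpy as np

# Toy residual matrix: rows = time points, columns = ode states
fx_approx = np.array([[0.9, -1.1],
                      [0.5, -0.4]])      # approximated derivatives
fx_ode = np.array([[1.0, -1.0],
                   [0.5, -0.5]])         # ode function values

resid_matrix = fx_approx - fx_ode        # vec=False: (time x state) matrix
resid_vector = resid_matrix.ravel()      # vec=True: flattened to a vector
```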