LDDMM functions
- TS_PCA.lddmm.DeformationGradient(K, nt=10, deltat=1.0)
Return the gradient of the exponential map with respect to the starting points
- Parameters:
K (Kernel) – The Velocity kernel function (ideally the one given by TS-LDDMM)
nt (int) – number of integration time points, default=10
deltat (float) – total time of integration \(\delta\), stepsize=deltat/nt, default=1.0
- Returns:
Gradient function : (P,X,X_mask)->Jacobian_position(Shooting(K,nt,deltat)), i.e. \(\nabla_X \exp_X(P)\), where X is an array of size (n_samples,d+1) and P is the array of momentums, of size (n_samples,d+1). \(\exp_X\), also called the shooting function, is the exponential map, computed by integrating the Hamiltonian system with the Ralston integrator on nt time points.
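This Jacobian can be sketched with central finite differences. Everything below is an illustrative assumption, not the library's implementation: a Gaussian kernel stands in for the TS-LDDMM velocity kernel, a single Euler step stands in for the Ralston shooting, and the library differentiates the true shooting with automatic differentiation instead.

```python
import numpy as np

def gauss_K(X, sigma=1.0):
    # illustrative Gaussian kernel matrix (k(x_i, x_j)); the real
    # TS-LDDMM velocity kernel is different and also takes masks
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def shoot_endpoint(P, X):
    # single explicit Euler step standing in for the Ralston shooting,
    # just to keep the sketch short: X_1 = X + K(X, X) P  (deltat = 1)
    return X + gauss_K(X) @ P

def deformation_gradient(P, X, eps=1e-6):
    # finite-difference Jacobian of the endpoint with respect to X,
    # flattened to shape (n*(d+1), n*(d+1))
    n, D = X.shape
    J = np.zeros((n * D, n * D))
    for j in range(n * D):
        Xp, Xm = X.ravel().copy(), X.ravel().copy()
        Xp[j] += eps
        Xm[j] -= eps
        J[:, j] = (shoot_endpoint(P, Xp.reshape(n, D)).ravel()
                   - shoot_endpoint(P, Xm.reshape(n, D)).ravel()) / (2 * eps)
    return J

# sanity check: with zero momentum the exponential map is the identity,
# so its Jacobian is the identity matrix
X = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, -0.3]])
J = deformation_gradient(np.zeros_like(X), X)
```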
- TS_PCA.lddmm.Flowing(K, nt=10, deltat=1.0)
Return the exponential map function related to the kernel K that additionally flows a grid
- Parameters:
K (Kernel) – The Velocity kernel function (ideally the one given by TS-LDDMM)
nt (int) – number of integration time points
deltat (float) – total time of integration \(\delta\), stepsize=deltat/nt
- Returns:
Exponential flow function, also taking a grid as input : (Grid,P,X,mask_X)->\((\mathrm{Grid\_shooted},\exp_X(\delta P),\exp_X'(\delta P))\), where X is an array of size (n_samples,d+1), P is the array of momentums, of size (n_samples,d+1), and Grid is another array that may differ from X; Grid_shooted is its final position after flowing along the integrated velocity field. The exponential map, also called the shooting function, is computed by integrating the Hamiltonian system with the Ralston integrator.
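The grid is advected by the same velocity field that moves the points, \(\dot G = K(G, X)P\). A self-contained sketch, assuming an illustrative Gaussian kernel, a finite-difference \(\nabla_X H\) (the library uses automatic differentiation), and ignoring the mask:

```python
import numpy as np

def gK(A, B, sigma=1.0):
    # illustrative Gaussian cross-kernel matrix (k(a_i, b_j))
    sq = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def rhs(P, X, G, eps=1e-6):
    # Hamiltonian system plus grid advection dG/dt = K(G, X) P;
    # grad_X H is taken by finite differences in this sketch
    H = lambda p, x: 0.5 * np.sum(p * (gK(x, x) @ p))
    gX = np.zeros_like(X)
    for idx in np.ndindex(X.shape):
        Xp, Xm = X.copy(), X.copy()
        Xp[idx] += eps
        Xm[idx] -= eps
        gX[idx] = (H(P, Xp) - H(P, Xm)) / (2 * eps)
    return -gX, gK(X, X) @ P, gK(G, X) @ P

def flow(G, P, X, nt=10, deltat=1.0):
    # Ralston's 2nd-order Runge-Kutta: second stage evaluated at 2/3
    # of the step, stages weighted 1/4 and 3/4
    h = deltat / nt
    for _ in range(nt):
        dP1, dX1, dG1 = rhs(P, X, G)
        dP2, dX2, dG2 = rhs(P + 2*h/3*dP1, X + 2*h/3*dX1, G + 2*h/3*dG1)
        P = P + h * (dP1 + 3*dP2) / 4
        X = X + h * (dX1 + 3*dX2) / 4
        G = G + h * (dG1 + 3*dG2) / 4
    return G, X, P

rng = np.random.default_rng(0)
X0 = rng.normal(size=(4, 2))
P0 = 0.3 * rng.normal(size=(4, 2))
G0 = np.vstack([X0[0], [[5.0, 5.0]]])  # first grid point sits on x_0
G1, X1, P1 = flow(G0, P0, X0)
```

A grid point placed exactly on a data point follows that point, while a grid point far from all data barely moves (the Gaussian kernel decays quickly).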
- TS_PCA.lddmm.Hamiltonian(K)
Return the Hamiltonian function related to the kernel K
- Parameters:
K (Kernel) – The Velocity kernel function (ideally the one given by TS-LDDMM)
- Returns:
Hamiltonian function : (P,X,mask_X)->float, computing \(P^\top K(X,X)P/2\), where X is an array of size (n_samples,d+1), \(K(X,X)\) is the kernel matrix \((k(x_i,x_j))\), and P is the array of momentums, of size (n_samples,d+1)
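The returned map reduces to a kernel quadratic form. A minimal NumPy sketch, assuming an illustrative Gaussian kernel in place of the TS-LDDMM velocity kernel and ignoring the mask:

```python
import numpy as np

def gauss_K(X, sigma=1.0):
    # kernel matrix (k(x_i, x_j)) for an illustrative Gaussian kernel
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def hamiltonian(P, X):
    # H(P, X) = P^T K(X, X) P / 2, summed over the d+1 coordinates
    return 0.5 * np.sum(P * (gauss_K(X) @ P))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))  # n_samples=5, d+1=3
P = rng.normal(size=(5, 3))
H = hamiltonian(P, X)
```

Since the kernel matrix is positive definite for distinct points, H is nonnegative and vanishes only at P = 0.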
- TS_PCA.lddmm.HamiltonianSystem(K)
Return the Hamiltonian system function related to the kernel K
- Parameters:
K (Kernel) – The Velocity kernel function (ideally the one given by TS-LDDMM)
- Returns:
Hamiltonian system function : (P,X,mask_X)->(-grad_H_pos,grad_H_momen), i.e. \((-\nabla_X H(X,P),\nabla_P H(X,P))\), where X is an array of size (n_samples,d+1), \(K(X,X)\) is the kernel matrix \((k(x_i,x_j))\), and P is the array of momentums, of size (n_samples,d+1)
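The two gradients can be sketched with central finite differences (the library obtains them by automatic differentiation); the Gaussian kernel below is again an illustrative stand-in:

```python
import numpy as np

def gauss_K(X, sigma=1.0):
    # illustrative Gaussian kernel matrix (k(x_i, x_j))
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def hamiltonian(P, X):
    # H(P, X) = P^T K(X, X) P / 2
    return 0.5 * np.sum(P * (gauss_K(X) @ P))

def hamiltonian_system(P, X, eps=1e-6):
    # returns (-grad_X H, grad_P H) by central finite differences
    def grad(f, A):
        G = np.zeros_like(A)
        for idx in np.ndindex(A.shape):
            Ap, Am = A.copy(), A.copy()
            Ap[idx] += eps
            Am[idx] -= eps
            G[idx] = (f(Ap) - f(Am)) / (2 * eps)
        return G
    return (-grad(lambda A: hamiltonian(P, A), X),
            grad(lambda A: hamiltonian(A, X), P))

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 2))
P = rng.normal(size=(4, 2))
dX_neg, dP = hamiltonian_system(P, X)
```

Since H is quadratic in P, \(\nabla_P H\) is exactly \(K(X,X)P\), which gives a quick consistency check.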
- TS_PCA.lddmm.LDDMMLoss(K, dataloss, gamma=0.001, nt=10, deltat=1.0)
Return the Loss function related to a registration problem
- Parameters:
K (Kernel) – The Velocity kernel function (ideally the one given by TS-LDDMM)
dataloss ((X,X_mask,Y,Y_mask)->float) – the attachment loss function, typically a varifold loss
gamma (float) – regularization constant related to the norm of the initial velocity
nt (int) – number of integration time points, default=10
deltat (float) – total time of integration \(\delta\), stepsize=deltat/nt, default=1.0
- Returns:
LDDMM registration loss : (P,X,X_mask,Y,Y_mask)->float, computing \(\gamma |v_0|_{\mathsf{V}}+\mathcal{L}(\phi^{v_0}\cdot X,Y)\), where X is an array of size (n_samples,d+1) and P is the array of momentums, of size (n_samples,d+1). \(\phi^{v_0}=\exp_X(P)\), also called the shooting function, is the exponential map, computed by integrating the Hamiltonian system with the Ralston integrator on nt time points.
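Structurally, the loss is a regularizer plus an attachment term. The sketch below makes several illustrative assumptions: a Gaussian kernel, a squared kernel norm \(|v_0|^2_{\mathsf{V}} = P^\top K(X,X)P\) (a common LDDMM convention) for the regularizer, a sum-of-squares dataloss instead of a varifold loss, and a single Euler step in place of the Ralston shooting.

```python
import numpy as np

def gauss_K(X, sigma=1.0):
    # illustrative Gaussian kernel matrix
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def shoot_endpoint(P, X):
    # one Euler step stands in for the full Ralston shooting
    return X + gauss_K(X) @ P

def lddmm_loss(dataloss, gamma=1e-3):
    # builds (P, X, Y) -> gamma * |v_0|^2_V + dataloss(shot X, Y)
    def loss(P, X, Y):
        reg = np.sum(P * (gauss_K(X) @ P))  # |v_0|^2_V = P^T K(X,X) P
        return gamma * reg + dataloss(shoot_endpoint(P, X), Y)
    return loss

sq_loss = lambda A, B: np.sum((A - B) ** 2)  # toy attachment term
loss = lddmm_loss(sq_loss)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 2))
```

With zero momentum and identical source and target, both terms vanish; any nonzero momentum is penalized.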
- TS_PCA.lddmm.Shooting(K, nt=10, deltat=1.0)
Return the exponential map (shooting) function related to the kernel K
- Parameters:
K (Kernel) – The Velocity kernel function (ideally the one given by TS-LDDMM)
nt (int) – number of integration time points
deltat (float) – total time of integration \(\delta\), stepsize=deltat/nt
- Returns:
Exponential flow function : (P,X,mask_X)->\((\exp_X(\delta P),\exp_X'(\delta P))\), where X is an array of size (n_samples,d+1) and P is the array of momentums, of size (n_samples,d+1). The exponential map, also called the shooting function, is computed by integrating the Hamiltonian system with the Ralston integrator.
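The Ralston scheme itself can be sketched in a few lines; as above, the Gaussian kernel and the finite-difference \(\nabla_X H\) are illustrative stand-ins for the library's kernel and automatic differentiation.

```python
import numpy as np

def gauss_K(X, sigma=1.0):
    # illustrative Gaussian kernel matrix
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def ham_rhs(P, X, eps=1e-6):
    # (dP/dt, dX/dt) = (-grad_X H, grad_P H), grad_X H by finite differences
    H = lambda p, x: 0.5 * np.sum(p * (gauss_K(x) @ p))
    gX = np.zeros_like(X)
    for idx in np.ndindex(X.shape):
        Xp, Xm = X.copy(), X.copy()
        Xp[idx] += eps
        Xm[idx] -= eps
        gX[idx] = (H(P, Xp) - H(P, Xm)) / (2 * eps)
    return -gX, gauss_K(X) @ P

def shooting(P, X, nt=10, deltat=1.0):
    # Ralston's 2nd-order Runge-Kutta: second stage evaluated at 2/3
    # of the step, stages weighted 1/4 and 3/4
    h = deltat / nt
    for _ in range(nt):
        dP1, dX1 = ham_rhs(P, X)
        dP2, dX2 = ham_rhs(P + 2*h/3*dP1, X + 2*h/3*dX1)
        P = P + h * (dP1 + 3*dP2) / 4
        X = X + h * (dX1 + 3*dX2) / 4
    return X, P

rng = np.random.default_rng(0)
X0 = rng.normal(size=(4, 2))
P0 = 0.1 * rng.normal(size=(4, 2))
X1, P1 = shooting(P0, X0)
H = lambda p, x: 0.5 * np.sum(p * (gauss_K(x) @ p))
```

The Hamiltonian is conserved along the exact flow, so its drift along the discrete trajectory measures the integration error.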
- TS_PCA.lddmm.batch_one_to_many_registration(q0, q0_mask, batched_q1, batched_q1_mask, Kv, dataloss, batched_p0=None, gamma_loss=0.0, niter=100, optimizer=(<function chain.<locals>.init_fn>, <function chain.<locals>.update_fn>), nt=10, deltat=1.0, verbose=True, stream_bool=True, stream_object=None)
Return the momentums \(p^i\) such that the Hamiltonian dynamics starting from \((q_0,p^i)\) reach approximately each target \(q_1^i\) at time t=1. Each shooting problem is solved by minimizing the LDDMM loss defined above
- Parameters:
q0 (array of shape (n_samples,d+1)) – initial time series’ graph
q0_mask (array of shape (n_samples,1)) – initial time series’ graph mask
batched_q1 (array of shape (N_timeseries,batch_size,n_samples,d+1)) – target time series’ graphs, in practice batch_size is set to 1
batched_q1_mask (array of shape (N_timeseries,batch_size,n_samples,1)) – target time series’ graphs mask
Kv ((X,mask_X,Y,mask_Y,b)->array of the shape of b) – The Velocity kernel function (ideally the one given by TS-LDDMM)
dataloss ((X,X_mask,Y,Y_mask)->float) – the attachment loss function, typically a varifold loss
batched_p0 (array of shape (n_samples,d+1)) – initial time series’ graph momentum (optional)
gamma_loss (float) – regularization constant related to the norm of the initial velocity
niter (int) – number of iterations for the optimizer
optimizer (optax optimizer, as an (init_fn, update_fn) pair) – default=optax.adabelief(learning_rate=0.1)
nt (int) – number of integration time points, default=10
deltat (float) – total time of integration \(\delta\), stepsize=deltat/nt, default=1.0
verbose (bool) – default=True; print the loss every 10 iterations
- Returns:
batched_p (array) – momentums minimizing the LDDMM loss related to the multiple shooting problems, one of shape (n_samples,d+1) per target
q0 (array of shape (n_samples,d+1)) – initial time series’ graph
q0_mask (array of shape (n_samples,1)) – initial time series’ graph mask
- TS_PCA.lddmm.batch_one_to_many_varifold_registration(q0, q0_mask, batched_q1, batched_q1_mask, Kv, Kl, batched_p0=None, gamma_loss=0.0, niter=100, optimizer=(<function chain.<locals>.init_fn>, <function chain.<locals>.update_fn>), nt=10, deltat=1.0, verbose=True, stream_bool=True, stream_object=None)
Return the momentums \(p^i\) such that the Hamiltonian dynamics starting from \((q_0,p^i)\) reach approximately each target \(q_1^i\) at time t=1. Each shooting problem is solved by minimizing the LDDMM loss defined above
- Parameters:
q0 (array of shape (n_samples,d+1)) – initial time series’ graph
q0_mask (array of shape (n_samples,1)) – initial time series’ graph mask
batched_q1 (array of shape (N_timeseries,batch_size,n_samples,d+1)) – target time series’ graphs, in practice batch_size is set to 1
batched_q1_mask (array of shape (N_timeseries,batch_size,n_samples,1)) – target time series’ graphs mask
Kv ((X,mask_X,Y,mask_Y,b)->array of the shape of b) – The Velocity kernel function (ideally the one given by TS-LDDMM)
Kl ((X,mask_X,Y,mask_Y,b)->array of the shape of b) – The Varifold kernel function, computing \(K(X,Y)b\), where X and Y are arrays of size (n_samples,d+1), \(K(X,Y)\) is the kernel matrix \((k(x_i,y_j))\), and b is an array of shape (n_samples,d) with d the dimension of the problem
batched_p0 (array of shape (n_samples,d+1)) – initial time series’ graph momentum (optional)
gamma_loss (float) – regularization constant related to the norm of the initial velocity
niter (int) – number of iterations for the optimizer
optimizer (optax optimizer, as an (init_fn, update_fn) pair) – default=optax.adabelief(learning_rate=0.1)
nt (int) – number of integration time points, default=10
deltat (float) – total time of integration \(\delta\), stepsize=deltat/nt, default=1.0
verbose (bool) – default=True; print the loss every 10 iterations
- Returns:
batched_p (array) – initial momentums minimizing the LDDMM loss related to the multiple shooting problems, one of shape (n_samples,d+1) per target
q0 (array of shape (n_samples,d+1)) – initial time series’ graph
q0_mask (array of shape (n_samples,1)) – initial time series’ graph mask
- TS_PCA.lddmm.registration(q0, q0_mask, q1, q1_mask, Kv, dataloss, p0=None, gamma_loss=0.0, niter=100, optimizer=(<function chain.<locals>.init_fn>, <function chain.<locals>.update_fn>), nt=10, deltat=1.0, verbose=True, stream_bool=False, stream_object=None)
Return the momentum p such that the Hamiltonian dynamics starting from \((q_0,p)\) reach approximately \((q_1,p_1)\) at time t=1. The shooting problem is solved by minimizing the LDDMM loss defined above
- Parameters:
q0 (array of shape (n_samples,d+1)) – initial time series’ graph
q0_mask (array of shape (n_samples,1)) – initial time series’ graph mask
q1 (array of shape (n_samples,d+1)) – target time series’ graph
q1_mask (array of shape (n_samples,1)) – target time series’ graph mask
Kv ((X,mask_X,Y,mask_Y,b)->array of the shape of b) – The Velocity kernel function (ideally the one given by TS-LDDMM)
dataloss ((X,X_mask,Y,Y_mask)->float) – the attachment loss function, typically a varifold loss
p0 (array of shape (n_samples,d+1)) – initial time series’ graph momentum (optional)
gamma_loss (float) – regularization constant related to the norm of the initial velocity
niter (int) – number of iterations for the optimizer
optimizer (optax optimizer, as an (init_fn, update_fn) pair) – default=optax.adabelief(learning_rate=0.1)
nt (int) – number of integration time points, default=10
deltat (float) – total time of integration \(\delta\), stepsize=deltat/nt, default=1.0
verbose (bool) – default=True; print the loss every 10 iterations
- Returns:
p (array of shape (n_samples,d+1)) – initial momentum minimizing the LDDMM loss related to the shooting problem
q0 (array of shape (n_samples,d+1)) – initial time series’ graph
q0_mask (array of shape (n_samples,1)) – initial time series’ graph mask
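The optimization this function runs can be mimicked on a toy problem. In the sketch below a single Euler step replaces the Ralston shooting, plain finite-difference gradient descent replaces optax.adabelief, and the Gaussian kernel and sum-of-squares dataloss are illustrative; only the structure of the loop matches the library.

```python
import numpy as np

def gauss_K(X, sigma=1.0):
    # illustrative Gaussian kernel matrix
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def shoot_endpoint(P, X):
    # one Euler step stands in for the Ralston shooting
    return X + gauss_K(X) @ P

def register(X, Y, gamma=1e-3, niter=50, lr=0.01, eps=1e-6):
    # minimize gamma * P^T K P + ||shoot(P, X) - Y||^2 over the
    # initial momentum P
    def loss(P):
        reg = np.sum(P * (gauss_K(X) @ P))
        return gamma * reg + np.sum((shoot_endpoint(P, X) - Y) ** 2)
    P = np.zeros_like(X)
    for _ in range(niter):
        G = np.zeros_like(P)  # finite-difference gradient of the loss
        for idx in np.ndindex(P.shape):
            Pp, Pm = P.copy(), P.copy()
            Pp[idx] += eps
            Pm[idx] -= eps
            G[idx] = (loss(Pp) - loss(Pm)) / (2 * eps)
        P -= lr * G
    return P, loss

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 2))
Y = X + 0.2  # toy target: a small translation of the source
P, loss = register(X, Y)
```

After optimization the shot source should sit closer to the target than the original source does.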
- TS_PCA.lddmm.varifold_registration(q0, q0_mask, q1, q1_mask, Kv, Kl, p0=None, gamma_loss=0.0, niter=100, optimizer=(<function chain.<locals>.init_fn>, <function chain.<locals>.update_fn>), nt=10, deltat=1.0, verbose=True, stream_bool=False, stream_object=None)
Return the momentum p such that the Hamiltonian dynamics starting from \((q_0,p)\) reach approximately \((q_1,p_1)\) at time t=1. The shooting problem is solved by minimizing the LDDMM loss defined above, with the varifold loss as data attachment term
- Parameters:
q0 (array of shape (n_samples,d+1)) – initial time series’ graph
q0_mask (array of shape (n_samples,1)) – initial time series’ graph mask
q1 (array of shape (n_samples,d+1)) – target time series’ graph
q1_mask (array of shape (n_samples,1)) – target time series’ graph mask
Kv ((X,mask_X,Y,mask_Y,b)->array of the shape of b) – The Velocity kernel function (ideally the one given by TS-LDDMM)
Kl ((X,mask_X,Y,mask_Y,b)->array of the shape of b) – The Varifold kernel function, computing \(K(X,Y)b\), where X and Y are arrays of size (n_samples,d+1), \(K(X,Y)\) is the kernel matrix \((k(x_i,y_j))\), and b is an array of shape (n_samples,d) with d the dimension of the problem
p0 (array of shape (n_samples,d+1)) – initial time series’ graph momentum (optional)
gamma_loss (float) – regularization constant related to the norm of the initial velocity
niter (int) – number of iterations for the optimizer
optimizer (optax optimizer, as an (init_fn, update_fn) pair) – default=optax.adabelief(learning_rate=0.1)
nt (int) – number of integration time points, default=10
deltat (float) – total time of integration \(\delta\), stepsize=deltat/nt, default=1.0
verbose (bool) – default=True; print the loss every 10 iterations
- Returns:
p (array of shape (n_samples,d+1)) – initial momentum minimizing the LDDMM loss related to the shooting problem using the varifold loss as data attachment term
q0 (array of shape (n_samples,d+1)) – initial time series’ graph
q0_mask (array of shape (n_samples,1)) – initial time series’ graph mask
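The kernel arguments Kv and Kl above all follow the same "apply the kernel matrix to a field b" calling convention. A minimal unmasked sketch with an assumed Gaussian kernel (the TS-LDDMM kernels additionally take masks):

```python
import numpy as np

def gauss_kernel_apply(X, Y, b, sigma=1.0):
    # (K(X, Y) b)_i = sum_j k(x_i, y_j) b_j with an illustrative
    # Gaussian k; never forms more than the (n, m) kernel matrix
    sq = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2)) @ b

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
K = gauss_kernel_apply(X, X, np.eye(5))  # applying to I recovers K(X, X)
```

Applying the operator to the identity recovers the kernel matrix itself, which is symmetric with unit diagonal for a Gaussian kernel.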