Triple negative breast cancer (TNBC) Demo¶
This demo showcases an end-to-end image-guided digital twin workflow with TumorTwin.
📚 Table of Contents¶
- Step 1: Data Loading
- Step 2: Create Tumor Growth Model
- Step 3: Create a Solver object
- Step 4: Make a prediction
- Step 5 (Optional): Compute a quantity of interest and its gradient
- Step 6: Compare the model prediction to patient data
- Step 7: Calibrate the model to patient data via numerical optimization
- Step 8: Predict patient response under alternative treatment plan
- Conclusion & Discussion questions
# Set up google colab environment (if running in colab)
import sys
import importlib.util
import os
from pathlib import Path

if 'google.colab' in sys.modules:
    !pip uninstall -y torchvision torchaudio thinc fastai

    # Helper function to check if a package is installed
    def is_package_installed(package_name):
        return importlib.util.find_spec(package_name) is not None

    # Only install TumorTwin if it's not already installed
    if not is_package_installed("tumortwin"):
        !pip install git+https://github.com/OncologyModelingGroup/TumorTwin

    # Only download and extract data if it doesn't already exist
    data_path = Path("../input_files/TNBC_demo_001")
    if not data_path.exists():
        !wget https://github.com/OncologyModelingGroup/TumorTwin/raw/refs/heads/main/input_files/TNBC_demo_001.tar.gz
        !tar -xzvf TNBC_demo_001.tar.gz
        !mkdir -p ../input_files
        !mv TNBC_demo_001 ../input_files/TNBC_demo_001
Imports¶
import os
from datetime import timedelta
from pathlib import Path
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.optim as optim
from dotenv import load_dotenv
from pydantic import FilePath
from rich import print
from tumortwin.models import ReactionDiffusion3D
from tumortwin.optimizers import LMoptimizer, LMoptions
from tumortwin.postprocessing import (
compute_total_cell_count,
plot_calibration,
plot_calibration_iter,
plot_cellularity_map,
plot_imaging_summary,
plot_loss,
plot_maps_final,
plot_measured_TCC,
plot_patient_timeline,
plot_predicted_TCC,
)
from tumortwin.preprocessing import ADC_to_cellularity, compute_carrying_capacity
from tumortwin.solvers import TorchDiffEqSolver, TorchDiffEqSolverOptions
from tumortwin.types import (
ChemotherapySpecification,
CropSettings,
CropTarget,
TNBCPatientData,
)
from tumortwin.utils import daterange, days_since_first
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
%matplotlib inline
font = {
"weight": "normal",
"size": 10,
}
matplotlib.rc("font", **font)
matplotlib.rc("figure", dpi=300)
matplotlib.rc("savefig", dpi=300)
Step 1: Data Loading¶
Here we load in the input dataset. You will need to ensure that the input file paths below are set correctly.
PATIENT_INFO_PATH = FilePath("../input_files/TNBC_demo_001/TNBC_demo_001.json")
IMAGE_PATH = FilePath("../input_files/TNBC_demo_001")
crop_settings = CropSettings(crop_to=CropTarget.ROI_ENHANCE, padding=10, visit_index=-1)
patient_data = TNBCPatientData.from_file(
PATIENT_INFO_PATH, image_dir=IMAGE_PATH, crop_settings=crop_settings
)
Visualize the data¶
We can visually verify that the dataset has been loaded as we expect by plotting the patient's treatment and imaging timeline. Here you should verify that the imaging dates, treatment timelines, and treatment dosages are correct.
plot_patient_timeline(patient_data)
We can also plot a summary of the imaging data. Here you should verify that the images are aligned/registered, cropped appropriately, and that the anatomic (T1), apparent diffusion coefficient (ADC), and region-of-interest (ROI) images are as expected.
plot_imaging_summary(patient_data)
Estimate cellularity from ADC¶
The tumor cellularity, $N(\mathbf{x},t_i)$, can be estimated from the ADC map according to the equation:
$$
N(\mathbf{x},t_i) = \frac{ADC_\text{w}-ADC(\mathbf{x},t_i)}{ADC_\text{w}-ADC_\text{min}},
$$
where $ADC_\text{w}$ is the ADC of free water and $ADC_\text{min}$ is the minimum ADC value within the tumor ROI. This is implemented in the ADC_to_cellularity function. Let's go ahead and convert the ADC maps for all visits into cellularity maps.
measured_cellularity_maps = [
ADC_to_cellularity(
visit.adc_image, visit.roi_enhance_image
)
for visit in patient_data.visits
]
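For intuition, here is a minimal NumPy sketch of this conversion. The free-water ADC value and the use of the ROI minimum as $ADC_\text{min}$ are illustrative assumptions; the library's ADC_to_cellularity implementation may differ in its details.
import numpy as np

def adc_to_cellularity_sketch(adc, roi_mask, adc_water=2.5e-3):
    # Illustrative version of the ADC-to-cellularity mapping above.
    # adc: ADC map (same units as adc_water); roi_mask: boolean tumor ROI.
    adc_min = adc[roi_mask].min()  # minimum ADC inside the ROI
    cellularity = (adc_water - adc) / (adc_water - adc_min)
    cellularity = np.clip(cellularity, 0.0, 1.0)  # keep cellularity in [0, 1]
    return np.where(roi_mask, cellularity, 0.0)  # zero outside the ROI

# Toy example: a 2x2 ADC patch, with the whole patch treated as the ROI
adc_patch = np.array([[2.5e-3, 1.5e-3], [1.0e-3, 0.5e-3]])
print(adc_to_cellularity_sketch(adc_patch, np.ones_like(adc_patch, dtype=bool)))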
Step 2: Create Tumor Growth Model¶
Preparing a Reaction-Diffusion model¶
$$ \frac{\partial N(\textbf{x},t)}{\partial t} = \underbrace{\nabla \cdot \left( D \nabla N(\textbf{x},t) \right)}_{\text{invasion}} + \underbrace{k(\textbf{x})N(\textbf{x},t) \left(1 - \frac{N(\textbf{x},t)}{\theta} \right)}_{\text{logistic growth}} \quad \underbrace{- \sum_{i=1}^{n_{\text{CT}}} \sum_{j=1}^{T_i}\alpha_i C_i e^{-\beta_i\left( t-\tau_{i,j}\right)} N(\textbf{x},t)}_{\text{chemotherapy}} $$ $$ N(\textbf{x},t)_{\text{after}} = \underbrace{N(\textbf{x},t)_{\text{before}}e^{-a_{RT}d_{RT}(t)-\beta_{RT}d_{RT}(t)^2}}_{\text{radiotherapy}} $$
Model Parameters:
- $k$ : Proliferation Rate
- $d$ : Diffusivity
- $\theta$ : Carrying capacity
- $N(\textbf{x},0)$ : Initial condition for the tumor cellularity - we use the cellularity map derived from the first visit MRI data
k = torch.tensor(0.025, requires_grad=True, device=device)
d = torch.tensor(0.05, requires_grad=True, device=device)
theta = torch.tensor(1.0, requires_grad=False, device=device)
Chemotherapy parameters:
Note: Here we are working with only one chemotherapy drug, so $n_{\text{CT}}=1$
- $\alpha_{1}$ : Chemotherapy sensitivity parameter, see ChemotherapySpecification.sensitivity
- $\beta_{1}$ : Chemotherapy decay rate, see ChemotherapySpecification.decay_rate
- $C_{1}(d_{CT},t)$ : Chemotherapy drug concentration, which depends on the dosage given, and time since dosage was given.
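To build intuition for how these terms interact, here is a toy single-voxel (no diffusion) simulation of the logistic growth and chemotherapy terms from the equation above. It uses plain NumPy, forward-Euler time stepping, and illustrative parameter values rather than the TumorTwin API:
import numpy as np

k, theta = 0.05, 1.0    # proliferation rate [1/day] and carrying capacity (illustrative)
alpha, beta = 0.2, 0.7  # chemotherapy sensitivity and decay rate (illustrative)
dose_times = [0.0, 21.0]  # days on which a unit dose (C = 1.0) is delivered
dt, n_days = 0.5, 60

N = 0.25  # initial normalized cellularity in the voxel
for step in range(int(n_days / dt)):
    t = step * dt
    growth = k * N * (1.0 - N / theta)
    # Chemotherapy term: sum of exponentially decaying contributions from past doses
    chemo = sum(alpha * np.exp(-beta * (t - tau)) * N for tau in dose_times if t >= tau)
    N += dt * (growth - chemo)  # forward-Euler update
print(f"Normalized cellularity after {n_days} days: {N:.3f}")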
We can set up a ChemotherapySpecification using the treatment plan (times and dosages) that we loaded from the patient JSON file, which are now stored within the TNBCPatientData object:
ct = ChemotherapySpecification(
sensitivity=0.2,
decay_rate=0.7,
times=[c.time for c in patient_data.chemotherapy],
doses=[c.dose for c in patient_data.chemotherapy],
)
# print(ct)
Finally, we can set up the ReactionDiffusion3D model object.
model = ReactionDiffusion3D(
k=k,
d=d,
theta=theta,
patient_data=patient_data,
initial_time=patient_data.visits[0].time,
chemotherapy_specifications=[ct],
radiotherapy_specification=None,
)
Step 3: Create a Solver object¶
Under the hood, our ReactionDiffusion3D model is actually a spatially discretized version of the reaction-diffusion PDE. Solving this semi-discrete model amounts to integrating a large system of ordinary differential equations (ODEs). We do this via the torchdiffeq library, using the TorchDiffEqSolver object.
The TorchDiffEqSolverOptions object contains standard solver options such as the integration method (e.g. fourth-order Runge-Kutta, "rk4"), the timestep to use, and whether to use the adjoint method for gradient computations (if False, gradients are computed by backpropagating directly through the solver steps).
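As a self-contained illustration of what happens under the hood, here is a tiny torchdiffeq example that integrates a toy exponential-decay ODE with the fixed-step "rk4" method. This is only a sketch of the mechanism, not TumorTwin's actual internal call:
import torch
from torchdiffeq import odeint

def decay_rhs(t, y):
    # Toy ODE: dy/dt = -0.5 * y, with exact solution y0 * exp(-0.5 * t)
    return -0.5 * y

y0 = torch.tensor([1.0])
t_eval = torch.linspace(0.0, 4.0, steps=9)  # times at which to output the solution
y = odeint(decay_rhs, y0, t_eval, method="rk4", options={"step_size": 0.1})
print(y[-1], torch.exp(torch.tensor(-2.0)))  # numerical vs. exact solution at t = 4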
solver_options = TorchDiffEqSolverOptions(
step_size=timedelta(days=0.5),
use_adjoint=True,
device=device,
method="rk4",
)
solver = TorchDiffEqSolver(model, solver_options)
Step 4: Make a prediction¶
We are now ready to leverage our model and solver to make a prediction of tumor growth and response to treatment.
We will use the measured cellularity map from the first patient visit (measured_cellularity_maps[0]) as an initial condition for the model, and make a prediction from the first visit date until the final visit date: a total of 119 days. We will output the solution every 0.5 days.
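As a quick sanity check on the prediction horizon, we can difference the datetime stamps of the first and last visits:
horizon = patient_data.visits[-1].time - patient_data.visits[0].time
print(f"Prediction horizon: {horizon.days} days")  # expect 119 days for this patient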
timepoints = daterange(
patient_data.visits[0].time, patient_data.visits[-1].time, timedelta(days=0.5)
)
u0 = torch.from_numpy(measured_cellularity_maps[0].array)
times, predicted_cellularity_maps = solver.solve(timepoints=timepoints, u_initial=u0)
Forward Simulation: [2001-09-14 00:00:00 to 2002-01-11 00:00:00 with timestep 0.50 days]
Visualize the predicted tumor growth¶
Let's plot our solution. We will use the Total Tumor Cell Count (TCC) as a simple measure of the tumor size: $$ \text{TCC}(t) = \sum_{i} N_i(t)\,\theta_{\text{voxel}} $$ Here $\theta_{\text{voxel}}$ is the number of cells that fit into a single image voxel (the voxel-wise carrying capacity).
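In code, this reduces to a weighted sum over voxels. Here is a minimal sketch with small made-up arrays (the compute_total_cell_count helper used in Step 5 performs this computation on the real data):
# Made-up example: normalized cellularity in 4 voxels and a per-voxel carrying capacity
cellularity = torch.tensor([0.1, 0.5, 0.8, 0.0])
theta_voxel = torch.tensor([1.0e6, 1.0e6, 1.0e6, 1.0e6])  # cells per voxel (illustrative)
tcc = (cellularity * theta_voxel).sum()
print(f"TCC = {tcc.item():.3e} cells")  # 1.400e+06 for this toy example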
fig, ax = plt.subplots(1, 1, figsize=(5, 2))
plot_predicted_TCC(predicted_cellularity_maps, timepoints, ax=ax)
plt.show()
fig, axes = plt.subplots(1, 5, figsize=(5, 2))
time_days = np.array([days_since_first(tp, timepoints[0]) for tp in timepoints])
for i, t in enumerate([0, 25, 50, 100, 119]):
    t_idx = np.where(time_days == t)[0][0]
    plot_cellularity_map(
        predicted_cellularity_maps[t_idx], patient_data, time=t, ax=axes[i]
    )
plt.show()
Step 5 (Optional): Compute final cell count and its gradient¶
Let's compute $\text{TCC}_{\text{final}}$ explicitly, and then also compute its gradient with respect to some of the model parameters, $\frac{\partial \text{TCC}_{\text{final}}}{\partial k}$ and $\frac{\partial \text{TCC}_{\text{final}}}{\partial d}$.
carrying_capacity = compute_carrying_capacity(patient_data.breastmask_image)
final_tumor_cell_count = compute_total_cell_count(
predicted_cellularity_maps[-1], carrying_capacity
)
print(final_tumor_cell_count)
Warning: Units of input image are UNKNOWN. Defaulting to mm.
tensor(1.5552e+09, dtype=torch.float64, grad_fn=<MulBackward0>)
To compute the gradients, we need to run a backwards (adjoint) solve. We can do this using the standard PyTorch approach of calling .backward() on final_tumor_cell_count to populate the gradients.
final_tumor_cell_count.backward()
k_grad = model.k.grad
print(f"Gradient of the final TCC w.r.t proliferation rate: {k_grad}")
d_grad = model.d.grad
print(f"Gradient of the final TCC w.r.t diffusion rate: {d_grad}")
Gradient of the final TCC w.r.t proliferation rate: 118716014592.0
Gradient of the final TCC w.r.t diffusion rate: 3088261888.0
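If you are less familiar with PyTorch autograd, the same pattern on a scalar toy problem (unrelated to the tumor model) looks like this: a scalar output is computed from a tensor with requires_grad=True, .backward() is called, and the gradient appears in .grad.
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2     # toy "quantity of interest"
y.backward()   # populates x.grad with dy/dx = 2 * x
print(x.grad)  # tensor(6.)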
Step 6: Compare the model prediction to patient data¶
Let's see how our model prediction, based only on the first MRI visit, compares with the data we have from subsequent visits:
# plot TCC vs. time for predictions and measurements
fig, ax = plt.subplots(1, 1, figsize=(5,2))
plot_predicted_TCC(predicted_cellularity_maps, timepoints, ax=ax)
plot_measured_TCC(
[m.array for m in measured_cellularity_maps],
[v.time for v in patient_data.visits],
ax=ax,
)
ax.legend(["predicted", "measured"]);
# plot cellularity maps for predictions and measurements
fig, axs = plt.subplots(2, len(patient_data.visit_days), figsize=(5, 4))
time_days = np.array([days_since_first(tp, timepoints[0]) for tp in timepoints])
for i, t in enumerate(patient_data.visit_days):
    t_idx = np.where(time_days == t)[0][0]
    plot_cellularity_map(
        predicted_cellularity_maps[t_idx], patient_data, time=t, ax=axs[0, i]
    )
    plot_cellularity_map(
        torch.tensor(measured_cellularity_maps[i].array), patient_data, time=t, ax=axs[1, i]
    )
axs[0, 0].set_ylabel('Predicted')
axs[1, 0].set_ylabel('Measured');
plt.show()
Not very good! We see that our model is significantly under-predicting the size/intensity of the tumor.
- Perhaps this patient has a more aggressive tumor? We could try increasing the tumor proliferation rate, $k$, or the diffusion rate $d$?
- Perhaps they are not responding to treatment as well as we thought? We could try reducing the chemosensitivity parameters, $\alpha$?
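To put a number on the mismatch before calibrating, we can evaluate the sum-of-squares difference between the predicted and measured cellularity maps at the visit timepoints, using the arrays already computed above. This is essentially the objective that the optimizer in Step 7 will minimize:
time_days = np.array([days_since_first(tp, timepoints[0]) for tp in timepoints])
sse = 0.0
for visit_day, measured in zip(patient_data.visit_days, measured_cellularity_maps):
    t_idx = np.where(time_days == visit_day)[0][0]
    predicted = predicted_cellularity_maps[t_idx].detach().cpu().numpy()
    sse += float(((predicted - measured.array) ** 2).sum())
print(f"Sum-of-squares error across visits: {sse:.3e}")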
Step 7: Calibrate the model to patient data via numerical optimization¶
Rather than a trial-and-error approach, we can instead leverage numerical optimization to calibrate our model parameters to better match the observed data.
We will use the Levenberg-Marquardt (LM) algorithm. This algorithm seeks to minimize the sum-of-squares difference between our predicted cellularity fields and the measured cellularity fields, i.e.,
$$ \sum_{v=0}^{\texttt{N\_visits}} \sum_{j=0}^{\texttt{N\_voxels}} (N_j(t_v) - \hat{N}_j(t_v))^2 $$ where $t_v$ is the time point corresponding to visit $v$, and $N_j$ and $\hat{N}_j$ are the discretized predicted and measured (respectively) cellularity values at voxel $j$.
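For reference, here is what a single Levenberg-Marquardt update looks like on a generic least-squares problem, with the Jacobian approximated by finite differences. This is a schematic of the algorithm only (with an illustrative, fixed damping parameter), not the LMoptimizer implementation:
def lm_step(residual_fn, params, lam=1e-2, fd_eps=1e-6):
    # One schematic Levenberg-Marquardt update for a residual function r(params).
    r = residual_fn(params)
    # Finite-difference Jacobian: J[i, j] = d r_i / d params_j
    J = torch.zeros(r.numel(), params.numel())
    for j in range(params.numel()):
        p = params.clone()
        p[j] += fd_eps
        J[:, j] = (residual_fn(p) - r) / fd_eps
    # Damped normal equations: (J^T J + lam * I) delta = -J^T r
    A = J.T @ J + lam * torch.eye(params.numel())
    delta = torch.linalg.solve(A, -J.T @ r)
    return params + delta

# Toy problem: fit y = a * x + b to three points; the solution is (a, b) = (2, 1)
x_data = torch.tensor([0.0, 1.0, 2.0])
y_data = torch.tensor([1.0, 3.0, 5.0])
residuals = lambda p: p[0] * x_data + p[1] - y_data
params = torch.tensor([0.0, 0.0])
for _ in range(10):
    params = lm_step(residuals, params)
print(params)  # approaches tensor([2., 1.])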
First we choose how many imaging visits to calibrate to by setting n_visits_calibration, and picking out the corresponding solutions and timepoints:
# How many imaging dates do we want to try and match
n_visits_calibration = 2 # *Including* the initial visit
target_timepoints = [visit.time for visit in patient_data.visits[:n_visits_calibration]]
target_solution = torch.stack(
    [
        torch.from_numpy(m.array)
        for m in measured_cellularity_maps[:n_visits_calibration]
    ]
)
Next we create a helper function for the optimizer. This function takes a set of parameter values, updates the model with them, runs a forward solve, and returns the solution at the target timepoints.
def update_model_and_predict(model_parameters, timepoints=target_timepoints):
    d, k, ct_sens = torch.nn.Parameter(model_parameters)
    solver.model.d = torch.nn.Parameter(d)
    solver.model.k = torch.nn.Parameter(k)
    solver.model.chemotherapy_specifications[0].sensitivity = torch.nn.Parameter(ct_sens)
    _, predicted_cellularity_maps = solver.solve(timepoints=timepoints, u_initial=u0)
    return predicted_cellularity_maps
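For example, a single evaluation at a trial parameter set (the order is (d, k, chemotherapy sensitivity), matching the unpacking above). Note that this runs a forward solve and overwrites the model's current parameter values:
trial_prediction = update_model_and_predict(torch.tensor((0.025, 0.05, 0.5)))
print(trial_prediction[-1].shape)  # cellularity map at the last target timepoint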
Create an Optimizer object¶
Next, we set up a Levenberg-Marquardt (LM) optimizer object. We can set various algorithm hyperparameters via the LMoptions object, or stick with the defaults. We also provide an initial guess for the model parameters, as well as lower and upper bounds on their values. Finally, we pass the target solution values and our update_model_and_predict function for the optimizer to use.
# Initial guess for the optimizer, in the order (d, k, chemotherapy sensitivity).
# Alternatively, one could start from the values currently stored in the model, e.g.:
# initial_parameters = torch.tensor((solver.model.d, solver.model.k, solver.model.chemotherapy_specifications[0].sensitivity))
initial_parameters = torch.tensor((0.025, 0.05, 0.5))
options = LMoptions()
optim = LMoptimizer(
model=update_model_and_predict,
initial_guess=initial_parameters,
bounds=torch.tensor(((0.0, 2), (0.0, 0.5), (0.0, 1.0))),
y_data=target_solution,
options=options,
)
Run the optimization¶
We are now ready to run the LM algorithm for a chosen number of steps.
Consider running somewhere between 5 and 30 iterations, with more iterations providing a better quality final solution (at the cost of longer runtime).
# Run optimization for n_iter steps
n_iter = 15
for i in range(n_iter):
    print(f"Optimization Step: {i + 1}/{n_iter}")
    optim.step()
best_parameters = optim.parameters[-1]
print("Best parameters: ", best_parameters)
Optimization Step: 1/15
Initial step
Optimization Step: 2/15
Accepted step
Optimization Step: 3/15
Accepted step
Optimization Step: 4/15
Accepted step
Optimization Step: 5/15
Accepted step
Optimization Step: 6/15
Accepted step
Optimization Step: 7/15
Accepted step
Optimization Step: 8/15
Accepted step
Optimization Step: 9/15
Accepted step
Optimization Step: 10/15
Accepted step
Optimization Step: 11/15
Accepted step
Optimization Step: 12/15
Accepted step
Optimization Step: 13/15
Accepted step
Optimization Step: 14/15
Accepted step
Optimization Step: 15/15
Accepted step
Best parameters: tensor([0.0965, 0.0521, 0.2300], dtype=torch.float64, grad_fn=<CopySlices>)
Visualize the optimization iterations¶
Let's visualize the progress of the optimizer by plotting the solution at every n_opt_viz-th optimization step.
n_opt_viz = 2
sols = []
for params in optim.parameters[::n_opt_viz]:
    predicted_cellularity_maps = update_model_and_predict(params, timepoints)
    sols.append(predicted_cellularity_maps)
Now we plot the solutions visited by the optimizer, along with the corresponding loss (sum-of-squares error) values:
fig, ax =plt.subplots(1,1, figsize=(5,2))
plot_calibration_iter(
sols, carrying_capacity, timepoints, measured_cellularity_maps, patient_data, t_calibration_end=target_timepoints[-1], ax=ax
)
fig, ax =plt.subplots(1,1, figsize=(5,2))
plot_loss(torch.tensor(optim.error), ax=ax)
For this particular dataset, we know what the ground-truth parameter values are since we used them to generate the data! Of course, in a real scenario you wouldn't know what the "true" parameters are. In fact, the data might not exactly match the model for any value of the parameters.
Ideally, our calibrated parameter values will be close to these, so let's compute the relative error in each parameter:
true_parameters = torch.tensor((0.1, 0.05, 0.2))
final_parameters = optim.parameters[-1]

def relative_error_2dp(estimate, truth):
    return (100 * abs(truth - estimate) / truth).round(decimals=2)
print(f"Error in d: {relative_error_2dp(final_parameters[0], true_parameters[0])}%")
print(f"Error in k: {relative_error_2dp(final_parameters[1], true_parameters[1])}%")
print(f"Error in ct_sens: {relative_error_2dp(final_parameters[2], true_parameters[2])}%")
Error in d: 3.53%
Error in k: 4.29%
Error in ct_sens: 14.99%
Step 8: Predict patient response under alternative treatment plan¶
Now that we have a calibrated digital twin model, we can use it to predict how this particular patient might respond to different treatment plans.
Recall that we calibrated the digital twin model to imaging visits acquired after four weeks of neoadjuvant chemotherapy. A remaining treatment decision might be the neoadjuvant chemotherapy dosages and schedule. We would expect that increasing the dosage will lead to greater tumor control, but note that higher dosages are also likely to lead to greater toxicity. Let's explore the tradeoff using our calibrated digital twin model!
First, we'll define a function that rescales the not-yet-administered chemotherapy doses (here, all doses after the first four) so that the full course delivers a specified total dose.
def update_ct_total_dose(ct: ChemotherapySpecification, total_dose: float):
    current_total_dose: float = np.sum(np.array(ct.doses))
    additional_dose = total_dose - current_total_dose
    adjuvant_total_dose = np.sum(ct.doses[4:])
    dose_multiplier = (additional_dose + adjuvant_total_dose) / adjuvant_total_dose
    ct.doses[4:] = [dose * dose_multiplier for dose in ct.doses[4:]]
    return ct
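To see the arithmetic on a small made-up example (plain Python, not a ChemotherapySpecification object): suppose six doses of 10 were planned, the first four have already been delivered, and we want the full course to total 80.
doses = [10.0] * 6  # hypothetical planned doses; the first four are already given
total_dose = 80.0   # desired total for the whole course
additional = total_dose - sum(doses)               # 80 - 60 = 20 extra to deliver
remaining = sum(doses[4:])                         # 20 currently planned for the future
multiplier = (additional + remaining) / remaining  # (20 + 20) / 20 = 2.0
doses[4:] = [d * multiplier for d in doses[4:]]
print(doses, sum(doses))  # [10.0, 10.0, 10.0, 10.0, 20.0, 20.0] 80.0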
Now let's predict the tumor response for a range of total dosages, and plot the results:
sols = []
candidate_doses = [20, 30, 40, 50, 60]
fig, ax = plt.subplots(1, 1, figsize=(5, 2))
for ct_total_dose in candidate_doses:
    update_ct_total_dose(ct, ct_total_dose)
    solver.model.chemotherapy_specifications = [ct]
    print(f"Running forward solve with total dose = {ct_total_dose}")
    times, predicted_cellularity_maps = solver.solve(timepoints=timepoints, u_initial=u0)
    alpha = 0.25 + 0.75 * (ct_total_dose - min(candidate_doses)) / (
        max(candidate_doses) - min(candidate_doses)
    )
    plot_predicted_TCC(predicted_cellularity_maps, timepoints, ax=ax, alpha=alpha)
ax.legend(["Total dose: " + str(d) for d in candidate_doses]);
plt.show()
Running forward solve with total dose = 20
Running forward solve with total dose = 30
Running forward solve with total dose = 40
Running forward solve with total dose = 50
Running forward solve with total dose = 60
Conclusion¶
Here we have demonstrated the core workflows of TumorTwin. We have shown how to load a patient dataset, create a tumor growth model, create a solver for the model, make predictions under various parameters and treatments, and calibrate the model to patient data.
Discussion Questions¶
Modeling
- What effects could we add to the reaction-diffusion model?
Calibration
- How much data is needed for calibration?
- How does the timing of the imaging visits influence the calibration performance?
- Under what conditions might the calibration be unable to uniquely identify all the parameters?