Mathematical Oncology: How Modeling Is Transforming Clinical Decision-Making.
Scibilia KR, Gallagher K, et al. (2025). Mathematical Oncology: How Modeling Is Transforming Clinical Decision-Making. Cancer Research, 85(24), 4866–4879. https://doi.org/10.1158/0008-5472.CAN-25-0750
PMID: 41105675
Abstract
Mathematical models have played a significant role in the development of current chemo- and radiotherapy treatment protocols. The widespread use of cytotoxic drugs has shaped the paradigm of uniformly administering a "maximum tolerated dose" to patients; however, this approach fails to account for the dynamic and heterogeneous nature of challenging cancers, including metastatic disease. Recent clinical trials and regulatory decisions have aimed to address these issues by integrating mathematical models to drive preclinical experiments and personalize treatment schedules. By capturing mechanisms of dose-response dynamics, ecological dynamics such as tumor-immune interactions or competition dynamics, and evolutionary dynamics across different therapeutic regimens, mathematical models hold the potential to advance current therapeutic strategies. As more preclinical and clinical data become available, the integration of mathematical models with "virtual patient" frameworks, including "digital twins," and artificial intelligence methods could further advance the mechanistic complexity and decision support capabilities of such models. Nonetheless, translating mechanistic models to routine clinical workflows will require overcoming current translational barriers, notably access to clinical data in standardized formats and regulatory constraints. Overall, recent trials demonstrate the promise of the field of mathematical oncology in translating predictive dynamics into treatment decision-making beyond the "maximum tolerated dose" approach. This article is part of a special series: Driving Cancer Discoveries with Computational Research, Data Science, and Machine Learning/AI .
1
Introduction
Cancer is inherently complex and dynamic, and it exhibits significant heterogeneity both between and within patients, especially in metastatic settings (1–3). Differences are visible across multiple scales: from the genetic and epigenetic level, through cells and tissues, up to the whole organism (2, 4). Across these scales, the growth of tumors and metastases, their interactions with the microenvironment, and their response to treatment exhibit complex and nonlinear spatial and temporal dynamics.
In this complex and dynamic setting, the current treatment paradigm of uniformly and continuously administering a ‘maximum tolerated dose’ (MTD), i.e., the largest dose that patients can tolerate with acceptable side effects, often fails because it leads to disease relapse through the emergence of drug resistance (5). Furthermore, the MTD paradigm was developed during the ‘era of cytotoxic drugs’ (1, 6); however, most new cancer therapeutics today (such as targeted therapies or immunotherapies) have a different mode of action. Experience with current therapeutics shows that dose efficacy can saturate, so that higher doses add toxicity without significantly enhancing efficacy (6). This effect is further exacerbated when evaluating drug combinations (1, 7). Determining optimal treatment strategies that reflect the complexity of treatment and its induced effects therefore remains a critical issue (6, 8).
‘Mathematical Oncology’ (2, 9–15) is a growing discipline in which mechanistic mathematical models are integrated with experimental and clinical data to improve clinical decision making in oncology. Such mathematical models are often based on biological first principles to capture spatial or temporal dynamics of the drug, tumor, and microenvironment. Models may be used to understand the complex and multi-scale nature of cancer, predict outcomes, and derive personalized treatment schedules. These approaches stand in contrast to recent machine learning and artificial intelligence methods, such as neural networks, which focus on mechanism-agnostic extraction of information from large-scale data (16).
In this review, we focus on how mathematical models are integrated within clinical workflows: influencing chemotherapeutic treatment scheduling, helping establish the MTD paradigm, and planning radiation treatments. We then present the latest pilot and phase I/II trials in which mathematical oncology is advancing patient care beyond the MTD paradigm by capturing treatment and eco-evolutionary dynamics and allowing for patient-specific treatment personalization. We believe that in the near future, tighter integration of models with novel computational tools (including virtual trials, digital twins, and artificial intelligence) will further advance translation in the field. This will, however, require overcoming current translational barriers, including the lack of standardized and accessible clinical data and regulatory constraints.
2
Mathematical Models Drive Novel Treatment Decisions
Mechanistic mathematical models use equations to represent the underlying processes within a system, rather than just inputs and outputs (2, 17). Most clinical cancer models describe quantities of interest over time, such as tumor size dynamics or drug concentrations (e.g., measurable in the plasma or delivered to the tumor). These models may capture treatment dynamics, including the dose-response of systemic drugs (18) or radiotherapy (19), and eco-evolutionary principles, such as ecological interactions of cell-based immunotherapies (20–22) or evolutionary dynamics due to the emergence of resistance (23). The results of these modeling approaches can inform treatment planning with respect to drug dosing, timing, and drug combinations. Recent trials are showing promise of adapting such mechanistic models into clinical pathways, which generally requires a workflow that integrates experimental and/or clinical data to derive treatment strategies (9, 15, 24), as portrayed in Figure 1.
Constructing a mathematical model first requires making assumptions about the underlying biological processes at play, and how they interact to produce the observed dynamics. These hypotheses may be tested by calibrating mathematical models to available preclinical or clinical data. If the initial model assumptions do not capture the dynamics of the data, this can generate questions about the underlying biological mechanisms that lead to novel experimental or clinical studies aimed at answering them. A particular strength of mechanistic models is that they can be calibrated to capture heterogeneity and variability of dynamics across different scales (i.e., different tumors, patients, or cohorts), since various sets of model parameters can be chosen to reflect the corresponding heterogeneity. Although the limited patient-specific data available for an individual necessitates simpler models than those calibrated to rich experimental data sets, modeling for personalized treatment remains valuable: it can incorporate expected heterogeneity from retrospective clinical data together with biological processes understood from preclinical data.
Once mathematical models have been calibrated satisfactorily with available data (15, 25, 26), they can be used to make predictions or generate treatment recommendations on a cohort or individual level. Using model predictions to derive effective treatment doses or schedules not only avoids a trial-and-error approach that would require excessive experimental and clinical resources but also bridges the divide between experiment and clinic by integrating a mechanistic understanding from various sources.
In the following, we highlight the clinical use of mathematical models historically and currently, capturing different aspects of drug and tumor dynamics. Figure 2 graphically represents specific modeling examples we discuss in this review (27–29). Starting with simple models of tumor growth curves and dose-response (Figure 2A–B), model dynamics have increased in complexity both in terms of tumor heterogeneity (Figure 2C) and drug delivery (Figure 2D). More recently, models have begun to further embrace the multi-scale nature of both tumor and non-tumor components (Figure 2E–F), as well as to consider multi-drug treatment schedules. A separate overview of the latest modeling-informed clinical trials is also provided in Table 1, continued from (14). Overall, we will show that mathematical models can be used to drive clinical decisions that capture the complex, multi-scale, and dynamic mechanisms involved in cancer treatment.
2.1
Historical Beginnings in Chemotherapy: Log-Kill & Norton-Simon Models
Historically, mathematical models in oncology described simple tumor growth and dose-response dynamics of chemotherapeutic treatment, initially in leukemia and breast cancer. In the 1960s, Skipper et al. (30) developed the log-kill model, which revolutionized the treatment of childhood leukemia and helped establish many critical concepts such as multicycle and combination chemotherapies, as well as today’s MTD paradigm (5, 31–33). Using data from a murine model of leukemia, they described tumor cell growth using an exponential growth law, i.e., tumor cells doubling at a constant rate until reaching a lethal threshold size. This also implied fractional killing of cells relative to dose: if a single treatment dose reduces tumor burden from 10^9 to 10^8 cells, the same dose reduces a burden of 10^5 to 10^4 cells. Due to such diminishing returns in absolute cell kill, this model suggested that chemotherapeutic agents should be given as frequently as possible (e.g., in combinations or when patients exhibit minimal residual disease), which challenged the prevailing paradigm of the time (5, 32). Freireich and Frei (34) then hypothesized that the combination of chemotherapies would be additive under the log-kill model (35), leading to the development of the four-drug VAMP regimen (36) for pediatric acute lymphocytic leukemia (ALL). Despite initial reticence (35), the concept was highly successful and evolved into a series of combination therapies that raised the 10-year survival of pediatric ALL patients from 10% to over 90% (35, 37).
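The fractional-kill logic can be made concrete in a short simulation: each dose removes a fixed fraction of cells, so regrowth between doses can outpace treatment unless dosing is frequent enough. The kill depth, doubling time, and break lengths below are illustrative choices, not values from Skipper's data:

```python
def treat(n_cells, log_kill=2.0):
    """One dose removes `log_kill` logs of cells (2 logs = 99% kill)."""
    return n_cells / 10 ** log_kill

def regrow(n_cells, days, doubling_days=3.0):
    """Exponential regrowth between doses at a constant doubling time."""
    return n_cells * 2 ** (days / doubling_days)

# With long breaks, regrowth (x128 over 21 days) outpaces a 2-log kill
# (x0.01); shortening the break flips the balance.
for break_days in (21, 14):
    burden = 1e9
    for _ in range(4):
        burden = regrow(treat(burden), days=break_days)
    print(f"{break_days}-day breaks: {burden:.2e} cells after 4 cycles")
```

With these toy numbers the 21-day schedule lets the tumor grow despite treatment, while the 14-day schedule drives it down, mirroring the model's argument for frequent dosing.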
Norton and Simon (27) further refined the log-kill model in the late 1970s to describe the growth of solid tumors and their response to treatment (5, 32, 33, 38). At this time, the convention for chemotherapy dose was that intensity should be lowered after patients achieved complete remission to manage the cumulative toxicity or prevent secondary malignancies (27). In their landmark study (27), Norton and Simon challenged this convention by using mathematical models to derive insights from experimental data of various tumors that matched clinical experience. They fit untreated growth to a Gompertzian growth law, which represents exponential growth with a steadily decaying growth rate; the drug-induced death rate depended on the proliferating fraction (as shown in Figure 2A) due to the antimitotic mechanism induced by chemotherapy (27). The model implied that small residual tumors required much larger doses than previously assumed due to their large growth rates and suggested that treatment breaks should be shortened to avoid tumor regrowth (39). This dose-dense approach was then validated in a phase III trial of adjuvant chemotherapy in breast cancer (40), significantly improving clinical outcomes without increasing toxicity, and is currently the preferred dosing schedule for HER2-negative breast cancer patients receiving adjuvant chemotherapy (41).
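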
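The dose-dense argument can be sketched with the closed-form Gompertz solution: smaller tumors grow back proportionally faster, so shortening treatment breaks matters most near remission. The carrying capacity, decay rate, and per-dose kill fraction below are illustrative assumptions, not the fitted values from Norton and Simon:

```python
import math

K = 1e12   # carrying capacity (cells); illustrative
A = 0.01   # Gompertz growth-decay rate (1/day); illustrative

def gompertz(v0, t):
    """Closed-form Gompertz growth: v(t) = K * exp(ln(v0/K) * exp(-A*t))."""
    return K * math.exp(math.log(v0 / K) * math.exp(-A * t))

def course(interval_days, n_doses=6, kill=0.9, v0=1e9):
    """n doses, each removing a fixed fraction, separated by regrowth."""
    v = v0
    for _ in range(n_doses):
        v = gompertz(v * (1 - kill), interval_days)
    return v

for interval in (21, 14):   # conventional vs dose-dense spacing
    print(f"q{interval}d: {course(interval):.2e} cells after 6 doses")
```

Because the relative growth rate rises as the tumor shrinks, the dose-dense (14-day) schedule ends with a markedly lower burden than the 21-day schedule at identical total dose.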
Traina et al. (42) later extended the Norton-Simon model to improve chemotherapy dosing schedules of capecitabine for breast cancer, which comes with significant dose-limiting toxicity. They modeled tumor xenograft response dynamics with a similar Gompertzian model under the conventional capecitabine schedule and found that treatment efficacy decreased within the administration window of the conventional schedule. Subsequently, they identified in vivo that shortening the administration time of the drug could improve survival (42). This novel schedule has since been tested in various phase I/II trials (42–46), both as monotherapy and in combination, and a recently completed phase II trial (47) found that the novel dosing schedule resulted in less toxicity while retaining similar efficacy.
Crucially, these relatively simple models continue to have a significant impact on today’s treatment paradigm and first demonstrated the capability of mathematical models to infer novel treatment schedules informed by experimental or clinical evidence.
2.2
Modeling & Personalization in Radiation Oncology
Mathematical modeling has also greatly contributed to advancing radiation therapy treatment scheduling by describing crucial dose-response dynamics (3, 19, 28, 32). The linear-quadratic (LQ) model forms the basis of radiotherapy planning and optimization in the clinic by quantifying dose-responses in the tumor or surrounding normal tissue (19, 28, 32, 48, 49). It describes the survival probability S of a cell exposed to a radiation dose D by S = exp(−(αD + βD²)), where α and β reflect the cell’s radiosensitivity. Intuitively, the model is often described as inducing cell death from single or multiple radiation ‘hits’ via the α and β parameters (28), respectively, which can be interpreted as an extension of the log-kill model.
Douglas and Fowler first quantified the effect of radiation dose fractions using the LQ model in the late 1970s (28, 50), fitting to in vitro and in vivo data. During the late 1980s, it was then identified that tumors with high proliferation rates should exhibit a dose-response behavior with higher α/β ratios than the surrounding healthy tissue (28, 32, 51), as exemplified in Figure 2B (28). This implied that it would be beneficial to divide a total dose into smaller but more frequent fractions, i.e., a hyperfractionation schedule, as this maximizes tumor cell kill while minimizing damage to healthy tissue. Such a benefit of hyperfractionation was observed, for example, in head and neck cancer (32, 52), which exhibits a high α/β ratio compared with other cancers (28, 32). Today, the LQ model has widespread applicability in the clinic (28, 49), and the model’s simplicity has been attributed as a major contributing factor to its successful translation.
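The isotoxic logic of hyperfractionation can be illustrated with the biologically effective dose (BED) derived from the LQ model, BED = D(1 + d/(α/β)): holding normal-tissue BED fixed, smaller fractions permit a higher total dose and thus more tumor effect. The α/β ratios and dose levels below are generic textbook-style values, not data from a specific study:

```python
def bed(total_dose, dose_per_fraction, ab_ratio):
    """Biologically effective dose from the LQ model: D * (1 + d/(a/b))."""
    return total_dose * (1 + dose_per_fraction / ab_ratio)

AB_TUMOR, AB_NORMAL = 10.0, 3.0   # illustrative alpha/beta ratios (Gy)

# Reference schedule: 70 Gy delivered in conventional 2 Gy fractions.
normal_limit = bed(70, 2.0, AB_NORMAL)

# Hyperfractionation: 1.2 Gy fractions escalated to the *same*
# normal-tissue BED (isotoxic total dose).
iso_dose = normal_limit / (1 + 1.2 / AB_NORMAL)

print(f"conventional tumor BED:      {bed(70, 2.0, AB_TUMOR):.1f} Gy")
print(f"hyperfractionated tumor BED: {bed(iso_dose, 1.2, AB_TUMOR):.1f} Gy")
```

With these values the isotoxic hyperfractionated course reaches a higher tumor BED (about 93 Gy vs. 84 Gy) at unchanged normal-tissue BED, which is the quantitative core of the fractionation argument above.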
Beyond predicting tissue-specific radiation response, mathematical modeling may also capture the inherent heterogeneity of tumors (3, 19). Leder et al. (53) employed the LQ model to consider the dose-response of PDGF-driven glioblastoma and iteratively refined a mechanistic cell-population model (incorporating temporal dynamics of sensitive and resistant cell growth) through comparison of predicted tumor outcomes with murine experiments. Based on this successful model calibration, they derived a novel treatment schedule that showed prolonged survival over conventional schedules in vivo. The feasibility and safety of this computationally-derived schedule were subsequently trialed in a recent phase I study (54), exemplifying the potential of more complex mathematical approaches to derive novel treatment schedules.
The LQ model may also be extended to account for tissue recovery after radiation, or in combination with genomic data to bridge the multi-scale nature of the disease (3, 19). López-Alfonso et al. (55) developed a nonlinear model of normal tissue recovery that suggested altering the fractional doses delivered to organs at risk to reduce normal tissue toxicity, where the feasibility of this approach was confirmed by a phase I study of head and neck squamous cell carcinoma (56). Furthermore, Scott et al. (57) combined a gene expression-based radiosensitivity index (58) with the linear-quadratic model to develop the genomic-adjusted radiation dose (3), which is being tested in a phase II trial (59), exemplifying the potential of integrating individual-specific molecular data with mathematical modeling.
A recently completed single-center, nonrandomized, single-arm phase II trial has also shown the potential of capturing patient-specific tumor heterogeneity to make individualized treatment decisions. The proliferation saturation index (PSI) (60) is a dynamic biomarker for the proportion of proliferating, radiosensitive cells, which can be estimated pre-treatment from two conventional longitudinal tumor volume measurements using a logistic growth model (3, 19). Prokopiou et al. (60) proposed and validated the model on retrospective longitudinal data and showed that pre-treatment PSI was prognostic for radiation response. Furthermore, they identified in silico that stratifying patients into conventional and hyperfractionated radiation protocols based on pre-treatment PSI could improve tumor response (60, 61). This was successfully validated in a prospective phase II trial for HPV+ oropharyngeal cancers (61, 62), increasing the percentage of patients who achieved a robust mid-treatment response (tumor volume reduction of more than 32% by week 4 of radiotherapy) from 50% to 58%, and showed that patient-specific tumor dynamics can be leveraged to personalize radiotherapy and improve responses.
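Under logistic growth with a cohort-level growth rate, two pre-treatment volume measurements are enough to back out the carrying capacity, and hence a PSI-like saturation fraction V/K. The growth rate and volumes below are hypothetical, and the published model's calibration differs in detail; this is only a sketch of the idea:

```python
import math

def psi_from_volumes(v1, v2, dt_days, lam=0.07):
    """
    Estimate a PSI-like quantity V/K from two pre-treatment volumes under
    logistic growth dV/dt = lam*V*(1 - V/K). `lam` is a cohort-level rate
    (illustrative here; the published model calibrates it from data).
    """
    q = math.exp(-lam * dt_days)
    # Solve the logistic solution V2 = K / (1 + (K/V1 - 1)*q) for K:
    K = (1 - q) * v1 * v2 / (v1 - q * v2)
    return v2 / K

# Hypothetical patient: tumor grows from 10 to 12 cm^3 over 14 days.
print(f"PSI = {psi_from_volumes(10.0, 12.0, 14):.2f}")
```

Slow observed growth relative to the assumed rate implies the tumor is near saturation (high PSI, few proliferating cells), while fast growth implies a low PSI; the trial used this kind of stratification to assign fractionation schedules.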
Overall, recent trials exemplify that dose-response modeling (based on the LQ model) can be integrated with more complex disease scales (such as genomic information or individual-specific cell growth dynamics), which show promise to improve patient outcomes.
2.3
Modeling Drug Dynamics
Pharmacokinetic and pharmacodynamic mathematical models are used extensively for monitoring and predicting drug concentrations and their effects, with applications ranging from preclinical drug development to clinical decision-making. Pharmacokinetic (PK) approaches focus on the drug concentration profiles in the body, including uptake, distribution, metabolism, and clearance, whilst pharmacodynamic (PD) methods investigate the dynamic effects on the body and disease. Quantitative Systems Pharmacology (QSP) approaches extend these mathematical models to integrate drug dynamics across multiple scales (63–65) and are already aiding the design of clinical studies across all trial phases (64).
Many oncological drugs do not make it into clinical use due to high toxicity, complex pharmacology, and high variability between individuals (66). In these scenarios, the PK/PD dose-response relationship is crucial for identifying a therapeutic window (depicted in Figure 2D) where a drug has a high enough dose for an efficacious response, whilst constraining for severe dose-limiting, irreversible, or life-threatening toxic effects (18). The PK/PD time course may be nonlinear and/or multi-scale, as different complexities are considered in relation to the drug and system compartments to determine pharmacological, physiological, and pathological responses. Models are often individualized to patients and leverage prior knowledge using so-called nonlinear mixed-effects models, population PK models, or Bayesian methods (67). These models are usually implemented with the goals of maintaining a certain target exposure and personalizing treatment dose for efficacy and toxicity.
Maintaining a target exposure is a critical application for PK/PD models, as a fixed standard dose over a patient cohort may easily overdose or underdose individual patients. If the drug exposure is predicted to be suboptimal, adjustments may be made through escalation or de-escalation (dose changes), or intensification (frequency changes) guided by PK/PD models for dose optimization. Oftentimes, the goal is to achieve a particular target dose or a target total dose over time, denoted as area under the curve (AUC) (68–72). In an early study by Calvert et al. (68), dosing was adjusted to meet a target AUC for carboplatin using PK/PD models that considered peripheral and/or central drug concentrations separately. Model insights revealed that adjusting for body size was not as important as individualizing for variable renal clearance rates. In a later study in children with acute lymphoblastic leukemia by Evans et al. (69), dosing strategies personalized to individual PK clearance rates had a higher rate of continuous complete remissions compared to standard chemotherapy regimens based on body surface area.
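The carboplatin example corresponds to the Calvert formula, which targets a total exposure (AUC) by scaling the dose to renal function rather than body size. A minimal sketch (the formula itself is standard; the GFR values are hypothetical patients):

```python
def carboplatin_dose_mg(target_auc, gfr_ml_min):
    """
    Calvert formula: dose (mg) = target AUC (mg/mL*min) x (GFR + 25),
    where GFR (mL/min) captures renal clearance and 25 mL/min is the
    non-renal clearance constant.
    """
    return target_auc * (gfr_ml_min + 25)

# Same exposure target, very different doses as renal function varies:
for gfr in (50, 100, 150):
    print(f"GFR {gfr:3d} mL/min -> {carboplatin_dose_mg(6, gfr):.0f} mg")
```

A threefold range in renal clearance yields more than a twofold range in dose (450 to 1050 mg at AUC 6), which is why a fixed body-size-based dose can over- or underdose individual patients.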
Several studies also showed that dose personalization may be critical for keeping the measurable drug concentration within a range that balances efficacy and toxicity (73–75). Gamelin et al. (74) conducted a prospective multicenter phase III randomized trial comparing a standard body-weight constant dosing with individualized, PK-guided dose-intensified treatment of fluorouracil in metastatic colorectal cancer patients. They found a wide pharmacokinetic variability between patients and a large distribution of optimal doses that achieve the same plasma ranges in the personalized arm, which ultimately had a higher objective response rate (74). Furthermore, Long-Boyle et al. (76) developed a model calibrated with retrospective data, implementing a clinical tool to recommend busulfan dose to pediatric and young adult patients undergoing hematopoietic cell transplants. The model quickly reaches a steady-state target concentration depending on body weight, age, and clearance rates without exceeding toxic thresholds (76).
In certain cases, toxicity needs to be modeled more explicitly to monitor and predict dose limitations accurately, as there may be differences between patients due to age, body size, sex, or other PK factors. In the early 2000s, investigations on dose-densifying the administration of the standard first-line regimen of docetaxel plus epirubicin for metastatic breast cancer showed promising responses, but the more frequent administration was limited by the risk of severe, life-threatening neutropenia (77). PK/PD models were subsequently designed to enable personalized treatment of this drug combination, explicitly modeling drug pharmacokinetics, tumor size dynamics, and, crucially, neutrophil counts, which guided intensification in a phase I/II trial (77, 78). These studies showed that unacceptable toxicities from densified regimens could be circumvented using model-driven drug administration. Moreover, PK/PD models have been developed and applied to achieve target drug clearance rates that balance efficacy and toxicity for docetaxel (79) and to personalize computational metronomic schedules of vinorelbine in patients with non-small cell lung cancer and mesothelioma (80, 81). In particular, children, the elderly, and patients with renal or liver impairment may have reduced drug clearance and higher toxicities (18, 66).
2.4
Evolution-based Treatment Strategies
Despite significant progress in drug development, metastatic and advanced-stage cancers remain largely incurable, since initial response is often followed by disease relapse over multiple lines of treatment (5, 23). Evolution of treatment resistance is driven by competitive interactions between cancer cells and complex interaction dynamics within the microenvironment and with other non-malignant cells (23). Mathematical modeling is particularly suited to capture such complex nonlinear dynamics, which has resulted in a rich history of numerous model-based evolutionary treatment strategies.
Early History of Modeling Resistance
Building on the Norton–Simon model discussed above, one of the first mathematical approaches to consider the emergence of treatment resistance, from the 1980s, is the Goldie–Coldman model. This model assumed that resistant cells originate from sensitive ones through spontaneous mutations that occur and accumulate prior to treatment. It follows that any delay in treatment would result in larger, more resistant tumors that are more likely to relapse (82, 83). However, this assumption is not universally true. The Goldie–Coldman model’s emphasis on early treatment aligned partially with pancreatic (84), colorectal (85), and breast (86, 87) cancer data. However, patient outcomes have since been observed to be independent of the timing of adjuvant chemotherapy across multiple cancer types, including in esophageal (88) and lung (89, 90) cancer studies. The model also argued for alternating rather than sequential application of multiple drugs (91), where cycles of each drug would be interleaved; however, clinical trials in breast cancer based on this mathematical model observed longer overall survival on the sequential arm, where each drug was given to completion before moving to the next (92, 93).
The potential of treatment breaks was identified by developing models to account for drug sensitivity of the tumor during the emergence of drug resistance using in vivo studies of hormone therapy in a mouse model of androgen-sensitive prostate cancer (94, 95). Intermittent approaches that switch between periods of the drug being on and off have since been tested prospectively in phase II clinical studies in locally-advanced prostate cancer, demonstrating reduced side effects and enhanced disease control relative to continuous therapy (96–98). Both continuous (99) and intermittent (100) drug schedules have been informed by mathematical modeling of the treatment response, characterizing the relative impacts of therapy on cellular turnover, mutation, and receptor regulation to explain the treatment responses observed clinically. Using models to predict tumor dynamics (101) and individual patient outcomes (102) also improved upon the prospective studies in the clinic that did not use modeling to create intermittent schedules. Mathematical modeling additionally enabled in silico comparison of different scheduling approaches (103) and the application of control theory to develop optimized intermittent treatment protocols (104).
Subsequently, novel competition-based models were developed as an alternative description of the emergence of treatment resistance during cancer therapy. These models assume distinct populations of drug-sensitive and -resistant cells, which, in contrast to the Goldie–Coldman model, compete for some shared resource. Shimada et al. (105) modeled prostate tumor dynamics assuming distinct, competing populations of drug-sensitive and resistant cells, demonstrating that the potential benefits of intermittent therapy are retained under the assumption of distinct sensitive and resistant populations. These mathematical models redefined the role of sensitive cells; their continued, intratumoral presence enabled by intermittent treatment holidays not only maintains the drug sensitivity of the tumor but also suppresses the growth of resistant cells through intercellular competition.
Personalizing Competition-Based Models: Adaptive Therapy
Selection and competition motivated Gatenby et al.’s work to personalize intermittent therapy schedules in the form of adaptive therapy (AT) (106). AT differs from intermittent treatment in that the timing of holidays is adapted to patient-specific dynamics (Figure 2C) rather than using fixed timing or burden thresholds across the cohort. The first clinical trial of AT was conducted in metastatic castration-resistant prostate cancer, where mathematical models guided the cycling of abiraterone therapy based on patient-specific prostate-specific antigen (PSA) levels (107). The primary adaptive protocol, inspired by model simulations, administered abiraterone and prednisone until a patient’s PSA level dropped by 50% from baseline, before suspending treatment until PSA rebounded above the baseline. The median time until disease progression more than doubled compared to a matched contemporaneous cohort on continuous therapy (33.5 months vs. 14.3 months) while drug exposure and associated costs were concurrently reduced (108). These results also displayed significant heterogeneity in patients’ responses to AT, motivating modeling efforts to identify the patient-specific benefit of AT over conventional treatment approaches (109, 110). Such mathematical approaches have since been extended to predict the expected benefit of AT after the first treatment cycle (bioRxiv 2025.04.03.646615), potentially enabling future clinical stratification of patients based on their expected response.
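A minimal Lotka-Volterra sketch of the 50% on/off rule illustrates why keeping sensitive cells around can delay progression: off treatment, sensitive cells competitively suppress the resistant clone. All rates, initial burdens, the resistance fitness cost, and the 20% progression threshold below are illustrative assumptions, not trial-fitted parameters:

```python
def simulate(adaptive, days=600, dt=0.1):
    """Sensitive (s) and resistant (r) cells compete for shared capacity
    K = 1; the drug kills only sensitive cells. Returns the time (days)
    at which total burden first exceeds 120% of baseline, else None."""
    s, r = 0.45, 0.05
    baseline = s + r
    on = True
    t, progression = 0.0, None
    while t < days:
        total = s + r
        if progression is None and total > 1.2 * baseline:
            progression = t
        if adaptive:
            if on and total <= 0.5 * baseline:
                on = False            # burden halved: treatment holiday
            elif not on and total >= baseline:
                on = True             # burden recovered: resume therapy
        ds = (0.03 * (1 - total) - (0.06 if on else 0.0)) * s
        dr = 0.02 * (1 - total) * r   # assumed fitness cost of resistance
        s, r, t = s + ds * dt, r + dr * dt, t + dt
    return progression

cont = simulate(adaptive=False)
adap = simulate(adaptive=True)
print(f"time to progression: continuous ~{cont:.0f} d, adaptive ~{adap:.0f} d")
```

Under continuous dosing the sensitive population collapses and the resistant clone expands unopposed; under the adaptive rule the retained sensitive cells slow that expansion, so progression arrives later despite less total drug, qualitatively matching the trial's rationale.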
Clinical trials based on adaptive strategies for tumor control have been extended to metastatic castrate-sensitive prostate cancer (NCT03511196, NCT06734130), melanoma (NCT03543969), and basal cell carcinoma (NCT05651828). Further phase II AT trials in prostate (NCT05393791) and ovarian cancer (NCT05080556) also exemplify differing approaches to implementing AT, testing treatment break and de-escalation approaches, respectively. Recently published abstracts credit the ongoing role that mathematical modeling is playing in the design of current AT trials for melanoma (111) and breast cancer (112).
Combining Evolutionary Dynamics with Drug Dynamics
Whilst mathematical modeling for clinical use generally favors simplicity, drug dynamic models can be combined with evolutionary modeling to account for several factors at once. More recent PK/PD models have explicitly accounted for the heterogeneity of the tumor population, evolution of resistance, and its effect on optimal drug dosing (113–115). PK/PD models have been combined with simple tumor dynamic models, such as tumor-growth-inhibition models, which account for reduced drug response via increased drug resistance over time (113) and with Lotka-Volterra two-compartment models that represent drug-sensitive and drug-resistant cells (NCT05651828).
In another approach, Chmielecki et al. (116) built on a multi-type branching process model of sensitive and resistant cells (117, 118) and proposed that intermittent high-dose pulses combined with a lower daily dose of EGFR inhibitors could delay the emergence of resistance. In vitro cell culture experiments were coupled with a stochastic mathematical model that was calibrated with clinical pharmacokinetic data. The same group then extended the model to incorporate dose-dependent mutations of sensitive cells and showed that a pulsing strategy is effective irrespective of the dependency of the mutation rate on the drug concentration (114). This high-dose/low-dose pulsing approach was tested with erlotinib in a phase I clinical trial (NCT01967095), and while the strategy did not effectively delay the emergence of resistance, it was associated with fewer metastases (119). Poels et al. (115) later combined the same branching process model (118) with a PK model of osimertinib-dacomitinib combination therapy (120). Using an optimized dosing schedule to minimize tumor burden and toxicity, they observed a response rate of 73% in a phase I trial (NCT03810807), with fewer adverse events than osimertinib or dacomitinib monotherapies.
Targeting Tumor Heterogeneity via Combination Therapies
Combination therapies are emerging as a promising strategy to address tumor heterogeneity and resistance. In many hematologic malignancies, drugs have additive effects within single patients, yielding potent cell kill when therapies are combined (121), as seen, for example, with the VAMP regimen discussed at the beginning of this review, which exhibited additivity of log-kills in ALL. Statistical models such as Loewe Additivity (122) or the Bliss Independence model (123) can account for intra-tumoral variability of response across different therapeutics. The Bliss Independence model assumes cells respond heterogeneously to independent drugs and converges to log-kill additivity for cancer chemotherapy (35). It serves as a null model: a combination is deemed synergistic when its effect exceeds the expected response under independence (121). When resistance to multiple drugs arises from independent mechanisms, the likelihood of cross-resistance becomes successively lower with each additional drug (35, 124).
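As a minimal illustration of the Bliss calculation, the sketch below combines two hypothetical single-agent surviving fractions; the drug effects and the synergy check are illustrative assumptions, not measured values.

```python
import math

# Bliss independence sketch: independently acting drugs multiply their
# surviving fractions, which is equivalent to adding their log-kills.
# Surviving fractions here are hypothetical.
def bliss_combination(surv_a: float, surv_b: float) -> float:
    """Expected surviving fraction if the two drugs act independently."""
    return surv_a * surv_b

def is_synergistic(observed_surv: float, surv_a: float, surv_b: float) -> bool:
    """Synergy: observed kill exceeds the Bliss (null-model) expectation."""
    return observed_surv < bliss_combination(surv_a, surv_b)

s_a, s_b = 0.01, 0.001            # 2-log and 3-log single-agent kills
s_ab = bliss_combination(s_a, s_b)
log_kill_a = -math.log10(s_a)
log_kill_b = -math.log10(s_b)
log_kill_ab = -math.log10(s_ab)   # log-kills add under independence
```

In this toy setting the 2-log and 3-log single-agent kills combine to a 5-log kill under independence; any observed surviving fraction below that expectation would count as synergy.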
Solid tumors, in contrast, typically exhibit greater inter-patient heterogeneity in drug response and a higher likelihood of pre-existing resistance (121, 124). Models focused on genetic mechanisms of resistance found that combination therapies would not succeed if single mutations could confer cross-resistance to all drugs used (124). Pioneering work by Palmer and Sorger (125) further demonstrated that the benefit of combination therapies across various clinical trials can often be explained by the independent action of drugs (i.e., low cross-resistance), with one therapy acting as the primary contributor to a patient's response to the combination. Their methodology relied on a statistical model that was further refined by relying on the additivity of progression-free survival (PFS) times (126). Using published phase III combination trial results from 2014 to 2018 with available Kaplan-Meier PFS curves for both the 'standard of care' therapy and the single-agent therapeutic to be added, they correctly predicted combination trial PFS outcomes with 100% sensitivity and 78% specificity.
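The independent-action prediction can be sketched with a small Monte Carlo simulation; the exponential PFS distributions and median values below are hypothetical stand-ins for trial-derived survival curves, not the published analysis.

```python
import math
import random

# Independent drug action sketch (in the spirit of Palmer and Sorger):
# each patient benefits only from their best single drug, so the
# combination PFS is the maximum of the single-agent PFS times.
random.seed(0)
median_a, median_b = 6.0, 9.0   # hypothetical single-agent median PFS (months)

def sample_pfs(median: float) -> float:
    # Exponential PFS with the given median (rate = ln 2 / median).
    return random.expovariate(math.log(2) / median)

n = 100_000
combo_pfs = sorted(max(sample_pfs(median_a), sample_pfs(median_b))
                   for _ in range(n))
median_combo = combo_pfs[n // 2]
# The combination median exceeds the better single agent even though the
# two drugs do not interact pharmacologically at all.
```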
These findings have the potential to directly inform the design of combination therapy in future prospective trials (121), as computational tools can be used to predict the results of combination therapy trials before they begin or conclude. Furthermore, high single-agent activity implies that predictive biomarkers should be developed to personalize therapy to the most effective drug(s) for any individual patient (121). The search for such suitable biomarkers could also be supported by mathematical models (127).
Recent Evolutionary Therapies
Inspired by ecological principles driving mass-extinction events, an emerging form of evolution-based therapy is 'extinction therapy' (5), where mathematical modeling has suggested that tumor eradication could be achieved by administering treatments in a 'first-strike-second-strike' fashion (128). In this scenario, a 'first strike' dramatically reduces tumor mass and heterogeneity, rendering the small remaining tumor population vulnerable to a 'second strike': a sequence of secondary treatments that eradicates the remaining susceptible population (5, 128), as illustrated in Figure 2E. Extinction therapy is currently being evaluated in rhabdomyosarcoma (129) (NCT04388839), metastatic castration-sensitive prostate cancer (NCT05189457), and metastatic breast cancer (NCT06409390), and an extinction trial for Ewing sarcoma will be opening soon (130).
Acquiring resistance is an inherently dynamic process, and treatment personalization based on longitudinal monitoring is also starting to show great promise in counteracting disease recurrence (5, 131). Ongoing trials are investigating the feasibility of implementing real-time, model-informed decision support for individual patients through the Evolutionary Tumor Board (ETB) pilot studies, which integrate mechanistic modeling and evolutionary theory into the traditional tumor board concept (NCT04343365, NCT06423950) (132). In these studies, a multidisciplinary team that includes oncologists, radiologists, mathematicians, and evolutionary biologists uses patient-specific mathematical models to simulate tumor dynamics, forecast treatment outcomes, and guide future treatments. These models incorporate the evolution of resistance and possible treatment resensitization and are calibrated using longitudinal tumor burden data from previous treatment histories. This approach can guide which treatment should be administered, when, and why, providing patient-specific decision support for the treating physician.
2.5
Immunotherapy
In recent years, numerous immunotherapy approaches have revolutionized treatment for various cancers. Novel immunotherapies, such as immune checkpoint inhibitors or Chimeric Antigen Receptor (CAR) T-cell therapy, are known to induce complex, temporally and spatially heterogeneous effects (133), and thus the dose-response relationships needed for immunotherapy modeling often differ from those developed for anti-proliferative and targeted agents. Toxicity considerations also differ. The limitations of traditional dose selection principles and trial designs for immunotherapy, based on the MTD paradigm, have been recognized (7, 134), and mathematical model-informed designs have been proposed (7).
The most notable and clinically successful model applications in immuno-oncology are the post-approval drug label changes of PD-1/PD-L1 inhibitors, based on computational modeling that supported, for example, the extension of dosing intervals (31, 113). The application of a PK model to data from phase I and phase III trials of atezolizumab (a PD-L1 inhibitor) to determine exposure-response relationships (i.e., considering both efficacy and drug-related toxicity) revealed that administering adjusted doses every two or four weeks would have efficacy and safety comparable to the previously approved three-week interval (135). This work, along with similar modeling for nivolumab (136) and others, supported the interchangeable use of different treatment schedules, offering patients and their healthcare providers greater flexibility without having to incur additional trials (113, 135).
Mathematical modeling has also played a role in regulatory approval of immunotherapies, with the FDA employing PK modeling to evaluate the first approved CAR-T cell therapy (tisagenlecleucel) (134). This model describes the expansion of effector cells after T-cell administration, and their subsequent conversion into memory cells, recapitulating the observed phases of patient T-cell dynamics (shown in Figure 2F) from the first CAR-T cell trials (29). The model also accounted for the effects of co-medications, in order to understand adverse events such as cytokine release syndrome (CRS) (134, 137), and remained crucial for the evaluation of subsequent CAR-T cell therapies (134).
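The characteristic expansion, contraction, and persistence phases of CAR-T kinetics can be reproduced qualitatively by a minimal effector/memory model; the equations and parameters below are an illustrative sketch, not the calibrated model evaluated by the FDA.

```python
# Minimal effector/memory sketch of CAR-T kinetics: effectors expand while
# antigen-bearing target cells remain, then contract, while a slowly
# decaying memory compartment persists. All parameters are hypothetical.

def simulate_car_t(days=60.0, dt=0.01):
    b, e, m = 1e3, 1.0, 0.0        # target cells, effector CAR-T, memory
    rho, h = 1.0, 10.0             # antigen-driven expansion (saturating)
    delta, conv, mu = 0.3, 0.05, 0.005  # death, memory conversion, memory decay
    kill = 1e-2                    # target-cell kill rate per effector
    peak_e, t = e, 0.0
    while t < days:
        db = -kill * e * b
        de = rho * e * b / (b + h) - delta * e - conv * e
        dm = conv * e - mu * m
        b += db * dt
        e += de * dt
        m += dm * dt
        peak_e = max(peak_e, e)
        t += dt
    return peak_e, e, m

peak_e, e_final, m_final = simulate_car_t()
# Effectors peak once targets are depleted (expansion), then contract,
# while the memory compartment outlives them (persistence).
```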
As our mechanistic understanding of effective immunotherapy increases, the number of mathematical models that consider its multi-scale effects will also increase (64, 134, 138). Numerous mathematical modeling studies have considered tumor-immune interactions and drug dynamics in the last three decades (139, 140). The application of such models to identify promising combination therapies is also becoming more prevalent in industry (7, 140), as various ‘what-if’ scenarios targeting different parts of the larger cancer-immunity cycle (7) can be tested in silico. Overall, the approval of increasingly complex immune therapies (including combinations) requires a quantitative and mechanistic understanding of the underlying dynamics and therefore is an area where mathematical modeling will become vital.
Mathematical Models Drive Novel Treatment Decisions
Mechanistic mathematical models use equations to represent the underlying processes within a system, rather than just inputs and outputs (2, 17). Most clinical cancer models describe quantities of interest over time, such as tumor size dynamics or drug concentrations (e.g., measurable in the plasma or delivered to the tumor). These models may capture treatment dynamics, including the dose-response of systemic drugs (18) or radiotherapy (19), and eco-evolutionary principles, such as ecological interactions of cell-based immunotherapies (20–22) or evolutionary dynamics due to the emergence of resistance (23). The results of these modeling approaches can inform treatment planning with respect to drug dosing, timing, and drug combinations. Recent trials are showing promise of adapting such mechanistic models into clinical pathways, which generally requires a workflow that integrates experimental and/or clinical data to derive treatment strategies (9, 15, 24), as portrayed in Figure 1.
Constructing a mathematical model first requires making assumptions about the underlying biological processes at play and how they interact to produce the observed dynamics. These hypotheses may be tested by calibrating mathematical models to available preclinical or clinical data. If the initial model assumptions do not capture the dynamics of the data, this can generate questions about the underlying biological mechanisms that lead to novel experimental or clinical studies aimed at answering them. A particular strength of mechanistic models is that they can be calibrated to capture heterogeneity and variability of dynamics across different scales (i.e., different tumors, patients, or cohorts), since various sets of model parameters can be chosen to reflect the corresponding heterogeneity. Although the limited patient-specific data available for an individual necessitates less complex models than those that can be calibrated to experimental data sets, modeling for personalized treatment retains value: it can incorporate expected heterogeneity based on retrospective clinical data and biological processes understood from preclinical data.
Once mathematical models have been calibrated satisfactorily with available data (15, 25, 26), they can be used to make predictions or generate treatment recommendations on a cohort or individual level. Using model predictions to derive effective treatment doses or schedules not only avoids a trial-and-error approach that would require excessive experimental and clinical resources but also bridges the divide between experiment and clinic by integrating a mechanistic understanding from various sources.
In the following, we highlight the historical and current clinical use of mathematical models capturing different aspects of drug and tumor dynamics. Figure 2 graphically represents specific modeling examples we discuss in this review (27–29). Starting with simple models of tumor growth curves and dose-response (Figure 2A–B), model dynamics have increased in complexity both in terms of tumor heterogeneity (Figure 2C) and drug delivery (Figure 2D). More recently, models have begun to further embrace the multi-scale nature of both tumor and non-tumor components (Figure 2E–F), as well as consider multi-drug treatment schedules. A separate overview of the latest modeling-informed clinical trials is provided in Table 1, continued from (14). Overall, we will show that mathematical models can be used to drive clinical decisions that capture the complex, multi-scale, and dynamic mechanisms involved in cancer treatment.
2.1
Historical Beginnings in Chemotherapy: Log-Kill & Norton-Simon Models
Historically, mathematical models in oncology described simple tumor growth and dose-response dynamics of chemotherapeutic treatment, initially in leukemia and breast cancer. In the 1960s, Skipper et al. (30) developed the log-kill model, which revolutionized the treatment of childhood leukemia and helped establish many critical concepts such as multicycle and combination chemotherapies, as well as today's MTD paradigm (5, 31–33). Using data from a murine model of leukemia, they described tumor cell growth using an exponential growth law, i.e., tumor cells doubling at a constant rate until reaching a lethal threshold size. This also implied fractional killing of cells relative to dose: if a single treatment dose reduces tumor burden from 10⁹ to 10⁸ cells, the same dose reduces a burden of 10⁵ to 10⁴ cells. Due to such diminishing returns in absolute cell kill, this model suggested that chemotherapeutic agents should be given as frequently as possible (e.g., in combinations or when patients exhibit minimal residual disease), which challenged the prevailing paradigm of the time (5, 32). Freireich and Frei (34) then hypothesized that the combination of chemotherapies would be additive under the log-kill model (35), leading to the development of the four-drug VAMP regimen (36) for pediatric acute lymphocytic leukemia (ALL). Despite initial reticence (35), the concept was highly successful and evolved into a series of combination therapies that raised the 10-year survival of pediatric ALL patients from 10% to over 90% (35, 37).
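The log-kill reasoning can be sketched in a few lines; the doubling time, kill per dose, and cycle length below are hypothetical.

```python
import math

# Log-kill model sketch: exponential regrowth between cycles, and each
# dose removes a fixed *fraction* of cells (a fixed number of logs),
# not a fixed number of cells. All parameter values are illustrative.
doubling_time = 2.0       # days
growth_rate = math.log(2) / doubling_time
log_kill_per_dose = 2.0   # each dose removes 2 logs (99% of cells)
cycle_length = 7.0        # days between doses

def dose(n_cells: float) -> float:
    """Fractional (log) kill: the same dose removes the same fraction."""
    return n_cells * 10.0 ** (-log_kill_per_dose)

def one_cycle(n_cells: float) -> float:
    """Apply one dose, then regrow exponentially for one cycle."""
    return dose(n_cells) * math.exp(growth_rate * cycle_length)

n = 1e9
for _ in range(10):
    n = one_cycle(n)
# Net change per cycle: -2 logs of kill versus +3.5 doublings of regrowth,
# so repeated cycles steadily shrink the tumor toward eradication.
```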
Norton and Simon (27) further refined the log-kill model in the late 1970s to describe the growth of solid tumors and their response to treatment (5, 32, 33, 38). At this time, the convention for chemotherapy dose was that intensity should be lowered after patients achieved complete remission to manage the cumulative toxicity or prevent secondary malignancies (27). In their landmark study (27), Norton and Simon challenged this convention by using mathematical models to derive insights from experimental data of various tumors that matched clinical experience. They fit untreated growth to a Gompertzian growth law, which represents exponential growth with a steadily decaying growth rate; the drug-induced death rate depended on the proliferating fraction (as shown in Figure 2A) due to the antimitotic mechanism induced by chemotherapy (27). The model implied that small residual tumors required much larger doses than previously assumed due to their large growth rates and suggested that treatment breaks should be shortened to avoid tumor regrowth (39). This dose-dense approach was then validated in a phase III trial of adjuvant chemotherapy in breast cancer (40), significantly improving clinical outcomes without increasing toxicity, and is currently the preferred dosing schedule for HER2-negative breast cancer patients receiving adjuvant chemotherapy (41).
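One common way of writing the Norton-Simon hypothesis couples Gompertzian growth to a kill rate proportional to the unperturbed growth rate; the sketch below uses illustrative parameters and is not the authors' exact formulation.

```python
import math

# Gompertzian growth with a Norton-Simon treatment term: the drug-induced
# regression rate is proportional to the growth rate of an unperturbed
# tumor of the same size, so small, fast-growing tumors are hit hardest.
K = 1e12   # carrying capacity (cells), illustrative
r = 0.05   # Gompertz rate constant (per day), illustrative

def gompertz_rate(n: float) -> float:
    """Unperturbed Gompertz growth rate: r * N * ln(K / N)."""
    return r * n * math.log(K / n)

def simulate(n0: float, effect: float, days: float, dt: float = 0.01) -> float:
    """Euler integration of dN/dt = (1 - effect) * gompertz_rate(N)."""
    n = n0
    for _ in range(round(days / dt)):
        n += (1.0 - effect) * gompertz_rate(n) * dt
    return n

# The same proportional treatment effect (effect > 1 shrinks the tumor)
# regresses a small residual tumor far more, in relative terms, than a
# large one -- the rationale for maintaining dose intensity in remission.
small, large = 1e6, 1e11
small_after = simulate(small, effect=1.5, days=10)
large_after = simulate(large, effect=1.5, days=10)
```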
Traina et al. (42) later extended the Norton-Simon model to improve chemotherapy dosing schedules of capecitabine for breast cancer, which comes with significant dose-limiting toxicity. They modeled tumor xenograft response dynamics with a similar Gompertzian model under the conventional capecitabine schedule and found that treatment efficacy decreased within the administration window of the conventional schedule. Subsequently, they identified in vivo that shortening the administration time of the drug could improve survival (42). This novel schedule has since been tested in various phase I/II trials (42–46), both as monotherapy and in combination, and a recently completed phase II trial (47) found that the novel dosing schedule resulted in less toxicity while retaining similar efficacy.
Crucially, these relatively simple models continue to have a significant impact on today’s treatment paradigm and first demonstrated the capability of mathematical models to infer novel treatment schedules informed by experimental or clinical evidence.
2.2
Modeling & Personalization in Radiation Oncology
Mathematical modeling has also greatly contributed to advancing radiation therapy treatment scheduling by describing crucial dose-response dynamics (3, 19, 28, 32). The linear-quadratic (LQ) model forms the basis of radiotherapy planning and optimization in the clinic by quantifying dose-responses in the tumor or surrounding normal tissue (19, 28, 32, 48, 49). It describes the survival probability S of a cell being exposed to a radiation dose D by S = exp(−(αD + βD²)), where α and β reflect the cell's radiosensitivity. Intuitively, the model is often described as inducing cell death from single or multiple radiation 'hits' via the α and β parameters (28), respectively, which can be interpreted as an extension of the log-kill model.
Douglas and Fowler first quantified the effect of radiation dose fractions using the LQ model in the late 1970s (28, 50), fitting to in vitro and in vivo data. During the late 1980s, it was then identified that tumors with high proliferation rates should exhibit dose-response behavior with higher α/β ratios than the surrounding healthy tissue (28, 32, 51), as exemplified in Figure 2B (28). This implied that it would be beneficial to divide a total dose into smaller but more frequent fractions, i.e., using a hyperfractionation schedule, as this maximizes tumor cell kill while minimizing damage to healthy tissue. Such a benefit of hyperfractionation was observed, for example, in head and neck cancer (32, 52), which exhibits a high α/β ratio compared to other cancers (28, 32). Today, the LQ model has widespread applicability in the clinic (28, 49), and its simplicity has been credited as a major factor in its successful translation.
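This fractionation argument follows directly from the LQ survival formula, as the following sketch shows; the α/β ratios and schedules below are illustrative.

```python
import math

# LQ-model fractionation sketch: survival after n fractions of dose d is
# S = exp(-n * (alpha*d + beta*d**2)). At the same total dose, smaller
# fractions spare low-alpha/beta (late-responding normal) tissue more
# than a high-alpha/beta tumor. Parameter values are illustrative only.
def lq_survival(n_fractions: int, dose_per_fraction: float,
                alpha: float, alpha_beta: float) -> float:
    beta = alpha / alpha_beta
    d = dose_per_fraction
    return math.exp(-n_fractions * (alpha * d + beta * d * d))

conventional = dict(n_fractions=30, dose_per_fraction=2.0)  # 30 x 2.0 Gy
hyper = dict(n_fractions=50, dose_per_fraction=1.2)         # 50 x 1.2 Gy
# Both schedules deliver 60 Gy total.

tumor = dict(alpha=0.3, alpha_beta=10.0)   # high alpha/beta ratio
normal = dict(alpha=0.3, alpha_beta=3.0)   # late-responding normal tissue

# Survival gain from switching to hyperfractionation: larger in the
# normal tissue than in the tumor, i.e., a net therapeutic advantage.
tumor_gain = lq_survival(**hyper, **tumor) / lq_survival(**conventional, **tumor)
normal_gain = lq_survival(**hyper, **normal) / lq_survival(**conventional, **normal)
```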
Beyond predicting tissue-specific radiation response, mathematical modeling may also capture the inherent heterogeneity of tumors (3, 19). Leder et al. (53) employed the LQ model to consider the dose-response of PDGF-driven glioblastoma and iteratively refined a mechanistic cell-population model (incorporating temporal dynamics of sensitive and resistant cell growth) through comparison of predicted tumor outcomes with murine experiments. Based on this successful model calibration, they derived a novel treatment schedule that showed prolonged survival over conventional schedules in vivo. The feasibility and safety of this computationally-derived schedule were subsequently trialed in a recent phase I study (54), exemplifying the potential of more complex mathematical approaches to derive novel treatment schedules.
The LQ model may also be extended to account for tissue recovery after radiation, or in combination with genomic data to bridge the multi-scale nature of the disease (3, 19). López-Alfonso et al. (55) developed a nonlinear model of normal tissue recovery that suggested altering the fractional doses delivered to organs at risk to reduce normal tissue toxicity, where the feasibility of this approach was confirmed by a phase I study of head and neck squamous cell carcinoma (56). Furthermore, Scott et al. (57) combined a gene expression-based radiosensitivity index (58) with the linear-quadratic model to develop the genomic-adjusted radiation dose (3), which is being tested in a phase II trial (59), exemplifying the potential of integrating individual-specific molecular data with mathematical modeling.
A recently completed single-center, nonrandomized, single-arm phase II trial has also shown the potential of capturing patient-specific tumor heterogeneity to make individualized treatment decisions. The proliferation saturation index (PSI) (60) is a dynamic biomarker for the proportion of proliferating, radiosensitive cells, which can be estimated pre-treatment from two conventional longitudinal tumor volume measurements using a logistic growth model (3, 19). Prokopiou et al. (60) proposed and validated the model on retrospective longitudinal data and showed that pre-treatment PSI was prognostic for radiation response. Furthermore, they identified in silico that stratifying patients into conventional and hyperfractionated radiation protocols based on pre-treatment PSI could improve tumor response (60, 61). This was successfully validated in a prospective phase II trial for HPV+ oropharyngeal cancers (61, 62), which increased the percentage of patients achieving a robust mid-treatment response (tumor volume reduction of more than 32% by week 4 of radiotherapy) from 50% to 58% and showed that patient-specific tumor dynamics can be leveraged to personalize radiotherapy and improve responses.
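Assuming a cohort-level growth rate, the logistic model can be inverted analytically to estimate the carrying capacity, and hence a PSI-style saturation fraction, from two volume measurements; the numbers below are illustrative, not the published calibration.

```python
import math

# PSI-style sketch: fit logistic growth to two pre-treatment volumes
# (growth rate lam assumed known from cohort data) and report the
# fraction of carrying capacity already reached. Logistic growth obeys
# 1/V(t) = 1/K + (1/V1 - 1/K) * exp(-lam * t), which is linear in 1/K.
def carrying_capacity(v1: float, v2: float, dt: float, lam: float) -> float:
    e = math.exp(-lam * dt)
    return (1.0 - e) / (1.0 / v2 - e / v1)

def psi(v1: float, v2: float, dt: float, lam: float) -> float:
    """Saturation fraction V/K at the second measurement."""
    return v2 / carrying_capacity(v1, v2, dt, lam)

lam = 0.07                 # per day, hypothetical cohort-level growth rate
v1, v2 = 20.0, 22.0        # tumor volumes (cm^3) measured 30 days apart
psi_example = psi(v1, v2, 30.0, lam)
```

A tumor that grew only slightly between scans is inferred to be close to its carrying capacity (high saturation, few proliferating cells), whereas faster observed growth yields a lower value.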
Overall, recent trials exemplify that dose-response modeling (based on the LQ model) can be integrated with more complex disease scales (such as genomic information or individual-specific cell growth dynamics), which show promise to improve patient outcomes.
2.3
Modeling Drug Dynamics
Pharmacokinetic and pharmacodynamic mathematical models are used extensively for monitoring and predicting drug concentrations and their effects, with applications ranging from preclinical drug development to clinical decision-making. Pharmacokinetic (PK) approaches focus on the drug concentration profiles in the body, including uptake, distribution, metabolism, and clearance, whilst pharmacodynamic (PD) methods investigate the dynamic effects on the body and disease. Quantitative Systems Pharmacology (QSP) approaches extend these mathematical models to integrate drug dynamics across multiple scales (63–65) and are already aiding the design of clinical studies across all trial phases (64).
Many oncological drugs do not make it into clinical use due to high toxicity, complex pharmacology, and high variability between individuals (66). In these scenarios, the PK/PD dose-response relationship is crucial for identifying a therapeutic window (depicted in Figure 2D) in which the dose is high enough for an efficacious response while avoiding severe dose-limiting, irreversible, or life-threatening toxic effects (18). The PK/PD time course may be nonlinear and/or multi-scale, as different complexities are considered in relation to the drug and system compartments to determine pharmacological, physiological, and pathological responses. Models are often individualized to patients and leverage prior knowledge using so-called nonlinear mixed-effects models, population PK models, or Bayesian methods (67). These models are usually implemented with the goals of maintaining a certain target exposure and personalizing treatment dose for efficacy and toxicity.
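As a concrete example of such model-based schedule checks, a one-compartment model with repeated bolus dosing (a deliberately simple assumption; clinical PK models are typically richer) gives steady-state peak and trough concentrations in closed form:

```python
import math

# One-compartment PK sketch with repeated IV bolus dosing: plasma
# concentration is a superposition of exponential decays. Used here only
# to illustrate checking a schedule against a therapeutic window.
# All parameter values are hypothetical, not drug-specific.
def concentration(t: float, dose_mg: float, v_d: float, k_e: float,
                  tau: float) -> float:
    """Concentration at time t, with doses given at 0, tau, 2*tau, ..."""
    c = 0.0
    for i in range(int(t // tau) + 1):
        c += (dose_mg / v_d) * math.exp(-k_e * (t - i * tau))
    return c

dose_mg, v_d, k_e, tau = 100.0, 20.0, 0.1, 12.0  # mg, L, 1/h, h

# Steady-state peak/trough via the analytic accumulation factor.
acc = 1.0 / (1.0 - math.exp(-k_e * tau))
c_max_ss = (dose_mg / v_d) * acc
c_min_ss = c_max_ss * math.exp(-k_e * tau)

window = (1.0, 10.0)  # mg/L, hypothetical therapeutic window
in_window = window[0] <= c_min_ss and c_max_ss <= window[1]
```

If the predicted trough fell below the window, the model would suggest intensification (shorter tau); a peak above it would suggest de-escalation.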
Maintaining a target exposure is a critical application for PK/PD models, as a fixed standard dose across a patient cohort may easily overdose or underdose individual patients. If the drug exposure is predicted to be suboptimal, adjustments may be made through escalation or de-escalation (dose changes) or intensification (frequency changes) guided by PK/PD models for dose optimization. Oftentimes, the goal is to achieve a particular target dose or a target total drug exposure over time, quantified as the area under the concentration-time curve (AUC) (68–72). In an early study by Calvert et al. (68), dosing was adjusted to meet a target AUC for carboplatin using PK/PD models that considered peripheral and/or central drug concentrations separately. Model insights revealed that adjusting for body size was not as important as individualizing for variable renal clearance rates. In a later study in children with acute lymphoblastic leukemia by Evans et al. (69), dosing strategies personalized to individual PK clearance rates achieved a higher rate of continuous complete remissions than standard chemotherapy regimens based on body surface area.
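The AUC-targeting idea is often summarized by the widely used Calvert formula for carboplatin, which replaces body-size-based dosing with renal-function-based dosing:

```python
# Calvert formula for carboplatin: since clearance is dominated by renal
# function, dose is set from a target AUC and the patient's GFR rather
# than body size: dose (mg) = target AUC (mg/mL*min) * (GFR (mL/min) + 25).
# Patient values below are illustrative.
def carboplatin_dose(target_auc: float, gfr_ml_min: float) -> float:
    return target_auc * (gfr_ml_min + 25.0)

# Two patients of identical body size but different renal clearance
# receive very different doses for the same target exposure.
dose_normal_renal = carboplatin_dose(target_auc=6.0, gfr_ml_min=100.0)
dose_reduced_renal = carboplatin_dose(target_auc=6.0, gfr_ml_min=40.0)
```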
Several studies also showed that dose personalization may be critical for keeping the measurable drug concentration within a range that balances efficacy and toxicity (73–75). Gamelin et al. (74) conducted a prospective multicenter phase III randomized trial comparing standard body-weight-based constant dosing with individualized, PK-guided dose-intensified treatment of fluorouracil in metastatic colorectal cancer patients. They found wide pharmacokinetic variability between patients and a large distribution of optimal doses achieving the same plasma ranges in the personalized arm, which ultimately had a higher objective response rate (74). Furthermore, Long-Boyle et al. (76) developed a model calibrated with retrospective data, implementing a clinical tool to recommend busulfan doses for pediatric and young adult patients undergoing hematopoietic cell transplants. The recommended doses quickly reach a steady-state target concentration depending on body weight, age, and clearance rates without exceeding toxic thresholds (76).
In certain cases, toxicity needs to be modeled more explicitly to monitor and predict dose limitations accurately, as there may be differences between patients due to age, body size, sex, or other PK factors. In the early 2000s, investigations on dose-densifying the administration of the standard first-line regimen of docetaxel plus epirubicin for metastatic breast cancer showed promising responses, but the more frequent administration was limited by the risk of severe, life-threatening neutropenia (77). PK/PD models were subsequently designed to enable personalized treatment of this drug combination, explicitly modeling drug pharmacokinetics, tumor size dynamics, and, crucially, neutrophil counts, which guided intensification in a phase I/II trial (77, 78). These studies showed that unacceptable toxicities from densified regimens could be circumvented using model-driven drug administration. Moreover, PK/PD models have been developed and applied to achieve target drug clearance rates that balance efficacy and toxicity explicitly for docetaxel (79) and to personalize computational metronomic schedules of vinorelbine in non-small cell lung cancer and mesothelioma patients (80, 81). Children, the elderly, and patients with renal or liver impairment, in particular, may have reduced drug clearance and higher toxicities (18, 66).
2.4
Evolution-based Treatment Strategies
Despite significant progress in drug development, metastatic and advanced-stage cancers remain largely incurable, since initial response is often followed by disease relapse over multiple lines of treatment (5, 23). Evolution of treatment resistance is driven by competitive interactions between cancer cells and complex interaction dynamics within the microenvironment and with other non-malignant cells (23). Mathematical modeling is particularly suited to capture such complex nonlinear dynamics, which has resulted in a rich history of numerous model-based evolutionary treatment strategies.
Early History of Modeling Resistance
Building on the Norton–Simon model discussed above, one of the first mathematical approaches to consider the emergence of treatment resistance, dating from the 1980s, is the Goldie–Coldman model. This model assumed that resistant cells originate from sensitive ones through spontaneous mutations occurring and accumulating prior to treatment. It follows that any delay in treatment would result in larger, more resistant tumors that are more likely to relapse (82, 83). However, this assumption is not universally true. The Goldie–Coldman model's emphasis on early treatment aligned partially with pancreatic (84), colorectal (85), and breast (86, 87) cancer data. However, patient outcomes have since been observed to be independent of the timing of adjuvant chemotherapy across multiple cancer types, including in esophageal (88) and lung (89, 90) cancer studies. The model also argued for alternating rather than sequential application of multiple drugs (91), where cycles of each drug would be interleaved; however, clinical trials in breast cancer based on this mathematical model observed longer overall survival on the sequential arm, where each drug was given to completion before moving to the next (92, 93).
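The Goldie–Coldman intuition admits a back-of-envelope version: growing from one cell to N cells takes roughly N − 1 divisions, each of which may spawn a resistant mutant, so the probability that a tumor is still fully drug-sensitive decays exponentially in N. The sketch below ignores mutant-lineage extinction and uses a hypothetical mutation rate.

```python
import math

# Back-of-envelope Goldie-Coldman: with mutation probability mu per
# division and ~N-1 divisions to reach N cells, the chance that no
# resistant clone has yet arisen is roughly exp(-mu * (N - 1)).
# The mutation rate below is hypothetical.
def p_no_resistance(n_cells: float, mu: float) -> float:
    return math.exp(-mu * (n_cells - 1.0))

mu = 1e-7                          # per-division mutation rate (illustrative)
early = p_no_resistance(1e6, mu)   # small, early-detected tumor
late = p_no_resistance(1e9, mu)    # the same tumor detected later
# Delaying treatment by three orders of magnitude in burden makes
# pre-existing resistance a near certainty.
```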
The potential of treatment breaks was identified through models developed to account for the tumor's drug sensitivity during the emergence of drug resistance, using in vivo studies of hormone therapy in a mouse model of androgen-sensitive prostate cancer (94, 95). Intermittent approaches that switch between periods of the drug being on and off have since been tested prospectively in phase II clinical studies in locally-advanced prostate cancer, demonstrating reduced side effects and enhanced disease control relative to continuous therapy (96–98). Both continuous (99) and intermittent (100) drug schedules have been informed by mathematical modeling of the treatment response, characterizing the relative impacts of therapy on cellular turnover, mutation, and receptor regulation to explain the treatment responses observed clinically. Using models to predict tumor dynamics (101) and individual patient outcomes (102) also improved upon the prospective clinical studies that did not use modeling to create intermittent schedules. Mathematical modeling additionally enabled in silico comparison of different scheduling approaches (103) and the application of control theory to develop optimized intermittent treatment protocols (104).
Subsequently, novel competition-based models were developed as an alternative description of the emergence of treatment resistance during cancer therapy. These models assume distinct populations of drug-sensitive and drug-resistant cells which, in contrast to the Goldie–Coldman model, compete for a shared resource. Shimada et al. (105) modeled prostate tumor dynamics under this assumption and demonstrated that the potential benefits of intermittent therapy are retained when sensitive and resistant populations are distinct. These mathematical models redefined the role of sensitive cells: their continued intratumoral presence, enabled by intermittent treatment holidays, not only maintains the drug sensitivity of the tumor but also suppresses the growth of resistant cells through intercellular competition.
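This competition argument can be sketched with a minimal two-population Lotka-Volterra model; the parameters, the cost of resistance, and the drug effect are all illustrative assumptions.

```python
# Lotka-Volterra sketch of sensitive (S) and resistant (R) cells competing
# for a shared carrying capacity (normalized to 1). Removing sensitive
# cells with continuous therapy releases resistant cells from competition.
# Illustrative parameters, simple Euler integration.

def lv_simulate(drug_on: bool, days: float = 200.0, dt: float = 0.05):
    s, r = 0.5, 0.01          # initial fractions of carrying capacity
    rs, rr = 0.35, 0.25       # resistant cells pay a growth-rate cost
    kill = 0.8 if drug_on else 0.0
    for _ in range(round(days / dt)):
        total = s + r
        ds = rs * s * (1.0 - total) - kill * s
        dr = rr * r * (1.0 - total)
        s += ds * dt
        r += dr * dt
    return s, r

s_treated, r_treated = lv_simulate(drug_on=True)
s_holiday, r_holiday = lv_simulate(drug_on=False)
# Continuous therapy clears the sensitive population but lets resistant
# cells expand to dominate; an untreated tumor keeps them suppressed.
```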
Personalizing Competition-Based Models: Adaptive Therapy
Selection and competition motivated Gatenby et al.’s work to personalize intermittent therapy schedules in the form of adaptive therapy (AT) (106). AT differs from intermittent treatment in that the timing of holidays is adapted to patient-specific dynamics (Figure 2C) rather than using fixed timing or burden thresholds across the cohort. The first clinical trial of AT was conducted in metastatic castration-resistant prostate cancer, where mathematical models guided the cycling of abiraterone therapy based on patient-specific prostate-specific antigen (PSA) levels (107). The primary adaptive protocol, inspired by model simulations, administered abiraterone and prednisone until a patient’s PSA level dropped by 50% from baseline, before suspending treatment until PSA rebounded above the baseline. The median time until disease progression more than doubled compared to a matched contemporaneous cohort on continuous therapy (33.5 months vs. 14.3 months) while drug exposure and associated costs were concurrently reduced (108). These results also displayed significant heterogeneity in patients’ responses to AT, motivating modeling efforts to identify the patient-specific benefit of AT over conventional treatment approaches (109, 110). Such mathematical approaches have since been extended to predict the expected benefit of AT after the first treatment cycle (bioRxiv 2025.04.03.646615), potentially enabling future clinical stratification of patients based on their expected response.
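The trial's 50% rule can be mimicked on a simple two-population competition model to illustrate why adaptive cycling delays progression relative to continuous dosing; the model form, thresholds, and parameters below are illustrative, not the trial's calibrated model.

```python
# Adaptive-therapy sketch on a sensitive/resistant Lotka-Volterra model:
# treat until total burden halves, pause until it returns to baseline
# (an analogue of the trial's PSA-based 50% rule), and compare time to
# progression (burden > 1.2x baseline) against continuous dosing.
# All parameters and thresholds are illustrative.

def time_to_progression(adaptive: bool, dt: float = 0.02) -> float:
    s, r = 0.5, 0.001            # sensitive / resistant fractions of capacity
    rs, rr, kill = 0.4, 0.3, 1.0 # growth rates and drug kill rate
    baseline = s + r
    on, t = True, 0.0
    while t < 2000.0:
        total = s + r
        if total > 1.2 * baseline:
            return t                          # progression
        if adaptive:
            if on and total < 0.5 * baseline:
                on = False                    # start treatment holiday
            elif not on and total >= baseline:
                on = True                     # resume therapy
        s += (rs * s * (1.0 - total) - (kill * s if on else 0.0)) * dt
        r += rr * r * (1.0 - total) * dt
        t += dt
    return t

ttp_adaptive = time_to_progression(adaptive=True)
ttp_continuous = time_to_progression(adaptive=False)
# Keeping sensitive cells around slows the resistant population's growth
# through competition, delaying progression in this toy model.
```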
Clinical trials based on adaptive strategies for tumor control have been extended to metastatic castrate-sensitive prostate cancer (NCT03511196, NCT06734130), melanoma (NCT03543969), and basal cell carcinoma (NCT05651828). Further phase II AT trials in prostate (NCT05393791) and ovarian cancer (NCT05080556) also exemplify differing approaches to implementing AT, testing treatment break and de-escalation approaches, respectively. Recently published abstracts highlight the ongoing role of mathematical modeling in the design of current AT trials for melanoma (111) and breast cancer (112).
Combining Evolutionary Dynamics with Drug Dynamics
Whilst mathematical modeling for clinical use generally favors simplicity, drug dynamic models can be combined with evolutionary modeling to account for several factors at once. More recent PK/PD models have explicitly accounted for the heterogeneity of the tumor population, the evolution of resistance, and their effect on optimal drug dosing (113–115). PK/PD models have been combined with simple tumor dynamic models, such as tumor-growth-inhibition models, which account for reduced drug response via increased drug resistance over time (113), and with Lotka-Volterra two-compartment models that represent drug-sensitive and drug-resistant cells (NCT05651828).
In another approach, Chmielecki et al. (116) built on a multi-type branching process model of sensitive and resistant cells (117, 118) and proposed that intermittent high-dose pulses combined with a lower daily dose of EGFR inhibitors could delay the emergence of resistance. In vitro cell culture experiments were coupled with a stochastic mathematical model that was calibrated with clinical pharmacokinetic data. The same group then extended the model to incorporate dose-dependent mutations of sensitive cells and showed that a pulsing strategy is effective irrespective of the dependency of the mutation rate on the drug concentration (114). This high-dose/low-dose pulsing approach was tested with erlotinib in a phase I clinical trial (NCT01967095), and while the strategy did not effectively delay the emergence of resistance, it was associated with fewer metastases (119). Poels et al. (115) later combined the same branching process model (118) with a PK model of osimertinib-dacomitinib combination therapy (120). Using an optimized dosing schedule to minimize tumor burden and toxicity, they observed a response rate of 73% in a phase I trial (NCT03810807), with fewer adverse events than osimertinib or dacomitinib monotherapies.
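The flavor of such multi-type branching process models can be sketched in a few lines (illustrative birth, death, and mutation rates, not the calibrated models of refs. 116–118): a drug-sensitive population declines under treatment, but each division carries a small chance of seeding a resistant mutant.

```python
import random

def resistance_emerges(n0=10_000, birth=0.14, death=0.20, mu=1e-4, seed=0):
    """One stochastic run of a toy two-type branching process: returns True
    if a resistant mutant arises before the declining sensitive population
    (death > birth under drug) goes extinct. Each division mutates with
    probability mu."""
    rng = random.Random(seed)
    n = n0
    while n > 0:
        births = sum(rng.random() < birth for _ in range(n))
        deaths = sum(rng.random() < death for _ in range(n))
        if any(rng.random() < mu for _ in range(births)):
            return True
        n += births - deaths
    return False

print(resistance_emerges(mu=0.0))  # False: without mutation, the population simply dies out
```

Averaging many runs estimates the probability that resistance emerges under a given schedule; pulsed regimens can be compared by making `birth` and `death` time-dependent.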
Targeting Tumor Heterogeneity via Combination Therapies
Combination therapies are emerging as a promising strategy to address tumor heterogeneity and resistance. In many hematologic malignancies, drugs have additive effects within single patients, so that combining therapies yields potent cell kill (121), as seen at the beginning of this review with the VAMP regimen, which exhibited additivity of log-kills in ALL. Statistical models including Loewe Additivity (122) or the Bliss Independence model (123) can account for intra-tumoral variability of response across different therapeutics. The Bliss Independence model assumes cells respond heterogeneously to independent drugs and converges to log-kill additivity for cancer chemotherapy (35). It serves as a null model: drugs are considered synergistic when the effect of the combination exceeds the response expected under independence (121). When resistance to multiple drugs is due to independent mechanisms, the likelihood of cross-resistance is successively lower with each additional drug (35, 124).
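The Bliss expectation is a one-liner: surviving fractions under independent drugs multiply, so for fractional effects E_A and E_B the expected combination effect is E_AB = E_A + E_B - E_A*E_B, which is exactly additivity of log-kills.

```python
def bliss_expected_effect(e_a, e_b):
    """Bliss independence: surviving fractions multiply, so
    1 - E_AB = (1 - E_A) * (1 - E_B)."""
    return 1 - (1 - e_a) * (1 - e_b)

def bliss_excess(e_obs, e_a, e_b):
    """Observed minus expected combination effect; values > 0 suggest synergy."""
    return e_obs - bliss_expected_effect(e_a, e_b)

# Two drugs that each kill 90% (a 1-log kill) are expected to kill 99% (2 logs):
print(bliss_expected_effect(0.9, 0.9))  # ≈ 0.99
```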
Solid tumors, in contrast, typically exhibit greater inter-patient heterogeneity in drug response and display a higher likelihood of pre-existing resistance (121, 124). Models focused on genetic mechanisms of resistance found that combination therapies would not succeed if single mutations could confer cross-resistance across all used drugs (124). Pioneering work by Palmer and Sorger (125) further demonstrated that the benefit of combination therapies across various clinical trials can often be explained by the independent action of drugs (i.e., low cross-resistance), with one therapy acting as the primary contributor to a patient’s response to combination therapy. Their methodology relied on a statistical model that was further refined by relying on the additivity of progression-free survival (PFS) times (126). By selecting published phase III combination trial results between 2014 and 2018 with available Kaplan-Meier PFS curves of the ‘standard of care’ therapy and of a single-agent therapeutic to be added, they were able to correctly predict combination trial PFS outcomes with 100% sensitivity and 78% specificity.
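The independent-action null model can be sketched with a small Monte Carlo simulation (a simplified illustration in the spirit of Palmer and Sorger's analysis, not their exact method, which additionally models partial cross-resistance correlation): each virtual patient draws one PFS time per monotherapy arm and progresses at the later of the two.

```python
import math, random, statistics

def independent_action_median(arm_a, arm_b, n=20_000, seed=0):
    """Null-model sketch: each virtual patient draws one PFS time per
    monotherapy arm, independently, and their combination-therapy PFS is
    the better of the two draws. Returns the median combination PFS."""
    rng = random.Random(seed)
    combo = [max(rng.choice(arm_a), rng.choice(arm_b)) for _ in range(n)]
    return statistics.median(combo)

# Hypothetical exponential monotherapy PFS with medians of 6 and 8 months:
rng = random.Random(1)
arm_a = [rng.expovariate(math.log(2) / 6) for _ in range(5000)]
arm_b = [rng.expovariate(math.log(2) / 8) for _ in range(5000)]
print(independent_action_median(arm_a, arm_b))  # > 8: combination beats either arm alone
```

This illustrates why a combination can outperform both monotherapies even with zero synergy: each patient simply benefits from whichever drug happens to work best for them.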
These findings have the potential to directly inform the design of combination therapy in future prospective trials (121), as computational tools can be used to predict the results of combination therapy trials before they begin or conclude. Furthermore, high single-agent activity implies that predictive biomarkers should be developed to personalize therapy to the most effective drug(s) for any individual patient (121). The search for such suitable biomarkers could also be supported by mathematical models (127).
Recent Evolutionary Therapies
Inspired by ecological principles driving mass-extinction events, an emerging form of evolution-based therapy is ‘extinction therapy’ (5), where mathematical modeling has suggested that tumor eradication could be achieved by administering treatments in a ‘first-strike-second-strike’ fashion (128). In this scenario, a ‘first strike’ would dramatically reduce tumor mass and heterogeneity via a first treatment, rendering the small remaining tumor population vulnerable to a ‘second strike’: a sequence of secondary treatments that eradicate the remaining susceptible population (5, 128), as illustrated in Figure 2E. Extinction therapy is currently being evaluated in rhabdomyosarcoma (129) (NCT04388839), metastatic castration-sensitive prostate cancer (NCT05189457), and metastatic breast cancer (NCT06409390); an extinction trial for Ewing sarcoma will be opening soon (130).
Acquiring resistance is an inherently dynamic process, and treatment personalization based on longitudinal monitoring is also starting to show great promise in counteracting disease recurrence (5, 131). Ongoing trials are investigating the feasibility of implementing real-time, model-informed decision support for individual patients with the Evolutionary Tumor Board (ETB) pilot studies, which integrate mechanistic modeling and evolutionary theory into the traditional tumor board concept (NCT04343365, NCT06423950) (132). In these studies, a multidisciplinary team that includes oncologists, radiologists, mathematicians, and evolutionary biologists uses patient-specific mathematical models to simulate tumor dynamics, forecast treatment outcomes, and guide future treatments. These models incorporate the evolution of resistance and possible treatment resensitization and are calibrated using longitudinal tumor burden data from previous treatment histories. This approach can guide which treatment should be administered, why, and when, providing patient-specific decision support for the treating physician.
2.5
Immunotherapy
In recent years, numerous immunotherapy approaches have revolutionized treatment for various cancers. Novel immunotherapies, such as immune checkpoint inhibitors or Chimeric Antigen Receptor (CAR) T-cell therapy, are known to induce complex temporal and spatially heterogeneous effects (133), and thus, the dose-response relationships needed for immunotherapy modeling are often different from those developed for anti-proliferative and targeted agents. Toxicity considerations also differ. The limitations of traditional dose selection principles and trial designs for immunotherapy, based on the MTD paradigm, have been recognized (7, 134), and mathematical model-informed designs have been proposed (7).
The most notable and clinically successful model applications in immuno-oncology are the post-approval drug label changes of PD-1/PD-L1 inhibitors, based on computational modeling that supported, for example, the extension of dosing intervals (31, 113). The application of a PK model to data from phase I and phase III trials of atezolizumab (a PD-L1 inhibitor) to determine exposure-response relationships (i.e., considering both efficacy and drug-related toxicity) revealed that administering adjusted doses every two or four weeks would have comparable efficacy and safety to the previously approved three-week interval (135). This work, along with similar modeling for nivolumab (136) and others, supported the interchangeable use of different treatment schedules, offering patients and their healthcare providers greater flexibility without having to incur additional trials (113, 135).
Mathematical modeling has also played a role in regulatory approval of immunotherapies, with the FDA employing PK modeling to evaluate the first approved CAR-T cell therapy (tisagenlecleucel) (134). This model describes the expansion of effector cells after T-cell administration, and their subsequent conversion into memory cells, recapitulating the observed phases of patient T-cell dynamics (shown in Figure 2F) from the first CAR-T cell trials (29). The model also accounted for the effects of co-medications, in order to understand adverse events such as cytokine release syndrome (CRS) (134, 137), and remained crucial for the evaluation of subsequent CAR-T cell therapies (134).
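The characteristic phases of these dynamics can be captured with a simple piecewise-exponential sketch (entirely illustrative parameters; not the model used in the regulatory evaluation): exponential expansion to a peak, followed by a biexponential contraction in which short-lived effector cells decay quickly while a small memory fraction persists.

```python
import math

def car_t_copies(t, t_max=10.0, expand=0.8, alpha=0.3, beta=0.01,
                 f_memory=0.02, c_max=1.0):
    """Toy CAR-T transgene kinetics: exponential expansion to a peak at
    t_max (days), then biexponential decline as short-lived effector cells
    (rate alpha) give way to persistent memory cells (rate beta)."""
    if t <= t_max:
        return c_max * math.exp(expand * (t - t_max))
    dt = t - t_max
    return c_max * ((1 - f_memory) * math.exp(-alpha * dt)
                    + f_memory * math.exp(-beta * dt))

# Expansion to peak, rapid contraction, then long-lived low-level persistence:
print(car_t_copies(5.0), car_t_copies(10.0), car_t_copies(30.0), car_t_copies(200.0))
```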
As our mechanistic understanding of effective immunotherapy increases, the number of mathematical models that consider its multi-scale effects will also increase (64, 134, 138). Numerous mathematical modeling studies have considered tumor-immune interactions and drug dynamics in the last three decades (139, 140). The application of such models to identify promising combination therapies is also becoming more prevalent in industry (7, 140), as various ‘what-if’ scenarios targeting different parts of the larger cancer-immunity cycle (7) can be tested in silico. Overall, the approval of increasingly complex immune therapies (including combinations) requires a quantitative and mechanistic understanding of the underlying dynamics and therefore is an area where mathematical modeling will become vital.
3
Computational Tools Beyond Mathematical Modeling
We have shown here how mathematical models using equations that describe cancer and treatment dynamics can drive clinical decisions through prediction and decision support at the bedside. Crucially, translation of mathematical models to the clinic requires calibration via integration of experimental and clinical data. With the increasing availability of large troves of experimental and clinical data, novel computational methods can be used to advance clinical translation (141–143). We highlight here how virtual cohorts and artificial intelligence methods can be integrated into the workflow of mathematical model-driven decision support.
3.1
Mechanistic Virtual Patient Modeling
Understanding both inter- and intra-patient variability is a key application of modeling approaches in the clinical setting. One tool to capture this variability is a ‘virtual patient’ cohort (26, 144, 145). Individual virtual patients are in silico instances of an underlying model, calibrated from plausible ranges of biological and clinical data. When analyzed as a group, their simulated outcomes should reflect the heterogeneity observed in equivalent clinical cohorts. These virtual patients can be created from a combination of statistical models (e.g., sampling from survival models, Bayesian methods), machine learning and AI models (e.g., generative models, predictors conditioned on patient features), and mechanistic mathematical models (e.g., ODE systems, PK/PD and QSP frameworks). Here we focus on mechanistic models, which are derived from the underlying biology of the cancer, calibrated with real-world data, and framed in a clinical context.
Many of the previously presented clinically-actionable models used virtual cohorts in some capacity to facilitate model translation (60, 115, 132). Prokopiu et al. (60) generated virtual patient cohorts to quantify the inherent uncertainty of parameter estimates for the dynamics-informed pre-treatment PSI biomarker for radiation response. Furthermore, Poels et al. (115) used an in silico trial approach to estimate the best combination therapy in a virtual population with inter-subject variability in pharmacokinetics and pre-existing resistance. The virtual cohort methodology was also pivotal in the studies led by Sorger and Palmer (35, 125, 126), which predicted the efficacy of drugs in combination using retrospective single-drug outcome data.
In general, model parameters for mechanistic virtual cohorts are sampled from distributions representing the plausible variability in patient biology (e.g., tumor growth rates, treatment efficacy, drug resistance) to predict outcomes on the population level (26, 144). By calibrating parameter values to retrospective data (including tumor growth curves, biomarkers, and demographic data), a virtual patient cohort can be matched to observed metrics in a real patient population. We believe that virtual cohorts are crucial for model generalizability and successful translation, as they allow models to explicitly account for variability across different scales (i.e., variability across measurements, or heterogeneity in patient outcomes).
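A minimal version of this sampling workflow (with entirely illustrative parameter distributions) draws patient-specific growth and kill rates from lognormal distributions and summarizes the cohort-level response rate:

```python
import math, random

def virtual_cohort(n=1000, seed=0):
    """Minimal virtual-cohort sketch: each virtual patient draws a tumor
    growth rate and a drug kill rate from lognormal distributions; a
    'responder' has net tumor shrinkage after 90 days of therapy."""
    rng = random.Random(seed)
    responders = 0
    for _ in range(n):
        growth = rng.lognormvariate(math.log(0.02), 0.4)   # per-day growth rate
        kill = rng.lognormvariate(math.log(0.03), 0.6)     # per-day kill rate
        relative_burden = math.exp((growth - kill) * 90)   # burden at day 90 / baseline
        if relative_burden < 1.0:
            responders += 1
    return responders / n

print(virtual_cohort())  # heterogeneous outcomes: many, but not all, respond
```

In practice, the sampling distributions would themselves be calibrated so that simulated metrics (response rates, burden trajectories) match those observed in a real patient population.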
Virtual cohorts can further be used in a variety of clinical applications, for example, to test hypotheses, design and simulate clinical trials, and personalize treatment. Running virtual trials (26, 146, 147) on virtual cohorts is a rapid and cost-effective way to connect experimental and clinical data (26) and to explore trial scenarios in advance; a rich set of longitudinal data (e.g., imaging or blood-based biomarkers) is essential for calibrating such models. Mathematical models can then be leveraged to explore novel treatment strategies under a variety of assumptions, enabling investigators to test counterfactual ‘what-if’ scenarios on these virtual cohorts before committing significant resources to a trial in humans (26).
Furthermore, the virtual patient approach can be applied to the individual, instead of focusing on cohort outcomes alone. This methodology, often termed ‘digital twin’ or ‘patient avatar’ (148–151), could forecast the disease trajectories of various treatment options, essentially allowing clinicians to simulate different interventions for a patient and allowing for dynamic updates during the course of therapy. Early work by the Swanson lab significantly advanced the field, showing how spatial patient-specific mechanistic models could be incorporated with routinely available longitudinal imaging data in brain cancer (152–154). Further, there has been significant interest from industry in integrating the virtual patient and virtual trial methodology in immuno-oncology to optimize combination therapies using mechanistic mathematical modeling (7, 138, 140). Our ongoing Evolutionary Tumor Board studies (132) also demonstrate the clinical feasibility of a virtual patient-based decision support tool, which can serve both as an advisor to, and analyst for, the treating oncologist.
3.2
Translational Opportunities in the Era of Artificial Intelligence
There has been considerable interest in integrating artificial intelligence (AI) methods into clinical workflows in recent years (155). Most clinical efforts have centered on diagnostic applications of machine learning (ML) in radiology and pathology (16, 156), but beyond imaging, the use of AI for therapeutic decision making has remained challenging (156). The lack of mechanistic interpretability of ML models is considered one of the most critical barriers to clinical translation (155). Further, the ability to generalize models from preclinical datasets to real-world clinical scenarios persists as a major limitation, particularly due to the lack of access to patient-specific data, inherently heterogeneous patient populations, and the dynamically changing nature of the disease (155). Furthermore, deriving novel strategies, such as adaptive therapy, would be difficult for a machine learning algorithm based on retrospective data from continuous therapy alone.
Mechanistic learning (31, 142), wherein mechanistic mathematical models and AI methods complement each other, has emerged as a promising avenue to bridge this translational gap, as similarly noted for other medical applications (24, 148). Mechanistic mathematical models with few parameters succeed at capturing and predicting temporal or spatial dynamics of sparse datasets, while machine learning methods excel at capturing patterns from high-dimensional data. On their own, machine learning models might not be suitable to incorporate explicit biological mechanisms or patient-specific longitudinal dynamics when data availability is sparse (142, 157, 158).
In combined approaches, deep learning models may be used for image analysis, extracting tumor information across multiple scales (e.g., identifying key molecular markers or estimating volumetric tumor information through image segmentation). These results can subsequently inform mechanistic mathematical models (141). Current work shows the potential of using the two methodologies in conjunction (141), for example, to infer patient-specific spatial tumor distributions for radiotherapy planning (159, 160). Machine learning methods are also emerging as a promising avenue to learn underlying mathematical model equations directly from data, as demonstrated, for example, by the SINDy framework (161, 162) or physics-informed neural networks (PINNs) (163). Alternatively, deep reinforcement learning can be used to optimize treatment protocols in complex disease settings, as seen in adaptive therapy (164, 165).
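The idea behind equation-learning methods such as SINDy can be illustrated with a simplified sequentially-thresholded least-squares sketch (not the published implementation; noiseless derivatives are used here for clarity, whereas real applications must estimate them from noisy measurements): sparse regression over a small candidate library recovers the governing terms of logistic growth.

```python
import numpy as np

# Simulated "data": a logistic trajectory dN/dt = r*N*(1 - N/K)
r_true, K = 0.5, 1.0
t = np.linspace(0, 20, 400)
N = K / (1 + (K / 0.05 - 1) * np.exp(-r_true * t))   # exact logistic solution
dNdt = r_true * N * (1 - N / K)                      # noiseless derivatives for clarity

# Candidate library of right-hand-side terms: [1, N, N^2, N^3]
library = np.column_stack([np.ones_like(N), N, N**2, N**3])
coef, *_ = np.linalg.lstsq(library, dNdt, rcond=None)

# Sequentially thresholded least squares: zero out small terms, refit the rest
for _ in range(5):
    small = np.abs(coef) < 0.05
    coef[small] = 0.0
    coef[~small], *_ = np.linalg.lstsq(library[:, ~small], dNdt, rcond=None)

print(np.round(coef, 3))  # ≈ [0, 0.5, -0.5, 0]: dN/dt = 0.5*N - 0.5*N^2 recovered
```

The thresholding step is what enforces parsimony: only the terms actually present in the dynamics survive, yielding an interpretable mechanistic equation rather than a black-box fit.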
4
Barriers to Translation
Despite promising results from current clinical trials and the prospect of integrating novel computational tools with mathematical models, several translational challenges remain. The clinical data available to calibrate models in real time is oftentimes sparse (e.g., tumor burden is typically measured only every few months by imaging), lacking in biological detail (e.g., the fraction of resistant cells in a tumor often cannot be directly measured), and is often measured across very different biological scales (e.g., a mutational analysis versus a tumor biopsy image versus a CT scan). Differences between humans and experimental systems also complicate deriving the response, toxicity, and feasible administration schedules via modeling (53, 54). Integration of clinical data into a model must also account for uncertainty in patient measurements (e.g., imaging variability, biomarker noise, and detection thresholds).
Furthermore, mathematical models developed by industry can suffer from overfitting (166), and validation remains difficult due to a general lack of transparency (64, 140); yet, these models often support novel drug development. There is a need for both data and model sharing, as well as the use of open-source platforms (127, arXiv:2509.13360) to allow for independent analysis (121, 167) of these approaches. This could not only help bridge the gap between academia and industry but also improve model predictions and consequently clinical model robustness and impact. Two additional translational barriers we will discuss further are the lack of clinical data in standardized and accessible formats, and the regulatory barriers that hinder clinical adaptation of both mechanistic and personalized models.
4.1
Data Integration and Standardization
Despite the abundance of retrospective patient data in cancer research, the lack of automated and standardized methods to convert raw clinical records into model-ready datasets persists as a critical bottleneck of model translation (11, 31, 168). Clinical data is often stored in highly variable and unstructured forms, limited by inconsistent date formats, non-numeric lab values, institution-specific terminology, and extended metadata frequently embedded with protected health information. These inconsistencies, coupled with limitations on data sharing due to privacy concerns, hinder collaboration and data reuse across institutions (169). Whilst clinicians have embraced Electronic Medical Records (EMRs), data recording is a secondary priority to patient care and is not typically designed with computational analysis in mind (11, 169). To overcome these limitations, medical centers may implement automated pipelines for data collection, cleaning, and anonymization, based on the FAIR data principles (Findable, Accessible, Interoperable, and Reusable) (168). These principles provide both technical and ethical guidelines for transforming disorganized clinical datasets into standardized, reusable research assets. Standardization frameworks, such as the Observational Medical Outcomes Partnership Common Data Model (OMOP-CDM) (170), are further being adopted into practice and could inform mathematical models to guide clinical decision-making (169).
4.2
Regulatory Barriers - Modeling & Personalization
Regulatory pathways that would allow mathematical and computational models to enter standard clinical workflows are currently limited. As the clinical trial methodology has for decades been based upon the MTD paradigm, drug tolerability remains the primary design constraint (instead of optimal dose efficacy) (6, 8), and trials are generally designed to find a therapeutic regimen that works best, on average, for a selected patient population (171). Within this framework, it remains difficult to integrate strategies derived from mathematical models into standard clinical workflows.
However, the current paradigm is changing. Regulatory agencies, including the U.S. Food and Drug Administration (FDA) and European Medicines Agency (EMA), denote mathematical modeling approaches that support drug research and development as ‘Model-Informed Drug Development (MIDD)’ methods (134, 172), which the FDA expects will become standard within drug development pipelines (134). The aforementioned flaws of the current MTD paradigm have further been recognized by the FDA (6), and it is actively promoting dose optimization via its ‘Project Optimus’ program (6, 8, 173) that requires a mechanistic understanding of drug action. The ‘Fit-for-Purpose’ initiative and the ‘MIDD Paired Meeting Program’ were also established to promote quantitative tools for drug development and to improve communication between modelers and regulatory agencies (134, 174).
4
Barriers to Translation
Despite promising results from current clinical trials and the prospect of integrating novel computational tools with mathematical models, several translational challenges remain. The clinical data available to calibrate models in real time are often sparse (e.g., tumor burden is typically measured only every few months by imaging), lack biological detail (e.g., the fraction of resistant cells in a tumor often cannot be measured directly), and span very different biological scales (e.g., a mutational analysis versus a tumor biopsy image versus a CT scan). Differences between humans and experimental systems further complicate deriving response, toxicity, and feasible administration schedules via modeling (53, 54). Integrating clinical data into a model must also account for uncertainty in patient measurements (e.g., imaging variability, biomarker noise, and detection thresholds).
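To illustrate the sparsity problem, a mechanistic model can still be calibrated to a handful of noisy imaging measurements, though the resulting parameter estimates carry substantial uncertainty. The sketch below is purely illustrative (a generic logistic growth model with hypothetical parameter values, not one drawn from the trials discussed here): it fits a growth rate and carrying capacity to five simulated quarterly scans corrupted by 10% measurement noise.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def logistic_growth(t, v, r, k):
    """dV/dt = r * V * (1 - V/K): logistic tumor volume growth."""
    return r * v * (1 - v / k)

def simulate(params, t_obs, v0):
    """Integrate the model and return predicted volumes at the scan times."""
    r, k = params
    sol = solve_ivp(logistic_growth, (0, t_obs[-1]), [v0],
                    t_eval=t_obs, args=(r, k))
    return sol.y[0]

# Sparse imaging schedule: one scan roughly every 3 months (hypothetical values).
t_obs = np.array([0.0, 90.0, 180.0, 270.0, 360.0])   # days
v_true = simulate((0.02, 50.0), t_obs, 1.0)          # "ground truth" trajectory
rng = np.random.default_rng(0)
v_obs = v_true * (1 + 0.1 * rng.standard_normal(len(t_obs)))  # 10% imaging noise

def residuals(params):
    return simulate(params, t_obs, v_obs[0]) - v_obs

# Bounded least-squares fit of (r, K) to the five noisy observations.
fit = least_squares(residuals, x0=[0.01, 30.0],
                    bounds=([1e-4, 1.0], [1.0, 500.0]))
r_hat, k_hat = fit.x
print(f"estimated r = {r_hat:.3f}, K = {k_hat:.1f}")
```

With only five data points, the point estimates recover the true parameters only approximately; in practice this motivates Bayesian or ensemble ("virtual patient") calibration that propagates the measurement uncertainty into predictions.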
Furthermore, mathematical models developed by industry can suffer from overfitting (166), and validation remains difficult due to a general lack of transparency (64, 140); yet, these models often support novel drug development. There is a need for both data and model sharing, as well as the use of open-source platforms (127, arXiv:2509.13360) to allow for independent analysis (121, 167) of these approaches. This could not only help bridge the gap between academia and industry but also improve model predictions and consequently clinical model robustness and impact. Two additional translational barriers we will discuss further are the lack of clinical data in standardized and accessible formats, and the regulatory barriers that hinder clinical adaptation of both mechanistic and personalized models.
4.1
Data Integration and Standardization
Despite the abundance of retrospective patient data in cancer research, the lack of automated and standardized methods to convert raw clinical records into model-ready datasets remains a critical bottleneck of model translation (11, 31, 168). Clinical data are often stored in highly variable and unstructured forms, limited by inconsistent date formats, non-numeric lab values, institution-specific terminology, and extended metadata frequently embedded with protected health information. These inconsistencies, coupled with limitations on data sharing due to privacy concerns, hinder collaboration and data reuse across institutions (169). While clinicians have embraced electronic medical records (EMRs), data recording is a secondary priority to patient care and is not typically designed with computational analysis in mind (11, 169). To overcome these limitations, medical centers may implement automated pipelines for data collection, cleaning, and anonymization based on the FAIR data principles: Findable, Accessible, Interoperable, and Reusable (168). These principles provide both technical and ethical guidelines for transforming disorganized clinical datasets into standardized, reusable research assets. Standardization frameworks, such as the Observational Medical Outcomes Partnership Common Data Model (OMOP-CDM) (170), are further being adopted into practice and could inform mathematical models to guide clinical decision-making (169).
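A minimal sketch of what one stage of such a pipeline might look like, using invented field names, values, and formats (not a real EMR schema): inconsistent date strings are normalized to ISO 8601, censored or unit-bearing lab values are coerced to numbers, and identifying fields are replaced with pseudonymized identifiers.

```python
from datetime import datetime

# Hypothetical raw EMR rows with inconsistent formats and embedded PHI.
raw_records = [
    {"patient_name": "Jane Doe", "visit_date": "03/15/2021",
     "psa": "4.2 ng/mL", "site": "Clinic A"},
    {"patient_name": "John Roe", "visit_date": "2021-06-01",
     "psa": "<0.1", "site": "clinic_a"},
]

PHI_FIELDS = {"patient_name"}             # identifying fields stripped on export
DATE_FORMATS = ("%m/%d/%Y", "%Y-%m-%d")   # institution-specific formats observed

def parse_date(value):
    """Normalize inconsistent date strings to ISO 8601."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(value, fmt).date().isoformat()
        except ValueError:
            continue
    return None  # leave unparseable dates missing rather than guessing

def parse_lab(value):
    """Coerce lab strings ('4.2 ng/mL', '<0.1') to floats;
    censored values are mapped to the detection limit."""
    token = value.replace("<", "").split()[0]
    try:
        return float(token)
    except ValueError:
        return None

def clean(record, pseudo_id):
    """Emit one model-ready row: pseudonymized, typed, and standardized."""
    return {
        "patient_id": pseudo_id,  # pseudonymized identifier replaces PHI
        "visit_date": parse_date(record["visit_date"]),
        "psa": parse_lab(record["psa"]),
        "site": record["site"].strip().lower().replace(" ", "_"),
    }

model_ready = [clean(r, f"P{i:04d}") for i, r in enumerate(raw_records)]
print(model_ready)
```

Real pipelines map such cleaned rows onto a shared vocabulary (e.g., an OMOP-CDM schema) so that the same calibration code can run across institutions.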
4.2
Regulatory Barriers - Modeling & Personalization
Regulatory pathways that would allow mathematical and computational models to enter standard clinical workflows are currently limited. As the clinical trial methodology has for decades been based upon the MTD paradigm, drug tolerability remains the primary design constraint (instead of optimal dose efficacy) (6, 8), and trials are generally designed to find a therapeutic regimen that works best, on average, for a selected patient population (171). Within this framework, it remains difficult to integrate strategies derived from mathematical models into standard clinical workflows.
However, the current paradigm is changing. Regulatory agencies, including the U.S. Food and Drug Administration (FDA) and European Medicines Agency (EMA), denote mathematical modeling approaches that support drug research and development as ‘Model-Informed Drug Development (MIDD)’ methods (134, 172), which the FDA expects will become standard within drug development pipelines (134). The aforementioned flaws of the current MTD paradigm have further been recognized by the FDA (6), and it is actively promoting dose optimization via its ‘Project Optimus’ program (6, 8, 173) that requires a mechanistic understanding of drug action. The ‘Fit-for-Purpose’ initiative and the ‘MIDD Paired Meeting Program’ were also established to promote quantitative tools for drug development and to improve communication between modelers and regulatory agencies (134, 174).
Due to the growing concerns and recognized limitations of animal testing in cancer (7, 175), the FDA has further highlighted the critical role that mathematical modeling will play in improving the translational capabilities of preclinical studies while reducing animal use (175). The average rate of translation from animal model to clinical cancer trial is below 8% (175, 176), largely due to unexpected safety or efficacy issues (175). Major challenges are also associated with testing immunotherapeutic strategies due to interspecies differences (7, 140, 175). In silico modeling and novel in vitro human-derived systems (such as organoids (177) and microphysiological systems) have thus been recognized as an opportunity to improve the predictive relevance of preclinical drug tests while saving costs (175).
The field will also need to address the notable difficulty of running clinical trials designed for treatment personalization under traditional trial regulations. The traditional approach continues to perform dose escalation in phase I trials, before evaluating safety and efficacy during phase II trials, and conducting randomized comparisons against the standard of care in phase III trials. However, this paradigm is slowly changing from a regulatory standpoint; adaptive (178) and Bayesian (179, 180) trial designs are becoming increasingly common, allowing for dynamic changes to enrollment, treatment, and endpoint analysis during trial execution, as compared to standard trials where the protocol is fixed. The long-running I-SPY 2 adaptive trial in breast cancer (181) exemplifies an innovative trial design that aims to accelerate the discovery of successful drugs while limiting patient exposure to inferior arms. Innovations include adaptive randomization, where the probability of enrollment in each arm is a function of the current predicted success rate; rapid drop-out and success rules, to promote the most promising therapies and quickly discontinue futile ones; and early signal detection based on early biomarkers (181, 182). Overall, the success of the I-SPY 2 study has demonstrated the feasibility of implementing complex trial protocols that would be needed to test model-informed trials. Although some longitudinal MRI measures were included in this study (181, 182), they were handled statistically as covariates; explicit mechanistic modeling of longitudinal data, using mathematical tools such as virtual cohorts, could provide a deeper layer of dynamic analysis for integration in future trial designs.
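Adaptive randomization of the kind described above can be sketched with a simple Beta-Bernoulli allocation rule (Thompson sampling). This is a deliberate simplification for illustration, not the actual I-SPY 2 algorithm, and the response rates below are invented: each arm's posterior is updated as outcomes accrue, and enrollment probability shifts toward the arm with the higher predicted success rate.

```python
import random

# Two-arm sketch; I-SPY 2 itself runs many arms with biomarker subtypes.
arms = {"control": {"successes": 0, "failures": 0},
        "experimental": {"successes": 0, "failures": 0}}

def assign_arm(rng):
    """Thompson sampling: draw one sample from each arm's Beta posterior
    (uniform Beta(1, 1) prior) and enroll the patient in the arm with
    the larger draw."""
    draws = {name: rng.betavariate(a["successes"] + 1, a["failures"] + 1)
             for name, a in arms.items()}
    return max(draws, key=draws.get)

def record_outcome(arm, responded):
    key = "successes" if responded else "failures"
    arms[arm][key] += 1

# Simulate enrollment; the experimental arm has a (hypothetical) higher
# true response rate, so allocation should drift toward it over time.
true_rates = {"control": 0.3, "experimental": 0.6}
rng = random.Random(42)
counts = {"control": 0, "experimental": 0}
for _ in range(200):
    arm = assign_arm(rng)
    counts[arm] += 1
    record_outcome(arm, rng.random() < true_rates[arm])

print(counts)  # enrollment concentrates in the better-performing arm
```

Drop-out and graduation rules can be layered on top of the same posteriors, e.g., discontinuing an arm whose posterior probability of superiority falls below a threshold.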
Nonetheless, overcoming any regulatory barriers will require collaboration between stakeholders of the translational process to bridge the gap between mathematical models and routine clinical workflows. From the scientific perspective, the integration of mathematical and computational tools must include input from healthcare providers, experimentalists, and modelers, as seen during the ‘Integrated Mathematical Oncology’ (IMO) workshop that has facilitated communication between these parties for more than 10 years (183). However, the ongoing discussion also needs to include industry partners, regulators, software developers, and lawmakers, who are crucial stakeholders within the larger translational process (113, 155).
5
Conclusion
Mathematical models in the clinic have been employed across a variety of cancers, treatment modalities, and scales (ranging from small-scale molecular pharmacokinetic dynamics to large-scale tumor and microenvironmental influences). Historically, simpler models focused on understanding dose-response dynamics. Skipper’s model assumed that the dose effect is log-kill and additive across treatment regimens, implying that the reduction of tumor size under treatment is greatest when the dose is compressed into the shortest tolerable time window. This assumption formed the basis of many subsequent dose-response and tumor growth models, including the linear-quadratic and Norton-Simon models, that have shaped the treatment paradigm across different cancers and treatment modalities for many decades.
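The log-kill assumption and its dose-densification consequence can be made concrete with a toy calculation (all parameter values are illustrative, not clinical): each dose removes a fixed log fraction of cells, the tumor regrows exponentially between doses, and so delivering the same number of doses over a shorter interval leaves less time for regrowth and a smaller final burden.

```python
import math

def tumor_after_cycles(n0, growth_rate, log_kill, n_doses, interval_days):
    """Skipper log-kill schedule: each dose removes a fixed log fraction
    of cells; surviving cells regrow exponentially until the next dose."""
    n = n0
    for _ in range(n_doses):
        n *= 10 ** (-log_kill)                      # e.g. log_kill=2 -> 99% kill
        n *= math.exp(growth_rate * interval_days)  # regrowth before next dose
    return n

# Illustrative values: 1e9 cells, 5%/day regrowth, 2-log kill per dose, 6 doses.
n0, growth, kill, doses = 1e9, 0.05, 2.0, 6
standard = tumor_after_cycles(n0, growth, kill, doses, interval_days=21)
dose_dense = tumor_after_cycles(n0, growth, kill, doses, interval_days=14)
print(f"standard q21d: {standard:.2e} cells; dose-dense q14d: {dose_dense:.2e} cells")
```

Under these assumptions the every-14-day schedule ends with a strictly smaller cell count than the every-21-day schedule, which is the Norton-Simon rationale for dose-dense regimens; the review's point is precisely that real tumors violate these assumptions (heterogeneity, resistance, immune interactions), motivating the richer models discussed above.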
As more effective treatment interventions were developed, the range of curable cancers expanded, but metastatic disease has remained largely incurable (23). In challenging cases, it has now become apparent that treatment strategies need to reflect the complex and dynamic nature of cancer that encompasses multiple scales (reflecting tumor heterogeneity, immune interactions, evolution of resistance, etc.), beyond the reductionist view of cancer as a disease of the genes. As these complex dynamics are better understood and clinical data become more readily available, treatment strategies are being tailored towards individual patient dynamics, improving outcomes across different therapeutic modalities and strategies.
However, incorporating complex models for personalized treatment regimens will require access to more detailed data. Increased model complexity will need higher resolution experimental and aggregated clinical data for calibration, while personalized treatment regimens necessitate individual patient-specific clinical data collected in standardized and accessible formats. While virtual patient methodologies and machine learning methods could help support the integration of such data, adapting model-driven treatment strategies into routine clinical workflows will require overcoming crucial translational barriers and regulatory limitations.
Overall, mathematical models in oncology integrate experimental and clinical evidence to provide mechanistic insights into drug and disease dynamics, and infer valuable treatment strategies both on a cohort and individual level. They have played a key role in shaping current treatment strategies of chemotherapy and radiotherapy, and recent trial results are showing the immense potential of mathematical modeling to improve clinical workflows — bridging the divide between the current MTD paradigm and the complex, multi-scale, and dynamic nature of cancer.