Assessing the assessment: a scoping review of the mode of patient-reported outcome assessment in solid cancer clinical trials.
APA
Hubel N, Krepper D, et al. (2026). Assessing the assessment: a scoping review of the mode of patient-reported outcome assessment in solid cancer clinical trials. Quality of Life Research, 35(4). https://doi.org/10.1007/s11136-026-04179-y
MLA
Hubel N, et al. "Assessing the assessment: a scoping review of the mode of patient-reported outcome assessment in solid cancer clinical trials." Quality of Life Research, vol. 35, no. 4, 2026.
PMID
41764668
Abstract
[PURPOSE] Transparent reporting of how patient-reported outcomes (PROs) are collected is essential to ensure reproducible and interpretable data. Different modes of assessment may affect data quality and feasibility, yet their use in cancer trials is poorly described. Electronic PRO (ePRO) assessment may improve data quality and enable active review, but it is unclear how often different modes of assessment like ePRO assessment are used and in which trials.
[METHODS] We systematically searched PubMed for randomized controlled trials (published 2019-2023) that used cancer-specific PRO measures in patients with the six most common solid cancers. Trial characteristics, PRO reporting practices, and evidence of active review of PRO data were summarized descriptively. Univariate logistic regression was used to examine predictors of (1) reporting the mode of PRO assessment and (2) use of ePROs exclusively.
[RESULTS] Of 9331 references screened, 296 trials were included in the analysis. Of these, 135 (45.6%) reported the mode of PRO assessment: paper (51.9%), ePRO (20.7%), mixed modes (24.4%). Trials were more likely to report the mode of assessment if they were industry-sponsored (OR = 2.00, 95% CI [1.24, 3.25], p = .005) or had larger sample sizes (OR = 1.11, 95% CI [1.06, 1.18], p < .001). Exclusive ePRO assessment was more common in recently registered trials (OR = 1.41, p < .001) and in industry-sponsored trials (OR = 8.38, p < .001). Active, in-stream review of PRO results was reported in 2.0% of trials.
[CONCLUSION] Despite clear guidelines, reporting of the mode of PRO assessment remains inadequate, and active review of PRO data is uncommon. Strengthening transparency and using PROs more actively within trials could enhance patient-centered cancer research.
Introduction
Patient-reported outcomes (PROs) are playing an increasingly important role in cancer clinical trials. As self-reported reflections of patients’ perspectives, PROs capture vital information on symptoms, quality of life, functional status, and the burden of treatment, factors that are essential to understanding the full impact of cancer therapies beyond traditional clinical endpoints [1, 2]. Regulatory bodies such as the U.S. Food and Drug Administration (FDA) [3] and the European Medicines Agency (EMA) [4] have recognized this growing importance, incorporating PRO data into decision-making processes for drug approval and labeling claims [3–6]. Future steps were further concretized in two recent joint publications by various interest groups advocating for greater utilization of PROs in the regulatory environment [5, 6].
Given this relevance, it is critical to ensure that PRO-based endpoints are held to high methodological and reporting standards. These outcomes must be meaningful, replicable, and assessed using clearly described procedures. To support such rigor, specific guidelines have been developed, most notably the SPIRIT-PRO extension for trial protocols and the CONSORT-PRO extension for trial reporting [7, 8]. Both emphasize the importance of specifying how, when, and where PRO data are collected, which are critical to reproducing methods and evaluating PRO data quality. While these aspects are often overlooked, empirical research has shown that the timing and setting of PRO assessments can meaningfully influence the results [9, 10]. Specifically, Giesinger et al. [9] observed significant differences in perceived treatment burden between assessments conducted on the day of chemotherapy admission versus assessments conducted one week later at home, whereas Shiroiwa et al. [10] identified systematic differences between electronic remote and on-site paper-based assessments. Such findings underscore the rationale for the detailed recommendations made in the SPIRIT-PRO and CONSORT-PRO guidelines.
Further, the mode by which PRO data are collected can vary. Historically, paper-based questionnaires were the standard, but over time electronic methods have been introduced, using provisioned devices, web-based solutions, or bring-your-own-device (BYOD) approaches. A growing body of evidence, summarized in two meta-analyses, supports the comparability of different PRO data collection modes [11, 12], provided that best practices for implementation and instrument migration are followed [13]. The authors stress the consistently demonstrated equivalence of validity of electronically administered measures [12] as well as the high cross-mode retest reliability [11]. Regulatory guidance, including from the FDA, accepts multiple modes of administration as long as comparability and usability are demonstrated [14]. The most recent recommendations from an International Society for Pharmacoeconomics and Outcomes Research (ISPOR) task force provide clear criteria for determining when electronically administered PROs can be considered comparable to their paper counterparts [15]. Building on this, several sources suggest potential advantages of electronic PRO (ePRO) systems over paper-based assessment, including improved data quality, such as fewer missing or invalid responses [16]. Reviews and commentaries further note that ePRO platforms allow real-time monitoring of completion and the use of automated reminders, which may support more timely and complete data capture [17, 18]. The presence of an electronic audit trail also facilitates traceable, time-stamped records and system validation procedures that support data integrity and regulatory compliance [19]. In addition, diary-based studies indicate that electronic data capture may be associated with higher patient compliance, particularly for very frequent (e.g., daily) assessments [20].
Organizations such as the Center for Medical Technology Policy [21] and regulatory bodies, including the FDA [14] and EMA [4, 22], have acknowledged the benefits of using electronic methods for PRO data collection. However, challenges remain, including the risk of sampling bias [16, 23] and limited digital literacy [23, 24] in certain populations, usability issues [23], as well as uncertainties about how ePRO systems are integrated into routine trial workflows [17]. Systematic evidence on how frequently different modes of PRO assessment are used in clinical trials, whether paper, electronic, or hybrid, remains limited. Moreover, there is little systematic evidence about how different trial characteristics like trial size, blinding, or location of PRO data assessment are associated with the mode of PRO assessment.
Finally, ePRO assessment theoretically allows for real-time monitoring of patient-reported symptoms and proactive clinical management [25]. Based on literature highlighting the benefits of using PROs to inform individual patients’ care or to improve clinical teams’ awareness of patients’ health status [26–28], it has been suggested to also use PROs collected as part of clinical trials for patients’ immediate management or for trial documentation purposes (e.g., to inform adverse event documentation) [25]. It remains unclear, however, how often such active, in-stream review is reported in protocols or used in practice. Recent evidence from ovarian cancer studies suggests that active review of PRO results is still uncommon in cancer trials [29].
This scoping review aims to examine how patient-reported outcome measures (PROMs) are collected and reported in randomized controlled trials involving patients with solid cancers. Specifically, we describe how and where PROs are assessed, and how clearly these methods are reported following relevant SPIRIT-PRO [8] and CONSORT-PRO [7] reporting items. We further explore whether certain trial characteristics, such as sponsorship, sample size, or phase, are associated with more transparent reporting or the use of electronic assessment, as these factors may influence the feasibility and implementation of different modes. Finally, we investigate whether PRO data were actively reviewed by trial personnel during the course of the trial.
Methods
The review was reported according to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) 2020 checklist [30] (see Supplementary Material 1).
A systematic search of MEDLINE (PubMed interface) was conducted to identify RCTs using PROs as an endpoint in the six most prevalent solid tumor types (lung, breast, prostate, colorectal, bladder, gynecological) [31], published between January 2019 and November 2023. Only trials investigating biomedical interventions and utilizing PRO instruments developed by either the European Organisation for Research and Treatment of Cancer (EORTC) or the Functional Assessment of Chronic Illness Therapy (FACIT) measurement system were included. We focused on EORTC or FACIT questionnaires, as these are by far the most used tools in cancer clinical trials [32]. A comprehensive description of the eligibility criteria and the full search strategy is provided in the published protocol [33].
A pool of six reviewers participated in the screening and data extraction process. All steps were executed in DistillerSR [34], which was used to coordinate independent reviews, track discrepancies, and document resolutions throughout the process. Two reviewers independently screened abstracts and full-text articles to determine eligibility. Discrepancies were first resolved through discussion, and if consensus could not be reached, a third reviewer was consulted to reach a final decision. Following the selection process, trials were matched using their registration number or study acronym, as data extraction was conducted at the trial level rather than per individual publication. This approach enabled the comprehensive inclusion of information from associated publications. Additionally, trial protocols were included if available. For data charting, reviewer pairs independently extracted information for each included study within DistillerSR to ensure accuracy and consistency. Any disagreements were discussed within the pair and resolved by consensus, with final decisions documented in the software.
Extracted variables included key trial characteristics such as industry sponsorship, trial organization involvement (not necessarily sponsorship), year of first trial registration, blinding, trial phase, disease stage, type of treatment evaluated, control arm design, and the sample size of the intention-to-treat (ITT) population. With respect to the PRO endpoints, we documented whether PROs were designated as pre-defined trial endpoints (either primary, secondary, or exploratory). The subsequent part of the data extraction form was informed by relevant EQUATOR guidelines (i.e., SPIRIT [35] and the SPIRIT-PRO extension [8] for protocols, CONSORT [36] and the CONSORT-PRO extension [7] for published trials). Information on the PRO data management and assessment encompassed the mode(s) of assessment (specifying which modes were used and, if multiple modes were used, whether evidence for comparability was cited [15]), the assessment setting (field-based vs. site-based assessment), and whether PRO data were actively reviewed by trial personnel or healthcare professionals during the trial (e.g., a site nurse or a doctor reviewed individual patients’ PRO scores).
Statistical analysis
Trial characteristics were summarized using descriptive statistics. Categorical variables are presented as frequencies with corresponding percentages, while continuous variables are reported as means with standard deviations.
We conducted univariate logistic regression analyses to examine trial characteristics as predictors of two binary outcomes: (1) reporting of the PRO mode of assessment, where each covariate was individually tested to assess whether it predicted that a trial disclosed its mode of PRO assessment (reported vs. not reported), and (2) exclusive electronic PRO assessment, where models identified factors associated with trials assessing PROs solely electronically, compared with all other modes (e.g., hybrid or paper-based assessment).
Predictors included in the univariate analyses were: date of trial registration, disease stage, cancer site, industry sponsorship, availability of a study protocol, sample size, trial organization involvement, trial phase, and whether the PRO endpoint was defined as primary, secondary, or exploratory (including not defined). The selection of predictor variables was guided by our published protocol [33] and by theoretical considerations related to trial design and resource availability. For instance, industry sponsorship may facilitate electronic data capture through greater infrastructure and regulatory expectations, whereas smaller or academic trials may rely on paper-based methods due to resource constraints. Similarly, larger and later-phase trials may have greater operational capacity and standardized processes supporting electronic data collection. Given the limited empirical evidence on these associations, our analyses were exploratory and aimed to identify potential patterns for future research rather than to establish causal relationships. Model coefficients are expressed as odds ratios (ORs), each with 95% CIs and an α level of .05. Calculations were done using R version 4.3.1 [37].
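For a single binary predictor, the OR from a univariate logistic regression equals the cross-product ratio of the corresponding 2×2 table, so the reported ORs with Wald 95% CIs can be sketched with standard-library code alone. The counts below are hypothetical (chosen only to illustrate an OR near 2.00, as for industry sponsorship), not extracted from the review:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Wald 95% CI from a 2x2 table.

    a: exposed with outcome,   b: exposed without outcome,
    c: unexposed with outcome, d: unexposed without outcome.
    """
    or_ = (a * d) / (b * c)                      # cross-product odds ratio
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)        # lower CI bound
    hi = math.exp(math.log(or_) + z * se)        # upper CI bound
    return or_, lo, hi

# Hypothetical counts: industry-sponsored trials reporting the mode vs. not
or_, lo, hi = odds_ratio_ci(60, 46, 75, 115)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")  # OR = 2.00, 95% CI [1.24, 3.24]
```

For continuous predictors such as sample size, the same interpretation applies per unit of the predictor (e.g., an OR of 1.11 per 100 participants), which the full regression fit in R provides directly.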
Results
Our initial literature search yielded 9331 references. After title and abstract screening for eligibility, we included 1708 publications in the full-text review (Fig. 1). The resulting 840 articles were matched at the trial level, yielding 698 trials. We excluded trials without results from EORTC (201/698, 28.8% of all trials using any PRO endpoint) and FACIT (105/698, 15.0%) PROMs. A final set of 296 trials was included in the analysis. Figure 1 depicts the PRISMA-ScR flowchart.
Trial characteristics and reporting of PRO mode of assessment
Trials were registered between 2001 and 2022 (median: 2014), with the last publications ranging from 2019 to 2023 (Table 1). Breast cancer was the most common cancer type (n = 76, 25.7%), and most studies were Phase III trials (n = 182, 61.5%). Industry sponsorship was identified in 35.8% of trials (n = 106), while the remaining trials were either academically sponsored or did not specify their sponsor. Separately, 29.7% of trials (n = 88) involved a trial organization in the conduct or coordination of the study. The most frequently involved organizations were the National Cancer Institute (n = 33, 11.1%) and NRG Oncology (n = 8, 2.7%). PROs were predominantly used as secondary endpoints (n = 220, 74.6%).
Assessment location was reported in 153/296 trials (51.7%): field-based only (6/296, 2.0%), site-based only (91/296, 30.7%), or both (56/296, 18.9%).
A total of 135/296 (45.6%) trials reported the mode of PROM assessment (Table 2). Specifically, 67/296 (22.6%) reported it only in the protocol, 36/296 (12.2%) only in the publication, and 32/296 (10.8%) in both publication and protocol. Out of the 135 trials reporting the mode of assessment, 70/135 (51.9%) used paper only, and 28/135 (20.7%) used electronic PRO assessment only. Among trials mixing different modes of assessment (33/135), 13/135 (9.6%) used ePRO and paper, 19/135 (14.1%) used paper and non-automated telephone interviews, and one trial (0.7%) mixed paper and interviewer administration. Evidence for comparability between mixed modes of assessment was provided in just one of the 33 trials (3%) using multiple modes.
When PROs were assessed electronically, they were primarily assessed on a provisioned device (22/41, 53.7%) or via an unspecified modality (12/41, 29.3%). The complete table of evidence is given in Supplementary Materials 2.
Active in-stream review
A total of six trials (2.0%) reported active review of PRO data by investigators or site staff, as specified in their protocols (Table S1). In five trials, this review was limited to identifying and documenting potential adverse events. One trial instructed treating clinicians to review PRO responses after toxicity ratings to identify symptoms requiring clinical attention and initiate supportive care if necessary. Among the six trials, two used ePRO, two used paper-based assessment, and two used a mixed approach.
Trial characteristics and mode of assessment
Univariate logistic regression analyses were conducted to examine predictors of whether a trial reported the PRO mode of assessment (Table 3). The odds of reporting the mode were 2 times higher when the trial was sponsored by industry (OR = 2.00, 95% CI [1.24, 3.25], p = .005). When a protocol was available, the odds of reporting the mode were 9.49 times higher (OR = 9.49, 95% CI [5.57, 16.66], p < .001). For every additional 100 participants in the ITT sample, the odds of reporting the mode of assessment increased by 11% (OR = 1.11, 95% CI [1.06, 1.18], p < .001). Finally, compared to phase II trials, phase III trials were associated with 2.21 times higher odds of reporting the mode of assessment (OR = 2.21, 95% CI [1.22, 4.07], p = .009).
Univariate logistic regression analyses were conducted to examine predictors of whether a trial used exclusively ePRO assessment (Table 4). For each additional year of trial registration, the odds of using ePRO were 1.41 times higher (OR = 1.41, 95% CI [1.22, 1.68], p < .001); ePRO use rose from zero in trials registered before 2009 to a maximum of 4/10 (40.0%) trials registered in 2018. Trials involving patients with advanced cancer were associated with 3.72 times higher odds of using ePRO (OR = 3.72, 95% CI [1.72, 8.45], p = .001). The odds of using ePRO were 8.38 times higher in trials sponsored by industry (OR = 8.38, 95% CI [3.67, 20.83], p < .001), and 3.87 times higher when a protocol was available (OR = 3.87, 95% CI [1.24, 17.09], p = .037). Notably, only one trial involving a trial organization used ePRO (no OR calculated). Finally, phase III trials were associated with 12.04 times higher odds of using ePRO compared to phase II trials (OR = 12.04, 95% CI [2.40, 219.27], p = .017).
Our initial literature search yielded 9331 references. After title and abstract screening for eligibility, we included 1708 publications in the full-text review (Fig. 1). The resulting 840 articles were matched at the trial level, yielding 698 trials. We excluded trials without results from EORTC (201/698, 28.8% of all trials using any PRO endpoint) and FACIT (105/698, 15.0%) PROMs. A final set of 296 trials was included in the analysis. Figure 1 depicts the PRISMA-ScR flowchart.
Trial characteristics and reporting of PRO mode of assessment
Trials were registered between 2001 and 2022 (median: 2014), with the last publications ranging from 2019 to 2023 (Table 1). Breast cancer was the most common cancer type (n = 76, 25.7%), and most studies were Phase III trials (n = 182, 61.5%). Industry sponsorship was identified in 35.8% of trials (n = 106), while the remaining trials were either academically sponsored or did not specify their sponsor. Separately, 29.7% of trials (n = 88) involved a trial organization in the conduct or coordination of the study. The most frequently involved organizations were the National Cancer Institute (n = 33, 11.1%) and NRG Oncology (n = 8, 2.7%). PROs were predominantly used as secondary endpoints (n = 220, 74.6%).
Assessment location was reported in 153/296 trials (51.7%): assessments took place in the field only (6/296, 2.0%), at the study site only (91/296, 30.7%), or in both settings (56/296, 18.9%).
A total of 135/296 (45.6%) trials reported the mode of PROM assessment (Table 2). Specifically, 67/296 (22.6%) reported it only in the protocol, 36/296 (12.2%) only in the publication, and 32/296 (10.8%) in both publication and protocol. Out of the 135 trials reporting the mode of assessment, 70/135 (51.9%) used paper only, and 28/135 (20.7%) used electronic PRO assessment only. Among trials mixing different modes of assessment (33/135), 13/135 (9.6%) used ePRO and paper, 19/135 (14.1%) used paper and non-automated telephone interviews, and one trial (0.7%) mixed paper and interviewer administration. Evidence for comparability between mixed modes of assessment was provided in just one of the 33 trials (3%) using multiple modes.
When PROs were assessed electronically, they were primarily assessed on a provisioned device (22/41, 53.7%) or via an unspecified modality (12/41, 29.3%). The complete table of evidence is given in Supplementary Materials 2.
Active in-stream review
A total of six trials (2.0%) reported active review of PRO data by investigators or site staff, as specified in their protocols (Table S1). In five trials, this review was limited to identifying and documenting potential adverse events. One trial instructed treating clinicians to review PRO responses after toxicity ratings to identify symptoms requiring clinical attention and initiate supportive care if necessary. Among the six trials, two used ePRO, two used paper-based assessment, and two used a mixed approach.
Trial characteristics and mode of assessment
Univariate logistic regression analyses were conducted to examine predictors of whether a trial reported the PRO mode of assessment (Table 3). The odds of reporting the mode were twice as high when the trial was sponsored by industry (OR = 2.00, 95% CI [1.24, 3.25], p = .005). When a protocol was available, the odds of reporting the mode were 9.49 times higher (OR = 9.49, 95% CI [5.57, 16.66], p < .001). For every additional 100 participants in the ITT sample, the odds of reporting the mode of assessment increased by 11% (OR = 1.11, 95% CI [1.06, 1.18], p < .001). Finally, compared to phase II trials, phase III trials were associated with 2.21 times higher odds of reporting the mode of assessment (OR = 2.21, 95% CI [1.22, 4.07], p = .009).
Univariate logistic regression analyses were conducted to examine predictors of whether a trial used exclusively ePRO assessment (Table 4). For each additional year of trial registration, the odds of using ePRO were 1.41 times higher (OR = 1.41, 95% CI [1.22, 1.68], p < .001); exclusive ePRO use rose from zero among trials registered before 2009 to a peak of 4/10 (40.0%) trials registered in 2018. Trials involving patients with advanced cancer were associated with 3.72 times higher odds of using ePRO (OR = 3.72, 95% CI [1.72, 8.45], p = .001). The odds of using ePRO were 8.38 times higher in trials sponsored by industry (OR = 8.38, 95% CI [3.67, 20.83], p < .001), and 3.87 times higher when a protocol was available (OR = 3.87, 95% CI [1.24, 17.09], p = .037). Notably, only one trial involving a trial organisation used ePRO (no OR calculated). Finally, phase III trials were associated with 12.04 times higher odds of using ePRO compared to phase II trials (OR = 12.04, 95% CI [2.40, 219.27], p = .017).
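The interpretation of these odds ratios (e.g., an OR of 1.11 per 100 ITT participants corresponding to 11% higher odds) follows directly from the definition of an odds ratio. A minimal sketch of the arithmetic, reusing the OR values reported in Tables 3 and 4 (this is illustrative only, not the review's analysis code):

```python
# Illustrative odds-ratio arithmetic for the univariate results above.
# OR values are taken from Tables 3 and 4; the functions are a sketch,
# not part of the review's analysis pipeline.

def pct_change_in_odds(odds_ratio: float) -> float:
    """Percent change in odds implied by an odds ratio."""
    return (odds_ratio - 1.0) * 100.0

# OR = 1.11 per additional 100 ITT participants -> 11% higher odds
assert round(pct_change_in_odds(1.11)) == 11

# OR = 3.72 for advanced cancer -> 272% higher odds (i.e., 3.72x the odds)
assert round(pct_change_in_odds(3.72)) == 272

# A per-unit OR compounds multiplicatively on the odds scale: 500 extra
# participants correspond to five increments of 100, i.e., 1.11 ** 5.
or_per_100 = 1.11
or_per_500 = or_per_100 ** 5
print(f"OR per 500 participants: {or_per_500:.2f}")  # ~1.69
```

Note that "X times higher odds" in the text is used in the OR = X sense, so an OR of 3.72 corresponds to 272% higher odds, not 372%.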
Discussion
Our review revealed that fewer than half of all trials reported the mode of PRO assessment, with paper-based methods remaining the most common and exclusively electronic assessment remaining rare. Reporting was more likely in industry-sponsored trials, trials with available protocols, larger trials, and phase III trials. The use of exclusively electronic PRO methods increased over time and was more common in industry-sponsored trials and those involving patients with advanced cancer. Active review of PRO results during the trial was almost never reported.
Insufficient reporting of the mode of assessment of patient-reported outcomes
Despite clear guidance from SPIRIT-PRO and CONSORT-PRO, fewer than half of the trials (45.6%) in our review reported the mode of PRO assessment, highlighting a persistent and concerning gap in reporting standards. Although this represents an improvement over earlier periods (2007–2011: 16% [38]), our review covered trials published between 2019 and 2023, well after the 2013 introduction of CONSORT-PRO, so substantially better compliance could have been expected. The lack of progress is echoed by Efficace et al. [39], who found that the mode of assessment was reported in around 22% of publications. Compared to that review, we found higher rates of reporting the mode of assessment, likely because we also included available protocols in our review. When considering only information from publications, we found a similar rate of 23% of trials reporting the mode. This highlights the lack of progress in reporting and reinforces the importance of publicly sharing protocols [40, 41].
Additionally, we observed that evidence of comparability between mixed modes of administration was reported in only one of 33 trials using multiple modes, underlining a broader issue of methodological transparency. Although it is established that paper and electronic PRO assessments are comparable when appropriate migration procedures are followed [13] or evidence for comparability is cited [15], we find that these assumptions are rarely supported by trial-level documentation. Reporting the mode of assessment is essential for transparency, reproducibility, and the interpretation of PRO results. This is especially important when modes differ in characteristics likely to influence responses, such as self-administered questionnaires versus interviewer-based assessments. As CONSORT-PRO notes, patients may respond differently when completing measures in private versus in a telephone interview [7]. Moreover, such information requires minimal space in the main text and could alternatively be provided in supplementary materials, making word-count limits a weak justification for its omission.
Use and reporting of electronic patient-reported outcome assessment
The underreporting of the PRO mode of assessment also limits understanding of how ePRO data collection methods are used in clinical trials. While there has been long-standing enthusiasm for ePROs [18, 42] and our findings show increasing adoption, particularly in recent and industry-sponsored trials, overall use of ePROs remains modest in our trial sample. Paper-based methods likely remain the default, especially in non-industry-sponsored trials, suggesting that many trials not reporting the mode used paper. As a result, the observed 30.3% ePRO use (alone or in mixed modes) may overestimate actual adoption.
Electronic data collection was more frequent in industry-sponsored trials, possibly due to greater resources and regulatory expectations around auditability. However, despite the growing feasibility of BYOD strategies, their adoption remains limited, even when considering that most trials in our review were planned several years ago. A lack of public case studies where BYOD data supported regulatory approvals may contribute to sponsor hesitancy in adopting or reporting BYOD approaches [43] and therefore, be a byproduct of the inadequate mode of PRO assessment reporting we observed in our review.
Moreover, while field-based assessments are possible, most trials in our review still relied on site-based data collection. We found little evidence of decentralized strategies or participant-centered flexibility in assessment modes. Trials rarely implemented multiple modes of administration, and even in mixed-mode studies, variation was mainly due to pragmatic follow-up via mail or telephone rather than intentional patient choice. This reinforces concerns raised in recent literature that decentralized trial methods and participant-tailored approaches are still the exception rather than the rule [44].
Active review of patient-reported outcome data during trials
We found almost no evidence that PRO data were actively reviewed for trial monitoring or for clinical care during the course of the trial. Among trials that actively reviewed PROs during the trial, they were primarily used to identify and document adverse events. While trial-level instructions for this may exist outside the main protocol, especially for industry-sponsored trials, our review found little indication of such supplementary guidance. A 2018 regulatory perspectives paper authored by representatives from major U.S. regulatory and oversight bodies acknowledged the debate about whether PRO data should be reviewed during a trial [45]. The authors emphasized that PROs are not considered safety data because they lack clinical interpretation. They noted that PROs could be reviewed during a trial, for example to support adverse event ratings, but that such review is not required. This cautious stance, recognizing the possibility of in-trial review without mandating it, may help explain why active review has so far gained little traction in oncology trials.
Electronic systems make real-time scoring and review feasible, yet this potential remains largely untapped. Active monitoring could improve both patient care and trial data quality [25, 46]. Some trialists might worry that such active review could introduce bias, but ignoring available PRO data may also introduce inconsistency. Unstructured review of questionnaires likely occurs at individual sites, creating undocumented variability across centers [47]. Several barriers may explain the limited use of active PRO review, although these remain largely speculative due to limited empirical evidence (see [48] for a more in-depth discussion). Implementing such processes would require a cultural and operational shift from current trial practices, a change that might be met with resistance in the highly regulated context of clinical research. Additional resources and infrastructure may be needed to integrate real-time PRO monitoring into existing systems, increasing cost and logistical complexity. Moreover, the expected benefits have not yet been clearly quantified, and PROs are still frequently viewed as secondary rather than core trial data, potentially reducing the incentive for active use. Whether PRO data are reviewed in real time also reflects a broader trial design question. Active clinical use of PROs aligns more with pragmatic trials focused on real-world benefit, especially as PRO monitoring systems are increasingly used in routine clinical practice [49]. Passive electronic capture without clinical response fits an explanatory approach prioritizing internal validity. These trial-level decisions have ethical and methodological implications. If patients contribute data, there is a responsibility to use it meaningfully, ideally not only in publications but also to support care when appropriate.
Limitations
First, our search relied on trial publications mentioning PROs, which may have excluded trials with PROs listed only in protocols. However, such endpoints are typically reported in publications, so most relevant trials were likely captured. Second, we limited our search to a single database and to studies published between 2019 and 2023 to manage workload, which may have led to some eligible trials being missed. Still, our aim was to reflect the most recent trial reporting practices.
Third, to enhance methodological consistency and feasibility, we confined our analysis to trials employing the two most extensively validated and widely implemented measurement systems in oncology (EORTC and FACIT). While this decision may have excluded some otherwise eligible trials utilizing other validated instruments, the absence of significant differences in reporting quality between EORTC- and FACIT-based studies supports the generalizability of our conclusions (data not shown). Nonetheless, this restriction may have led to a modest overestimation of reporting quality, as non-validated or self-developed instruments might be associated with less rigorous reporting practices.
Fourth, our regression analysis was exploratory in nature. We used univariable models to examine associations between trial characteristics and both the reporting of the mode of PRO assessment and the use of ePRO. While some of these characteristics may be correlated, our aim was to highlight potential patterns rather than to establish causal or independent effects. Hence, these associations should be interpreted cautiously, and future research is needed.
Finally, restricting our regression analysis to trials that reported some information on their PRO mode of administration may introduce sampling bias by favoring studies with more complete reporting. However, the direction of this potential bias remains uncertain, as it could, for instance, either overestimate or underestimate the use of electronic data capture.
Conclusion
This review highlights persistent gaps in the reporting of PRO data collection methods in cancer trials, despite long-standing guidelines. Transparent reporting of the mode of PRO assessment is essential for reproducibility, data interpretation, and systematic evidence generation. Despite much enthusiasm for ePRO data collection and growing use over time, it remains relatively scarce. To make ethical and full use of the data that patients provide, the research field should commit not only to better reporting practices but also to actively using PRO data.
Supplementary Information
Below is the link to the electronic supplementary material.