Adaptation of digital integration of PROMs and PREMs in oncology during implementation: a scoping review.
APA
Farič N, Scherrens AL, et al. (2026). Adaptation of digital integration of PROMs and PREMs in oncology during implementation: a scoping review. Supportive Care in Cancer, 34(4). https://doi.org/10.1007/s00520-026-10509-0
MLA
Farič N, et al. "Adaptation of digital integration of PROMs and PREMs in oncology during implementation: a scoping review." Supportive Care in Cancer, vol. 34, no. 4, 2026.
PMID
41824067
Abstract
[PURPOSE] Digital tools facilitate the timely collection of patient-reported outcome and experience measures (ePROMs/ePREMs), but there is no consistent reporting on the technical and content adaptations essential to implementing these digital tools in a specific context. Adaptations made to ePROMs/ePREMs can improve data quality, clinical management, and patient outcomes. We explored how studies report on adaptations, and the reasons for and types of adaptations made, during the implementation of ePROMs/ePREMs systems in routine cancer care.
[METHODS] We conducted a systematic scoping review. We searched PubMed, Embase, PsycINFO, and CINAHL (inception to May 5, 2023), using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist. Guided by the Population, Concept, and Context (PCC) framework, data were extracted and summarised in tables across four dimensions: context, content, evaluation, and training.
[RESULTS] The systematic search found n = 5597 publications, and n = 20 were included (85% published since 2019). No studies reported on ePREMs. Various data collection methods and stakeholders were utilised to make adaptations, guided by one or more implementation frameworks (80% of studies). Common types of adaptations included changing context (e.g. complex onboarding), content (e.g. readability) (all studies), evaluation (e.g. alerts), and training (e.g. patients and clinicians). The use of an implementation framework did not affect the types of adaptations made.
[CONCLUSIONS] This review summarises the types of adaptations made to oncology ePROMs during implementation. To date, there has been no agreed system to capture adaptations of ePROMs in oncology, nor a system or framework to assess ePROMs efficacy.
Relevance Statement
(i) What is already known about the topic?
To date, there has been no agreed system to capture adaptations of ePROMs in oncology, and currently there is no system or framework to assess ePROMs' efficacy.
This review offers a comprehensive summary of the reasons for and types of adaptations made to oncology ePROMs during an implementation process.
(ii) What this paper adds
We created a comprehensive overview of the reasons for and types of adaptations made to ePROMs, which can inform the future development and implementation of ePROMs in oncology.
(iii) Implications for practice, theory or policy
Our review showed a complete absence of studies reporting adaptations to oncology ePREMs, potentially indicating a gap in reporting on patients' experiences, while much focus has been on patient outcomes (ePROMs).
Our review highlighted the low availability of data for inpatient groups, which could benefit from more frequent inclusion in the literature.
Our review also found no relationship between the type of implementation framework used and the adaptations made to ePROMs.
Introduction
In recent years, health information technology (HIT) has become an integral part of oncology care to facilitate the electronic assessment of patient-reported outcome measures (PROMs) and patient-reported experience measures (PREMs) [1, 2]. Electronic PROMs (ePROMs) assess patients' views on their health, well-being, or symptoms related to their care (e.g. nausea and pain). ePREMs capture patients' perspectives on the overall quality of and satisfaction with received care, which is essential for personalised oncology treatment and holistic care [3]. Assessing ePROMs and ePREMs can help to identify unmet patient needs, improve patient-clinician communication and patient satisfaction, increase supportive care measures, alleviate symptom burden, and improve patient-centred care [4–7]. Real-world implementation of ePROMs, and in particular ePREMs, outside the context of feasibility studies or research interventions in cancer care remains sparse, and rates of adoption and implementation remain low [2, 8, 9].
Up to now, the scientific literature shows that ePROMs and ePREMs are prone to different barriers at distinct stages of the implementation process in healthcare [10–13]. These include lack of clinicians' knowledge; disbelief in their value; lack of digital literacy among both patients and clinicians; low technology confidence; inadequate infrastructures; and challenges with integrating ePROMs and ePREMs into the electronic medical health record [2, 10]. Adaptations are an integral part of the implementation process of ePROMs and ePREMs. Adaptations refer to a process of purposeful or unplanned modifications to system design or implementation to better fit the local context, to retain fidelity (maintaining the integrity of the intervention and its implementation), or to facilitate acceptability [14–17]. Many early-stage implementation reports provide preliminary observations rather than established findings. We aimed to base the review on evidence derived from fully executed interventions to avoid drawing conclusions from incomplete or tentative data. In the ePROMs/ePREMs literature, most studies neither track nor report the adaptations performed. Adaptations to ePROMs/ePREMs in oncology should be a topic of research because they directly impact data quality, clinical decision-making, and patient engagement. ePROMs/ePREMs must be tailored to the specific context to be successfully integrated, as there is no one-size-fits-all solution. Consequently, ongoing formative evaluation and adaptation are essential strategies. Optimised electronic measures can play an important role in the early detection of complications and the tailoring of patients' disease management, potentially improving quality of life or survival [5].
In this scoping review, we therefore map studies that describe adaptations to ePROMs and ePREMs, and examine how, and what types of, adaptations are reported to improve fit with the local context during the implementation study. Learning about adaptations made to ePROMs/ePREMs in oncology could inform the development of other systems or outcome measures applied in oncology.
Methods
Design
The Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist and explanation [18] were used to draft the protocol, which was subsequently registered with the Open Science Framework in 2022 (https://osf.io/z3wc7/overview). A systematic scoping review methodology was chosen as the most appropriate for the study aim of mapping the existing literature to provide an exploratory overview of a topic that has not previously been characterised in this manner.
Eligibility criteria
To be included in this review, articles needed to (i) focus on routine cancer care, (ii) report on the implementation of an ePROMs and/or ePREMs intervention, and (iii) (retrospectively) describe an iterative process of adapting the system and/or methods of implementation. As adaptations often take place after the initial stages of ePROMs/ePREMs implementation, it was important to identify studies describing complete implementations. RCTs are not designed to assess the sustainability of implementing ePROMs in daily practice and often do not allow for the adaptations needed for real-world use—both important elements of our scoping review. Articles were excluded if they (i) were not oncology focussed; (ii) implemented paper-based PROMs and PREMs; (iii) focused exclusively on the impact, effectiveness, measurement properties, and/or language validity of ePROMs and ePREMs; (iv) used ePROMs and ePREMs as a measure of drug therapy effect, such as in clinical trials; or (v) described the implementation of ePROMs and ePREMs in research contexts (e.g. clinical trials) or were prospective, because such studies were usually still ongoing. Furthermore, the search was limited to primary studies written in English. Duplicates, protocols, non-empirical publications, editorials, letters to editors, conference abstracts, and commentaries were also excluded. Reviews were not included in the synthesis, but reference lists of relevant reviews were screened for primary studies. Full criteria are available in the Supplementary Material 01.
Literature search
We systematically searched the following academic databases: PubMed, Embase, PsycINFO, and CINAHL (from inception to May 5, 2023). Through multidisciplinary team discussion, a search strategy was developed combining indexing terms, keywords, and synonyms for cancer (e.g. neoplasm), implementation and adaptation (e.g. adoption), and PROMs/PREMs (e.g. patient-reported outcomes). The full search strategy was adapted for each electronic database (see Supplementary Material 01). All records were imported into the reference manager EndNote (Clarivate), which allowed batch import rather than downloading each reference individually, and the screening process was facilitated by the web-based tool Rayyan (Rayyan Systems Inc.). After removing duplicates, records were screened for eligibility based on title and abstract by E.R. (see Acknowledgements). In this first round of screening, a second reviewer (N.F.) independently screened at least 10% of records (n = 494) to check inter-rater reliability. This was followed by full-text screening, in which all records were screened by the two reviewers (E.R. and N.F.), blinded to each other. Disagreements and conflicts were resolved through discussion. The electronic database search was supplemented by screening the reference lists of relevant reviews and included papers and by exploring grey literature (last searched July 24, 2023), all in the English language. The latter included exploring Dutch literature (published in English) in Medline, MedRxiv (i.e. preprints), the International Network of Agencies for Health Technology Assessment (HTA) database, and Open Access Theses and Dissertations, because a significant proportion of the authors are based in the Dutch-speaking part of Belgium.
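The inter-rater reliability check described above reduces to a simple percent-agreement calculation over the dual-screened sample. A minimal sketch, assuming two parallel lists of include/exclude decisions (the reviewer decisions below are hypothetical, not the study's data):

```python
def percent_agreement(decisions_a, decisions_b):
    """Share (%) of records on which both reviewers made the same call."""
    if len(decisions_a) != len(decisions_b):
        raise ValueError("Both reviewers must screen the same set of records")
    matches = sum(a == b for a, b in zip(decisions_a, decisions_b))
    return 100 * matches / len(decisions_a)

# Hypothetical title/abstract decisions for five records:
reviewer_er = ["include", "exclude", "exclude", "include", "exclude"]
reviewer_nf = ["include", "exclude", "include", "include", "exclude"]
print(percent_agreement(reviewer_er, reviewer_nf))  # 80.0
```

Applied to the full 10% sample (n = 494), this is the statistic reported as 92.5% and 94.7% agreement in the two screening rounds.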
Data extraction and synthesis
This review was guided by the Population, Concept, and Context (PCC) framework [14], which helped us construct clear and meaningful objectives and categories for data extraction in our scoping review. The data from the included studies were extracted and summarised in a table in Excel (Microsoft Inc) by one researcher (E.R.), and uncertainties were discussed with a second (N.F.) and third reviewer (A.-L.S.). The extracted data included PCC dimensions: context, content, evaluation, and training—categories that shaped our data organisation and discussion.
The extracted data also included (i) background information: first author, year, and country where the implementation took place; (ii) details on the population and setting: number of sites where the implementation took place, in- and/or outpatient setting, and target group; (iii) details on the ePROMs and/or ePREMs intervention: measurement instruments, timing, and core features of the digital system (e.g. alert system and integration in the electronic medical health record); (iv) information on the process and reporting of adaptation: theoretical frameworks, process for making adaptations, clinical champions supporting the adaptation process, qualitative and/or quantitative methods supporting the adaptation process (methodology described for the adaptation), and presentation of adapted elements in the manuscript (i.e. first aim); and (v) reasons for and types of adaptations (i.e., second aim). Stirman and associates’ typology of modifications was used to categorise adaptations [14]. Contextual adaptations refer to changes to the delivery of the ePROMs/ePREMs in terms of format, setting, personnel, and the target population, while content adaptations refer to changes to the procedures, materials, or delivery. Adapting training and evaluation processes refer to training and evaluation adaptations, respectively [14]. Numerical analyses using descriptive methods and a narrative synthesis were conducted to summarise results.
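The four-dimension coding described above can be sketched as a small categorisation step: each extracted adaptation is tagged with one Stirman dimension and then tallied. The adaptation descriptions below are illustrative examples drawn from the review's own categories, not the full extracted dataset:

```python
from collections import Counter

# The four dimensions of Stirman and associates' typology, as used in this review.
STIRMAN_DIMENSIONS = {"context", "content", "evaluation", "training"}

# Hypothetical extraction rows: (dimension, short description of the adaptation).
adaptations = [
    ("context", "simplified a complex onboarding process"),
    ("content", "improved readability of questionnaire items"),
    ("evaluation", "reconfigured alert thresholds"),
    ("training", "added orientation sessions for new staff"),
    ("content", "changed how feedback is displayed to patients"),
]

# Guard against typos in the coding scheme, then tally per dimension.
assert all(dim in STIRMAN_DIMENSIONS for dim, _ in adaptations)
counts = Counter(dim for dim, _ in adaptations)
print(counts["content"], counts["training"])  # 2 1
```

In the review itself, these per-dimension tallies are what underlie summary statements such as "content adaptations were included in all 20 studies".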
Results
Study selection
As shown in the PRISMA-ScR flow diagram [18] (Fig. 1), the systematic search in academic databases resulted in n = 5597 publications. The titles and abstracts of 3700 records were screened for eligibility after the removal of duplicates. From these, 244 were retrieved for the full-text screening round. The full-text screening round identified 15 studies eligible for inclusion. The main reasons for exclusion were the wrong publication type (i.e. conference abstracts and literature reviews) and the lack of an iterative process of adapting the system and/or implementation methods. Per cent agreement between the first and second reviewers was 92.5% and 94.7% in the first (i.e. title and abstract) and second (i.e. full text) screening rounds, respectively. Scanning reference lists of relevant reviews resulted in five additional studies being identified. We did not find grey literature publications eligible for inclusion. In total, n = 20 articles were included in this scoping review, in which two publications reported on the same ePROM intervention and the accompanying process of adapting the digital tool [19, 20].
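As a sanity check, the flow counts reported above can be reconciled with simple arithmetic (a worked sketch of the reported numbers, not the authors' screening tool):

```python
# PRISMA-ScR flow counts as reported in the text.
records_identified = 5597          # database search results
screened_after_dedup = 3700        # titles/abstracts screened
duplicates_removed = records_identified - screened_after_dedup
full_text_retrieved = 244          # records taken to full-text screening
included_from_search = 15          # eligible after full-text screening
included_from_reference_lists = 5  # found by scanning review reference lists
total_included = included_from_search + included_from_reference_lists
print(duplicates_removed, total_included)  # 1897 20
```

The 20 included articles thus comprise 15 from the database search plus 5 from reference-list scanning, with 1897 duplicates removed before title/abstract screening.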
General study characteristics
Among the included studies, the oldest one dated from 2010 [21], and n = 17 (85%) of the studies were published between 2019 and 2022, with eight studies alone in 2022 [16, 19, 20, 22–25]. Studies reported locations in which they were conducted as follows: USA (n = 7) [16, 22, 25–29], Europe (n = 6; i.e. Austria (n = 3), Germany, Belgium, and the Netherlands) [21, 23, 24, 30–32], Australia (n = 3) [19, 20, 33, 34], and the UK (n = 2) [35, 36]. In one study, the implementation took place in different countries [37]. Concerning the target group, n = 15 (70%) targeted outpatients [19–21, 23, 26–31, 33–37] and n = 12 (60%) included patients with various cancer diagnoses [16, 21, 24–27, 29, 32, 33, 37] (Table 1). Implementation of ePROMs assessments was guided by various groups such as study authors, clinicians, administration staff, champions, nurses, etc. (see under “Process of integration and adaptations”). Two studies included children with cancer [21, 26].
Intervention characteristics
All the included studies focused on ePROMs, and none reported on ePREMs. A variety of measurement instruments were used to assess PROMs, including the European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire Core (EORTC QLQ) and its specific modules (n = 7) [23, 24, 29–32, 35], the Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE) (n = 5) [16, 22, 23, 33, 36], the EQ-5D (health-related quality of life) and its three-level version (n = 3) [30–32], and the Patient-Reported Outcome Measurement Information System (PROMIS) (n = 2) [27, 29] (Table 2). The frequency and timing of the ePROMs assessment varied, ranging from daily (n = 2) [24, 37], weekly (n = 3) [16, 23, 36], and monthly (n = 3) [19, 20, 27] to yearly (n = 1) [26]. More than half of the studies reported that the frequency of ePROM assessment was either varied or very specific to their study (e.g. at a new cancer diagnosis, prior to treatment, depending on treatment, and before a consultation) (n = 11) [21, 22, 28, 29, 31, 32, 35]. In terms of access location, the sites offered either remote, in situ, or a combination of both as follows: in situ (n = 5) [21, 24, 30, 33, 35], remote (n = 7) [26, 31, 36, 38–41], and both (in situ and remote) (n = 7) [19, 20, 22, 26, 27, 29, 32, 34]. In addition, 11 ePROM systems offered alerts [16, 19, 20, 22, 23, 25, 28, 29, 33, 36, 37], and n = 14 (64%) studies [16, 19, 20, 22, 25, 29, 31, 33] reported integration of patients' ePROMs into the electronic health record (EHR).
Types of adaptations
As shown in Table 3, studies included different types of adaptations, which were categorised into four dimensions. For complete details and descriptions of each type of adaptation, please refer to the table. Types of adaptations included context adaptations (e.g. increasing visibility of the tool in services, changing complex onboarding) (n = 12) [16, 19, 20, 27, 30, 32, 33, 35] and content adaptations (e.g. changing visibility, readability, and feedback within the ePROMs system) [16, 19–37], the latter included in all 20 studies. Other adaptations were made to evaluation (e.g. mostly configuration of alerts) (n = 3) [22, 33, 35] and training (e.g. group training, training new staff) (n = 10) [19–22, 26, 28, 33–37] (see also Supplementary Material 02 for an overview).
Different dimensions of adaptation for ePROMs and ePREMs address distinct barriers and needs: context adaptations involve refining eligibility criteria or the clinical workflow; content adaptations change the format, presentation, or symptom highlights; evaluation adaptations focus on readiness surveys and feedback loops; and training adaptations cover orientation, role changes, and the practical and technical skill-building required for effective use.
Common reasons for adaptations
Common reasons for adaptations, reported by patients, clinicians, or both, included: (i) low response rates and non-completion of ePROMs by patients (n = 6) [16, 26, 31, 32, 34, 36], (ii) high workload of clinicians (n = 5) [16, 27, 29, 33, 35], (iii) time constraints of clinicians (n = 3) [29, 33, 35], (iv) alert fatigue and unwarranted alerts for both (n = 5) [16, 25, 29, 36, 37], (v) technical issues for both (n = 5) [29, 33, 35–37], (vi) staff's lack of knowledge about how to assess, interpret, and use the ePROMs system (n = 5) [16, 20, 28, 33, 35], and (vii) a high frequency of positive responses that overburdened staff because it resulted in increased assessment (follow-up) for the clinical team (n = 1) [27]. Low response rates and non-completion of ePROMs by patients were, for instance, attributed to a combination of infrastructural and human factors, such as a lack of computer access [26], forgetting [36], and patients being unaware of needing to complete ePROMs [34, 35].
To address low response and engagement, adaptations included using automated patient reminders [16, 36], refining cover letters or patient training to emphasise the importance of PROMs and the time points of survey assessment [31, 32, 36], providing patients with a tablet device in situ on which to complete ePROMs [26], and staff education (orientations to train new and existing staff) [20, 26, 34]. Although some studies provided automated alerts, adaptations were needed to manage those alerts better and to adjust workflows around irrelevant and excessive notifications. For example, outpatient staff believed they were not the appropriate recipients of inpatient alerts when not managing inpatient care, prompting better routing of alerts [16, 35]. Workflow adaptations included abandoning the alert system in favour of better integration of reports into workflows [33]; other responses to irrelevant and excessive notifications included revising alert thresholds [20, 36], silencing alerts [16, 25, 37], allowing patients to opt out of receiving calls [16], and adding branching questions to filter current problems [36]. Technical issues, such as connectivity issues [29, 33, 35, 37] and software problems [35], were addressed by adapting the ePROMs system to monitor for, and make clinicians aware of, connectivity issues [37] and by software upgrades [35]. Finally, staff's lack of knowledge led to different adaptations, including organising (additional) professional education and training sessions [20, 28], as well as developing detailed guidelines [35].
Process of integration and adaptations
Various procedures for the integration and adaptation process were reported. Across studies, multiple champions were involved in the adaptation process, including study staff, medical and allied healthcare staff, management and administrative personnel, and information technology experts (Table 4). Twelve (55%) studies used a combination of qualitative (e.g. observation, interviews, focus groups, and written feedback) and quantitative methods (e.g. surveys and outcomes extracted from the EHR) to support and inform the adaptation process, with the majority receiving feedback from the users of ePROMs [16, 19, 20, 26, 30, 32, 33, 35, 36]. The adaptations were collected and presented in different formats: written text (n = 11) [21, 24–26, 30–32, 34, 37], a combination of a table and written text (n = 6) [16, 22, 28, 29, 33, 36], and supplementary materials (e.g. healthcare provider survey) (n = 3) [19, 20, 35].
Theoretical frameworks for implementation processes
In n = 15 (68%) studies, one (or more) framework(s) or principles were used to guide the development, implementation, adaptation, and/or evaluation [16, 19–27, 29, 30, 33, 35, 36] such as Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM), Consolidated Framework for Implementation Research (CIFR), and Fit between Individuals, Task and Technology (FITT) (see all listed in Table 4). Some frameworks or principles were specifically used to guide the adaptation process. The frameworks or principles guided the researchers to make adaptations at varying time points in the implementation process, which varied from 3 to 52 months (mean duration = 24.5 months SD 3.2 months). The majority of the studies made adaptations at pre-specified time points (n = 11/19), some at random time points (n = 7/11), and one included a combination of both (n = 1/19). The reason we report out of 19 and not 20 studies is that Bamgboje‐Ayodele [20] and Girgis [19] used the same dataset and were part of the same study. The frameworks were applied in ways that either provided step-wise guidance on how to perform the adaptation process [19, 20, 29, 33, 35] or by guiding the reporting on and categorisation of adaptations [26] (e.g. creating three pilot versions; prioritising adaptations in order of importance). Only one study exclusively reported the use of user-centred design principles [22]. To provide an example of how a framework guided the implementation process, Furlong et al. [37] used RE-AIM framework, which has five domains: (i) reach of the target population, (ii) effect on key outcomes, (iii) adoption by people responsible for its delivery, (iv) success of its implementation, and (v) potential for it to be maintained. Each of these five domains includes specific questions that researchers or implementers can ask or address, using data sources (e.g., clinical audit logs completed over 3 months, EHR data, and ePROM survey) and at different time points. 
This is applied arbitrarily or as decided by the particular team. There was no consistency or pattern in terms of the framework used and phases or frequency of data collection, stakeholders involved, access location, ePROMs measurement use, or frequency or types of adaptations made.
Study selection
As shown in the PRISMA-ScR flow diagram [18] (Fig. 1), the systematic search in academic databases resulted in n = 5597 publications. The titles and abstracts of 3700 records were screened for eligibility after the removal of duplicates. From these, 244 were retrieved for the full-text screening round. The full-text screening round identified 15 studies eligible for inclusion. The main reasons for exclusion were the wrong publication type (i.e. conference abstracts and literature reviews) and the lack of an iterative process of adapting the system and/or implementation methods. Per cent agreement between the first and second reviewers was 92.5% and 94.7% in the first (i.e. title and abstract) and second (i.e. full text) screening rounds, respectively. Scanning reference lists of relevant reviews resulted in five additional studies being identified. We did not find grey literature publications eligible for inclusion. In total, n = 20 articles were included in this scoping review, in which two publications reported on the same ePROM intervention and the accompanying process of adapting the digital tool [19, 20].
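The inter-reviewer agreement reported above is simple arithmetic: the share of records on which both reviewers made the same include/exclude decision. A minimal illustrative sketch (the screening decisions below are hypothetical, not taken from this review):

```python
def percent_agreement(reviewer_a, reviewer_b):
    """Percentage of records on which two reviewers made the same call."""
    assert len(reviewer_a) == len(reviewer_b)
    same = sum(a == b for a, b in zip(reviewer_a, reviewer_b))
    return 100 * same / len(reviewer_a)

# Hypothetical include/exclude decisions (True = include) for 8 records
a = [True, False, False, True, False, True, False, False]
b = [True, False, True, True, False, True, False, False]
print(percent_agreement(a, b))  # 7 of 8 records agree -> 87.5
```

In practice, chance-corrected statistics such as Cohen's kappa are often reported alongside raw percent agreement.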
General study characteristics
Among the included studies, the oldest one dated from 2010 [21], and n = 17 (85%) of the studies were published between 2019 and 2022, with eight studies alone in 2022 [16, 19, 20, 22–25]. Studies reported locations in which they were conducted as follows: USA (n = 7) [16, 22, 25–29], Europe (n = 6; i.e. Austria (n = 3), Germany, Belgium, and the Netherlands) [21, 23, 24, 30–32], Australia (n = 3) [19, 20, 33, 34], and the UK (n = 2) [35, 36]. In one study, the implementation took place in different countries [37]. Concerning the target group, n = 15 (70%) targeted outpatients [19–21, 23, 26–31, 33–37] and n = 12 (60%) included patients with various cancer diagnoses [16, 21, 24–27, 29, 32, 33, 37] (Table 1). Implementation of ePROMs assessments was guided by various groups such as study authors, clinicians, administration staff, champions, nurses, etc. (see under “Process of integration and adaptations”). Two studies included children with cancer [21, 26].
Intervention characteristics
All the included studies focused on ePROMs, and none reported on ePREMs. A variety of measurement instruments were used to assess PROMs, including the European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire Core (EORTC QLQ) and its specific modules (n = 7) [23, 24, 29–32, 35], the Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE) (n = 5) [16, 22, 23, 33, 36], the EQ-5D (health-related quality of life) and its three-level version (n = 3) [30–32], and the Patient-Reported Outcome Measurement Information System (PROMIS) (n = 2) [27, 29] (Table 2). The frequency and timing of the ePROMs assessments varied, ranging from daily (n = 2) [24, 37], weekly (n = 3) [16, 23, 36], and monthly (n = 3) [19, 20, 27] to yearly (n = 1) [26]. More than half of the studies reported that the frequency of ePROM assessment was either varied or very specific to their study (e.g., at a new cancer diagnosis, prior to treatment, depending on treatment, or before a consultation) (n = 11) [21, 22, 28, 29, 31, 32, 35]. In terms of access location, the sites offered either remote, in situ, or a combination of both as follows: in situ (n = 5) [21, 24, 30, 33, 35], remote (n = 7) [26, 31, 36, 38–41], and both (n = 7) [19, 20, 22, 26, 27, 29, 32, 34]. In addition, 11 ePROM systems offered alerts [16, 19, 20, 22, 23, 25, 28, 29, 33, 36, 37], and n = 14 (64%) studies [16, 19, 20, 22, 25, 29, 31, 33] reported integration of patients’ ePROMs into the electronic health record (EHR).
Types of adaptations
As shown in Table 3, studies included different types of adaptations, which were categorised into four dimensions (see the table for complete details and descriptions of each type). Types of adaptations included context adaptations (e.g. increasing the visibility of the tool in services, changing complex onboarding) (n = 12) [16, 19, 20, 27, 30, 32, 33, 35] and content adaptations (e.g. changing visibility, readability, and feedback within the ePROMs system) [16, 19–37], which were made in all 20 studies. Other adaptations were made to evaluation (e.g. mostly configuration of alerts) (n = 3) [22, 33, 35] and training (e.g. group training, training new staff) (n = 10) [19–22, 26, 28, 33–37] (see also Supplementary Material 02 for an overview).
The four dimensions of adaptation for ePROMs and ePREMs address distinct barriers and needs: context adaptations involve refining eligibility criteria or the clinical workflow; content adaptations change the format, presentation, or symptom highlights; evaluation adaptations focus on readiness surveys and feedback loops; and training adaptations cover orientation, role changes, and the practical and technical skill-building required for effective use.
Common reasons for adaptations
Common reasons for adaptations were reported by patients, by clinicians, or by both: (i) low response rates and non-completion of ePROMs by patients (n = 6) [16, 26, 31, 32, 34, 36], (ii) high workload of clinicians (n = 5) [16, 27, 29, 33, 35], (iii) time constraints of clinicians (n = 3) [29, 33, 35], (iv) alert fatigue and unwarranted alerts for both (n = 5) [16, 25, 29, 36, 37], (v) technical issues for both (n = 5) [29, 33, 35–37], (vi) staff’s lack of knowledge about how to assess, interpret, and use the ePROMs system (n = 5) [16, 20, 28, 33, 35], and (vii) a high frequency of positive responses that overburdened staff because it resulted in increased assessment (follow-up) for the clinical team (n = 1) [27]. Low response rates and non-completion of ePROMs by patients were, for instance, attributed to a combination of infrastructural and human factors, such as a lack of computer access [26], forgetting [36], and patients being unaware of needing to complete ePROMs [34, 35].
To address low response and engagement, possible adaptations included using automated patient reminders [16, 36], refining cover letters or patient training to emphasise the importance of PROMs and the time points of survey assessment [31, 32, 36], providing patients with a tablet device in situ on which to complete ePROMs [26], and staff education (orientations to train new staff and existing staff) [20, 26, 34]. Although some studies provided automated alerts, adaptations included better management of those alerts and workflow changes to address irrelevant and excessive notifications. For example, outpatient staff reported that they were not the appropriate recipients of inpatient information when not managing inpatient care, prompting better routing of alerts [16, 35]. Other adaptations to curb irrelevant and excessive notifications included abandoning the alert system in favour of better integration of reports into workflows [33], revising alert thresholds [20, 36], silencing alerts [16, 25, 37], allowing patients to opt out of receiving calls [16], and adding branching questions to filter current problems [36]. Technical issues, such as connectivity issues [29, 33, 35, 37] and software problems [35], were addressed by adapting the ePROMs system to monitor for connectivity issues and make clinicians aware of them [37], and by software upgrades [35]. Finally, the staff’s lack of knowledge led to different adaptations, including organising (additional) professional education and training sessions [20, 28], as well as developing detailed guidelines [35].
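Two of the alert-reduction adaptations described above, revising alert thresholds and adding branching questions to filter current problems, can be sketched as simple triage logic. This is an illustrative assumption of how such a rule might look, not the implementation used by any included study; the threshold value and names are invented:

```python
# Hypothetical sketch of two alert-reduction adaptations:
# (i) a raised severity threshold for triggering a clinician alert, and
# (ii) a branching question so only current problems generate real-time alerts.

ALERT_THRESHOLD = 3  # e.g. raised from 2 after feedback about alert fatigue

def triage(symptom_score, still_a_problem):
    """Return 'alert', 'follow_up', or 'none' for one ePROM item (illustrative)."""
    if symptom_score >= ALERT_THRESHOLD and still_a_problem:
        return "alert"      # severe and current: notify the clinical team
    if symptom_score >= ALERT_THRESHOLD:
        return "follow_up"  # severe but resolved: log for review, no alert
    return "none"           # below threshold: no notification

print(triage(4, True))   # alert
print(triage(4, False))  # follow_up
print(triage(1, True))   # none
```

The branching question (`still_a_problem`) filters out resolved symptoms before any alert fires, which is one way unwarranted notifications could be reduced without discarding the data.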
Process of integration and adaptations
Various procedures for the integration and adaptation process were reported. Across studies, multiple champions were involved in the adaptation process, including study staff, medical and allied healthcare staff, management and administrative personnel, and information technology experts (Table 4). Twelve (55%) studies used a combination of qualitative (e.g. observation, interviews, focus groups, and written feedback) and quantitative methods (e.g. surveys and outcomes extracted from the EHR) to support and inform the adaptation process, with the majority receiving feedback from the users of ePROMs [16, 19, 20, 26, 30, 32, 33, 35, 36]. The adaptations were collected and presented in different formats: written text (n = 11) [21, 24–26, 30–32, 34, 37], a combination of a table and written text (n = 6) [16, 22, 28, 29, 33, 36], and supplementary materials (e.g. healthcare provider survey) (n = 3) [19, 20, 35].
Theoretical frameworks for implementation processes
In n = 15 (68%) studies, one or more frameworks or principles were used to guide the development, implementation, adaptation, and/or evaluation [16, 19–27, 29, 30, 33, 35, 36], such as Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM), the Consolidated Framework for Implementation Research (CFIR), and Fit between Individuals, Task and Technology (FITT) (all listed in Table 4). Some frameworks or principles were specifically used to guide the adaptation process. The frameworks or principles guided the researchers to make adaptations at varying time points in the implementation process, which ranged from 3 to 52 months (mean = 24.5 months, SD = 3.2 months). The majority of the studies made adaptations at pre-specified time points (n = 11/19), some at random time points (n = 7/19), and one used a combination of both (n = 1/19). We report out of 19 rather than 20 studies because Bamgboje‐Ayodele [20] and Girgis [19] used the same dataset and were part of the same study. The frameworks were applied in ways that either provided step-wise guidance on how to perform the adaptation process [19, 20, 29, 33, 35] or guided the reporting on and categorisation of adaptations [26] (e.g. creating three pilot versions; prioritising adaptations in order of importance). Only one study exclusively reported the use of user-centred design principles [22]. As an example of how a framework guided the implementation process, Furlong et al. [37] used the RE-AIM framework, which has five domains: (i) reach of the target population, (ii) effect on key outcomes, (iii) adoption by people responsible for its delivery, (iv) success of its implementation, and (v) potential for it to be maintained. Each domain includes specific questions that researchers or implementers can ask or address, using data sources (e.g., clinical audit logs completed over 3 months, EHR data, and ePROM surveys) at different time points.
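The five RE-AIM domains and the data sources named above can be represented as a simple lookup structure. This is an illustrative sketch only; the field names and the pairing of each domain with particular data sources are our own simplification, not a schema from Furlong et al. or the RE-AIM authors:

```python
# Illustrative only: the five RE-AIM domains paired with example data sources.
# Which source serves which domain is an assumption for the sketch.
RE_AIM = {
    "reach":          {"question": "Did the ePROM system reach the target population?",
                       "data_sources": ["ePROM survey", "EHR data"]},
    "effectiveness":  {"question": "What was the effect on key outcomes?",
                       "data_sources": ["EHR data"]},
    "adoption":       {"question": "Was it adopted by those responsible for delivery?",
                       "data_sources": ["clinical audit logs (3 months)"]},
    "implementation": {"question": "Was the implementation successful?",
                       "data_sources": ["clinical audit logs (3 months)"]},
    "maintenance":    {"question": "Can it be maintained over time?",
                       "data_sources": ["EHR data", "ePROM survey"]},
}

for domain, spec in RE_AIM.items():
    print(f"{domain}: {spec['question']}")
```

Making the domain-to-data-source mapping explicit in this way is one means of pre-specifying evaluation time points rather than applying a framework arbitrarily.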
Frameworks were applied arbitrarily, or as decided by the particular team. There was no consistency or pattern in the framework used, the phases or frequency of data collection, the stakeholders involved, access location, ePROMs measurement use, or the frequency or types of adaptations made.
Discussion
Principal findings
This review summarises the types of adaptations made to ePROMs during implementation using data from n = 20 studies. Various data collection methods and stakeholders were utilised to make adaptations, guided by one or more implementation frameworks (80% of studies). Common types of adaptations included changing context (e.g. complex onboarding), content (e.g. readability) (all studies), evaluation (e.g. alerts), and training (e.g. of patients and clinicians). No association was found between the implementation framework used and the number or types of adaptations made. Overall, the existing evidence of ePROMs adaptations made during implementation in clinical care is rather limited, and although the studies contain varying degrees of detail about the adaptations made, there is no standardised method for reporting these adaptations or assessing their efficacy.
At the time of writing, this was the first review on how studies undertake and report on adaptations of ePROMs/ePREMs systems for integration into oncology care. In addition, there were no reports on the reasons for and types of adaptations made to system design and/or implementation strategy to promote implementation. Similar reviews exist on the implementation of ePROMs systems in general [2, 8, 39, 42], but not explicitly on the adaptations made during implementation, which is this review’s unique contribution to the literature. One recent review reported similar findings related to adaptations [39]: it demonstrated that adaptations require personalisation and are often not aligned with clinical reality, calling for the development of ePROMs that fit better with existing workflows and programmes [39]. We found no evidence that applying a particular implementation framework results in capturing or applying a different number or type of adaptations in any of the dimensions: content, context, evaluation, or training. The data also show no consistency between the implementation framework that guided the implementation process and other factors such as the phases or frequency of data collection, stakeholders involved, access location, choice of ePROMs, or frequency or types of adaptations.
What this study adds and implications for practice
We did not aim to report on the successes or failures of the implementations and adaptations of ePROMs in clinical practice. Instead, we attempted to map which types of adaptations are made within a local context, and how they are reported and captured during the implementation in each study. Each study made a set of adaptations specific to its local context, but we could not identify any patterns beyond the facts that all studies changed the content of ePROMs during implementation (e.g. changing visibility or presentation, readability, and feedback for users within the ePROMs system) and that all included various stakeholders in the adaptation process. The majority of the changes were therefore to the presentation, but also to alerts, response options, and the frequency of administration of ePROMs. We did not collect data on whether the changes improved ePROMs response rates because this was not the objective of the review. The complexity of changes was likely individual to every context, cancer group, ePROM, and study, which warrants further review in future. The studies included a large variety of stakeholders in the adaptation process, ranging from oncologists, nursing and allied staff, clinical and administrative teams, technology partners, study staff, study authors, health informatics experts, and researchers to advisory workgroups. Although this was not our aim, we became aware that the studies do not describe which method or group of stakeholders was more or less effective in making adaptations and when, because this was entirely individual to each study, its target clinical and patient population, its chosen ePROMs system, the chosen implementation framework(s), and its local context [11].
The studies included in this review captured the adaptations in various ways, using structured and unstructured collection of mostly qualitative but also quantitative feedback, such as (semi-structured) interviews, focus groups, reports, technological feasibility evaluation forms, observation notes, meeting logs, evaluation of transcripts and detailed notes from engagement meetings, and abstracted outcomes. Mixed methods are preferred for capturing data in this context; however, it was not an objective of this review to evaluate the levels of organisational readiness for implementing change, nor the effect clinicians had on the success or use of ePROMs [19, 43].
We could not assess the clinical impact of the adaptations made as has been done in some reviews [44] because this was not the objective of this review but also because many studies did not report the outcomes of the adaptations but just the adaptations made (usually as a list or within the text). However, we succeeded in offering a comprehensive, detailed (up to May 5, 2023, for databases and July 24, 2023, for grey literature) description of types and reasons for adaptations of oncology ePROMs during implementation—which should be of great value to any implementation scientists wishing to develop, evaluate, or implement PROMs into clinical practice across different clinical specialties.
We found no studies that detailed the adaptation of ePREMs in routine cancer care. The reasons for this are unknown; it could possibly be due to differences in how ePROMs versus ePREMs are used and/or studied. Future research should focus more on the use of ePREMs as well as ePROMs, particularly how they are adapted for local contexts. This review also identified limited data on the use of ePROMs with inpatients, a group that could benefit from being included more frequently in this literature. This could point to a broader problem of not (yet) including people’s experience measures (ePREMs) in care as frequently as their medical outcomes (ePROMs) [45].
Strengths and limitations of the study
The strengths of this review include compliance with PRISMA-ScR guidelines, the development of a comprehensive search strategy, and the review of results applying the PCC framework [14]. The results of the review help further the conversation on ePROMs implementation needs in oncological clinical practice and shed light on the interaction between people and technology in a real-world clinical setting. A closer inspection is required to elucidate the interplay between patient management based on ePROMs results and the success of care. The findings of this review are likely transferable and generalisable to ePROMs in other medical fields, patient populations, and countries, because ePROMs are digital tools, and digital tools of a similar nature often share similar characteristics. For example, ePROMs are designed with the specific purpose of gathering patient information for more streamlined and patient-centred care. The issue with patient-centred data is that it is not always collected efficiently, cost-effectively, or frequently enough. Introducing ways to improve this process, i.e. by adapting the implementation and use of ePROMs, may yield increased efficiency, better response rates, and better data quality, which in turn enhances clinical decision-making and enables better identification of patients’ health changes or health needs. Since all ePROMs are designed with this purpose and implemented into various clinical fields, they likely share similar challenges.
The review also has some limitations, including the relatively small number of databases and papers included. The grey literature search added one paper, which was published in English; grey literature in other languages could potentially capture more studies and more insights or details of adaptations made during the implementation of ePROMs as part of routine oncology care. Another limitation is that it was not possible to capture or describe each of the implementation frameworks used in each study due to the sheer volume of this information. A separate review could describe the different implementation frameworks used for ePROMs in oncology.
Clearly, we could only examine the material available to us. We did not investigate the effectiveness or outcomes of the adaptations, but rather focused on identifying and reporting them. Our aim was to understand the nature of the adaptations, not their impact (e.g. in practice or for oncology populations), as this was beyond the scope of our study. The general trend we observed was that adaptations made to ePROMs during the implementation are highly contextualised to the local setting in which they are implemented. This observation aligns with findings from other studies [2].
Studies have also pointed to the importance of clinicians in implementing systematic ePROMs/ePREMs, with increased engagement from clinicians leading to higher success rates in patient enrolment and responses [40]. Our review did not assess the clinicians’ involvement, be it to elicit patients’ feedback or as part of the adaptation process. Not describing clinicians’ involvement with ePROMs/ePREMs as part of adaptations more broadly can mean that we have not considered adaptations that happen on a more macro level [41, 46].
Lastly, this review had several methodological limitations. We could not capture the function of PROMs at different levels (e.g. PROMs as a screening tool, a management tool to identify and prioritise issues, or a tool to improve patient-physician information [41, 47]). Additionally, our literature search could have been widened, and findings on ePROMs adaptations could have been compared across different conditions, such as cancer and non-cancer diseases. We could not assess whether specific feedback and later types of adaptations were linked to different functions of PROMs, nor whether different implementation frameworks yielded different adaptation types. Also, we excluded studies that focused exclusively on the effectiveness of ePROMs/ePREMs, although such studies may have included details of adaptations. We also recognise that the review was conducted in a specific timeframe with data extracted by specific researchers, so any potentially relevant literature published afterwards will not have been included. This time lag may limit the comprehensiveness of the evidence.
Principal findings
This review summarises the type of adaptations made to ePROMs during implementation using data from n = 20 studies. Various data collection methods and stakeholders were utilised to make adaptations, guided by one or more implementation frameworks (80% of studies). Common types of adaptations included changing context (e.g. complex onboarding), content (e.g. readability) (all studies), evaluation (e.g. alerts), and training of patients and clinicians (e.g. training). No evidence was found for the type of implementation framework or the number of types of adaptations made. Overall, the existing evidence of ePROMs adaptations made during implementation in clinical care in the published literature is rather limited, and although the studies contain varying degrees of details of the adaptations made, there is no standardised method for reporting these adaptations or assessing their efficacy.
At the time of writing this review, this was the first review to date on the topic of how studies undertake and report on adaptations of ePROMs/ePREMs systems for integration into oncology care. In addition, there are no reports on the reasons and types of adaptations made to system design and/or implementation strategy to promote implementation. Similar reviews exist on the implementation of ePROMs systems in general [2, 8, 39, 42], but not explicitly on the adaptations made during process implementation, which is a unique contribution of this review to the literature gap. One of the recent reviews also shows similar findings related to the adaptations [39]. For example, the review demonstrated that adaptations require personalisation and are often not aligned with clinical reality, calling for the development of ePROMs that would fit better with the existing workflows and programmes [39]. We found no evidence that an application of a particular implementation framework results in capturing or applying a different number or types of adaptations to any of the dimensions: content, context, evaluation, or training. The data also show no consistency in terms of the implementation framework that guided the implementation process and all other factors such as phases or frequency of data collection, stakeholders involved, access location, choice of ePROMs, or frequency or types of adaptations.
What this study adds and implications for practice
We did not aim to report on the successes or failures of the implementations and adaptations of ePROMs in clinical practice. Instead, we attempted to map which types of adaptations are made within a local context, and how they are reported and captured during the implementation of each study. Each study had made sets of adaptations specific to their local context, but we could not identify any patterns apart from that they have all ended up making changes to the content (e.g. changing visibility or presentation, readability, and feedback for users within the ePROMs system) of ePROMs during implementation, and that all included various stakeholders in the adaptation process. The majority of the changes were therefore to the presentation, but also to alerts, response options, and frequency of administration of ePROMs. We did not collect data on whether the changes improved ePROMs response rates because this was not the objective of the review. It is possible that the complexity of changes was likely individual to every context, cancer group, ePROM, and individual study, which would warrant further review in future. The studies included a large variety of stakeholders in the adaptation process, ranging from oncologists, nursing and allied staff, clinical and administrative teams, technology partners, study staff, study authors, health informatics experts, researchers, advisory workgroups, etc. Although this was not our aim, we became aware that the studies do not describe which method or group of stakeholders was more or less effective in making adaptations and when, because this was entirely individual to each study, their target clinical and patient population, their chosen ePROMs system, the chosen implementation framework(s) and their local contexts [11].
The studies included in this review have captured the adaptations in various ways, using a structured and unstructured collection of feedback; through mostly qualitative but also quantitative feedback, such as (semi-structured) interviews, focus groups, reports, technological feasibility evaluation forms, observation notes, meeting logs, evaluation of transcripts at engagement meetings and detailed notes, and abstracted outcomes. Mixed methods are a preferred method to capture data in this context; however, it was not an objective of this review to evaluate the levels of organisational readiness for implementing change, nor the effect the clinicians had on the success or use of ePROMs [19, 43].
We could not assess the clinical impact of the adaptations made as has been done in some reviews [44] because this was not the objective of this review but also because many studies did not report the outcomes of the adaptations but just the adaptations made (usually as a list or within the text). However, we succeeded in offering a comprehensive, detailed (up to May 5, 2023, for databases and July 24, 2023, for grey literature) description of types and reasons for adaptations of oncology ePROMs during implementation—which should be of great value to any implementation scientists wishing to develop, evaluate, or implement PROMs into clinical practice across different clinical specialties.
There were no studies found that detailed the adaptation of ePREMs in routine cancer care. The reasons for this are unknown; however, it could possibly be due to differences in how ePROMs versus ePREMs are used and/or studied. Future research should focus on the use of ePREMs as well as ePROMs more, particularly how they are adapted for local contexts. Also, this review identified the limited data on the use of ePROMs in in-patients, and these groups could benefit from being included more frequently in this literature. This could point to a broader problem of not (yet) including people’s experience measures (ePREMs) in care as frequently as their medical outcomes (ePROMs) [45].
Strengths and limitations of the study
The strengths of this review include compliance with PRISMA-ScR guidelines, the development of a comprehensive search strategy, and the review of results applying the PCC framework [14]. The results of the review help further the conversation on ePROMs implementation needs in oncological clinical practice and shed light on the interaction between people and technology in a real-world clinical setting. A closer inspection is required to elucidate the interplay between patient management based on ePROMs results and the success of care. The findings of this review are likely transferable and generalisable to other ePROMs in medical fields, which include other patient populations and countries. This is because digital ePROMs are digital tools, and digital tools of a certain nature often share similar characteristics. For example, ePROMs are designed with a specific purpose of gathering patient information for more streamlined and patient-centred care. The issue with patient-centred data is that it is not always collected efficiently, in a cost-effective manner or frequently enough. Introducing ways to improve this process, i.e. by adapting the implementation and use of ePROMs, may yield increased efficiency, better response rates, and data quality, which in turn enhances clinical decision-making or enables better identification of patients’ health changes or health needs. Since all ePROMs are designed with this purpose and implemented into various clinical fields, they likely share similar challenges.
The review also has some limitations, including the relatively small number of databases and papers included. The grey literature search added one paper, published in English; searching grey literature in other languages could potentially capture more studies and more insights into adaptations made during the implementation of ePROMs in routine oncology care. Another limitation is that it was not possible to capture or describe each implementation framework used in each study, given the volume and breadth of this information. A separate review could be dedicated to describing the different implementation frameworks used for ePROMs in oncology.
Naturally, we could only examine the material available to us. We did not investigate the effectiveness or outcomes of the adaptations, but focused on identifying and reporting them; our aim was to understand the nature of the adaptations, not their impact (e.g. in practice or for oncology populations), as this was beyond the scope of our study. The general trend we observed was that adaptations made to ePROMs during implementation are highly contextualised to the local setting in which the tools are implemented, an observation that aligns with findings from other studies [2].
Studies have also pointed to the importance of clinicians in implementing systematic ePROMs/ePREMs, with increased engagement from clinicians leading to higher success rates in patient enrolment and responses [40]. Our review did not assess the clinicians’ involvement, be it to elicit patients’ feedback or as part of the adaptation process. Not describing clinicians’ involvement with ePROMs/ePREMs as part of adaptations more broadly can mean that we have not considered adaptations that happen on a more macro level [41, 46].
Lastly, this review had several methodological limitations. We could not capture the function of PROMs at different levels (e.g. PROMs as a screening tool, a management tool to identify and prioritise issues, or a tool to improve patient-physician information exchange [41, 47]). Our literature search could also have been widened, and findings on ePROMs adaptations compared across conditions, for example cancer versus non-cancer diseases. We could not assess whether specific feedback and subsequent types of adaptations were linked to different functions of PROMs, nor whether different implementation frameworks yielded different adaptation types. In addition, we excluded studies focusing exclusively on the effectiveness of ePROMs/PREMs, although such studies may have included details of adaptations. Finally, the review was conducted within a specific timeframe, with data extracted by specific researchers, so any potentially relevant literature published afterwards will not have been included; this time lag may limit the comprehensiveness of the most up-to-date evidence.
Conclusion
To our knowledge, this is the first scoping review to explicitly describe and categorise adaptations made to ePROMs in oncology during digital implementation. Previous reviews mention adaptations briefly but do not describe them in detail. Although a growing body of literature addresses the implementation of ePROMs systems in oncology care worldwide, several "grey areas" remain, including systematically reporting and managing feedback from patients and clinicians, and measuring the effectiveness of adaptations. This review broadly suggests that adaptations depend on local needs and structures, and on what works for whom and when. The findings demonstrate that adaptations during an ePROMs implementation process in oncology happen through phased approaches, regular meetings with stakeholders, continuous testing and monitoring of the tool and of feedback, the use of multiple clinical champions, and reliance not solely on quantitative data but more heavily on qualitative reports from immediate users. Future research could focus on describing the factors behind the overall success of an adaptation (i.e. better outcomes).
Supplementary Information
Below is the link to the electronic supplementary material.
Source: PubMed Central (JATS). Licensing follows the original publisher's policy; please cite the original article when quoting.