
Using Quality Improvement and Workflow Analysis to Successfully Implement Evidence-Based Interventions to Increase Colorectal Cancer Screening Rates.

Cancer Medicine, 2026, Vol. 15(2), e71634

Macauda MM, Scott LA, Eaddy RP, Fadic Quijano LS, Lewis TR, Monalisa NN, Thibault A


PMID: 41685389
DOI: 10.1002/cam4.71634

Abstract

[BACKGROUND] Colorectal cancer (CRC) is the third leading cause of cancer deaths in the United States for men and women combined but is preventable with timely screening. Evidence-based interventions (EBIs) provide promising opportunities to increase screening. There are few descriptive examples of the processes used to assess and implement EBIs to increase CRC screening.

[PROJECT DESCRIPTION] The Colorectal Cancer Prevention Network (CCPN) in South Carolina facilitated an intensive quality improvement technical assistance project aimed to increase CRC screening in 25 primary care clinics. In this paper we provide a detailed description of the process used to implement EBIs, report on the changes in CRC screening rates, and examine the impact of the interventions across clinics with different attributes (such as clinic size and rurality).

[METHODS] We used Chi-square to explore changes in screening rates from baseline to years two and three of clinic implementation. We used Difference-in-Differences analysis to assess changes in screening rates from baseline to third year for clinics with different attributes.

[RESULTS AND CONCLUSIONS] Across all clinics, the CRC screening rate increased from 45% to 51% (p < 0.05) from baseline to the third year of participation. Sixteen out of 25 clinics saw an increase in screening rates in their second year, and 14 out of 25 saw an increase in their third year. Clinics with smaller patient populations, rural clinics, clinics with fewer uninsured patients, and clinics with lower baseline rates saw greater percentage point improvements. Clinics onboarded in the second year saw the lowest gains. We conclude that a structured, tailored approach to the selection of EBIs can have positive effects on CRC screening rates, but positive change may vary depending on clinic attributes.


1. Introduction
Colorectal cancer (CRC) is the second leading cause of cancer deaths in the United States and in South Carolina [1, 2, 3, 4] for men and women combined. Across the United States, the incidence rate (per 100,000) of CRC between 2017 and 2021 was 36.4, and mortality was 12.9 [5]. In South Carolina, our project location, CRC incidence from 2017 to 2021 (35.6) was slightly lower than the national rate, but mortality (13.7) was higher [5]. CRC is highly preventable and treatable when individuals comply with timely screening [6, 7]. Unfortunately, CRC screening participation remains low across the United States [8, 9, 10, 11] and research suggests that complex sociodemographic, psychosocial, and economic barriers to screening can be difficult to mitigate [12, 13, 14, 15, 16, 17].
Since the National Colorectal Cancer Roundtable launched its national campaign to increase CRC screening to “80% in every community” [18], the emergence of new interventions and screening modalities has offered promising opportunities to increase CRC screening compliance [11, 19]. Evidence‐based interventions like provider assessment and feedback, provider reminders, patient reminders, and reducing structural barriers to care [9, 20] have been associated with increases in CRC screening rates [5, 9]. However, evidence‐based interventions remain underused in primary care clinics.
The Centers for Disease Control and Prevention (CDC) Colorectal Cancer Control Program (CRCCP) funded a state‐level program to increase CRC screening participation by supporting primary care clinic implementation and usage of evidence‐based interventions (EBIs) to address patient‐specific barriers to CRC screening. The project “SC Communities Unite to Increase CRC Screening” (SC Communities Unite) was carried out by the Colorectal Cancer Prevention Network (CCPN) at the University of South Carolina. The CCPN provided technical assistance (TA) to primary care clinics in selecting, implementing, and refining evidence‐based interventions using robust quality improvement processes.
While there are several examples in the literature of the implementation of EBIs to increase colorectal cancer screening rates [21, 22, 23, 24, 25, 26, 27], as well as the implementation of evidence‐based practice more broadly [28, 29], there are few descriptive examples of the specific processes used to integrate EBIs into clinic workflow for CRC screening. In this paper, we (1) describe, in detail, the quality improvement process used by the CCPN to integrate EBIs into the workflow of partner clinics; (2) assess the impact of our process on clinic CRC screening rates; and (3) explore differences in screening rate changes based on clinic attributes (such as clinic size, location, number of uninsured patients, and perceived readiness).

2. Materials and Methods
2.1 Clinic Enrollment
Clinics included 23 from four Federally Qualified Health Center (FQHC) systems and two from one hospital system. Clinic enrollment in the project was staggered over a 3‐year period, starting on June 30th of each project year. “Cohort 1” comprised 13 clinics onboarded in 2020, “Cohort 2” comprised eight clinics onboarded in 2021, and “Cohort 3” comprised four clinics onboarded in 2022. All health systems had clinics onboarded in Cohort 1.

2.2 Outcome Assessment Data
To assess FQHC clinic CRC screening rates, we utilized the CRC Screening Uniform Data Set (UDS) [30] quality measures collected through electronic medical records. Screening rates were further validated by each FQHC clinic prior to analysis. For hospital system primary care clinics, we used data collected from the Healthcare Effectiveness Data and Information Set (HEDIS) [31] reporting, which was also validated by the clinics. Both sources of data reported CRC screening rates for an entire calendar year (Jan–Dec) and were used as our official CRC screening rate for CDC project reporting. The Cohort 2 and Cohort 3 clinic CRC rates for 2023 included patients ages 45–74 in response to the change in screening guidelines recommended by the United States Preventive Services Taskforce (USPSTF) in 2021 [9] and present in our UDS and HEDIS measures for 2023. Thus Cohort 2 included 45–49 year olds in their third participation year, and Cohort 3 included them in their second participation year.
We used the following data points from UDS and HEDIS in our assessment of screening outcome: (1) trailing year baseline screening rate collected the year prior to each clinic's project onboarding; (2) trailing year screening rate from each clinic's second year of participation after implementation of interventions; and (3) trailing year screening rate from the third year of clinic's participation, after 2 years of intensive monthly technical assistance sessions and transition to quarterly meetings.
In addition to assessing the clinic, system, and overall CRC screening outcomes, we wanted to assess whether some clinic attributes correlated with variation in clinic outcomes. These included: (1) the number of screening‐eligible patients averaged across project years, as a proxy measure of clinic size [32]; (2) whether the clinic was situated in an urban or rural setting [33]; (3) the number of uninsured screening‐eligible patients averaged across project years [34]; (4) whether the clinic belonged to Cohort 1, Cohort 2, or Cohort 3, since the second two cohorts included 45–49 year olds in their screening rates during their first 3 years [9]; (5) the average score from pre‐implementation clinic readiness assessments [35]; and (6) CRC screening rate at baseline (to examine whether the initial screening rate affected screening rate increases).
To compare CRC screening rates across continuous clinic characteristics, such as number of patients seen, baseline screening rate, readiness score, and number of uninsured patients, we calculated the median value for each attribute and split the clinics into two groups, above and below the median; the median value itself was assigned to the lower group. For categorical variables, such as rurality (self‐reported by clinics during onboarding) and cohort, the clinics were divided into the respective groups.
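
As a minimal sketch of this median-split step (the clinic names and patient counts below are hypothetical, not project data), the grouping can be expressed in a few lines of Python:

```python
import pandas as pd

# Hypothetical clinic sizes (number of screening-eligible patients).
clinics = pd.DataFrame({
    "clinic": ["A", "B", "C", "D", "E"],
    "eligible_patients": [800, 1200, 950, 3000, 1200],
})

med = clinics["eligible_patients"].median()  # 1200 for this sample
# Per the rule above, clinics at or below the median go to the lower group.
clinics["size_group"] = clinics["eligible_patients"].apply(
    lambda v: "low" if v <= med else "high"
)
```

The same split would apply to baseline screening rate, readiness score, and number of uninsured patients.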
Readiness assessments (which were part of clinic onboarding, see below) included questions that were formatted on a 4‐point scale from 1 = Disagree to 4 = Strongly Agree (Do not know = 0 was removed prior to analysis). Thus, the higher the scores, the greater the degree of perceived readiness for intervention implementation. The maximum score was 80. There were 20 questions including “Promoting CRC screening is a priority for my clinic”, “My clinic monitors colorectal cancer screening rates monthly,” and “This clinic's staff and providers are receptive to making practice changes to match the system's priorities”. Readiness assessments were completed yearly by staff from each clinic. All clinic staff were invited to complete the assessments anonymously. For our analysis we averaged the readiness assessment responses across the first 3 years for each clinic and summed the total score from the average. This helped to address missing data and to gather as many staff datapoints as possible.
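
The readiness-score aggregation can be sketched as follows, shown with four illustrative questions rather than the full 20 (all response values are hypothetical):

```python
import numpy as np

# Rows are project years 1-3; columns are mean staff responses per question
# (1 = Disagree ... 4 = Strongly Agree; "Do not know" = 0 already dropped).
yearly_means = np.array([
    [3.2, 3.8, 2.9, 3.5],  # year 1
    [3.4, 3.6, 3.1, 3.7],  # year 2
    [3.6, 3.9, 3.0, 3.6],  # year 3
])

# Average each question across the three years, then sum across questions.
score = yearly_means.mean(axis=0).sum()
```

With all 20 questions, the maximum possible score is 20 × 4 = 80, matching the scale described above.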

2.3 Statistical Approach
To test for statistical significance in CRC screening rate changes between baseline and subsequent years, we used Microsoft Excel to calculate Chi‐square contingency tables. Significance was set at a threshold of p < 0.05. We used the total number of screened patients and the total number of unscreened patients for each category of interest (Individual clinics, systems, overall screening rate) from baseline compared to years two and three. To assess the differential performance of clinics, we used general linear modeling in SPSS 29 to create a Difference‐in‐Differences measure. We compared the number of screened patients (numerator) vs. eligible patients (denominator) across the two time periods of baseline and the third year of participation for each of the clinic groupings by attribute (e.g., low vs. high readiness score, low vs. high number of uninsured patients, rural vs. urban). Significance was tested using Wald Chi‐square. Significance was set at a threshold of p < 0.05. We chose to compare baseline to the third year since it reflected the clinic's ability to sustain CRC rates after the conclusion of intensive technical assistance. For all variables we used the validated clinic data as it was reported to CDC.

3. Description of SC Communities Unite EBI Implementation Process
From 2020 to 2025, CCPN worked with the staff and leadership of our four FQHCs and one hospital system to foster a supportive and cohesive approach in the selection, implementation, and quality improvement of evidence‐based interventions in 25 primary care clinics. Clinics were chosen in consultation with the health systems. CRC screening rates were below 60% at project baseline for all 25 clinics. All clinics participated in four project phases of quality improvement (QI) activities that were tailored to each clinic based on their location, size, patient population profile, staffing capacity, and baseline CRC screening rate.
To gather data to support the continuous quality improvement process, FQHC clinics used the cloud‐based health information technology platform Azara [36] to extract electronic medical record data, whereas the hospital system used their electronic medical record system with a built‐in population health dashboard. In partnership with the South Carolina Primary Healthcare Association, the CCPN developed a monthly customized Azara Data Reporting and Visualization System (DRVS) report that summarized trailing year aggregate CRC screening rates by patient demographics, insurance status, and social determinants of health. These data extractions provided clinics with very precise CRC screening data and were used to guide decisions around intervention enhancement. Lucidchart [37], a cloud‐based visual collaborative platform, was used to document processes, guide interactive technical assistance sessions, and track outcome measures. Lucidchart provided our TA team and clinic staff with the ability to create and edit virtual interactive diagrams, offering a visual component used by the clinics to discuss their quality improvement steps.
3.1 Phase I: Identify Areas for Improvement
This phase fostered a comprehensive clinic‐level review of each step of their CRC screening process at project baseline. First, clinic staff and leadership completed readiness and capacity assessments to gather specific clinic‐level information (e.g., number of staff by role/position, type of CRC screening modalities ordered, use of EMR and health information technology tools to monitor CRC screening rates, consistency of patient CRC risk assessment, clinic CRC screening protocols, staff knowledge and preparedness to implement interventions, and perceptions of CRC screening prioritization by staff and leadership). Readiness and Capacity assessments were also completed by each clinic yearly. Next, clinic leadership and staff attended a virtual workshop conducted by the American Cancer Society (QI bootcamp) where participants were provided an overview of the Institute for Healthcare Improvement's (IHI) Model for Improvement [38]. Additionally, participants learned key team building strategies, goal setting, and the role of data monitoring in clinical quality improvement.
Following QI bootcamp, each clinic began monthly technical assistance sessions. The first activity required clinic staff to write an aim statement, which was the foundation of each individual clinic's desired CRC screening outcomes. Next, clinics joined a process mapping session in Lucidchart to capture their current CRC screening workflow. Clinic staff delineated clinical roles (e.g., provider, medical assistant, nurse, lab, reception, and support staff) and responsibilities (e.g., completion of patients' risk assessment, screening order creation, reception and upload of screening results, and delivery of results to patient) in the CRC screening process. Subsequently, the CCPN team converted the documented workflow into a current state “swim lane” workflow diagram (Figure 1). This diagram allowed clinic staff to assess potential process gaps and barriers to effective CRC screening.
Next, clinics used the defined workflow to identify screening challenges using a root cause analysis. The TA team facilitated clinic staff through a series of questions to identify barriers that impacted CRC screening completion rates based on specific categories (e.g., staffing, process, measurement, education, environment, and resources) using a “fishbone diagram” exercise (See Figure 2). Once identified and categorized, clinics were asked to prioritize the screening barriers to address.
Using the results obtained from these activities, clinics selected at least two of four evidence‐based interventions, which were incorporated into their clinic's practice workflow. In Lucidchart, clinics recorded which interventions they selected, how the interventions addressed screening barriers, and detailed how each intervention would be implemented. Of the 25 clinics, 21 selected patient reminders, 22 selected provider reminders, 14 selected provider assessment and feedback, and seven selected reducing structural barriers. The median number of interventions implemented by clinics was three.

3.2 Phase II: Implementation
Once clinics selected their interventions and began implementation, clinic staff were asked to complete Plan‐Do‐Study‐Act (PDSA) cycles to identify potential areas for process improvement and make small process adaptations that could improve their interventions [38]. Clinics were encouraged to use monthly CRC screening data reports to monitor CRC screening outcomes. After PDSA cycles were conducted, clinic staff met with the TA team to discuss and evaluate the impact that process adjustments had on screening outcomes and decide if further adjustments were warranted.

3.3 Phase III: Sustainability
The sustainability phase ensured clinics' intervention implementation and clinical workflow could be sustained beyond the scope of the project. With the support of the CCPN TA team, clinics were asked to review their initial swim lane workflow diagram to document where it was refined to achieve optimal intervention implementation. The TA team created a future state process map that could be used as a reference and to train new clinic staff, ensuring consistency and sustainability of intervention implementations over time.
As clinics transitioned to the sustainability phase, they created a sustainability plan with the support of the Washington University in St. Louis Center for Public Health Systems Science (WashU). Using WashU's Clinical Sustainability Assessment Tool (CSAT) [39], clinics assessed their capacity to sustain interventions based on seven domains known to impact organizational sustainability (Engaged Staff & Leadership, Engaged Stakeholders, Organizational Readiness, Workflow Integration, Implementation & Training, Monitoring & Evaluation, and Outcomes & Effectiveness). Sustainability plans were reviewed by clinic staff after 6 months. Twelve months later, clinics completed a second CSAT to further evaluate and monitor the sustainability of interventions within the clinic workflow.

3.4 Phase IV: Medical Neighborhood
This final phase of the project turned staff attention outside of their individual clinics to determine if untapped external medical and public health entities and/or resources were available and could further support screening intervention implementation and increases in CRC screening completion (access to follow‐up colonoscopy, community transportation services, etc.). In support of this phase, CCPN's TA team assisted clinics in completing a local environmental scan and stakeholder mapping exercise in Lucidchart. The goal was to identify and prioritize community partnerships that played a role in their patients' completion of CRC screening. Clinics assigned stakeholders (such as gastroenterology practices, community organizations, pharmaceutical organizations, and other healthcare organizations) into quadrants by level of interest (low‐high) and power to enact change (low‐high). The systems could then use this information to guide sustainability post‐project.

4. Screening Results
Figure 3 shows each clinic's CRC rate changes from their individual baselines to their second and third years of participation. Of the 25 clinics, 16 clinics saw statistically significant increases in CRC screening rate from baseline to their second year of project participation, with a median positive change of 13.04 percentage points (SD = 7.91). Conversely, four clinics experienced a decrease in their CRC screening rate between baseline and their second year of participation, with a median decrease of 8.5 percentage points (SD = 3.95). Finally, five clinics did not see a statistically significant change in CRC screening rate between baseline and their second year of participation. In clinics' respective third year of participation, 14 clinics saw a significant positive change in their CRC screening rate from baseline, with a median increase of 19.22 percentage points (SD = 9.03). Conversely, six clinics saw their CRC screening rate decrease from baseline, with a median decrease of six percentage points (SD = 4.43). Five clinics saw no significant change.
Across all clinics, the CRC screening rate saw a positive statistically significant increase from 45% to 51%. Rate increases did vary significantly by health system, from a less‐than‐one percentage point increase to a 23‐percentage point increase. Four out of five systems saw a statistically significant increase in CRC screening rates from their baseline to their third year of participation. See Table 1, below.
Of the clinic characteristics we tested, all were significant except the readiness score. Clinics with larger patient populations had a higher baseline rate (47% vs. 38%) but saw a smaller baseline-to-third-year change (five percentage points vs. 12). Similarly, clinics that started at a lower baseline ended with a lower third-year rate than clinics that started at a higher baseline, but saw a greater baseline-to-third-year rate increase (11 percentage points vs. four). Compared to Cohort 1, Cohort 2 clinics started at a slightly lower baseline and saw a smaller percentage point gain (three vs. 10). Clinics in Cohort 3 started at a lower baseline (31% vs. 48%) but saw a greater percentage point gain (14 vs. seven points). Similarly, rural clinics started at a lower baseline (41% vs. 48%) than urban clinics but saw a greater percentage point increase (11 vs. three percentage points). Finally, clinics with a greater percentage of uninsured patients had a slightly higher initial screening rate but smaller gains than those with fewer uninsured patients (four vs. nine points); see Table 2, below.

5. Discussion
Clinics enrolled in SC Communities Unite participated in intensive multi‐year quality improvement activities to select, implement, and enhance evidence‐based interventions to improve their clinic's overall CRC screening rates. Many clinics saw statistically significant increases in their CRC screening rates and maintained these increases across project years, and four out of the five systems saw statistically significant increases, including the system with the largest number of clinics. However, the system that did not see increases was our second largest, with six clinics; of these six, only two had positive significant changes from baseline to year three. While we did not have a control group of clinics for this project, it is worth noting that in FQHCs across South Carolina, the CRC screening rate was 44.08% in 2021 and 46.06% in 2024 [40], a gain of 1.98 percentage points, whereas for our participating clinics, the CRC screening rate went from 47.07% to 53.29%, a gain of 6.22 percentage points during the same period (these numbers differ slightly from our results above because they are reported by calendar year, when our clinics were in different implementation stages).
Our Difference‐in‐Differences analysis shows that clinic characteristics also correlate with performance. Clinics with smaller patient populations started with a lower baseline but saw greater gains, as did rural clinics (though these two factors are likely related) and clinics with fewer uninsured patients. The performance of the three cohorts also differed: Cohort 2 saw the smallest improvement by the third year, possibly because this cohort had to adjust to new screening guidelines that included 45–49 year olds; this change would have been part of Cohort 3's second-year measures. We note that the readiness assessment was not predictive of clinic performance. This may be because the readiness assessment did not collect information that truly assessed readiness for this specific implementation, especially considering the degree to which the project, by design, increased implementation capacity through QI. Alternatively, it could be because the scores were generally high with little variation: the median score was 69 and the range was 60 to 76 out of a possible 80. This may be because clinic staff consistently felt ready to participate, or it may indicate a response bias, as clinic staff may have been reluctant to give a negative view of their respective clinics.
An interesting trend was that, across different comparators, groups that started with a lower baseline CRC screening rate tended to see greater percentage point gains than those that started at higher baselines (with the notable exceptions of readiness, which was not predictive, and membership in Cohort 2, which is likely due to the inclusion of younger patients in screening). While such a pattern could reflect regression to the mean, the screening rate data used in our analysis represent trailing year data and thousands of patients, which reduces the likelihood of measurement error or the chance that a random extreme measure is solely responsible for the observed pattern.

6. Limitations
Limitations of our evaluation study include a clinic sample size too small to explore the relationship between clinic attributes and outcomes in statistical models, limiting us to univariate analysis. In addition, three clinics did not have baseline CRC rates available, so we used rates from those clinics' first year of participation, which were likely higher than baseline. Since our project was an intervention implementation, we did not have a comparison group, and we selected clinics based on need and in consultation with health systems; as a result, our findings may not be generalizable to FQHC clinics more broadly. There may also be other factors affecting screening rates that are unreported in this evaluation.

7. Conclusions
Integrating effective evidence‐based interventions into a primary care clinic workflow to increase CRC screening rates is time‐intensive and can be challenging in a clinic environment where staff shortages, staff turnover, and leadership changes are common (a fact that became evident during the course of the project). A structured, tailored approach to clinical quality improvement, in which clinic staff are guided step by step through workflow assessment, identification of screening barriers, and data‐informed selection of EBIs, can yield improved CRC screening rates despite the obstacles clinics face. Though we believe this work represents necessary initial steps toward understanding effective EBI implementation, additional work is needed to understand how different attributes of clinics and patient populations influence successful implementation, and ultimately how best to support different types of clinics and patients in achieving desired CRC screening rates.

Author Contributions

Mark M. Macauda: conceptualization (lead), formal analysis (lead), methodology (lead), writing – original draft (lead), writing – review and editing (equal). Lisa A. Scott: conceptualization (supporting), data curation (equal), project administration (equal), writing – review and editing (equal). Rebecca P. Eaddy: project administration (equal), writing – review and editing (equal). Ljubitca S. Fadic Quijano: project administration (equal), writing – review and editing (equal). Tracie R. Lewis: funding acquisition (lead), project administration (equal), writing – review and editing (equal). Nazratun N. Monalisa: data curation (equal), methodology (supporting), writing – review and editing (equal). Annie Thibault: funding acquisition (lead), project administration (equal), writing – review and editing (equal).

Funding
This article was supported by Cooperative Agreement Number CRCCP‐RFA‐DP‐20‐2002 from the Centers for Disease Control and Prevention.

Ethics Statement
This study was reviewed by the Institutional Review Board at the University of South Carolina and was deemed “not human subjects”. Review number Pro00145016.

Conflicts of Interest
The authors declare no conflicts of interest.

Source: PubMed Central (JATS). Licensing follows the original publisher's policy; please cite the original article.
