The QUERI Roadmap for Implementation and Quality Improvement [Internet]

Goodrich DE, Miake-Lye I, Braganza MZ, et al. Washington (DC): Department of Veterans Affairs (US); 2020.

Evaluation and Study Designs for Implementation and Quality Improvement

The following is a general overview of common study designs used for implementation and quality improvement initiatives. Additional details, examples, and suggested references are provided for further reading.

Experimental and quasi-experimental designs are two broad categories of evaluation designs used to rigorously and systematically assess the implementation of effective practices. They are also used to compare different implementation strategies. Experimental designs randomly assign participants, which could be consumers, clinicians, health care clinics or sites, or other organizational health care units (e.g., hospitals), to one or more intervention groups or a control group (standard of care) to test the causal impact of these conditions on desired outcomes (Figure 18). Interventions in these examples can include a clinical intervention or effective practice, as well as a quality improvement intervention or implementation strategy (e.g., methods to help clinicians deploy effective practices).

Randomization helps minimize selection bias because it ensures that the groups being compared are similar initially. It can also mitigate the influence of measured and unmeasured variables that might be associated with the outcome of interest (confounding variables**) so that evaluators can feel confident that differences between groups can be attributed to the intervention.

**Confounding Variables

Systematic differences between intervention and control groups that obscure the effect of an intervention. See: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4024462/

Julnes G, Mark M, Shipman S. Conditions to Consider in the Use of Randomized Experimental Designs in Evaluation. Data Foundation; June 25, 2020. Available at: https://static1.squarespace.com/static/56534df0e4b0c2babdb6644d/t/5f5a338a67d71a06db275fe9/1599746955716/Conditions+to+consider+for+randomized+experimental+designs.pdf. Accessed September 18, 2020.

Finkelstein A. A strategy for improving U.S. health care delivery - conducting more randomized, controlled trials. N Engl J Med. 2020;382(16):1485–1488 [PubMed : 32294343 ].

Figure 18

Illustration of a Generic Randomized Controlled Trial Design at the Patient-Level of Analysis.

Designs in Implementation Science

A fundamental goal of implementation science is to reduce the time it takes for an effective practice to go from testing in a clinical trial to use in real world health care settings. In a randomized-controlled clinical effectiveness study, the focus is on measuring the effectiveness, or benefit, that an intervention/practice provides consumers when used in routine clinical practice settings. The assessment focus is on health outcomes like consumer symptoms, behaviors, function, survival, or satisfaction with care. Thus, consumers are the unit of analysis and randomization in effectiveness trials, not the people or organizations delivering the clinical practice.

Effectiveness trials can range from those with tightly controlled procedures under ideal conditions to explain the benefit of an intervention, to highly pragmatic trials that test the benefit of the intervention in the real-world settings where the effective practice is meant to be used. Tools like the Pragmatic-Explanatory Continuum Indicator Summary-2 (PRECIS-2) can help evaluators understand the degree to which key aspects of an intervention protocol are designed for real-world use. Note that pragmatic effectiveness trials are the domain of research studies, whereas the QUERI Roadmap focuses on the evaluation of non-research implementation and quality improvement activities.

Notably, randomized implementation designs aim to assess the impact of different implementation strategies on the uptake of effective practices. The units of analysis and randomization in implementation studies are at the clinician, clinical unit, facility, or system level in which the implementation strategy is designed to change clinical care processes, workflows, behaviors, or decision-making. Implementation outcomes often reflect process or quality measures of practice uptake such as acceptability, adoption, appropriateness, feasibility, fidelity, penetration, and sustainability.

Suggested Readings

Coly A, Parry G. Evaluating Complex Health Interventions: A Guide to Rigorous Research Designs. Washington, D.C.: AcademyHealth; June 2017. Available at: https://www.academyhealth.org/evaluationguide

Landsverk J, Hendricks-Brown C, Smith JD, et al. Design and analysis in dissemination and implementation research. In: Brownson RC, Colditz GA, Proctor EK, eds. Dissemination and Implementation Research in Health. 2nd ed. New York: Oxford University Press; 2018:201–227.

Miller CJ, Smith SN, Pugatch M. Experimental and quasi-experimental designs in implementation research. Psychiatry Res. 2020;283:112452. [PMC free article : PMC6923620 ] [PubMed : 31255320 ]

Wholey J, Hatry HP, Newcomer K, eds. Handbook of Practical Program Evaluation. San Francisco: Jossey-Bass, Inc; 2004.

Lewis E, Baumann A, Gerke D, Tabak R, Ramsey A, Small S, Proctor E. D&I Research Designs. St. Louis, MO: Washington University; July 2017. Eight toolkits related to Dissemination and Implementation. Available at: https://sites.wustl.edu/wudandi

Aarons G. Module 5. Designs for Implementation Studies. National Cancer Institute, open access Training Institute for Dissemination and Implementation Research (TIDIRC) curriculum; July 2019. Available at: https://cancercontrol.cancer.gov/IS/training-education/tidirc/openaccess-module5.html

University of Washington Implementation Science Resource Hub. Select Study Design. Available at: https://impsciuw.org/implementation-science/research/designing-is-research/

Hybrid Effectiveness-Implementation Designs

To begin understanding the “implementability” of a promising clinical practice earlier in the research pipeline, implementation scientists developed a continuum of “hybrid” effectiveness-implementation designs in which the focus of the research questions changes as the trial becomes more pragmatic. This design continuum combines the best design elements of traditional effectiveness trials with those of implementation research trials to help evaluators consider factors that affect how an effective practice can be implemented at the end of the clinical research pipeline.

For example, in a hybrid type 1 design, the primary aim is to evaluate an intervention’s effectiveness, with a secondary emphasis on assessing barriers and facilitators to successful practice uptake. The primary focus of a hybrid type 2 study is to assess both the effectiveness of the clinical practice and its implementation strategy. Hybrid type 2 designs are often used to test an implementation strategy in a few sites. In comparison, the primary goal of a hybrid type 3 design is to assess the effectiveness of one or more implementation strategies used to get an effective practice into health care settings while also monitoring to ensure that the clinical effectiveness of the practice is maintained.

Figure 19

Examples of Evaluation Aims by Hybrid Effectiveness-Implementation Design Type.

Evaluations randomized at the patient level are generally considered effectiveness studies or studies assessing the effectiveness of both the clinical practice and the implementation strategy (hybrid type 1 or 2, respectively). In contrast, hybrid type 3 designs typically involve multiple sites that are allocated to different implementation strategies, or to an implementation strategy versus no implementation strategy, to promote the uptake of an effective practice. The primary outcome of a hybrid type 3 design is the uptake of the effective practice at the clinician or site level, and secondary outcomes are consumer-level outcomes.

Suggested Readings

Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217–226. [PMC free article : PMC3731143 ] [PubMed : 22310560 ]

Loudon K, Treweek S, Sullivan F, Donnan P, Thorpe KE, Zwarenstein M. The PRECIS-2 tool: designing trials that are fit for purpose. BMJ. 2015;350:h2147. [PubMed : 25956159 ]

Landes SJ, McBain SA, Curran GM. An introduction to effectiveness-implementation hybrid designs. Psychiatry Res. 2019;280:112513. [PMC free article : PMC6779135 ] [PubMed : 31434011 ]

Burke RE, Shojania KG. Rigorous evaluations of evolving interventions: can we have our cake and eat it too? BMJ Qual Saf. 2018;27(4):254–257. [PubMed : 29440483 ]

Experimental or Randomized Designs

As noted previously in the QUERI Roadmap, non-research implementation initiatives and quality improvement projects can still use rigorous, randomized designs to enable health care leaders to understand which implementation strategies are more effective at bringing a promising new practice into widespread use.

Cluster randomized trials
Stepped wedge
Factorial and sequential, multiple-assignment randomized trials (SMART)

Cluster or group-randomized trials

The two primary types of rigorous experimental designs used in implementation and quality improvement evaluations are the parallel cluster-randomized and the cluster-randomized stepped wedge designs. These trials are referred to as cluster- or group-randomized controlled trials, in which the unit of randomization is a cluster or group and outcome measures are obtained from members of the cluster.

These comparative designs are preferable for implementation evaluations that aim to compare different implementation strategies designed to produce change in “clusters” or groups at the social, physical, or environmental level in order to improve health care outcomes. These designs randomize centers (hospitals, community sites) or units (clinician groups, departments) rather than individuals, which helps avoid some of the contamination that can occur when randomization occurs within settings at the individual level.

The goal of randomization in cluster randomized trials is to achieve balance of baseline covariates among the individuals who comprise a cluster or among other cluster characteristics. In addition, evaluators seek to achieve balance in cluster sample size at baseline randomization. Since both forms of balance play a role in the sample size and power calculations required to design cluster randomized trials, we will present three methods that help achieve these aims later in this appendix.

Allocation to an implementation strategy may occur via simple or restricted randomization. Simple randomization is akin to a coin flip and is more likely to lead to balance in baseline covariates in larger sample sizes. If the number of units being randomized is small, simple randomization can lead to imbalance in unit characteristics by chance. Restricted randomization allocates treatments within constraints and includes strategies such as stratification, matching, minimization, and covariate-constrained randomization.

Stratification involves simple randomization within separate subgroups defined by important confounders (e.g., urban vs. rural); this is an especially useful strategy when there are few units in one stratum and evaluators want to ensure that both treatment and control conditions are well represented in small strata. In matching, one identifies pairs of units with identical values of specific baseline covariates and randomly assigns one member of each pair to the treatment condition.

Matching leads to precise balance in baseline covariates, but if one member of a pair drops out of the study, the other member also must be dropped to retain balance. When there are several confounders that one wishes to balance, minimization and covariate constrained randomization may be considered. For more information, see Suggested Readings at the end of this section.
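To make the allocation options above concrete, here is a minimal sketch (not from the Roadmap) contrasting simple and stratified randomization of clusters; the site list, the urban/rural stratifier, and the fixed seed are illustrative assumptions.

```python
import random

# Hypothetical clusters (e.g., clinics) with a single stratifier (urban vs. rural).
sites = [
    {"site": "A", "urban": True},  {"site": "B", "urban": True},
    {"site": "C", "urban": True},  {"site": "D", "urban": False},
    {"site": "E", "urban": False}, {"site": "F", "urban": False},
]
rng = random.Random(2024)  # fixed seed so the example is reproducible

def simple_randomization(clusters):
    """Coin-flip allocation; balance is only guaranteed in expectation."""
    return {c["site"]: rng.choice(["enhanced", "usual"]) for c in clusters}

def stratified_randomization(clusters, stratifier):
    """Randomize within each stratum so both arms appear in every stratum."""
    allocation = {}
    strata = {}
    for c in clusters:
        strata.setdefault(c[stratifier], []).append(c)
    for members in strata.values():
        rng.shuffle(members)
        half = len(members) // 2
        for i, c in enumerate(members):
            allocation[c["site"]] = "enhanced" if i < half else "usual"
    return allocation

print("Simple:    ", simple_randomization(sites))
print("Stratified:", stratified_randomization(sites, "urban"))
```

With only six clusters, the simple allocation can easily end up with, say, four enhanced and two usual sites, whereas the stratified allocation guarantees an even split within the urban and rural strata.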

In AcademyHealth’s Evaluating Complex Health Interventions: A Guide to Rigorous Research Designs, evaluators of randomized evaluations are encouraged to report variation in the nature and size of effects across clusters, not just the “average” effects in the study, to enhance learning from variation.

Equipoise – the degree to which there is genuine uncertainty among clinicians about whether one implementation strategy (for either the control or experimental group) is “better” than the other at the time a cluster randomized controlled trial (RCT) is designed.

Feasibility – practical considerations for conducting the design, based on the costs or personnel needed to carry out the evaluation and collect adequate data given funding support, stakeholder expectations, the timeline, ethical considerations of withholding a treatment intervention, and other logistical considerations dictated by the setting, the priority population of patients, and the nature of the implementation strategies to be tested.

Statistical power – the ability to detect a meaningful effect of an implementation strategy on an outcome, given the number and size of clusters included in the trial and the outcome of interest. Recent publications have reported that the relative power of the stepped wedge design and the parallel cluster RCT depends on the value of the intracluster correlation (the degree to which individuals within a cluster resemble each other in terms of the outcome of interest). Stepped wedge cluster RCTs are likely to be more efficient than parallel cluster RCTs when the intracluster correlation is high.
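The intracluster correlation (ICC) enters power calculations through the standard design effect, DE = 1 + (m - 1) x ICC, where m is the average cluster size. The short sketch below (the cluster count, cluster size, and ICC values are illustrative assumptions) shows how the effective sample size shrinks as the ICC grows.

```python
def design_effect(cluster_size: float, icc: float) -> float:
    """Standard variance inflation factor for a cluster randomized design."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(n_clusters: int, cluster_size: float, icc: float) -> float:
    """Total number of individuals divided by the design effect."""
    return (n_clusters * cluster_size) / design_effect(cluster_size, icc)

# Illustrative scenario: 20 clinics, 50 patients each (1,000 individuals in total).
for icc in (0.01, 0.05, 0.10):
    ess = effective_sample_size(n_clusters=20, cluster_size=50, icc=icc)
    print(f"ICC={icc:.2f}: design effect={design_effect(50, icc):.2f}, "
          f"effective N={ess:.0f} of 1000")
```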

Suggested Readings

Ivers NM, Halperin IJ, Barnsley J, et al. Allocation techniques for balance at baseline in cluster randomized trials: a methodological review. Trials. 2012;13:120. Available at: https://trialsjournal.biomedcentral.com/articles/10.1186/1745-6215-13-120 [PMC free article : PMC3503622 ] [PubMed : 22853820 ]

Turner EL, Li F, Gallis JA, Prague M, Murray DM. Review of Recent Methodological Developments in Group-Randomized Trials: Part 1-Design. Am J Public Health. 2017;107(6):907–915. [PMC free article : PMC5425852 ] [PubMed : 28426295 ]

Parallel cluster- or group-randomized design

A parallel cluster randomized design compares two or more implementation strategy conditions: clusters are randomly assigned to the strategies, the implementation strategies are deployed, and the results are then compared, as shown in Figure 20 below. Randomization is based on the assignment of implementation treatment conditions to clusters.

Figure 20

A Generic Parallel Cluster Design.

In some cases, a new implementation strategy may be compared with standard implementation as usual for bringing the same effective practice into use: some clusters receive an innovative implementation strategy to deploy the effective practice, while other clusters employ a standard method of bringing the effective practice into use over a similar time period. Process and output measures used as the primary end points are measured for all eligible patients or subjects in both conditions and aggregated to the level of the randomized unit (the cluster). The goal of this basic design is to understand whether a new implementation strategy results in better or more efficient implementation processes and outcomes compared with the methods currently employed.

An alternative parallel design is a head-to-head cluster randomized implementation trial that enables the comparison of two distinctly different implementation strategies to determine which is more successful in implementing an effective practice. In this design, the same effective clinical practice is used for both arms of the trial, and clusters are assigned randomly to one of the two different implementation strategies as shown in Figure 21. Both implementation strategies are manualized and carried out with equivalent attention to fidelity. Additionally, both implementation strategies are compared on the quality, quantity, or speed of implementing the effective practice.

Figure 21

Example of a Generic Head-to-Head Parallel Cluster-Randomized Implementation Trial.
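As a minimal illustration of the cluster-level comparison described above, the sketch below simulates patient-level uptake in two parallel arms, aggregates it to cluster means, and compares arms with a two-sample t-test on those means. The uptake rates, cluster counts, and cluster-level variation are invented for illustration and are not values from the Roadmap.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def simulate_cluster_means(n_clusters, n_per_cluster, arm_effect, cluster_sd=0.05):
    """Simulate binary patient-level uptake with a random cluster effect on the rate."""
    means = []
    for _ in range(n_clusters):
        cluster_effect = rng.normal(0, cluster_sd)
        p = np.clip(0.40 + arm_effect + cluster_effect, 0, 1)
        patients = rng.binomial(1, p, size=n_per_cluster)
        means.append(patients.mean())
    return np.array(means)

usual = simulate_cluster_means(n_clusters=12, n_per_cluster=80, arm_effect=0.00)
enhanced = simulate_cluster_means(n_clusters=12, n_per_cluster=80, arm_effect=0.10)

# Aggregating to the randomized unit keeps the analysis at the cluster level.
t, p = stats.ttest_ind(enhanced, usual)
print(f"Mean uptake: enhanced={enhanced.mean():.2f}, usual={usual.mean():.2f}, p={p:.3f}")
```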

Stepped-wedge cluster randomized design

A stepped wedge cluster randomized design is a one-directional crossover experimental design in which time is divided into multiple phases, enabling clusters to cross over from a routine implementation or control condition to an implementation condition so that all clusters eventually receive the same implementation strategy. As shown in Figure 22, the design begins with an initial phase in which no clusters receive the enhanced implementation strategy, followed by subsequent phases during which one or more clusters are randomized to receive the implementation strategy at regular, pre-specified intervals or steps. By the end of the study, all clusters will have been randomized to the intervention group. There should be an expectation that the benefits of the intervention exceed the potential harm.

Figure 22

Stepped-Wedge Cluster Randomized Design.

Evaluators are strongly encouraged to ensure that more than one cluster be assigned to the treatment condition at each time period. Otherwise, it can be difficult to separate out time and site characteristics’ impact on the outcome. This is a relevant concern when there is a possibility that treatment effects vary with time.

This design is ideal for situations where operational stakeholders all want the enhanced implementation strategy, where it might be perceived as unethical for a cluster not to receive the enhanced implementation strategy, and where all participating health units cannot receive the enhanced implementation strategy at the same time due to logistical, practical, or financial considerations.

A randomized stepped wedge design permits within-cluster effects to be evaluated. Each cluster receives the implementation strategy intervention while also serving as its own control. If there is a significant cluster effect, this increases the power of the evaluation to detect differences compared to a design where clusters are assigned to only a control or intervention group. Stepped wedge designs also support the iterative evaluation of barriers to the implementation of the intervention which can help improve implementation in subsequent steps or phases of the implementation rollout.

A stepped wedge design can have downsides, especially if rapid results are important to an operational partner, because this design can take longer than conventional cluster randomized trials, particularly if only one or a few clusters can be allocated to the enhanced implementation strategy at each step. Longer study duration can require more resources and result in higher costs. This design also requires frequent data collection, which can be challenging from a measurement perspective if key outcomes cannot be accessed through administrative databases and primary data collection is needed. Finally, evaluators must control for secular trends in key outcomes and avoid contamination between those receiving the implementation intervention and those waiting to receive it.
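A stepped-wedge rollout can be summarized as a cluster-by-period schedule in which clusters cross from control to intervention at randomized steps and never cross back. The sketch below builds such a schedule; the number of clusters, the number of steps, and the seed are illustrative assumptions.

```python
import random

def stepped_wedge_schedule(cluster_ids, n_steps, seed=11):
    """Randomly order clusters into steps; all end in the intervention condition."""
    rng = random.Random(seed)
    order = list(cluster_ids)
    rng.shuffle(order)
    per_step = -(-len(order) // n_steps)  # ceiling division: clusters per step
    start_step = {c: 1 + i // per_step for i, c in enumerate(order)}
    # Periods 0..n_steps: period 0 is all-control, period n_steps is all-intervention.
    return {
        c: ["intervention" if period >= start_step[c] else "control"
            for period in range(n_steps + 1)]
        for c in cluster_ids
    }

for cluster, row in stepped_wedge_schedule(["C1", "C2", "C3", "C4", "C5", "C6"], n_steps=3).items():
    print(cluster, row)
```

With six clusters and three steps, two clusters cross over at each step, which is consistent with the advice above to assign more than one cluster per step.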

Factorial Designs, including Sequential Multiple Assignment Randomized Trials (SMARTs)

Given known and well-documented delays in widespread adoption of effective practices, implementation trials that focus on improving uptake or adoption of a particular effective practice commonly forego traditional randomized comparisons between “treatment” and “control” arms. Instead, the focus of these trials is on the comparative effectiveness of different implementation strategies or sets of implementation strategies. Additionally, implementation trials often involve testing interventions or implementation strategies that are multi-component, and questions may revolve around determining which components are most effective.

As such, trial designs that relate to optimization, or determining the most efficient provision of limited resources for the largest public benefit, are often of interest to implementation scientists. The Multi-phase Optimization Strategy (MOST) borrows from two engineering principles—resource management and continuous optimization—to provide a framework for building and evaluating multicomponent interventions. While there are a number of different experimental methods that can inform optimization within the MOST framework, factorial designs are the most common. Sequential multiple assignment randomized trial (SMART) designs are a variant of factorial designs that inform sequential optimization of components.

Factorial designs

Factorial designs allow for “screening” of active or important intervention components in multi-component interventions. Factorial designs randomize participants (individuals, clinicians, sites) to different intervention components separately to allow for direct evaluation of each component (e.g., each of several implementation strategies bundled together). This includes both the main effects of and interactions between different components. This process allows investigators to “screen” or identify key “active” components (or combinations of components) for optimizing the intervention package prior to evaluating its effectiveness. Because multiple components can be screened within a single factorial experiment, a factorial design is more efficient than conducting multiple randomized controlled trials to examine each component individually.

In factorial designs, the levels of two or more intervention components (or implementation strategies) are “crossed” so that all possible combinations of each component are implemented. Table 26 shows a simple example of a factorial design combining two different implementation strategies—the presence/absence of Strategy A (on rows) and Low vs. High intensity versions of Strategy B (on columns).

Table 26

Illustration of a 2x2 Factorial Design Comparing Two Implementation Strategies.

For example, as shown in Figure 23, Strategy A could represent the presence of a virtual learning collaborative for clinicians to share best practices to adopt a new practice whereas Strategy B could represent high vs. low intensity of quality improvement coaching by experts from a clinical program office. The factorial design thus results in four different experimental conditions to which participants are randomized.

Figure 23

Illustration of a Factorial Design to Compare Implementation Strategies.

Strategy A: the effect of receiving access to the learning collaborative vs. not can be evaluated by comparing Conditions 3+4 with Conditions 1+2.

Strategy B: the effect of receiving high intensity vs. low intensity expert coaching can be evaluated by comparing Conditions 2+4 with Conditions 1+3.

As both main effects can be evaluated within the context of a single factorial study without needing to increase sample size, factorial designs offer an advantage over other study designs that might test components separately. Interaction effects can also be tested—for example, whether high intensity coaching is more effective when the learning collaborative is also provided—although as with any study, power for detecting interaction effects will be lower than power to detect the factor main effects.
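To show how the two main effects and their interaction can be estimated from the four conditions, here is a minimal sketch using simulated data and an ordinary least squares model with 0/1 indicators for each strategy. The outcome scale, effect sizes, and sample size are assumptions for illustration only, not values from the Roadmap.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_per_condition = 50

rows = []
for collab in (0, 1):          # Strategy A: learning collaborative absent/present
    for coach_high in (0, 1):  # Strategy B: low vs. high intensity coaching
        # Assumed true effects: +5 for the collaborative, +3 for high-intensity coaching.
        mu = 50 + 5 * collab + 3 * coach_high
        for _ in range(n_per_condition):
            rows.append({"collab": collab, "coach_high": coach_high,
                         "uptake": rng.normal(mu, 10)})
df = pd.DataFrame(rows)

# Both main effects and their interaction are estimated in a single model.
model = smf.ols("uptake ~ collab * coach_high", data=df).fit()
print(model.params.round(2))
```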

Suggested Readings

Collins LM, Murphy SA, Strecher V. The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): new methods for more potent eHealth interventions. Am J Prev Med. 2007;32:(5 Suppl):S112–118. [PMC free article : PMC2062525 ] [PubMed : 17466815 ]

Collins LM. Optimization of Behavioral, Biobehavioral, and Biomedical Interventions: The Multiphase Optimization Strategy (MOST). New York, NY: Springer International Publishing; 2018.

Gallis JA, Bennett GG, Steinberg DM, Askew S, Turner EL. Randomization procedures for multicomponent behavioral intervention factorial trials in the multiphase optimization strategy framework: challenges and recommendations. Transl Behav Med. 2019;9(6):1047–1056. [PMC free article : PMC6875651 ] [PubMed : 30590759 ]

Spring B, Pfammatter AF, Marchese SH, et al. A Factorial Experiment to Optimize Remotely Delivered Behavioral Treatment for Obesity: Results of the Opt-IN Study. Obesity. 2020;4(5):e16297 [PMC free article : PMC8045468 ] [PubMed : 32656994 ]

Quanbeck A, Almirall D, Jacobson N, et al. The Balanced Opioid Initiative: protocol for a clustered, sequential, multiple-assignment randomized trial to construct an adaptive implementation strategy to improve guideline-concordant opioid prescribing in primary care. Implement Sci. 2020;15(1):26. [PMC free article : PMC7183389 ] [PubMed : 32334632 ]

Sequential Multiple-Assignment Randomized Trials

Sequential multiple-assignment randomized trials (SMARTs) are a variant of factorial design that can be useful for implementation researchers interested in optimizing sequences of implementation support that adapt to ongoing needs and/or implementation success over time. Adaptive interventions, or interventions that adapt the type, intensity, and/or duration of treatment to participant needs as they change over time, are often appealing to implementation researchers because there is recognition that implementation strategies are not necessarily “one size fits all” and that some sites (or individuals or clinicians) may need more, less, or different implementation support than others. Additionally, adaptive interventions allow for accommodation of heterogeneity in treatment, for example by increasing or changing implementation support for participants who are not responding (or implementing) under an initial form of implementation support.

SMARTs are multistage randomized trials in which some or all participants (who may be sites, clinicians, and/or patients depending on the target of the intervention or implementation strategy) are randomized more than once, often based on ongoing information (e.g., treatment response, adherence to and/or success under prior implementation support).

For scientists interested in providing better implementation support or testing the comparative effectiveness of different forms of implementation support, SMARTs allow for direct comparisons of different sequences of implementation support to maximize EBP uptake and/or improvement in downstream clinical outcomes. As such, SMART designs are well-suited to answering questions related to what forms of implementation support should be used and in what order to achieve the best results.

A variety of SMART designs are possible, depending on current science; however, one popular SMART design randomizes all participants (patients, clinicians, sites) upfront to one of two possible first-line treatments or strategies. After a certain period of time, those participants that do not show sufficient “response” to the first-line strategy (e.g., by not exhibiting a certain amount of adoption, lacking adherence or engagement to their first-stage strategy, or exhibiting barriers that their first-stage treatment is not capable of addressing) are re-randomized to, for example, either continue that first-line strategy or augment their first-line strategy with a different strategy.

One example of this hypothetical design is shown in Figure 24.

Figure 24

Hypothetical SMART Design.

In this design, all participants (imagine they are clinicians) are randomized at study start to receive either Implementation Strategy A or Implementation Strategy B to support implementation of an EBP. After six months, an intermediate evaluation of each clinician’s response to the first-line treatment is conducted using a pre-defined definition of response vs. non-response (e.g., a responder is a clinician who delivered the EBP to at least 10 patients). Clinicians who were considered responsive to their first-line implementation support are not re-randomized and instead have their current implementation support continued (Cells A & D). However, clinicians who failed to meet the response threshold are re-randomized to either continue with their current form of implementation support (Cells B & E) or to add the other form of implementation support (Cells C & F).

Determination of the best first-stage treatment: This question asks which first-stage strategy results in the best end-of-study outcomes, marginalizing over what happens in the second stage (e.g., is it better to start with Strategy A or Strategy B?). In Figure 24, this would be evaluated by comparing the participants who received Strategy A (experimental conditions A+B+C) with those who received Strategy B (conditions D+E+F). In terms of power, this is equivalent to analyzing a two-arm trial.

Determination of the best second-stage treatment for non-responders: This question asks, among participants who are deemed non-responsive at month 6 and re-randomized, whether it is better to continue the first-line strategy or to augment it with the other strategy. In Figure 24, this would be evaluated by comparing end-of-study outcomes for the non-responsive participants who were randomized to continue their first-stage treatment (conditions B+E) with the non-responsive participants randomized to add the second strategy (conditions C+F). This is equivalent to analyzing a two-arm trial among participants who were non-responsive after the first stage.

In addition to these two questions, the design in Figure 24 embeds four sequences of implementation support:

1. Begin with Strategy A; after six months, continue Strategy A for all participants (Conditions A+B);

2. Begin with Strategy A; after six months, continue Strategy A for responders and augment with Strategy B for non-responders (Conditions A+C);

3. Begin with Strategy B; after six months, continue Strategy B for all participants (Conditions D+E);

4. Begin with Strategy B; after six months, continue Strategy B for responders and augment with Strategy A for non-responders (Conditions D+F).

A primary aim that compares different embedded sequences of treatment, thus, might compare sequence #1 with sequence #4 on end-of-study outcomes. This analysis, while similar to a two-arm trial, does require a bit of additional work to account for the fact that some participants contribute to multiple treatment sequences. Details on these analytic methods, as well as associated power calculations, can be found in the references below.
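The sketch below walks through the two randomization stages of the hypothetical SMART in Figure 24 for simulated clinicians. The response threshold of 10 patients mirrors the example above, while the number of clinicians, the uptake counts, and the seed are invented for illustration; this is not analysis code from the Roadmap.

```python
import random

rng = random.Random(3)
RESPONSE_THRESHOLD = 10  # responder = delivered the EBP to at least 10 patients

clinicians = [f"clinician_{i}" for i in range(1, 21)]
records = []

for c in clinicians:
    # Stage 1: randomize everyone to Strategy A or Strategy B.
    first_line = rng.choice(["A", "B"])
    # Hypothetical six-month uptake count for this clinician.
    patients_reached = rng.randint(0, 20)
    responder = patients_reached >= RESPONSE_THRESHOLD

    if responder:
        second_line = f"continue {first_line}"  # responders are not re-randomized
    else:
        # Stage 2: re-randomize non-responders to continue or augment.
        other = "B" if first_line == "A" else "A"
        second_line = rng.choice([f"continue {first_line}",
                                  f"augment {first_line} with {other}"])
    records.append((c, first_line, patients_reached, responder, second_line))

for rec in records[:5]:
    print(rec)
```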

Suggested Readings

Almirall D, Compton SN, Gunlicks-Stoessel M, Duan N, Murphy SA. Designing a pilot sequential multiple assignment randomized trial for developing an adaptive treatment strategy. Stat Med. 2012;31(17):1887–1902. [PMC free article : PMC3399974 ] [PubMed : 22438190 ]

Lei H, Nahum-Shani I, Lynch K, Oslin D, Murphy SA. A “SMART” design for building individualized treatment sequences. Annu Rev Clin Psychol. 2012;8:21–48. [PMC free article : PMC3887122 ] [PubMed : 22224838 ]

Nahum-Shani I, Qian M, Almirall D, et al. Experimental design and primary data analysis methods for comparing adaptive interventions. Psychol Methods. 2012;17(4):457–477. [PMC free article : PMC3825557 ] [PubMed : 23025433 ]

Collins LM, Nahum-Shani I, Almirall D. Optimization of behavioral dynamic treatment regimens based on the sequential, multiple assignment, randomized trial (SMART). Clin Trials. 2014;11(4):426–434. [PMC free article : PMC4257903 ] [PubMed : 24902922 ]

Kilbourne AM, Almirall D, Eisenberg D, et al. Protocol: Adaptive Implementation of Effective Programs Trial (ADEPT): cluster randomized SMART trial comparing a standard versus enhanced implementation strategy to improve outcomes of a mood disorders program. Implement Sci. 2014;9:132. [PMC free article : PMC4189548 ] [PubMed : 25267385 ]

NeCamp T, Kilbourne A, Almirall D. Comparing cluster-level dynamic treatment regimens using sequential, multiple assignment, randomized trials: Regression estimation and sample size considerations. Stat Methods Med Res. 2017;26(4):1572–1589. [PMC free article : PMC5802435 ] [PubMed : 28627310 ]

Quasi-experimental Designs

Quasi-experimental designs test causal hypotheses just as experimental designs do, but they lack random assignment of clusters to intervention arms. Instead, quasi-experimental designs seek to identify a comparison group or time period that is as similar as possible to the enhanced implementation treatment group in terms of baseline (pre-intervention) characteristics.

Non-equivalent control group design
Stepped wedge design (without randomization)
Interrupted time series
Regression discontinuity design

Pre-post with non-equivalent control group

In this type of evaluation, as shown in Figure 25, comparisons are made between health care units receiving an implementation strategy and those not receiving the strategy, which serve as a control or implementation-as-usual group. This design is well suited for testing a quality improvement innovation or implementation strategy with a smaller number of sites because it is easier to enact without the cost and complexity of designs requiring more frequent data collection over time.

Analyses in such designs aim to determine the difference between groups in the amount of change in the clinical and implementation outcomes of interest from the pre-intervention to the post-intervention period. It is important for study and control sites to be comparable, and a number of statistical methods can help control for confounding variables and secular trends that may occur at the health care unit level of analysis as well as among the patients nested within these units. Similarly, the pre- and post-implementation intervention periods should be the same across implementation intervention and comparison control sites.

Figure 25

Example of a Pre-Post Design with Non-Equivalent Control Group.

Stepped Wedge Design

As introduced above, a stepped-wedge evaluation is a type of rollout design in which sites are systematically assigned to an implementation strategy at specific time points by the evaluation team (Figure 26). Like its randomized cousin, this design allows all sites to receive the implementation strategy used to deploy an effective practice. Repeated measurement at each phase of assigning sites/clusters to the implementation strategy condition enables each site to serve as its own control (within-site comparisons) while also enabling between-site comparisons.

This design can be useful to employ when random assignment of sites/clusters may not be feasible due to resource constraints or readiness of sites to employ an implementation strategy. As in a randomized stepped wedge design, evaluators are strongly encouraged to ensure that more than one cluster be assigned to the treatment condition at each time period. Otherwise, it can be difficult to separate out time and site characteristics’ impact on the outcome. This is a relevant concern when there is a possibility that treatment effects vary with time.

Figure 26

Stepped Wedge Design.

Interrupted Time Series Design

Interrupted time series evaluations are a type of quasi-experimental design useful for determining whether a new clinical practice or implementation strategy was effective, based on time trend analyses before and after the implementation of a change. This design is often used when it is not possible to randomly assign units, such as when implementing a new health policy, regulation, or clinical service at a specific point in time.

An interrupted time series design requires data collection at multiple periods of time prior to, during, and after the introduction of an implementation initiative, which may not be feasible or affordable for some evaluation budgets. When circumstances permit adequate collection of data over time, such as data tracked in electronic patient care databases, analysts can perform relatively simple pre-post comparisons and can adjust these comparisons for potential secular trends in the data before and after the introduction of the intervention (Figure 27). When adequate data are available, evaluators can determine how long it takes for an intervention to result in measurable improvements in care as well as the impact on the consumer population targeted for care by the policy or practice. This design can be enhanced by including a comparison group that does not experience the implementation of a change (a comparative or controlled time series analysis).

Figure 27

Illustration of a Generic Interrupted Time Series Design.
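Segmented regression is a common way to carry out the adjusted pre-post comparison described above: it models the pre-intervention level and trend plus a change in level and slope at the interruption. The monthly data, interruption point, and effect sizes below are simulated assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# 24 monthly observations; the practice change happens at month 13.
months = np.arange(1, 25)
post = (months >= 13).astype(int)
time_since = np.where(post == 1, months - 12, 0)

# Assumed data-generating process: mild secular trend, +8 level jump, +0.5 slope change.
outcome = 50 + 0.3 * months + 8 * post + 0.5 * time_since + rng.normal(0, 2, months.size)
df = pd.DataFrame({"month": months, "post": post, "time_since": time_since, "y": outcome})

# Level change (post) and slope change (time_since) net of the secular trend (month).
fit = smf.ols("y ~ month + post + time_since", data=df).fit()
print(fit.params.round(2))
```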

Suggested Readings

Coly A, Parry G. Evaluating Complex Health Interventions: A Guide to Rigorous Research Designs. Washington, D.C.: AcademyHealth; June 2017. Available at: https://www ​.academyhealth ​.org/evaluationguide

Cruz M, Bender M, Ombao H. A robust interrupted time series model for analyzing complex health care intervention data. Stat Med. 2017;36(29):4660–76. [PubMed : 28850683 ]

Cruz M, Gillen DL, Bender M, Ombao H. Assessing health care interventions via an interrupted time series model: Study power and design considerations. Stat Med. 2019;38(10):1734–52. [PMC free article : PMC7959401 ] [PubMed : 30616298 ]

Linden A. A matching framework to improve causal inference in interrupted time-series analysis. J Eval Clin Pract. 2018;24(2):408–15. [PubMed : 29266646 ]

Regression Discontinuity

Regression-discontinuity designs are a type of quasi-experimental design used to compare the pre-post impact of implementation interventions. In these evaluations, the implementation strategy is assigned to an intervention or exposed group/unit based on its need for implementation, as determined by a cutoff score on a predetermined assignment variable or qualifying condition. The assignment variable is assessed before the implementation intervention on a continuous measure such as illness severity or patient need for a service. All patients scoring below the pre-specified cutoff score are assigned to one group (e.g., the intervention group), and all patients scoring above it are assigned to the other condition.

In a hypothetical example by Trochim, the blue Xs to the left of the cutoff show the cases for lower-performing facilities on a measure of quality at the pre- and post-evaluation. The green circles show the comparison group, which is comparatively higher performing at both assessments. The vertical line at the pretest score of 50 indicates the cutoff point, with no implementation treatment given. The solid line through the bivariate distribution is the linear regression line. The distribution depicts a strong positive relationship between the pre- and post-assessment scores: the higher a facility scores at pre-test, the higher it is likely to score at the later post-test assessment. Similarly, the lower a site scores on quality at the pre-intervention assessment, the lower it is likely to score on the quality indicator post-intervention.

Figure 28

Example of Pre-Post Distribution with No Treatment Effect.

Evaluators use regression analyses to compare the relationship between the assignment variable and outcome of interest (e.g., health, care utilization) for both groups prior to and after the intervention to infer an intervention effect.

Figure 29 is the same as the prior figure except that all points to the left of the cutoff (i.e., the treatment group) have been raised by 10 points at the post-intervention assessment. The dashed line in Figure 29 shows what we would expect the treated group’s regression line to look like if the program had no effect, as shown in Figure 28. An implementation treatment effect is suggested when there is a “jump” or discontinuity in the regression lines at the cutoff point, as shown by the orange line in Figure 29.

Figure 29

Regression-Discontinuity Design with Ten-point Treatment Effect.
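The sketch below mirrors Trochim’s example: facilities scoring below the pretest cutoff of 50 receive the treatment, a 10-point effect is added at posttest, and the discontinuity is estimated by regressing the posttest score on the centered assignment variable and a treatment indicator. The number of facilities, the regression slope, and the noise level are assumptions made for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
CUTOFF = 50

pretest = rng.uniform(20, 80, size=400)
treated = (pretest < CUTOFF).astype(int)           # lower performers get the program
posttest = 10 + 0.9 * pretest + 10 * treated + rng.normal(0, 5, size=400)

df = pd.DataFrame({
    "pre_centered": pretest - CUTOFF,              # center the assignment variable at the cutoff
    "treated": treated,
    "post": posttest,
})

# The coefficient on `treated` estimates the jump (discontinuity) at the cutoff.
fit = smf.ols("post ~ pre_centered + treated", data=df).fit()
print(f"Estimated treatment effect at the cutoff: {fit.params['treated']:.1f}")
```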

The following example from Trochim illustrates the use of a regression discontinuity design for quality improvement purposes. In this case, a hospital administrator would like to improve the quality of consumer care through the institution of an intensive quality of care training program for staff. Because of financial constraints, the program is too costly to implement for all employees and so instead it will be administered to the entire staff from specifically targeted units or wards which seem most in need of improving quality of care.

Two general measures of quality of care are available. The first is an aggregate rating of quality of care (QOC) based on observation and rating by an administrative staff member and will be labeled here the QOC rating. The second is the ratio of the number of recorded patient complaints relative to the number of patients in the unit over a set period of time and will be defined here as the Complaint Ratio. In this case example, the administrator could use either the QOC rating or Complaint Ratio as the basis for assigning units to receive the training. Similarly, the effects of the training could be measured on either variable.

Figure 30

Regression Discontinuity Design for the Purpose of Quality Improvement.

In these examples, only the regression lines are shown in the figure, and hospital units were assigned to training because they scored below some cutoff score on the QOC rating. The first figure depicts a positive implementation treatment effect because training raised the program group’s regression line on the QOC rating above what would have been expected. However, the second figure illustrates a negative effect because the program raised training group scores on the Complaint Ratio, indicating increased complaint rates. In either case, a discontinuity in the regression lines indicates a program effect in the regression discontinuity design.

This design is less efficient than many experimental designs and requires more cases to reach statistical power than a randomized trial. Accordingly, evaluators should avoid using this design in situations where the unit of assignment is a larger organizational unit (e.g., a health system or network).

Suggested Readings

Bor J, Moscoe E, Mutevedzi P, Newell ML, Barnighausen T. Regression discontinuity designs in epidemiology: causal inference without randomized trials. Epidemiology. 2014;25(5):729–37. [PMC free article : PMC4162343 ] [PubMed : 25061922 ]

Coly A, Parry G. Evaluating Complex Health Interventions: A Guide to Rigorous Research Designs. Washington, D.C.: AcademyHealth; June 2017. Available at: https://www.academyhealth.org/evaluationguide

Moscoe E, Bor J, Barnighausen T. Regression discontinuity designs are underutilized in medicine, epidemiology, and public health: a review of current and best practice. J Clin Epidemiol. 2015;68(2):122–33. [PubMed : 25579639 ]

Trochim WMK, hosted by Conjoint.ly. Research Methods Designs. Regression Discontinuity Design. June 2020. Available at: https://conjointly.com/kb/regression-discontinuity-design/

University of Washington Implementation Science Resource Hub. Select Study Design. Quasi-experimental Designs – regression discontinuity. Available at: https://impsciuw.org/implementation-science/research/designing-is-research/

Venkataramani AS, Bor J, Jena AB. Regression discontinuity designs in healthcare research. BMJ. 2016;352:i1216. [PMC free article : PMC6884311 ] [PubMed : 26977086 ]

Walkey AJ, Drainoni ML, Cordella N, Bor J. Advancing Quality Improvement with Regression Discontinuity Designs. Ann Am Thorac Soc. 2018;15(5):523–9. [PMC free article : PMC5955054 ] [PubMed : 29466673 ]

Observational Designs

Observational designs are useful for studying the effectiveness of implementation and quality improvement efforts as they occur naturally over time, when it is not feasible to experimentally manipulate treatment conditions or to randomly assign groups or sites to different implementation interventions. These study designs are, however, subject to selection bias and confounding variables.

Cohort study design
Cross-sectional design
Case-control design

Confounding due to selection bias is a concern in observational data due to absence of randomization to treatment – patient characteristics may be associated with both treatment and outcomes. Pre-processing is a set of strategies that can be used to account for observable selection bias and includes exact matching, coarsened exact matching, propensity scores, and entropy balancing. In all of these strategies, the goal is to make the treatment and comparison groups as similar as possible on observed characteristics, other than the receipt of the treatment. Strategies differ in the extent to which matching is conducted on specific variables, moments (e.g., mean, variance) of the distributions of variables, or values of functions of multiple covariates. For more information, see Suggested Readings at the end of this section.
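Propensity score methods are one of the pre-processing strategies named above. The sketch below estimates propensity scores with logistic regression and applies inverse probability of treatment weights to simulated data; the single covariate (severity), effect sizes, and sample size are assumptions for illustration, not part of the Roadmap.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 2000

# Simulated observational data: sicker patients are more likely to be treated.
severity = rng.normal(0, 1, n)
treated = rng.binomial(1, 1 / (1 + np.exp(-0.8 * severity)), n)
outcome = 2.0 * treated + 1.5 * severity + rng.normal(0, 1, n)  # true effect = 2.0

df = pd.DataFrame({"severity": severity, "treated": treated, "outcome": outcome})

# Propensity score: probability of treatment given the observed covariate.
ps = LogisticRegression().fit(df[["severity"]], df["treated"]).predict_proba(df[["severity"]])[:, 1]
weights = np.where(df["treated"] == 1, 1 / ps, 1 / (1 - ps))  # inverse probability weights

mask_t = df["treated"].to_numpy() == 1
naive = df.loc[mask_t, "outcome"].mean() - df.loc[~mask_t, "outcome"].mean()
weighted = (np.average(df.loc[mask_t, "outcome"], weights=weights[mask_t])
            - np.average(df.loc[~mask_t, "outcome"], weights=weights[~mask_t]))
print(f"Naive difference: {naive:.2f}; IPTW-adjusted difference: {weighted:.2f} (true effect 2.0)")
```

The unadjusted difference is biased upward because treated patients are sicker and severity also raises the outcome; weighting by the inverse propensity recovers an estimate close to the true effect.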

Strategies to balance unobserved and observed characteristics across treatment and comparison groups include regression discontinuity (described above), difference-in-differences, and instrumental variables. In difference-in-differences analyses, the treatment effect is estimated by observing the difference in changes over time in an outcome over groups that do and do not receive an intervention. In instrumental variable analyses, one identifies a variable (the instrument) that is associated with likelihood of receipt of treatment but not the outcome. This method allows for estimation of a treatment effect among individuals whose treatment receipt depends on the value of the instrument.
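For the difference-in-differences approach just described, here is a minimal sketch using invented group-level means: the treatment effect is estimated as the intervention group’s pre-post change minus the comparison group’s pre-post change.

```python
import pandas as pd

# Hypothetical mean outcome (e.g., % of patients receiving an effective practice).
data = pd.DataFrame({
    "group":  ["intervention", "intervention", "comparison", "comparison"],
    "period": ["pre", "post", "pre", "post"],
    "mean_outcome": [42.0, 58.0, 40.0, 46.0],
})

pivot = data.pivot(index="group", columns="period", values="mean_outcome")
change = pivot["post"] - pivot["pre"]
did = change["intervention"] - change["comparison"]
print(f"Change, intervention: {change['intervention']:.1f}; comparison: {change['comparison']:.1f}")
print(f"Difference-in-differences estimate: {did:.1f} percentage points")
```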

Suggested Readings

Daw JR, Hatfield LA. Matching and regression to the mean in difference-in-differences analysis. Health Serv Res. 2018;53:4138–4156. Available at: https://onlinelibrary.wiley.com/doi/abs/10.1111/1475-6773.12993 [PMC free article : PMC6232412 ] [PubMed : 29957834 ]

Stuart EA. Matching methods for causal inference: a review and a look forward. Stat Sci. 2010;25(1):1–21. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2943670/ [PMC free article : PMC2943670 ] [PubMed : 20871802 ]

Garrido MM, Kelley AS, Paris J, Roza K, Meier DE, Morrison RS, Aldridge MD. Methods for constructing and assessing propensity scores. Health Serv Res. 2014;49(5):1701–1720. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4213057/ [PMC free article : PMC4213057 ] [PubMed : 24779867 ]

Pizer SD. An intuitive review of methods for observational studies of comparative effectiveness. Health Serv Outcomes Res Method. 2009;9:54–68. Available at: https://link.springer.com/article/10.1007/s10742-009-0045-3

Pizer SD. Falsification testing of instrumental variables methods for comparative effectiveness research. Health Serv Res. 2016;51(2):790–811. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4799892/ [PMC free article : PMC4799892 ] [PubMed : 26293167 ]

Other Implementation Resources

AGENCY FOR HEALTH RESEARCH AND QUALITY: ADVANCES IN PATIENT SAFETY: FROM RESEARCH TO DISSEMINATION – PLANNING TOOL TO GUIDE RESEARCH DISSEMINATION

This Dissemination and Implementation toolkit was developed to help investigators evaluate their research and develop effective and appropriate dissemination plans. With a focus on assessing “real-world” impact, this tool helps patient-safety investigators go beyond publishing and presenting and into the workforce. https://www.ahrq.gov/patient-safety/resources/advances/vol4/planning.html

AGENCY FOR HEALTH RESEARCH AND QUALITY’S EVIDENCE-BASED PRACTICE CENTERS

In 1997, the Evidence-based Practice Center Program was launched to promote evidence-based practice in everyday care. Evidence-based Practice Centers develop evidence reports and technology assessments on topics relevant to clinical and other health care organization and delivery issues. https://www.ahrq.gov/research/findings/evidence-based-reports/index.html

AGENCY FOR HEALTH RESEARCH AND QUALITY’S PRACTICE FACILITATION HANDBOOK

A comprehensive, 21-module handbook designed to help train new practice facilitators in the knowledge and skills needed to support meaningful improvement in primary care practices, particularly related to implementation of the chronic care or consumer-centered medical home models. https://www.ahrq.gov/ncepcr/tools/pf-handbook/index.html

AMERICAN PSYCHOLOGICAL ASSOCIATION DIVISION 12 AND SOCIETY OF CLINICAL PSYCHOLOGY

An informational website designed as an integrated resource for clinical psychologists interested in learning more about, and accessing resources related to, dissemination and implementation. This website was developed by the Society of Clinical Psychology Dissemination and Implementation Task Force to promote the involvement of clinical psychology, and the Society of Clinical Psychology specifically, in the important and developing field of implementation science. http://www.div12.org/implementation/

CANCER PREVENTION AND CONTROL RESEARCH NETWORK – PUTTING PUBLIC HEALTH EVIDENCE IN ACTION TRAINING

An interactive training curriculum to teach community program planners and health educators to use evidence-based approaches, including how to adapt programs. http://cpcrn.org/pub/evidence-in-action/#

CANCER CONTROL P.L.A.N.E.T. (PLAN, LINK, ACT, NETWORK WITH EVIDENCE-BASED TOOLS)

A portal that provides access to data and resources that can help planners, program staff, and investigators design, implement, and evaluate evidence-based cancer control programs. https://cancercontrolplanet.cancer.gov/planet

CENTERS FOR DISEASE CONTROL AND PREVENTION INTRODUCTION TO PROGRAM EVALUATION FOR PUBLIC HEALTH PROGRAMS: A SELF-STUDY GUIDE

This document is a “how to” guide for planning and implementing evaluation activities. The manual, based on the Centers for Disease Control and Prevention’s Framework for Program Evaluation in Public Health, assists managers and staff of public, private, and community health programs to plan, design, implement, and use comprehensive evaluations in a practical way. https://www.cdc.gov/eval/guide/index.htm

CENTERS FOR DISEASE CONTROL & PREVENTION POLARIS ECONOMIC EVALUATION OVERVIEW

This website provides an overview of how economic evaluation can support efforts to identify, measure, value, and compare costs and consequences of different health intervention strategies and policies. https://www.cdc.gov/policy/polaris/economics/index.html

COCHRANE

Cochrane’s mission is to promote evidence-informed, health decision-making by producing high-quality, relevant, accessible systematic reviews and other synthesized research evidence. Their work is internationally recognized as the benchmark for high-quality information about the effectiveness of health care. https://www.cochranelibrary.com

THE COMMUNITY GUIDE

A searchable collection of evidence-based findings of the Community Preventive Services Task Force. It is a resource to help select interventions that improve health and prevent disease in a state, community, community organization, business, health care organization, or school. www.thecommunityguide.org

THE COMMUNITY TOOLBOX

A free, online resource supported by the University of Kansas for those working to build healthier communities and bring about social change by offering access to over 300 educational modules and tools on the topics of community assessment, planning, intervention, evaluation, advocacy, and other aspects of community practice. http://ctb.ku.edu/en

CONSOLIDATED FRAMEWORK FOR IMPLEMENTATION RESEARCH – TECHNICAL ASSISTANCE WEBSITE

A site created for individuals considering the use of this framework to evaluate an implementation or design an implementation study. http://cfirguide.org

CENTER FOR RESEARCH IN IMPLEMENTATION SCIENCE AND PREVENTION (CRISP) AT UC DENVER: DISSEMINATION AND IMPLEMENTATION IN HEALTH TRAINING GUIDE AND WORKBOOK

This 2013 Dissemination and Implementation navigation guide, created by the University of Colorado Denver CRISP, seeks to enable and equip investigators to close the gap between research and practice in the health field. It focuses on five main topics: 1) why dissemination and implementation is important; 2) definitions, theories, and concepts; 3) strategies and tools for designing successful dissemination and implementation interventions; 4) recommendations for evaluation design and measurement; and 5) tips for success for investigators and clinicians. http://www.ucdenver.edu/academics/colleges/medicalschool/programs/crisp/training/Documents/Workshops/CRISP%20Dissemination%20and%20Implementation%20in%20Health%20Workbook.pdf

DISSEMINATION AND IMPLEMENTATION MODELS IN HEALTH RESEARCH AND PRACTICE

An interactive database to help investigators and clinicians select, adapt, and integrate the dissemination and implementation model that best fits their research question or practice problem. http://dissemination-implementation.org

QUERI IMPLEMENTATION FACILITATION TRAINING MANUAL VERSION 2

This manual provides information and resources to individuals who want to understand the process of implementation facilitation, a multi-faceted process of enabling and supporting individuals, groups, and organizations in their efforts to adopt and incorporate effective practices into routine practices. The manual seeks to develop the skills needed to help organizations implement effective practices using external and/or internal facilitation. https://www.queri.research.va.gov/tools/implementation.cfm

IMPLEMENTATION SCIENCE AT A GLANCE: A GUIDE FOR CANCER CONTROL PRACTITIONERS

An introductory guide to implementation science that provides a succinct overview of the rapidly evolving field of dissemination and implementation science. This 30-page resource is available in three electronic formats. Through summaries of key theories, methods, and models, the guide shows how greater use of implementation science can support the adoption of effective practices. Case studies illustrate how practitioners are successfully applying implementation science in their cancer control programs. https://cancercontrol.cancer.gov/IS/tools/practice.html

INSTITUTE FOR HEALTHCARE IMPROVEMENT

An influential, private institute that has been dedicated to redesigning health care into a system without errors, waste, delay, and unsustainable costs for nearly three decades. The Institute for Healthcare Improvement’s website offers a variety of free and fee-based resources, tools, white papers, and links related to improvement and implementation science. The institute has helped develop a number of effective practices in collaboration with the VA, including rapid-cycle testing (the Model for Improvement), Breakthrough Series collaboratives, the Triple Aim, intervention “bundles,” and a framework for spread and scale, to name a few. http://www.ihi.org

KNOWLEDGE TRANSLATION CANADA

An online resource that seeks to improve how research results are communicated; to develop a consensus on knowledge translation terminology and methods for measuring success; to evaluate various knowledge translation approaches; and to find ways to ensure that knowledge translation efforts have a lasting impact across the continuum of care by engaging health professionals, community members, and various health decision-making groups. http://ktcanada.net/

NATIONAL CANCER INSTITUTE - IMPLEMENTATION SCIENCE

A website featuring resources, tools, and links related to implementation science training and education, research and practice tools, research funding opportunities, and research initiatives. http://cancercontrol.cancer.gov/is/

NATIONAL CANCER INSTITUTE, QUALITATIVE RESEARCH IN IMPLEMENTATION SCIENCE (QualRIS GROUP): QUALITATIVE METHODS IN IMPLEMENTATION SCIENCE WHITEPAPER

This paper focuses on the multiple ways in which qualitative methods can be effectively used to answer a range of high-priority implementation science questions and describes resources that are available to support the community. https://cancercontrol.cancer.gov/sites/default/files/2020-04/NCI-DCCPS-ImplementationScience-WhitePaper.pdf

NIH EVIDENCE-BASED PRACTICE AND PROGRAMS

A collection of several databases and other resources with information on evidence-based disease prevention services, programs, and practices with the potential to impact public health. https://prevention.nih.gov/research-priorities/dissemination-implementation/evidence-based-practices-programs

NIH OFFICE OF BEHAVIORAL AND SOCIAL SCIENCE RESEARCH (OBSSR) BEST PRACTICES FOR MIXED METHODS RESEARCH

A resource developed to provide guidance to implementation researchers on how to rigorously develop and evaluate mixed methods research applications using best practices. https://obssr.od.nih.gov/training/online-training-resources/mixed-methods-research/

PATIENT-REPORTED OUTCOMES MEASUREMENT INFORMATION SYSTEM (PROMIS®)

PROMIS® (Patient-Reported Outcomes Measurement Information System) is a set of person-centered measures that evaluates and monitors physical, mental, and social health in adults and children. It can be used with the general population and with individuals living with chronic conditions. http://www.healthmeasures.net/explore-measurement-systems/promis

PROGRAM SUSTAINABILITY ASSESSMENT TOOL (PSAT)

A 40-question self-assessment that program staff and stakeholders can take to evaluate the sustainability capacity of a program. Use the results to help with sustainability planning. https://sustaintool.org

REACH, EFFECTIVENESS, ADOPTION, IMPLEMENTATION, AND MAINTENANCE (RE-AIM) FRAMEWORK

Resources and tools for those wanting to apply the RE-AIM framework. Includes planning tools, calculation tools, measures, checklists, visual displays, figures, an online RE-AIM module, and more. http://www.re-aim.org

RESEARCH-TESTED INTERVENTION PROGRAMS

A searchable database with evidence-based cancer control interventions and programs specifically for program planners and public health practitioners. https://rtips.cancer.gov/rtips/index.do

THE F.A.S.T. LAB – THE FIDELITY, ADAPTATION, SUSTAINABILITY AND TRAINING LAB

The overarching goal of the F.A.S.T. Lab is to determine how to facilitate the high-quality delivery of effective psychosocial practices in public sector mental health settings. Areas of emphasis include training and consultation, treatment fidelity and adaptation, and the identification of strategies that promote sustained implementation of effective practices. http://med.stanford.edu/fastlab/research/adaptation.html

THE GRID-ENABLED MEASURES DATABASE

Grid-Enabled Measures is a web-based collaborative tool containing behavioral, social science, and other relevant science measures organized by theoretical constructs. Grid-Enabled Measures enables investigators to collaborate with others, encourages the use of common measures, and facilitates the sharing of harmonized data. https://cancercontrol.cancer.gov/brp/research/gem.html

THE UNIVERSITY OF WASHINGTON IMPLEMENTATION SCIENCE RESOURCE HUB

The Implementation Science Resource Hub is a resource of the University of Washington Department of Global Health’s Implementation Science Program that helps further ongoing implementation research and education, provides an introduction to the field of implementation science for students and new implementation researchers, and curates selections of supporting resources for further study. https://impsciuw.org/

UNITED STATES PREVENTIVE SERVICES TASK FORCE RECOMMENDATIONS

The United States Preventive Services Task Force is an independent, volunteer panel of national experts in disease prevention and evidence-based medicine. The Task Force works to improve the health of all Americans by making evidence-based recommendations about clinical preventive services. https://www.uspreventiveservicestaskforce.org/Page/Name/recommendations

VA HEALTH SERVICES RESEARCH & DEVELOPMENT AND QUALITY ENHANCEMENT RESEARCH INITIATIVE CYBER SEMINARS

A central hub for online presentations on implementation science, evaluation, and health services research in health care. Presentations occur daily and are hosted by the VA’s Center for Information Dissemination and Education Resources (CIDER). Both VA and non-VA users can subscribe for announcements of upcoming topics; users must register for each seminar. Both the Health Services Research and Development Service and QUERI cyber-seminar pages feature archives of presentations going back to 2016 that include slide PDF files and audio files of the recorded presentations. https://www.hsrd.research.va.gov/cyberseminars/default.cfm

WASHINGTON UNIVERSITY IN ST. LOUIS DISSEMINATION AND IMPLEMENTATION HOME PAGE

The one-stop shop for dissemination and implementation activities at Washington University in St. Louis. This site provides current information about training and consultation in dissemination and implementation at Washington University in St. Louis and links to the many centers and projects engaged in dissemination and implementation research and consultation. https://sites.wustl.edu/wudandi/

BOOKS

Albers B, Shlonsky A, Mildon R, eds. Implementation Science 3.0. 1st ed. Cham, Switzerland: Springer International Publishing; 2020.

Brownson RC, Colditz GA, Proctor EK, eds. Dissemination and Implementation Research in Health: Translating Science to Practice. 2nd ed. New York, NY: Oxford University Press;2018.

Chambers DA, Vinson CA, Norton WE, eds. Advancing the Science of Implementation across the Cancer Continuum. New York, NY: Oxford University Press;2019.

Grol R, Wensing M, Eccles M, Davis D, eds. Improving Patient Care: The Implementation of Change in Health Care. 2nd ed. Hoboken, NJ: John Wiley & Sons, Ltd.;2013.

Langley GJ, Moen RD, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. 2nd ed. San Francisco, CA: Jossey-Bass;2009.

Nilsen P, Birken SA, eds. Handbook of Implementation Science. Cheltenham, UK. Edward Elgar Publishing Ltd.;2020.

This publication is in the public domain and is therefore without copyright. All text from this work may be reprinted freely. Use of these materials should be acknowledged.