
Editorial
Implementation science: Relevance in the real world without sacrificing rigor
Elvin H. Geng, David Peiris, Margaret E. Kruk
Published: 25 Apr 2017 | PLOS Medicine
https://doi.org/10.1371/journal.pmed.1002288
The need for implementation science in health is now broadly recognized, and a working understanding of the qualities that make an implementation study “good” is needed more than ever before. As defined by Eccles and Mittman, implementation research “is the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice, and, hence, to improve the quality and effectiveness of health services. It includes the study of influences on healthcare professional and organizational behavior” [1]. The scope of implementation science is broad, ranging from observational studies seeking to characterize and understand evidence-practice gaps, to proof-of-concept studies of efficacy, to large-scale implementation and effectiveness trials of complex interventions. Certainly, if findings in this field are not internally valid (i.e., wrong within the source population), they won’t be of use to anyone. But even if findings are internally valid, to be of value, they must be applicable and useful for implementers (e.g., governments, organizations, health care workers, and communities) in diverse real-world contexts. What kinds of findings in implementation science are most useful? Must a trade-off exist between rigor and relevance? If so, what is the right balance between rigor and applicability in a variety of contexts?
The tension between rigor and relevance across contexts is at the center of two conversations in implementation research. One conversation is among investigators who are immersed in the traditional scientific principles of rigorous human subjects research (e.g., sampling, measurement, and confounding) and who must sometimes be persuaded of the importance of usability, applicability, and, therefore, relevance across varied real-world practice contexts. The second conversation is among implementers and evaluators embedded in real-world programs, settings, and populations. Some from this group must be persuaded that rigorous evaluation is needed and that scientific fundamentals, with accompanying effort and planning, are requisite when implementation research is the goal.…