Educational Research: What It Is + How to Do It


Education is a pillar of modern society: it provides the tools to develop critical thinking, decision-making, and social skills, and it equips individuals with the research skills they need to secure jobs or become entrepreneurs in new technologies. This is where educational research plays an important role in the overall improvement of the education system (pedagogy, learning programs, investigation, and so on).

Educational research spans multiple fields of knowledge, scoping out the different research problems of the learning system and offering a variety of perspectives for solving them and improving education in general. Educators need ways to filter through the noise of information to find the best practices for doing their jobs well and serving their students better. This is why educational research that follows the scientific method and produces new ideas and new knowledge is essential. Classroom response systems, for example, let students answer multiple-choice questions and engage in real-time discussion instantly.

What is educational research?

Educational research is the systematic collection and analysis of information on education methods in order to explain them better. It should be viewed as a critical, reflexive, and professional activity that adopts rigorous methods to gather data, analyze it, and solve educational challenges, helping to advance knowledge.

Educational research typically begins with identifying a problem or an academic issue. The researcher then gathers the relevant data and analyzes it to interpret the findings. The process ends with a report that presents the results in an understandable form, which both the researcher and the wider educational community can use.

Why is educational research important?

The primary purpose of educational research is to improve the existing body of knowledge about pedagogy and the educational system as a whole. Improving learning practices and developing new ways of teaching can be achieved more efficiently when information is shared by the entire community rather than guarded by a single institution. Simply put, the three main reasons to conduct educational research are:

  • To explore issues. Undertaking research leads to answers to specific questions that can help students, teachers, and administrators. Why is student experience design important in new university models? What is the impact of education on new generations? Why does language matter when drafting a survey for a Ph.D.?
  • To shape policy. This type of educational research collects information to support well-founded judgments that can inform societies or institutions and improve the governance of education.
  • To improve quality. Trying to do something better than it is done now is a common reason to conduct educational research. What if we could improve the quality of education by adopting new processes? What if we could achieve the same outcomes with fewer resources? Such questions are common in the educational system, but to adapt, institutions need a baseline of information, which can be obtained by conducting educational research.

Educational Research Methods

Educational research methods are the tools used to carry out a study and test its hypothesis.

Interview

An interview is a qualitative research technique that allows the researcher to gather data from a subject using open-ended questions. The most important aspect of an interview is how it is conducted; typically, it is a one-on-one conversation that focuses on the substance of what is asked.

Focus Group

Focus groups are another strong example of a qualitative approach to gathering information in education. The main difference from an interview is that the group is composed of 6–10 people purposely selected to understand the perceptions of a social group. Rather than trying to describe a larger population in the form of statistics, the focus group is guided by a moderator who keeps the conversation on topic, so all the participants contribute to the research.

Observation

Observation is a data collection method that places the researcher in the natural setting where the participants or the phenomenon of interest can be found. This lets the researcher see what is happening in real time and removes some of the bias that interviews or focus groups can introduce when a moderator intervenes with the subjects.

Survey

A survey is a research method used to collect data from a defined population to gain information on a subject of interest. Surveys can be administered at almost any time and typically take little time to complete, depending on the research. Another benefit of a survey is its quantitative approach, which makes the results easier to present comprehensively.
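
The quantitative nature of survey data also makes it easy to summarize. As a minimal sketch, assuming entirely hypothetical responses to a single 5-point Likert item, the Python snippet below tallies the answers and reports their distribution and mean.

```python
from collections import Counter

# Hypothetical responses to one survey item on a 5-point Likert scale
# (1 = strongly disagree ... 5 = strongly agree).
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4, 1, 5, 4, 4, 3]

counts = Counter(responses)
n = len(responses)

print("Response distribution:")
for value in sorted(counts):
    share = counts[value] / n * 100
    print(f"  {value}: {counts[value]:2d} responses ({share:.0f}%)")

print(f"Mean score: {sum(responses) / n:.2f} on a 5-point scale")
```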

How to do educational research

Like any other type of research, educational research involves steps that must be followed to make the information gathered from it valuable and usable. 

  • Identifying the problem. The first step in the process is to identify the problem or formulate a research question.
  • Formulating objectives and hypotheses. Research objectives are the goals the research is intended to achieve; they must be made explicit at the start and must relate to the problem. The hypothesis is a testable statement of the expected answer; it helps the researcher decide which research method to use and which data to collect.
  • Deciding the research method. There are plenty of research methods, and the best one for each case depends on the objectives and hypothesis established in the previous step.
  • Collecting the data. The research method determines how the data will be collected, whether through an interview, a focus group, or a survey.
  • Analyzing and interpreting the data. This means arranging and organizing the collected data and making the necessary calculations (a small sketch follows this list). A clear interpretation of the data is essential so that everyone, not only the researcher, can understand it.
  • Writing a report. After analyzing and interpreting the data, the researcher draws a conclusion that can be shared with everyone. This is done through a report or a thesis that includes all the information related to the research, with a detailed summary of the work and findings from the research process.
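
As a minimal illustration of the hypothesis and analysis steps, the sketch below compares hypothetical exam scores from two groups of students with an independent-samples t-test. The data, group labels, and the use of SciPy are assumptions for illustration, not part of any particular study.

```python
from statistics import mean

from scipy import stats  # SciPy provides the independent-samples t-test

# Hypothetical exam scores for two groups of students:
# one taught with the current method, one with a new method.
current_method = [68, 72, 75, 70, 66, 74, 71, 69, 73, 70]
new_method = [74, 78, 72, 80, 76, 75, 79, 73, 77, 76]

# Hypothesis: the new method changes mean exam scores.
t_stat, p_value = stats.ttest_ind(new_method, current_method)

print(f"Mean score, current method: {mean(current_method):.1f}")
print(f"Mean score, new method:     {mean(new_method):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A small p-value (conventionally below 0.05) suggests the difference is
# unlikely to be due to chance alone; interpretation still depends on the
# design, the sample, and the context of the study.
```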

Educational research is crucial for improving the education system: better teaching and learning depend on the information available in the field. Statements without research evidence are nothing but opinions, so gathering and distributing information is fundamental to improving the educational system we have; it provides explanations for the big questions and a bigger picture for future generations.

As stated before, educational research is crucial for improving the education system. At QuestionPro, we believe in providing academic researchers with the best tools to keep creating valuable knowledge.

Introduction to Education Research

  • First Online: 29 November 2023

Sharon K. Park, Khanh-Van Le-Bucklin & Julie Youm

Educators rely on the discovery of new knowledge of teaching practices and frameworks to improve and evolve education for trainees. An important consideration when embarking on a career conducting education research is finding a scholarship niche. An education researcher can then develop the conceptual framework that describes the state of knowledge, identify gaps in understanding of the phenomenon or problem, and develop an outline for the methodological underpinnings of the research project. In response to Ernest Boyer's seminal report, Scholarship Reconsidered: Priorities of the Professoriate, research was conducted on the criteria and decision processes for grants and publications. Six standards known as Glassick's criteria provide a tangible measure by which educators can assess the quality and structure of their education research: clear goals, adequate preparation, appropriate methods, significant results, effective presentation, and reflective critique. Ultimately, the promise of education research is to realize advances and innovation for learners that are informed by evidence-based knowledge and practices.


References

Boyer EL. Scholarship reconsidered: priorities of the professoriate. Princeton: Carnegie Foundation for the Advancement of Teaching; 1990.

Munoz-Najar Galvez S, Heiberger R, McFarland D. Paradigm wars revisited: a cartography of graduate research in the field of education (1980–2010). Am Educ Res J. 2020;57(2):612–52.

Ringsted C, Hodges B, Scherpbier A. ‘The research compass’: an introduction to research in medical education: AMEE Guide no. 56. Med Teach. 2011;33(9):695–709.

Bordage G. Conceptual frameworks to illuminate and magnify. Med Educ. 2009;43(4):312–9.

Varpio L, Paradis E, Uijtdehaage S, Young M. The distinctions between theory, theoretical framework, and conceptual framework. Acad Med. 2020;95(7):989–94.

Ravitch SM, Riggins M. Reason & rigor: how conceptual frameworks guide research. Thousand Oaks: Sage Publications; 2017.

Park YS, Zaidi Z, O'Brien BC. RIME foreword: what constitutes science in educational research? Applying rigor in our research approaches. Acad Med. 2020;95(11S):S1–5.

National Institute of Allergy and Infectious Diseases. Writing a winning application—find your niche. 2020a. https://www.niaid.nih.gov/grants-contracts/find-your-niche. Accessed 23 Jan 2022.

National Institute of Allergy and Infectious Diseases. Writing a winning application—conduct a self-assessment. 2020b. https://www.niaid.nih.gov/grants-contracts/winning-app-self-assessment. Accessed 23 Jan 2022.

Glassick CE, Huber MT, Maeroff GI. Scholarship assessed: evaluation of the professoriate. San Francisco: Jossey-Bass; 1997.

Simpson D, Meurer L, Braza D. Meeting the scholarly project requirement-application of scholarship criteria beyond research. J Grad Med Educ. 2012;4(1):111–2. https://doi.org/10.4300/JGME-D-11-00310.1.

Fincher RME, Simpson DE, Mennin SP, Rosenfeld GC, Rothman A, McGrew MC, et al. The Council of Academic Societies task force on scholarship. Scholarship in teaching: an imperative for the 21st century. Acad Med. 2000;75(9):887–94.

Hutchings P, Shulman LS. The scholarship of teaching new elaborations and developments. Change. 1999;11–5.

About this chapter

Park, S.K., Le-Bucklin, KV., Youm, J. (2023). Introduction to Education Research. In: Fitzgerald, A.S., Bosch, G. (eds) Education Scholarship in Healthcare. Springer, Cham. https://doi.org/10.1007/978-3-031-38534-6_2

In This Article: Methodologies for Conducting Education Research

  • Introduction
  • General Overviews
  • Experimental Research
  • Quasi-Experimental Research
  • Hierarchical Linear Modeling
  • Survey Research
  • Assessment and Measurement
  • Qualitative Research Methodologies
  • Program Evaluation
  • Research Syntheses
  • Implementation

Methodologies for Conducting Education Research, by Marisa Cannata. Last reviewed: 15 December 2011. Last modified: 15 December 2011. DOI: 10.1093/obo/9780199756810-0061

Education is a diverse field, and the methodologies used in education research are necessarily diverse. The reasons for this methodological diversity are many, including the fact that the field of education draws on a multitude of disciplines and contains tensions between basic and applied research. For example, accepted methods of systematic inquiry in history, sociology, economics, and psychology vary, yet all of these disciplines help answer important questions posed in education. This methodological diversity has led to debates about the quality of education research and the perception of shifting standards of quality research. The citations selected for inclusion in this article provide a broad overview of methodologies and discussions of quality research standards across the different types of questions posed in educational research. The citations represent summaries of ongoing debates, articles or books that have had a significant influence on education research, and guides for those who wish to implement particular methodologies. Most of the sections focus on specific methodologies and provide advice or examples for studies employing them.

The interdisciplinary nature of the field has implications for education research. There is no single best research design for all the questions that guide education research. Even through many often heated debates about methodologies, the common strand is that research designs should follow the research questions. The following works offer an introduction to the debates, divides, and difficulties of education research. Schoenfeld 1999, Mitchell and Haro 1999, and Shulman 1988 provide perspectives on diversity within the field of education and the implications of this diversity for the debates about education research and the difficulties of conducting such research. National Research Council 2002 outlines the principles of scientific inquiry and how they apply to education. Published around the time No Child Left Behind required education policies to be based on scientific research, this book laid the foundation for much of the current emphasis on experimental and quasi-experimental research in education. For another perspective on defining good education research, readers may turn to Hostetler 2005. Readers who want a general overview of various methodologies in education research and directions on how to choose between them should read Creswell 2009 and Green, et al. 2006. The American Educational Research Association (AERA), the main professional association focused on education research, has developed standards for how to report methods and findings in empirical studies. Those wishing to follow those standards should consult American Educational Research Association 2006.

American Educational Research Association. 2006. Standards for reporting on empirical social science research in AERA publications. Educational Researcher 35.6: 33–40.

DOI: 10.3102/0013189X035006033

The American Educational Research Association is the professional association for researchers in education. Publications by AERA are a well-regarded source of research. This article outlines the requirements for reporting original research in AERA publications.

Creswell, J. W. 2009. Research design: Qualitative, quantitative, and mixed methods approaches. 3d ed. Los Angeles: SAGE.

Presents an overview of qualitative, quantitative and mixed-methods research designs, including how to choose the design based on the research question. This book is particularly helpful for those who want to design mixed-methods studies.

Green, J. L., G. Camilli, and P. B. Elmore. 2006. Handbook of complementary methods for research in education. Mahwah, NJ: Lawrence Erlbaum.

Provides a broad overview of several methods of educational research. The first part provides an overview of issues that cut across specific methodologies, and subsequent chapters delve into particular research approaches.

Hostetler, K. 2005. What is “good” education research? Educational Researcher 34.6: 16–21.

DOI: 10.3102/0013189X034006016

Goes beyond methodological concerns to argue that “good” educational research should also consider the conception of human well-being. By using a philosophical lens on debates about quality education research, this article is useful for moving beyond qualitative-quantitative divides.

Mitchell, T. R., and A. Haro. 1999. Poles apart: Reconciling the dichotomies in education research. In Issues in education research. Edited by E. C. Lagemann and L. S. Shulman, 42–62. San Francisco: Jossey-Bass.

Chapter outlines several dichotomies in education research, including the tension between applied research and basic research and between understanding the purposes of education and the processes of education.

National Research Council. 2002. Scientific research in education. Edited by R. J. Shavelson and L. Towne. Committee on Scientific Principles for Education Research. Center for Education. Division of Behavioral and Social Sciences and Education. Washington, DC: National Academy Press.

This book was released around the time the No Child Left Behind law directed that policy decisions should be guided by scientific research. It is credited with starting the current debate about methods in educational research and the preference for experimental studies.

Schoenfeld, A. H. 1999. The core, the canon, and the development of research skills. Issues in the preparation of education researchers. In Issues in education research. Edited by E. C. Lagemann and L. S. Shulman, 166–202. San Francisco: Jossey-Bass.

Describes difficulties in preparing educational researchers due to the lack of a core and a canon in education. While the focus is on preparing researchers, it provides valuable insight into why debates over education research persist.

Shulman, L. S. 1988. Disciplines of inquiry in education: An overview. In Complementary methods for research in education. Edited by R. M. Jaeger, 3–17. Washington, DC: American Educational Research Association.

Outlines what distinguishes research from other modes of disciplined inquiry and the relationship between academic disciplines, guiding questions, and methods of inquiry.


Harvard EdCast: Applying Education Research to Practice

  • Posted November 4, 2020
  • By Jill Anderson


Senior Lecturer Carrie Conaway, an expert on making use of data and research to improve education, knows education research can truly be useful for education leaders — some leaders just may need to be enlightened as to how.

One way to prevent the research from being disconnected from practice, says Conaway, is keeping educators from getting bogged down in statistical details, and instead helping them use their own common sense and experience to apply the research. "Part of our professional obligation as educators is to learn from our work," she says. "If we're not incorporating learning from what we're doing as we're going, we're not doing ourselves a service and we're not using our own common sense. We're just sort of blindly moving around and trying to hit a target and we're not actually being intentional and strategic."

In this episode of the Harvard EdCast, Conaway discusses how school leaders can make education research work for them, as well as implement their own evidence-based research.

  • Find new sources for education research beyond your usual go-tos. Be sure that one resource is based in research and another based in practice.
  • Conduct the research yourself as part of a school improvement plan.
  • Ask deeper questions, such as: What am I trying to see? What do I want to learn from my practice, not just about the impact, but also about how I accomplished it? What did we do, and how could we improve that work over time?

TRANSCRIPT:

Jill Anderson: I'm Jill Anderson. This is the Harvard EdCast. There's a lot of education data out there, but it's not always easy for school leaders to use it. Harvard's Carrie Conaway has spent her career figuring out how to take research and apply it to education in ways that improve outcomes and make a difference. She says part of the problem is educators and researchers don't connect enough to ask the questions that really matter.

Carrie Conaway: Well, I think right now too frequently the problem is that the research that we have, the evidence that we have is designed for researchers and not for practitioners. So it's answering questions that are of interest for general use or to build generalizable knowledge or are answering questions that are easy to answer especially when you're talking about causal evidence, meaning that a particular program caused an impact on something else. It's a lot easier to answer a question about the impact when people are admitted by lottery. So it's random assignment or something like that.

And there's lots and lots and lots of questions, real practical questions practitioners have that don't lend themselves to that type of analysis. And so if that's all the evidence that practitioners are seeing, it's not answering the questions they actually have about their own practice.

Jill Anderson: And so in the end, it's very difficult for practitioners to read a study and take something and actually implement it in their schools, right?

Carrie Conaway: Yeah, it can be if that's sort of the type of evidence that you're looking for. It can be missing a lot of the key information that a practitioner would immediately ask for. Too many studies are just, "Here's the impact of this program." And what a practitioner wants to know is, "Well, that's great. How much did it cost? How many teachers did you need to train? What sort of context are you in and what enabled you to be successful with this program? Do I have those same conditions in place in my district?"

There's too little of that information about context and implementation that is actually what practitioners need to know if they're going to actually implement that idea.

Jill Anderson: What could education leaders do in order to get more from the research that exists and also to sort of create their own evidence-based research?

Carrie Conaway: To get more from the research that already exists, I think one piece of it is just getting a little more research in your reading diet. I think everyone as a professional has their go-to sources for where they're getting ideas from and professional knowledge from. So getting a few more places that are more based in research in there as well as the ones based in practice, I think is one piece that's a fairly easy shift for people to make.

But I actually think there is no better way to get people to use research than to do it on yourself. Because then you're automatically engaged in the answer to the question. That's a real problem you're trying to solve and you're trying to answer yourself. So I think a big piece of what could really help drive greater research use is more districts and more states taking on doing research as part of their improvement strategy, which is really what my role was at the State Department of Ed.

My job was to help us figure out how to do our work better. And I never had to worry about my colleagues in the program offices, reading the work we were doing because they were engaged in it to begin with. They helped design the questions. They helped us interpret the answers. They help frame the agenda that we were working from. So that solves a lot of the problem.

Jill Anderson: I mean, how hard is it to implement that on the ground in a school where you might have limited resources, may not have someone specifically working on research in your district?

Carrie Conaway: I think everyone can be a learner. It's a funny conversation to me that in education we care about learning, that is literally our profession and our job, and research is just a structured way of learning. So this should be a pretty easy sell and we should all be able to find ways to do this work. Whether you're a classroom teacher doing this as part of your own self-reflection and professional growth, you're reflecting on what did I try with this particular group of students? Importantly, who can I compare them to that might be a good comparison group for what would have happened had I not done that intervention?

That's something I think we tend to forget about. It's easy to just do a pre-post, like the kids improved, but what if everybody else improved at the same rate and they didn't really gain anything extra? So getting better at asking those kinds of questions without having to be super fancy or statistics-oriented about it. Just a little bit more depth of who can I compare this to relative to what am I trying to see? What do I want to learn from my practice and not just all about sort of the impact, but also how did I accomplish that? What did we do and how could we improve that work over time?

Jill Anderson: Do you think that part of the challenge is just you get a little caught up in this idea of numbers and statistics and having that knowledge in order to execute research?

Carrie Conaway: Yeah, I do think people get bogged down in that and they think they need to be more formal and more fancy than they actually need to be. I mean, really I think we could get pretty far in education with simply asking ourselves how much improvement did I see in the group I was working with and how much improvement did I see in some other group of students that is roughly similar to them? And we don't need to get fancier than that. It'd be great if we did randomized control trials for everything, but a lot of things don't lend themselves to that.

And we can't just be like, "Oh well, we're not going to learn from that." I would have dropped out 90% of the questions that my colleagues had at the agency if I limited myself only to things where I could get a really strong estimate of the impact. Some information is better than none. And seeing some improvement relative to another group is a good place to start for a lot of educators.
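
To make the comparison Conaway describes concrete, here is a minimal sketch with entirely hypothetical pre/post scores. It contrasts the average gain of a group that received some intervention with the average gain of a roughly similar comparison group, which is the kind of "improvement relative to another group" she suggests starting from.

```python
# Hypothetical (pre, post) scores for each student in two groups.
intervention = [(60, 72), (55, 70), (65, 74), (58, 69), (62, 75)]
comparison = [(61, 66), (57, 63), (64, 70), (59, 64), (63, 68)]

def average_gain(pairs):
    """Average post-minus-pre improvement for a group."""
    return sum(post - pre for pre, post in pairs) / len(pairs)

gain_intervention = average_gain(intervention)
gain_comparison = average_gain(comparison)

print(f"Average gain, intervention group: {gain_intervention:.1f} points")
print(f"Average gain, comparison group:   {gain_comparison:.1f} points")
print(f"Gain relative to comparison:      {gain_intervention - gain_comparison:.1f} points")
```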

Jill Anderson: You reference a term called common sense evidence. Is that what you really mean when you say common sense evidence?

Carrie Conaway: Yeah. I think it's sort of a few dimensions. One is this sort of like don't let the perfect be the enemy of the good. Learning something from your work is important. I think don't get bogged down in the technical details of the statistics and do your best. But I also think part of our professional obligation as educators is to learn from our work. And that dimension to me is also common sense as well.

That if we're not incorporating learning from what we're doing as we're going, we're not doing ourselves a service and we're not using our own common sense. We're just sort of blindly moving around and trying to hit a target and we're not actually being intentional and strategic.

Jill Anderson: Looking at education leaders, they're tasked with looking at what exists within their district or their school system and making decisions and implementing some kind of change. But I imagine that has to be very hard to do because it's overwhelming. And there's a lot of different things at play that maybe a study from some other part of the country might not be able to have insight into your unique problems. So I'm wondering where is a good place to start with beginning to do this work on your own as an education leader?

Carrie Conaway: Well when I started at the State Department of Ed, I did not start by attempting to implement some giant research agenda across the entire agency. You have to kind of get some quick wins to change management strategy, which leaders have lots of experience with. Good leaders, part of what they're doing is leading change. And so it's just applying those same strategies towards leading the change of using more research.

Carrie Conaway: So in my case, I started with a combination of a couple of things: basically my boss told me we should get such-and-such a study going, working with the leadership to understand what the priorities of the agency were at the time, and picking strategically a couple of those. To give a concrete example, this is in 2007, there was a big policy initiative in the state to give our very lowest performing schools some additional autonomy from district policies around curriculum and instruction and time of the school day, that sort of thing. And it was a huge priority of our state board director. I'm looking around and I'm like, "Is anybody collecting any data on this as we're going?" And at the time nobody had really planned anything systematic. And so I went to the woman who was running that part of the agency and said, "It seems like this would be a great opportunity to collect some baseline data before we start this, and then to collect information along the way." We did interviews with the school leaders and people involved in the program side. And we looked at some data just to sort of capture and document as we're going.

Because we know the board director is going to come back around in a year and be like, "How'd that initiative go? Wouldn't it be better if we planned ahead?" So that's sort of starting small with some stuff that you know is going to have an impact, because you know people are going to ask you how the study mattered. That, I think, is a great place to start. The other place you can start, I would say, is if you know something's coming down the pike a year or two from now, like you're planning to take on a big initiative or you know your state legislature's likely to pass a law.

Planning ahead for that by looking in that lead up time, what do we already know from existing research that might be useful here? Do we want to collect any baseline data so that we can measure the change? If this is an intervention, how are we selecting students to participate? Is there a way we could introduce an element of randomness, even if it's not literally a lottery to the assignment to help get a better answer on the impact?

There's lots of little small things like that. Just thinking a little farther ahead can really, I think, pay off big benefits later on in terms of being able to really understand and learn from what you've done and tweak it and improve it over time.

Jill Anderson: One of the things that I was reading in your work was this idea that leaders often start with the problem and researchers start with the question. And it's important for education leaders to think about how to turn a problem into a good line of questioning. Can you talk a little bit more about that idea of coming up with questions and lines of questioning as you work through a certain problem you're trying to solve?

Carrie Conaway: I mean, that's another way I could have answered your question about what's the fundamental issue here is that educators have problems and researchers answer questions. And so someone has to shift to get to the point that we're on the same page about what we're trying to solve. When I think about questions, I think of three buckets of questions that I have found very useful in my own practice. Questions of implementation and questions of impact.

So one question is how much impact is this program that we've just implemented, or are about to implement, having on student outcomes? That's a great question to ask. And then there's a bunch of, to me, lead-up questions that are about how did the implementation go? Did this turn out the way we expected it to? Where were the weak links in the chain from sort of the idea in the superintendent's head down into the classroom? Where did things fall apart, or where do they have the potential to fall apart so you can catch those later? Those are two very practical questions that I think educators bring to the table that can guide a lot of that work.

And then the third piece is a little bit harder. It's thinking about a question of diagnosis. So you're getting more at what is the problem actually that we have for whom and when is this problem worse? And that will help figure out what the policy solution is or the practice solution depending on the level of granularity. So again, to give an example, my colleague Nate Schwartz in Tennessee did a fantastic report a few years ago, where he looked at the reasons why kids were not successful in Tennessee on AP tests at the high school level.

It turns out that the answer is different depending on the school. In some schools it was they didn't have a whole lot of kids scoring very high on grade nine and 10 tests. So there weren't a lot of kids that were academically prepared in 11th and 12th grade. In other schools, lots of kids were well-prepared, but they weren't offering AP classes. In other schools they were offering, but only the more advantaged kids were taking them. And you could imagine there's five or six reasons.

But for each school, it really matters which circumstance you're in. If you're in the circumstance of no one's scoring well enough earlier on, nobody's prepared, you got to tackle that problem. Whereas if your kids are prepared, but you're not offering AP, then that's a different solution. And so really asking yourself to think carefully about for whom and when is this problem worse can help dig from the problem space into the question space. And get a better set of questions that hopefully guide your practice better.

Jill Anderson: How hard is it to actually implement this way of thinking and approach into your work?

Carrie Conaway: Certainly every educator right now is under a tremendous amount of stress and strain. And this is probably not the best time to bring on a brand new thing, but it turns out again like learning from your work actually is the work. So if you're doing your job well as a leader, what you should be working towards is building an organization that can learn and iterate and improve. And this is a way of doing that. So it fits very well into strategic planning processes.

My job at the State Department of Ed in Massachusetts was research and planning for that reason because they fit together quite well. Districts are already doing strategic planning. So this isn't really adding that much extra. And the other thing I would say is an aspiration on its own probably isn't going to get you anywhere. You need a little bit of space in somebody's time to do this, but it doesn't have to be the superintendent.

I mean, if you're fortunate and you're a larger district, you might be able to hire some staff, but even if you can't, having someone whose role, in part, is to help figure out what are the key questions we need to answer this year and who can help us answer them, along with perhaps some other set of duties or some sort of buyout for part of their time, I think can go a long way. So it's sort of like, yes, it takes some time, but lots of things that are worth doing take time. And if you don't have a system to learn from your work, in the end that's far more costly.

Jill Anderson: I imagine there will be tons of research coming out in the next probably decades about everything that's been happening.

Carrie Conaway: It's already coming, yes.

Jill Anderson: How do you find really good evidence-based research that you can use versus maybe something that's a little bit lower quality? How do you differentiate between the two and know something that you can actually really use as a leader?

Carrie Conaway: In general, reading single studies is probably not the way to go. Because in general, that's a particular moment in time in a particular context. And what you really want to know is, what in general do we know across lots of studies about the impact of a dual enrollment program at the high school, or of a tiered intervention system at an elementary school? You don't want to know how exactly it worked in Chicago in 2018. Unless it's your district that did the study, you want kind of that broader picture. Because any given study could be kind of pro or con.

It's sort of like nutrition studies. In general, you will find that studies that look at the impact of high cholesterol will find a negative impact on later health outcomes, but some of them won't, and that's just because of how research works. So I would look towards summaries of research that are done by credible organizations. The U.S. Department of Education has the What Works Clearinghouse. That's a great example. There's other evidence clearinghouses that try to summarize and not just show individual studies. So that's one piece.

And then I think you do need to build a little bit of facility for what makes for a better and worse comparison group. That's really mainly what it comes down to: when you're talking about studies that get into whether something, quote, worked or not, it has to do with how good the comparison group is. Researchers will say the supposed gold standard is a randomized control trial, because you have randomly assigned some students to get something and some not to, and so they should be the same except for the treatment. There's lots of other ways you can answer questions just as credibly, but building a little bit of facility with what makes for stronger or weaker methods in that regard is probably the one piece that you would need to add on to build your judgment.

And then finally, as an educator, the thing only you can bring is your knowledge of your kids. So the research will tell you some stuff and it'll hopefully tell you a little bit about what context that happens in, and then you have to use your professional judgment to know is that similar enough to the circumstances I'm in that this is likely to be relevant? So if you haven't thought about the relevance of the work, who cares what the impact is, right? If it's not relevant to you don't even look at it.

It's an interesting challenge. One challenge I had working with district people when I was at the state was that everyone believes their district to be special, and they are all special. They're all very special, but they are not necessarily so different from one another as people think they are. And so pushing yourselves to think a little bit about, "Okay, well, yeah, we are different from that town down the road, but how different are we really? Is it different enough that it would make a difference for this study to be relevant for us or not?"

Jill Anderson: I have to ask a little bit about the other side of the coin, which is education research. And how does that start to look a little bit different in a way that it's more useful for practitioners?

Carrie Conaway: Yeah, this is such a huge challenge, but also an opportunity. In the 13 years that I've worked in education research, it has improved tremendously in terms of the relevance of research to practice. First of all, I would say nobody goes into education policy research, or education intervention research, wanting their work to be irrelevant. You would pick a different topic if you didn't care about it having an impact. And so people are, I think in general, fundamentally motivated that they want their work to have an impact. And it's more a matter of both building the skill in how to do that and the incentives in the higher ed institutions so that it counts as part of their work.

So in terms of the skill, I think I see tremendous hope in the younger generation of scholars coming out of doctoral programs right now. I see a lot more work that is more closely collaborative with practitioners, not just sort of what people think is interesting, but what is also of value to practice. And I think increasingly we are starting to figure out what incentives might help there. I think some organizations are doing better at that than others. But I'm optimistic that over time the work that it takes to do partnership work well will be recognized and valued as part of what makes research good, which is really what it comes down to.

Jill Anderson: Well, thank you so much for enlightening us on education research and what education leaders can do to make a difference in their own practice.

Carrie Conaway: Wonderful. I'm happy to join. Thank you for inviting me.

Jill Anderson: Carrie Conaway is a Senior Lecturer at the Harvard Graduate School of Education. She's the coauthor of Common-Sense Evidence: The Education Leader's Guide to Using Data and Research. I'm Jill Anderson. This is the Harvard EdCast produced by the Harvard Graduate School of Education. Thanks for listening.


What is Educational Research? + [Types, Scope & Importance]


Education is an integral aspect of every society and in a bid to expand the frontiers of knowledge, educational research must become a priority. Educational research plays a vital role in the overall development of pedagogy, learning programs, and policy formulation. 

Educational research is a spectrum that borders on multiple fields of knowledge, and this means that it draws from different disciplines. As a result of this, the findings of this research are multi-dimensional and can be restricted by the characteristics of the research participants and the research environment. 

What is Educational Research?

Educational research is a type of systematic investigation that applies empirical methods to solving challenges in education. It adopts rigorous and well-defined scientific processes in order to gather and analyze data for problem-solving and knowledge advancement. 

J. W. Best defines educational research as that activity that is directed towards the development of a science of behavior in educational situations. The ultimate aim of such a science is to provide knowledge that will permit the educator to achieve his goals through the most effective methods.

The primary purpose of educational research is to expand the existing body of knowledge by providing solutions to different problems in pedagogy while improving teaching and learning practices. Educational researchers also seek answers to questions concerning learner motivation, development, and classroom management. 

Characteristics of Education Research  

While educational research can take numerous forms and approaches, several characteristics define its process and approach. Some of them are listed below:

  • It sets out to solve a specific problem.
  • Educational research adopts primary and secondary research methods in its data collection process. This means that in educational research, the investigator relies on first-hand sources of information and secondary data to arrive at a suitable conclusion. 
  • Educational research relies on empirical evidence. This results from its largely scientific approach.
  • Educational research is objective and accurate because it measures verifiable information.
  • In educational research, the researcher adopts specific methodologies, detailed procedures, and analysis to arrive at the most objective responses.
  • Educational research findings are useful in the development of principles and theories that provide better insights into pressing issues.
  • This research approach combines structured, semi-structured, and unstructured questions to gather verifiable data from respondents.
  • Many educational research findings are documented for peer review before their presentation. 
  • Educational research is interdisciplinary in nature because it draws from different fields and studies complex factual relations.

Types of Educational Research 

Educational research can be broadly categorized into three types: descriptive research, correlational research, and experimental research. Each of these has distinct and overlapping features. 

Descriptive Educational Research

In this type of educational research, the researcher merely seeks to collect data about the status quo or present situation of things. The core of descriptive research lies in defining the state and characteristics of the research subject under study. 

Because of its emphasis on the “what” of the situation, descriptive research can be termed an observational research method. In descriptive educational research, the researcher makes use of quantitative research methods including surveys and questionnaires to gather the required data.

Typically, descriptive educational research is the first step in solving a specific problem. Here are a few examples of descriptive research: 

  • A reading program to help you understand student literacy levels.
  • A study of students’ classroom performance.
  • Research to gather data on students’ interests and preferences. 

From these examples, you would notice that the researcher does not need to create a simulation of the natural environment of the research subjects; rather, he or she observes them as they engage in their routines. Also, the researcher is not concerned with creating a causal relationship between the research variables. 

Correlational Educational Research

This is a type of educational research that seeks insights into the statistical relationship between two research variables. In correlational research, the researcher studies two variables intending to establish a connection between them. 

The correlation between variables can be positive, negative, or non-existent. A positive correlation occurs when an increase in variable A leads to an increase in variable B, while a negative correlation occurs when an increase in variable A results in a decrease in variable B. 

When a change in one of the variables does not trigger a corresponding change in the other, the correlation is non-existent. Also, in correlational educational research, the researcher does not need to alter the natural environment of the variables; that is, there is no need for external conditioning. 
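
To make the distinction between positive, negative, and non-existent correlation concrete, here is a minimal Python sketch that computes a Pearson correlation coefficient for two hypothetical variables (weekly study hours and test scores). The figures and variable names are invented purely for illustration and are not drawn from any real study.

```python
# Illustrative sketch: computing a Pearson correlation coefficient by hand.
# The numbers below are hypothetical, invented purely for demonstration.
from math import sqrt

def pearson(x, y):
    """Return the Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical weekly study hours and test scores for eight students.
study_hours = [2, 4, 5, 6, 8, 9, 11, 12]
test_scores = [55, 60, 62, 68, 72, 75, 83, 88]

print(f"r = {pearson(study_hours, test_scores):.2f}")
```

A coefficient close to +1 suggests a positive correlation, a value close to -1 suggests a negative correlation, and a value near 0 suggests little or no linear relationship between the variables.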

Examples of educational correlational research include: 

  • Research to discover the relationship between students’ behaviors and classroom performance.
  • A study into the relationship between students’ social skills and their learning behaviors. 

Experimental Educational Research

Experimental educational research is a research approach that seeks to establish the causal relationship between two variables in the research environment. It adopts quantitative research methods in order to determine the cause and effect in terms of the research variables being studied. 

Experimental educational research typically involves two groups – the control group and the experimental group. The researcher introduces some changes to the experimental group such as a change in environment or a catalyst, while the control group is left in its natural state. 

The introduction of these catalysts allows the researcher to determine the causative factor(s) in the experiment. At the core of experimental educational research lies the formulation of a hypothesis, and so the overall research design relies on statistical analysis to confirm or refute this hypothesis.
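
To illustrate what the statistical analysis of such an experiment might look like, the sketch below compares a hypothetical control group with a hypothetical experimental group using an independent-samples t-test from the widely used SciPy library. The scores are invented for demonstration; in a real study they would come from the research instruments, and the appropriate test would depend on the research design and hypotheses.

```python
# Illustrative sketch: comparing an experimental group with a control group.
# All scores are hypothetical, invented purely for demonstration.
from scipy import stats

control_scores = [61, 64, 58, 70, 66, 63, 59, 65]       # taught with the usual method
experimental_scores = [68, 72, 65, 77, 70, 74, 69, 73]  # taught with the new method

# An independent-samples t-test asks whether the difference between the two
# group means is larger than would be expected from chance alone.
t_stat, p_value = stats.ttest_ind(experimental_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A small p-value (commonly below 0.05) would lead the researcher to reject the
# null hypothesis that both teaching methods produce the same mean score.
```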

Examples of Experimental Educational Research

  • A study to determine the best teaching and learning methods in a school.
  • A study to understand how extracurricular activities affect the learning process. 

Based on functionality, educational research can be classified into fundamental research, applied research, and action research. The primary purpose of fundamental research is to provide insights into the research variables; that is, to gain more knowledge. Fundamental research does not solve any specific problems. 

Just as the name suggests, applied research is a research approach that seeks to solve specific problems. Findings from applied research are useful in solving practical challenges in the educational sector such as improving teaching methods, modifying learning curricula, and simplifying pedagogy. 

Action research is tailored to solve immediate problems that are specific to a context, such as educational challenges in a local primary school. The goal of action research is to proffer solutions that work in that specific context, rather than to solve general or universal challenges in the educational sector. 

Importance of Educational Research

  • Educational research plays a crucial role in knowledge advancement across different fields of study. 
  • It provides answers to practical educational challenges using scientific methods.
  • Findings from educational research, especially applied research, are instrumental in policy formulation. 
  • For the researcher and other parties involved in this research approach, educational research improves learning, knowledge, skills, and understanding.
  • Educational research improves teaching and learning methods by empowering you with data to help you teach and lead more strategically and effectively.
  • Educational research helps students apply their knowledge to practical situations.

Educational Research Methods 

  • Surveys/Questionnaires

A survey is a research method that is used to collect data from a predetermined audience about a specific research context. It usually consists of a set of standardized questions that help you to gain insights into the experiences, thoughts, and behaviors of the audience. 

Surveys can be administered physically using paper forms, face-to-face conversations, telephone conversations, or online forms. Online forms are easier to administer because they help you to collect accurate data and also reach a larger sample size. Creating your online survey on data-gathering platforms like Formplus also allows you to analyze survey respondents’ data easily. 

In order to gather accurate data via your survey, you must first identify the research context and the research subjects that would make up your sample. Next, you need to choose an online survey tool like Formplus to help you create and administer your survey with little or no hassle. 

  • Interviews

An interview is a qualitative data collection method that helps you to gather information from respondents by asking questions in a conversation. It is typically a face-to-face conversation with the research subjects in order to gather insights that will prove useful to the specific research context. 

Interviews can be structured, semi-structured, or unstructured. A structured interview is a type of interview that follows a premeditated sequence; that is, it makes use of a set of standardized questions to gather information from the research subjects. 

An unstructured interview is a type of interview that is fluid; that is, it is non-directive. During an unstructured interview, the researcher does not make use of a set of predetermined questions; rather, he or she spontaneously asks questions to gather relevant data from the respondents. 

A semi-structured interview is the mid-point between structured and unstructured interviews. Here, the researcher makes use of a set of standardized questions, yet he or she still makes inquiries outside these premeditated questions as dictated by the flow of the conversation in the research context. 

Data from interviews can be collected using audio recorders, digital cameras, surveys, and questionnaires. 

  • Observation

Observation is a method of data collection that entails systematically selecting, watching, listening, reading, touching, and recording behaviors and characteristics of living beings, objects, or phenomena. In the classroom, teachers can adopt this method to understand students’ behaviors in different contexts. 

Observation can be qualitative or quantitative in approach. In quantitative observation, the researcher aims at collecting numerical or statistical data from respondents, while in qualitative observation, the researcher aims at collecting descriptive, non-numerical data from respondents. 

Qualitative observation can further be classified into participant or non-participant observation. In participant observation, the researcher becomes a part of the research environment and interacts with the research subjects to gather information about their behaviors. In non-participant observation, the researcher does not actively take part in the research environment; that is, he or she is a passive observer. 

How to Create Surveys and Questionnaires with Formplus

  • On your dashboard, choose the “create new form” button to access the form builder. You can also choose from the available survey templates and modify them to suit your needs.
  • Save your online survey to access the form customization section. Here, you can change the physical appearance of your form by adding preferred background images and inserting your organization’s logo.
  • Formplus has a form analytics dashboard that allows you to view insights from your data collection process such as the total number of form views and form submissions. You can also use the reports summary tool to generate custom graphs and charts from your survey data. 

Steps in Educational Research

Like other types of research, educational research involves several steps. Following these steps allows the researcher to gather objective information and arrive at valid findings that are useful to the research context. 

  • Define the research problem clearly. 
  • Formulate your hypothesis. A hypothesis is the researcher’s reasonable guess based on the available evidence, which he or she seeks to test in the course of the research.
  • Determine the methodology to be adopted. Educational research methods include interviews, surveys, and questionnaires.
  • Collect data from the research subjects using one or more educational research methods. You can collect research data using Formplus forms.
  • Analyze and interpret your data to arrive at valid findings (a simple illustration of this step is sketched after this list). In the Formplus analytics dashboard, you can view important data collection insights and you can also create custom visual reports with the reports summary tool. 
  • Create your research report. A research report details the entire process of the systematic investigation plus the research findings. 
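
As a simple illustration of the analysis step above, the sketch below computes basic descriptive statistics and a frequency count for a set of hypothetical, pre-coded Likert-scale responses, using only the Python standard library. The response codes are invented for demonstration; tools such as the Formplus analytics dashboard or a statistics package can produce the same summaries.

```python
# Illustrative sketch: descriptive statistics for coded survey responses.
# Codes are hypothetical Likert values (1 = strongly disagree ... 5 = strongly agree).
from collections import Counter
from statistics import mean, median, stdev

responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5, 1, 4, 3, 5, 4]

print("n        :", len(responses))
print("mean     :", round(mean(responses), 2))
print("median   :", median(responses))
print("std dev  :", round(stdev(responses), 2))
print("frequency:", dict(sorted(Counter(responses).items())))
```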

Conclusion 

Educational research is crucial to the overall advancement of different fields of study and learning, as a whole. Data in educational research can be gathered via surveys and questionnaires, observation methods, or interviews – structured, unstructured, and semi-structured. 

You can create a survey/questionnaire for educational research with Formplus. As a top-tier data tool, Formplus makes it easy for you to create your educational research survey in the drag-and-drop form builder, and share this with survey respondents using one or more of the form sharing options. 


1 What is Action Research for Classroom Teachers?

ESSENTIAL QUESTIONS

  • What is the nature of action research?
  • How does action research develop in the classroom?
  • What models of action research work best for your classroom?
  • What are the epistemological, ontological, theoretical underpinnings of action research?

Educational research provides a vast landscape of knowledge on topics related to teaching and learning, curriculum and assessment, students’ cognitive and affective needs, cultural and socio-economic factors of schools, and many other factors considered viable to improving schools. Educational stakeholders rely on research to make informed decisions that ultimately affect the quality of schooling for their students. Accordingly, the purpose of educational research is to engage in disciplined inquiry to generate knowledge on topics significant to the students, teachers, administrators, schools, and other educational stakeholders. Just as the topics of educational research vary, so do the approaches to conducting educational research in the classroom. Your approach to research will be shaped by your context, your professional identity, and paradigm (set of beliefs and assumptions that guide your inquiry). These will all be key factors in how you generate knowledge related to your work as an educator.

Action research is an approach to educational research that is commonly used by educational practitioners and professionals to examine, and ultimately improve, their pedagogy and practice. In this way, action research represents an extension of the reflection and critical self-reflection that an educator employs on a daily basis in their classroom. When students are actively engaged in learning, the classroom can be dynamic and uncertain, demanding the constant attention of the educator. Considering these demands, educators are often only able to engage in reflection that is fleeting, and for the purpose of accommodation, modification, or formative assessment. Action research offers one path to more deliberate, substantial, and critical reflection that can be documented and analyzed to improve an educator’s practice.

Purpose of Action Research

As one of many approaches to educational research, it is important to distinguish the potential purposes of action research in the classroom. This book focuses on action research as a method to enable and support educators in pursuing effective pedagogical practices by transforming the quality of teaching decisions and actions, to subsequently enhance student engagement and learning. Being mindful of this purpose, the following aspects of action research are important to consider as you contemplate and engage with action research methodology in your classroom:

  • Action research is a process for improving educational practice. Its methods involve action, evaluation, and reflection. It is a process to gather evidence to implement change in practices.
  • Action research is participative and collaborative. It is undertaken by individuals with a common purpose.
  • Action research is situation and context-based.
  • Action research develops reflection practices based on the interpretations made by participants.
  • Knowledge is created through action and application.
  • Action research can be based in problem-solving, if the solution to the problem results in the improvement of practice.
  • Action research is iterative; plans are created, implemented, revised, then implemented, lending itself to an ongoing process of reflection and revision.
  • In action research, findings emerge as action develops and takes place; however, they are not conclusive or absolute, but ongoing (Koshy, 2010, pgs. 1-2).

In thinking about the purpose of action research, it is helpful to situate action research as a distinct paradigm of educational research. I like to think about action research as part of the larger concept of living knowledge. Living knowledge has been characterized as “a quest for life, to understand life and to create… knowledge which is valid for the people with whom I work and for myself” (Swantz, in Reason & Bradbury, 2001, pg. 1). Why should educators care about living knowledge as part of educational research? As mentioned above, action research is meant “to produce practical knowledge that is useful to people in the everyday conduct of their lives and to see that action research is about working towards practical outcomes” (Koshy, 2010, pg. 2). However, it is also about:

creating new forms of understanding, since action without reflection and understanding is blind, just as theory without action is meaningless. The participatory nature of action research makes it only possible with, for and by persons and communities, ideally involving all stakeholders both in the questioning and sense making that informs the research, and in the action, which is its focus. (Reason & Bradbury, 2001, pg. 2)

In an effort to further situate action research as living knowledge, Jean McNiff reminds us that “there is no such ‘thing’ as ‘action research’” (2013, pg. 24). In other words, action research is not static or finished, it defines itself as it proceeds. McNiff’s reminder characterizes action research as action-oriented, and a process that individuals go through to make their learning public to explain how it informs their practice. Action research does not derive its meaning from an abstract idea, or a self-contained discovery – action research’s meaning stems from the way educators negotiate the problems and successes of living and working in the classroom, school, and community.

While we can debate the idea of action research, there are people who are action researchers, and they use the idea of action research to develop principles and theories to guide their practice. Action research, then, refers to an organization of principles that guide action researchers as they act on shared beliefs, commitments, and expectations in their inquiry.

Reflection and the Process of Action Research

When an individual engages in reflection on their actions or experiences, it is typically for the purpose of better understanding those experiences, or the consequences of those actions to improve related action and experiences in the future. Reflection in this way develops knowledge around these actions and experiences to help us better regulate those actions in the future. The reflective process generates new knowledge regularly for classroom teachers and informs their classroom actions.

Unfortunately, the knowledge generated by educators through the reflective process is not always prioritized among the other sources of knowledge educators are expected to utilize in the classroom. Educators are expected to draw upon formal types of knowledge, such as textbooks, content standards, teaching standards, district curriculum and behavioral programs, etc., to gain new knowledge and make decisions in the classroom. While these forms of knowledge are important, the reflective knowledge that educators generate through their pedagogy is the amalgamation of these types of knowledge enacted in the classroom. Therefore, reflective knowledge is uniquely developed based on the action and implementation of an educator’s pedagogy in the classroom. Action research offers a way to formalize the knowledge generated by educators so that it can be utilized and disseminated throughout the teaching profession.

Research is concerned with the generation of knowledge, and typically creating knowledge related to a concept, idea, phenomenon, or topic. Action research generates knowledge around inquiry in practical educational contexts. Action research allows educators to learn through their actions with the purpose of developing personally or professionally. Due to its participatory nature, the process of action research is also distinct in educational research. There are many models for how the action research process takes shape. I will share a few of those here. Each model utilizes the following processes to some extent:

  • Plan a change;
  • Take action to enact the change;
  • Observe the process and consequences of the change;
  • Reflect on the process and consequences;
  • Act, observe, & reflect again and so on.


Figure 1.1 Basic action research cycle

There are many other models that supplement the basic process of action research with other aspects of the research process to consider. For example, figure 1.2 illustrates a spiral model of action research proposed by Kemmis and McTaggart (2004). The spiral model emphasizes the cyclical process that moves beyond the initial plan for change. The spiral model also emphasizes revisiting the initial plan and revising based on the initial cycle of research:

Kemmis and McTaggart (2004) offer a slightly different process for action research: Plan; Act & Observe; Reflect; Revised Plan; Act & Observe; Reflect.

Figure 1.2 Interpretation of action research spiral, Kemmis and McTaggart (2004, p. 595)

Other models of action research reorganize the process to emphasize the distinct ways knowledge takes shape in the reflection process. O’Leary’s (2004, p. 141) model, for example, recognizes that the research may take shape in the classroom as knowledge emerges from the teacher’s observations. O’Leary highlights the need for action research to be focused on situational understanding and implementation of action, initiated organically from real-time issues:

O'Leary (2004) offers another version of the action research process that focuses on the cyclical nature of action research, with three cycles shown: Observe; Reflect; Plan; Act; and repeat.

Figure 1.3 Interpretation of O’Leary’s cycles of research, O’Leary (2000, p. 141)

Lastly, Macintyre’s (2000, p. 1) model offers a different characterization of the action research process. Macintyre emphasizes a messier process of research with the initial reflections and conclusions as the benchmarks for guiding the research process. Macintyre emphasizes the flexibility in the planning, acting, and observing stages to allow the process to be naturalistic. Our interpretation of Macintyre’s process is below:

Macintyre (2000) offers a much more complex process of action research that highlights multiple processes happening at the same time:

  • Reflection and analysis of current practice, and a general idea of the research topic and context.
  • Narrowing down the topic and planning the action; scanning the literature and discussing with colleagues.
  • Refining the topic – selection of key texts, formulation of the research question/hypothesis, and organization of a refined action plan in context; a tentative action plan and consideration of different research strategies.
  • Taking action and monitoring effects – evaluation of the strategy and research question/hypothesis and final amendments; evaluation of the entire process.
  • Conclusions, claims, and explanations, with recommendations for further research.

Figure 1.4 Interpretation of the action research cycle, Macintyre (2000, p. 1)

We believe it is important to prioritize the flexibility of the process, and encourage you to only use these models as basic guides for your process. Your process may look similar, or you may diverge from these models as you better understand your students, context, and data.

Definitions of Action Research and Examples

At this point, it may be helpful for readers to have a working definition of action research and some examples to illustrate the methodology in the classroom. Bassey (1998, p. 93) offers a very practical definition and describes “action research as an inquiry which is carried out in order to understand, to evaluate and then to change, in order to improve educational practice.” Cohen and Manion (1994, p. 192) situate action research differently, and describe action research as emergent, writing:

essentially an on-the-spot procedure designed to deal with a concrete problem located in an immediate situation. This means that ideally, the step-by-step process is constantly monitored over varying periods of time and by a variety of mechanisms (questionnaires, diaries, interviews and case studies, for example) so that the ensuing feedback may be translated into modifications, adjustment, directional changes, redefinitions, as necessary, so as to bring about lasting benefit to the ongoing process itself rather than to some future occasion.

Lastly, Koshy (2010, p. 9) describes action research as:

a constructive inquiry, during which the researcher constructs his or her knowledge of specific issues through planning, acting, evaluating, refining and learning from the experience. It is a continuous learning process in which the researcher learns and also shares the newly generated knowledge with those who may benefit from it.

These definitions highlight the distinct features of action research and emphasize the purposeful intent of action researchers to improve, refine, reform, and problem-solve issues in their educational context. To better understand the distinctness of action research, these are some examples of action research topics:

Examples of Action Research Topics

  • Flexible seating in 4th grade classroom to increase effective collaborative learning.
  • Structured homework protocols for increasing student achievement.
  • Developing a system of formative feedback for 8th grade writing.
  • Using music to stimulate creative writing.
  • Weekly brown bag lunch sessions to improve responses to PD from staff.
  • Using exercise balls as chairs for better classroom management.

Action Research in Theory

Action research-based inquiry in educational contexts and classrooms involves distinct participants – students, teachers, and other educational stakeholders within the system. All of these participants are engaged in activities to benefit the students, and subsequently society as a whole. Action research contributes to these activities and potentially enhances the participants’ roles in the education system. Participants’ roles are enhanced based on two underlying principles:

  • communities, schools, and classrooms are sites of socially mediated actions, and action research provides a greater understanding of self and new knowledge of how to negotiate these socially mediated environments;
  • communities, schools, and classrooms are part of social systems in which humans interact with many cultural tools, and action research provides a basis to construct and analyze these interactions.

In our quest for knowledge and understanding, we have consistently analyzed human experience over time and have distinguished between types of reality. Humans have constantly sought “facts” and “truth” about reality that can be empirically demonstrated or observed.

Social systems are based on beliefs, and generally, beliefs about what will benefit the greatest amount of people in that society. Beliefs, and more specifically the rationale or support for beliefs, are not always easy to demonstrate or observe as part of our reality. Take the example of an English Language Arts teacher who prioritizes argumentative writing in her class. She believes that argumentative writing demonstrates the mechanics of writing best among types of writing, while also providing students a skill they will need as citizens and professionals. While we can observe the students writing, and we can assess their ability to develop a written argument, it is difficult to observe the students’ understanding of argumentative writing and its purpose in their future. This relates to the teacher’s beliefs about argumentative writing; we cannot observe the real value of the teaching of argumentative writing. The teacher’s rationale and beliefs about teaching argumentative writing are bound to the social system and the skills their students will need to be active parts of that system. Therefore, our goal through action research is to demonstrate the best ways to teach argumentative writing to help all participants understand its value as part of a social system.

The knowledge that is conveyed in a classroom is bound to, and justified by, a social system. A postmodernist approach to understanding our world seeks knowledge within a social system, which is directly opposed to the empirical or positivist approach which demands evidence based on logic or science as rationale for beliefs. Action research does not rely on a positivist viewpoint to develop evidence and conclusions as part of the research process. Action research offers a postmodernist stance to epistemology (theory of knowledge) and supports developing questions and new inquiries during the research process. In this way action research is an emergent process that allows beliefs and decisions to be negotiated as reality and meaning are being constructed in the socially mediated space of the classroom.

Theorizing Action Research for the Classroom

All research, at its core, is for the purpose of generating new knowledge and contributing to the knowledge base of educational research. Action researchers in the classroom want to explore methods of improving their pedagogy and practice. The starting place of their inquiry stems from their pedagogy and practice, so by nature the knowledge created from their inquiry is often contextually specific to their classroom, school, or community. Therefore, we should examine the theoretical underpinnings of action research for the classroom. It is important to connect action research conceptually to experience; for example, Levin and Greenwood (2001, p. 105) make these connections:

  • Action research is context bound and addresses real life problems.
  • Action research is inquiry where participants and researchers cogenerate knowledge through collaborative communicative processes in which all participants’ contributions are taken seriously.
  • The meanings constructed in the inquiry process lead to social action, or these reflections and actions lead to the construction of new meanings.
  • The credibility/validity of action research knowledge is measured according to whether the actions that arise from it solve problems (workability) and increase participants’ control over their own situation.

Educators who engage in action research will generate new knowledge and beliefs based on their experiences in the classroom. Let us emphasize that these are all important to you and your work, as both an educator and researcher. It is these experiences, beliefs, and theories that are often discounted when more official forms of knowledge (e.g., textbooks, curriculum standards, districts standards) are prioritized. These beliefs and theories based on experiences should be valued and explored further, and this is one of the primary purposes of action research in the classroom. These beliefs and theories should be valued because they were meaningful aspects of knowledge constructed from teachers’ experiences. Developing meaning and knowledge in this way forms the basis of constructivist ideology, just as teachers often try to get their students to construct their own meanings and understandings when experiencing new ideas.  

Classroom Teachers Constructing their Own Knowledge

Most of you are probably at least minimally familiar with constructivism, or the process of constructing knowledge. However, what is constructivism precisely, for the purposes of action research? Many scholars have theorized constructivism and have identified two key attributes (Koshy, 2010; von Glasersfeld, 1987):

  • Knowledge is not passively received, but actively developed through an individual’s cognition;
  • Human cognition is adaptive and finds purpose in organizing the new experiences of the world, instead of settling for absolute or objective truth.

Considering these two attributes, constructivism is distinct from conventional knowledge formation because people can develop a theory of knowledge that orders and organizes the world based on their experiences, instead of an objective or neutral reality. When individuals construct knowledge, there are interactions between an individual and their environment where communication, negotiation and meaning-making are collectively developing knowledge. For most educators, constructivism may be a natural inclination of their pedagogy. Action researchers have a similar relationship to constructivism because they are actively engaged in a process of constructing knowledge. However, their constructions may be more formal and based on the data they collect in the research process. Action researchers also are engaged in the meaning-making process, making interpretations from their data. These aspects of the action research process situate them in the constructivist ideology. Just like constructivist educators, action researchers’ constructions of knowledge will be affected by their individual and professional ideas and values, as well as the ecological context in which they work (Biesta & Tedder, 2006). The relationship between constructivist inquiry and action research is important, as Lincoln (2001, p. 130) states:

much of the epistemological, ontological, and axiological belief systems are the same or similar, and methodologically, constructivists and action researchers work in similar ways, relying on qualitative methods in face-to-face work, while buttressing information, data and background with quantitative method work when necessary or useful.

While there are many links between action research and educators in the classroom, constructivism offers the most familiar and practical threads to bind the beliefs of educators and action researchers.  

Epistemology, Ontology, and Action Research

It is also important for educators to consider the philosophical stances related to action research to better situate it with their beliefs and reality. When researchers make decisions about the methodology they intend to use, they will consider their ontological and epistemological stances. It is vital that researchers clearly distinguish their philosophical stances and understand the implications of their stance in the research process, especially when collecting and analyzing their data. In what follows, we will discuss ontological and epistemological stances in relation to action research methodology.

Ontology, or the theory of being, is concerned with the claims or assumptions we make about ourselves within our social reality – what do we think exists, what does it look like, what entities are involved and how do these entities interact with each other (Blaikie, 2007). In relation to the discussion of constructivism, generally action researchers would consider their educational reality as socially constructed. Social construction of reality happens when individuals interact in a social system. Meaningful construction of concepts and representations of reality develop through an individual’s interpretations of others’ actions. These interpretations become agreed upon by members of a social system and become part of social fabric, reproduced as knowledge and beliefs to develop assumptions about reality. Researchers develop meaningful constructions based on their experiences and through communication. Educators as action researchers will be examining the socially constructed reality of schools. In the United States, many of our concepts, knowledge, and beliefs about schooling have been socially constructed over the last hundred years. For example, a group of teachers may look at why fewer female students enroll in upper-level science courses at their school. This question deals directly with the social construction of gender and specifically what careers females have been conditioned to pursue. We know this is a social construction in some school social systems because in other parts of the world, or even the United States, there are schools that have more females enrolled in upper level science courses than male students. Therefore, the educators conducting the research have to recognize the socially constructed reality of their school and consider this reality throughout the research process. Action researchers will use methods of data collection that support their ontological stance and clarify their theoretical stance throughout the research process.

Koshy (2010, p. 23-24) offers another example of addressing the ontological challenges in the classroom:

A teacher who was concerned with increasing her pupils’ motivation and enthusiasm for learning decided to introduce learning diaries which the children could take home. They were invited to record their reactions to the day’s lessons and what they had learnt. The teacher reported in her field diary that the learning diaries stimulated the children’s interest in her lessons, increased their capacity to learn, and generally improved their level of participation in lessons. The challenge for the teacher here is in the analysis and interpretation of the multiplicity of factors accompanying the use of diaries. The diaries were taken home so the entries may have been influenced by discussions with parents. Another possibility is that children felt the need to please their teacher. Another possible influence was that their increased motivation was as a result of the difference in style of teaching which included more discussions in the classroom based on the entries in the diaries.

Here you can see that the challenge for the action researcher is working in a social context with multiple factors, values, and experiences that were outside of the teacher’s control. The teacher was only responsible for introducing the diaries as a new style of learning. The students’ engagement and interactions with this new style of learning were all based upon their socially constructed notions of learning inside and outside of the classroom. A researcher with a positivist ontological stance would not consider these factors, and instead might simply conclude that the diaries increased motivation and interest in the topic, as a result of introducing the diaries as a learning strategy.

Epistemology, or the theory of knowledge, signifies a philosophical view of what counts as knowledge – it justifies what is possible to be known and what criteria distinguishes knowledge from beliefs (Blaikie, 1993). Positivist researchers, for example, consider knowledge to be certain and discovered through scientific processes. Action researchers collect data that is more subjective and examine personal experience, insights, and beliefs.

Action researchers utilize interpretation as a means for knowledge creation. Action researchers have many epistemologies to choose from as means of situating the types of knowledge they will generate by interpreting the data from their research. For example, Koro-Ljungberg et al. (2009) identified several common epistemologies in their article that examined epistemological awareness in qualitative educational research, such as: objectivism, subjectivism, constructionism, contextualism, social epistemology, feminist epistemology, idealism, naturalized epistemology, externalism, relativism, skepticism, and pluralism. All of these epistemological stances have implications for the research process, especially data collection and analysis; the table on pages 689-690 of that article sketches these potential implications.

Again, Koshy (2010, p. 24) provides an excellent example to illustrate the epistemological challenges within action research:

A teacher of 11-year-old children decided to carry out an action research project which involved a change in style in teaching mathematics. Instead of giving children mathematical tasks displaying the subject as abstract principles, she made links with other subjects which she believed would encourage children to see mathematics as a discipline that could improve their understanding of the environment and historic events. At the conclusion of the project, the teacher reported that applicable mathematics generated greater enthusiasm and understanding of the subject.

The educator/researcher engaged in action research-based inquiry to improve an aspect of her pedagogy. She generated knowledge that indicated she had improved her students’ understanding of mathematics by integrating it with other subjects – specifically in the social and ecological context of her classroom, school, and community. She valued constructivism and students generating their own understanding of mathematics based on related topics in other subjects. Action researchers working in a social context do not generate certain knowledge, but knowledge that emerges and can be observed and researched again, building upon their knowledge each time.

Researcher Positionality in Action Research

In this first chapter, we have discussed a lot about the role of experiences in sparking the research process in the classroom. Your experiences as an educator will shape how you approach action research in your classroom. Your experiences as a person in general will also shape how you create knowledge from your research process. In particular, your experiences will shape how you make meaning from your findings. It is important to be clear about your experiences when developing your methodology too. This is referred to as researcher positionality. Maher and Tetreault (1993, p. 118) define positionality as:

Gender, race, class, and other aspects of our identities are markers of relational positions rather than essential qualities. Knowledge is valid when it includes an acknowledgment of the knower’s specific position in any context, because changing contextual and relational factors are crucial for defining identities and our knowledge in any given situation.

By presenting your positionality in the research process, you are signifying the type of socially constructed, and other types of, knowledge you will be using to make sense of the data. As Maher and Tetreault explain, this increases the trustworthiness of your conclusions about the data. This would not be possible with a positivist ontology. We will discuss positionality more in chapter 6, but we wanted to connect it to the overall theoretical underpinnings of action research.

Advantages of Engaging in Action Research in the Classroom

In the following chapters, we will discuss how action research takes shape in your classroom, and we wanted to briefly summarize the key advantages to action research methodology over other types of research methodology. As Koshy (2010, p. 25) notes, action research provides useful methodology for school and classroom research because:

Advantages of Action Research for the Classroom

  • research can be set within a specific context or situation;
  • researchers can be participants – they don’t have to be distant and detached from the situation;
  • it involves continuous evaluation and modifications can be made easily as the project progresses;
  • there are opportunities for theory to emerge from the research rather than always follow a previously formulated theory;
  • the study can lead to open-ended outcomes;
  • through action research, a researcher can bring a story to life.

Action Research Copyright © by J. Spencer Clark; Suzanne Porath; Julie Thiele; and Morgan Jobe is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License , except where otherwise noted.


Conducting an educational research study is an intensive but intensely rewarding process. The following tutorial provides step-by-step guidance for conducting an educational research study according to the University of Jos guidelines. These guidelines can be slightly modified for other educational research studies. Other social science researchers can also use these guidelines by adapting the general principles of research methods to their own domain of study.

I highly recommend that each researcher start by reading the Overview of Scientific Research to become clear on the general principles of educational research. If you have already started your project and need advice about a particular step, you can then click on that step. If you are just starting a research study, click NEXT to review the entire research process before you begin.

If you have any questions, comments, corrections, or errors, please email me at [email protected]

  • Purpose of educational research
  • Philosophy of research
  • Ethical considerations of conducting research
  • Introduction to the Research Process
  • Overview of writing a research study
  • Brainstorm research ideas
  • Interesting Educational Variables to Consider
  • Identify key variables and research design
  • Write Purposes, Research Questions, and Hypotheses based on key variables and research design
  • Write the Research Design section to describe the selected design
  • Consider the population that is in line with your purposes
  • Based on logistical considerations, select the sampling procedure
  • Write the Population, Sample, and Sampling Technique sections
  • Search the literature to find other studies about your key variables
  • Choose an appropriate instrument for each key variable
  • Adopting or adapting pre-existing instruments
  • Writing Questionnaire Items
  • Developing the Instrument Format
  • Evaluating the Reliability of the Instrument (a simple reliability calculation is sketched after this list)
  • Evaluating the Validity of the Instrument
  • Write Instruments section to describe the instruments
  • For experimental and quasi-experimental studies, develop a treatment that should influence the dependent variables
  • Write the Method of Data Collection section to describe the treatment and how the instruments will be administered
  • Write the Method of Data Analysis section to describe the appropriate statistics based on the research questions and hypotheses
  • Write the rest of Chapter 1, Introduction, focusing on the key variables in the purposes
  • Write Chapter 2, Review of Relevant Literature, focusing on the key variables in the purposes
  • Collect data, strictly following the procedures described in Chapter 3, Methods
  • Analyze data according to the procedures described in Method of Data Analysis
  • Code data from the instrument
  • Calculate descriptive statistics
  • Conduct statistics to analyze Research Questions and Research Hypotheses
  • Create tables and figures
  • Write Chapter 4, Results
  • Write Chapter 5, Conclusion
  • Create Supplementary Materials
  • Implement the lessons learned from the research study in education
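
For the instrument-reliability step referenced above, the sketch below shows one common approach: estimating internal consistency with Cronbach's alpha from a small response matrix. The data, and the 0.70 rule of thumb mentioned in the comment, are illustrative assumptions rather than requirements of the UniJos guidelines.

```python
# Illustrative sketch: Cronbach's alpha for a short questionnaire.
# Each row is one (hypothetical) respondent; each column is one item scored 1-5.
from statistics import pvariance

responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
]

k = len(responses[0])  # number of items
item_variances = [pvariance([row[i] for row in responses]) for i in range(k)]
total_variance = pvariance([sum(row) for row in responses])  # variance of total scores

alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")  # values above roughly 0.70 are often treated as acceptable
```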

For more references, here are general guidelines on writing each portion of a thesis according to the UniJos Faculty of Education guidelines. This page links to PowerPoint presentations that describe each aspect of writing and conducting a research study for a thesis, as well as three step-by-step examples of conducting a research study.



Copyright 2013, Katrina A. Korb, All Rights Reserved


Using Research and Reason in Education: How Teachers Can Use Scientifically Based Research to Make Curricular & Instructional Decisions

Paula J. Stanovich and Keith E. Stanovich
University of Toronto

Produced by RMC Research Corporation, Portsmouth, New Hampshire

This publication was produced under National Institute for Literacy Contract No. ED-00CO-0093 with RMC Research Corporation. Sandra Baxter served as the contracting officer's technical representative. The views expressed herein do not necessarily represent the policies of the National Institute for Literacy. No official endorsement by the National Institute for Literacy or any product, commodity, service, or enterprise is intended or should be inferred.

The National Institute for Literacy

Sandra Baxter, Interim Executive Director
Lynn Reddy, Communications Director

To order copies of this booklet, contact the National Institute for Literacy at EdPubs, PO Box 1398, Jessup, MD 20794-1398. Call 800-228-8813 or email [email protected] .

The National Institute for Literacy, an independent federal organization, supports the development of high quality state, regional, and national literacy services so that all Americans can develop the literacy skills they need to succeed at work, at home, and in the community.

The Partnership for Reading, a project administered by the National Institute for Literacy, is a collaborative effort of the National Institute for Literacy, the National Institute of Child Health and Human Development, the U.S. Department of Education, and the U.S. Department of Health and Human Services to make evidence-based reading research available to educators, parents, policy makers, and others with an interest in helping all people learn to read well.

Editorial support provided by C. Ralph Adler and Elizabeth Goldman, and design/production support provided by Diane Draper and Bob Kozman, all of RMC Research Corporation.

Introduction

In the recent move toward standards-based reform in public education, many educational reform efforts require schools to demonstrate that they are achieving educational outcomes with students performing at a required level of achievement. Federal and state legislation, in particular, has codified this standards-based movement and tied funding and other incentives to student achievement.

At first, demonstrating student learning may seem like a simple task, but reflection reveals that it is a complex challenge requiring educators to use specific knowledge and skills. Standards-based reform has many curricular and instructional prerequisites. The curriculum must represent the most important knowledge, skills, and attributes that schools want their students to acquire because these learning outcomes will serve as the basis of assessment instruments. Likewise, instructional methods should be appropriate for the designed curriculum. Teaching methods should lead to students learning the outcomes that are the focus of the assessment standards.

Standards- and assessment-based educational reforms seek to obligate schools and teachers to supply evidence that their instructional methods are effective. But testing is only one of three ways to gather evidence about the effectiveness of instructional methods. Evidence of instructional effectiveness can come from any of the following sources:

  • Demonstrated student achievement in formal testing situations implemented by the teacher, school district, or state;
  • Published findings of research-based evidence that the instructional methods being used by teachers lead to student achievement; or
  • Proof of reason-based practice that converges with a research-based consensus in the scientific literature. This type of justification of educational practice becomes important when direct evidence may be lacking (a direct test of the instructional efficacy of a particular method is absent), but there is a theoretical link to research-based evidence that can be traced.

Each of these methods has its pluses and minuses. While testing seems the most straightforward, it is not necessarily the clear indicator of good educational practice that the public seems to think it is. The meaning of test results is often not immediately clear. For example, comparing averages or other indicators of overall performance from tests across classrooms, schools, or school districts takes no account of the resources and support provided to a school, school district, or individual professional. Poor outcomes do not necessarily indict the efforts of physicians in Third World countries who work with substandard equipment and supplies. Likewise, objective evidence of below-grade or below-standard mean performance of a group of students should not necessarily indict their teachers if essential resources and supports (e.g., curriculum materials, institutional aid, parental cooperation) to support teaching efforts were lacking. However, the extent to which children could learn effectively even in under-equipped schools is not known because evidence-based practices are, by and large, not implemented. That is, there is evidence that children experiencing academic difficulties can achieve more educationally if they are taught with effective methods; sadly, scientific research about what works does not usually find its way into most classrooms.

Testing provides a useful professional calibrator, but it requires great contextual sensitivity in interpretation. It is not the entire solution for assessing the quality of instructional efforts. This is why research-based and reason-based educational practice are also crucial for determining the quality and impact of programs. Teachers thus have the responsibility to be effective users and interpreters of research. Providing a survey and synthesis of the most effective practices for a variety of key curriculum goals (such as literacy and numeracy) would seem to be a helpful idea, but no document could provide all of that information. (Many excellent research syntheses exist, such as the National Reading Panel, 2000; Snow, Burns, & Griffin, 1998; Swanson, 1999, but the knowledge base about effective educational practices is constantly being updated, and many issues remain to be settled.)

As professionals, teachers can become more effective and powerful by developing the skills to recognize scientifically based practice and, when the evidence is not available, use some basic research concepts to draw conclusions on their own. This paper offers a primer for those skills that will allow teachers to become independent evaluators of educational research.

The Formal Scientific Method and Scientific Thinking in Educational Practice

When you go to your family physician with a medical complaint, you expect that the recommended treatment has proven to be effective with many other patients who have had the same symptoms. You may even ask why a particular medication is being recommended for you. The doctor may summarize the background knowledge that led to that recommendation and very likely will cite summary evidence from the drug's many clinical trials and perhaps even give you an overview of the theory behind the drug's success in treating symptoms like yours.

All of this discussion will probably occur in rather simple terms, but that does not obscure the fact that the doctor has provided you with data to support a theory about your complaint and its treatment. The doctor has shared knowledge of medical science with you. And while everyone would agree that the practice of medicine has its "artful" components (for example, the creation of a healing relationship between doctor and patient), we have come to expect and depend upon the scientific foundation that underpins even the artful aspects of medical treatment. Even when we do not ask our doctors specifically for the data, we assume it is there, supporting our course of treatment.

Actually, Vaughn and Dammann (2001) have argued that the correct analogy is to say that teaching is in part a craft, rather than an art. They point out that craft knowledge is superior to alternative forms of knowledge such as superstition and folklore because, among other things, craft knowledge is compatible with scientific knowledge and can be more easily integrated with it. One could argue that in this age of education reform and accountability, educators are being asked to demonstrate that their craft has been integrated with science--that their instructional models, methods, and materials can be likened to the evidence a physician should be able to produce showing that a specific treatment will be effective. As with medicine, constructing teaching practice on a firm scientific foundation does not mean denying the craft aspects of teaching.

Architecture is another professional practice that, like medicine and education, grew from being purely a craft to a craft based firmly on a scientific foundation. Architects wish to design beautiful buildings and environments, but they must also apply many foundational principles of engineering and adhere to structural principles. If they do not, their buildings, however beautiful they may be, will not stand. Similarly, a teacher seeks to design lessons that stimulate students and entice them to learn--lessons that are sometimes a beauty to behold. But if the lessons are not based in the science of pedagogy, they, like poorly constructed buildings, will fail.

Education is informed by formal scientific research through the use of archival research-based knowledge such as that found in peer-reviewed educational journals. Preservice teachers are first exposed to the formal scientific research in their university teacher preparation courses (it is hoped), through the instruction received from their professors, and in their course readings (e.g., textbooks, journal articles). Practicing teachers continue their exposure to the results of formal scientific research by subscribing to and reading professional journals, by enrolling in graduate programs, and by becoming lifelong learners.

Scientific thinking in practice is what characterizes reflective teachers--those who inquire into their own practice and who examine their own classrooms to find out what works best for them and their students. What follows in this document is, first, a "short course" on how to become an effective consumer of the archival literature that results from the conduct of formal scientific research in education and, second, a section describing how teachers can think scientifically in their ongoing reflection about their classroom practice.

Being able to access mechanisms that evaluate claims about teaching methods and to recognize scientific research and its findings is especially important for teachers because they are often confronted with the view that "anything goes" in the field of education--that there is no such thing as best practice in education, that there are no ways to verify what works best, that teachers should base their practice on intuition, or that the latest fad must be the best way to teach, please a principal, or address local school reform. The "anything goes" mentality actually represents a threat to teachers' professional autonomy. It provides a fertile environment for gurus to sell untested educational "remedies" that are not supported by an established research base.

Teachers as independent evaluators of research evidence

One factor that has impeded teachers from being active and effective consumers of educational science has been a lack of orientation and training in how to understand the scientific process and how that process results in the cumulative growth of knowledge that leads to validated educational practice. Educators have only recently attempted to resolve educational disputes scientifically, and teachers have not yet been armed with the skills to evaluate disputes on their own.

Educational practice has suffered greatly because its dominant model for resolving or adjudicating disputes has been more political (with its corresponding factions and interest groups) than scientific. The field's failure to ground practice in the attitudes and values of science has made educators susceptible to the "authority syndrome" as well as fads and gimmicks that ignore evidence-based practice.

When our ancestors needed information about how to act, they would ask their elders and other wise people. Contemporary society and culture are much more complex. Mass communication allows virtually anyone (on the Internet, through self-help books) to proffer advice, to appear to be a "wise elder." The current problem is how to sift through the avalanche of misguided and uninformed advice to find genuine knowledge. Our problem is not information; we have tons of information. What we need are quality control mechanisms.

Peer-reviewed research journals in various disciplines provide those mechanisms. However, even with mechanisms like these in behavioral science and education, it is all too easy to do an "end run" around the quality control they provide. Powerful information dissemination outlets such as publishing houses and mass media frequently do not discriminate between good and bad information. This provides a fertile environment for gurus to sell untested educational "remedies" that are not supported by an established research base and, often, to discredit science, scientific evidence, and the notion of research-based best practice in education. As Gersten (2001) notes, both seasoned and novice teachers are "deluged with misinformation" (p. 45).

We need tools for evaluating the credibility of these many and varied sources of information; the ability to recognize research-based conclusions is especially important. Acquiring those tools means understanding scientific values and learning methods for making inferences from the research evidence that arises through the scientific process. These values and methods were recently summarized by a panel of the National Academy of Sciences convened on scientific inquiry in education (Shavelson & Towne, 2002), and our discussion here will be completely consistent with the conclusions of that NAS panel.

The scientific criteria for evaluating knowledge claims are not complicated and could easily be included in initial teacher preparation programs, but they usually are not (which deprives teachers of the opportunity to become more efficient and autonomous in their work right at the beginning of their careers). These criteria include:

  • the publication of findings in refereed journals (scientific publications that employ a process of peer review),
  • the replication of the results by other investigators, and
  • a consensus within a particular research community on whether there is a critical mass of studies that point toward a particular conclusion.

In their discussion of the evolution of the American Educational Research Association (AERA) conference and the importance of separating research evidence from opinion when making decisions about instructional practice, Levin and O'Donnell (2000) highlight the importance of enabling teachers to become independent evaluators of research evidence. Being aware of the importance of research published in peer-reviewed scientific journals is only the first step because this represents only the most minimal of criteria. Following is a review of some of the principles of research-based evaluation that teachers will find useful in their work.

Publicly verifiable research conclusions: Replication and Peer Review

Source credibility: the consumer protection of peer-reviewed journals.

The front line of defense for teachers against incorrect information in education is the existence of peer-reviewed journals in education, psychology, and other related social sciences. These journals publish empirical research on topics relevant to classroom practice and human cognition and learning. They are the first place that teachers should look for evidence of validated instructional practices.

As a general quality control mechanism, peer-reviewed journals provide a "first pass" filter that teachers can use to evaluate the plausibility of educational claims. To put it more concretely, one ironclad criterion that will always work for teachers when presented with claims of uncertain validity is the question: Have findings supporting this method been published in recognized scientific journals that use some type of peer review procedure? The answer to this question will almost always separate pseudoscientific claims from the real thing.

In peer review, authors submit a paper to a journal for publication, where it is critiqued by several scientists. The critiques are reviewed by an editor (usually a scientist with an extensive history of work in the specialty area covered by the journal). The editor then decides whether the weight of opinion warrants immediate publication, publication after further experimentation and statistical analysis, or rejection because the research is flawed or does not add to the knowledge base. Most journals carry a statement of editorial policy outlining their exact procedures for publication, so it is easy to check whether a journal is, in fact, peer reviewed.

Peer review is a minimal criterion, not a stringent one. Not all information in peer-reviewed scientific journals is necessarily correct, but it has at the very least undergone a cycle of peer criticism and scrutiny. However, it is because the presence of peer-reviewed research is such a minimal criterion that its absence becomes so diagnostic. The failure of an idea, a theory, an educational practice, behavioral therapy, or a remediation technique to have adequate documentation in the peer-reviewed literature of a scientific discipline is a very strong indication to be wary of the practice.

The mechanisms of peer review vary somewhat from discipline to discipline, but the underlying rationale is the same. Peer review is one way (replication of a research finding is another) that science institutionalizes the attitudes of objectivity and public criticism. Ideas and experimentation undergo a honing process in which they are submitted to other critical minds for evaluation. Ideas that survive this critical process have begun to meet the criterion of public verifiability. The peer review process is far from perfect, but it really is the only external consumer protection that teachers have.

The history of reading instruction illustrates the high cost that is paid when the peer-reviewed literature is ignored, when the normal processes of scientific adjudication are replaced with political debates and rhetorical posturing. A vast literature has been generated on best practices that foster children's reading acquisition (Adams, 1990; Anderson, Hiebert, Scott, & Wilkinson, 1985; Chard & Osborn, 1999; Cunningham & Allington, 1994; Ehri, Nunes, Stahl, & Willows, 2001; Moats, 1999; National Reading Panel, 2000; Pearson, 1993; Pressley, 1998; Pressley, Rankin, & Yokol, 1996; Rayner, Foorman, Perfetti, Pesetsky, & Seidenberg, 2002; Reading Coherence Initiative, 1999; Snow, Burns, & Griffin, 1998; Spear-Swerling & Sternberg, 2001). Yet much of this literature remains unknown to many teachers, contributing to the frustrating lack of clarity about accepted, scientifically validated findings and conclusions on reading acquisition.

Teachers should also be forewarned about the difference between professional education journals that are magazines of opinion and journals in which primary reports of research, or reviews of research, are peer reviewed. For example, the magazines Phi Delta Kappan and Educational Leadership both contain stimulating discussions of educational issues, but neither is a peer-reviewed journal of original research. In contrast, the American Educational Research Journal (a flagship journal of the AERA) and the Journal of Educational Psychology (a flagship journal of the American Psychological Association) are both peer-reviewed journals of original research. Both are main sources for evidence on validated techniques of reading instruction and for research on aspects of the reading process that are relevant to a teacher's instructional decisions.

This is true, too, of presentations at conferences of educational organizations. Some are data-based presentations of original research. Others are speeches reflecting personal opinion about educational problems. While these talks can be stimulating and informative, they are not a substitute for empirical research on educational effectiveness.

Replication and the importance of public verifiability.

Research-based conclusions about educational practice are public in an important sense: they do not exist solely in the mind of a particular individual but have been submitted to the scientific community for criticism and empirical testing by others. Knowledge considered "special"--the province of the thought of an individual and immune from scrutiny and criticism by others--can never have the status of scientific knowledge. Research-based conclusions, when published in a peer reviewed journal, become part of the public realm, available to all, in a way that claims of "special expertise" are not.

Replication is the second way that science makes research-based conclusions concrete and "public." In order to be considered scientific, a research finding must be presented to other researchers in the scientific community in a way that enables them to attempt the same experiment and obtain the same results. When the same results occur, the finding has been replicated. This process ensures that a finding is not the result of the errors or biases of a particular investigator. Replicable findings become part of the converging evidence that forms the basis of a research-based conclusion about educational practice.

John Donne told us that "no man is an island." Similarly, in science, no researcher is an island. Each investigator is connected to the research community and its knowledge base. This interconnection enables science to grow cumulatively and research-based educational practice to be built on a convergence of knowledge from a variety of sources. Researchers constantly build on previous knowledge in order to go beyond what is currently known. This process is possible only if research findings are presented in such a way that any investigator can use them to build on.

Philosopher Daniel Dennett (1995) has said that science is "making mistakes in public. Making mistakes for all to see, in the hopes of getting the others to help with the corrections" (p. 380). We might ask those proposing an educational innovation for the evidence that they have in fact "made some mistakes in public." Legitimate scientific disciplines can easily provide such evidence. For example, scientists studying the psychology of reading once thought that reading difficulties were caused by faulty eye movements. This hypothesis has been shown to be in error, as has another that followed it, that so-called visual reversal errors were a major cause of reading difficulty. Both hypotheses were found not to square with the empirical evidence (Rayner, 1998; Share & Stanovich, 1995). The hypothesis that reading difficulties can be related to language difficulties at the phonological level has received much more support (Liberman, 1999; National Reading Panel, 2000; Rayner, Foorman, Perfetti, Pesetsky, & Seidenberg, 2002; Shankweiler, 1999; Stanovich, 2000).

After making a few such "errors" in public, reading scientists have begun, in the last 20 years, to get it right. But the only reason teachers can have confidence that researchers are now "getting it right" is that researchers made it open, public knowledge when they got things wrong. Proponents of untested and pseudoscientific educational practices will never point to cases where they "got it wrong" because they are not committed to public knowledge in the way that actual science is. These proponents do not need, as Dennett says, "to get others to help in making the corrections" because they have no intention of correcting their beliefs and prescriptions based on empirical evidence.

Education is so susceptible to fads and unproven practices because of its tacit endorsement of a personalistic view of knowledge acquisition--one that is antithetical to the scientific value of the public verifiability of knowledge claims. Many educators believe that knowledge resides within particular individuals--with particularly elite insights--who then must be called upon to dispense this knowledge to others. Indeed, some educators reject public, depersonalized knowledge in social science because they believe it dehumanizes people. Science, however, with its conception of publicly verifiable knowledge, actually democratizes knowledge. It frees practitioners and researchers from slavish dependence on authority.

Subjective, personalized views of knowledge degrade the human intellect by creating conditions that subjugate it to an elite whose "personal" knowledge is not accessible to all (Bronowski, 1956, 1977; Dawkins, 1998; Gross, Levitt, & Lewis, 1997; Medawar, 1982, 1984, 1990; Popper, 1972; Wilson, 1998). Empirical science, by generating knowledge and moving it into the public domain, is a liberating force. Teachers can consult the research and decide for themselves whether the state of the literature is as the expert portrays it. All teachers can benefit from some rudimentary grounding in the most fundamental principles of scientific inference. With knowledge of a few uncomplicated research principles, such as control, manipulation, and randomization, anyone can enter the open, public discourse about empirical findings. In fact, with the exception of a few select areas such as the eye movement research mentioned previously, much of the work described in noted summaries of reading research (e.g., Adams, 1990; Snow, Burns, & Griffin, 1998) could easily be replicated by teachers themselves.

There are many ways that the criteria of replication and peer review can be utilized in education to base practitioner training on research-based best practice. Take continuing teacher education in the form of inservice sessions, for example. Teachers and principals who select speakers for professional development activities should ask speakers for the sources of their conclusions in the form of research evidence in peer-reviewed journals. They should ask speakers for bibliographies of the research evidence published on the practices recommended in their presentations.

The science behind research-based practice relies on systematic empiricism

Empiricism is the practice of relying on observation: scientists find out about the world by examining it. This was not always the accepted route to knowledge; it was long believed that knowledge was best obtained through pure thought or by appealing to authority. The refusal of some of Galileo's contemporaries to look into his telescope is an example of how empiricism has been ignored at certain points in history. Galileo claimed to have seen moons around the planet Jupiter. Another scholar, Francesco Sizi, attempted to refute Galileo, not with observations, but with the following argument:

There are seven windows in the head, two nostrils, two ears, two eyes and a mouth; so in the heavens there are two favorable stars, two unpropitious, two luminaries, and Mercury alone undecided and indifferent. From which and many other similar phenomena of nature such as the seven metals, etc., which it were tedious to enumerate, we gather that the number of planets is necessarily seven...ancient nations, as well as modern Europeans, have adopted the division of the week into seven days, and have named them from the seven planets; now if we increase the number of planets, this whole system falls to the ground...moreover, the satellites are invisible to the naked eye and therefore can have no influence on the earth and therefore would be useless and therefore do not exist. (Holton & Roller, 1958, p. 160)

Three centuries of the demonstrated power of the empirical approach give us an edge on poor Sizi. Take away those years of empiricism, and many of us might have been there nodding our heads and urging him on. In fact, the empirical approach is not necessarily obvious, which is why we often have to teach it, even in a society that is dominated by science.

Empiricism pure and simple is not enough, however. Observation itself is fine and necessary, but pure, unstructured observation of the natural world will not lead to scientific knowledge. Write down every observation you make from the time you get up in the morning to the time you go to bed on a given day. When you finish, you will have a great number of facts, but you will not have a greater understanding of the world. Scientific observation is termed systematic because it is structured so that the results of the observation reveal something about the underlying causal structure of events in the world. Observations are structured so that, depending upon the outcome of the observation, some theories of the causes of the outcome are supported and others rejected.

Teachers can benefit by understanding two things about research and causal inferences. The first is the simple (but sometimes obscured) fact that statements about best instructional practices are statements that contain a causal claim. These statements claim that one type of method or practice causes superior educational outcomes. Second, teachers must understand how the logic of the experimental method provides the critical support for making causal inferences.

Science addresses testable questions

Science advances by positing theories to account for particular phenomena in the world, by deriving predictions from these theories, by testing the predictions empirically, and by modifying the theories based on the tests (the sequence is typically theory -> prediction -> test -> theory modification). What makes a theory testable? A theory must have specific implications for observable events in the natural world.

Science deals only with a certain class of problem: the kind that is empirically solvable. That does not mean that different classes of problems are inherently solvable or unsolvable and that this division is fixed forever. Quite the contrary: some problems that are currently unsolvable may become solvable as theory and empirical techniques become more sophisticated. For example, decades ago historians would not have believed that the controversial issue of whether Thomas Jefferson had a child with his slave Sally Hemings was an empirically solvable question. Yet, by 1998, this problem had become solvable through advances in genetic technology, and a paper was published in the journal Nature (Foster, Jobling, Taylor, Donnelly, de Knijff, Mieremet, Zerjal, & Tyler-Smith, 1998) on the question.

The criterion of whether a problem is "testable" is called the falsifiability criterion: a scientific theory must always be stated in such a way that the predictions derived from it can potentially be shown to be false. The falsifiability criterion states that, for a theory to be useful, the predictions drawn from it must be specific. The theory must go out on a limb, so to speak, because in telling us what should happen, the theory must also imply that certain things will not happen. If these latter things do happen, it is a clear signal that something is wrong with the theory. It may need to be modified, or we may need to look for an entirely new theory. Either way, we will end up with a theory that is closer to the truth.

In contrast, if a theory does not rule out any possible observations, then the theory can never be changed, and we are frozen into our current way of thinking with no possibility of progress. A successful theory cannot posit or account for every possible happening. Such a theory robs itself of any predictive power.

What we are talking about here is a certain type of intellectual honesty. In science, the proponent of a theory is always asked to address this question before the data are collected: "What data pattern would cause you to give up, or at least to alter, this theory?" In the same way, the falsifiability criterion is a useful consumer protection for the teacher when evaluating claims of educational effectiveness. Proponents of an educational practice should be asked for evidence; they should also be willing to admit that contrary data will lead them to abandon the practice. True scientific knowledge is held tentatively and is subject to change based on contrary evidence. Educational remedies not based on scientific evidence will often fail to put themselves at risk by specifying what data patterns would prove them false.

Objectivity and intellectual honesty

Objectivity, another form of intellectual honesty in research, means that we let nature "speak for itself" without imposing our wishes on it--that we report the results of experimentation as accurately as we can and that we interpret them as fairly as possible. (The fact that this goal is unattainable for any single human being should not dissuade us from holding objectivity as a value.)

In the language of the general public, open-mindedness means being open to possible theories and explanations for a particular phenomenon. But in science it means that and something more. Philosopher Jonathan Adler (1998) teaches us that science values another aspect of open-mindedness even more highly: "What truly marks an open-minded person is the willingness to follow where evidence leads. The open-minded person is willing to defer to impartial investigations rather than to his own predilections...Scientific method is attunement to the world, not to ourselves" (p. 44).

Objectivity is critical to the process of science, but it does not mean that such attitudes must characterize each and every scientist for science as a whole to work. Jacob Bronowski (1973, 1977) often argued that the unique power of science to reveal knowledge about the world does not arise because scientists are uniquely virtuous (that they are completely objective or that they are never biased in interpreting findings, for example). It arises because fallible scientists are immersed in a process of checks and balances--a process in which scientists are always there to criticize and to root out errors. Philosopher Daniel Dennett (1999/2000) points out that "scientists take themselves to be just as weak and fallible as anybody else, but recognizing those very sources of error in themselves...they have devised elaborate systems to tie their own hands, forcibly preventing their frailties and prejudices from infecting their results" (p. 42). More humorously, psychologist Ray Nickerson (1998) makes the related point that the vanities of scientists are actually put to use by the scientific process, by noting that it is "not so much the critical attitude that individual scientists have taken with respect to their own ideas that has given science its success...but more the fact that individual scientists have been highly motivated to demonstrate that hypotheses that are held by some other scientists are false" (p. 32). These authors suggest that the strength of scientific knowledge comes not from the virtue of individual scientists but from the social process in which scientists constantly cross-check each other's knowledge and conclusions.

The public criteria of peer review and replication of findings exist in part to keep checks on the objectivity of individual scientists. Individuals cannot hide bias and nonobjectivity by personalizing their claims and keeping them from public scrutiny. Science does not accept findings that have failed the tests of replication and peer review precisely because it wants to ensure that all findings in science are in the public domain, as defined above. Purveyors of pseudoscientific educational practices fail the test of objectivity and are often identifiable by their attempts to do an "end run" around the public mechanisms of science by avoiding established peer review mechanisms and the information-sharing mechanisms that make replication possible. Instead, they attempt to promulgate their findings directly to consumers, such as teachers.

The principle of converging evidence

The principle of converging evidence has been well illustrated in the controversies surrounding the teaching of reading. The methods of systematic empiricism employed in the study of reading acquisition are many and varied. They include case studies, correlational studies, experimental studies, narratives, quasi-experimental studies, surveys, epidemiological studies and many others. The results of many of these studies have been synthesized in several important research syntheses (Adams, 1990; Ehri et al., 2001; National Reading Panel, 2000; Pressley, 1998; Rayner et al., 2002; Reading Coherence Initiative, 1999; Share & Stanovich, 1995; Snow, Burns, & Griffin, 1998; Snowling, 2000; Spear-Swerling & Sternberg, 2001; Stanovich, 2000). These studies were used in a process of establishing converging evidence, a principle that governs the drawing of the conclusion that a particular educational practice is research-based.

The principle of converging evidence is applied in situations requiring a judgment about where the "preponderance of evidence" points. Most areas of science contain competing theories. The extent to which a particular study can be seen as uniquely supporting one particular theory depends on whether other competing explanations have been ruled out. A particular experimental result is never equally relevant to all competing theories. An experiment may be a very strong test of one or two alternative theories but a weak test of others. Thus, research is considered highly convergent when a series of experiments consistently supports a given theory while collectively eliminating the most important competing explanations. Although no single experiment can rule out all alternative explanations, taken collectively, a series of partially diagnostic experiments can lead to a strong conclusion if the data converge.

Contrast this idea of converging evidence with the mistaken view that a problem in science can be solved with a single, crucial experiment, or that a single critical insight can advance theory and overturn all previous knowledge. This view of scientific progress fits nicely with the operation of the news media, in which history is tracked by presenting separate, disconnected "events" in bite-sized units. This is a gross misunderstanding of scientific progress and, if taken too seriously, leads to misconceptions about how conclusions are reached about research-based practices.

One experiment rarely decides an issue, supporting one theory and ruling out all others. Issues are most often decided when the community of scientists gradually begins to agree that the preponderance of evidence supports one alternative theory rather than another. Scientists do not evaluate data from a single experiment that has finally been designed in the perfect way. They most often evaluate data from dozens of experiments, each containing some flaws but providing part of the answer.

Although there are many ways in which an experiment can go wrong (or become confounded), a scientist with experience working on a particular problem usually has a good idea of what most of the critical factors are, and there are usually only a few. The idea of converging evidence tells us to examine the pattern of flaws running through the research literature because the nature of this pattern can either support or undermine the conclusions that we might draw.

For example, suppose that the findings from a number of different experiments were largely consistent in supporting a particular conclusion. Given the imperfect nature of experiments, we would evaluate the extent and nature of the flaws in these studies. If all the experiments were flawed in a similar way, this circumstance would undermine confidence in the conclusions drawn from them because the consistency of the outcome may simply have resulted from a particular, consistent flaw. On the other hand, if all the experiments were flawed in different ways, our confidence in the conclusions increases because it is less likely that the consistency in the results was due to a contaminating factor that confounded all the experiments. As Anderson and Anderson (1996) note, "When a conceptual hypothesis survives many potential falsifications based on different sets of assumptions, we have a robust effect" (p. 742).

Suppose that five different theoretical summaries (call them A, B, C, D, and E) of a given set of phenomena exist at one time and are investigated in a series of experiments. Suppose that one set of experiments represents a strong test of theories A, B, and C, and that the data largely refute theories A and B and support C. Imagine also that another set of experiments is a particularly strong test of theories C, D, and E, and that the data largely refute theories D and E and support C. In such a situation, we would have strong converging evidence for theory C. Not only do we have data supportive of theory C, but we have data that contradict its major competitors. Note that no one experiment tests all the theories, but taken together, the entire set of experiments allows a strong inference.

In contrast, if the two sets of experiments each represent strong tests of B, C, and E, and the data strongly support C and refute B and E, the overall support for theory C would be less strong than in our previous example. The reason is that, although data supporting theory C have been generated, there is no strong evidence ruling out two viable alternative theories (A and D). Thus research is highly convergent when a series of experiments consistently supports a given theory while collectively eliminating the most important competing explanations. Although no single experiment can rule out all alternative explanations, taken collectively, a series of partially diagnostic experiments can lead to a strong conclusion if the data converge in the manner of our first example.
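
To make the set logic of this example concrete, here is a minimal sketch in Python (hypothetical code, not part of the original text) that tallies which of the five theories survive the two sets of experiments in the first scenario; a theory is well supported only when its rivals have been both tested and refuted.

    # Toy illustration of converging evidence: each experiment tests only a
    # subset of the theories, but together the results leave one theory standing.
    experiments = [
        {"tested": {"A", "B", "C"}, "refuted": {"A", "B"}},  # first set of experiments
        {"tested": {"C", "D", "E"}, "refuted": {"D", "E"}},  # second set of experiments
    ]
    theories = {"A", "B", "C", "D", "E"}

    refuted = set().union(*(e["refuted"] for e in experiments))
    tested = set().union(*(e["tested"] for e in experiments))

    print("supported, rivals refuted:", theories - refuted)  # {'C'}
    print("never tested at all:", theories - tested)         # set(): strong convergence

In the weaker second scenario, theories A and D would appear in the "never tested" set, signaling that the apparent support for C is less convergent.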

Increasingly, the combining of evidence from disparate studies to form a conclusion is being done more formally by the use of the statistical technique termed meta-analysis (Cooper & Hedges, 1994; Hedges & Olkin, 1985; Hunter & Schmidt, 1990; Rosenthal, 1995; Schmidt, 1992; Swanson, 1999), which has been used extensively to establish whether various medical practices are research based. In a medical context, meta-analysis:

involves adding together the data from many clinical trials to create a single pool of data big enough to eliminate much of the statistical uncertainty that plagues individual trials...The great virtue of meta-analysis is that clear findings can emerge from a group of studies whose findings are scattered all over the map. (Plotkin, 1996, p. 70)

Meta-analysis is used to determine whether educational practices are research based in just the same way that it is used in medicine. The effects obtained when one practice is compared against another are expressed in a common statistical metric that allows comparison of effects across studies. The findings are then statistically amalgamated in some standard ways (Cooper & Hedges, 1994; Hedges & Olkin, 1985; Swanson, 1999), and a conclusion about differential efficacy is reached if the amalgamation process passes certain statistical criteria. In some cases, of course, no conclusion can be drawn with confidence, and the result of the meta-analysis is inconclusive.
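
As a concrete illustration, the sketch below (in Python, with invented study summaries rather than real data) expresses each study's result as a standardized mean difference and then amalgamates the studies with inverse-variance weights, the core logic of a simple fixed-effect meta-analysis; a real meta-analysis would involve many more studies and additional checks, such as tests for heterogeneity and publication bias.

    # A minimal sketch of a fixed-effect meta-analysis using hypothetical data.
    # Each "study" compares a treatment group with a control group; the common
    # metric is the standardized mean difference (Cohen's d with Hedges'
    # small-sample correction), and studies are pooled with inverse-variance weights.
    import math

    # Hypothetical study summaries: (mean_t, sd_t, n_t, mean_c, sd_c, n_c)
    studies = [
        (54.0, 10.0, 30, 48.0, 11.0, 30),
        (60.0, 12.0, 45, 55.0, 12.5, 44),
        (71.0,  9.0, 25, 69.0,  9.5, 27),
    ]

    def study_effect(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
        """Return (effect size g, sampling variance) for one two-group study."""
        pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
        d = (mean_t - mean_c) / pooled_sd
        j = 1 - 3 / (4 * (n_t + n_c) - 9)          # Hedges' small-sample correction
        g = j * d
        var = (n_t + n_c) / (n_t * n_c) + g**2 / (2 * (n_t + n_c))
        return g, var

    effects = [study_effect(*s) for s in studies]
    weights = [1 / var for _, var in effects]       # inverse-variance weights
    pooled = sum(w * g for (g, _), w in zip(effects, weights)) / sum(weights)
    se = math.sqrt(1 / sum(weights))

    print(f"Pooled effect size: {pooled:.2f} "
          f"(95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f})")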

More and more commentators on the educational research literature are calling for a greater emphasis on meta-analysis as a way of dampening the contentious disputes about conflicting studies that plague education and other behavioral sciences (Kavale & Forness, 1995; Rosnow & Rosenthal, 1989; Schmidt, 1996; Stanovich, 2001; Swanson, 1999). The method is useful for ending disputes that seem to be nothing more than a "he-said, she-said" debate. An emphasis on meta-analysis has often revealed that we actually have more stable and useful findings than is apparent from a perusal of the conflicts in our journals.

The National Reading Panel (2000) found just this in their meta-analysis of the evidence surrounding several issues in reading education. For example, they concluded that a meta-analysis of 66 comparisons from 38 different studies indicated "solid support for the conclusion that systematic phonics instruction makes a bigger contribution to children's growth in reading than alternative programs providing unsystematic or no phonics instruction" (p. 2-84). In another section of their report, the National Reading Panel reported that a meta-analysis of 52 studies of phonemic awareness training indicated that "teaching children to manipulate the sounds in language helps them learn to read. Across the various conditions of teaching, testing, and participant characteristics, the effect sizes were all significantly greater than chance and ranged from large to small, with the majority in the moderate range. Effects of phonemic awareness training on reading lasted well beyond the end of training" (p. 2-5).

A statement by a task force of the American Psychological Association (Wilkinson, 1999) on statistical methods in psychology journals provides an apt summary for this section. The task force stated that investigators should not "interpret a single study's results as having importance independent of the effects reported elsewhere in the relevant literature" (p. 602). Science progresses by convergence upon conclusions. The outcomes of one study can only be interpreted in the context of the present state of the convergence on the particular issue in question.

The logic of the experimental method

Scientific thinking is based on the ideas of comparison, control, and manipulation. In a true experimental study, these characteristics of scientific investigation must be arranged to work in concert.

Comparison alone is not enough to justify a causal inference. In methodology texts, correlational investigations (which involve comparison only) are distinguished from true experimental investigations that warrant much stronger causal inferences because they involve comparison, control, and manipulation. The mere existence of a relationship between two variables does not guarantee that changes in one are causing changes in the other. Correlation does not imply causation.

There are two potential problems with drawing causal inferences from correlational evidence. The first is called the third-variable problem. It occurs when the correlation between the two variables does not indicate a direct causal path between them but arises because both variables are related to a third variable that has not even been measured.

The second is called the directionality problem. It creates potential interpretive difficulties because even if two variables have a direct causal relationship, the direction of that relationship is not indicated by the mere presence of the correlation. In short, a correlation between variables A and B could arise because changes in A are causing changes in B or because changes in B are causing changes in A. The mere presence of the correlation does not allow us to decide between these two possibilities.
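
A short simulation with invented data makes the third-variable problem concrete: two variables that share a common cause are substantially correlated even though neither has any causal influence on the other.

    # A small simulation (hypothetical numbers) of the third-variable problem:
    # two variables that share a common cause are correlated even though
    # neither causes the other.
    import random

    random.seed(1)

    def pearson_r(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    # Unmeasured third variable, e.g., amount of home reading support.
    support = [random.gauss(0, 1) for _ in range(1000)]

    # Both variables depend on the third variable plus independent noise;
    # neither has any direct causal path to the other.
    vocabulary = [s + random.gauss(0, 1) for s in support]
    attendance = [s + random.gauss(0, 1) for s in support]

    print(f"r(vocabulary, attendance) = {pearson_r(vocabulary, attendance):.2f}")
    # Prints roughly 0.5, even though changing attendance would not change
    # vocabulary: the correlation reflects the common cause only.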

The heart of the experimental method lies in manipulation and control. In contrast to a correlational study, where the investigator simply observes whether the natural fluctuation in two variables displays a relationship, the investigator in a true experiment manipulates the variable thought to be the cause (the independent variable) and looks for an effect on the variable thought to be the effect (the dependent variable) while holding all other variables constant by control and randomization. This method removes the third-variable problem: in the natural world, many different things are related, and the experimental method may be viewed as a way of prying apart these naturally occurring relationships. It does so because it isolates one particular variable (the hypothesized cause) by manipulating it and holding everything else constant (control).

When manipulation is combined with a procedure known as random assignment (in which the subjects themselves do not determine which experimental condition they will be in but, instead, are randomly assigned to one of the experimental groups), scientists can rule out alternative explanations of data patterns. By using manipulation, experimental control, and random assignment, investigators construct stronger comparisons so that the outcome eliminates alternative theories and explanations.
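
The sketch below, again with invented data, illustrates this logic: the instructional method is the manipulated independent variable, students are randomly assigned to conditions, and the difference between group means therefore estimates the causal effect of the method rather than pre-existing differences between the groups.

    # A minimal sketch of the logic of a true experiment with hypothetical data:
    # the researcher manipulates the independent variable (instructional method)
    # and uses random assignment so that pre-existing differences are spread
    # evenly across groups.
    import random

    random.seed(7)

    # Hypothetical pool of 60 students; "aptitude" stands in for all the
    # pre-existing differences we cannot measure or control directly.
    students = [{"aptitude": random.gauss(100, 15)} for _ in range(60)]

    # Random assignment: the students do not choose their condition.
    random.shuffle(students)
    treatment, control = students[:30], students[30:]

    def post_test(student, method_effect):
        # Outcome depends on aptitude, noise, and the manipulated method.
        return 0.5 * student["aptitude"] + random.gauss(0, 5) + method_effect

    # Manipulation: only the treatment group receives the new method,
    # which (in this simulated world) adds 8 points on average.
    treat_scores = [post_test(s, method_effect=8) for s in treatment]
    ctrl_scores = [post_test(s, method_effect=0) for s in control]

    def mean(xs):
        return sum(xs) / len(xs)

    print(f"treatment mean = {mean(treat_scores):.1f}")
    print(f"control mean   = {mean(ctrl_scores):.1f}")
    # Because assignment was random, the difference in means estimates the
    # causal effect of the method rather than a pre-existing group difference.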

The need for both correlational methods and true experiments

As strong as they are methodologically, studies employing true experimental logic are not the only type that can be used to draw conclusions. Correlational studies have value. The results from many different types of investigation, including correlational studies, can be amalgamated to derive a general conclusion. The basis for the conclusion rests on the convergence observed across the variety of methods used. This is most certainly true in classroom and curriculum research. It is necessary to amalgamate the results not only from experimental investigations, but also from correlational studies, nonequivalent control group studies, time series designs, and various other quasi-experimental and multivariate correlational designs. All have their strengths and weaknesses. For example, it is often (but not always) the case that experimental investigations are high in internal validity but limited in external validity, whereas correlational studies are often high in external validity but low in internal validity.

Internal validity concerns whether we can infer a causal effect for a particular variable. The more a study employs the logic of a true experiment (i.e., includes manipulation, control, and randomization), the more we can make a strong causal inference. External validity concerns the generalizability of the conclusion to the population and setting of interest. Internal and external validity are often traded off across different methodologies. Experimental laboratory investigations are high in internal validity but may not fully address concerns about external validity. Field classroom investigations, on the other hand, are often quite high in external validity but, because of the logistical difficulties involved in carrying them out, they are often quite low in internal validity. That is why we need to look for a convergence of results, not just consistency from one method. Convergence increases our confidence in the external and internal validity of our conclusions.

Again, this underscores why correlational studies can contribute to knowledge. First, some variables simply cannot be manipulated for ethical reasons (for instance, human malnutrition or physical disabilities). Other variables, such as birth order, sex, and age, are inherently correlational because they cannot be manipulated, and therefore the scientific knowledge concerning them must be based on correlational evidence. Finally, logistical difficulties in classroom and curriculum research often make it impossible to achieve the logic of the true experiment. However, this circumstance is not unique to educational or psychological research. Astronomers obviously cannot manipulate all the variables affecting the objects they study, yet they are able to arrive at conclusions.

Complex correlational techniques are essential in the absence of experimental research because statistics such as multiple regression, path analysis, and structural equation modeling allow for the partial control of third variables when those variables can be measured. These statistics allow us to recalculate the correlation between two variables after the influence of other variables is removed. If a potential third variable can be measured, complex correlational statistics can help us determine whether that third variable is determining the relationship. These correlational statistics and designs help to rule out certain causal hypotheses, even if they cannot demonstrate the true causal relation definitively.
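
For illustration, the sketch below uses invented data and the standard partial-correlation formula, the simplest member of this family of techniques, to show how the relationship between two variables shrinks once a measured third variable is statistically removed.

    # A minimal sketch of statistically "removing" a measured third variable with
    # a partial correlation (hypothetical data). The same idea underlies multiple
    # regression, path analysis, and structural equation modeling.
    import random

    random.seed(3)

    def pearson_r(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    def partial_r(x, y, z):
        """Correlation of x and y after the linear influence of z is removed."""
        rxy, rxz, ryz = pearson_r(x, y), pearson_r(x, z), pearson_r(y, z)
        return (rxy - rxz * ryz) / (((1 - rxz**2) * (1 - ryz**2)) ** 0.5)

    # Hypothetical data in which a measured third variable drives both outcomes.
    z = [random.gauss(0, 1) for _ in range(2000)]
    x = [v + random.gauss(0, 1) for v in z]
    y = [v + random.gauss(0, 1) for v in z]

    print(f"zero-order r(x, y)  = {pearson_r(x, y):.2f}")    # about 0.5
    print(f"partial r(x, y | z) = {partial_r(x, y, z):.2f}") # near 0.0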

Stages of scientific investigation: The Role of Case Studies and Qualitative Investigations

The educational literature includes many qualitative investigations that focus less on issues of causal explanation and variable control and more on thick description, in the manner of the anthropologist (Geertz, 1973, 1979). The context of a person's behavior is described as much as possible from the standpoint of the participant. Many different fields (e.g., anthropology, psychology, education) contain case studies where the focus is detailed description and contextualization of the situation of a single participant (or very few participants).

The usefulness of case studies and qualitative investigations is strongly determined by how far scientific investigation has advanced in a particular area. The insights gained from case studies or qualitative investigations may be quite useful in the early stages of an investigation of a certain problem. They can help us determine which variables deserve more intense study by drawing attention to heretofore unrecognized aspects of a person's behavior and by suggesting how understanding of behavior might be sharpened by incorporating the participant's perspective.

However, when we move from the early stages of scientific investigation, where case studies may be very useful, to the more mature stages of theory testing--where adjudicating between causal explanations is the main task--the situation changes drastically. Case studies and qualitative description are not useful at the later stages of scientific investigation because they cannot be used to confirm or disconfirm a particular causal theory. They lack the comparative information necessary to rule out alternative explanations.

Where qualitative investigations are useful relates strongly to a distinction in philosophy of science between the context of discovery and the context of justification. Qualitative research, case studies, and clinical observations support a context of discovery where, as Levin and O'Donnell (2000) note in an educational context, such research must be regarded as "preliminary/exploratory, observational, hypothesis generating" (p. 26). They rightly point to the essential importance of qualitative investigations because "in the early stages of inquiry into a research topic, one has to look before one can leap into designing interventions, making predictions, or testing hypotheses" (p. 26). The orientation provided by qualitative investigations is critical in such cases. Even more important, the results of quantitative investigations--which must sometimes abstract away some of the contextual features of a situation--are often contextualized by the thick situational description provided by qualitative work.

However, in the context of justification, variables must be measured precisely, large groups must be tested to make sure the conclusion generalizes and, most importantly, many variables must be controlled because alternative causal explanations must be ruled out. Gersten (2001) summarizes the value of qualitative research accurately when he says that "despite the rich insights they often provide, descriptive studies cannot be used as evidence for an intervention's efficacy...descriptive research can only suggest innovative strategies to teach students and lay the groundwork for development of such strategies" (p. 47). Qualitative research does, however, help to identify fruitful directions for future experimental studies.

Nevertheless, here is why sole reliance on qualitative techniques to determine the effectiveness of curricula and instructional strategies is problematic. As a researcher, you want to do one of two things.

Objective A

The researcher wishes to make some type of statement about a relationship, however minimal. That is, you at least want to use terms like greater than, less than, or equal to. You want to say that such and such an educational program or practice is better than another. "Better than" and "worse than" are, of course, quantitative statements--and, in the context of issues about what leads to or fosters greater educational achievement, they are causal statements as well. As quantitative causal statements, the support for such claims obviously must be found in the experimental logic that has been outlined above. To justify such statements, you must adhere to the canons of quantitative research logic.

Objective B

The researcher seeks to adhere to an exclusively qualitative path that abjures statements about relationships and never uses comparative terms of magnitude. The investigator desires to simply engage in thick description of a domain that may well prompt hypotheses when later work moves on to the more quantitative methods that are necessary to justify a causal inference.

Investigators pursuing Objective B are doing essential work. They provide quantitative investigators with suggestions for richer hypotheses to study. In education, however, investigators sometimes claim to be pursuing Objective B but slide over into Objective A without realizing they have made a crucial switch. They want to make comparative, or quantitative, statements, but have not carried out the proper types of investigation to justify them. They want to say that a certain educational program is better than another (that is, it causes better school outcomes). They want to give educational strictures that are assumed to hold for a population of students, not just for the single or few individuals who were the objects of the qualitative study. They want to condemn an educational practice (and, by inference, deem an alternative quantitatively and causally better). But instead of taking the necessary course of pursuing Objective A, they carry out their investigation in the manner of Objective B.

Let's recall why the use of single case or qualitative description as evidence in support of a particular causal explanation is inappropriate. The idea of alternative explanations is critical to an understanding of theory testing. The goal of experimental design is to structure events so that support of one particular explanation simultaneously disconfirms other explanations. Scientific progress can occur only if the data that are collected rule out some explanations. Science sets up conditions for the natural selection of ideas. Some survive empirical testing and others do not.

This is the honing process by which ideas are sifted so that those that contain the most truth are found. But there must be selection in this process: data collected as support for a particular theory must not leave many other alternative explanations as equally viable candidates. For this reason, scientists construct control or comparison groups in their experimentation. These groups are formed so that, when their results are compared with those from an experimental group, some alternative explanations are ruled out.

Case studies and qualitative description lack the comparative information necessary to prove that a particular theory or educational practice is superior, because they fail to test an alternative; they rule nothing out. Take the seminal work of Jean Piaget, for example. His case studies were critical in pointing developmental psychology in new and important directions, but many of his theoretical conclusions and causal explanations did not hold up in controlled experiments (Bjorklund, 1995; Goswami, 1998; Siegler, 1991).

In summary, as educational psychologist Richard Mayer (2000) notes, "the domain of science includes both some quantitative and qualitative methodologies" (p. 39), and the key is to use each where it is most effective (see Kamil, 1995). Likewise, in their recent book on research-based best practices in comprehension instruction, Block and Pressley (2002) argue that future progress in understanding how comprehension works will depend on a healthy interaction between qualitative and quantitative approaches. They point out that getting an initial idea of the comprehension processes involved in hypertext and Web-based environments will involve detailed descriptive studies using think-alouds and assessments of qualitative decision making. Qualitative studies of real reading environments will set the stage for more controlled investigations of causal hypotheses.

The progression to more powerful methods

A final useful concept is the progression to more powerful research methods ("more powerful" in this context meaning more diagnostic of a causal explanation). Research on a particular problem often proceeds from weaker methods (ones less likely to yield a causal explanation) to ones that allow stronger causal inferences. For example, interest in a particular hypothesis may originally emerge from a particular case study of unusual interest. This is the proper role for case studies: to suggest hypotheses for further study with more powerful techniques and to motivate scientists to apply more rigorous methods to a research problem. Thus, following the case studies, researchers often undertake correlational investigations to verify whether the link between variables is real rather than the result of the peculiarities of a few case studies. If the correlational studies support the relationship between relevant variables, then researchers will attempt experiments in which variables are manipulated in order to isolate a causal relationship between the variables.
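To make the logic of this progression concrete, here is a small hypothetical simulation in Python (the variables, effect size, and sample sizes are invented for illustration and are not drawn from any study discussed here). The point is the contrast in what each stage licenses: an observed correlation is compatible with several explanations, whereas a randomized comparison supports a causal one.

```python
# Illustrative sketch (hypothetical data): why a correlation is weaker evidence
# for a causal claim than a randomized experiment.
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(1)

# Stage 1: a correlational study. Practice hours and reading scores are both
# partly driven by an unmeasured variable (home support in this toy model), so
# the observed correlation is compatible with several explanations.
home_support = [random.gauss(0, 1) for _ in range(200)]
practice_hours = [2 + 0.8 * h + random.gauss(0, 1) for h in home_support]
reading_score = [50 + 5.0 * h + random.gauss(0, 3) for h in home_support]
r = statistics.correlation(practice_hours, reading_score)
print(f"Observed correlation: {r:.2f} (does not isolate a cause)")

# Stage 2: a randomized experiment. Random assignment breaks the link with
# unmeasured variables, so the mean difference supports a causal reading.
treated = [50 + 5.0 * random.gauss(0, 1) + 4 + random.gauss(0, 3) for _ in range(100)]  # +4 = assumed effect
control = [50 + 5.0 * random.gauss(0, 1) + random.gauss(0, 3) for _ in range(100)]
diff = statistics.mean(treated) - statistics.mean(control)
print(f"Randomized mean difference: {diff:.1f} points")
```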

Summary of principles that support research-based inferences about best practice

Our sketch of the principles that support research-based inferences about best practice in education has revealed that:

  • Science progresses by investigating solvable, or testable, empirical problems.
  • To be testable, a theory must yield predictions that could possibly be shown to be wrong.
  • The concepts in the theories in science evolve as evidence accumulates. Scientific knowledge is not infallible knowledge, but knowledge that has at least passed some minimal tests. The theories behind research-based practice can be proven wrong, and therefore they contain a mechanism for growth and advancement.
  • Theories are tested by systematic empiricism. The data obtained from empirical research are in the public domain in the sense that they are presented in a manner that allows replication and criticism by other scientists.
  • Data and theories in science are considered in the public domain only after publication in peer-reviewed scientific journals.
  • Empiricism is systematic because it strives for the logic of control and manipulation that characterizes a true experiment.
  • Correlational techniques are helpful when the logic of an experiment cannot be approximated, but because these techniques only help rule out hypotheses, they are considered weaker than true experimental methods.
  • Researchers use many different methods to arrive at their conclusions, and the strengths and weaknesses of these methods vary. Most often, conclusions are drawn only after a slow accumulation of data from many studies.

Scientific thinking in educational practice: Reason-based practice in the absence of direct evidence

Some areas in educational research, to date, lack a research-based consensus, for a number of reasons. Perhaps the problem or issue has not been researched extensively. Perhaps research into the issue is in the early stages of investigation, where descriptive studies are suggesting interesting avenues, but no controlled research justifying a causal inference has been completed. Perhaps many correlational studies and experiments have been conducted on the issue, but the research evidence has not yet converged in a consistent direction.

Even if teachers know the principles of scientific evaluation described earlier, the research literature sometimes fails to give them clear direction. They will have to fall back on their own reasoning processes as informed by their own teaching experiences. In those cases, teachers still have many ways of reasoning scientifically.

Tracing the link from scientific research to scientific thinking in practice

Scientific thinking in practice can be done in several ways. Earlier we discussed different types of professional publications that teachers can read to improve their practice. The most important defining feature of these outlets is whether they are peer reviewed. Another defining feature is whether the publication contains primary research rather than presenting opinion pieces or essays on educational issues. If a journal presents primary research, we can evaluate the research using the formal scientific principles outlined above.

If the journal is presenting opinion pieces about what constitutes best practice, we need to trace the link between those opinions and archival peer-reviewed research. We would look to see whether the authors have based their opinions on peer-reviewed research by reading the reference list. Do the authors provide a significant amount of original research citations (is their opinion based on more than one study)? Do the authors cite work other than their own (have the results been replicated)? Are the cited journals peer-reviewed? For example, in the case of best practice for reading instruction, if we came across an article in an opinion-oriented journal such as Intervention in School and Clinic, we might look to see if the authors have cited work that has appeared in such peer-reviewed journals as Journal of Educational Psychology , Elementary School Journal , Journal of Literacy Research , Scientific Studies of Reading , or the Journal of Learning Disabilities .

These same evaluative criteria can be applied to presenters at professional development workshops or papers given at conferences. Are they conversant with primary research in the area on which they are presenting? Can they provide evidence for their methods and does that evidence represent a scientific consensus? Do they understand what is required to justify causal statements? Are they open to the possibility that their claims could be proven false? What evidence would cause them to shift their thinking?
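As a rough illustration, the checklist above can be encoded as a short Python sketch. The citation entries and their flags are hypothetical, and in practice these judgments are made by the reader while reviewing the reference list, not by a script.

```python
# Minimal sketch of the evaluative checklist, applied to a hand-coded reference
# list. All entries and flags below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Citation:
    source: str
    peer_reviewed: bool            # judged from the outlet's review policy
    is_self_citation: bool         # cited authors overlap with the opinion piece's authors
    reports_primary_research: bool

def evaluate_reference_list(citations: list[Citation]) -> dict:
    primary = [c for c in citations if c.reports_primary_research]
    return {
        "cites_more_than_one_study": len(primary) > 1,
        "includes_independent_work": any(not c.is_self_citation for c in primary),
        "primary_work_is_peer_reviewed": bool(primary) and all(c.peer_reviewed for c in primary),
    }

# Hypothetical reference list for an opinion piece on reading instruction
refs = [
    Citation("Journal of Educational Psychology", True, False, True),
    Citation("Practitioner newsletter column", False, True, False),
    Citation("Scientific Studies of Reading", True, False, True),
]
print(evaluate_reference_list(refs))
```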

An important principle of scientific evaluation--the connectivity principle (Stanovich, 2001)--can be generalized to scientific thinking in the classroom. Suppose a teacher comes upon a new teaching method, curriculum component, or process. The method is advertised as totally new, which provides an explanation for the lack of direct empirical evidence for the method. A lack of direct empirical evidence should be grounds for suspicion, but should not immediately rule it out. The principle of connectivity means that the teacher now has another question to ask: "OK, there is no direct evidence for this method, but how is the theory behind it (the causal model of the effects it has) connected to the research consensus in the literature surrounding this curriculum area?" Even in the absence of direct empirical evidence on a particular method or technique, there could be a theoretical link to the consensus in the existing literature that would support the method.

For further tips on translating research into classroom practice, see Warby, Greene, Higgins, & Lovitt (1999). They present a format for selecting, reading, and evaluating research articles, and then importing the knowledge gained into the classroom.

Let's take an imaginary example from the domain of treatments for children with extreme reading difficulties. Imagine two treatments have been introduced to a teacher. No direct empirical tests of efficacy have been carried out using either treatment. The first, Treatment A, is a training program to facilitate the awareness of the segmental nature of language at the phonological level. The second, Treatment B, involves giving children training in vestibular sensitivity by having them walk on balance beams while blindfolded. Treatments A and B are equal in one respect--neither has had a direct empirical test of its efficacy, which reflects badly on both. Nevertheless, one of the treatments has the edge when it comes to the principle of connectivity. Treatment A makes contact with a broad consensus in the research literature that children with extraordinary reading difficulties are hampered because of insufficiently developed awareness of the segmental structure of language. Treatment B is not connected to any corresponding research literature consensus. Reason dictates that Treatment A is a better choice, even though neither has been directly tested.

Direct connections with research-based evidence and use of the connectivity principle when direct empirical evidence is absent give us necessary cross-checks on some of the pitfalls that arise when we rely solely on personal experience. Drawing upon personal experience is necessary and desirable in a veteran teacher, but it is not sufficient for making critical judgments about the effectiveness of an instructional strategy or curriculum. The insufficiency of personal experience becomes clear if we consider that the educational judgments--even of veteran teachers--often are in conflict. That is why we have to adjudicate conflicting knowledge claims using the scientific method.

Let us consider two further examples that demonstrate why we need controlled experimentation to verify even the most seemingly definitive personal observations. In the 1990s, considerable media and professional attention were directed at a method for aiding the communicative capacity of autistic individuals. This method is called facilitated communication. Autistic individuals who had previously been nonverbal were reported to have typed highly literate messages on a keyboard when their hands and arms were supported over the typewriter by a so-called facilitator. These startlingly verbal performances by autistic children who had previously shown very limited linguistic behavior raised incredible hopes among many parents of autistic children.

Unfortunately, claims for the efficacy of facilitated communication were disseminated by many media outlets before any controlled studies had been conducted. Since then, many studies have appeared in journals in speech science, linguistics, and psychology, and each study has unequivocally demonstrated the same thing: the autistic child's performance is dependent upon tactile cueing from the facilitator. In the experiments, it was shown that when both child and facilitator were looking at the same drawing, the child typed the correct name of the drawing. When the viewing was occluded so that the child and the facilitator were shown different drawings, the child typed the name of the facilitator's drawing, not the one that the child herself was looking at (Beck & Pirovano, 1996; Burgess, Kirsch, Shane, Niederauer, Graham, & Bacon, 1998; Hudson, Melita, & Arnold, 1993; Jacobson, Mulick, & Schwartz, 1995; Wheeler, Jacobson, Paglieri, & Schwartz, 1993). The experimental studies directly contradicted the extensive case studies of the experiences of the facilitators of the children, who invariably deny that they have inadvertently cued the children. Their personal experience, honest and heartfelt though it is, suggests the wrong model for explaining this outcome. The case study evidence told us something about the social connections between the children and their facilitators. But that is different from what we got from the controlled experimental studies, which provided direct tests of the claim that the technique unlocks hidden linguistic skills in these children. Even if the claim had turned out to be true, the proof of its truth would not have come from the case studies or personal experiences, but from the necessary controlled studies.

Another example of the need for controlled experimentation to test the insights gleaned from personal experience is provided by the concept of learning styles--the idea that various modality preferences (or variants of this theme in terms of analytic/holistic processing or "learning styles") will interact with instructional methods, allowing teachers to individualize learning. The idea seems to "feel right" to many of us, and it does have some face validity, but it has never been demonstrated to work in practice. Its modern incarnation (see Gersten, 2001; Spear-Swerling & Sternberg, 2001) takes a particularly harmful form, one in which students identified as auditory learners are matched with phonics instruction and students identified as visual and/or kinesthetic learners are matched with holistic instruction. The newest form is particularly troublesome because the major syntheses of reading research demonstrate that many children can benefit from phonics-based instruction, not just "auditory" learners (National Reading Panel, 2000; Rayner et al., 2002; Stanovich, 2000). Excluding students identified as "visual/kinesthetic" learners from effective phonics instruction is a bad instructional practice--bad because it is not merely lacking in research support but is actually contradicted by research.

A thorough review of the literature by Arter and Jenkins (1979) found no consistent evidence for the idea that modality strengths and weaknesses could be identified in a reliable and valid way that warranted differential instructional prescriptions. A review of the research evidence by Tarver and Dawson (1978) likewise found that the idea of modality preferences did not hold up to empirical scrutiny. They concluded, "This review found no evidence supporting an interaction between modality preference and method of teaching reading" (p. 17). Kampwirth and Bates (1980) confirmed the conclusions of the earlier reviews, although they stated their conclusions a little more baldly: "Given the rather general acceptance of this idea, and its common-sense appeal, one would presume that there exists a body of evidence to support it. Unfortunately...no such firm evidence exists" (p. 598).

More recently, the idea of modality preferences (also referred to as learning styles, holistic versus analytic processing styles, and right versus left hemispheric processing) has again surfaced in the reading community. The focus of the recent implementations refers more to teaching to strengths, as opposed to remediating weaknesses (the latter being more the focus of the earlier efforts in the learning disabilities field). The research of the 1980s was summarized in an article by Steven Stahl (1988). His conclusions are largely negative because his review of the literature indicates that the methods that have been used in actual implementations of the learning styles idea have not been validated. Stahl concludes: "As intuitively appealing as this notion of matching instruction with learning style may be, past research has turned up little evidence supporting the claim that different teaching methods are more or less effective for children with different reading styles" (p. 317).

Obviously, such research reviews cannot prove that there is no possible implementation of the idea of learning styles that could work. However, the burden of proof in science rests on the investigator who is making a new claim about the nature of the world. It is not incumbent upon critics of a particular claim to show that it "couldn't be true." The question teachers might ask is, "Have the advocates for this new technique provided sufficient proof that it works?" Their burden of responsibility is to provide proof that their favored methods work. Teachers should not allow curricular advocates to avoid this responsibility by introducing confusion about where the burden of proof lies. For example, it is totally inappropriate and illogical to ask "Has anyone proved that it can't work?" One does not "prove a negative" in science. Instead, hypotheses are stated, and then must be tested by those asserting the hypotheses.

Reason-based practice in the classroom

Effective teachers engage in scientific thinking in their classrooms in a variety of ways: when they assess and evaluate student performance, develop Individual Education Plans (IEPs) for their students with disabilities, reflect on their practice, or engage in action research. For example, consider the assessment and evaluation activities in which teachers engage. The scientific mechanisms of systematic empiricism--iterative testing of hypotheses that are revised after the collection of data--can be seen when teachers plan for instruction: they evaluate their students' previous knowledge, develop hypotheses about the best methods for attaining lesson objectives, develop a teaching plan based on those hypotheses, observe the results, and base further instruction on the evidence collected.

This assessment cycle looks even more like the scientific method when teachers (as part of a multidisciplinary team) are developing and implementing an IEP for a student with a disability. The team must assess and evaluate the student's learning strengths and difficulties, develop hypotheses about the learning problems, select curriculum goals and objectives, base instruction on the hypotheses and the goals selected, teach, and evaluate the outcomes of that teaching. If the teaching is successful (goals and objectives are attained), the cycle continues with new goals. If the teaching has been unsuccessful (goals and objectives have not been achieved), the cycle begins again with new hypotheses. We can also see the principle of converging evidence here. No one piece of evidence might be decisive, but collectively the evidence might strongly point in one direction.
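The cycle just described can be sketched schematically. In the sketch below, every function passed in (assess, form_hypotheses, plan_instruction, teach, goals_met) is a hypothetical placeholder for professional judgment and classroom assessment; the code only illustrates the iterative, hypothesis-revising structure of the process.

```python
# Schematic sketch of the assess-hypothesize-teach-evaluate cycle.
# The callable arguments are placeholders for professional judgment,
# not anything automatable.

def instruction_cycle(student, assess, form_hypotheses, plan_instruction, teach, goals_met, max_cycles=5):
    """Iterate: assess, hypothesize, plan, teach, evaluate; revise hypotheses when goals are not met."""
    evidence = assess(student)                         # learning strengths and difficulties
    hypotheses = form_hypotheses(evidence)             # tentative explanation of learning problems
    for _ in range(max_cycles):
        plan = plan_instruction(hypotheses, evidence)  # goals, objectives, and methods
        evidence = teach(student, plan)                # implement instruction, collect outcomes
        if goals_met(evidence, plan):
            continue                                   # goals attained: cycle continues with new goals
        hypotheses = form_hypotheses(evidence)         # goals not attained: begin again with new hypotheses
    return plan
```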

Scientific thinking in practice occurs when teachers engage in action research. Action research is research into one's own practice that has, as its main aim, the improvement of that practice. Stokes (1997) discusses how many advances in science came about as a result of "use-inspired research" which draws upon observations in applied settings. According to McNiff, Lomax, and Whitehead (1996), action research shares several characteristics with other types of research: "it leads to knowledge, it provides evidence to support this knowledge, it makes explicit the process of enquiry through which knowledge emerges, and it links new knowledge with existing knowledge" (p. 14). Notice the links to several important concepts: systematic empiricism, publicly verifiable knowledge, converging evidence, and the connectivity principle.

Teachers and researchers: Commonality in a "what works" epistemology

Many educational researchers have drawn attention to the epistemological commonalities between researchers and teachers (Gersten, Vaughn, Deshler, & Schiller, 1997; Stanovich, 1993/1994). A "what works" epistemology is a critical source of underlying unity in the world views of educators and researchers (Gersten & Dimino, 2001; Gersten, Chard, & Baker, 2000). Empiricism, broadly construed (as opposed to the caricature of white coats, numbers, and test tubes that is often used to discredit scientists), is about watching the world, manipulating it when possible, observing outcomes, and trying to associate outcomes with features observed and with manipulations. This is what the best teachers do, and it remains true despite the grain of truth in the statement that "teaching is an art." As Berliner (1987) notes: "No one I know denies the artistic component to teaching. I now think, however, that such artistry should be research-based. I view medicine as an art, but I recognize that without its close ties to science it would be without success, status, or power in our society. Teaching, like medicine, is an art that also can be greatly enhanced by developing a close relationship to science" (p. 4).

In his review of the work of the Committee on the Prevention of Reading Difficulties for the National Research Council of the National Academy of Sciences (Snow, Burns, & Griffin, 1998), Pearson (1999) warned educators that resisting evaluation by hiding behind the "art of teaching" defense will eventually threaten teacher autonomy. Teachers need creativity, but they also need to demonstrate that they know what evidence is, and that they recognize that they practice in a profession based in behavioral science. While making it absolutely clear that he opposes legislative mandates, Pearson (1999) cautions:

We have a professional responsibility to forge best practice out of the raw materials provided by our most current and most valid readings of research...If professional groups wish to retain the privileges of teacher prerogative and choice that we value so dearly, then the price we must pay is constant attention to new knowledge as a vehicle for fine-tuning our individual and collective views of best practice. This is the path that other professions, such as medicine, have taken in order to maintain their professional prerogative, and we must take it, too. My fear is that if the professional groups in education fail to assume this responsibility squarely and openly, then we will find ourselves victims of the most onerous of legislative mandates (p. 245).

Those hostile to a research-based approach to educational practice like to imply that the insights of teachers and those of researchers conflict. Nothing could be farther from the truth. Take reading, for example. Teachers often do observe exactly what the research shows--that most of their children who are struggling with reading have trouble decoding words. In an address to the Reading Hall of Fame at the 1996 meeting of the International Reading Association, Isabel Beck (1996) illustrated this point by reviewing her own intellectual history (see Beck, 1998, for an archival version). She relates her surprise upon coming as an experienced teacher to the Learning Research and Development Center at the University of Pittsburgh and finding "that there were some people there (psychologists) who had not taught anyone to read, yet they were able to describe phenomena that I had observed in the course of teaching reading" (Beck, 1996, p. 5). In fact, what Beck was observing was the triangulation of two empirical approaches to the same issue--two perspectives on the same underlying reality. And she also came to appreciate how these two perspectives fit together: "What I knew were a number of whats--what some kids, and indeed adults, do in the early course of learning to read. And what the psychologists knew were some whys--why some novice readers might do what they do" (pp. 5-6).

Beck speculates on why the disputes about early reading instruction have dragged on so long without resolution and posits that it is due to the power of a particular kind of evidence--evidence from personal observation. The determination of whole language advocates is no doubt sustained because "people keep noticing the fact that some children or perhaps many children--in any event a subset of children--especially those who grow up in print-rich environments, don't seem to need much more of a boost in learning to read than to have their questions answered and to point things out to them in the course of dealing with books and various other authentic literacy acts" (Beck, 1996, p. 8). But Beck points out that it is equally true that proponents of the importance of decoding skills are also fueled by personal observation: "People keep noticing the fact that some children or perhaps many children--in any event a subset of children--don't seem to figure out the alphabetic principle, let alone some of the intricacies involved without having the system directly and systematically presented" (p. 8). But clearly we have lost sight of the basic fact that the two observations are not mutually exclusive--one doesn't negate the other. This is just the type of situation for which the scientific method was invented: a situation requiring a consensual view, triangulated across differing observations by different observers.

Teachers, like scientists, are ruthless pragmatists (Gersten & Dimino, 2001; Gersten, Chard, & Baker, 2000). They believe that some explanations and methods are better than others. They think there is a real world out there--a world in flux, obviously--but still one that is trackable by triangulating observations and observers. They believe that there are valid, if fallible, ways of finding out which educational practices are best. Teachers believe in a world that is predictable and controllable by manipulations that they use in their professional practice, just as scientists do. Researchers and educators are kindred spirits in their approach to knowledge, an important fact that can be used to forge a coalition to bring hard-won research knowledge to light in the classroom.

  • Adams, M. J. (1990). Beginning to read: Thinking and learning about print . Cambridge, MA: MIT Press.
  • Adler, J. E. (1998, January). Open minds and the argument from ignorance. Skeptical Inquirer , 22 (1), 41-44.
  • Anderson, C. A., & Anderson, K. B. (1996). Violent crime rate studies in philosophical context: A destructive testing approach to heat and Southern culture of violence effects. Journal of Personality and Social Psychology , 70 , 740-756.
  • Anderson, R. C., Hiebert, E. H., Scott, J., & Wilkinson, I. (1985). Becoming a nation of readers . Washington, D. C.: National Institute of Education.
  • Arter, A., & Jenkins, J. (1979). Differential diagnosis-prescriptive teaching: A critical appraisal. Review of Educational Research , 49 , 517-555.
  • Beck, A. R., & Pirovano, C. M. (1996). Facilitated communications' performance on a task of receptive language with children and youth with autism. Journal of Autism and Developmental Disorders , 26 , 497-512.
  • Beck, I. L. (1996, April). Discovering reading research: Why I didn't go to law school . Paper presented at the Reading Hall of Fame, International Reading Association, New Orleans.
  • Beck, I. (1998). Understanding beginning reading: A journey through teaching and research. In J. Osborn & F. Lehr (Eds.), Literacy for all: Issues in teaching and learning (pp. 11-31). New York: Guilford Press.
  • Berliner, D. C. (1987). Knowledge is power: A talk to teachers about a revolution in the teaching profession. In D. C. Berliner & B. V. Rosenshine (Eds.), Talks to teachers (pp. 3-33). New York: Random House.
  • Bjorklund, D. F. (1995). Children's thinking: Developmental function and individual differences (Second Edition) . Pacific Grove, CA: Brooks/Cole.
  • Block, C. C., & Pressley, M. (Eds.). (2002). Comprehension instruction: Research-based best practices . New York: Guilford Press.
  • Bronowski, J. (1956). Science and human values . New York: Harper & Row.
  • Bronowski, J. (1973). The ascent of man . Boston: Little, Brown.
  • Bronowski, J. (1977). A sense of the future . Cambridge: MIT Press.
  • Burgess, C. A., Kirsch, I., Shane, H., Niederauer, K., Graham, S., & Bacon, A. (1998). Facilitated communication as an ideomotor response. Psychological Science , 9 , 71-74.
  • Chard, D. J., & Osborn, J. (1999). Phonics and word recognition in early reading programs: Guidelines for accessibility. Learning Disabilities Research & Practice , 14 , 107-117.
  • Cooper, H. & Hedges, L. V. (Eds.), (1994). The handbook of research synthesis . New York: Russell Sage Foundation.
  • Cunningham, P. M., & Allington, R. L. (1994). Classrooms that work: They all can read and write . New York: HarperCollins.
  • Dawkins, R. (1998). Unweaving the rainbow . Boston: Houghton Mifflin.
  • Dennett, D. C. (1995). Darwin's dangerous idea: Evolution and the meanings of life . New York: Simon & Schuster.
  • Dennett, D. C. (1999/2000, Winter). Why getting it right matters. Free Inquiry , 20 (1), 40-43.
  • Ehri, L. C., Nunes, S., Stahl, S., & Willows, D. (2001). Systematic phonics instruction helps students learn to read: Evidence from the National Reading Panel's Meta-Analysis. Review of Educational Research , 71 , 393-447.
  • Foster, E. A., Jobling, M. A., Taylor, P. G., Donnelly, P., Deknijff, P., Renemieremet, J., Zerjal, T., & Tyler-Smith, C. (1998). Jefferson fathered slave's last child. Nature , 396 , 27-28.
  • Fraenkel, J. R., & Wallen, N. R. (1996). How to design and evaluate research in education (Third Edition). New York: McGraw-Hill.
  • Geertz, C. (1973). The interpretation of cultures . New York: Basic Books.
  • Geertz, C. (1979). From the native's point of view: On the nature of anthropological understanding. In P. Rabinow & W. Sullivan (Eds.), Interpretive social science (pp. 225-242). Berkeley: University of California Press.
  • Gersten, R. (2001). Sorting out the roles of research in the improvement of practice. Learning Disabilities: Research & Practice , 16 (1), 45-50.
  • Gersten, R., Chard, D., & Baker, S. (2000). Factors enhancing sustained use of research-based instructional practices. Journal of Learning Disabilities , 33 (5), 445-457.
  • Gersten, R., & Dimino, J. (2001). The realities of translating research into classroom practice. Learning Disabilities: Research & Practice , 16 (2), 120-130.
  • Gersten, R., Vaughn, S., Deshler, D., & Schiller, E. (1997). What we know about using research findings: Implications for improving special education practice. Journal of Learning Disabilities , 30 (5), 466-476.
  • Goswami, U. (1998). Cognition in children . Hove, England: Psychology Press.
  • Gross, P. R., Levitt, N., & Lewis, M. (1997). The flight from science and reason . New York: New York Academy of Science.
  • Hedges, L. V., & Olkin, I. (1985). Statistical Methods for Meta-Analysis . New York: Academic Press.
  • Holton, G., & Roller, D. (1958). Foundations of modern physical science . Reading, MA: Addison-Wesley.
  • Hudson, A., Melita, B., & Arnold, N. (1993). A case study assessing the validity of facilitated communication. Journal of Autism and Developmental Disorders , 23 , 165-173.
  • Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: Correcting error and bias in research findings . Newbury Park, CA: Sage.
  • Jacobson, J. W., Mulick, J. A., & Schwartz, A. A. (1995). A history of facilitated communication: Science, pseudoscience, and antiscience. American Psychologist , 50 , 750-765.
  • Kamil, M. L. (1995). Some alternatives to paradigm wars in literacy research. Journal of Reading Behavior , 27 , 243-261.
  • Kampwirth, R., & Bates, E. (1980). Modality preference and teaching method: A review of the research. Academic Therapy , 15 , 597-605.
  • Kavale, K. A., & Forness, S. R. (1995). The nature of learning disabilities: Critical elements of diagnosis and classification . Mahwah, NJ: Lawrence Erlbaum Associates.
  • Levin, J. R., & O'Donnell, A. M. (2000). What to do about educational research's credibility gaps? Issues in Education: Contributions from Educational Psychology , 5 , 1-87.
  • Liberman, A. M. (1999). The reading researcher and the reading teacher need the right theory of speech. Scientific Studies of Reading , 3 , 95-111.
  • Magee, B. (1985). Philosophy and the real world: An introduction to Karl Popper . LaSalle, IL: Open Court.
  • Mayer, R. E. (2000). What is the place of science in educational research? Educational Researcher , 29 (6), 38-39.
  • McNiff, J., Lomax, P., & Whitehead, J. (1996). You and your action research project . London: Routledge.
  • Medawar, P. B. (1982). Pluto's republic . Oxford: Oxford University Press.
  • Medawar, P. B. (1984). The limits of science . New York: Harper & Row.
  • Medawar, P. B. (1990). The threat and the glory . New York: Harper Collins.
  • Moats, L. (1999). Teaching reading is rocket science . Washington, DC: American Federation of Teachers.
  • National Reading Panel: Reports of the Subgroups. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction . Washington, DC.
  • Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology , 2 , 175-220.
  • Pearson, P. D. (1993). Teaching and learning to read: A research perspective. Language Arts , 70 , 502-511.
  • Pearson, P. D. (1999). A historically based review of preventing reading difficulties in young children. Reading Research Quarterly , 34 , 231-246.
  • Plotkin, D. (1996, June). Good news and bad news about breast cancer. Atlantic Monthly , 53-82.
  • Popper, K. R. (1972). Objective knowledge . Oxford: Oxford University Press.
  • Pressley, M. (1998). Reading instruction that works: The case for balanced teaching . New York: Guilford Press.
  • Pressley, M., Rankin, J., & Yokol, L. (1996). A survey of the instructional practices of outstanding primary-level literacy teachers. Elementary School Journal , 96 , 363-384.
  • Rayner, K. (1998). Eye movements in reading and information processing: 20 Years of research. Psychological Bulletin , 124 , 372-422.
  • Rayner, K., Foorman, B. R., Perfetti, C. A., Pesetsky, D., & Seidenberg, M. S. (2002, March). How should reading be taught? Scientific American , 286 (3), 84-91.
  • Reading Coherence Initiative. (1999). Understanding reading: What research says about how children learn to read . Austin, TX: Southwest Educational Development Laboratory.
  • Rosenthal, R. (1995). Writing meta-analytic reviews. Psychological Bulletin , 118 , 183-192.
  • Rosnow, R. L., & Rosenthal, R. (1989). Statistical procedures and the justification of knowledge in psychological science. American Psychologist , 44 , 1276-1284.
  • Shankweiler, D. (1999). Words to meaning. Scientific Studies of Reading , 3 , 113-127.
  • Share, D. L., & Stanovich, K. E. (1995). Cognitive processes in early reading development: Accommodating individual differences into a model of acquisition. Issues in Education: Contributions from Educational Psychology , 1 , 1-57.
  • Shavelson, R. J., & Towne, L. (Eds.) (2002). Scientific research in education . Washington, DC: National Academy Press.
  • Siegler, R. S. (1991). Children's thinking (Second Edition) . Englewood Cliffs, NJ: Prentice Hall.
  • Snow, C. E., Burns, M. S., & Griffin, P. (Eds.). (1998). Preventing reading difficulties in young children . Washington, DC: National Academy Press.
  • Snowling, M. (2000). Dyslexia (Second Edition) . Oxford: Blackwell.
  • Spear-Swerling, L., & Sternberg, R. J. (2001). What science offers teachers of reading. Learning Disabilities: Research & Practice , 16 (1), 51-57.
  • Stahl, S. (1988, December). Is there evidence to support matching reading styles and initial reading methods? Phi Delta Kappan , 317-327.
  • Stanovich, K. E. (1993/1994). Romance and reality. The Reading Teacher , 47 (4), 280-291.
  • Stanovich, K. E. (2000). Progress in understanding reading: Scientific foundations and new frontiers . New York: Guilford Press.
  • Stanovich, K. E. (2001). How to think straight about psychology (Sixth Edition). Boston: Allyn & Bacon.
  • Stokes, D. E. (1997). Pasteur's quadrant: Basic science and technological innovation . Washington, DC: Brookings Institution Press.
  • Swanson, H. L. (1999). Interventions for students with learning disabilities: A meta-analysis of treatment outcomes . New York: Guilford Press.
  • Tarver, S. G., & Dawson, E. (1978). Modality preference and the teaching of reading: A review. Journal of Learning Disabilities , 11 , 17-29.
  • Vaughn, S., & Dammann, J. E. (2001). Science and sanity in special education. Behavioral Disorders , 27, 21-29.
  • Warby, D. B., Greene, M. T., Higgins, K., & Lovitt, T. C. (1999). Suggestions for translating research into classroom practices. Intervention in School and Clinic , 34 (4), 205-211.
  • Wheeler, D. L., Jacobson, J. W., Paglieri, R. A., & Schwartz, A. A. (1993). An experimental assessment of facilitated communication. Mental Retardation , 31 , 49-60.
  • Wilkinson, L. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist , 54 , 595-604.
  • Wilson, E. O. (1998). Consilience: The unity of knowledge . New York: Knopf.

For additional copies of this document:

Contact the National Institute for Literacy at ED Pubs PO Box 1398, Jessup, Maryland 20794-1398

Phone 1-800-228-8813 Fax 301-430-1244 [email protected]


Date Published: 2003 Date Posted: March 2010


Conducting Research in K-12 Education Settings

Research with human subjects or their data is regulated by the federal government and reviewed by the Teachers College (TC) Institutional Review Board (IRB). Educational research that involves students, teachers, administrative staff, student-level administrative data (e.g., test scores), or classroom curriculum, activities, or assignments may be subject to federal regulations and IRB review. This guide provides defining questions to ask about your research, outlines considerations for developing consent procedures, and identifies several topics to keep in mind when conducting human subjects research in kindergarten through 12th-grade education settings.

Defining Questions to Ask

While preparing to submit your research protocol to the IRB, it is important to consider these key questions:

Who are your participants?

Identify whether your participants will be school administrators, teachers, students, or your own current or former students. If you plan to recruit students who are under your supervision or jurisdiction (whether as a teacher, assistant teacher, or principal), your study may be subject to additional research compliance requirements. Please review the Working with Your Own Students guide for more information.

Researchers working with students under the age of 18 will be required to obtain both parent permission and assent, except in rare cases as outlined by the federal regulations.

In some cases, students may be involved in the research indirectly. For example, a researcher videotaping teachers during class time may inadvertently capture students in their recordings. Researchers should always have a plan for these types of situations, describing in the IRB application how they will prevent non-participants from being recorded. Once your population of interest is defined, the IRB will better understand what protections are needed.

What are your research activities?

Exempt Category 1 indicates that protocols may be exempt from IRB review if the research is “conducted in established or commonly accepted educational settings, that specifically involves normal educational practices that are not likely to adversely impact students’ opportunity to learn or the assessment of educators.” It is important to note that Exempt Category 1 is not a catch-all for education research. Observations of classroom activities may qualify for this category, but only if the activities observed are part of a typical schedule. However, if the researcher engages with the students, classroom materials, or teachers during the observations and/or intervenes in the course of the natural classroom activities, the research may not qualify for Exempt Category 1.

  • Program evaluations may not be considered under Exempt Category 1 research if the curriculum being evaluated is not part of the typical school practice. For example, experimental teaching methods or new classroom activities that are outside of the typical curriculum would exclude research from this category.

Other types of education research activities may include program or curriculum evaluations, surveys, interviews, or experimental interventions of new teaching strategies. Consider whether you will be collecting new data, accessing existing data, or a combination of the two.

  • When accessing student-level data for research purposes, consider using a data release form: an agreement between the researcher and the participant in which the participant grants the researcher access to a specific set of data for research purposes.
  • In cases where a school or district shares a data set with a researcher, a data sharing form should be implemented between the head administrator (or site official) and the researcher.

Randomized controlled trials (RCTs) are considered interventions, even in educational settings. Researchers will need to demonstrate that their randomized controlled trial uses a fair selection of subjects and provides equitable access to education opportunities, especially if the trial occurs in the classroom.

  • For example, subjects in the control group of a new math intervention should have access to the intervention once the study has concluded. Alternatively, a researcher who is testing a set of new math problems against a set of old problems can counterbalance the practice tests so that all students receive both the new and the old problems, just at different time points (see the sketch below).
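A minimal sketch of such a counterbalanced assignment, assuming a hypothetical roster of 24 students split into two order groups:

```python
# Illustrative sketch (hypothetical roster): counterbalancing two problem sets
# so that every student receives both, in different orders.
import random

random.seed(7)
students = [f"student_{i:02d}" for i in range(1, 25)]   # hypothetical class roster
random.shuffle(students)

half = len(students) // 2
schedule = {}
for name in students[:half]:
    schedule[name] = ("new problems", "old problems")    # order A-B
for name in students[half:]:
    schedule[name] = ("old problems", "new problems")    # order B-A

for name, (time1, time2) in sorted(schedule.items()):
    print(f"{name}: time 1 -> {time1}, time 2 -> {time2}")
```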

Where and when are your research activities?

School Permission and Site Permission Templates are required for research conducted on external premises. Permission forms should always be signed by the head of the school or district, rather than by a teacher or aide.

It is important to determine the exact time of day the research will be conducted, whether during school breaks, during class time, or before or after school. The proposed research activities should be conducted in the least intrusive way possible, ideally outside of class time if the study activities are not part of typical education. Evaluate whether the research will detract from normal learning time or whether it can be done outside of class time.

What Does Consent Look Like in Education Research?

Consent in education settings may require additional steps to ensure vulnerable populations are protected. Typically, both parent permission forms and child assent forms are required for a student to participate in a research study. In addition, if a study is being conducted at the school or district level, teachers, parents, and students should have the option to opt into the study. Two key considerations for navigating this consent process are:

  • Burden on Teachers and School Administrators: Several elements of research, if carried out by school staff, will add to the staff's burden, including collecting consent, supporting participant recruitment, and managing participant data. It is important to ask how much your proposed research activities require of the teachers or school administrators. What is their total time commitment? What activities can you, as the researcher, take on to relieve the burden on school staff? Consider who is distributing and collecting forms, who is in charge of implementing the study activities, and any other responsibilities that may be required for the research.
  • Managing individuals who do not want to participate in the study: If parents do not want their child to participate in the research study, an experimental activity, or a survey, or if students themselves do not wish to participate, there must be an alternative option for that student. Specific precautions must be taken to respect those individuals. For example, if a classroom will be recorded, there must be a protocol in place to protect students who do not consent. Additionally, if a class is participating in a survey, ensure that there are non-research activities the student can take part in while the survey is being conducted.

Other Considerations for Education Researchers

  • Technology in Classrooms
  • Special Protections for Youth
  • FERPA and PPRA
  • IRB Requirements by Other Sites
  • Compensation
  • New software to be used in research settings requires a software proposal to TC IT. Software will be vetted by TC IT to ensure that it meets the privacy and security requirements set forth by the College. Begin by reviewing the approved software list. If the software is not on the approved list, submit a "Project Request." The "Project Request" button can be found on MyTC/Support (upper right corner) under "Submit a Project Request," or reach out to TC IT at [email protected].
  • In addition, Teachers College requires PIs to use their TC-issued Google accounts when conducting research. TC-specific Google accounts are encrypted with additional security protections and should be used for all research documents, including recruiting materials, stored data, etc.
  • Federal regulations require additional protections for youth, some of which have been outlined in this guide. For more information, please review OHRP’s Information on Special Protections for Children in Research and TC IRB’s blog.
  • The Family Educational Rights and Privacy Act (FERPA) and the Protection of Pupil Rights Amendment (PPRA) pertain to youth and parents’ rights regarding student data and participation in research. Please familiarize yourself with these laws before engaging with students or parents as research participants.
  • More information is available for FERPA and PPRA, as well as in this accessible version.
  • Some schools and districts have their own IRB of oversight (e.g., the NYC DOE IRB). If you are using the school site to conduct research, recruiting or contacting participants through their professional networks, or conducting activities during school time, you may be required to submit to the school’s IRB of oversight as well. Always double-check to ensure you are submitting your research to the correct institution(s).
  • New York City law states that NYC Department of Education teachers cannot receive compensation for participation in research studies. This means that TC researchers cannot compensate teachers for research conducted during class time or typical work hours. Instead, the NYC DOE recommends providing classroom donations.

Institutional Review Board

Address: Russell Hall, Room 13

  • Phone: 212-678-4105
  • Email: [email protected]

Appointments are available by request. Make sure to have your IRB protocol number (e.g., 19-011) available. If you are unable to access any of the downloadable resources, please contact OASID via email at [email protected].



A practical guide for conducting qualitative research in medical education: Part 1-How to interview

Affiliations.

  • 1 Department of Emergency Medicine University of California, Los Angeles, David Geffen School of Medicine at UCLA Los Angeles California USA.
  • 2 Department of Emergency Medicine Ronald Reagan UCLA Medical Center Los Angeles California USA.
  • 3 Department of Emergency Medicine University of California, Davis Health System Sacramento California USA.
  • 4 Department of Emergency Medicine Harbor-UCLA Medical Center Torrance California USA.
  • PMID: 34471795
  • PMCID: PMC8325517
  • DOI: 10.1002/aet2.10646


Conflict of interest statement

The authors have no potential conflicts to disclose.


Guidance for Responsible Conduct of Research (RCR) Training Requirements

NIH requires that all trainees, fellows, participants, and scholars receiving support through any NIH training, career development award (individual or institutional), research education grant, and dissertation research grant must receive instruction in responsible conduct of research. 

For complete requirements, applicants should review official policies NOT-OD-10-019 and NOT-OD-22-055.

NIMH-specific RCR information

NIMH requires successful completion of RCR instruction during Year 01 of NIMH-supported research training and career development awards (i.e., NRSAs, mentored Ks), including the R36 and R25. Instructional details must be reported in the Research Performance Progress Report (RPPR). This requirement is fulfilled if the fellow/trainee provides documentation that acceptable instruction has been completed within the last four years and during the current career stage (e.g., if a postdoctoral fellow, during the postdoctoral period).

Instructional component recommendations

Format of Instruction: Describe the required format of instruction, i.e., face-to-face lectures, coursework, and/or real-time discussion groups (a plan with only online instruction is not acceptable). Discussion-based instruction should not exclusively employ video conferencing unless there are unusual or well-justified circumstances.

Subject matter: Developments in the conduct of research and a growing understanding of the impact of the broader research environment have led to a recognition that additional topics merit inclusion in discussions of the responsible conduct of research. For context, these additional subjects appear among the list of topics traditionally included in most acceptable plans for RCR instruction, as cited in NOT-OD-22-055 and listed below:

  • Conflict of interest (personal, professional, and financial) and conflict of commitment in allocating time, effort, or other research resources
  • Policies regarding human subjects, live vertebrate animal subjects in research, and safe laboratory practices
  • Mentor/mentee responsibilities and relationships
  • Safe research environments (e.g., those that promote inclusion and are free of sexual, racial, ethnic, disability and other forms of discriminatory harassment)
  • Collaborative research, including collaborations with industry and investigators and institutions in other countries
  • Peer review, including the responsibility for maintaining confidentiality and security in peer review
  • Data acquisition and analysis; laboratory tools (e.g., tools for analyzing data and creating or working with digital images); recordkeeping practices, including methods such as electronic laboratory notebooks
  • Secure and ethical data use; data confidentiality, management, sharing, and ownership
  • Research misconduct and policies for handling misconduct
  • Responsible authorship and publication
  • The scientist as a responsible member of society, contemporary ethical issues in biomedical research, and the environmental and societal impacts of scientific research

Faculty participation: Training faculty and sponsors/mentors are highly encouraged to contribute to both formal and informal instruction in responsible conduct of research. Informal instruction occurs during laboratory interactions and in other informal situations throughout the year.

Duration of instruction: Instruction should involve substantive contact hours between the trainees/fellows/scholars/participants and the participating faculty.  Acceptable programs generally involve at least eight contact hours. A semester-long series of seminars/programs may be more effective than a single seminar or one-day workshop because it is expected that topics will then be considered in sufficient depth, learning will be better consolidated, and the subject matter will be synthesized within a broader conceptual framework.

Frequency of Instruction: Instruction must be undertaken at least once during each career stage, and at a frequency of no less than once every four years.

Additional RCR advice for applicants

  • Online training is not considered sufficient for RCR instruction, though it can serve as a valuable supplement to face-to-face instruction. A plan that employs only online coursework for RCR instruction will not be considered acceptable, except in special instances of short-term career development programs or unusual and well-justified circumstances.
  • Discussion-based instruction and face-to-face interaction are expected to remain key features of RCR training. However, it is recognized that video conferencing allows for effective “face-to-face” discussions, provided that virtual options are used in a way that fosters discussion, active learning, engagement, and interaction. RCR plans that include only video conference-based training will not be considered acceptable, except in the circumstances described in NOT-OD-10-019.
  • It is helpful to use the above categories (format, subject matter, faculty participation, duration, and frequency) as a framework for describing the proposed training.
  • Applicants are encouraged to tailor RCR instruction to the needs of the individual and to include instruction beyond formal institutional courses. RCR training should provide opportunities to develop the trainee’s own scholarly understanding of the ethical issues associated with their research activities and their impact on society.

For complete requirements, applicants should review the official policies NOT-OD-10-019 and NOT-OD-22-055.

Systematic review | Open access | Published: 24 June 2024

A systematic review of experimentally tested implementation strategies across health and human service settings: evidence from 2010-2022

Laura Ellen Ashcraft (ORCID: orcid.org/0000-0001-9957-0617), David E. Goodrich, Joachim Hero, Angela Phares, Rachel L. Bachrach, Deirdre A. Quinn, Nabeel Qureshi, Natalie C. Ernecoff, Lisa G. Lederer, Leslie Page Scheunemann, Shari S. Rogal & Matthew J. Chinman

Implementation Science, volume 19, Article number: 43 (2024)

Studies of implementation strategies range in rigor, design, and evaluated outcomes, presenting interpretation challenges for practitioners and researchers. This systematic review aimed to describe the body of research evidence testing implementation strategies across diverse settings and domains, using the Expert Recommendations for Implementing Change (ERIC) taxonomy to classify strategies and the Reach Effectiveness Adoption Implementation and Maintenance (RE-AIM) framework to classify outcomes.

We conducted a systematic review of studies examining implementation strategies from 2010-2022 and registered with PROSPERO (CRD42021235592). We searched databases using terms “implementation strategy”, “intervention”, “bundle”, “support”, and their variants. We also solicited study recommendations from implementation science experts and mined existing systematic reviews. We included studies that quantitatively assessed the impact of at least one implementation strategy to improve health or health care using an outcome that could be mapped to the five evaluation dimensions of RE-AIM. Only studies meeting prespecified methodologic standards were included. We described the characteristics of studies and frequency of implementation strategy use across study arms. We also examined common strategy pairings and cooccurrence with significant outcomes.

Our search resulted in 16,605 studies; 129 met inclusion criteria. Studies tested an average of 6.73 strategies (0-20 range). The most assessed outcomes were Effectiveness ( n =82; 64%) and Implementation ( n =73; 56%). The implementation strategies most frequently occurring in the experimental arm were Distribute Educational Materials ( n =99), Conduct Educational Meetings ( n =96), Audit and Provide Feedback ( n =76), and External Facilitation ( n =59). These strategies were often used in combination. Nineteen implementation strategies were frequently tested and associated with significantly improved outcomes. However, many strategies were not tested sufficiently to draw conclusions.

This review of 129 methodologically rigorous studies built upon prior implementation science data syntheses to identify implementation strategies that had been experimentally tested and summarized their impact on outcomes across diverse outcomes and clinical settings. We present recommendations for improving future similar efforts.

Contributions to the literature

While many implementation strategies exist, it has been challenging to compare their effectiveness across a wide range of trial designs and practice settings.

This systematic review provides a transdisciplinary evaluation of implementation strategies across population, practice setting, and evidence-based interventions using a standardized taxonomy of strategies and outcomes.

Educational strategies were employed ubiquitously; nineteen other commonly used implementation strategies, including External Facilitation and Audit and Provide Feedback, were associated with positive outcomes in these experimental trials.

This review offers guidance for scholars and practitioners alike in selecting implementation strategies and suggests a roadmap for future evidence generation.

Implementation strategies are “methods or techniques used to enhance the adoption, implementation, and sustainment of evidence-based practices or programs” (EBPs) [ 1 ]. In 2015, the Expert Recommendations for Implementing Change (ERIC) study organized a panel of implementation scientists to compile a standardized set of implementation strategy terms and definitions [ 2 , 3 , 4 ]. These 73 strategies were then organized into nine “clusters” [ 5 ]. The ERIC taxonomy has been widely adopted and further refined [ 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 ]. However, much of the evidence for individual or groups of ERIC strategies remains narrowly focused. Prior systematic reviews and meta-analyses have assessed strategy effectiveness, but have generally focused on a specific strategy, (e.g., Audit and Provide Feedback) [ 14 , 15 , 16 ], subpopulation, disease (e.g., individuals living with dementia) [ 16 ], outcome [ 15 ], service setting (e.g., primary care clinics) [ 17 , 18 , 19 ] or geography [ 20 ]. Given that these strategies are intended to have broad applicability, there remains a need to understand how well implementation strategies work across EBPs and settings and the extent to which implementation knowledge is generalizable.

There are challenges in assessing the evidence for implementation strategies across many EBPs, populations, and settings. Heterogeneity in population characteristics, study designs, methods, and outcomes has made it difficult to quantitatively compare which strategies work and under which conditions [21]. Moreover, there remains significant variability in how researchers operationalize, apply, and report strategies (individually or in combination) and outcomes [21, 22]. Still, synthesizing data on the use of individual strategies would help researchers replicate findings and better understand possible mediating factors, including the cost, timing, and delivery by specific types of health providers or key partners [23, 24, 25]. Such an evidence base would also aid practitioners with implementation planning, such as when and how to deploy a strategy for optimal impact.

Building upon previous efforts, we therefore conducted a systematic review to evaluate the level of evidence supporting the ERIC implementation strategies across a broad array of health and human service settings and outcomes, as organized by the evaluation framework RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance) [26, 27, 28]. A secondary aim of this work was to identify patterns in the scientific reporting of strategy use that could inform not only reporting standards for strategies but also the methods employed in future studies. The current study was guided by the following research questions (see Footnote 1):

What implementation strategies have been most commonly and rigorously tested in health and human service settings?

Which implementation strategies were commonly paired?

What is the evidence supporting commonly tested implementation strategies?

We used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA-P) model [ 29 , 30 , 31 ] to develop and report on the methods for this systematic review (Additional File 1). This study was considered to be non-human subjects research by the RAND institutional review board.

Registration

The protocol was registered with PROSPERO (PROSPERO 2021 CRD42021235592).

Eligibility criteria

This review sought to synthesize evidence for implementation strategies from research studies conducted across a wide range of health-related settings and populations. Inclusion criteria required that studies: 1) were available in English; 2) were published between January 1, 2010 and September 20, 2022; 3) were based on experimental research (excluding protocols, commentaries, conference abstracts, or proposed frameworks); 4) were set in a health or human service context (described below); 5) tested at least one quantitative outcome that could be mapped to the RE-AIM evaluation framework [26, 27, 28]; and 6) evaluated the impact of an implementation strategy that could be classified using the ERIC taxonomy [2, 32]. We defined health and human service settings broadly, including inpatient and outpatient healthcare settings, specialty clinics, mental health treatment centers, long-term care facilities, group homes, correctional facilities, child welfare or youth services, aging services, and schools, and required that the focus be on a health outcome. We excluded hybrid type I trials that primarily focused on establishing EBP effectiveness, qualitative studies, studies that described implementation barriers and facilitators without assessing implementation strategy impact on an outcome, and studies not meeting the standardized rigor criteria defined below.

Information sources

Our three-pronged search strategy included searching academic databases (i.e., CINAHL, PubMed, and Web of Science for replicability and transparency), seeking recommendations from expert implementation scientists, and assessing existing, relevant systematic reviews and meta-analyses.

Search strategy

Search terms included “implementation strateg*” OR “implementation intervention*” OR “implementation bundl*” OR “implementation support*.” The search, conducted on September 20, 2022, was limited to English language and publication between 2010 and 2022, similar to other recent implementation science reviews [ 22 ]. This timeframe was selected to coincide with the advent of Implementation Science and when the term “implementation strategy” became conventionally used [ 2 , 4 , 33 ]. A full search strategy can be found in Additional File 2.

Title and abstract screening process

Each study’s title and abstract were read by two reviewers, who dichotomously scored studies on each of the six eligibility criteria described above as yes=1 or no=0, resulting in a score ranging from 0 to 6. Abstracts receiving a six from both reviewers were included in the full text review. Those with only one score of six were adjudicated by a senior member of the team (MJC, SSR, DEG). The study team held weekly meetings to troubleshoot and resolve any ongoing issues noted during the abstract screening process.
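The screening arithmetic lends itself to a short illustration. The sketch below is a hypothetical rendering of the dual-reviewer scoring and adjudication rule described above; the criterion names and return labels are assumptions for clarity, not the review team's actual tooling.

```python
# Illustrative sketch of the dual-reviewer abstract screening logic.
# Criterion names and return labels are assumptions, not the authors' code.

CRITERIA = [
    "english", "published_2010_2022", "experimental",
    "health_or_human_service", "re_aim_outcome", "eric_strategy",
]

def screen(reviewer_a: dict, reviewer_b: dict) -> str:
    """Return 'include', 'adjudicate', or 'exclude' for one abstract."""
    score_a = sum(reviewer_a[c] for c in CRITERIA)  # each criterion is 0 or 1
    score_b = sum(reviewer_b[c] for c in CRITERIA)
    if score_a == 6 and score_b == 6:
        return "include"      # advances to full-text review
    if score_a == 6 or score_b == 6:
        return "adjudicate"   # resolved by a senior team member
    return "exclude"          # (in the review, a random quarter of
                              # lower-scoring abstracts were also re-checked)
```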

Full text screening

During the full text screening process, we reviewed, in pairs, each article that had progressed through abstract screening. Conflicts between reviewers were adjudicated by a senior member of the team for a final inclusion decision (MJC, SSR, DEG).

Review of study rigor

After reviewing published rigor screening tools [ 34 , 35 , 36 ], we developed an assessment of study rigor that was appropriate for the broad range of reviewed implementation studies. Reviewers evaluated studies on the following: 1) presence of a concurrent comparison or control group (=2 for traditional randomized controlled trial or stepped wedge cluster randomized trial and =1 for pseudo-randomized and other studies with concurrent control); 2) EBP standardization by protocol or manual (=1 if present); 3) EBP fidelity tracking (=1 if present); 4) implementation strategy standardization by operational description, standard training, or manual (=1 if present); 5) length of follow-up from full implementation of intervention (=2 for twelve months or longer, =1 for six to eleven months, or =0 for less than six months); and 6) number of sites (=1 for more than one site). Rigor scores ranged from 0 to 8, with 8 indicating the most rigorous. Articles were included if they 1) included a concurrent control group, 2) had an experimental design, and 3) received a score of 7 or 8 from two independent reviewers.
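To make the rubric concrete, here is a minimal sketch of how the rigor score and inclusion rule described above could be encoded. The field names and function structure are illustrative assumptions rather than the authors' code.

```python
# Hypothetical encoding of the rigor scoring rubric described above.
# Field names and category labels are illustrative assumptions.

def rigor_score(study: dict) -> int:
    score = 0
    # 1) Concurrent comparison or control group
    if study["design"] in ("rct", "stepped_wedge"):
        score += 2
    elif study["design"] == "pseudo_randomized_concurrent_control":
        score += 1
    # 2) EBP standardized by protocol or manual
    score += 1 if study["ebp_standardized"] else 0
    # 3) EBP fidelity tracking
    score += 1 if study["ebp_fidelity_tracked"] else 0
    # 4) Implementation strategy standardized
    score += 1 if study["strategy_standardized"] else 0
    # 5) Length of follow-up from full implementation
    if study["followup_months"] >= 12:
        score += 2
    elif study["followup_months"] >= 6:
        score += 1
    # 6) More than one site
    score += 1 if study["n_sites"] > 1 else 0
    return score  # 0-8, with 8 indicating the most rigorous

def include(study: dict, score_reviewer_1: int, score_reviewer_2: int) -> bool:
    # Inclusion required a concurrent control group, an experimental
    # design, and a score of 7 or 8 from two independent reviewers.
    has_concurrent_control = study["design"] in (
        "rct", "stepped_wedge", "pseudo_randomized_concurrent_control"
    )
    return (has_concurrent_control
            and study["experimental"]
            and min(score_reviewer_1, score_reviewer_2) >= 7)
```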

Outside expert consultation

We contacted 37 global implementation science experts who were recognized by our study team as leaders in the field or who were commonly represented among first or senior authors in the included abstracts. We asked each expert for recommendations of publications meeting study inclusion criteria (i.e., quantitatively evaluating the effectiveness of an implementation strategy). Recommendations were recorded and compared to the full abstract list.

Systematic reviews

Eighty-four systematic reviews were identified through the initial search strategy (See Additional File 3). Systematic reviews that examined the effectiveness of implementation strategies were reviewed in pairs for studies that were not found through our initial literature search.

Data abstraction and coding

Data from the full text review were abstracted in pairs, with conflicts resolved by senior team members (DEG, MJC) using a standard Qualtrics abstraction form. The form captured the setting, number of sites and participants studied, evidence-based practice/program of focus, outcomes assessed (based on RE-AIM), strategies used in each study arm, whether the study took place in the U.S. or outside of the U.S., and the findings (i.e., was there significant improvement in the outcome(s)?). We coded implementation strategies used in the Control and Experimental Arms. We defined the Control Arm as receiving the lowest number of strategies (which could mean zero strategies or care as usual) and the Experimental Arm as the most intensive arm (i.e., receiving the highest number of strategies). When studies included multiple Experimental Arms, the Experimental Arm with the least intensive implementation strategy(ies) was classified as “Control” and the Experimental Arm with the most intensive implementation strategy(ies) was classified as the “Experimental” Arm.
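The arm-coding rule reduces to ranking arms by the number of strategies they received. The following sketch is a hypothetical illustration; the dictionary-of-arms representation is an assumption, not the authors' abstraction form.

```python
# Hypothetical sketch: designate Control vs. Experimental arms by the
# number of implementation strategies each arm received.

def classify_arms(arms: dict[str, list[str]]) -> tuple[str, str]:
    """arms maps an arm label to its list of implementation strategies.

    Returns (control_label, experimental_label): the arm receiving the
    fewest strategies (possibly zero, i.e., care as usual) is coded as
    Control and the most intensive arm as Experimental.
    """
    ordered = sorted(arms, key=lambda label: len(arms[label]))
    return ordered[0], ordered[-1]

# Example with three arms; the least intensive arm is coded as Control.
arms = {
    "usual_care": [],
    "low_intensity": ["Distribute Educational Materials"],
    "high_intensity": ["Distribute Educational Materials",
                       "Audit and Provide Feedback",
                       "External Facilitation"],
}
control, experimental = classify_arms(arms)
# -> ("usual_care", "high_intensity")
```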

Implementation strategies were classified using standard definitions (MJC, SSR, DEG), based on minor modifications to the ERIC taxonomy [ 2 , 3 , 4 ]. Modifications resulted in 70 named strategies and were made to decrease redundancy and improve clarity. These modifications were based on input from experts, cognitive interview data, and team consensus [ 37 ] (See Additional File 4). Outcomes were then coded into RE-AIM outcome domains following best practices as recommended by framework experts [ 26 , 27 , 28 ]. We coded the RE-AIM domain of Effectiveness as either an assessment of the effectiveness of the EBP or the implementation strategy. We did not assess implementation strategy fidelity or effects on health disparities as these are recently adopted reporting standards [ 27 , 28 ] and not yet widely implemented in current publications. Further, we did not include implementation costs as an outcome because reporting guidelines have not been standardized [ 38 , 39 ].

Assessment and minimization of bias

Assessment and minimization of bias is an important component of high-quality systematic reviews. The Cochrane Collaboration guidance for conducting high-quality systematic reviews recommends a specific assessment of bias for individual studies across the domains of randomization, deviations from the intended intervention, missing data, measurement of the outcome, and selection of the reported results (e.g., following a pre-specified analysis plan) [40, 41]. One way we addressed bias was by consolidating multiple publications from the same study into a single finding (i.e., N=1), so as to avoid inflating estimates due to multiple publications on different aspects of a single trial. We also included only high-quality studies, as described above. However, it was not feasible to consistently apply an assessment-of-bias tool given implementation science’s broad scope and the heterogeneity of study designs, contexts, outcomes, and variable measurement. For example, most implementation studies reviewed had many outcomes across the RE-AIM framework, with no one outcome designated as primary, precluding assignment of a single score across studies.

We used descriptive statistics to present the distribution of health or healthcare area, settings, outcomes, and the median number of included patients and sites per study, overall and by country (classified as U.S. vs. non-U.S.). Implementation strategies were described individually, using descriptive statistics to summarize the frequency of strategy use “overall” (in any study arm), and the mean number of strategies reported in the Control and Experimental Arms. We additionally described the strategies that were only in the experimental (and not control) arm, defining these as strategies that were “tested” and may have accounted for differences in outcomes between arms.
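In other words, "tested" strategies are the set difference between the Experimental and Control Arm strategy lists, as the brief sketch below illustrates (the set representation is an assumption for clarity, not the review's code).

```python
# Minimal sketch of the "tested strategies" definition: strategies present
# in the Experimental Arm but absent from the Control Arm of the same study.

def tested_strategies(control: set[str], experimental: set[str]) -> set[str]:
    return experimental - control

control = {"Distribute Educational Materials"}
experimental = {"Distribute Educational Materials",
                "Audit and Provide Feedback",
                "External Facilitation"}
print(tested_strategies(control, experimental))
# -> {'Audit and Provide Feedback', 'External Facilitation'} (order may vary)
```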

We described frequencies of pair-wise combinations of implementation strategies in the Experimental Arm. To assess the strength of the evidence supporting implementation strategies that were used in the Experimental Arm, study outcomes were categorized by RE-AIM and coded based on whether the association between use of the strategies resulted in a significantly positive effect (yes=1; no=0). We then created an indicator variable if at least one RE-AIM outcome in the study was significantly positive (yes=1; no=0). We plotted strategies on a graph with quadrants based on the combination of median number of studies in which a strategy appears and the median percent of studies in which a strategy was associated with at least one positive RE-AIM outcome. The upper right quadrant—higher number of studies overall and higher percent of studies with a significant RE-AIM outcome—represents a superior level of evidence. For implementation strategies in the upper right quadrant, we describe each RE-AIM outcome and the proportion of studies which have a significant outcome.
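The quadrant assignment amounts to comparing each strategy against the medians of two axes: the number of studies using the strategy and the percent of those studies with at least one significant RE-AIM outcome. The pandas sketch below illustrates this computation on made-up data; the column names and values are assumptions, not the review's dataset.

```python
import pandas as pd

# Illustrative long-format data (hypothetical): one row per (study, strategy)
# pair in the Experimental Arm, with a study-level 0/1 flag indicating at
# least one significant RE-AIM outcome.
rows = pd.DataFrame({
    "study_id": [1, 1, 2, 2, 3, 3, 3, 4],
    "strategy": ["External Facilitation", "Audit and Provide Feedback",
                 "External Facilitation", "Tailor Strategies",
                 "Audit and Provide Feedback", "Tailor Strategies",
                 "Conduct Educational Meetings", "External Facilitation"],
    "any_positive": [1, 1, 1, 1, 0, 0, 0, 1],
})

summary = (rows.groupby("strategy")
               .agg(n_studies=("study_id", "nunique"),
                    pct_positive=("any_positive", "mean"))
               .reset_index())
summary["pct_positive"] *= 100

# Quadrants are split at the medians of both axes; the upper-right quadrant
# (above both medians) is read as the stronger level of evidence.
n_median = summary["n_studies"].median()
pct_median = summary["pct_positive"].median()
summary["upper_right"] = ((summary["n_studies"] > n_median) &
                          (summary["pct_positive"] > pct_median))
print(summary.sort_values(["upper_right", "n_studies"], ascending=False))
```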

Search results

We identified 14,646 articles through the initial literature search, 17 articles through expert recommendation (three of which were not included in the initial search), and 1,942 articles through reviewing prior systematic reviews (Fig. 1). After removing duplicates, 9,399 articles were included in the initial abstract screening. Of those, 48% (n=4,075) of abstracts were reviewed in pairs for inclusion. Articles with a score of five or six were reviewed a second time (n=2,859), and one quarter of abstracts that scored lower than five were reviewed a second time at random. We screened the full text of 1,426 articles in pairs. Common reasons for exclusion were 1) study rigor, including no clear delineation between the EBP and the implementation strategy, 2) not testing an implementation strategy, and 3) article type that did not meet inclusion criteria (e.g., commentary, protocol). Six hundred seventeen articles were reviewed for study rigor, with 385 excluded for reasons related to study design and rigor and 86 removed for other reasons (e.g., not a research article). Among the three additional expert-recommended articles, one met inclusion criteria and was added to the analysis. The final number of studies abstracted was 129, representing 143 publications.

Figure 1. Expanded PRISMA Flow Diagram. The expanded PRISMA flow diagram provides a description of each step in the review and abstraction process for the systematic review.

Descriptive results

Of the 129 included studies (Table 1; see also Additional File 5 for the Summary of Included Studies), 103 (79%) were conducted in a healthcare setting. EBP health care settings varied and included primary care (n=46; 36%), specialty care (n=27; 21%), mental health (n=11; 9%), and public health (n=30; 23%), with 64 studies (50%) occurring in an outpatient health care setting. Studies included a median of 29 sites and a median target population of 1,419 (e.g., patients or students). The number of strategies varied widely across studies, with Control Arms averaging approximately two strategies (Range = 0-20, including studies with no strategy in the comparison group) and Experimental Arms averaging eight strategies (Range = 1-21). Non-US studies (n=73) included more sites and larger target populations on average, with an overall median of 32 sites and 1,531 patients assessed per study.

Organized by RE-AIM, the most evaluated outcomes were Effectiveness ( n = 82, 64%) and Implementation ( n = 73, 56%); followed by Maintenance ( n =40; 31%), Adoption ( n =33; 26%), and Reach ( n =31; 24%). Most studies ( n = 98, 76%) reported at least one significantly positive outcome. Adoption and Implementation outcomes showed positive change in three-quarters of studies ( n =78), while Reach ( n =18; 58%), Effectiveness ( n =44; 54%), and Maintenance ( n =23; 58%) outcomes evidenced positive change in approximately half of studies.

The following describes the results for each research question.

Table 2 shows the frequency of studies within which an implementation strategy was used in the Control Arm, Experimental Arm(s), and tested strategies (those used exclusively in the Experimental Arm) grouped by strategy type, as specified by previous ERIC reports [ 2 , 6 ].

Control arm

In about half the studies (53%; n =69), the Control Arms were “active controls” that included at least one strategy, with an average of 1.64 (and up to 20) strategies reported in control arms. The two most common strategies used in Control Arms were: Distribute Educational Materials ( n =52) and Conduct Educational Meetings ( n =30).

Experimental arm

Experimental conditions included an average of 8.33 implementation strategies per study (Range = 1-21). Figure 2 shows a heat map of the strategies that were used in the Experimental Arms in each study. The most common strategies in the Experimental Arm were Distribute Educational Materials ( n =99), Conduct Educational Meetings ( n =96), Audit and Provide Feedback ( n =76), and External Facilitation ( n =59).

Figure 2. Implementation strategies used in the Experimental Arm of included studies. Explore more here: https://public.tableau.com/views/Figure2_16947070561090/Figure2?:language=en-US&:display_count=n&:origin=viz_share_link

Tested strategies

The average number of implementation strategies included in the Experimental Arm only (and not in the Control Arm) was 6.73 (Range = 0-20; see Footnote 2). Overall, the top 10% of tested strategies included Conduct Educational Meetings (n=68), Audit and Provide Feedback (n=63), External Facilitation (n=54), Distribute Educational Materials (n=49), Tailor Strategies (n=41), Assess for Readiness and Identify Barriers and Facilitators (n=38), and Organize Clinician Implementation Team Meetings (n=37). Few studies tested a single strategy (n=9); these strategies included Audit and Provide Feedback, Conduct Educational Meetings, Conduct Ongoing Training, Create a Learning Collaborative, External Facilitation (n=2), Facilitate Relay of Clinical Data to Providers, Prepare Patients/Consumers to be Active Participants, and Use Other Payment Schemes. Three implementation strategies were included in the Control or Experimental Arms but were not tested: Use Mass Media, Stage Implementation Scale Up, and Fund and Contract for the Clinical Innovation.

Table 3  shows the five most used strategies in Experimental Arms with their top ten most frequent pairings, excluding Distribute Educational Materials and Conduct Educational Meetings, as these strategies were included in almost all Experimental and half of Control Arms. The five most used strategies in the Experimental Arm included Audit and Provide Feedback ( n =76), External Facilitation ( n =59), Tailor Strategies ( n =43), Assess for Readiness and Identify Barriers and Facilitators ( n =43), and Organize Implementation Teams ( n =42).

Strategies frequently paired with these five strategies included two educational strategies: Distribute Educational Materials and Conduct Educational Meetings. Other commonly paired strategies included Develop a Formal Implementation Blueprint, Promote Adaptability, Conduct Ongoing Training, Purposefully Reexamine the Implementation, and Develop and Implement Tools for Quality Monitoring.

We classified the strength of evidence for each strategy by evaluating both the number of studies in which each strategy appeared in the Experimental Arm and the percentage of times there was at least one significantly positive RE-AIM outcome. Using these factors, Fig. 3 shows the number of studies in which individual strategies were evaluated (on the y axis) compared to the percentage of times that studies including those strategies had at least one positive outcome (on the x axis). Due to the non-normal distribution of both factors, we used the median (rather than the mean) to create four quadrants. Strategies in the lower left quadrant were tested in fewer than the median number of studies (8.5) and were less frequently associated with a significant RE-AIM outcome (75%). The upper right quadrant included strategies that occurred in more than the median number of studies (8.5) and had more than the median percent of studies with a significant RE-AIM outcome (75%); thus those 19 strategies were viewed as having stronger evidence. Of those 19 implementation strategies, Conduct Educational Meetings, Distribute Educational Materials, External Facilitation, and Audit and Provide Feedback continued to occur frequently, appearing in 59-99 studies.

Figure 3. Experimental Arm implementation strategies with a significant RE-AIM outcome. Explore more here: https://public.tableau.com/views/Figure3_16947017936500/Figure3?:language=en-US&publish=yes&:display_count=n&:origin=viz_share_link

Figure 4 graphically illustrates the proportion of significant outcomes for each RE-AIM outcome for the 19 commonly used and evidence-supported implementation strategies in the upper right quadrant. These findings again show the widespread use of Conduct Educational Meetings and Distribute Educational Materials. Implementation and Effectiveness outcomes were assessed most frequently, with Implementation being the most commonly reported significantly positive outcome.

Figure 4. RE-AIM outcomes for the 19 top-right-quadrant implementation strategies. The y-axis is the number of studies and the x-axis is a stacked bar chart for each RE-AIM outcome (R = Reach, E = Effectiveness, A = Adoption, I = Implementation, M = Maintenance). Blue denotes at least one significant RE-AIM outcome; light blue denotes studies that used the given implementation strategy and did not have a significant RE-AIM outcome. Explore more here: https://public.tableau.com/views/Figure4_16947017112150/Figure4?:language=en-US&publish=yes&:display_count=n&:origin=viz_share_link

This systematic review identified 129 experimental studies examining the effectiveness of implementation strategies across a broad range of health and human service settings. Overall, we found that evidence is lacking for most ERIC implementation strategies, that most studies employed combinations of strategies, and that implementation outcomes, categorized by RE-AIM dimensions, have not been universally defined or applied. Accordingly, other researchers have described the need for universal outcome definitions and descriptions across implementation research studies [28, 42]. Our findings have important implications not only for the current state of the field but also for creating guidance to help investigators determine which strategies to examine and in what context.

The four most evaluated strategies were Distribute Educational Materials, Conduct Educational Meetings, External Facilitation, and Audit and Provide Feedback. Conducting Educational Meetings and Distributing Educational Materials were surprisingly the most common. This may reflect the fact that education strategies are generally considered to be “necessary but not sufficient” for successful implementation [ 43 , 44 ]. Because education is often embedded in interventions, it is critical to define the boundary between the innovation and the implementation strategies used to support the innovation. Further specification as to when these strategies are EBP core components or implementation strategies (e.g., booster trainings or remediation) is needed [ 45 , 46 ].

We identified 19 implementation strategies that were tested in at least 8 studies (more than the median) and were associated with positive results at least 75% of the time. These strategies can be further categorized as being used early (pre-implementation) versus later in implementation. Preparatory or pre-implementation strategies with strong evidence included educational activities (Meetings, Materials, Outreach Visits, Train for Leadership, Use Train the Trainer Strategies) and site diagnostic activities (Assess for Readiness and Identify Barriers and Facilitators, Conduct Local Needs Assessment, Identify and Prepare Champions, and Assess and Redesign Workflows). Strategies that target the implementation phase include those that provide coaching and support (External and Internal Facilitation), involve additional key partners (Intervene with Patients to Enhance Uptake and Adherence), and engage in quality improvement activities (Audit and Provide Feedback, Facilitate the Relay of Clinical Data to Providers, Purposefully Reexamine the Implementation, Conduct Cyclical Small Tests of Change, Develop and Implement Tools for Quality Monitoring).

There were many ERIC strategies that were not represented in the reviewed studies, specifically the financial and policy strategies. Ten strategies were not used in any studies: Alter Patient/Consumer Fees, Change Liability Laws, Change Service Sites, Develop Disincentives, Develop Resource Sharing Agreements, Identify Early Adopters, Make Billing Easier, Start a Dissemination Organization, Use Capitated Payments, and Use Data Experts. One limitation of this investigation was that not all individual strategies or combinations were investigated. Reasons for the absence of these strategies in our review may include challenges with testing certain strategies experimentally (e.g., changing liability laws), limitations in our search terms, and the relative paucity of implementation strategy trials compared to clinical trials. Many “untested” strategies require large-scale structural changes with leadership support (see [47] for a policy experiment example). Recent preliminary work has assessed the feasibility of applying policy strategies and described the challenges of doing so [48, 49, 50]. While such evaluations are not impossible in large systems like the VA (for example, the randomized evaluation of the VA Stratification Tool for Opioid Risk Management), the large size, structure, and organizational imperative make these initiatives challenging to evaluate experimentally. Likewise, the absence of these ten strategies may have been the result of our inclusion criteria, which required an experimental design. Thus, creative study designs may be needed to test high-level policy or financial strategies experimentally.

Some strategies that were likely under-represented in our search strategy included electronic medical record reminders and clinical decision support tools and systems. These are often considered “interventions” when used by clinical trialists and may not be indexed as studies involving ‘implementation strategies’ (these tools have been reviewed elsewhere [ 51 , 52 , 53 ]). Thus, strategies that are also considered interventions in the literature (e.g., education interventions) were not sought or captured. Our findings do not imply that these strategies are ineffective, rather that more study is needed. Consistent with prior investigations [ 54 ], few studies meeting inclusion criteria tested financial strategies. Accordingly, there are increasing calls to track and monitor the effects of financial strategies within implementation science to understand their effectiveness in practice [ 55 , 56 ]. However, experts have noted that the study of financial strategies can be a challenge given that they are typically implemented at the system-level and necessitate research designs for studying policy-effects (e.g., quasi-experimental methods, systems-science modeling methods) [ 57 ]. Yet, there have been some recent efforts to use financial strategies to support EBPs that appear promising [ 58 ] and could be a model for the field moving forward.

The relationship between the number of strategies used and improved outcomes has been described inconsistently in the literature. While some studies have found improved outcomes with a bundle of strategies that were uniquely combined or with a standardized package of strategies (e.g., Replicating Effective Programs [59, 60] and Getting To Outcomes [61, 62]), others have found that “more is not always better” [63, 64, 65]. For example, Rogal and colleagues documented that VA hospitals implementing a new evidence-based hepatitis C treatment chose >20 strategies, when multiple years of data linking strategies to outcomes showed that 1-3 specific strategies would have yielded the same outcome [39]. Considering that most studies employed multiple or multifaceted strategies, there appears to be a benefit to using a targeted bundle of strategies that is purposefully aligned with site/clinic/population norms, rather than simply adding more strategies [66].

It is difficult to assess the effectiveness of any one implementation strategy in bundles where multiple strategies are used simultaneously. Even a ‘single’ strategy like External Facilitation is, in actuality, a bundle of narrowly constructed strategies (e.g., Conduct Educational Meetings, Identify and Prepare Champions, and Develop a Formal Implementation Blueprint). Thus, studying External Facilitation does not allow for a test of the individual strategies that comprise it, potentially masking the effectiveness of any individual strategy. While we cannot easily disaggregate the effects of multifaceted strategies, doing so may not yield meaningful results. Because strategies often synergize, disaggregated results could either underestimate the true impact of individual strategies or conversely, actually undermine their effectiveness (i.e., when their effectiveness comes from their combination with other strategies). The complexity of health and human service settings, imperative to improve public health outcomes, and engagement with community partners often requires the use of multiple strategies simultaneously. Therefore, the need to improve real-world implementation may outweigh the theoretical need to identify individual strategy effectiveness. In situations where it would be useful to isolate the impact of single strategies, we suggest that the same methods for documenting and analyzing the critical components (or core functions) of complex interventions [ 67 , 68 , 69 , 70 ] may help to identify core components of multifaceted implementation strategies [ 71 , 72 , 73 , 74 ].

In addition, to truly assess the impacts of strategies on outcomes, it may be necessary to track fidelity to implementation strategies (not just the EBPs they support). While this can be challenging, without some degree of tracking and fidelity checks, one cannot determine whether a strategy’s apparent failure to work was because it 1) was ineffective or 2) was not applied well. To facilitate this tracking there are pragmatic tools to support researchers. For example, the Longitudinal Implementation Strategy Tracking System (LISTS) offers a pragmatic and feasible means to assess fidelity to and adaptations of strategies [ 75 ].

Implications for implementation science: four recommendations

Based on our findings, we offer four recommended “best practices” for implementation studies.

Prespecify strategies using standard nomenclature. This study reaffirmed the need to apply not only a standard naming convention (e.g., ERIC) but also a standard reporting structure for implementation strategies. While reporting systems like those by Proctor [1] or Pinnock [75] would optimize learning across studies, few manuscripts specify strategies as recommended [76, 77]. Pre-specification allows planners and evaluators to assess the feasibility and acceptability of strategies with partners and community members [24, 78, 79] and allows evaluators and implementers to monitor and measure the fidelity, dose, and adaptations of strategies delivered over the course of implementation [27]. In turn, these data can be used to assess costs, analyze effectiveness [38, 80, 81], and ensure more accurate reporting [82, 83, 84, 85]. This specification should include, among other data, the intensity, stage of implementation, and justification for the selection. Information regarding why strategies were selected for specific settings would further the field and be of great use to practitioners [63, 65, 69, 79, 86].

Ensure that standards for measuring and reporting implementation outcomes are consistently applied and account for the complexity of implementation studies. Part of improving standardized reporting must include clearly defining outcomes and linking each outcome to particular implementation strategies. It was challenging in the present review to disentangle the impact of the intervention(s) (i.e., the EBP) versus the impact of the implementation strategy(ies) for each RE-AIM dimension. For example, often fidelity to the EBP was reported but not for the implementation strategies. Similarly, Reach and Adoption of the intervention would be reported for the Experimental Arm but not for the Control Arm, prohibiting statistical comparisons of strategies on the relative impact of the EBP between study arms. Moreover, there were many studies evaluating numerous outcomes, risking data dredging. Further, the significant heterogeneity in the ways in which implementation outcomes are operationalized and reported is a substantial barrier to conducting large-scale meta-analytic approaches to synthesizing evidence for implementation strategies [ 67 ]. The field could look to others in the social and health sciences for examples in how to test, validate, and promote a common set of outcome measures to aid in bringing consistency across studies and real-world practice (e.g., the NIH-funded Patient-Reported Outcomes Measurement Information System [PROMIS], https://www.healthmeasures.net/explore-measurement-systems/promis ).

Develop infrastructure to learn cross-study lessons in implementation science. Data repositories, like those developed by NCI for rare diseases, U.S. HIV Implementation Science Coordination Initiative [ 87 ], and the Behavior Change Technique Ontology [ 88 ], could allow implementation scientists to report their findings in a more standardized manner, which would promote ease of communication and contextualization of findings across studies. For example, the HIV Implementation Science Coordination Initiative requested all implementation projects use common frameworks, developed user friendly databases to enable practitioners to match strategies to determinants, and developed a dashboard of studies that assessed implementation determinants [ 89 , 90 , 91 , 92 , 93 , 94 ].

Develop and apply methods to rigorously study common strategies and bundles. These findings support prior recommendations for improved empirical rigor in implementation studies [46, 95]. Many studies were excluded from our review for not meeting methodological rigor standards. Understanding the effectiveness of discrete strategies deployed alone or in combination requires reliable, low-burden tracking methods to collect information about strategy use and outcomes. For example, frameworks like the Implementation Replication Framework [96] could help interpret findings across studies using the same strategy bundle. Other tracking approaches may leverage technology (e.g., cell phones, tablets, EMR templates) [78, 97] or find novel, pragmatic approaches to collect recommended strategy specifications over time (e.g., dose, deliverer, and mechanism) [1, 9, 27, 98, 99]. Rigorous reporting standards could inform more robust analyses and conclusions (e.g., moving toward the goal of understanding causality, microcosting efforts) [24, 38, 100, 101]. Such detailed tracking is also required to understand how site-level factors moderate implementation strategy effects [102]. In some cases, adaptive trial designs like sequential multiple assignment randomized trials (SMARTs) and just-in-time adaptive interventions (JITAIs) can be helpful for planning strategy escalation.

Limitations

Despite the strengths of this review, there were certain notable limitations. For one, we only included experimental studies, omitting many informative observational investigations that cover the range of implementation strategies. Second, our study period was centered on the creation of the journal Implementation Science and not on the standardization and operationalization of implementation strategies in the publication of the ERIC taxonomy (which came later). This, in conjunction with latency in reporting study results and funding cycles, means that the employed taxonomy was not applied in earlier studies. To address this limitation, we retroactively mapped strategies to ERIC, but it is possible that some studies were missed. Additionally, indexing approaches used by academic databases may have missed relevant studies. We addressed this particular concern by reviewing other systematic reviews of implementation strategies and soliciting recommendations from global implementation science experts.

Another potential limitation comes from the ERIC taxonomy itself; strategy listings like ERIC are only useful when they are widely adopted and used in conjunction with guidelines for specifying and reporting strategies [1] in protocol and outcome papers. Although the ERIC paper has been widely cited (over three thousand times, and accessed about 186 thousand times), it is still not universally applied, making it more difficult to track the impact of specific strategies. However, our experience with this review suggested that ERIC's use is increasing over time. Also, some have commented that ERIC strategies can be unclear and are missing key domains. Thus, researchers are making definitions clearer for lay users [37, 103], increasing the number of discrete strategies for specific domains like HIV treatment, acknowledging strategies for new functions (e.g., de-implementation [104], local capacity building), accounting for phases of implementation (dissemination, sustainment [13], scale-up), addressing settings [12, 20] and actors' roles in the process, and making mechanisms of change to select strategies more user-friendly through searchable databases [9, 10, 54, 73, 104, 105, 106]. In sum, we found that the utility of the ERIC taxonomy outweighs its current limitations.

As with all reviews, the search terms influenced our findings. As such, the broad terms for implementation strategies (e.g., “evidence-based interventions”[ 7 ] or “behavior change techniques” [ 107 ]) may have led to inadvertent omissions of studies of specific strategies. For example, the search terms may not have captured tests of policies, financial strategies, community health promotion initiatives, or electronic medical record reminders, due to differences in terminology used in corresponding subfields of research (e.g., health economics, business, health information technology, and health policy). To manage this, we asked experts to inform us about any studies that they would include and cross-checked their lists with what was identified through our search terms, which yielded very few additional studies. We included standard coding using the ERIC taxonomy, which was a strength, but future work should consider including the additional strategies that have been recommended to augment ERIC, around sustainment [ 13 , 79 , 106 , 108 ], community and public health research [ 12 , 109 , 110 , 111 ], consumer or service user engagement [ 112 ], de-implementation [ 104 , 113 , 114 , 115 , 116 , 117 ] and related terms [ 118 ].

We were unable to assess the bias of studies due to non-standard reporting across the papers and the heterogeneity of study designs, measurement of implementation strategies and outcomes, and analytic approaches. This could have resulted in over- or underestimating the results of our synthesis. We addressed this limitation by being cautious in our reporting of findings, specifically in identifying “effective” implementation strategies. Further, we were not able to gather primary data to evaluate effect sizes across studies in order to systematically evaluate bias, which would be fruitful for future study.

Conclusions

This novel review of 129 studies summarized the body of evidence supporting the use of ERIC-defined implementation strategies to improve health or healthcare. We identified commonly occurring implementation strategies, frequently used bundles, and the strategies with the highest degree of supportive evidence, while simultaneously identifying gaps in the literature. Additionally, we identified several key areas for future growth and operationalization across the field of implementation science with the goal of improved reporting and assessment of implementation strategies and related outcomes.

Availability of data and materials

All data for this study are included in this published article and its supplementary information files.

Footnote 1: We modestly revised the following research questions from our PROSPERO registration after reading the articles and better understanding the nature of the literature: 1) What is the available evidence regarding the effectiveness of implementation strategies in supporting the uptake and sustainment of evidence intended to improve health and healthcare outcomes? 2) What are the current gaps in the literature (i.e., implementation strategies that do not have sufficient evidence of effectiveness) that require further exploration?

Footnote 2: Tested strategies are those which exist in the Experimental Arm but not in the Control Arm. Comparative effectiveness or time-staggered trials may not have any unique strategies in the Experimental Arm and therefore, in our analysis, would have no tested strategies.

Abbreviations

CDC: Centers for Disease Control

CINAHL: Cumulated Index to Nursing and Allied Health Literature

D&I: Dissemination and Implementation

EBP: Evidence-based practices or programs

ERIC: Expert Recommendations for Implementing Change

MOST: Multiphase Optimization Strategy

NCI: National Cancer Institute

NIH: National Institutes of Health

Pitt DISC: The Pittsburgh Dissemination and Implementation Science Collaborative

SMART: Sequential Multiple Assignment Randomized Trial

US: United States

VA: Department of Veterans Affairs

Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139.

Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21.

Waltz TJ, Powell BJ, Chinman MJ, Smith JL, Matthieu MM, Proctor EK, et al. Expert recommendations for implementing change (ERIC): protocol for a mixed methods study. Implement Sci IS. 2014;9:39.

Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, et al. A Compilation of Strategies for Implementing Clinical Innovations in Health and Mental Health. Med Care Res Rev. 2012;69:123–57.

Waltz TJ, Powell BJ, Matthieu MM, Damschroder LJ, Chinman MJ, Smith JL, et al. Use of concept mapping to characterize relationships among implementation strategies and assess their feasibility and importance: results from the Expert Recommendations for Implementing Change (ERIC) study. Implement Sci. 2015;10:109.

Perry CK, Damschroder LJ, Hemler JR, Woodson TT, Ono SS, Cohen DJ. Specifying and comparing implementation strategies across seven large implementation interventions: a practical application of theory. Implement Sci. 2019;14(1):32.

Community Preventive Services Task Force. Community Preventive Services Task Force: All Active Findings June 2023 [Internet]. 2023 [cited 2023 Aug 7]. Available from: https://www.thecommunityguide.org/media/pdf/CPSTF-All-Findings-508.pdf

Solberg LI, Kuzel A, Parchman ML, Shelley DR, Dickinson WP, Walunas TL, et al. A Taxonomy for External Support for Practice Transformation. J Am Board Fam Med JABFM. 2021;34:32–9.

Leeman J, Birken SA, Powell BJ, Rohweder C, Shea CM. Beyond “implementation strategies”: classifying the full range of strategies used in implementation science and practice. Implement Sci. 2017;12:1–9.

Leeman J, Calancie L, Hartman MA, Escoffery CT, Herrmann AK, Tague LE, et al. What strategies are used to build practitioners’ capacity to implement community-based interventions and are they effective?: a systematic review. Implement Sci. 2015;10:1–15.

Nathan N, Shelton RC, Laur CV, Hailemariam M, Hall A. Editorial: Sustaining the implementation of evidence-based interventions in clinical and community settings. Front Health Serv. 2023;3:1176023.

Balis LE, Houghtaling B, Harden SM. Using implementation strategies in community settings: an introduction to the Expert Recommendations for Implementing Change (ERIC) compilation and future directions. Transl Behav Med. 2022;12:965–78.

Nathan N, Powell BJ, Shelton RC, Laur CV, Wolfenden L, Hailemariam M, et al. Do the Expert Recommendations for Implementing Change (ERIC) strategies adequately address sustainment? Front Health Serv. 2022;2:905909.

Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;6:CD000259.

Moore L, Guertin JR, Tardif P-A, Ivers NM, Hoch J, Conombo B, et al. Economic evaluations of audit and feedback interventions: a systematic review. BMJ Qual Saf. 2022;31:754–67.

Sykes MJ, McAnuff J, Kolehmainen N. When is audit and feedback effective in dementia care? A systematic review. Int J Nurs Stud. 2018;79:27–35.

Barnes C, McCrabb S, Stacey F, Nathan N, Yoong SL, Grady A, et al. Improving implementation of school-based healthy eating and physical activity policies, practices, and programs: a systematic review. Transl Behav Med. 2021;11:1365–410.

Tomasone JR, Kauffeldt KD, Chaudhary R, Brouwers MC. Effectiveness of guideline dissemination and implementation strategies on health care professionals’ behaviour and patient outcomes in the cancer care context: a systematic review. Implement Sci. 2020;15:1–18.

Seda V, Moles RJ, Carter SR, Schneider CR. Assessing the comparative effectiveness of implementation strategies for professional services to community pharmacy: A systematic review. Res Soc Adm Pharm. 2022;18:3469–83.

Lovero KL, Kemp CG, Wagenaar BH, Giusto A, Greene MC, Powell BJ, et al. Application of the Expert Recommendations for Implementing Change (ERIC) compilation of strategies to health intervention implementation in low- and middle-income countries: a systematic review. Implement Sci. 2023;18:56.

Chapman A, Rankin NM, Jongebloed H, Yoong SL, White V, Livingston PM, et al. Overcoming challenges in conducting systematic reviews in implementation science: a methods commentary. Syst Rev. 2023;12:1–6.

Proctor EK, Bunger AC, Lengnick-Hall R, Gerke DR, Martin JK, Phillips RJ, et al. Ten years of implementation outcomes research: a scoping review. Implement Sci. 2023;18:1–19.

Michaud TL, Pereira E, Porter G, Golden C, Hill J, Kim J, et al. Scoping review of costs of implementation strategies in community, public health and healthcare settings. BMJ Open. 2022;12:e060785.

Sohn H, Tucker A, Ferguson O, Gomes I, Dowdy D. Costing the implementation of public health interventions in resource-limited settings: a conceptual framework. Implement Sci. 2020;15:1–8.

Peek C, Glasgow RE, Stange KC, Klesges LM, Purcell EP, Kessler RS. The 5 R’s: an emerging bold standard for conducting relevant research in a changing world. Ann Fam Med. 2014;12:447–55.

Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89:1322–7.

Shelton RC, Chambers DA, Glasgow RE. An Extension of RE-AIM to Enhance Sustainability: Addressing Dynamic Context and Promoting Health Equity Over Time. Front Public Health. 2020;8:134.

Holtrop JS, Estabrooks PA, Gaglio B, Harden SM, Kessler RS, King DK, et al. Understanding and applying the RE-AIM framework: Clarifications and resources. J Clin Transl Sci. 2021;5:e126.

Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4:1.

Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ. 2015;349:g7647.

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ [Internet]. 2021;372. Available from: https://www.bmj.com/content/372/bmj.n71

Rabin BA, Brownson RC, Haire-Joshu D, Kreuter MW, Weaver NL. A Glossary for Dissemination and Implementation Research in Health. J Public Health Manag Pract. 2008;14:117–23.

Eccles MP, Mittman BS. Welcome to Implementation Science. Implement Sci. 2006;1:1.

Miller WR, Wilbourne PL. Mesa Grande: a methodological analysis of clinical trials of treatments for alcohol use disorders. Addict Abingdon Engl. 2002;97:265–77.

Miller WR, Brown JM, Simpson TL, Handmaker NS, Bien TH, Luckie LF, et al. What works? A methodological analysis of the alcohol treatment outcome literature. Handb Alcohol Treat Approaches Eff Altern 2nd Ed. Needham Heights, MA, US: Allyn & Bacon; 1995:12–44.

Wells S, Tamir O, Gray J, Naidoo D, Bekhit M, Goldmann D. Are quality improvement collaboratives effective? A systematic review. BMJ Qual Saf. 2018;27:226–40.

Yakovchenko V, Chinman MJ, Lamorte C, Powell BJ, Waltz TJ, Merante M, et al. Refining Expert Recommendations for Implementing Change (ERIC) strategy surveys using cognitive interviews with frontline providers. Implement Sci Commun. 2023;4:1–14.

Wagner TH, Yoon J, Jacobs JC, So A, Kilbourne AM, Yu W, et al. Estimating costs of an implementation intervention. Med Decis Making. 2020;40:959–67.

Gold HT, McDermott C, Hoomans T, Wagner TH. Cost data in implementation science: categories and approaches to costing. Implement Sci. 2022;17:11.

Boutron I, Page MJ, Higgins JP, Altman DG, Lundh A, Hróbjartsson A. Considering bias and conflicts of interest among the included studies. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editors. Cochrane Handbook for Systematic Reviews of Interventions. 2019. https://doi.org/10.1002/9781119536604.ch7 . 

Higgins JP, Savović J, Page MJ, Elbers RG, Sterne J. Assessing risk of bias in a randomized trial. Cochrane Handb Syst Rev Interv. 2019;6:205–28.

Reilly KL, Kennedy S, Porter G, Estabrooks P. Comparing, Contrasting, and Integrating Dissemination and Implementation Outcomes Included in the RE-AIM and Implementation Outcomes Frameworks. Front Public Health. 2020;8. Available from: https://doi.org/10.3389/fpubh.2020.00430

Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay CR, Vale L, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess Winch Engl. 2004;8:iii–iv 1-72.

CAS   Google Scholar  

Beidas RS, Kendall PC. Training Therapists in Evidence-Based Practice: A Critical Review of Studies From a Systems-Contextual Perspective. Clin Psychol Publ Div Clin Psychol Am Psychol Assoc. 2010;17:1–30.

Powell BJ, Beidas RS, Lewis CC, Aarons GA, McMillen JC, Proctor EK, et al. Methods to Improve the Selection and Tailoring of Implementation Strategies. J Behav Health Serv Res. 2017;44:177–94.

Powell BJ, Fernandez ME, Williams NJ, Aarons GA, Beidas RS, Lewis CC, et al. Enhancing the Impact of Implementation Strategies in Healthcare: A Research Agenda. Front Public Health [Internet]. 2019 [cited 2021 Mar 31];7. Available from: https://www.frontiersin.org/articles/ https://doi.org/10.3389/fpubh.2019.00003/full

Frakt AB, Prentice JC, Pizer SD, Elwy AR, Garrido MM, Kilbourne AM, et al. Overcoming Challenges to Evidence-Based Policy Development in a Large. Integrated Delivery System Health Serv Res. 2018;53:4789–807.

PubMed   Google Scholar  

Crable EL, Lengnick-Hall R, Stadnick NA, Moullin JC, Aarons GA. Where is “policy” in dissemination and implementation science? Recommendations to advance theories, models, and frameworks: EPIS as a case example. Implement Sci. 2022;17:80.

Crable EL, Grogan CM, Purtle J, Roesch SC, Aarons GA. Tailoring dissemination strategies to increase evidence-informed policymaking for opioid use disorder treatment: study protocol. Implement Sci Commun. 2023;4:16.

Bond GR. Evidence-based policy strategies: A typology. Clin Psychol Sci Pract. 2018;25:e12267.

Loo TS, Davis RB, Lipsitz LA, Irish J, Bates CK, Agarwal K, et al. Electronic Medical Record Reminders and Panel Management to Improve Primary Care of Elderly Patients. Arch Intern Med. 2011;171:1552–8.

Shojania KG, Jennings A, Mayhew A, Ramsay C, Eccles M, Grimshaw J. Effect of point-of-care computer reminders on physician behaviour: a systematic review. CMAJ Can Med Assoc J. 2010;182:E216-25.

Sequist TD, Gandhi TK, Karson AS, Fiskio JM, Bugbee D, Sperling M, et al. A Randomized Trial of Electronic Clinical Reminders to Improve Quality of Care for Diabetes and Coronary Artery Disease. J Am Med Inform Assoc JAMIA. 2005;12:431–7.

Dopp AR, Kerns SEU, Panattoni L, Ringel JS, Eisenberg D, Powell BJ, et al. Translating economic evaluations into financing strategies for implementing evidence-based practices. Implement Sci. 2021;16:1–12.

Dopp AR, Hunter SB, Godley MD, Pham C, Han B, Smart R, et al. Comparing two federal financing strategies on penetration and sustainment of the adolescent community reinforcement approach for substance use disorders: protocol for a mixed-method study. Implement Sci Commun. 2022;3:51.

Proctor EK, Toker E, Tabak R, McKay VR, Hooley C, Evanoff B. Market viability: a neglected concept in implementation science. Implement Sci. 2021;16:98.

Dopp AR, Narcisse M-R, Mundey P, Silovsky JF, Smith AB, Mandell D, et al. A scoping review of strategies for financing the implementation of evidence-based practices in behavioral health systems: State of the literature and future directions. Implement Res Pract. 2020;1:2633489520939980.

PubMed   PubMed Central   Google Scholar  

Dopp AR, Kerns SEU, Panattoni L, Ringel JS, Eisenberg D, Powell BJ, et al. Translating economic evaluations into financing strategies for implementing evidence-based practices. Implement Sci IS. 2021;16:66.

Kilbourne AM, Neumann MS, Pincus HA, Bauer MS, Stall R. Implementing evidence-based interventions in health care:application of the replicating effective programs framework. Implement Sci. 2007;2:42–51.

Kegeles SM, Rebchook GM, Hays RB, Terry MA, O’Donnell L, Leonard NR, et al. From science to application: the development of an intervention package. AIDS Educ Prev Off Publ Int Soc AIDS Educ. 2000;12:62–74.

Wandersman A, Imm P, Chinman M, Kaftarian S. Getting to outcomes: a results-based approach to accountability. Eval Program Plann. 2000;23:389–95.

Wandersman A, Chien VH, Katz J. Toward an evidence-based system for innovation support for implementing innovations with quality: Tools, training, technical assistance, and quality assurance/quality improvement. Am J Community Psychol. 2012;50:445–59.

Rogal SS, Yakovchenko V, Waltz TJ, Powell BJ, Kirchner JE, Proctor EK, et al. The association between implementation strategy use and the uptake of hepatitis C treatment in a national sample. Implement Sci. 2017;12:1–13.

Smith SN, Almirall D, Prenovost K, Liebrecht C, Kyle J, Eisenberg D, et al. Change in patient outcomes after augmenting a low-level implementation strategy in community practices that are slow to adopt a collaborative chronic care model: a cluster randomized implementation trial. Med Care. 2019;57:503.

Rogal SS, Yakovchenko V, Waltz TJ, Powell BJ, Gonzalez R, Park A, et al. Longitudinal assessment of the association between implementation strategy use and the uptake of hepatitis C treatment: Year 2. Implement Sci. 2019;14:1–12.

Harvey G, Kitson A. Translating evidence into healthcare policy and practice: Single versus multi-faceted implementation strategies – is there a simple answer to a complex question? Int J Health Policy Manag. 2015;4:123–6.

Engell T, Stadnick NA, Aarons GA, Barnett ML. Common Elements Approaches to Implementation Research and Practice: Methods and Integration with Intervention Science. Glob Implement Res Appl. 2023;3:1–15.

Michie S, Fixsen D, Grimshaw JM, Eccles MP. Specifying and reporting complex behaviour change interventions: the need for a scientific method. Implement Sci IS. 2009;4:40.

Smith JD, Li DH, Rafferty MR. The Implementation Research Logic Model: a method for planning, executing, reporting, and synthesizing implementation projects. Implement Sci IS. 2020;15:84.

Perez Jolles M, Lengnick-Hall R, Mittman BS. Core Functions and Forms of Complex Health Interventions: a Patient-Centered Medical Home Illustration. JGIM J Gen Intern Med. 2019;34:1032–8.

Schroeck FR, Ould Ismail AA, Haggstrom DA, Sanchez SL, Walker DR, Zubkoff L. Data-driven approach to implementation mapping for the selection of implementation strategies: a case example for risk-aligned bladder cancer surveillance. Implement Sci IS. 2022;17:58.

Frank HE, Kemp J, Benito KG, Freeman JB. Precision Implementation: An Approach to Mechanism Testing in Implementation Research. Adm Policy Ment Health. 2022;49:1084–94.

Lewis CC, Klasnja P, Lyon AR, Powell BJ, Lengnick-Hall R, Buchanan G, et al. The mechanics of implementation strategies and measures: advancing the study of implementation mechanisms. Implement Sci Commun. 2022;3:114.

Geng EH, Baumann AA, Powell BJ. Mechanism mapping to advance research on implementation strategies. PLoS Med. 2022;19:e1003918.

Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, et al. Standards for Reporting Implementation Studies (StaRI) Statement. BMJ. 2017;356:i6795.

Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for Implementation Research: Conceptual Distinctions, Measurement Challenges, and Research Agenda. Adm Policy Ment Health Ment Health Serv Res. 2011;38:65–76.

Hooley C, Amano T, Markovitz L, Yaeger L, Proctor E. Assessing implementation strategy reporting in the mental health literature: a narrative review. Adm Policy Ment Health Ment Health Serv Res. 2020;47:19–35.

Proctor E, Ramsey AT, Saldana L, Maddox TM, Chambers DA, Brownson RC. FAST: a framework to assess speed of translation of health innovations to practice and policy. Glob Implement Res Appl. 2022;2:107–19.

Cullen L, Hanrahan K, Edmonds SW, Reisinger HS, Wagner M. Iowa Implementation for Sustainability Framework. Implement Sci IS. 2022;17:1.

Saldana L, Ritzwoller DP, Campbell M, Block EP. Using economic evaluations in implementation science to increase transparency in costs and outcomes for organizational decision-makers. Implement Sci Commun. 2022;3:40.

Eisman AB, Kilbourne AM, Dopp AR, Saldana L, Eisenberg D. Economic evaluation in implementation science: making the business case for implementation strategies. Psychiatry Res. 2020;283:112433.

Akiba CF, Powell BJ, Pence BW, Nguyen MX, Golin C, Go V. The case for prioritizing implementation strategy fidelity measurement: benefits and challenges. Transl Behav Med. 2022;12:335–42.

Akiba CF, Powell BJ, Pence BW, Muessig K, Golin CE, Go V. “We start where we are”: a qualitative study of barriers and pragmatic solutions to the assessment and reporting of implementation strategy fidelity. Implement Sci Commun. 2022;3:117.

Rudd BN, Davis M, Doupnik S, Ordorica C, Marcus SC, Beidas RS. Implementation strategies used and reported in brief suicide prevention intervention studies. JAMA Psychiatry. 2022;79:829–31.

Painter JT, Raciborski RA, Matthieu MM, Oliver CM, Adkins DA, Garner KK. Engaging stakeholders to retrospectively discern implementation strategies to support program evaluation: Proposed method and case study. Eval Program Plann. 2024;103:102398.

Bunger AC, Powell BJ, Robertson HA, MacDowell H, Birken SA, Shea C. Tracking implementation strategies: a description of a practical approach and early findings. Health Res Policy Syst. 2017;15:1–12.

Mustanski B, Smith JD, Keiser B, Li DH, Benbow N. Supporting the growth of domestic HIV implementation research in the united states through coordination, consultation, and collaboration: how we got here and where we are headed. JAIDS J Acquir Immune Defic Syndr. 2022;90:S1-8.

Marques MM, Wright AJ, Corker E, Johnston M, West R, Hastings J, et al. The Behaviour Change Technique Ontology: Transforming the Behaviour Change Technique Taxonomy v1. Wellcome Open Res. 2023;8:308.

Merle JL, Li D, Keiser B, Zamantakis A, Queiroz A, Gallo CG, et al. Categorising implementation determinants and strategies within the US HIV implementation literature: a systematic review protocol. BMJ Open. 2023;13:e070216.

Glenshaw MT, Gaist P, Wilson A, Cregg RC, Holtz TH, Goodenow MM. Role of NIH in the Ending the HIV Epidemic in the US Initiative: Research Improving Practice. J Acquir Immune Defic Syndr. 1999;2022(90):S9-16.

Purcell DW, Namkung Lee A, Dempsey A, Gordon C. Enhanced Federal Collaborations in Implementation Science and Research of HIV Prevention and Treatment. J Acquir Immune Defic Syndr. 1999;2022(90):S17-22.

Queiroz A, Mongrella M, Keiser B, Li DH, Benbow N, Mustanski B. Profile of the Portfolio of NIH-Funded HIV Implementation Research Projects to Inform Ending the HIV Epidemic Strategies. J Acquir Immune Defic Syndr. 1999;2022(90):S23-31.

Zamantakis A, Li DH, Benbow N, Smith JD, Mustanski B. Determinants of Pre-exposure Prophylaxis (PrEP) Implementation in Transgender Populations: A Qualitative Scoping Review. AIDS Behav. 2023;27:1600–18.

Li DH, Benbow N, Keiser B, Mongrella M, Ortiz K, Villamar J, et al. Determinants of Implementation for HIV Pre-exposure Prophylaxis Based on an Updated Consolidated Framework for Implementation Research: A Systematic Review. J Acquir Immune Defic Syndr. 1999;2022(90):S235-46.

Chambers DA, Emmons KM. Navigating the field of implementation science towards maturity: challenges and opportunities. Implement Sci. 2024;19:26, s13012-024-01352–0.

Chinman M, Acosta J, Ebener P, Shearer A. “What we have here, is a failure to [replicate]”: Ways to solve a replication crisis in implementation science. Prev Sci. 2022;23:739–50.

Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8:117.

Lengnick-Hall R, Gerke DR, Proctor EK, Bunger AC, Phillips RJ, Martin JK, et al. Six practical recommendations for improved implementation outcomes reporting. Implement Sci. 2022;17:16.

Miller CJ, Barnett ML, Baumann AA, Gutner CA, Wiltsey-Stirman S. The FRAME-IS: a framework for documenting modifications to implementation strategies in healthcare. Implement Sci IS. 2021;16:36.

Xu X, Lazar CM, Ruger JP. Micro-costing in health and medicine: a critical appraisal. Health Econ Rev. 2021;11:1.

Barnett ML, Dopp AR, Klein C, Ettner SL, Powell BJ, Saldana L. Collaborating with health economists to advance implementation science: a qualitative study. Implement Sci Commun. 2020;1:82.

Lengnick-Hall R, Williams NJ, Ehrhart MG, Willging CE, Bunger AC, Beidas RS, et al. Eight characteristics of rigorous multilevel implementation research: a step-by-step guide. Implement Sci. 2023;18:52.

Riley-Gibson E, Hall A, Shoesmith A, Wolfenden L, Shelton RC, Doherty E, et al. A systematic review to determine the effect of strategies to sustain chronic disease prevention interventions in clinical and community settings: study protocol. Res Sq [Internet]. 2023 [cited 2024 Apr 19]; Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10312971/

Ingvarsson S, Hasson H, von Thiele Schwarz U, Nilsen P, Powell BJ, Lindberg C, et al. Strategies for de-implementation of low-value care—a scoping review. Implement Sci IS. 2022;17:73.

Lewis CC, Powell BJ, Brewer SK, Nguyen AM, Schriger SH, Vejnoska SF, et al. Advancing mechanisms of implementation to accelerate sustainable evidence-based practice integration: protocol for generating a research agenda. BMJ Open. 2021;11:e053474.

Hailemariam M, Bustos T, Montgomery B, Barajas R, Evans LB, Drahota A. Evidence-based intervention sustainability strategies: a systematic review. Implement Sci. 2019;14:N.PAG-N.PAG.

Michie S, Atkins L, West R. The behaviour change wheel. Guide Des Interv 1st Ed G B Silverback Publ. 2014;1003:1010.

Birken SA, Haines ER, Hwang S, Chambers DA, Bunger AC, Nilsen P. Advancing understanding and identifying strategies for sustaining evidence-based practices: a review of reviews. Implement Sci IS. 2020;15:88.

Metz A, Jensen T, Farley A, Boaz A, Bartley L, Villodas M. Building trusting relationships to support implementation: A proposed theoretical model. Front Health Serv. 2022;2:894599.

Rabin BA, Cain KL, Watson P, Oswald W, Laurent LC, Meadows AR, et al. Scaling and sustaining COVID-19 vaccination through meaningful community engagement and care coordination for underserved communities: hybrid type 3 effectiveness-implementation sequential multiple assignment randomized trial. Implement Sci IS. 2023;18:28.

Gyamfi J, Iwelunmor J, Patel S, Irazola V, Aifah A, Rakhra A, et al. Implementation outcomes and strategies for delivering evidence-based hypertension interventions in lower-middle-income countries: Evidence from a multi-country consortium for hypertension control. PLOS ONE. 2023;18:e0286204.

Woodward EN, Ball IA, Willging C, Singh RS, Scanlon C, Cluck D, et al. Increasing consumer engagement: tools to engage service users in quality improvement or implementation efforts. Front Health Serv. 2023;3:1124290.

Norton WE, Chambers DA. Unpacking the complexities of de-implementing inappropriate health interventions. Implement Sci IS. 2020;15:2.

Norton WE, McCaskill-Stevens W, Chambers DA, Stella PJ, Brawley OW, Kramer BS. DeImplementing Ineffective and Low-Value Clinical Practices: Research and Practice Opportunities in Community Oncology Settings. JNCI Cancer Spectr. 2021;5:pkab020.

McKay VR, Proctor EK, Morshed AB, Brownson RC, Prusaczyk B. Letting Go: Conceptualizing Intervention De-implementation in Public Health and Social Service Settings. Am J Community Psychol. 2018;62:189–202.

Patey AM, Grimshaw JM, Francis JJ. Changing behaviour, ‘more or less’: do implementation and de-implementation interventions include different behaviour change techniques? Implement Sci IS. 2021;16:20.

Rodriguez Weno E, Allen P, Mazzucca S, Farah Saliba L, Padek M, Moreland-Russell S, et al. Approaches for Ending Ineffective Programs: Strategies From State Public Health Practitioners. Front Public Health. 2021;9:727005.

Gnjidic D, Elshaug AG. De-adoption and its 43 related terms: harmonizing low-value care terminology. BMC Med. 2015;13:273.




Facility for Rare Isotope Beams at Michigan State University

Investigating the conditions for a new stellar process

A scientific research team studied how the barium-139 nucleus captures neutrons in the stellar environment in an experiment at Argonne National Laboratory’s (ANL) CARIBU facility using FRIB’s Summing NaI (SuN) detector. The team’s goal was to lessen uncertainties related to lanthanum production. Lanthanum is a rare earth element sensitive to intermediate neutron capture process (i process) conditions. Uncovering the conditions of the i process allows scientists to determine its required neutron density and reveal potential sites where it might occur. The team recently published its findings in Physical Review Letters (“First Study of the 139Ba(n,γ)140Ba Reaction to Constrain the Conditions for the Astrophysical i Process”).

Artemis Spyrou, professor of physics at FRIB and in the Department of Physics and Astronomy at Michigan State University (MSU), and Dennis Mücher, professor of physics at the University of Cologne in Germany, led the experiment. MSU is home to FRIB, the only accelerator-based U.S. Department of Energy Office of Science (DOE-SC) user facility on a university campus. FRIB is operated by MSU to support the mission of the DOE-SC Office of Nuclear Physics as one of 28 DOE-SC user facilities.

Combining global collaboration and world-class educational experiences

The experiment was a collaborative effort involving more than 30 scientists and students from around the world. Participating institutions included the University of Victoria in Canada, the University of Oslo in Norway, and the University of Jyväskylä in Finland.

“The collaboration is essential because everyone comes from different backgrounds with different areas of expertise,” Spyrou said. “Together, we’re much stronger. It’s really an intellectual sharing of that knowledge and bringing new ideas to the experiment.”

The international collaboration also included five FRIB graduate and two FRIB undergraduate students. FRIB is an educational resource for the next generation of science and technical talent. Students enrolled in nuclear physics at MSU can work with scientific researchers from around the world to conduct groundbreaking research in accelerator science, cryogenic engineering, and astrophysics. 

“Our students contribute to every aspect of the experiment, from transporting the instrumentation to unpacking and setting it up, then testing and calibrating it to make sure everything works,” Spyrou said. “Then, we all work together to identify what’s in the beam. Is it reasonable? Do we accept it? Once everything is set up and ready, we all take shifts.”

Measuring the i process 

Producing some of the heaviest elements found on Earth, like platinum and gold, requires stellar environments rich in neutrons. Inside stars, neutrons combine with an atomic nucleus to create a heavier nucleus. These nuclear reactions, called neutron capture processes, are what create these heavy elements. Two neutron capture processes are known to occur in stars: the rapid neutron capture process (r process) and the slow neutron capture process (s process). Yet neither process can explain some astronomical observations, such as the unusual abundance patterns found in very old stars. A new stellar process, the i process, may help: it represents neutron densities that fall between those of the r and s processes.
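Written out in full, the shorthand 139Ba(n,γ)140Ba in the paper's title denotes exactly this kind of reaction, a radiative neutron capture on barium-139 in standard nuclear notation:

$$ {}^{139}\mathrm{Ba} + n \;\longrightarrow\; {}^{140}\mathrm{Ba} + \gamma $$

A barium-139 nucleus absorbs a neutron and releases the excess energy as gamma rays, leaving the heavier nucleus barium-140; repeated captures of this kind are how neutron capture processes build up heavy elements such as lanthanum.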

“Through this reaction we are constraining, we discovered that compared to what theory predicted, the amount of lanthanum is actually less,” said Spyrou. 

Spyrou said that combining lanthanum with other elements, like barium and europium, helps provide a signature of the i process. 

“It’s a new process, and we don’t know the conditions where the i process is happening. It’s all theoretical, so unless we constrain the nuclear physics, we will never find out,” Spyrou said. “This was the first strong constraint from the nuclear physics point of view that validates that yes, the i process should be making these elements under these conditions.”

Neutron capture processes are difficult to measure directly, Spyrou said. Indirect techniques, like the beta-Oslo and shape methods, help constrain neutron capture reaction rates in exotic nuclei. These two methods formed the basis of the barium-139 nucleus experiment.

To collect the data, ANL’s CARIBU facility produced a high-intensity beam and delivered it to the center of the SuN detector, a device that measures gamma rays emitted from decaying isotope beams. This tool was pivotal in producing strong data constraints during the study.

“I developed SuN with my group at the National Superconducting Cyclotron Laboratory, the predecessor to FRIB,” Spyrou said. “It’s a very efficient and large detector. Basically, every gamma ray that comes out, we can detect. This is an advantage compared to other detectors, which are smaller.”

The first i process constraint paves the way for more research

Studying the barium-139 neutron capture was only the first step in discovering the conditions of the i process. Mücher is starting a new program at the University of Cologne that aims to measure some significant i process reactions directly. Spyrou said that she and her FRIB team plan to continue studying the i process through different reactions that can help constrain the production of different elements or neutron densities. They recently conducted an experiment at ANL to study the neodymium-151 neutron capture. This neutron capture is the dominant reaction for europium production.

This material is based upon work supported by the National Science Foundation.

Michigan State University operates the Facility for Rare Isotope Beams (FRIB) as a user facility for the U.S. Department of Energy Office of Science (DOE-SC), supporting the mission of the DOE-SC Office of Nuclear Physics. Hosting what is designed to be the most powerful heavy-ion accelerator, FRIB enables scientists to make discoveries about the properties of rare isotopes in order to better understand the physics of nuclei, nuclear astrophysics, fundamental interactions, and applications for society, including in medicine, homeland security, and industry.

The U.S. Department of Energy Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of today’s most pressing challenges. For more information, visit energy.gov/science.





  1. PDF A Six Step Process to Developing an Educational Research Plan

    Education research leads to new knowledge about teaching, learning, and educational administration. The goal of educational research is to generate knowledge that describes, predicts, improves, and explains processes and practices related to education (Gall, Gall, and Borg, 2007). Developing and implementing an educational study plan can lead ...

  2. Educational Research: What It Is + How to Do It

    Education is a pillar in modern society, it provides the tools to develop critical thinking, decision making, and social abilities. Education helps individuals to secure the necessary research skills to secure jobs or to be entrepreneurs in new technologies. This is where educational research takes an important place in the overall improvement of the education system (pedagogy, learning ...

  3. PDF A Beginner's Guide to Applied Educational Research using ...

    supports the thoughtful and ethical conduct of research and talks about reporting research that is rigorous and trustworthy. In 2006, the American Educational Research Association (AERA) developed standards for reporting on empirical research (Duran et al., 2006). Two general principles have been recommended to reflect on empirical research ...

  4. Introduction to Education Research

    Abstract. Educators rely on the discovery of new knowledge of teaching practices and frameworks to improve and evolve education for trainees. An important consideration that should be made when embarking on a career conducting education research is finding a scholarship niche. An education researcher can then develop the conceptual framework ...

  5. Methodologies for Conducting Education Research

    A comprehensive overview of various methodologies and debates in education research, with citations of influential books and articles. Learn how to choose the best research design for different types of questions and standards of quality research.

  6. PDF ETHICAL GUIDELINES FOR EDUCATIONAL RESEARCH

    To this end, these guidelines are designed to support educational researchers in conducting research to the highest ethical standards in any and all contexts. BERA's guidelines unequivocally recognise and celebrate the diversity of approaches in educational research, and promote respect for all those who engage with it: researchers

  7. PDF Overview of the Educational Research Process or Student Learning

    Overview of the Educational Research Process. 25. Developing a Research Plan. Specification of the research problem, development of research questions, and a . thorough review of the existing body of literature provide the necessary groundwork to begin developing a plan to conduct an educational research study. The next step in

  8. Sage Academic Books

    A step-by-step guide to conducting a research project or thesis in Education. Designed to be used during the research process, Conducting Educational Research walks readers through each step of a research project or thesis, including developing a research question, performing a literature search, developing a research plan, collecting and analyzing data, drawing conclusions, and sharing the ...

  9. PDF What Is Educational Research?

    2. Describe the scientific method and how it can be applied to educational research topics. 3. Summarize characteristics that define what educational research is and is not. 4. Identify and define key terms associated with educational research. 5. Identify various methods for conducting educational research. 6.

  10. PDF An Introduction to Educational Research

    Discuss important ethical issues in conducting research. Recognize skills needed to design and conduct research. To begin, consider Maria, a teacher with 10 years of experience, who teaches English at a midsized metropolitan high school. Lately, a number of incidents in the school district have involved students possessing weapons:

  11. PDF Common Guidelines for Education Research and Development

    Education research and development programs at NSF are distributed throughout its science and engineering directorates but are located primarily in its Directorate for Education and Human Resources (EHR). EHR's purview includes K-12 education, postsecondary education, and after - ... short of conducting an efficacy study.

  12. Harvard EdCast: Applying Education Research to Practice

    This is the Harvard EdCast. There's a lot of education data out there, but it's not always easy for school leaders to use it. Harvard's Carrie Conaway has spent her career figuring out how to take research and apply it to education in ways that improve outcomes and make a difference.

  13. Education Research and Methods

    Education Research and Methods. IES seeks to improve the quality of education for all students—prekindergarten through postsecondary and adult education—by supporting education research and the development of tools that education scientists need to conduct rigorous, applied research. Such research aims to advance our understanding of and ...

  14. What is Educational Research? + [Types, Scope & Importance]

    Education is an integral aspect of every society and in a bid to expand the frontiers of knowledge, educational research must become a priority. Educational research plays a vital role in the overall development of pedagogy, learning programs, and policy formulation.

  15. 1 What is Action Research for Classroom Teachers?

    Accordingly, the purpose of educational research is to engage in disciplined inquiry to generate knowledge on topics significant to the students, teachers, administrators, schools, and other educational stakeholders. Just as the topics of educational research vary, so do the approaches to conducting educational research in the classroom.

  16. Educational Research Steps

    Conducting an educational research study is an intensive but intensely rewarding process. The following tutorial provides step-by-step guidance for conducting an educational research study according to the University of Jos guidelines. These guidelines can be slightly modified for other educational research studies.

  17. Descriptive analysis in education: A guide for researchers

    Descriptive analysis identifies patterns in data to answer questions about who, what, where, when, and to what extent. This guide describes how to more effectively approach, conduct, and communicate quantitative descriptive analysis. The primary audience for this guide includes members of the research community who conduct and publish both ...

  18. Using Research and Reason in Education: How Teachers Can Use ...

    Teachers as independent evaluators of research evidence. One factor that has impeded teachers from being active and effective consumers of educational science has been a lack of orientation and training in how to understand the scientific process and how that process results in the cumulative growth of knowledge that leads to validated educational practice.

  19. Ethics in educational research: Review boards, ethical issues and

    The paper concludes that the ethical conduct of educational research is more complex than adhering to a set of strict 'rules' but is an issue of resolving ethical dilemmas, which is beyond the scope of a single event review process (see, for example, the Economic and Social Research Council's Research Ethics Framework ). Ethics in ...

  20. Conducting Research in K-12 Education Settings

    Research with human subjects or their data is regulated by the federal government and reviewed by Teachers College (TC) Institutional Review Board (IRB). Educational research that involves students, teachers, administrative staff, student-level (e.g., test scores) administrative data, or classroom curriculum, activities, or assignments, may be subject to federal regulations and IRB review.

  21. PDF Learning How to Conduct Educational Research in Teacher Education : a

    Learning How To Conduct Educational Research In Teacher Education: A Turkish Perspective. Abstract: This paper examines the attitudes of student teachers in social studies towards an educational research assignment, undertaken in an educational research methods course given at the Fatih Faculty of Education at Karadeniz Technical University ...

  22. PDF or post, copy,

    When conducting educational research studies, it is important to keep the ultimate goal in mind. Remember, the basic goal of nearly all research studies is to find answers to questions, or to help explain and understand some educational phenomenon. For example, if you are planning to conduct a research study

  23. How to Write an Educational Research: Preparing for A Publishable

    Abstract. When designing an educational research, researchers should carefully refine the issue to be investigated, plan systematic processes of inquiry, and check the ethics and validity of the ...

  24. Letter of Non-Objection to Conduct Research and Educational ...

    Request to conduct research and educational activities to generate data on population and community dynamics, changing ocean conditions and effects on marine organisms, ecosystem functioning, ecology, physiology, acidification, larval dispersal and settlement, phytoplankton blooms, upwelling, anomalies / rare events, and populations expansions.

  25. A practical guide for conducting qualitative research in medical education

    A practical guide for conducting qualitative research in medical education: Part 1 - How to interview. AEM Educ Train. 2021 Jul 1;5(3):e10646. doi: 10.1002/aet2.10646.

  26. Guidance for Responsible Conduct of Research (RCR) Training ...

    Purpose. NIH requires that all trainees, fellows, participants, and scholars receiving support through any NIH training, career development award (individual or institutional), research education grant, and dissertation research grant must receive instruction in responsible conduct of research.

  27. A systematic review of experimentally tested implementation strategies

    Background Studies of implementation strategies range in rigor, design, and evaluated outcomes, presenting interpretation challenges for practitioners and researchers. This systematic review aimed to describe the body of research evidence testing implementation strategies across diverse settings and domains, using the Expert Recommendations for Implementing Change (ERIC) taxonomy to classify ...

  28. Investigating the conditions for a new stellar process

    A scientific research team studied how the barium-139 nucleus captures neutrons in the stellar environment in an experiment at Argonne National Laboratory's (ANL) CARIBU facility using FRIB's Summing NaI (SuN) detector. The team's goal was to lessen uncertainties related to lanthanum production. Lanthanum is a rare earth element sensitive to intermediate neutron capture process (i ...

  29. Brookings

    The Brookings Institution is a nonprofit public policy organization based in Washington, DC. Our mission is to conduct in-depth research that leads to new ideas for solving problems facing society ...

  30. Careers

    Assist with the development and implementation of research proposals and special projects. Knowledge, education, and experience • B.A. in relevant field required. 1-2 year of professional experience is an asset. ... The Center's 220 full-time staff and large network of affiliated scholars conduct research and analysis and develop policy ...