
Organizing Your Social Sciences Research Paper

6. The Methodology

The methods section describes the actions taken to investigate a research problem and the rationale for applying the specific procedures or techniques used to identify, select, process, and analyze information relevant to understanding the problem, thereby allowing the reader to critically evaluate a study’s overall validity and reliability. The methodology section of a research paper answers two main questions: How was the data collected or generated? And how was it analyzed? The writing should be direct and precise and always written in the past tense.

Kallet, Richard H. "How to Write the Methods Section of a Research Paper." Respiratory Care 49 (October 2004): 1229-1232.

Importance of a Good Methodology Section

You must explain how you obtained and analyzed your results for the following reasons:

  • Readers need to know how the data was obtained because the method you chose affects the results and, by extension, how you interpreted their significance in the discussion section of your paper.
  • Methodology is crucial for any branch of scholarship because an unreliable method produces unreliable results and, as a consequence, undermines the value of your analysis of the findings.
  • In most cases, there are a variety of different methods you can choose to investigate a research problem. The methodology section of your paper should clearly articulate the reasons why you have chosen a particular procedure or technique.
  • The reader wants to know that the data was collected or generated in a way that is consistent with accepted practice in the field of study. For example, if you are using a multiple choice questionnaire, readers need to know that it offered your respondents a reasonable range of answers to choose from.
  • The method must be appropriate to fulfilling the overall aims of the study. For example, you need to ensure that you have a large enough sample size to be able to generalize and make recommendations based upon the findings.
  • The methodology should discuss the problems that were anticipated and the steps you took to prevent them from occurring. For any problems that do arise, you must describe the ways in which they were minimized or why these problems do not meaningfully affect your interpretation of the findings.
  • In the social and behavioral sciences, it is important to always provide sufficient information to allow other researchers to adopt or replicate your methodology. This information is particularly important when a new method has been developed or an innovative use of an existing method is utilized.

Bem, Daryl J. Writing the Empirical Journal Article. Psychology Writing Center. University of Washington; Denscombe, Martyn. The Good Research Guide: For Small-Scale Social Research Projects. 5th edition. Buckingham, UK: Open University Press, 2014; Lunenburg, Frederick C. Writing a Successful Thesis or Dissertation: Tips and Strategies for Students in the Social and Behavioral Sciences. Thousand Oaks, CA: Corwin Press, 2008.

Structure and Writing Style

I.  Groups of Research Methods

There are two main groups of research methods in the social sciences:

  • The empirical-analytical group approaches the study of the social sciences in a manner similar to how researchers study the natural sciences. This type of research focuses on objective knowledge, research questions that can be answered yes or no, and operational definitions of the variables to be measured. The empirical-analytical group employs deductive reasoning, using existing theory as a foundation for formulating hypotheses that need to be tested. This approach is focused on explanation.
  • The interpretative group of methods is focused on understanding phenomena in a comprehensive, holistic way. Interpretive methods focus on analytically disclosing the meaning-making practices of human subjects [the why, how, or by what means people do what they do], while showing how those practices arrange themselves so that they can be used to generate observable outcomes. Interpretive methods allow you to recognize your connection to the phenomena under investigation. However, the interpretative group requires careful examination of variables because it focuses more on subjective knowledge.

II.  Content

The introduction to your methodology section should begin by restating the research problem and the assumptions underpinning your study. This is followed by situating the methods you used to gather, analyze, and process information within the overall “tradition” of your field of study and within the particular research design you have chosen to study the problem. If the method you choose lies outside of the tradition of your field [i.e., your review of the literature demonstrates that the method is not commonly used], provide a justification for how your choice of methods specifically addresses the research problem in ways that have not been utilized in prior studies.

The remainder of your methodology section should describe the following:

  • Decisions made in selecting the data you have analyzed or, in the case of qualitative research, the subjects and research setting you have examined,
  • Tools and methods used to identify and collect information, and how you identified relevant variables,
  • The ways in which you processed the data and the procedures you used to analyze that data, and
  • The specific research tools or strategies that you utilized to study the underlying hypothesis and research questions.

In addition, an effectively written methodology section should:

  • Introduce the overall methodological approach for investigating your research problem. Is your study qualitative or quantitative, or a combination of both (mixed method)? Are you going to take a special approach, such as action research, or a more neutral stance?
  • Indicate how the approach fits the overall research design. Your methods for gathering data should have a clear connection to your research problem. In other words, make sure that your methods will actually address the problem. One of the most common deficiencies found in research papers is that the proposed methodology is not suitable to achieving the stated objective of your paper.
  • Describe the specific methods of data collection you are going to use, such as surveys, interviews, questionnaires, observation, or archival research. If you are analyzing existing data, such as a data set or archival documents, describe how it was originally created or gathered and by whom. Also be sure to explain how older data is still relevant to investigating the current research problem.
  • Explain how you intend to analyze your results. Will you use statistical analysis? Will you use specific theoretical perspectives to help you analyze a text or explain observed behaviors? Describe how you plan to obtain an accurate assessment of relationships, patterns, trends, distributions, and possible contradictions found in the data [a brief illustrative sketch follows this list].
  • Provide background and a rationale for methodologies that are unfamiliar to your readers. Very often in the social sciences, research problems and the methods for investigating them require more explanation and rationale than the widely accepted rules governing the natural and physical sciences. Be clear and concise in your explanation.
  • Provide a justification for subject selection and sampling procedure. For instance, if you propose to conduct interviews, how do you intend to select the sample population? If you are analyzing texts, which texts have you chosen, and why? If you are using statistics, why is this set of data being used? If other data sources exist, explain why the data you chose is most appropriate to addressing the research problem.
  • Provide a justification for case study selection. A common method of analyzing research problems in the social sciences is to analyze specific cases. These can be a person, place, event, phenomenon, or other type of subject of analysis that is either examined as a singular topic of in-depth investigation or as one of multiple topics of investigation studied for the purpose of comparing or contrasting findings. In either method, you should explain why a case or cases were chosen and how they specifically relate to the research problem.
  • Describe potential limitations. Are there any practical limitations that could affect your data collection? How will you attempt to control for potential confounding variables and errors? If your methodology may lead to problems you can anticipate, state this openly and show why pursuing this methodology outweighs the risk of these problems cropping up.
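
For example, a first pass at assessing relationships in quantitative data [see the bullet on analyzing your results above] might look like the following minimal sketch in Python with the pandas library; the data file and variable names are hypothetical placeholders, not a prescribed procedure.

    import pandas as pd

    # Load hypothetical survey data; the file name and column names below are
    # placeholders for whatever data your own study produces.
    df = pd.read_csv("survey_responses.csv")

    # Describe the distributions of the variables of interest.
    print(df[["study_hours", "exam_score"]].describe())

    # A simple assessment of the relationship between two variables
    # (Pearson correlation coefficient).
    print(df["study_hours"].corr(df["exam_score"]))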

NOTE: Once you have written all of the elements of the methods section, subsequent revisions should focus on how to present those elements as clearly and as logically as possible. The description of how you prepared to study the research problem, how you gathered the data, and the protocol for analyzing the data should be organized chronologically. For clarity, when a large amount of detail must be presented, information should be presented in sub-sections according to topic. If necessary, consider using appendices for raw data.

ANOTHER NOTE: If you are conducting a qualitative analysis of a research problem, the methodology section generally requires a more elaborate description of the methods used, as well as an explanation of the processes applied to gathering and analyzing the data, than is generally required for studies using quantitative methods. Because you are the primary instrument for generating the data [e.g., through interviews or observations], the process for collecting that data has a significantly greater impact on producing the findings. Therefore, qualitative research requires a more detailed description of the methods used.

YET ANOTHER NOTE: If your study involves interviews, observations, or other qualitative techniques involving human subjects, you may be required to obtain approval from the university's Office for the Protection of Research Subjects before beginning your research. This is not a common procedure for most undergraduate level student research assignments. However, if your professor states you need approval, you must include a statement in your methods section that you received official endorsement from the office, that adequate informed consent was obtained, and that there was a clear assessment and minimization of risks to participants and to the university. This statement informs the reader that your study was conducted in an ethical and responsible manner. In some cases, the approval notice is included as an appendix to your paper.

III.  Problems to Avoid

Irrelevant Detail

The methodology section of your paper should be thorough but concise. Do not provide any background information that does not directly help the reader understand why a particular method was chosen, how the data was gathered or obtained, and how the data was analyzed in relation to the research problem [note: analyzed, not interpreted! Save how you interpreted the findings for the discussion section]. With this in mind, the page length of your methods section will generally be less than that of any other section of your paper except the conclusion.

Unnecessary Explanation of Basic Procedures

Remember that you are not writing a how-to guide about a particular method. You should make the assumption that readers possess a basic understanding of how to investigate the research problem on their own and, therefore, you do not have to go into great detail about specific methodological procedures. The focus should be on how you applied a method, not on the mechanics of doing a method. An exception to this rule is if you select an unconventional methodological approach; if this is the case, be sure to explain why this approach was chosen and how it enhances the overall process of discovery.

Problem Blindness

It is almost a given that you will encounter problems when collecting or generating your data, or that gaps will exist in existing data or archival materials. Do not ignore these problems or pretend they did not occur. Often, documenting how you overcame obstacles can form an interesting part of the methodology. It demonstrates to the reader that you can provide a cogent rationale for the decisions you made to minimize the impact of any problems that arose.

Literature Review

Just as the literature review section of your paper provides an overview of sources you have examined while researching a particular topic, the methodology section should cite any sources that informed your choice and application of a particular method [i.e., the choice of a survey should include any citations to the works you used to help construct the survey].

It’s More than Sources of Information!

A description of a research study's method should not be confused with a description of the sources of information. Such a list of sources is useful in and of itself, especially if it is accompanied by an explanation about the selection and use of the sources. The description of the project's methodology complements a list of sources in that it sets forth the organization and interpretation of information emanating from those sources.

Azevedo, L.F. et al. "How to Write a Scientific Paper: Writing the Methods Section." Revista Portuguesa de Pneumologia 17 (2011): 232-238; Blair, Lorrie. “Choosing a Methodology.” In Writing a Graduate Thesis or Dissertation, Teaching Writing Series. (Rotterdam: Sense Publishers, 2016), pp. 49-72; Butin, Dan W. The Education Dissertation: A Guide for Practitioner Scholars. Thousand Oaks, CA: Corwin, 2010; Carter, Susan. Structuring Your Research Thesis. New York: Palgrave Macmillan, 2012; Kallet, Richard H. “How to Write the Methods Section of a Research Paper.” Respiratory Care 49 (October 2004): 1229-1232; Lunenburg, Frederick C. Writing a Successful Thesis or Dissertation: Tips and Strategies for Students in the Social and Behavioral Sciences. Thousand Oaks, CA: Corwin Press, 2008; Methods Section. The Writer’s Handbook. Writing Center. University of Wisconsin, Madison; Rudestam, Kjell Erik and Rae R. Newton. “The Method Chapter: Describing Your Research Plan.” In Surviving Your Dissertation: A Comprehensive Guide to Content and Process. (Thousand Oaks, CA: Sage Publications, 2015), pp. 87-115; What is Interpretive Research. Institute of Public and International Affairs, University of Utah; Writing the Experimental Report: Methods, Results, and Discussion. The Writing Lab and The OWL. Purdue University; Methods and Materials. The Structure, Format, Content, and Style of a Journal-Style Scientific Paper. Department of Biology. Bates College.

Writing Tip

Statistical Designs and Tests? Do Not Fear Them!

Don't avoid using a quantitative approach to analyzing your research problem just because you fear the idea of applying statistical designs and tests. A qualitative approach, such as conducting interviews or content analysis of archival texts, can yield exciting new insights about a research problem, but it should not be undertaken simply because you have a disdain for running a simple regression. A well-designed quantitative research study can often be accomplished in very clear and direct ways, whereas a similar study of a qualitative nature usually requires considerable time to analyze large volumes of data and carries a tremendous burden of creating new paths for analysis where previously no path associated with your research problem had existed.
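
To illustrate how approachable this can be, the kind of simple regression mentioned above can often be run in a few lines. The following minimal sketch assumes Python with the pandas and statsmodels packages and a hypothetical data file; it shows the general idea rather than a required procedure.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data set; replace the file and variable names with your own.
    df = pd.read_csv("survey_responses.csv")

    # Ordinary least squares regression of an outcome on a single predictor.
    model = smf.ols("exam_score ~ study_hours", data=df).fit()
    print(model.summary())   # coefficients, p-values, and goodness-of-fit statistics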


Another Writing Tip

Knowing the Relationship Between Theories and Methods

There can be multiple meanings associated with the term "theories" and the term "methods" in social sciences research. A helpful way to delineate between them is to understand "theories" as representing different ways of characterizing the social world when you research it and "methods" as representing different ways of generating and analyzing data about that social world. Framed in this way, all empirical social sciences research involves theories and methods, whether they are stated explicitly or not. However, while theories and methods are often related, it is important that, as a researcher, you deliberately separate them in order to avoid your theories playing a disproportionate role in shaping what outcomes your chosen methods produce.

Introspectively engage in an ongoing dialectic between the application of theories and methods so that you can use the outcomes from your methods to interrogate and develop new theories, or new ways of framing the research problem conceptually. This is how scholarship grows and branches out into new intellectual territory.

Reynolds, R. Larry. Ways of Knowing. Alternative Microeconomics . Part 1, Chapter 3. Boise State University; The Theory-Method Relationship. S-Cool Revision. United Kingdom.

Yet Another Writing Tip

Methods and the Methodology

Do not confuse the terms "methods" and "methodology." As Schneider notes, a method refers to the technical steps taken to do research . Descriptions of methods usually include defining and stating why you have chosen specific techniques to investigate a research problem, followed by an outline of the procedures you used to systematically select, gather, and process the data [remember to always save the interpretation of data for the discussion section of your paper].

The methodology refers to a discussion of the underlying reasoning why particular methods were used . This discussion includes describing the theoretical concepts that inform the choice of methods to be applied, placing the choice of methods within the more general nature of academic work, and reviewing its relevance to examining the research problem. The methodology section also includes a thorough review of the methods other scholars have used to study the topic.

Bryman, Alan. "Of Methods and Methodology." Qualitative Research in Organizations and Management: An International Journal 3 (2008): 159-168; Schneider, Florian. “What's in a Methodology: The Difference between Method, Methodology, and Theory…and How to Get the Balance Right?” PoliticsEastAsia.com. Chinese Department, University of Leiden, Netherlands.

Scholarly Articles: How can I tell?

Methodology

The methodology section or methods section tells you how the author(s) went about doing their research. It should let you know a) what method they used to gather data (surveys, interviews, experiments, etc.), b) why they chose this method, and c) what the limitations of this method are.

The methodology section should be detailed enough that another researcher could replicate the study described. When you read the methodology or methods section:

  • What kind of research method did the authors use? Is it an appropriate method for the type of study they are conducting?
  • How did the authors get their test subjects? What criteria did they use?
  • What are the contexts of the study that may have affected the results (e.g., environmental conditions, lab conditions, timing of questions, etc.)?
  • Is the sample size representative of the larger population (i.e., was it big enough?)? [One common way to check adequacy is a power analysis; see the sketch after this list.]
  • Are the data collection instruments and procedures likely to have measured all the important characteristics with reasonable accuracy?
  • Does the data analysis appear to have been done with care, and were appropriate analytical techniques used? 
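
On the question of whether a sample was big enough, one common reference point is an a priori power analysis. The following minimal sketch, assuming Python with the statsmodels package, estimates the per-group sample size for a two-group comparison; the effect size, power, and significance level shown are conventional illustrative choices, not requirements.

    from statsmodels.stats.power import TTestIndPower

    # Per-group sample size needed to detect a medium effect (Cohen's d = 0.5)
    # with 80% power at a 5% significance level, for an independent-samples t-test.
    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
    print(round(n_per_group))   # roughly 64 participants per group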

A good researcher will always let you know about the limitations of his or her research.

A Comprehensive Guide to Methodology in Research

Sumalatha G


Research methodology plays a crucial role in any study or investigation. It provides the framework for collecting, analyzing, and interpreting data, ensuring that the research is reliable, valid, and credible. Understanding the importance of research methodology is essential for conducting rigorous and meaningful research.

In this article, we'll explore the various aspects of research methodology, from its types to best practices, ensuring you have the knowledge needed to conduct impactful research.

What is Research Methodology?

Research methodology refers to the system of procedures, techniques, and tools used to carry out a research study. It encompasses the overall approach, including the research design, data collection methods, data analysis techniques, and the interpretation of findings.

Research methodology plays a crucial role in the field of research, as it sets the foundation for any study. It provides researchers with a structured framework to ensure that their investigations are conducted in a systematic and organized manner. By following a well-defined methodology, researchers can ensure that their findings are reliable, valid, and meaningful.

When defining research methodology, one of the first steps is to identify the research problem. This involves clearly understanding the issue or topic that the study aims to address. By defining the research problem, researchers can narrow down their focus and determine the specific objectives they want to achieve through their study.

How to Define Research Methodology

Once the research problem is identified, researchers move on to defining the research questions. These questions serve as a guide for the study, helping researchers to gather relevant information and analyze it effectively. The research questions should be clear, concise, and aligned with the overall goals of the study.

After defining the research questions, researchers need to determine how data will be collected and analyzed. This involves selecting appropriate data collection methods, such as surveys, interviews, observations, or experiments. The choice of data collection methods depends on various factors, including the nature of the research problem, the target population, and the available resources.

Once the data is collected, researchers need to analyze it using appropriate data analysis techniques. This may involve statistical analysis, qualitative analysis, or a combination of both, depending on the nature of the data and the research questions. The analysis of data helps researchers to draw meaningful conclusions and make informed decisions based on their findings.

Role of Methodology in Research

Methodology plays a crucial role in research, as it ensures that the study is conducted in a systematic and organized manner. It provides a clear roadmap for researchers to follow, ensuring that the research objectives are met effectively. By following a well-defined methodology, researchers can minimize bias, errors, and inconsistencies in their study, thus enhancing the reliability and validity of their findings.

In addition to providing a structured approach, research methodology also helps in establishing the reliability and validity of the study. Reliability refers to the consistency and stability of the research findings, while validity refers to the accuracy and truthfulness of the findings. By using appropriate research methods and techniques, researchers can ensure that their study produces reliable and valid results, which can be used to make informed decisions and contribute to the existing body of knowledge.

Steps in Choosing the Right Research Methodology

Choosing the appropriate research methodology for your study is a critical step in ensuring the success of your research. Let's explore some steps to help you select the right research methodology:

Identifying the Research Problem

The first step in choosing the right research methodology is to clearly identify and define the research problem. Understanding the research problem will help you determine which methodology will best address your research questions and objectives.

Identifying the research problem involves a thorough examination of the existing literature in your field of study. This step allows you to gain a comprehensive understanding of the current state of knowledge and identify any gaps that your research can fill. By identifying the research problem, you can ensure that your study contributes to the existing body of knowledge and addresses a significant research gap.

Once you have identified the research problem, you need to consider the scope of your study. Are you focusing on a specific population, geographic area, or time frame? Understanding the scope of your research will help you determine the appropriate research methodology to use.

Reviewing Previous Research

Before finalizing the research methodology, it is essential to review previous research conducted in the field. This will allow you to identify gaps, determine the most effective methodologies used in similar studies, and build upon existing knowledge.

Reviewing previous research involves conducting a systematic review of relevant literature. This process includes searching for and analyzing published studies, articles, and reports that are related to your research topic. By reviewing previous research, you can gain insights into the strengths and limitations of different methodologies and make informed decisions about which approach to adopt.

During the review process, it is important to critically evaluate the quality and reliability of the existing research. Consider factors such as the sample size, research design, data collection methods, and statistical analysis techniques used in previous studies. This evaluation will help you determine the most appropriate research methodology for your own study.

Formulating Research Questions

Once the research problem is identified, formulate specific and relevant research questions. These questions will guide your methodology selection process by helping you determine what type of data you need to collect and how to analyze it.

Formulating research questions involves breaking down the research problem into smaller, more manageable components. These questions should be clear, concise, and measurable. They should also align with the objectives of your study and provide a framework for data collection and analysis.

When formulating research questions, consider the different types of data that can be collected, such as qualitative or quantitative data. Depending on the nature of your research questions, you may need to employ different data collection methods, such as interviews, surveys, observations, or experiments. By carefully formulating research questions, you can ensure that your chosen methodology will enable you to collect the necessary data to answer your research questions effectively.

Implementing the Research Methodology

After choosing the appropriate research methodology, it is time to implement it. This stage involves collecting data using various techniques and analyzing the gathered information. Let's explore two crucial aspects of implementing the research methodology:

Data Collection Techniques

Data collection techniques depend on the chosen research methodology. They can include surveys, interviews, observations, experiments, or document analysis. Selecting the most suitable data collection techniques will ensure accurate and relevant data for your study.

Data Analysis Methods

Data analysis is a critical part of the research process. It involves interpreting and making sense of the collected data to draw meaningful conclusions. Depending on the research methodology, data analysis methods can include statistical analysis, content analysis, thematic analysis, or grounded theory.
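
As a small illustration of the most basic form of content analysis, the following sketch counts term frequencies across a set of texts using Python's standard library. The transcripts and stop-word list are hypothetical placeholders, and frequency counting is only a crude first pass, not a substitute for systematic coding or the richer analysis methods named above.

    import re
    from collections import Counter

    # Hypothetical interview transcripts; in practice these would be loaded from files.
    transcripts = [
        "Participants described cost as the main barrier to access.",
        "Several participants said cost and distance were barriers.",
    ]

    # Tokenize, lowercase, and drop a few common function words.
    stop_words = {"the", "and", "a", "of", "to", "as", "were", "said"}
    words = re.findall(r"[a-z']+", " ".join(transcripts).lower())
    counts = Counter(w for w in words if w not in stop_words)

    print(counts.most_common(5))   # the most frequent substantive terms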

Ensuring the Validity and Reliability of Your Research

In order to ensure the validity and reliability of your research findings, it is important to address these two key aspects:

Understanding Validity in Research

Validity refers to the accuracy and soundness of a research study. It is crucial to ensure that the research methods used effectively measure what they intend to measure. Researchers can enhance validity by using proper sampling techniques, carefully designing research instruments, and ensuring accurate data collection.
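
One of the sampling techniques alluded to above is simple random sampling from a defined sampling frame. The following minimal sketch in Python shows the idea; the sampling frame and sample size are hypothetical.

    import random

    # Hypothetical sampling frame of 500 student IDs; draw a simple random sample of 50.
    random.seed(42)   # fixed seed so the draw can be reproduced and audited
    sampling_frame = [f"student_{i:03d}" for i in range(500)]
    sample = random.sample(sampling_frame, k=50)

    print(len(sample), sample[:5])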

Ensuring Reliability in Your Study

Reliability refers to the consistency and stability of the research results. It is important to ensure that the research methods and instruments used yield consistent and reproducible results. Researchers can enhance reliability by using standardized procedures, ensuring inter-rater reliability, and conducting pilot studies.
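
Inter-rater reliability is often quantified with an agreement statistic such as Cohen's kappa. The following sketch, assuming Python with the scikit-learn package, compares the codes two hypothetical raters assigned to the same ten excerpts; values near 1 indicate strong agreement beyond chance.

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical codes assigned independently by two raters to the same ten excerpts.
    rater_a = ["barrier", "facilitator", "barrier", "neutral", "barrier",
               "facilitator", "neutral", "barrier", "facilitator", "neutral"]
    rater_b = ["barrier", "facilitator", "neutral", "neutral", "barrier",
               "facilitator", "neutral", "barrier", "barrier", "neutral"]

    # Cohen's kappa corrects raw percent agreement for agreement expected by chance
    # (prints roughly 0.70 for these made-up codes).
    print(cohen_kappa_score(rater_a, rater_b))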

A comprehensive understanding of research methodology is essential for conducting high-quality research. By selecting the right research methodology, researchers can ensure that their studies are rigorous, reliable, and valid. It is crucial to follow the steps in choosing the appropriate methodology, implement the chosen methodology effectively, and address validity and reliability concerns throughout the research process. By doing so, researchers can contribute valuable insights and advances in their respective fields.


Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research

Stephen J. Gentles, Cathy Charles, David B. Nicholas, Jenny Ploeg & K. Ann McKibbon

Systematic Reviews, volume 5, Article number: 172 (2016)


Overviews of methods are potentially useful means to increase clarity and enhance collective understanding of specific methods topics that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness. This type of review represents a distinct literature synthesis method, although to date, its methodology remains relatively undeveloped despite several aspects that demand unique review procedures. The purpose of this paper is to initiate discussion about what a rigorous systematic approach to reviews of methods, referred to here as systematic methods overviews , might look like by providing tentative suggestions for approaching specific challenges likely to be encountered. The guidance offered here was derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research.

The guidance is organized into several principles that highlight specific objectives for this type of review given the common challenges that must be overcome to achieve them. Optional strategies for achieving each principle are also proposed, along with discussion of how they were successfully implemented in the overview on sampling. We describe seven paired principles and strategies that address the following aspects: delimiting the initial set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology used to describe specific methods topics, and generating rigorous verifiable analytic interpretations. Since a broad aim in systematic methods overviews is to describe and interpret the relevant literature in qualitative terms, we suggest that iterative decision making at various stages of the review process, and a rigorous qualitative approach to analysis are necessary features of this review type.

Conclusions

We believe that the principles and strategies provided here will be useful to anyone choosing to undertake a systematic methods overview. This paper represents an initial effort to promote high quality critical evaluations of the literature regarding problematic methods topics, which have the potential to promote clearer, shared understandings, and accelerate advances in research methods. Further work is warranted to develop more definitive guidance.


While reviews of methods are not new, they represent a distinct review type whose methodology remains relatively under-addressed in the literature despite the clear implications for unique review procedures. One of few examples to describe it is a chapter containing reflections of two contributing authors in a book of 21 reviews on methodological topics compiled for the British National Health Service, Health Technology Assessment Program [ 1 ]. Notable is their observation of how the differences between the methods reviews and conventional quantitative systematic reviews, specifically attributable to their varying content and purpose, have implications for defining what qualifies as systematic. While the authors describe general aspects of “systematicity” (including rigorous application of a methodical search, abstraction, and analysis), they also describe a high degree of variation within the category of methods reviews itself and so offer little in the way of concrete guidance. In this paper, we present tentative concrete guidance, in the form of a preliminary set of proposed principles and optional strategies, for a rigorous systematic approach to reviewing and evaluating the literature on quantitative or qualitative methods topics. For purposes of this article, we have used the term systematic methods overview to emphasize the notion of a systematic approach to such reviews.

The conventional focus of rigorous literature reviews (i.e., review types for which systematic methods have been codified, including the various approaches to quantitative systematic reviews [ 2 – 4 ], and the numerous forms of qualitative and mixed methods literature synthesis [ 5 – 10 ]) is to synthesize empirical research findings from multiple studies. By contrast, the focus of overviews of methods, including the systematic approach we advocate, is to synthesize guidance on methods topics. The literature consulted for such reviews may include the methods literature, methods-relevant sections of empirical research reports, or both. Thus, this paper adds to previous work published in this journal—namely, recent preliminary guidance for conducting reviews of theory [ 11 ]—that has extended the application of systematic review methods to novel review types that are concerned with subject matter other than empirical research findings.

Published examples of methods overviews illustrate the varying objectives they can have. One objective is to establish methodological standards for appraisal purposes. For example, reviews of existing quality appraisal standards have been used to propose universal standards for appraising the quality of primary qualitative research [ 12 ] or evaluating qualitative research reports [ 13 ]. A second objective is to survey the methods-relevant sections of empirical research reports to establish current practices on methods use and reporting practices, which Moher and colleagues [ 14 ] recommend as a means for establishing the needs to be addressed in reporting guidelines (see, for example [ 15 , 16 ]). A third objective for a methods review is to offer clarity and enhance collective understanding regarding a specific methods topic that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness within the available methods literature. An example of this is an overview whose objective was to review the inconsistent definitions of intention-to-treat analysis (the methodologically preferred approach to analyze randomized controlled trial data) that have been offered in the methods literature and propose a solution for improving conceptual clarity [ 17 ]. Such reviews are warranted because students and researchers who must learn or apply research methods typically lack the time to systematically search, retrieve, review, and compare the available literature to develop a thorough and critical sense of the varied approaches regarding certain controversial or ambiguous methods topics.

While systematic methods overviews , as a review type, include both reviews of the methods literature and reviews of methods-relevant sections from empirical study reports, the guidance provided here is primarily applicable to reviews of the methods literature since it was derived from the experience of conducting such a review [ 18 ], described below. To our knowledge, there are no well-developed proposals on how to rigorously conduct such reviews. Such guidance would have the potential to improve the thoroughness and credibility of critical evaluations of the methods literature, which could increase their utility as a tool for generating understandings that advance research methods, both qualitative and quantitative. Our aim in this paper is thus to initiate discussion about what might constitute a rigorous approach to systematic methods overviews. While we hope to promote rigor in the conduct of systematic methods overviews wherever possible, we do not wish to suggest that all methods overviews need be conducted to the same standard. Rather, we believe that the level of rigor may need to be tailored pragmatically to the specific review objectives, which may not always justify the resource requirements of an intensive review process.

The example systematic methods overview on sampling in qualitative research

The principles and strategies we propose in this paper are derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research [ 18 ]. The main objective of that methods overview was to bring clarity and deeper understanding of the prominent concepts related to sampling in qualitative research (purposeful sampling strategies, saturation, etc.). Specifically, we interpreted the available guidance, commenting on areas lacking clarity, consistency, or comprehensiveness (without proposing any recommendations on how to do sampling). This was achieved by a comparative and critical analysis of publications representing the most influential (i.e., highly cited) guidance across several methodological traditions in qualitative research.

The specific methods and procedures for the overview on sampling [ 18 ] from which our proposals are derived were developed both after soliciting initial input from local experts in qualitative research and an expert health librarian (KAM) and through ongoing careful deliberation throughout the review process. To summarize, in that review, we employed a transparent and rigorous approach to search the methods literature, selected publications for inclusion according to a purposeful and iterative process, abstracted textual data using structured abstraction forms, and analyzed (synthesized) the data using a systematic multi-step approach featuring abstraction of text, summary of information in matrices, and analytic comparisons.

For this article, we reflected on both the problems and challenges encountered at different stages of the review and our means for selecting justifiable procedures to deal with them. Several principles were then derived by considering the generic nature of these problems, while the generalizable aspects of the procedures used to address them formed the basis of optional strategies. Further details of the specific methods and procedures used in the overview on qualitative sampling are provided below to illustrate both the types of objectives and challenges that reviewers will likely need to consider and our approach to implementing each of the principles and strategies.

Organization of the guidance into principles and strategies

For the purposes of this article, principles are general statements outlining what we propose are important aims or considerations within a particular review process, given the unique objectives or challenges to be overcome with this type of review. These statements follow the general format, “considering the objective or challenge of X, we propose Y to be an important aim or consideration.” Strategies are optional and flexible approaches for implementing the previous principle outlined. Thus, generic challenges give rise to principles, which in turn give rise to strategies.

We organize the principles and strategies below into three sections corresponding to processes characteristic of most systematic literature synthesis approaches: literature identification and selection ; data abstraction from the publications selected for inclusion; and analysis , including critical appraisal and synthesis of the abstracted data. Within each section, we also describe the specific methodological decisions and procedures used in the overview on sampling in qualitative research [ 18 ] to illustrate how the principles and strategies for each review process were applied and implemented in a specific case. We expect this guidance and accompanying illustrations will be useful for anyone considering engaging in a methods overview, particularly those who may be familiar with conventional systematic review methods but may not yet appreciate some of the challenges specific to reviewing the methods literature.

Results and discussion

Literature identification and selection

The identification and selection process includes search and retrieval of publications and the development and application of inclusion and exclusion criteria to select the publications that will be abstracted and analyzed in the final review. Literature identification and selection for overviews of the methods literature is challenging and potentially more resource-intensive than for most reviews of empirical research. This is true for several reasons that we describe below, alongside discussion of the potential solutions. Additionally, we suggest in this section how the selection procedures can be chosen to match the specific analytic approach used in methods overviews.

Delimiting a manageable set of publications

One aspect of methods overviews that can make identification and selection challenging is the fact that the universe of literature containing potentially relevant information regarding most methods-related topics is expansive and often unmanageably so. Reviewers are faced with two large categories of literature: the methods literature , where the possible publication types include journal articles, books, and book chapters; and the methods-relevant sections of empirical study reports , where the possible publication types include journal articles, monographs, books, theses, and conference proceedings. In our systematic overview of sampling in qualitative research, exhaustively searching (including retrieval and first-pass screening) all publication types across both categories of literature for information on a single methods-related topic was too burdensome to be feasible. The following proposed principle follows from the need to delimit a manageable set of literature for the review.

Principle #1:

Considering the broad universe of potentially relevant literature, we propose that an important objective early in the identification and selection stage is to delimit a manageable set of methods-relevant publications in accordance with the objectives of the methods overview.

Strategy #1:

To limit the set of methods-relevant publications that must be managed in the selection process, reviewers have the option to initially review only the methods literature, and exclude the methods-relevant sections of empirical study reports, provided this aligns with the review’s particular objectives.

We propose that reviewers are justified in choosing to select only the methods literature when the objective is to map out the range of recognized concepts relevant to a methods topic, to summarize the most authoritative or influential definitions or meanings for methods-related concepts, or to demonstrate a problematic lack of clarity regarding a widely established methods-related concept and potentially make recommendations for a preferred approach to the methods topic in question. For example, in the case of the methods overview on sampling [ 18 ], the primary aim was to define areas lacking in clarity for multiple widely established sampling-related topics. In the review on intention-to-treat in the context of missing outcome data [ 17 ], the authors identified a lack of clarity based on multiple inconsistent definitions in the literature and went on to recommend separating the issue of how to handle missing outcome data from the issue of whether an intention-to-treat analysis can be claimed.

In contrast to strategy #1, it may be appropriate to select the methods-relevant sections of empirical study reports when the objective is to illustrate how a methods concept is operationalized in research practice or reported by authors. For example, one could review all the publications in 2 years’ worth of issues of five high-impact field-related journals to answer questions about how researchers describe implementing a particular method or approach, or to quantify how consistently they define or report using it. Such reviews are often used to highlight gaps in the reporting practices regarding specific methods, which may be used to justify items to address in reporting guidelines (for example, [ 14 – 16 ]).

It is worth recognizing that other authors have advocated broader positions regarding the scope of literature to be considered in a review, expanding on our perspective. Suri [ 10 ] (who, like us, emphasizes how different sampling strategies are suitable for different literature synthesis objectives) has, for example, described a two-stage literature sampling procedure (pp. 96–97). First, reviewers use an initial approach to conduct a broad overview of the field—for reviews of methods topics, this would entail an initial review of the research methods literature. This is followed by a second more focused stage in which practical examples are purposefully selected—for methods reviews, this would involve sampling the empirical literature to illustrate key themes and variations. While this approach is seductive in its capacity to generate more in depth and interpretive analytic findings, some reviewers may consider it too resource-intensive to include the second step no matter how selective the purposeful sampling. In the overview on sampling where we stopped after the first stage [ 18 ], we discussed our selective focus on the methods literature as a limitation that left opportunities for further analysis of the literature. We explicitly recommended, for example, that theoretical sampling was a topic for which a future review of the methods sections of empirical reports was justified to answer specific questions identified in the primary review.

Ultimately, reviewers must make pragmatic decisions that balance resource considerations, combined with informed predictions about the depth and complexity of literature available on their topic, with the stated objectives of their review. The remaining principles and strategies apply primarily to overviews that include the methods literature, although some aspects may be relevant to reviews that include empirical study reports.

Searching beyond standard bibliographic databases

An important reality affecting identification and selection in overviews of the methods literature is the increased likelihood for relevant publications to be located in sources other than journal articles (which is usually not the case for overviews of empirical research, where journal articles generally represent the primary publication type). In the overview on sampling [ 18 ], out of 41 full-text publications retrieved and reviewed, only 4 were journal articles, while 37 were books or book chapters. Since many books and book chapters did not exist electronically, their full text had to be physically retrieved in hardcopy, while 11 publications were retrievable only through interlibrary loan or purchase request. The tasks associated with such retrieval are substantially more time-consuming than electronic retrieval. Since a substantial proportion of methods-related guidance may be located in publication types that are less comprehensively indexed in standard bibliographic databases, identification and retrieval thus become complicated processes.

Principle #2:

Considering that important sources of methods guidance can be located in non-journal publication types (e.g., books, book chapters) that tend to be poorly indexed in standard bibliographic databases, it is important to consider alternative search methods for identifying relevant publications to be further screened for inclusion.

Strategy #2:

To identify books, book chapters, and other non-journal publication types not thoroughly indexed in standard bibliographic databases, reviewers may choose to consult one or more of the following less standard sources: Google Scholar, publisher web sites, or expert opinion.

In the case of the overview on sampling in qualitative research [ 18 ], Google Scholar had two advantages over other standard bibliographic databases: it indexes and returns records of books and book chapters likely to contain guidance on qualitative research methods topics; and it has been validated as providing higher citation counts than ISI Web of Science (a producer of numerous bibliographic databases accessible through institutional subscription) for several non-biomedical disciplines including the social sciences where qualitative research methods are prominently used [ 19 – 21 ]. While we identified numerous useful publications by consulting experts, the author publication lists generated through Google Scholar searches were uniquely useful to identify more recent editions of methods books identified by experts.

Searching without relevant metadata

Determining what publications to select for inclusion in the overview on sampling [ 18 ] could only rarely be accomplished by reviewing the publication’s metadata. This was because for the many books and other non-journal type publications we identified as possibly relevant, the potential content of interest would be located in only a subsection of the publication. In this common scenario for reviews of the methods literature (as opposed to methods overviews that include empirical study reports), reviewers will often be unable to employ standard title, abstract, and keyword database searching or screening as a means for selecting publications.

Principle #3:

Considering that the presence of information about the topic of interest may not be indicated in the metadata for books and similar publication types, it is important to consider other means of identifying potentially useful publications for further screening.

Strategy #3:

One approach to identifying potentially useful books and similar publication types is to consider what classes of such publications (e.g., all methods manuals for a certain research approach) are likely to contain relevant content, then identify, retrieve, and review the full text of corresponding publications to determine whether they contain information on the topic of interest.

In the example of the overview on sampling in qualitative research [ 18 ], the topic of interest (sampling) was one of numerous topics covered in the general qualitative research methods manuals. Consequently, examples from this class of publications first had to be identified for retrieval according to non-keyword-dependent criteria. Thus, all methods manuals within the three research traditions reviewed (grounded theory, phenomenology, and case study) that might contain discussion of sampling were sought through Google Scholar and expert opinion, their full text obtained, and hand-searched for relevant content to determine eligibility. We used tables of contents and index sections of books to aid this hand searching.

Purposefully selecting literature on conceptual grounds

A final consideration in methods overviews relates to the type of analysis used to generate the review findings. Unlike quantitative systematic reviews where reviewers aim for accurate or unbiased quantitative estimates—something that requires identifying and selecting the literature exhaustively to obtain all relevant data available (i.e., a complete sample)—in methods overviews, reviewers must describe and interpret the relevant literature in qualitative terms to achieve review objectives. In other words, the aim in methods overviews is to seek coverage of the qualitative concepts relevant to the methods topic at hand. For example, in the overview of sampling in qualitative research [ 18 ], achieving review objectives entailed providing conceptual coverage of eight sampling-related topics that emerged as key domains. The following principle recognizes that literature sampling should therefore support generating qualitative conceptual data as the input to analysis.

Principle #4:

Since the analytic findings of a systematic methods overview are generated through qualitative description and interpretation of the literature on a specified topic, selection of the literature should be guided by a purposeful strategy designed to achieve adequate conceptual coverage (i.e., representing an appropriate degree of variation in relevant ideas) of the topic according to objectives of the review.

Strategy #4:

One strategy for choosing the purposeful approach to use in selecting the literature according to the review objectives is to consider whether those objectives imply exploring concepts either at a broad overview level, in which case combining maximum variation selection with a strategy that limits yield (e.g., critical case, politically important, or sampling for influence—described below) may be appropriate; or in depth, in which case purposeful approaches aimed at revealing innovative cases will likely be necessary.

In the methods overview on sampling, the implied scope was broad since we set out to review publications on sampling across three divergent qualitative research traditions—grounded theory, phenomenology, and case study—to facilitate making informative conceptual comparisons. Such an approach would be analogous to maximum variation sampling.

At the same time, the purpose of that review was to critically interrogate the clarity, consistency, and comprehensiveness of literature from these traditions that was “most likely to have widely influenced students’ and researchers’ ideas about sampling” (p. 1774) [18]. In other words, we explicitly set out to review and critique the most established and influential (and therefore dominant) literature, since this represents a common basis of knowledge among students and researchers seeking understanding or practical guidance on sampling in qualitative research. To achieve this objective, we purposefully sampled publications according to the criterion of influence, which we operationalized as how often an author or publication has been referenced in print or informal discourse. This second sampling approach also limited the literature we needed to consider within our broad scope review to a manageable amount.

To operationalize this strategy of sampling for influence, we sought to identify both the most influential authors within a qualitative research tradition (all of whose citations were subsequently screened) and the most influential publications on the topic of interest by non-influential authors. This involved a flexible approach that combined multiple indicators of influence to avoid the dilemma that any single indicator might provide inadequate coverage. These indicators included bibliometric data (the h-index for author influence [22]; citation counts for publication influence), expert opinion, and cross-references in the literature (i.e., snowball sampling). As a final selection criterion, a publication was included only if it made an original contribution in terms of novel guidance regarding sampling or a related concept; thus, purely secondary sources were excluded. Publish or Perish software (Anne-Wil Harzing; available at http://www.harzing.com/resources/publish-or-perish) was used to generate bibliometric data via the Google Scholar database. Figure 1 illustrates the multi-faceted and iterative identification and selection process used in the methods overview on sampling. The authors selected as influential and the publications selected for inclusion or exclusion are listed in Additional file 1 (Matrices 1, 2a, 2b).

Fig. 1 Literature identification and selection process used in the methods overview on sampling [18]
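For readers who want a concrete sense of the bibliometric component of this influence criterion, the h-index is simple to compute from a list of per-publication citation counts. The short Python sketch below is illustrative only: the citation counts are invented, and the original review obtained its figures through Publish or Perish and Google Scholar rather than custom code.

    def h_index(citation_counts):
        # h-index: the largest h such that the author has at least h
        # publications cited at least h times each (Hirsch [22]).
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical citation counts for one author's publications
    print(h_index([48, 33, 20, 9, 5, 5, 2, 0]))  # prints 5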

In summary, the strategies of seeking maximum variation and sampling for influence were employed in the sampling overview to meet the specific review objectives described. Reviewers will need to consider the full range of purposeful literature sampling approaches at their disposal in deciding what best matches the specific aims of their own reviews. Suri [ 10 ] has recently retooled Patton’s well-known typology of purposeful sampling strategies (originally intended for primary research) for application to literature synthesis, providing a useful resource in this respect.

Data abstraction

The purpose of data abstraction in rigorous literature reviews is to locate and record all data relevant to the topic of interest from the full text of included publications, making them available for subsequent analysis. Conventionally, a data abstraction form—consisting of numerous distinct, conceptually defined fields in which corresponding information from the source publication is recorded—is developed and employed. There are several challenges, however, in developing the abstraction form and abstracting the data when conducting methods overviews, which we address here. Some of these problems and their solutions may be familiar to those who have conducted qualitative literature syntheses, which are similarly conceptual.

Iteratively defining conceptual information to abstract

In the overview on sampling [ 18 ], while we surveyed multiple sources beforehand to develop a list of concepts relevant for abstraction (e.g., purposeful sampling strategies, saturation, sample size), there was no way for us to anticipate some concepts prior to encountering them in the review process. Indeed, in many cases, reviewers are unable to determine the complete set of methods-related concepts that will be the focus of the final review a priori without having systematically reviewed the publications to be included. Thus, defining what information to abstract beforehand may not be feasible.

Principle #5:

Considering the potential impracticality of defining a complete set of relevant methods-related concepts from a body of literature one has not yet systematically read, selecting and defining fields for data abstraction must often be undertaken iteratively. Thus, concepts to be abstracted can be expected to grow and change as data abstraction proceeds.

Strategy #5:

Reviewers can develop an initial form or set of concepts for abstraction purposes according to standard methods (e.g., incorporating expert feedback, pilot testing) and remain attentive to the need to iteratively revise it as concepts are added or modified during the review. Reviewers should document revisions and return to re-abstract data from previously abstracted publications as the new data requirements are determined.

In the sampling overview [ 18 ], we developed and maintained the abstraction form in Microsoft Word. We derived the initial set of abstraction fields from our own knowledge of relevant sampling-related concepts, consultation with local experts, and reviewing a pilot sample of publications. Since the publications in this review included a large proportion of books, the abstraction process often began by flagging the broad sections within a publication containing topic-relevant information for detailed review to identify text to abstract. When reviewing flagged text, the reviewer occasionally encountered an unanticipated concept significant enough to warrant being added as a new field to the abstraction form. For example, a field was added to capture how authors described the timing of sampling decisions, whether before (a priori) or after (ongoing) starting data collection, or whether this was unclear. In these cases, we systematically documented the modification to the form and returned to previously abstracted publications to abstract any information that might be relevant to the new field.

The logic of this strategy is analogous to the logic used in a form of research synthesis called best fit framework synthesis (BFFS) [ 23 – 25 ]. In that method, reviewers initially code evidence using an a priori framework they have selected. When evidence cannot be accommodated by the selected framework, reviewers then develop new themes or concepts from which they construct a new expanded framework. Both the strategy proposed and the BFFS approach to research synthesis are notable for their rigorous and transparent means to adapt a final set of concepts to the content under review.

Accounting for inconsistent terminology

An important complication affecting the abstraction process in methods overviews is that the language used by authors to describe methods-related concepts can easily vary across publications. For example, authors from different qualitative research traditions often use different terms for similar methods-related concepts. Furthermore, as we found in the sampling overview [ 18 ], there may be cases where no identifiable term, phrase, or label for a methods-related concept is used at all, and a description of it is given instead. This can make searching the text for relevant concepts based on keywords unreliable.

Principle #6:

Since accepted terms may not be used consistently to refer to methods concepts, it is necessary to rely on the definitions for concepts, rather than keywords, to identify relevant information in the publication to abstract.

Strategy #6:

An effective means of systematically identifying relevant information is to develop, and iteratively adjust, written definitions for key concepts (corresponding to abstraction fields) that are consistent with the reviewed literature and as inclusive of it as possible. Reviewers then seek information that matches these definitions (rather than keywords) when scanning a publication for relevant data to abstract.

In the abstraction process for the sampling overview [18], we noted several concepts of interest to the review for which abstraction by keyword was particularly problematic due to inconsistent terminology across publications: sampling, purposeful sampling, sampling strategy, and saturation (for examples, see Additional file 1, Matrices 3a, 3b, 4). We iteratively developed definitions for these concepts by abstracting text from publications that either provided an explicit definition or from which an implicit definition could be derived; this text was recorded in fields dedicated to the concept’s definition. Through constant comparison, text from these definition fields was used to inform and modify a centrally maintained definition of the corresponding concept, optimizing its fit and inclusiveness with the literature reviewed. Table 1 shows, as an example, the final definition constructed in this way for one of the central concepts of the review, qualitative sampling.

We applied iteratively developed definitions when making decisions about what specific text to abstract for an existing field, which allowed us to abstract concept-relevant data even if no recognized keyword was used. For example, this was the case for the sampling-related concept saturation, where the relevant text available for abstraction in one publication [26]—“to continue to collect data until nothing new was being observed or recorded, no matter how long that takes”—was not accompanied by any term or label whatsoever.

This comparative analytic strategy (and our approach to analysis more broadly, as described in strategy #7 below) is analogous to the process of reciprocal translation—a technique first introduced for meta-ethnography by Noblit and Hare [27] that has since been recognized as a common element in a variety of qualitative metasynthesis approaches [28]. Reciprocal translation, taken broadly, involves making sense of a study’s findings in terms of the findings of the other studies included in the review. In practice, it has been operationalized in different ways. Melendez-Torres and colleagues developed a typology from their review of the metasynthesis literature, describing four overlapping categories of specific operations undertaken in reciprocal translation: visual representation, key paper integration, data reduction and thematic extraction, and line-by-line coding [28]. The approaches suggested in both strategies #6 and #7, with their emphasis on constant comparison, appear to fall within the line-by-line coding category.

Generating credible and verifiable analytic interpretations

The analysis in a systematic methods overview must support its more general objective, which we suggested above is often to offer clarity and enhance collective understanding regarding a chosen methods topic. In our experience, this involves describing and interpreting the relevant literature in qualitative terms. Furthermore, any interpretive analysis required may entail reaching different levels of abstraction, depending on the more specific objectives of the review. For example, in the overview on sampling [18], we aimed to produce a comparative analysis of how multiple sampling-related topics were treated differently within and among different qualitative research traditions. To promote the credibility of the review, however, not only should one seek a qualitative analytic approach that facilitates reaching varying levels of abstraction, but that approach must also ensure that abstract interpretations are supported and justified by the source data rather than being solely the product of the analyst’s speculative thinking.

Principle #7:

Considering the qualitative nature of the analysis required in systematic methods overviews, it is important to select an analytic method whose interpretations can be verified as being consistent with the literature selected, regardless of the level of abstraction reached.

Strategy #7:

We suggest employing the constant comparative method of analysis [29] because it supports developing and verifying analytic links to the source data through progressively more interpretive or abstract levels of analysis. In applying this method, we advise rigorously documenting how supportive quotes or references to the original texts are carried forward through the successive steps of analysis to allow for easy verification.

The analytic approach used in the methods overview on sampling [ 18 ] comprised four explicit steps, progressing in level of abstraction—data abstraction, matrices, narrative summaries, and final analytic conclusions (Fig.  2 ). While we have positioned data abstraction as the second stage of the generic review process (prior to Analysis), above, we also considered it as an initial step of analysis in the sampling overview for several reasons. First, it involved a process of constant comparisons and iterative decision-making about the fields to add or define during development and modification of the abstraction form, through which we established the range of concepts to be addressed in the review. At the same time, abstraction involved continuous analytic decisions about what textual quotes (ranging in size from short phrases to numerous paragraphs) to record in the fields thus created. This constant comparative process was analogous to open coding in which textual data from publications was compared to conceptual fields (equivalent to codes) or to other instances of data previously abstracted when constructing definitions to optimize their fit with the overall literature as described in strategy #6. Finally, in the data abstraction step, we also recorded our first interpretive thoughts in dedicated fields, providing initial material for the more abstract analytic steps.

Fig. 2 Summary of progressive steps of analysis used in the methods overview on sampling [18]

In the second step of the analysis, we constructed topic-specific matrices, or tables, by copying relevant quotes from abstraction forms into the appropriate cells of the matrices (for the complete set of analytic matrices developed in the sampling review, see Additional file 1, Matrices 3 to 10). Each matrix ranged from one to five pages; row headings, nested three deep, identified the methodological tradition, author, and publication, respectively; and column headings identified the concepts, which corresponded to abstraction fields. Matrices thus allowed us to make further comparisons across methodological traditions and between authors within a tradition. In the third step of analysis, we recorded our comparative observations as narrative summaries, in which we used illustrative quotes more sparingly. In the final step, we developed analytic conclusions, based on the narrative summaries, about the sampling-related concepts within each methodological tradition for which clarity, consistency, or comprehensiveness of the available guidance appeared to be lacking. Higher levels of analysis thus built logically from the lower levels, enabling us to verify analytic conclusions by tracing the support for claims back to the original text of the publications reviewed.
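To make the structure of these matrices concrete, the sketch below shows one way such a table could be represented programmatically using the pandas library. It is a hypothetical illustration only: the traditions mirror those named in the review, but the authors, publications, and quotes are placeholders, and the review’s actual matrices were maintained as word-processing tables rather than code.

    import pandas as pd

    # Rows nested three deep: tradition > author > publication;
    # columns correspond to abstraction fields (concepts); cells hold quotes.
    rows = pd.MultiIndex.from_tuples(
        [
            ("Grounded theory", "Author A", "Methods manual, 3rd ed."),
            ("Phenomenology", "Author B", "Handbook chapter"),
        ],
        names=["tradition", "author", "publication"],
    )
    matrix = pd.DataFrame(
        {
            "purposeful sampling": ["<quote on strategy...>", "<quote...>"],
            "saturation": ["<quote on stopping rule...>", None],
        },
        index=rows,
    )
    print(matrix)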

Integrative versus interpretive methods overviews

The analytic product of systematic methods overviews is comparable to qualitative evidence syntheses, since both involve describing and interpreting the relevant literature in qualitative terms. Most qualitative synthesis approaches strive to produce new conceptual understandings that vary in level of interpretation. Dixon-Woods and colleagues [ 30 ] elaborate on a useful distinction, originating from Noblit and Hare [ 27 ], between integrative and interpretive reviews. Integrative reviews focus on summarizing available primary data and involve using largely secure and well defined concepts to do so; definitions are used from an early stage to specify categories for abstraction (or coding) of data, which in turn supports their aggregation; they do not seek as their primary focus to develop or specify new concepts, although they may achieve some theoretical or interpretive functions. For interpretive reviews, meanwhile, the main focus is to develop new concepts and theories that integrate them, with the implication that the concepts developed become fully defined towards the end of the analysis. These two forms are not completely distinct, and “every integrative synthesis will include elements of interpretation, and every interpretive synthesis will include elements of aggregation of data” [ 30 ].

The example methods overview on sampling [ 18 ] could be classified as predominantly integrative because its primary goal was to aggregate influential authors’ ideas on sampling-related concepts; there were also, however, elements of interpretive synthesis since it aimed to develop new ideas about where clarity in guidance on certain sampling-related topics is lacking, and definitions for some concepts were flexible and not fixed until late in the review. We suggest that most systematic methods overviews will be classifiable as predominantly integrative (aggregative). Nevertheless, more highly interpretive methods overviews are also quite possible—for example, when the review objective is to provide a highly critical analysis for the purpose of generating new methodological guidance. In such cases, reviewers may need to sample more deeply (see strategy #4), specifically by selecting empirical research reports (i.e., to go beyond dominant or influential ideas in the methods literature) that are likely to feature innovations or instructive lessons in employing a given method.

In this paper, we have outlined tentative guidance in the form of seven principles and strategies on how to conduct systematic methods overviews, a review type in which methods-relevant literature is systematically analyzed with the aim of offering clarity and enhancing collective understanding regarding a specific methods topic. Our proposals include strategies for delimiting the set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology, and generating credible and verifiable analytic interpretations. We hope the suggestions proposed will be useful to others undertaking reviews on methods topics in future.

As far as we are aware, this is the first published source of concrete guidance for conducting this type of review. It is important to note that our primary objective was to initiate methodological discussion by stimulating reflection on what rigorous methods for this type of review should look like, leaving the development of more complete guidance to future work. While derived from the experience of reviewing a single qualitative methods topic, we believe the principles and strategies provided are generalizable to overviews of qualitative and quantitative methods topics alike. However, additional challenges and insights for conducting such reviews likely remain to be identified. Thus, we propose that next steps for developing more definitive guidance should involve an attempt to collect and integrate other reviewers’ perspectives and experiences in conducting systematic methods overviews on a broad range of qualitative and quantitative methods topics. Formalized guidance and standards would improve the quality of future methods overviews, something we believe has important implications for advancing qualitative and quantitative methodology. When undertaken to a high standard, rigorous critical evaluations of the available methods guidance have significant potential to make implicit controversies explicit, and to improve the clarity and precision of our understanding of problematic qualitative or quantitative methods issues.

A review process central to most types of rigorous reviews of empirical studies, which we did not explicitly address in a separate review step above, is quality appraisal . The reason we have not treated this as a separate step stems from the different objectives of the primary publications included in overviews of the methods literature (i.e., providing methodological guidance) compared to the primary publications included in the other established review types (i.e., reporting findings from single empirical studies). This is not to say that appraising quality of the methods literature is not an important concern for systematic methods overviews. Rather, appraisal is much more integral to (and difficult to separate from) the analysis step, in which we advocate appraising clarity, consistency, and comprehensiveness—the quality appraisal criteria that we suggest are appropriate for the methods literature. As a second important difference regarding appraisal, we currently advocate appraising the aforementioned aspects at the level of the literature in aggregate rather than at the level of individual publications. One reason for this is that methods guidance from individual publications generally builds on previous literature, and thus we feel that ahistorical judgments about comprehensiveness of single publications lack relevance and utility. Additionally, while different methods authors may express themselves less clearly than others, their guidance can nonetheless be highly influential and useful, and should therefore not be downgraded or ignored based on considerations of clarity—which raises questions about the alternative uses that quality appraisals of individual publications might have. Finally, legitimate variability in the perspectives that methods authors wish to emphasize, and the levels of generality at which they write about methods, makes critiquing individual publications based on the criterion of clarity a complex and potentially problematic endeavor that is beyond the scope of this paper to address. By appraising the current state of the literature at a holistic level, reviewers stand to identify important gaps in understanding that represent valuable opportunities for further methodological development.

To summarize, the principles and strategies provided here may be useful to those seeking to undertake their own systematic methods overview. Additional work is needed, however, to establish guidance that is comprehensive by comparing the experiences from conducting a variety of methods overviews on a range of methods topics. Efforts that further advance standards for systematic methods overviews have the potential to promote high-quality critical evaluations that produce conceptually clear and unified understandings of problematic methods topics, thereby accelerating the advance of research methodology.

Hutton JL, Ashcroft R. What does “systematic” mean for reviews of methods? In: Black N, Brazier J, Fitzpatrick R, Reeves B, editors. Health services research methods: a guide to best practice. London: BMJ Publishing Group; 1998. p. 249–54.


Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 5.1.0. The Cochrane Collaboration; 2011.

Centre for Reviews and Dissemination. Systematic reviews: CRD’s guidance for undertaking reviews in health care. York: Centre for Reviews and Dissemination; 2009.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JPA, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339:b2700.

Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review. BMC Med Res Methodol. 2009;9(1):59.


Kastner M, Tricco AC, Soobiah C, Lillie E, Perrier L, Horsley T, Welch V, Cogo E, Antony J, Straus SE. What is the most appropriate knowledge synthesis method to conduct a review? Protocol for a scoping review. BMC Med Res Methodol. 2012;12(1):1–1.


Booth A, Noyes J, Flemming K, Gerhardus A. Guidance on choosing qualitative evidence synthesis methods for use in health technology assessments of complex interventions. In: Integrate-HTA. 2016.

Booth A, Sutton A, Papaioannou D. Systematic approaches to a successful literature review. 2nd ed. London: Sage; 2016.

Hannes K, Lockwood C. Synthesizing qualitative research: choosing the right approach. Chichester: Wiley-Blackwell; 2012.

Suri H. Towards methodologically inclusive research syntheses: expanding possibilities. New York: Routledge; 2014.

Campbell M, Egan M, Lorenc T, Bond L, Popham F, Fenton C, Benzeval M. Considering methodological options for reviews of theory: illustrated by a review of theories linking income and health. Syst Rev. 2014;3(1):1–11.

Cohen DJ, Crabtree BF. Evaluative criteria for qualitative research in health care: controversies and recommendations. Ann Fam Med. 2008;6(4):331–9.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.


Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med. 2010;7(2):e1000217.

Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiology and reporting characteristics of systematic reviews. PLoS Med. 2007;4(3):e78.

Chan AW, Altman DG. Epidemiology and reporting of randomised trials published in PubMed journals. Lancet. 2005;365(9465):1159–62.

Alshurafa M, Briel M, Akl EA, Haines T, Moayyedi P, Gentles SJ, Rios L, Tran C, Bhatnagar N, Lamontagne F, et al. Inconsistent definitions for intention-to-treat in relation to missing outcome data: systematic review of the methods literature. PLoS One. 2012;7(11):e49163.


Gentles SJ, Charles C, Ploeg J, McKibbon KA. Sampling in qualitative research: insights from an overview of the methods literature. Qual Rep. 2015;20(11):1772–89.

Harzing A-W, Alakangas S. Google Scholar, Scopus and the Web of Science: a longitudinal and cross-disciplinary comparison. Scientometrics. 2016;106(2):787–804.

Harzing A-WK, van der Wal R. Google Scholar as a new source for citation analysis. Ethics Sci Environ Polit. 2008;8(1):61–73.

Kousha K, Thelwall M. Google Scholar citations and Google Web/URL citations: a multi‐discipline exploratory analysis. J Assoc Inf Sci Technol. 2007;58(7):1055–65.

Hirsch JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci U S A. 2005;102(46):16569–72.

Booth A, Carroll C. How to build up the actionable knowledge base: the role of ‘best fit’ framework synthesis for studies of improvement in healthcare. BMJ Quality Safety. 2015;24(11):700–8.

Carroll C, Booth A, Leaviss J, Rick J. “Best fit” framework synthesis: refining the method. BMC Med Res Methodol. 2013;13(1):37.

Carroll C, Booth A, Cooper K. A worked example of “best fit” framework synthesis: a systematic review of views concerning the taking of some potential chemopreventive agents. BMC Med Res Methodol. 2011;11(1):29.

Cohen MZ, Kahn DL, Steeves DL. Hermeneutic phenomenological research: a practical guide for nurse researchers. Thousand Oaks: Sage; 2000.

Noblit GW, Hare RD. Meta-ethnography: synthesizing qualitative studies. Newbury Park: Sage; 1988.


Melendez-Torres GJ, Grant S, Bonell C. A systematic review and critical appraisal of qualitative metasynthetic practice in public health to develop a taxonomy of operations of reciprocal translation. Res Synthesis Methods. 2015;6(4):357–71.


Glaser BG, Strauss A. The discovery of grounded theory. Chicago: Aldine; 1967.

Dixon-Woods M, Agarwal S, Young B, Jones D, Sutton A. Integrative approaches to qualitative and quantitative evidence. UK National Health Service; 2004. p. 1–44.


Acknowledgements

Not applicable.

Funding

There was no funding for this work.

Availability of data and materials

The systematic methods overview used as a worked example in this article (Gentles SJ, Charles C, Ploeg J, McKibbon KA: Sampling in qualitative research: insights from an overview of the methods literature. The Qual Rep 2015, 20(11):1772-1789) is available from http://nsuworks.nova.edu/tqr/vol20/iss11/5 .

Authors’ contributions

SJG wrote the first draft of this article, with CC contributing to drafting. All authors contributed to revising the manuscript. All authors except CC (deceased) approved the final draft. SJG, CC, KAB, and JP were involved in developing methods for the systematic methods overview on sampling.

Competing interests

The authors declare that they have no competing interests.

Authors and affiliations

Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada

Stephen J. Gentles, Cathy Charles & K. Ann McKibbon

Faculty of Social Work, University of Calgary, Alberta, Canada

David B. Nicholas

School of Nursing, McMaster University, Hamilton, Ontario, Canada

Jenny Ploeg

CanChild Centre for Childhood Disability Research, McMaster University, 1400 Main Street West, IAHS 408, Hamilton, ON, L8S 1C7, Canada

Stephen J. Gentles


Corresponding author

Correspondence to Stephen J. Gentles.

Additional information

Cathy Charles is deceased

Additional file

Additional file 1: Analysis_matrices. (DOC 330 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article.

Gentles, S.J., Charles, C., Nicholas, D.B. et al. Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research. Syst Rev 5, 172 (2016). https://doi.org/10.1186/s13643-016-0343-0


Received: 06 June 2016

Accepted: 14 September 2016

Published: 11 October 2016

DOI: https://doi.org/10.1186/s13643-016-0343-0


Keywords

  • Systematic review
  • Literature selection
  • Research methods
  • Research methodology
  • Overview of methods
  • Systematic methods overview
  • Review methods






Research Methodology – Types, Examples and Writing Guide


Research Methodology


Definition:

Research Methodology refers to the systematic and scientific approach used to conduct research, investigate problems, and gather data and information for a specific purpose. It involves the techniques and procedures used to identify, collect, analyze, and interpret data to answer research questions or solve research problems. It also encompasses the philosophical and theoretical frameworks that guide the research process.

Structure of Research Methodology

Research methodology formats can vary depending on the specific requirements of the research project, but the following is a basic example of a structure for a research methodology section:

I. Introduction

  • Provide an overview of the research problem and the need for a research methodology section
  • Outline the main research questions and objectives

II. Research Design

  • Explain the research design chosen and why it is appropriate for the research question(s) and objectives
  • Discuss any alternative research designs considered and why they were not chosen
  • Describe the research setting and participants (if applicable)

III. Data Collection Methods

  • Describe the methods used to collect data (e.g., surveys, interviews, observations)
  • Explain how the data collection methods were chosen and why they are appropriate for the research question(s) and objectives
  • Detail any procedures or instruments used for data collection

IV. Data Analysis Methods

  • Describe the methods used to analyze the data (e.g., statistical analysis, content analysis)
  • Explain how the data analysis methods were chosen and why they are appropriate for the research question(s) and objectives
  • Detail any procedures or software used for data analysis

V. Ethical Considerations

  • Discuss any ethical issues that may arise from the research and how they were addressed
  • Explain how informed consent was obtained (if applicable)
  • Detail any measures taken to ensure confidentiality and anonymity

VI. Limitations

  • Identify any potential limitations of the research methodology and how they may impact the results and conclusions

VII. Conclusion

  • Summarize the key aspects of the research methodology section
  • Explain how the research methodology addresses the research question(s) and objectives

Research Methodology Types

Types of Research Methodology are as follows:

Quantitative Research Methodology

This is a research methodology that involves the collection and analysis of numerical data using statistical methods. This type of research is often used to study cause-and-effect relationships and to make predictions.

Qualitative Research Methodology

This is a research methodology that involves the collection and analysis of non-numerical data such as words, images, and observations. This type of research is often used to explore complex phenomena, to gain an in-depth understanding of a particular topic, and to generate hypotheses.

Mixed-Methods Research Methodology

This is a research methodology that combines elements of both quantitative and qualitative research. This approach can be particularly useful for studies that aim to explore complex phenomena and to provide a more comprehensive understanding of a particular topic.

Case Study Research Methodology

This is a research methodology that involves in-depth examination of a single case or a small number of cases. Case studies are often used in psychology, sociology, and anthropology to gain a detailed understanding of a particular individual or group.

Action Research Methodology

This is a research methodology that involves a collaborative process between researchers and practitioners to identify and solve real-world problems. Action research is often used in education, healthcare, and social work.

Experimental Research Methodology

This is a research methodology that involves the manipulation of one or more independent variables to observe their effects on a dependent variable. Experimental research is often used to study cause-and-effect relationships and to make predictions.

Survey Research Methodology

This is a research methodology that involves the collection of data from a sample of individuals using questionnaires or interviews. Survey research is often used to study attitudes, opinions, and behaviors.

Grounded Theory Research Methodology

This is a research methodology that involves the development of theories based on the data collected during the research process. Grounded theory is often used in sociology and anthropology to generate theories about social phenomena.

Research Methodology Example

An Example of Research Methodology could be the following:

Research Methodology for Investigating the Effectiveness of Cognitive Behavioral Therapy in Reducing Symptoms of Depression in Adults

Introduction:

The aim of this research is to investigate the effectiveness of cognitive-behavioral therapy (CBT) in reducing symptoms of depression in adults. To achieve this objective, a randomized controlled trial (RCT) will be conducted using a mixed-methods approach.

Research Design:

The study will follow a pre-test and post-test design with two groups: an experimental group receiving CBT and a control group receiving no intervention. The study will also include a qualitative component, in which semi-structured interviews will be conducted with a subset of participants to explore their experiences of receiving CBT.

Participants:

Participants will be recruited from community mental health clinics in the local area. The sample will consist of 100 adults aged 18-65 years old who meet the diagnostic criteria for major depressive disorder. Participants will be randomly assigned to either the experimental group or the control group.
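As a minimal illustration of the random assignment step described above, the Python sketch below performs simple 1:1 randomization of hypothetical participant IDs. A real trial would typically use blocked or stratified randomization with allocation concealment; the seed and group labels here are assumptions for the example.

    import random

    def allocate(participant_ids, seed=42):
        # Simple 1:1 randomization into two arms (no blocking or stratification).
        rng = random.Random(seed)
        ids = list(participant_ids)
        rng.shuffle(ids)
        half = len(ids) // 2
        return {"CBT": ids[:half], "control": ids[half:]}

    groups = allocate(range(1, 101))
    print(len(groups["CBT"]), len(groups["control"]))  # 50 50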

Intervention:

The experimental group will receive 12 weekly sessions of CBT, each lasting 60 minutes. The intervention will be delivered by licensed mental health professionals who have been trained in CBT. The control group will receive no intervention during the study period.

Data Collection:

Quantitative data will be collected through the use of standardized measures such as the Beck Depression Inventory-II (BDI-II) and the Generalized Anxiety Disorder-7 (GAD-7). Data will be collected at baseline, immediately after the intervention, and at a 3-month follow-up. Qualitative data will be collected through semi-structured interviews with a subset of participants from the experimental group. The interviews will be conducted at the end of the intervention period, and will explore participants’ experiences of receiving CBT.

Data Analysis:

Quantitative data will be analyzed using descriptive statistics, t-tests, and mixed-model analyses of variance (ANOVA) to assess the effectiveness of the intervention. Qualitative data will be analyzed using thematic analysis to identify common themes and patterns in participants’ experiences of receiving CBT.
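To illustrate the quantitative part of this analysis plan, the sketch below runs an independent-samples t-test on simulated post-intervention BDI-II scores using SciPy. The scores and group means are invented for demonstration; the full plan described above would also require mixed-model ANOVA across the three time points (for example, via a statistical package such as statsmodels or R), which is not shown here.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Simulated post-intervention BDI-II scores (lower = fewer depressive symptoms)
    cbt = rng.normal(loc=14, scale=6, size=50)
    control = rng.normal(loc=21, scale=6, size=50)

    print(f"CBT mean = {cbt.mean():.1f}, control mean = {control.mean():.1f}")
    t, p = stats.ttest_ind(cbt, control)
    print(f"t = {t:.2f}, p = {p:.4f}")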

Ethical Considerations:

This study will comply with ethical guidelines for research involving human subjects. Participants will provide informed consent before participating in the study, and their privacy and confidentiality will be protected throughout the study. Any adverse events or reactions will be reported and managed appropriately.

Data Management:

All data collected will be kept confidential and stored securely using password-protected databases. Identifying information will be removed from qualitative data transcripts to ensure participants’ anonymity.

Limitations:

One potential limitation of this study is that it only focuses on one type of psychotherapy, CBT, and may not generalize to other types of therapy or interventions. Another limitation is that the study will only include participants from community mental health clinics, which may not be representative of the general population.

Conclusion:

This research aims to investigate the effectiveness of CBT in reducing symptoms of depression in adults. By using a randomized controlled trial and a mixed-methods approach, the study will provide rigorous evidence on whether CBT reduces depressive symptoms, together with insight into participants’ experiences of the therapy. The results of this study will have important implications for the delivery of effective treatments for depression in clinical settings.

How to Write Research Methodology

Writing a research methodology involves explaining the methods and techniques you used to conduct research, collect data, and analyze results. It’s an essential section of any research paper or thesis, as it helps readers understand the validity and reliability of your findings. Here are the steps to write a research methodology:

  • Start by explaining your research question: Begin the methodology section by restating your research question and explaining why it’s important. This helps readers understand the purpose of your research and the rationale behind your methods.
  • Describe your research design: Explain the overall approach you used to conduct research. This could be a qualitative or quantitative research design, experimental or non-experimental, case study or survey, etc. Discuss the advantages and limitations of the chosen design.
  • Discuss your sample: Describe the participants or subjects you included in your study. Include details such as their demographics, sampling method, sample size, and any exclusion criteria used.
  • Describe your data collection methods: Explain how you collected data from your participants. This could include surveys, interviews, observations, questionnaires, or experiments. Include details on how you obtained informed consent, how you administered the tools, and how you minimized the risk of bias.
  • Explain your data analysis techniques: Describe the methods you used to analyze the data you collected. This could include statistical analysis, content analysis, thematic analysis, or discourse analysis. Explain how you dealt with missing data, outliers, and any other issues that arose during the analysis.
  • Discuss the validity and reliability of your research: Explain how you ensured the validity and reliability of your study. This could include measures such as triangulation, member checking, peer review, or inter-coder reliability.
  • Acknowledge any limitations of your research: Discuss any limitations of your study, including any potential threats to validity or generalizability. This helps readers understand the scope of your findings and how they might apply to other contexts.
  • Provide a summary: End the methodology section by summarizing the methods and techniques you used to conduct your research. This provides a clear overview of your research methodology and helps readers understand the process you followed to arrive at your findings.

When to Write Research Methodology

Research methodology is typically written after the research proposal has been approved and before the actual research is conducted. It should be written prior to data collection and analysis, as it provides a clear roadmap for the research project.

The research methodology is an important section of any research paper or thesis, as it describes the methods and procedures that will be used to conduct the research. It should include details about the research design, data collection methods, data analysis techniques, and any ethical considerations.

The methodology should be written in a clear and concise manner, and it should be based on established research practices and standards. It is important to provide enough detail so that the reader can understand how the research was conducted and evaluate the validity of the results.

Applications of Research Methodology

Here are some of the applications of research methodology:

  • To identify the research problem: Research methodology is used to identify the research problem, which is the first step in conducting any research.
  • To design the research: Research methodology helps in designing the research by selecting the appropriate research method, research design, and sampling technique.
  • To collect data: Research methodology provides a systematic approach to collect data from primary and secondary sources.
  • To analyze data: Research methodology helps in analyzing the collected data using various statistical and non-statistical techniques.
  • To test hypotheses: Research methodology provides a framework for testing hypotheses and drawing conclusions based on the analysis of data.
  • To generalize findings: Research methodology helps in generalizing the findings of the research to the target population.
  • To develop theories: Research methodology is used to develop new theories and modify existing theories based on the findings of the research.
  • To evaluate programs and policies: Research methodology is used to evaluate the effectiveness of programs and policies by collecting data and analyzing it.
  • To improve decision-making: Research methodology helps in making informed decisions by providing reliable and valid data.

Purpose of Research Methodology

Research methodology serves several important purposes, including:

  • To guide the research process: Research methodology provides a systematic framework for conducting research. It helps researchers to plan their research, define their research questions, and select appropriate methods and techniques for collecting and analyzing data.
  • To ensure research quality: Research methodology helps researchers to ensure that their research is rigorous, reliable, and valid. It provides guidelines for minimizing bias and error in data collection and analysis, and for ensuring that research findings are accurate and trustworthy.
  • To replicate research: Research methodology provides a clear and detailed account of the research process, making it possible for other researchers to replicate the study and verify its findings.
  • To advance knowledge: Research methodology enables researchers to generate new knowledge and to contribute to the body of knowledge in their field. It provides a means for testing hypotheses, exploring new ideas, and discovering new insights.
  • To inform decision-making: Research methodology provides evidence-based information that can inform policy and decision-making in a variety of fields, including medicine, public health, education, and business.

Advantages of Research Methodology

Research methodology has several advantages that make it a valuable tool for conducting research in various fields. Here are some of the key advantages of research methodology:

  • Systematic and structured approach: Research methodology provides a systematic and structured approach to conducting research, which ensures that the research is conducted in a rigorous and comprehensive manner.
  • Objectivity: Research methodology aims to ensure objectivity in the research process, which means that the research findings are based on evidence and not influenced by personal bias or subjective opinions.
  • Replicability: Research methodology ensures that research can be replicated by other researchers, which is essential for validating research findings and ensuring their accuracy.
  • Reliability: Research methodology aims to ensure that the research findings are reliable, which means that they are consistent and can be depended upon.
  • Validity: Research methodology ensures that the research findings are valid, which means that they accurately reflect the research question or hypothesis being tested.
  • Efficiency: Research methodology provides a structured and efficient way of conducting research, which helps to save time and resources.
  • Flexibility: Research methodology allows researchers to choose the most appropriate research methods and techniques based on the research question, data availability, and other relevant factors.
  • Scope for innovation: Research methodology provides scope for innovation and creativity in designing research studies and developing new research techniques.

Research Methodology Vs Research Methods

About the author.


Muhammad Hassan

Researcher, Academic Writer, Web developer


  • Open access
  • Published: 07 September 2020

A tutorial on methodological studies: the what, when, how and why

  • Lawrence Mbuagbaw (ORCID: orcid.org/0000-0001-5855-5461) 1,2,3,
  • Daeria O. Lawson 1,
  • Livia Puljak 4,
  • David B. Allison 5 &
  • Lehana Thabane 1,2,6,7,8

BMC Medical Research Methodology volume 20, Article number: 226 (2020)


Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.


The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 , 2 , 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 , 7 , 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig.  1 .

Fig. 1 Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed
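A search of this kind can be scripted against the NCBI E-utilities. The sketch below, using Biopython’s Entrez module, counts PubMed records per year whose titles or abstracts mention either term; the email address and year range are placeholders, and the counts returned will depend on when the query is run.

    from Bio import Entrez

    Entrez.email = "your.name@example.org"  # NCBI requires a contact address

    query = ('"methodological review"[Title/Abstract] OR '
             '"meta-epidemiological study"[Title/Abstract]')

    for year in range(2010, 2020):
        handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                                mindate=str(year), maxdate=str(year), retmax=0)
        record = Entrez.read(handle)
        handle.close()
        print(year, record["Count"])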

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 , 13 , 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, for further reading as a potential useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [23, 24]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise the quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p -values in baseline tables of randomized trials published in high impact journals [ 26 ]; Chen et al. described adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [ 27 ]; and Hopewell et al. described the effect of editors’ implementation of CONSORT guidelines on reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been a cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines, including the highly cited CONSORT statement [ 5 ].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
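
To make these sampling options concrete, the sketch below draws both a simple random sample and a stratified random sample from a sampling frame of research reports. The data frame, column names and sample sizes are hypothetical and purely illustrative, not taken from any of the cited studies.

```python
import pandas as pd

# Hypothetical sampling frame: one row per eligible research report, with a
# group label that is much smaller in one stratum (e.g. Cochrane reviews).
frame = pd.DataFrame({
    "report_id": range(1, 501),
    "review_type": ["Cochrane"] * 100 + ["non-Cochrane"] * 400,
})

# Simple random sample of 80 reports: small strata may be under-represented.
simple_sample = frame.sample(n=80, random_state=2020)

# Stratified random sample: 40 reports per stratum, giving equal groups
# for between-group comparisons.
stratified_sample = (
    frame.groupby("review_type", group_keys=False)
         .apply(lambda g: g.sample(n=40, random_state=2020))
)

print(simple_sample["review_type"].value_counts())
print(stratified_sample["review_type"].value_counts())
```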

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time-stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.
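
As a minimal illustration of a targeted database search, the snippet below queries PubMed through the NCBI E-utilities esearch endpoint, restricted to one journal, one publication type and a date window. The journal name and query string are assumptions chosen for illustration, not a recommended search strategy.

```python
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Illustrative query: randomized trials in one journal over a five-year window.
params = {
    "db": "pubmed",
    "term": ('"Plastic and Reconstructive Surgery"[Journal] '
             'AND randomized controlled trial[Publication Type]'),
    "mindate": "2015",
    "maxdate": "2019",
    "datetype": "pdat",
    "retmax": 200,
    "retmode": "json",
}

response = requests.get(ESEARCH_URL, params=params, timeout=30)
response.raise_for_status()
result = response.json()["esearchresult"]

print("Records found:", result["count"])
print("First PMIDs:", result["idlist"][:10])
```

The returned PMIDs can then be screened against the eligibility criteria or used to build the sampling frame described above.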

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged, but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and help avoid duplication of effort [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages of trying to publish protocols include delays associated with manuscript handling and peer review, as well as costs, as few journals publish study protocols and those journals mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in a scholarly journal could deposit their study protocols in publicly available repositories, such as the Open Science Framework ( https://osf.io/ ).

Q: How should I appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]: selection bias, poor comparability of groups, and errors in the ascertainment of exposures or outcomes. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in the assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

Comparing two groups

Determining a proportion, mean or another quantifier

Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
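
A minimal sketch of one common precision-based (confidence interval) calculation for a proportion is shown below. The expected proportion and margin of error are illustrative assumptions, not values taken from El Dib et al.

```python
import math

def sample_size_for_proportion(expected_p, margin, z=1.96):
    """Articles needed to estimate a proportion (e.g. the share of trials
    reporting an item of interest) within +/- `margin`, using the normal
    approximation at the confidence level implied by `z` (1.96 ~ 95%)."""
    return math.ceil(z ** 2 * expected_p * (1 - expected_p) / margin ** 2)

# Illustrative inputs: expect ~60% adherence, want a 95% CI of +/- 5%.
print(sample_size_for_proportion(0.60, 0.05))  # 369 articles
```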

Q: What should I call my study?

A: Other terms which have been used to describe/label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review”, as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of the scenarios of such studies.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section “What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimation equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p -values, unduly narrow confidence intervals, and biased estimates [ 45 ].
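
The sketch below shows one way such a model might be set up in Python with statsmodels: a GEE with an exchangeable working correlation, treating the journal as the cluster. The data frame, variable names and values are hypothetical and chosen only to make the example self-contained.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per trial report, with a binary reporting
# outcome, a study-level covariate, and the journal that published it.
df = pd.DataFrame({
    "adequate_reporting": [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1],
    "industry_funded":    [1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1],
    "journal":            list("AAABBBCCCDDD"),
})

# GEE with an exchangeable working correlation: articles from the same
# journal are treated as correlated rather than independent observations.
model = smf.gee(
    "adequate_reporting ~ industry_funded",
    groups="journal",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```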

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ], and therefore should be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, this area will likely see rapid advances with machine learning and natural language processing technologies to support researchers with screening and data extraction [ 47 , 48 ]. However, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].
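
As a simple illustration of reconciling duplicate extraction, the sketch below compares two hypothetical extraction sheets cell by cell with pandas; the report IDs and fields are made up for the example.

```python
import pandas as pd

# Hypothetical extraction sheets from two independent reviewers.
extractor_1 = pd.DataFrame(
    {"sample_size": [250, 80, 1200], "blinded": ["yes", "no", "yes"]},
    index=pd.Index([101, 102, 103], name="report_id"),
)
extractor_2 = pd.DataFrame(
    {"sample_size": [250, 88, 1200], "blinded": ["yes", "no", "no"]},
    index=pd.Index([101, 102, 103], name="report_id"),
)

# Every cell where the two extractions disagree; these discrepancies can
# then be resolved by discussion or by a third reviewer.
print(extractor_1.compare(extractor_2))
```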

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but whose intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.

Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].

Source of funding and conflicts of interest: Some studies have found that funded studies report better [ 56 , 57 ], while others have found no such association [ 53 , 58 ]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrants assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at the reporting quality of long-term weight loss trials and found that industry funded studies were reported better [ 60 ]. Khan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ]

Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 , 66 , 67 ].

Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].

Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].

Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].

Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, restricting the sample to high impact journals may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research, including the Cumulative Index to Nursing & Allied Health Literature (CINAHL), have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. However, in the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection and confounding bias. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p -values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In a methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
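
A minimal sketch of statistical adjustment is shown below: a logistic regression of a reporting outcome on funding source, adjusted for journal endorsement of a guideline as a potential confounder. All variable names and values are hypothetical assumptions for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: completeness of reporting, funding source, and whether
# the journal endorses the relevant reporting guideline (a confounder).
df = pd.DataFrame({
    "complete_reporting": [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0],
    "industry_funded":    [1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0],
    "journal_endorses":   [1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0],
})

# The coefficient on industry_funded is the association adjusted for
# guideline endorsement, on the log-odds scale.
adjusted = smf.logit(
    "complete_reporting ~ industry_funded + journal_endorses", data=df
).fit()
print(adjusted.summary())
```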

With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to be applicable to trials in other fields. However, investigators must ensure that their sample truly represents the target population, either by a) conducting a comprehensive and exhaustive search, or b) using an appropriate and justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine ( n  = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM ( n  = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM ( n  = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

In order to inform discussions about methodological studies and the development of guidance for what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

What is the aim?

Methodological studies that investigate bias

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Ritchie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Further, biases related to choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].

Methodological studies that investigate quality (or completeness) of reporting

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croitoru et al. reported on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].

Methodological studies that investigate the consistency of reporting

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

Methodological studies that investigate factors associated with reporting

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies that investigate methods

Methodological studies may also be used to describe or compare methods, and the factors associated with methods. For example, Mueller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Methodological studies that summarize other methodological studies

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Methodological studies that investigate nomenclature and terminology

Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

Other types of methodological studies

In addition to the previously mentioned experimental methodological studies, there may exist other types of methodological studies not captured here.

What is the design?

Methodological studies that are descriptive

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].

Methodological studies that are analytical

Some methodological studies are analytical wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease.” [ 89 ] In the case of methodological studies all these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
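
For a comparison of two proportions like the Tricco et al. example, one simple analysis is a two-sample test of proportions. The counts below are illustrative assumptions only, not the figures reported in that study.

```python
from statsmodels.stats.proportion import proportions_ztest

# Illustrative counts: reviews with positive conclusion statements out of
# the number sampled in each group (non-Cochrane vs. Cochrane).
positive = [60, 35]
sampled = [100, 100]

z_stat, p_value = proportions_ztest(count=positive, nobs=sampled)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```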

What is the sampling strategy?

Methodological studies that include the target population

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n  = 103) [ 30 ].

Methodological studies that include a sample of the target population

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, or in journals with a certain ranking or on a topic. Systematic sampling can also be used when random sampling may be challenging to implement.
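
A minimal sketch of systematic sampling is shown below: after a random start, every k-th report in a sorted sampling frame is selected. The frame and desired sample size are hypothetical.

```python
import random

# Hypothetical sorted sampling frame of report identifiers.
frame = [f"report_{i:04d}" for i in range(1, 1001)]

desired_n = 100
k = len(frame) // desired_n      # sampling interval (every 10th report)

start = random.randrange(k)      # random starting point within the first interval
systematic_sample = frame[start::k]

print(len(systematic_sample), systematic_sample[:3])
```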

What is the unit of analysis?

Methodological studies with a research report as the unit of analysis

Many methodological studies use a research report (e.g. full manuscript of study, abstract portion of the study) as the unit of analysis, and inferences can be made at the study-level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries etc.

Methodological studies with a design, analysis or reporting item as the unit of analysis

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].

This framework is outlined in Fig.  2 .

Figure 2. A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Availability of data and materials

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials

EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe

GRADE: Grading of Recommendations, Assessment, Development and Evaluations

PICOT: Participants, Intervention, Comparison, Outcome, Timeframe

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

SWAR: Studies Within a Review

SWAT: Studies Within a Trial

Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.

Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, Krumholz HM, Ghersi D, van der Worp HB. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.

Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.

Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.

Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62(10):1013–20.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. Bmj. 2017;358:j4008.

Lawson DO, Leenus A, Mbuagbaw L. Mapping the nomenclature, methodology, and reporting of studies that review methods: a pilot methodological review. Pilot Feasibility Studies. 2020;6(1):13.

Puljak L, Makaric ZL, Buljan I, Pieper D. What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. J Comp Eff Res. 2020.

Abbade LPF, Wang M, Sriganesh K, Jin Y, Mbuagbaw L, Thabane L. The framing of research questions using the PICOT format in randomized controlled trials of venous ulcer disease is suboptimal: a systematic survey. Wound Repair Regen. 2017;25(5):892–900.

Gohari F, Baradaran HR, Tabatabaee M, Anijidani S, Mohammadpour Touserkani F, Atlasi R, Razmgir M. Quality of reporting randomized controlled trials (RCTs) in diabetes in Iran; a systematic review. J Diabetes Metab Disord. 2015;15(1):36.

Wang M, Jin Y, Hu ZJ, Thabane A, Dennis B, Gajic-Veljanoski O, Paul J, Thabane L. The reporting quality of abstracts of stepped wedge randomized trials is suboptimal: a systematic survey of the literature. Contemp Clin Trials Commun. 2017;8:1–10.

Shanthanna H, Kaushal A, Mbuagbaw L, Couban R, Busse J, Thabane L. A cross-sectional study of the reporting quality of pilot or feasibility trials in high-impact anesthesia journals. Can J Anaesth. 2018;65(11):1180–95.

Kosa SD, Mbuagbaw L, Borg Debono V, Bhandari M, Dennis BB, Ene G, Leenus A, Shi D, Thabane M, Valvasori S, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemporary Clinical Trials. 2018;65:144–50.

Zhang Y, Florez ID, Colunga Lozano LE, Aloweni FAB, Kennedy SA, Li A, Craigie S, Zhang S, Agarwal A, Lopes LC, et al. A systematic survey on reporting and methods for handling missing participant data for continuous outcomes in randomized controlled trials. J Clin Epidemiol. 2017;88:57–66.

Hernández AV, Boersma E, Murray GD, Habbema JD, Steyerberg EW. Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading? Am Heart J. 2006;151(2):257–64.

Samaan Z, Mbuagbaw L, Kosa D, Borg Debono V, Dillenburg R, Zhang S, Fruci V, Dennis B, Bawor M, Thabane L. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc. 2013;6:169–88.

Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.

Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary-of-findings tables with a new format. J Clin Epidemiol. 2016;74:7–18.

The Northern Ireland Hub for Trials Methodology Research: SWAT/SWAR Information [ https://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/SWATSWARInformation/ ]. Accessed 31 Aug 2020.

Chick S, Sánchez P, Ferrin D, Morrice D. How to conduct a successful simulation study. In: Proceedings of the 2003 winter simulation conference: 2003; 2003. p. 66–70.

Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485–8.

Sacks HS, Reitman D, Pagano D, Kupelnick B. Meta-analysis: an update. Mount Sinai J Med New York. 1996;63(3–4):216–24.

Areia M, Soares M, Dinis-Ribeiro M. Quality reporting of endoscopic diagnostic studies in gastrointestinal journals: where do we stand on the use of the STARD and CONSORT statements? Endoscopy. 2010;42(2):138–47.

Knol M, Groenwold R, Grobbee D. P-values in baseline tables of randomised controlled trials are inappropriate but still common in high impact journals. Eur J Prev Cardiol. 2012;19(2):231–2.

Chen M, Cui J, Zhang AL, Sze DM, Xue CC, May BH. Adherence to CONSORT items in randomized controlled trials of integrative medicine for colorectal Cancer published in Chinese journals. J Altern Complement Med. 2018;24(2):115–24.

Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors' implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ. 2012;344:e4178.

The Cochrane Methodology Register Issue 2 2009 [ https://cmr.cochrane.org/help.htm ]. Accessed 31 Aug 2020.

Mbuagbaw L, Kredo T, Welch V, Mursleen S, Ross S, Zani B, Motaze NV, Quinlan L. Critical EPICOT items were absent in Cochrane human immunodeficiency virus systematic reviews: a bibliometric analysis. J Clin Epidemiol. 2016;74:66–72.

Barton S, Peckitt C, Sclafani F, Cunningham D, Chau I. The influence of industry sponsorship on the reporting of subgroup analyses within phase III randomised controlled trials in gastrointestinal oncology. Eur J Cancer. 2015;51(18):2732–9.

Setia MS. Methodology series module 5: sampling strategies. Indian J Dermatol. 2016;61(5):505–9.

Wilson B, Burnett P, Moher D, Altman DG, Al-Shahi Salman R. Completeness of reporting of randomised controlled trials including people with transient ischaemic attack or stroke: a systematic review. Eur Stroke J. 2018;3(4):337–46.

Kahale LA, Diab B, Brignardello-Petersen R, Agarwal A, Mustafa RA, Kwong J, Neumann I, Li L, Lopes LC, Briel M, et al. Systematic reviews do not adequately report or address missing outcome data in their analyses: a methodological survey. J Clin Epidemiol. 2018;99:14–23.

De Angelis CD, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke AJPM, et al. Is this clinical trial fully registered?: a statement from the International Committee of Medical Journal Editors*. Ann Intern Med. 2005;143(2):146–8.

Ohtake PJ, Childs JD. Why publish study protocols? Phys Ther. 2014;94(9):1208–9.

Rombey T, Allers K, Mathes T, Hoffmann F, Pieper D. A descriptive analysis of the characteristics and the peer review process of systematic review protocols published in an open peer review journal from 2012 to 2017. BMC Med Res Methodol. 2019;19(1):57.

Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet. 2002;359(9302):248–52.

Porta M, editor. A dictionary of epidemiology. 5th ed. Oxford: Oxford University Press; 2008.

El Dib R, Tikkinen KAO, Akl EA, Gomaa HA, Mustafa RA, Agarwal A, Carpenter CR, Zhang Y, Jorge EC, Almeida R, et al. Systematic survey of randomized trials evaluating the impact of alternative diagnostic strategies on patient-important outcomes. J Clin Epidemiol. 2017;84:61–9.

Helzer JE, Robins LN, Taibleson M, Woodruff RA Jr, Reich T, Wish ED. Reliability of psychiatric diagnosis. I. a methodological review. Arch Gen Psychiatry. 1977;34(2):129–33.

Chung ST, Chacko SK, Sunehag AL, Haymond MW. Measurements of gluconeogenesis and Glycogenolysis: a methodological review. Diabetes. 2015;64(12):3996–4010.

Sterne JA, Juni P, Schulz KF, Altman DG, Bartlett C, Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Stat Med. 2002;21(11):1513–24.

Moen EL, Fricano-Kugler CJ, Luikart BW, O’Malley AJ. Analyzing clustered data: why and how to account for multiple observations nested within a study participant? PLoS One. 2016;11(1):e0146721.

Zyzanski SJ, Flocke SA, Dickinson LM. On the nature and analysis of clustered data. Ann Fam Med. 2004;2(3):199–200.

Mathes T, Klassen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol. 2017;17(1):152.

Bui DDA, Del Fiol G, Hurdle JF, Jonnalagadda S. Extractive text summarization system to aid data extraction from full text in systematic review development. J Biomed Inform. 2016;64:265–72.

Bui DD, Del Fiol G, Jonnalagadda S. PDF text classification to leverage information extraction from publication reports. J Biomed Inform. 2016;61:141–8.

Maticic K, Krnic Martinic M, Puljak L. Assessment of reporting quality of abstracts of systematic reviews with meta-analysis using PRISMA-A and discordance in assessments between raters without prior experience. BMC Med Res Methodol. 2019;19(1):32.

Speich B. Blinding in surgical randomized clinical trials in 2015. Ann Surg. 2017;266(1):21–2.

Abraha I, Cozzolino F, Orso M, Marchesi M, Germani A, Lombardo G, Eusebi P, De Florio R, Luchetta ML, Iorio A, et al. A systematic review found that deviations from intention-to-treat are common in randomized trials and systematic reviews. J Clin Epidemiol. 2017;84:37–46.

Zhong Y, Zhou W, Jiang H, Fan T, Diao X, Yang H, Min J, Wang G, Fu J, Mao B. Quality of reporting of two-group parallel randomized controlled clinical trials of multi-herb formulae: A survey of reports indexed in the Science Citation Index Expanded. Eur J Integrative Med. 2011;3(4):e309–16.

Farrokhyar F, Chu R, Whitlock R, Thabane L. A systematic review of the quality of publications reporting coronary artery bypass grafting trials. Can J Surg. 2007;50(4):266–77.

Oltean H, Gagnier JJ. Use of clustering analysis in randomized controlled trials in orthopaedic surgery. BMC Med Res Methodol. 2015;15:17.

Fleming PS, Koletsi D, Pandis N. Blinded by PRISMA: are systematic reviewers focusing on PRISMA and ignoring other guidelines? PLoS One. 2014;9(5):e96407.

Balasubramanian SP, Wiener M, Alshameeri Z, Tiruvoipati R, Elbourne D, Reed MW. Standards of reporting of randomized controlled trials in general surgery: can we do better? Ann Surg. 2006;244(5):663–7.

de Vries TW, van Roon EN. Low quality of reporting adverse drug reactions in paediatric randomised controlled trials. Arch Dis Child. 2010;95(12):1023–6.

Borg Debono V, Zhang S, Ye C, Paul J, Arya A, Hurlburt L, Murthy Y, Thabane L. The quality of reporting of RCTs used within a postoperative pain management meta-analysis, using the CONSORT statement. BMC Anesthesiol. 2012;12:13.

Kaiser KA, Cofield SS, Fontaine KR, Glasser SP, Thabane L, Chu R, Ambrale S, Dwary AD, Kumar A, Nayyar G, et al. Is funding source related to study reporting quality in obesity or nutrition randomized control trials in top-tier medical journals? Int J Obes. 2012;36(7):977–81.

Thomas O, Thabane L, Douketis J, Chu R, Westfall AO, Allison DB. Industry funding and the reporting quality of large long-term weight loss trials. Int J Obes. 2008;32(10):1531–6.

Khan NR, Saad H, Oravec CS, Rossi N, Nguyen V, Venable GT, Lillard JC, Patel P, Taylor DR, Vaughn BN, et al. A review of industry funding in randomized controlled trials published in the neurosurgical literature-the elephant in the room. Neurosurgery. 2018;83(5):890–7.

Hansen C, Lundh A, Rasmussen K, Hrobjartsson A. Financial conflicts of interest in systematic reviews: associations with results, conclusions, and methodological quality. Cochrane Database Syst Rev. 2019;8:Mr000047.

Kiehna EN, Starke RM, Pouratian N, Dumont AS. Standards for reporting randomized controlled trials in neurosurgery. J Neurosurg. 2011;114(2):280–5.

Liu LQ, Morris PJ, Pengel LH. Compliance to the CONSORT statement of randomized controlled trials in solid organ transplantation: a 3-year overview. Transpl Int. 2013;26(3):300–6.

Bala MM, Akl EA, Sun X, Bassler D, Mertz D, Mejza F, Vandvik PO, Malaga G, Johnston BC, Dahm P, et al. Randomized trials published in higher vs. lower impact journals differ in design, conduct, and analysis. J Clin Epidemiol. 2013;66(3):286–95.

Lee SY, Teoh PJ, Camm CF, Agha RA. Compliance of randomized controlled trials in trauma surgery with the CONSORT statement. J Trauma Acute Care Surg. 2013;75(4):562–72.

Ziogas DC, Zintzaras E. Analysis of the quality of reporting of randomized controlled trials in acute and chronic myeloid leukemia, and myelodysplastic syndromes as governed by the CONSORT statement. Ann Epidemiol. 2009;19(7):494–500.

Alvarez F, Meyer N, Gourraud PA, Paul C. CONSORT adoption and quality of reporting of randomized controlled trials: a systematic analysis in two dermatology journals. Br J Dermatol. 2009;161(5):1159–65.

Mbuagbaw L, Thabane M, Vanniyasingam T, Borg Debono V, Kosa S, Zhang S, Ye C, Parpia S, Dennis BB, Thabane L. Improvement in the quality of abstracts in major clinical journals since CONSORT extension for abstracts: a systematic review. Contemporary Clin trials. 2014;38(2):245–50.

Thabane L, Chu R, Cuddy K, Douketis J. What is the quality of reporting in weight loss intervention studies? A systematic review of randomized controlled trials. Int J Obes. 2007;31(10):1554–9.

Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evidence Based Med. 2017;22(4):139.

METRIC - MEthodological sTudy ReportIng Checklist: guidelines for reporting methodological studies in health research [ http://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-other-study-designs/#METRIC ]. Accessed 31 Aug 2020.

Jager KJ, Zoccali C, MacLeod A, Dekker FW. Confounding: what it is and how to deal with it. Kidney Int. 2008;73(3):256–60.

Parker SG, Halligan S, Erotocritou M, Wood CPJ, Boulton RW, Plumb AAO, Windsor ACJ, Mallett S. A systematic methodological review of non-randomised interventional studies of elective ventral hernia repair: clear definitions and a standardised minimum dataset are needed. Hernia. 2019.

Bouwmeester W, Zuithoff NPA, Mallett S, Geerlings MI, Vergouwe Y, Steyerberg EW, Altman DG, Moons KGM. Reporting and methods in clinical prediction research: a systematic review. PLoS Med. 2012;9(5):1–12.

Schiller P, Burchardi N, Niestroj M, Kieser M. Quality of reporting of clinical non-inferiority and equivalence randomised trials--update and extension. Trials. 2012;13:214.

Riado Minguez D, Kowalski M, Vallve Odena M, Longin Pontzen D, Jelicic Kadic A, Jeric M, Dosenovic S, Jakus D, Vrdoljak M, Poklepovic Pericic T, et al. Methodological and reporting quality of systematic reviews published in the highest ranking journals in the field of pain. Anesth Analg. 2017;125(4):1348–54.

Thabut G, Estellat C, Boutron I, Samama CM, Ravaud P. Methodological issues in trials assessing primary prophylaxis of venous thrombo-embolism. Eur Heart J. 2005;27(2):227–36.

Puljak L, Riva N, Parmelli E, González-Lorenzo M, Moja L, Pieper D. Data extraction methods: an analysis of internal reporting discrepancies in single manuscripts and practical advice. J Clin Epidemiol. 2020;117:158–64.

Ritchie A, Seubert L, Clifford R, Perry D, Bond C. Do randomised controlled trials relevant to pharmacy meet best practice standards for quality conduct and reporting? A systematic review. Int J Pharm Pract. 2019.

Babic A, Vuka I, Saric F, Proloscic I, Slapnicar E, Cavar J, Pericic TP, Pieper D, Puljak L. Overall bias methods and their use in sensitivity analysis of Cochrane reviews were not consistent. J Clin Epidemiol. 2019.

Tan A, Porcher R, Crequit P, Ravaud P, Dechartres A. Differences in treatment effect size between overall survival and progression-free survival in immunotherapy trials: a Meta-epidemiologic study of trials with results posted at ClinicalTrials.gov. J Clin Oncol. 2017;35(15):1686–94.

Croitoru D, Huang Y, Kurdina A, Chan AW, Drucker AM. Quality of reporting in systematic reviews published in dermatology journals. Br J Dermatol. 2020;182(6):1469–76.

Khan MS, Ochani RK, Shaikh A, Vaduganathan M, Khan SU, Fatima K, Yamani N, Mandrola J, Doukky R, Krasuski RA. Assessing the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals. Eur Heart J Qual Care Clin Outcomes. 2019.

Rosmarakis ES, Soteriades ES, Vergidis PI, Kasiakou SK, Falagas ME. From conference abstract to full paper: differences between data presented in conferences and journals. FASEB J. 2005;19(7):673–80.

Mueller M, D’Addario M, Egger M, Cevallos M, Dekkers O, Mugglin C, Scott P. Methods to systematically review and meta-analyse observational studies: a systematic scoping review of recommendations. BMC Med Res Methodol. 2018;18(1):44.

Li G, Abbade LPF, Nwosu I, Jin Y, Leenus A, Maaz M, Wang M, Bhatt M, Zielinski L, Sanger N, et al. A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Med Res Methodol. 2017;17(1):181.

Krnic Martinic M, Pieper D, Glatt A, Puljak L. Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks. BMC Med Res Methodol. 2019;19(1):203.

Analytical study [ https://medical-dictionary.thefreedictionary.com/analytical+study ]. Accessed 31 Aug 2020.

Tricco AC, Tetzlaff J, Pham B, Brehaut J, Moher D. Non-Cochrane vs. Cochrane reviews were twice as likely to have positive conclusion statements: cross-sectional study. J Clin Epidemiol. 2009;62(4):380–6 e381.

Schalken N, Rietbergen C. The reporting quality of systematic reviews and Meta-analyses in industrial and organizational psychology: a systematic review. Front Psychol. 2017;8:1395.

Ranker LR, Petersen JM, Fox MP. Awareness of and potential for dependent error in the observational epidemiologic literature: A review. Ann Epidemiol. 2019;36:15–9 e12.

Paquette M, Alotaibi AM, Nieuwlaat R, Santesso N, Mbuagbaw L. A meta-epidemiological study of subgroup analyses in cochrane systematic reviews of atrial fibrillation. Syst Rev. 2019;8(1):241.

Acknowledgements

This work did not receive any dedicated funding.

Author information

Authors and Affiliations

Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada

Lawrence Mbuagbaw, Daeria O. Lawson & Lehana Thabane

Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario, L8N 4A6, Canada

Lawrence Mbuagbaw & Lehana Thabane

Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Lawrence Mbuagbaw

Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000, Zagreb, Croatia

Livia Puljak

Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN, 47405, USA

David B. Allison

Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON, Canada

Lehana Thabane

Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON, Canada

Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON, Canada

Contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

Corresponding author

Correspondence to Lawrence Mbuagbaw.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Mbuagbaw, L., Lawson, D.O., Puljak, L. et al. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol 20 , 226 (2020). https://doi.org/10.1186/s12874-020-01107-7

Received: 27 May 2020

Accepted: 27 August 2020

Published: 07 September 2020

DOI: https://doi.org/10.1186/s12874-020-01107-7

  • Methodological study
  • Meta-epidemiology
  • Research methods
  • Research-on-research

BMC Medical Research Methodology

ISSN: 1471-2288


Indian J Anaesth, Vol. 60, Issue 9 (September 2016)

Methodology for research I

Rakesh Garg

Department of Onco-anaesthesiology and Palliative Medicine, Dr. BRAIRCH, All India Institute of Medical Sciences, New Delhi, India

The conduct of research requires a systematic approach involving diligent planning and execution as planned. It comprises essential predefined components such as the aims, population, conduct/technique, outcome and statistical considerations, all of which need to be objective, reliable and repeatable. Hence, an understanding of the basic aspects of methodology is essential for any researcher. This is a narrative review focusing on various aspects of the methodology for the conduct of clinical research. Relevant keywords were used to search the literature in various databases and in the bibliographies of retrieved articles.

INTRODUCTION

Research is a process for acquiring new knowledge through a systematic approach involving diligent planning and interventions for the discovery or interpretation of newly gained information.[ 1 , 2 ] The reliability and validity of a study's outcome depend on a well-designed study with an objective, reliable and repeatable methodology, appropriate conduct, careful data collection and analysis, and logical interpretation. Inappropriate or faulty methodology makes a study unacceptable and may even give clinicians misleading information. Hence, an understanding of the basic aspects of methodology is essential.

This is a narrative review based on a search of the existing literature. It focuses on specific aspects of the methodology for the conduct of a research study or clinical trial. The relevant keywords for the literature search included ‘research’, ‘study design’, ‘study controls’, ‘study population’, ‘inclusion/exclusion criteria’, ‘variables’, ‘sampling’, ‘randomisation’, ‘blinding’, ‘masking’, ‘allocation concealment’, ‘sample size’, ‘bias’ and ‘confounders’, alone and in combination. The search engines included PubMed/MEDLINE, Google Scholar and Cochrane. The bibliographies of retrieved articles were also searched for manuscripts missed by the search engines, and print journals in the library were searched manually.

The following text describes the basic essentials of methodology that need to be adopted for conducting good research.

Aims and objectives of study

The aims and objectives of the research need to be understood thoroughly and should be specified before the start of the study, based on a thorough literature search and inputs from professional experience. The aims and objectives state whether the nature of the problem (formulated as a research question or research problem) is to be investigated or whether its solution is to be found by a different, more appropriate method. Lacunae in the existing knowledge help formulate the research question. These statements have to be objective and specific, with all required details such as the population, intervention, control and outcome variables, along with the timing of interventions.[ 3 , 4 , 5 ] This helps formulate a hypothesis, which is a scientifically derived statement about a particular problem in the defined population. Hypothesis generation also depends on the type of study, and an observation by the researcher related to any aspect of practice may initiate it: a cross-sectional survey generates a hypothesis, an observational study establishes associations and supports or rejects the hypothesis, and an experiment finally tests the hypothesis.[ 5 , 6 , 7 ]

STUDY POPULATION AND PATIENT SELECTION, STUDY AREA, STUDY PERIOD

The flow of a study with an experimental design has various sequential steps [ Figure 1 ].[ 1 , 2 , 6 ] Population refers to an aggregate of individuals, things, cases, etc., i.e., the observation units that are of interest and remain the focus of the investigation. This reference or target population is the group to which the study outcome will be extrapolated.[ 6 ] Once the target population is identified, the researcher needs to assess whether it is possible to study all of its individuals for an outcome. Usually, all cannot be included, so a study population is sampled. The important attribute of a sample is that every individual should have an equal and non-zero chance of being included in the study. Sampling should also be independent, i.e., the selection of one individual should not influence the inclusion or exclusion of another. In clinical practice, sampling is usually restricted to a particular place (patients attending clinics or posted for surgery) or includes multiple centres, rather than sampling the whole universe; hence, the researcher should be cautious in generalising the outcomes. For example, in a tertiary care hospital, patients are referred and may have more risk factors than in primary centres, where patients with lesser severity of illness are managed. Researchers must therefore disclose details of the study area. The study period also needs to be disclosed, as it helps readers understand the population characteristics and judge the relevance of the study to the present time.

Figure 1. Flow of an experimental study

The size of the sample has to be pre-determined, analytically derived and sufficiently large to represent the population.[ 7 , 8 , 9 ] Including a larger sample than needed wastes resources, is time-consuming and risks missing the true treatment effect because of the heterogeneity of a large population.[ 6 ] If a study is too small, it will not provide a suitable answer to the research question. The main determinants of the sample size include the clinical hypothesis, primary end-point, study design, probabilities of Type I and Type II errors, power, and the minimum treatment difference of clinical importance.[ 7 ] Attrition of patients should be accounted for during the sample size calculation.[ 6 , 9 ]
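
As a rough illustration of such a calculation, the sketch below uses the statsmodels power module; the standardised effect size, α, power and attrition rate are illustrative assumptions and do not come from this article.

```python
# A minimal sketch of a two-arm sample size calculation for a continuous
# primary end-point. All input values are illustrative assumptions.
from math import ceil
from statsmodels.stats.power import TTestIndPower

effect_size = 0.5   # assumed standardised minimum clinically important difference
alpha = 0.05        # probability of a Type I error
power = 0.80        # 1 - probability of a Type II error
attrition = 0.10    # assumed proportion of participants lost to follow-up

n_per_arm = TTestIndPower().solve_power(effect_size=effect_size,
                                        alpha=alpha, power=power)
# Inflate for expected attrition, as the text advises.
n_per_arm_adjusted = ceil(n_per_arm / (1 - attrition))
print(f"Per arm: {ceil(n_per_arm)} (about {n_per_arm_adjusted} after allowing for attrition)")
```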

SELECTION OF STUDY DESIGN

An appropriate study design is essential for obtaining the best possible and most reliable estimate of the intervention's effect. Selection of the study design is based on parameters such as the objectives, therapeutic area, treatment comparison, outcome and phase of the trial.[ 6 ] Study designs may be broadly classified as:[ 5 , 6 , 7 ]

  • Descriptive: Case report, case series, survey
  • Analytical: Case-control, cohort, cross-sectional
  • Experimental: Randomised controlled trial (RCT), quasi-experiment
  • Qualitative.

For studying causality, analytical observational studies are prudent to avoid posing risk to subjects; for clinical drugs or techniques, an experimental study is more appropriate.[ 6 ] In an RCT, the treatments are concurrent, i.e., the active and control interventions happen over the same period. It may be a parallel-group design, wherein the treatment and control groups are allocated to different individuals; this requires comparing a placebo group or a gold-standard intervention (control) with the newer agent or technique.[ 6 ] In a matched-design RCT, randomisation is between matched pairs. In a cross-over design, two or more treatments are administered sequentially to the same subject, so each subject acts as their own control; however, researchers should be aware of the ‘carryover effect’ of the previous intervention, and a suitable washout period needs to be ensured. In a cohort design, subjects with a disease/symptom, or free of the study variable, are followed for a particular period. Cross-sectional studies examine the prevalence of disease and are used for surveys and for validating instruments, tools and questionnaires. Qualitative research is a design wherein a health-related issue in the population is explored with regard to its description, exploration and explanation.[ 6 ]

Selection of controls

A control is required because the disease may be self-remitting and because of the Hawthorne effect (change in the response or behaviour of subjects when included in a study), the placebo effect (patients feel improvement even with a placebo), the effects of confounders and co-interventions, and the regression-to-the-mean phenomenon (for example, white-coat hypertension: patients may have a higher value of the study parameter at recruitment but subsequently return to normal).[ 2 , 6 , 7 ] The control could be a placebo, no treatment, a different dose, regimen or intervention, or the standard/gold treatment. Withholding routine care in favour of a placebo is undesirable and unethical. For instance, when studying an analgesic regimen, it would be unethical not to administer analgesics in the control group; it is advisable to continue the standard of care, i.e., provide routine analgesics, even in the control group. The use of a placebo or no treatment may be considered where no current proven intervention exists or where a placebo is required to evaluate the efficacy or safety of an intervention without serious or irreversible harm.

The comparisons to be made among the study groups also need to be specified.[ 6 , 7 , 9 ] These comparisons may demonstrate superiority, non-inferiority or equivalence. Superiority trials demonstrate superiority either to a placebo in a placebo-controlled trial or to an active control treatment. Non-inferiority trials aim to show that the efficacy of an intervention is no worse than that of the active comparator. Equivalence trials demonstrate that the outcomes of two or more interventions differ by a clinically unimportant margin, so that either technique or drug is clinically acceptable.
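
The sketch below illustrates, with made-up numbers, how a non-inferiority comparison of two response proportions can be judged against a pre-specified margin; the margin, counts and the simple Wald interval are assumptions for illustration only.

```python
# A minimal sketch of a non-inferiority check: the new treatment is declared
# non-inferior if the lower bound of the 95% CI for (new - control) lies
# above the pre-specified margin -delta. All numbers are hypothetical.
import math

def noninferior(success_new, n_new, success_ctrl, n_ctrl, margin=0.10, z=1.96):
    p_new, p_ctrl = success_new / n_new, success_ctrl / n_ctrl
    diff = p_new - p_ctrl
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_ctrl * (1 - p_ctrl) / n_ctrl)
    lower = diff - z * se          # lower 95% confidence limit (Wald)
    return diff, lower, lower > -margin

# Hypothetical data: 82/100 responders on the new regimen vs 85/100 on control.
print(noninferior(82, 100, 85, 100, margin=0.10))
```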

STUDY TOOLS

Study tools such as measurement scales, questionnaires and scoring systems need to be specified with objective definitions. These tools should be validated before use, and their appropriate use by the research staff is mandatory to avoid bias. They should be simple and easily understandable to everyone involved in the study.

Inclusion/exclusion criteria

In clinical research, a specific group comprising a relatively homogeneous patient population needs to be selected.[ 6 ] Inclusion and exclusion criteria define who can and cannot be included in the study sample. The inclusion criteria identify the study population in a consistent, reliable, uniform and objective manner. The exclusion criteria comprise factors or characteristics that make the recruited population ineligible for the study; these factors may be confounders for the outcome parameter. For example, patients with liver disease would be excluded if coagulation parameters could impact the outcome. Exclusion criteria are applied to individuals who already satisfy the inclusion criteria.

VARIABLES: PRIMARY AND SECONDARY

Variables are the defined characteristics/parameters that are being studied. A clear, precise and objective definition of how each characteristic is measured needs to be provided.[ 2 ] Variables should be measurable and interpretable, sensitive to the objective of the study and clinically relevant. The most common end-points relate to efficacy, safety and quality of life. Study variables may be primary or secondary.[ 6 ] The primary end-point, usually one, provides the most relevant, reliable and convincing evidence related to the aim and objective. It is the characteristic on the basis of which the research question/hypothesis has been formulated, it reflects clinically relevant and important treatment benefits, and it determines the sample size. Secondary end-points are other objectives indirectly related to the primary objective because of their close association with it, or they may be associated effects/adverse effects of the intervention. The timing of measurement of the variables must be defined a priori; measurements are usually made at screening, baseline and completion of the trial.

The study end-point may be clinical or surrogate in nature. A clinical end-point relates directly to a clinically beneficial outcome of the intervention. A surrogate end-point is indirectly related to patient benefit and is usually a laboratory measurement or physical sign used as a substitute for a clinically meaningful end-point. Surrogate end-points are more convenient, easily measurable, repeatable and faster to obtain.

SAMPLING TECHNIQUES: RANDOMISATION, BLINDING/MASKING AND ALLOCATION CONCEALMENT

Randomisation

Randomisation, or random allocation, is a method of allocating individuals to one of the groups (arms) of a study.[ 1 , 2 ] It is a basic assumption required for the statistical analysis of the data. Randomisation maximises statistical power, especially in subgroup analyses, and minimises selection bias and allocation bias (or confounding). It distributes all characteristics, measured or unmeasured, visible or invisible, known or unknown, equally between the groups. Randomisation uses various strategies according to the study design and outcome; a brief sketch of simple and stratified allocation is given after the list below.

Probability sampling/randomisation

  • Simple/unrestricted: Each individual of the population has the same chance of being included in the sample. This is used when the population is small and homogeneous and a sampling frame is available, for example, the lottery method, a table of random numbers or computer-generated numbers
  • Stratified: This is used in a non-homogeneous population. The population is divided into homogeneous groups (strata), and a sample is drawn from each stratum at random. It keeps the ‘characteristics’ of the participants (for example, age, weight or physical status) as similar as possible across the study groups. Allocation to the strata can be equal or proportional
  • Systematic: This is used when a complete and up-to-date sampling frame is available. The first unit is selected at random and the rest are selected automatically according to a pre-designed pattern
  • Cluster: This applies to large geographical areas. The population is divided into a finite number of distinct and identifiable units (sampling units/elements). A group of such elements is a cluster, and sampling is done on these clusters. All units of the selected clusters are included in the study
  • Multistage: This applies to large nationwide surveys. Sampling is done in stages using random sampling, with sub-sampling within the selected clusters. If the procedure is repeated over several stages, it is termed multistage sampling
  • Multiphase: Here, some data are collected from all units of a sample, and other data are collected from a sub-sample of the units constituting the original sample (two-phase sampling). If three or more phases are used, it is termed multiphase sampling.
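
As mentioned above, the following sketch illustrates simple and stratified allocation using Python's standard library; the participant identifiers and strata are hypothetical.

```python
# A minimal sketch of simple and stratified allocation to two arms.
# Participant IDs and strata are made-up examples.
import random

random.seed(2024)  # fixed seed so the allocation list is reproducible

participants = [f"P{i:03d}" for i in range(1, 13)]

# Simple (unrestricted) randomisation: every participant has the same chance
# of ending up in either arm, independently of the others.
simple = {pid: random.choice(["treatment", "control"]) for pid in participants}

# Stratified randomisation: allocate separately within each stratum
# (here, an assumed ASA physical status) so the arms stay balanced per stratum.
strata = {"ASA I-II": participants[:6], "ASA III": participants[6:]}
stratified = {}
for stratum, members in strata.items():
    shuffled = random.sample(members, len(members))
    half = len(shuffled) // 2
    for pid in shuffled[:half]:
        stratified[pid] = ("treatment", stratum)
    for pid in shuffled[half:]:
        stratified[pid] = ("control", stratum)

print(simple)
print(stratified)
```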

Non-probability sampling/randomisation

This technique does not give every individual in the population an equal and non-zero chance of being selected in the sample.

  • Convenience: Sampling is done as per the convenience of the investigator, i.e., whoever is easily available
  • Purposive/judgemental/selective/subjective: The sample is selected as per the judgement of the investigator
  • Quota: Sampling is done as per the judgement of the interviewer, based on specified characteristics such as sex and physical status.

ALLOCATION CONCEALMENT

Allocation concealment refers to concealing the upcoming group assignment from those recruiting and enrolling participants until the moment of allocation.[ 8 , 9 , 10 ] It is a strategy to avoid ascertainment or selection bias. For example, with foreknowledge of the allocation, a researcher might preferentially recruit less sick patients to one group and sicker patients to the other. Such selective recruitment would underestimate (if the treatment group is sicker) or overestimate (if the control group is sicker) the intervention effect.[ 9 ] The allocation should remain concealed from the investigator until the initiation of the intervention. Hence, randomisation should be performed by an independent person who is not involved in the conduct of the study or its monitoring, and the randomisation list is kept secret. The methods of allocation concealment, illustrated by the sketch after the list below, include:[ 9 , 10 ]

  • Central randomisation: A central, independent authority performs the randomisation and informs the investigators via telephone, e-mail or fax
  • Pharmacy controlled: The pharmacy provides coded drugs for use
  • Sequentially numbered containers: Identical containers, equal in weight, similar in appearance and tamper-proof, are used
  • Sequentially numbered, opaque, sealed envelopes: The randomised assignments are concealed in opaque envelopes to be opened just before the intervention; this is the most common and easiest method to perform.
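
The sketch below illustrates how an independent person might prepare a concealed, sequentially numbered allocation list using permuted blocks; the block size and number of participants are illustrative assumptions.

```python
# A minimal sketch of preparing a concealed allocation list for sequentially
# numbered, opaque, sealed envelopes: an independent person generates permuted
# blocks and only the envelope number is visible to the recruiting investigator.
import random

random.seed(7)  # the independent statistician keeps this list secret

def permuted_block_list(n_participants, block_size=4):
    assignments = []
    while len(assignments) < n_participants:
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        random.shuffle(block)
        assignments.extend(block)
    return assignments[:n_participants]

allocation = permuted_block_list(12)
for envelope_number, arm in enumerate(allocation, start=1):
    # In practice, each assignment would be sealed inside the numbered envelope.
    print(f"Envelope {envelope_number:02d}: group {arm}")
```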

BLINDING/MASKING

Blinding ensures that the group to which the study subjects are assigned is not known, or easily ascertained, by those who are ‘masked’, i.e., participants, investigators, evaluators or the statistician, in order to limit the occurrence of bias.[ 1 , 2 ] It requires that the intervention and the standard or placebo treatment appear the same. Blinding is different from allocation concealment: allocation concealment is done before, whereas blinding is done at and after the initiation of treatment. In situations such as study drugs with different formulations or medical versus surgical interventions, blinding may not be feasible,[ 8 ] and sham blocks or needling of subjects may not be ethical. In such situations, the outcome measurement should be made as objective as possible to avoid bias, and whoever can be masked should be blinded. The research manuscript must mention the details of blinding, including who was blinded after assignment to the interventions and the process or technique used. Blinding could be:[ 8 , 9 ]

  • Unblinded: No attempt is made to mask the allocated intervention after randomisation
  • Single blind: One of the participants, investigators or evaluators remains masked
  • Double blind: Both the investigator and the participants remain masked
  • Triple blind: The participants, the investigators and those analysing the data are all masked.

BIAS AND CONFOUNDERS

Bias is a systematic deviation from the real, true effect (towards a better or worse outcome) resulting from faulty study design.[ 1 , 2 ] Steps such as randomisation, allocation concealment, blinding, objective measurement and strict protocol adherence reduce bias.

The various possible and potential biases in a trial can be:[ 7 ]

  • Investigator bias: The investigator, consciously or subconsciously, favours one group over the other
  • Evaluator bias: The investigator measuring the end-point variable intentionally or unintentionally favours one group over the other; this is more common with subjective or quality-of-life end-points
  • Performance bias: This occurs when the participant knows of the exposure to the intervention, whether active or inactive, or of its expected response
  • Selection bias: This occurs due to the sampling method, such as admission bias (selective factors for admission), non-response bias (refusals to participate, where those who refused may differ from those who participated) or a sample that is not representative of the population
  • Ascertainment or information bias: This occurs due to measurement error or misclassification of patients, for example, diagnostic bias (more diagnostic procedures performed in cases than in controls) or recall bias (errors of categorisation, or the investigator searching more aggressively for exposure variables in cases)
  • Allocation bias: This occurs when the groups differ systematically because of the way participants were allocated, so that the measured treatment effect differs from the true treatment effect
  • Detection bias: This occurs when observations in one group are not sought as vigilantly as in the other
  • Attrition bias/loss-to-follow-up bias: This occurs when patients are lost to follow-up preferentially in a particular group.

Confounding occurs when the outcome parameters are affected by other factors not directly relevant to the research question.[ 1 , 7 ] For example, if the impact of a drug on haemodynamics is studied in hypertensive patients, diabetes mellitus would be a confounder because it also affects the haemodynamic response through autonomic disturbances. Hence, it is prudent at the design stage of a study to consider all potential confounders carefully. If the confounders are known, they can be adjusted for statistically, but with some loss of precision (statistical power). Confounding can therefore be controlled either by preventing it or by adjusting for it in the statistical analysis. It can be controlled by restriction in the study design (for example, a restricted age range of 2-6 years), matching (constraints in the selection of the comparison group so that the study and comparison groups have a similar distribution of the potential confounder), stratification in the analysis without matching (restricting the analysis to narrow ranges of the extraneous variable) and mathematical modelling in the analysis (advanced statistical methods such as multiple linear regression and logistic regression). Strategies during data analysis include stratified analysis using the Mantel-Haenszel method to adjust for confounders, a matched-design approach, data restriction and model fitting using regression techniques.
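
As a worked illustration of stratified adjustment, the sketch below computes a Mantel-Haenszel pooled odds ratio across two hypothetical strata; the stratum names and counts are invented for the example and are not from this article.

```python
# A minimal sketch of adjusting for a single confounder with the
# Mantel-Haenszel pooled odds ratio. Each stratum is a 2x2 table given as
# (exposed cases a, exposed non-cases b, unexposed cases c, unexposed non-cases d).
strata = {
    "diabetic":     (30, 70, 20, 80),
    "non-diabetic": (15, 185, 10, 190),
}

def stratum_or(a, b, c, d):
    return (a * d) / (b * c)

# Mantel-Haenszel pooled odds ratio across strata:
# OR_MH = sum(a_i * d_i / n_i) / sum(b_i * c_i / n_i)
num = sum(a * d / (a + b + c + d) for a, b, c, d in strata.values())
den = sum(b * c / (a + b + c + d) for a, b, c, d in strata.values())
or_mh = num / den

for name, counts in strata.items():
    print(f"{name}: stratum-specific OR = {stratum_or(*counts):.2f}")
print(f"Mantel-Haenszel adjusted OR = {or_mh:.2f}")
```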

A basic understanding of methodology is essential to obtain reliable, repeatable and clinically acceptable outcomes. The study plan, including all its components, needs to be designed before the start of the study, and the study protocol should be strictly adhered to during its conduct.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.


Published on 19.4.2024 in Vol 26 (2024)

Psychometric Evaluation of a Tablet-Based Tool to Detect Mild Cognitive Impairment in Older Adults: Mixed Methods Study

Authors of this article:


Original Paper

  • Josephine McMurray 1, 2 *, MBA, PhD
  • AnneMarie Levy 1 *, MSc, PhD
  • Wei Pang 1, 3 *, BTM
  • Paul Holyoke 4, PhD

1 Lazaridis School of Business & Economics, Wilfrid Laurier University, Brantford, ON, Canada

2 Health Studies, Faculty of Human and Social Sciences, Wilfrid Laurier University, Brantford, ON, Canada

3 Biomedical Informatics & Data Science, Yale University, New Haven, CT, United States

4 SE Research Centre, Markham, ON, Canada

*these authors contributed equally

Corresponding Author:

Josephine McMurray, MBA, PhD

Lazaridis School of Business & Economics

Wilfrid Laurier University

73 George St

Brantford, ON, N3T3Y3

Phone: 1 548 889 4492

Email: [email protected]

Background: With the rapid aging of the global population, the prevalence of mild cognitive impairment (MCI) and dementia is anticipated to surge worldwide. MCI serves as an intermediary stage between normal aging and dementia, necessitating more sensitive and effective screening tools for early identification and intervention. The BrainFx SCREEN is a novel digital tool designed to assess cognitive impairment. This study evaluated its efficacy as a screening tool for MCI in primary care settings, particularly in the context of an aging population and the growing integration of digital health solutions.

Objective: The primary objective was to assess the validity, reliability, and applicability of the BrainFx SCREEN (hereafter, the SCREEN) for MCI screening in a primary care context. We conducted an exploratory study comparing the SCREEN with an established screening tool, the Quick Mild Cognitive Impairment (Qmci) screen.

Methods: A concurrent mixed methods, prospective study using a quasi-experimental design was conducted with 147 participants from 5 primary care Family Health Teams (FHTs; characterized by multidisciplinary practice and capitated funding) across southwestern Ontario, Canada. Participants included health care practitioners, patients, and FHT administrative executives. Individuals aged ≥55 years with no history of MCI or diagnosis of dementia rostered in a participating FHT were eligible to participate. Participants were screened using both the SCREEN and Qmci. The study also incorporated the Geriatric Anxiety Scale–10 to assess general anxiety levels at each cognitive screening. The SCREEN’s scoring was compared against that of the Qmci and the clinical judgment of health care professionals. Statistical analyses included sensitivity, specificity, internal consistency, and test-retest reliability assessments.

Results: The study found that the SCREEN’s longer administration time and complex scoring algorithm, which is proprietary and unavailable for independent analysis, presented challenges. Its internal consistency, indicated by a Cronbach α of 0.63, was below the acceptable threshold. The test-retest reliability also showed limitations, with moderate intraclass correlation coefficient (0.54) and inadequate κ (0.15) values. Sensitivity and specificity were consistent (63.25% and 74.07%, respectively) between cross-tabulation and discrepant analysis. In addition, the study faced limitations due to its demographic skew (96/147, 65.3% female, well-educated participants), the absence of a comprehensive gold standard for MCI diagnosis, and financial constraints limiting the inclusion of confirmatory neuropsychological testing.

Conclusions: The SCREEN, in its current form, does not meet the necessary criteria for an optimal MCI screening tool in primary care settings, primarily due to its longer administration time and lower reliability. As the number of digital health technologies increases and evolves, further testing and refinement of tools such as the SCREEN are essential to ensure their efficacy and reliability in real-world clinical settings. This study advocates for continued research in this rapidly advancing field to better serve the aging population.

International Registered Report Identifier (IRRID): RR2-10.2196/25520

Introduction

Mild cognitive impairment (MCI) is a syndrome characterized by a slight but noticeable and measurable deterioration in cognitive abilities, predominantly memory and thinking skills, that is greater than expected for an individual’s age and educational level [ 1 , 2 ]. The functional impairments associated with MCI are subtle and often impair instrumental activities of daily living (ADL). Instrumental ADL include everyday tasks such as managing finances, cooking, shopping, or taking regularly prescribed medications and are considered more complex than ADL such as bathing, dressing, and toileting [ 3 , 4 ]. In cases in which memory impairment is the primary indicator of the disease, MCI is classified as amnesic MCI and when significant impairment of non–memory-related cognitive domains such as visual-spatial or executive functioning is dominant, MCI is classified as nonamnesic [ 5 ].

Cognitive decline, more so than cancer and cardiovascular disease, poses a substantial threat to an individual’s ability to live independently or at home with family caregivers [ 6 ]. The Centers for Disease Control and Prevention reports that 1 in 8 adults aged ≥60 years experiences memory loss and confusion, with 35% reporting functional difficulties with basic ADL [ 7 ]. The American Academy of Neurology estimates that the prevalence of MCI ranges from 13.4% to 42% in people aged ≥65 years [ 8 ], and a 2023 meta-analysis that included 233 studies and 676,974 participants aged ≥50 years estimated that the overall global prevalence of MCI is 19.7% [ 9 ]. Once diagnosed, the prognosis for MCI is variable, whereby the impairment may be reversible; the rate of decline may plateau; or it may progressively worsen and, in some cases, may be a prodromal stage to dementia [ 10 - 12 ]. While estimates vary based on sample (community vs clinical), annual rates of conversion from MCI to dementia range from 5% to 24% [ 11 , 12 ], and those who present with multiple domains of cognitive impairment are at higher risk of conversion [ 5 ].

The risk of developing MCI rises with age, and while there are no drug treatments for MCI, nonpharmacologic interventions may improve cognitive function, alleviate the burden on caregivers, and potentially delay institutionalization should MCI progress to dementia [ 13 ]. To overcome the challenges of early diagnosis, which currently depends on self-detection, family observation, or health care provider (HCP) recognition of symptoms, screening high-risk groups for MCI or dementia is suggested as a solution [ 13 ]. However, the Canadian Task Force on Preventive Health Care recommends against screening adults aged ≥65 years due to a lack of meaningful evidence from randomized controlled trials and the high false-positive rate [ 14 - 16 ]. The main objective of a screening test is to reduce morbidity or mortality in at-risk populations through early detection and intervention, with the anticipated benefits outweighing potential harms. Using brief screening tools in primary care might improve MCI case detection, allowing patients and families to address reversible causes, make lifestyle changes, and access disease-modifying treatments [ 17 ].

There is no agreement among experts as to which tests or groups of tests are most predictive of MCI [ 16 ], and the gold standard approach uses a combination of positive results from neuropsychological assessments, laboratory tests, and neuroimaging to infer a diagnosis [ 8 , 18 ]. The clinical heterogeneity of MCI complicates its diagnosis because it influences not only memory and thinking abilities but also mood, behavior, emotional regulation, and sensorimotor abilities, and patients may present with any combination of symptoms with varying rates of onset and decline [ 4 , 8 ]. For this reason, a collaborative approach between general practitioners and specialists (eg, geriatricians and neurologists) is often required to be confident in the diagnosis of MCI [ 8 , 19 , 20 ].

In Canada, diagnosis often begins with screening for cognitive impairment followed by referral for additional testing; this process takes, on average, 5 months [ 20 ]. The current usual practice screening tools for MCI are the Mini-Mental State Examination (MMSE) [ 21 , 22 ] and the Montreal Cognitive Assessment (MoCA) 8.1 [ 3 ]. Both are paper-and-pencil screens administered in 10 to 15 minutes, scored out of 30, and validated as MCI screening tools across diverse clinical samples [ 23 , 24 ]. Universally, the MMSE is most often used to screen for MCI [ 20 , 25 ] and consists of 20 items that measure orientation, immediate and delayed recall, attention and calculation, visual-spatial skills, verbal fluency, and writing. The MoCA 8.1 was developed to improve on the MMSE’s ability to detect early signs of MCI, placing greater emphasis on evaluating executive function as well as language, memory, visual-spatial skills, abstraction, attention, concentration, and orientation across 30 items [ 24 , 26 ]. Scores of <24 on the MMSE or ≤25 on the MoCA 8.1 signal probable MCI [ 21 , 27 ]. Lower cutoff scores for both screens have been recommended to address evidence that they lack specificity to detect mild and early cases of MCI [ 4 , 28 - 31 ]. The clinical efficacy of both screens for tracking change in cognition over time is limited as they are also subject to practice effects with repeated administration [ 32 ].

Novel screening tools, including the Quick Mild Cognitive Impairment (Qmci) screen, have been developed with the goal of improving the accuracy of detecting MCI [ 33 , 34 ]. The Qmci is a sensitive and specific tool that differentiates normal cognition from MCI and dementia and is more accurate at differentiating MCI from controls than either the MoCA 8.1 (Qmci area under the curve=0.97 vs MoCA 8.1 area under the curve=0.92) [ 25 , 35 ] or the Short MMSE [ 33 , 36 ]. It also demonstrates high test-retest reliability (intraclass correlation coefficient [ICC]=0.88) [ 37 ] and is clinically useful as a rapid screen for MCI as the Qmci mean is 4.5 (SD 1.3) minutes versus 9.5 (SD 2.8) minutes for the MoCA 8.1 [ 25 ].

The COVID-19 pandemic and the necessary shift to virtual health care accelerated the use of digital assessment tools, including MCI screening tools such as the electronic MoCA 8.1 [ 38 , 39 ], and the increased use and adoption of technology (eg, smartphones and tablets) by older adults suggests that a lack of proficiency with technology may not be a barrier to the use of such assessment tools [ 40 , 41 ]. BrainFx is a for-profit firm that creates proprietary software designed to assess cognition and changes in neurofunction that may be caused by neurodegenerative diseases (eg, MCI or dementia), stroke, concussions, or mental illness using ecologically relevant tasks (eg, prioritizing daily schedules and route finding on a map) [ 42 ]. Their assessments are administered via a tablet and stylus. The BrainFx 360 performance assessment (referred to hereafter as the 360) is a 90-minute digitally administered test that was designed to assess cognitive, physical, and psychosocial areas of neurofunction across 26 cognitive domains using 49 tasks that are timed and scored [ 42 ]. The BrainFx SCREEN (referred to hereafter as the SCREEN) is a short digital version of the 360 that includes 7 of the cognitive domains included in the 360, is estimated to take approximately 10 to 15 minutes to complete, and was designed to screen for early detection of cognitive impairment [ 43 , 44 ]. Upon completion of any BrainFx assessment, the results of the 360 or SCREEN are added to the BrainFx Living Brain Bank (LBB), which is an electronic database that stores all completed 360 and SCREEN assessments and is maintained by BrainFx. An electronic report is generated by BrainFx comparing an individual’s results to those of others collected and stored in the LBB. Normative data from the LBB are used to evaluate and compare an individual’s results.

The 360 has been used in clinical settings to assess neurofunction among youth [ 45 ] and anecdotally in other rehabilitation settings (T Milner, personal communication, May 2018). To date, research on the 360 indicates that it has been validated in healthy young adults (mean age 22.9, SD 2.4 years) and that the overall test-retest reliability of the tool is high (ICC=0.85) [ 42 ]. However, only 2 of the 7 tasks selected to be included in the SCREEN produced reliability coefficients of >0.70 (visual-spatial and problem-solving abilities) [ 42 ]. Jones et al [ 43 ] explored the acceptability and perceived usability of the SCREEN with a small sample (N=21) of Canadian Armed Forces veterans living with posttraumatic stress disorder. A structural equation model based on the Unified Theory of Acceptance and Use of Technology suggested that behavioral intent to use the SCREEN was predicted by facilitating conditions such as guidance during the test and appropriate resources to complete the test [ 43 ]. However, the validity, reliability, and sensitivity of the SCREEN for detecting cognitive impairment have not been tested.

McMurray et al [ 44 ] designed a protocol to assess the validity, reliability, and sensitivity of the SCREEN for detecting early signs of MCI in asymptomatic adults aged ≥55 years in a primary care setting (5 Family Health Teams [FHTs]). The protocol also used a series of semistructured interviews and surveys guided by the fit between individuals, task, technology, and environment framework [ 46 ], a health-specific model derived from the Task-Technology Fit model by Goodhue and Thompson [ 47 ], to explore the SCREEN’s acceptability and use by HCPs and patients in primary care settings (manuscript in preparation). This study is a psychometric evaluation of the SCREEN’s validity, reliability, and sensitivity for detecting MCI in asymptomatic adults aged ≥55 years in primary care settings.

Study Location, Design, and Data Collection

This was a concurrent, mixed methods, prospective study using a quasi-experimental design. Participants were recruited from 5 primary care FHTs (characterized by multidisciplinary practice and capitated funding) across southwestern Ontario, Canada. FHTs that had a registered occupational therapist on staff were eligible to participate in the study, and participating FHTs received a nominal compensatory payment for the time the HCPs spent in training; collecting data for the study; administering the SCREEN, Qmci, and Geriatric Anxiety Scale–10 (GAS-10); and communicating with the research team. A multipronged recruitment approach was used [ 44 ]. A designated occupational therapist at each location was provided with training and equipment to recruit participants, administer assessment tools, and submit collected data to the research team.

The research protocol describing the methods of both the quantitative and qualitative arms of the study is published elsewhere [ 44 ].

Ethical Considerations

This study was approved by the Wilfrid Laurier University Research Ethics Board (ORE 5820) and was reviewed and approved by each FHT. Participants (HCPs, patients, and administrative executives) read and signed an information and informed consent package in advance of taking part in the study. We complied with recommendations for obtaining informed consent and conducting qualitative interviews with persons with dementia when recruiting patients who may be affected by neurocognitive diseases [ 48 - 50 ]. In addition, at the end of each SCREEN assessment, patients were required to provide their consent (electronic signature) to contribute their anonymized scores to the database of SCREEN results maintained by BrainFx. Upon enrolling in the study, participants were assigned a unique identification number that was used in place of their name on all study documentation to anonymize the data and preserve their confidentiality. A master list matching participant names with their unique identification number was stored in a password-protected file by the administering HCP and principal investigator on the research team. The FHTs received a nominal compensatory payment to account for their HCPs’ time spent administering the SCREEN, collecting data for the study, and communicating with the research team. However, the individual HCPs who volunteered to participate and the patient participants were not financially compensated for taking part in the study.

Participants

Patients who were rostered with the FHT, were aged ≥55 years, and had no history of MCI or dementia diagnoses to better capture the population at risk of early signs of cognitive impairment were eligible to participate [ 51 , 52 ]. It was necessary for the participants to be rostered with the FHTs to ensure that the HCPs could access their electronic medical record to confirm eligibility and record the testing sessions and results and to ensure that there was a responsible physician for referral if indicated. As the SCREEN is administered using a tablet, participants had to be able to read and think in English and discern color, have adequate hearing and vision to interact with the administering HCP, read 12-point font on the tablet, and have adequate hand and arm function to manipulate and hold the tablet. The exclusion criteria used in the study included colorblindness and any disability that might impair the individual’s ability to hold and interact with the tablet. Prospective participants were also excluded based on a diagnosis of conditions that may result in MCI or dementia-like symptoms, including major depression that required hospitalization, psychiatric disorders (eg, schizophrenia and bipolar disorder), psychopathology, epilepsy, substance use disorders, or sleep apnea (without the use of a continuous positive airway pressure machine) [ 52 ]. Patients were required to complete a minimum of 2 screening sessions spaced 3 months apart to participate in the study and, depending on when they enrolled to participate, could complete a maximum of 4 screening sessions over a year.

Data Collection Instruments

GAS-10 Instrument

A standardized protocol was used to collect demographic data, randomly administer the SCREEN and the Qmci (a validated screening tool for MCI), and administer the GAS-10 immediately before and after the completion of the first MCI screen at each visit [ 44 ]. This was to assess participants’ general anxiety as it related to screening for cognitive impairment at the time of the assessment, any change in subjective ratings after completion of the first MCI screen, and change in anxiety between appointments. The GAS-10 is a 10-item, self-report screen for anxiety in older adults [ 53 ] developed for rapid screening of anxiety in clinical settings (the GAS-10 is the short form of the full 30-item Geriatric Anxiety Scale [GAS]) [ 54 ]. While 3 subscales are identified, the GAS is reported to be a unidimensional scale that assesses general anxiety [ 55 , 56 ]. Validation of the GAS-10 suggests that it is optimal for assessing average to moderate levels of anxiety in older adults, with subscale scores that are highly and positively correlated with the GAS and high internal consistency [ 53 ]. Participants were asked to use a 4-point Likert scale (0= not at all , 1= sometimes , 2= most of the time , and 3= all of the time ) to rate how often they had experienced each symptom over the previous week, including on the day the test was administered [ 54 ]. The GAS-10 has a maximum score of 30, with higher scores indicating higher levels of anxiety [ 53 , 54 , 57 ].

HCPs completed the required training to become certified BrainFx SCREEN administrators before the start of the study. To this end, HCPs completed a web-based training program (developed and administered through the BrainFx website) that included 3 self-directed training modules. For the purpose of the study, they also participated in 1 half-day in-person training session conducted by a certified BrainFx administrator (T Milner, BrainFx chief executive officer) at one of the participating FHT locations. The SCREEN (version 0.5; beta) was administered on a tablet (ASUS ZenPad 10.1” IPS WXGA display, 1920 × 1200, powered by a quad-core 1.5 GHz, 64-bit MediaTek MTK 8163A processor with 2 GB RAM and 16-GB storage). The tablet came with a tablet stand for optional use and a dedicated stylus that is recommended for completion of a subset of questions. At the start of the study, HCPs were provided with identical tablets preloaded with the SCREEN software for use in the study. The 7 tasks on the SCREEN are summarized in Table 1 and were taken directly from the 360 based on a clustering and regression analysis of LBB records in 2016 (N=188) [ 58 ]. A detailed description of the study and SCREEN administration procedures was published by McMurray et al [ 44 ].

An activity score is generated for each of the 7 tasks on the SCREEN. It is computed based on a combination of the accuracy of the participant’s response and the processing speed (time in seconds) that it takes to complete the task. The relative contribution of accuracy and processing speed to the final activity score for each task is proprietary to BrainFx and unknown to the research team. The participant’s activity score is compared to the mean activity score for the same task at the time of testing in the LBB. The mean activity score from the LBB may be based on the global reference population (ie, all available SCREEN results in the LBB), or the administering HCP may select a specific reference population by filtering according to factors including but not limited to age, sex, or diagnosis. If the participant’s activity score is >1 SD below the LBB activity score mean for that task, it is labeled as an area of challenge . Each of the 7 tasks on the SCREEN are evaluated independently of each other, producing a report with 7 activity scores showing the participant’s score, the LBB mean score, and the SD. The report also provides an overall performance and processing speed score. The overall performance score is an average of all 7 activity scores; however, the way in which the overall processing speed score is generated remains proprietary to BrainFx and unknown to the research team. Both the overall performance and processing speed scores are similarly evaluated against the LBB and identified as an area of challenge using the criteria described previously. For the purpose of this study, participants’ mean activity scores on the SCREEN were compared to the results of people aged ≥55 years in the LBB.
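
The proprietary scoring algorithm is not available, but the flagging rule described above can be illustrated with a minimal sketch; the task names, scores and reference means/SDs below are hypothetical.

```python
# A minimal sketch (not BrainFx's proprietary algorithm) of the reported
# flagging rule: a task is labelled an "area of challenge" when the
# participant's activity score falls more than 1 SD below the reference
# (LBB) mean for that task. All numbers are made up.
task_scores = {"visual-spatial": 62.0, "problem solving": 48.0, "route finding": 55.0}
lbb_reference = {  # hypothetical reference mean and SD per task
    "visual-spatial": {"mean": 70.0, "sd": 10.0},
    "problem solving": {"mean": 65.0, "sd": 12.0},
    "route finding":  {"mean": 60.0, "sd": 8.0},
}

areas_of_challenge = [
    task for task, score in task_scores.items()
    if score < lbb_reference[task]["mean"] - lbb_reference[task]["sd"]
]
# Simple average of the task scores; the article describes the overall
# performance score as an average of all 7 task activity scores.
overall_performance = sum(task_scores.values()) / len(task_scores)
print(areas_of_challenge, round(overall_performance, 1))
```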

The Qmci evaluated 6 cognitive domains: orientation (10 points), registration (5 points), clock drawing (15 points), delayed recall (20 points), verbal fluency (20 points), and logical memory (30 points) [ 59 ]. Administering HCPs scored the test manually, with each subtest’s points contributing to the overall score out of 100 points, and the cutoff score to distinguish normal cognition from MCI was ≤67/100 [ 60 ]. Cutoffs to account for age and education have been validated and are recommended as the Qmci is sensitive to these factors [ 60 ]. A 2019 meta-analysis of the diagnostic accuracy of MCI screening tools reported that the sensitivity and specificity of the Qmci for distinguishing MCI from normal cognition is similar to usual standard-of-care tools (eg, the MoCA, Addenbrooke Cognitive Examination–Revised, Consortium to Establish a Registry for Alzheimer’s Disease battery total score, and Sunderland Clock Drawing Test) [ 61 ]. The Qmci has also been translated into >15 different languages and has undergone psychometric evaluation across a subset of these languages. While not as broadly adopted as the MoCA 8.1 in Canada, its psychometric properties, administration time, and availability for use suggested that the Qmci was an optimal assessment tool for MCI screening in FHT settings during the study.
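
The Qmci scoring just described can be summarised in a minimal sketch (not the official scoring procedure); the subtest values below are hypothetical.

```python
# A minimal sketch of summing the six Qmci subtest scores and applying the
# cutoff described above. The subtest values are hypothetical.
qmci_subtests = {
    "orientation": 8,       # out of 10
    "registration": 5,      # out of 5
    "clock drawing": 12,    # out of 15
    "delayed recall": 14,   # out of 20
    "verbal fluency": 16,   # out of 20
    "logical memory": 20,   # out of 30
}
total = sum(qmci_subtests.values())          # out of 100
classification = "possible MCI" if total <= 67 else "normal cognition"
print(total, classification)
```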

Psychometric Evaluation

To date, the only published psychometric evaluation of any BrainFx tool is by Searles et al [ 42 ] in Athletic Training & Sports Health Care ; it assessed the test-retest reliability of the 360 in 15 healthy adults between the ages of 20 and 25 years. This study evaluated the psychometric properties of the SCREEN and included a statistical analysis of the tool’s internal consistency, construct validity, test-retest reliability, and sensitivity and specificity. McMurray et al [ 44 ] provide a detailed description of the data collection procedures for administration of the SCREEN and Qmci completed by participants at each visit.

Validity Testing

Face validity was outside the scope of this study but was implied, and assumptions are reported in the Results section. Construct validity, whether the 7 activities that make up the SCREEN were representative of MCI, was assessed through comparison with a substantive body of literature in the domain and through principal component analysis using varimax rotation. Criterion validity measures how closely the SCREEN results corresponded to the results of the Qmci (used here as an “imperfect gold standard” for identifying MCI in older adults) [ 62 ]. A BrainFx representative hypothesized that the ecological validity of the SCREEN questions (ie, using tasks that reflect real-world activities to detect early signs of cognitive impairment) [ 63 ] makes it a more sensitive tool than other screens (T Milner, personal communication, May 2018) and allows HCPs to equate activity scores on the SCREEN with real-world functional abilities. Criterion validity was explored first using cross-tabulations to calculate the sensitivity and specificity of the SCREEN compared to those of the Qmci. Conventional screens such as the Qmci are scored by taking the sum of correct responses on the screen and a cutoff score derived from normative data to distinguish normal cognition from MCI. The SCREEN used a different method of scoring whereby each of the 7 tasks was scored and evaluated independently of each other and there were no recommended guidelines for distinguishing normal cognition from MCI based on the aggregate areas of challenge identified by the SCREEN. Therefore, to compare the sensitivity and specificity of the SCREEN against those of the Qmci, the results of both screens were coded into a binary format as 1=healthy and 2=unhealthy, where healthy denoted no areas of challenge identified through the SCREEN and a Qmci score of ≥67. Conversely, unhealthy denoted one or more areas of challenge identified through the SCREEN and a Qmci score of <67.
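
A minimal sketch of this cross-tabulation approach, using invented classifications rather than the study data, is shown below.

```python
# A minimal sketch of computing sensitivity and specificity from the binary
# healthy (1) / unhealthy (2) coding described above, treating the Qmci as
# the imperfect reference standard. The classifications are hypothetical.
screen = [1, 1, 2, 1, 2, 2, 1, 2]  # hypothetical SCREEN classifications
qmci   = [1, 1, 2, 2, 2, 1, 1, 2]  # hypothetical Qmci classifications

tp = sum(s == 1 and q == 1 for s, q in zip(screen, qmci))  # both call the case healthy
tn = sum(s == 2 and q == 2 for s, q in zip(screen, qmci))  # both call the case unhealthy
fp = sum(s == 1 and q == 2 for s, q in zip(screen, qmci))
fn = sum(s == 2 and q == 1 for s, q in zip(screen, qmci))

sensitivity = tp / (tp + fn)  # proportion of Qmci-healthy cases the SCREEN also calls healthy
specificity = tn / (tn + fp)  # proportion of Qmci-unhealthy cases the SCREEN also calls unhealthy
print(f"sensitivity={sensitivity:.2%}, specificity={specificity:.2%}")
```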

Criterion validity was further explored using discrepant analysis via a resolver test [ 44 ]. Following the administration of the SCREEN and Qmci, screen results were evaluated by the administering HCP. HCPs were instructed to refer the participant for follow-up with their primary care physician if the Qmci result was <67 regardless of whether any areas of challenge were identified on the SCREEN. However, HCPs could use their clinical judgment to refer a participant for physician follow-up based on the results of the SCREEN or the Qmci, and all the referral decisions were charted on the participant’s electronic medical record following each visit and screening. In discrepant analysis, the results of the imperfect gold standard [ 64 ], as was the role of the Qmci in this study, were compared with the SCREEN results. A resolver test (classified as whether the HCP referred the patient to a physician for follow-up based on their performance on the SCREEN and the Qmci) was used on discordant results [ 64 , 65 ] to determine sensitivity and specificity. To this end, a new variable, Referral to a Physician for Cognitive Impairment , was coded as the true status (1=no referral; 2=referral was made) and compared to the Qmci as the imperfect gold standard (1=healthy; 2=unhealthy).

Reliability Testing

The reliability of a screening instrument is its ability to consistently measure an attribute and how well its component measures fit together conceptually. Internal consistency identifies whether the items in a multi-item scale are measuring the same underlying construct; the internal consistency of the SCREEN was assessed using the Cronbach α. Test-retest reliability refers to the ability of a measurement instrument to reproduce results over ≥2 occasions (assuming the underlying conditions have not changed) and was assessed using paired t tests (2-tailed), ICC, and the κ coefficient. In this study, participants completed both the SCREEN and the Qmci in the same sitting in a random sequence on at least 2 different occasions spaced 3 months apart (administration procedures are described elsewhere) [ 44 ]. In some instances, the screens were administered to the same participant on 4 separate occasions spaced 3 months apart each, and this provided up to 3 separate opportunities to conduct test-retest reliability analyses and investigate the effects of repeated practice. There are no clear guidelines on the optimal time between tests [ 66 , 67 ]; however, Streiner and Kottner [ 68 ] and Streiner [ 69 ] recommend longer periods between tests (eg, at least 10-14 days) to avoid recall bias, and greater practice effects have been experienced with shorter test-retest intervals [ 32 ].

Analysis of the quantitative data was completed using Stata (version 17.0; StataCorp). Assumptions of normality were not violated, so parametric tests were used. Collected data were reported using frequencies and percentages and compared using the chi-square or Fisher exact test as necessary. Continuous data were analyzed for central tendency and variability; categorical data were presented as proportions. Normality was tested using the Shapiro-Wilk test, and nonparametric data were tested using the Mann-Whitney U test. A P value of .05 was considered statistically significant, with 95% CIs provided where appropriate. We powered the exploratory analysis to validate the SCREEN using an estimated effect size of 12% (understanding that Canadian prevalence rates of MCI were not available [ 1 ]) and determined that the study required at least 162 participants. For test-retest reliability, using 90% power and a 5% type I error rate, a minimum of 67 test results was required.

The time taken for participants to complete the SCREEN was recorded by the HCPs at the time of testing; there were 6 missing HCP records of time to complete the SCREEN. For these 6 cases of missing data, we imputed the mean time to complete the SCREEN by all participants who were tested by that HCP and used this to populate the missing cells [ 70 ]. There were 3 cases of missing data related to the SCREEN reports. More specifically, the SCREEN report generated by BrainFx did not include 1 or 2 data points each for the route finding, divided attention, and prioritizing tasks. The clinical notes provided by the HCP at the time of SCREEN administration did not indicate that the participant had not completed those questions, and it was not possible to determine the root cause of the missing data in report generation according to BrainFx (M Milner, personal communication, July 7, 2020). For continuous variables in analyses such as exploratory factor analysis, Cronbach α, and t test, missing values were imputed using the mean. However, for the coded healthy and unhealthy categorical variables, values were not imputed.

Data collection began in January 2019 and was to conclude on May 31, 2020. However, the emergence of the global COVID-19 pandemic resulted in the FHTs and Wilfrid Laurier University prohibiting all in-person research starting on March 16, 2020.

Participant Demographics

A total of 154 participants were recruited for the study, and 20 (13%) withdrew following their first visit to the FHT. The data of 65% (13/20) of the participants who withdrew were included in the final analysis, and the data of the remaining 35% (7/20) were removed, either due to their explicit request (3/7, 43%) or because technical issues at the time of testing rendered their data unusable (4/7, 57%). These technical issues were related to software issues (eg, any instance in which the patient or HCP interacted with the SCREEN software and followed the instructions provided, the software did not work as expected [ie, objects did not move where they were dragged or tapping on objects failed to highlight the object], and the question could not be completed). After attrition, a total of 147 individuals aged ≥55 years with no previous diagnosis of MCI or dementia participated in the study ( Table 2 ). Of the 147 participants, 71 (48.3%) took part in only 1 round of screening on visit 1 (due to COVID-19 restrictions imposed on in-person research that prevented a second visit). The remaining 51.7% (76/147) of the participants took part in ≥2 rounds of screening across multiple visits (76/147, 51.7% participated in 2 rounds; 22/147, 15% participated in 3 rounds; and 13/147, 8.8% participated in 4 rounds of screening).

The sample population was 65.3% (96/147) female (mean 70.2, SD 7.9 years) and 34.7% (51/147) male (mean 72.5, SD 8.1 years), with age ranging from 55 to 88 years; 65.3% (96/147) achieved the equivalent of or higher than a college diploma or certificate ( Table 2 ); and 32.7% (48/147) self-reported living with one or more chronic medical conditions ( Table 3 ). At the time of screening, 73.5% (108/147) of participants were also taking medications with side effects that may include impairments to memory and thinking abilities [ 71 - 75 ]; therefore, medication use was accounted for in a subset of the analyses. Finally, 84.4% (124/147) of participants self-reported regularly using technology (eg, smartphone, laptop, or tablet) with high proficiency. A random sequence generator was used to determine the order for administering the MCI screens; the SCREEN was administered first 51.9% (134/258) of the time.

Construct Validity

Construct validity was assessed through a review of relevant peer-reviewed literature that compared constructs included in the SCREEN with those identified in the literature as 2 of the most sensitive tools for MCI screening: the MoCA 8.1 [ 76 ] and the Qmci [ 25 ]. Memory, language, and verbal skills are assessed in the MoCA and Qmci but are absent from the SCREEN. Tests of verbal fluency and logical memory have been shown to be particularly sensitive to early cognitive changes [ 77 , 78 ] but are similarly absent from the SCREEN.

Exploratory factor analysis was performed to examine the SCREEN’s ability to reliably measure risk of MCI. The Kaiser-Meyer-Olkin measure yielded a value of 0.79, exceeding the commonly accepted threshold of 0.70 and indicating that the sample was adequate for factor analysis. The Bartlett test of sphericity returned χ²(21)=167.1 (P<.001), confirming the presence of correlations among variables suitable for factor analysis. A principal component analysis revealed 2 components with eigenvalues of >1, cumulatively accounting for 52.12% of the variance, with the first factor alone explaining 37.8%. After the varimax rotation, the 2 factors exhibited distinct patterns of loadings, with the visual-spatial ability task loading predominantly on the second factor. The SCREEN tasks, except for visual-spatial ability, loaded substantially on the factors (>0.5), suggesting that the SCREEN possesses good convergent validity for assessing the risk of MCI.
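
For readers who wish to reproduce this style of analysis, the following is a minimal Python sketch using the third-party factor_analyzer package; the package choice, file name, and column layout are assumptions for illustration and are not taken from the study.

    import pandas as pd
    from factor_analyzer import FactorAnalyzer
    from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

    # Hypothetical file: one column per SCREEN task, one row per participant.
    scores = pd.read_csv("screen_task_scores.csv")

    # Sampling adequacy (KMO) and Bartlett test of sphericity.
    _, kmo_total = calculate_kmo(scores)
    chi_square, p_value = calculate_bartlett_sphericity(scores)
    print(f"KMO={kmo_total:.2f}, chi2={chi_square:.1f}, P={p_value:.3g}")

    # Principal component extraction with varimax rotation, retaining 2 factors.
    fa = FactorAnalyzer(n_factors=2, rotation="varimax", method="principal")
    fa.fit(scores)
    print(fa.loadings_)              # rotated factor loadings
    print(fa.get_factor_variance())  # variance explained per factor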

Criterion Validity

The coding of SCREEN scores into a binary healthy and unhealthy outcome standardized the dependent variable to allow for criterion testing. Criterion validity was assessed using cross-tabulations and confusion matrix analysis, which provided insights into the sensitivity and specificity of the SCREEN when compared with the Qmci. Of the 144 cases considered, 20 (13.9%) were true negatives, and 74 (51.4%) were true positives. The SCREEN’s sensitivity, which reflects its capacity to accurately identify healthy individuals (true positives), was 63.25% (74 correct identifications/117 actual positives). The specificity of the test, indicating its ability to accurately identify unhealthy individuals (true negatives), was 74.07% (20 correct identifications/27 actual negatives). Sensitivity and specificity were then derived using discrepant analysis and the resolver test described previously (whether the HCP referred the participant to a physician following the screens). The results were identical: the estimate of the SCREEN’s sensitivity was 63.3% (74/117), and the estimate of the specificity was 74% (20/27).
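
The reported sensitivity and specificity follow directly from the cross-tabulation counts. A short worked example, with healthy coded as the positive class as in this study, is shown below.

    # Counts from the confusion matrix reported above (N=144).
    true_positives = 74         # SCREEN healthy, reference healthy
    false_negatives = 117 - 74  # reference healthy, SCREEN unhealthy
    true_negatives = 20         # SCREEN unhealthy, reference unhealthy
    false_positives = 27 - 20   # reference unhealthy, SCREEN healthy

    sensitivity = true_positives / (true_positives + false_negatives)  # 74/117
    specificity = true_negatives / (true_negatives + false_positives)  # 20/27

    print(f"Sensitivity = {sensitivity:.2%}")  # ~63.25%
    print(f"Specificity = {specificity:.2%}")  # ~74.07%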

Internal Reliability

A Cronbach α of at least 0.70 is generally considered acceptable, and at least 0.90 is required for clinical instruments [ 79 ]. The estimate of internal consistency for the SCREEN (N=147) was Cronbach α=0.63.
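
For reference, Cronbach α can be computed from a participant-by-item score matrix using the standard formula α = k/(k−1) × (1 − Σ item variances / variance of the total score). The sketch below uses simulated data and is not the study's analysis code.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach α for an (n_participants x n_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Simulated example: 50 participants x 7 correlated task scores.
    rng = np.random.default_rng(0)
    latent = rng.normal(70, 10, size=(50, 1))
    example = latent + rng.normal(0, 5, size=(50, 7))
    print(round(cronbach_alpha(example), 2))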

Test-Retest Reliability

Test-retest reliability analyses were conducted using ICC for the SCREEN activity scores and the κ coefficient for the healthy and unhealthy classifications. Guidelines for interpretation of the ICC suggest that anything <0.5 indicates poor reliability and anything between 0.5 and 0.75 suggests moderate reliability [ 80 ]; the ICC for the SCREEN activity scores was 0.54. With respect to the κ coefficient, a κ value of <0.2 is considered to have no level of agreement, a κ value of 0.21 to 0.39 is considered minimal, a κ value of 0.4 to 0.59 is considered weak agreement, and anything >0.8 suggests strong to almost perfect agreement [ 81 ]. The κ coefficient for healthy and unhealthy classifications was 0.15.
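
As an illustration of the κ analysis, the following sketch computes the Cohen κ for hypothetical healthy and unhealthy classifications across two visits using scikit-learn; the ICC requires a dedicated ANOVA- or mixed-model–based routine and is not shown here.

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical classifications (1 = healthy, 0 = unhealthy) on two visits.
    visit1 = [1, 1, 0, 1, 0, 1, 1, 0]
    visit2 = [1, 0, 0, 1, 1, 1, 0, 0]

    kappa = cohen_kappa_score(visit1, visit2)
    print(f"kappa = {kappa:.2f}")  # interpret against the thresholds cited above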

Analysis of the Factors Impacting Healthy and Unhealthy Results

The Spearman rank correlation was used to assess the relationships between participants’ overall activity score on the SCREEN and their total time to complete the SCREEN; age, sex, and self-reported level of education; technology use; medication use; amount of sleep; and level of anxiety (as measured using the GAS-10) at the time of SCREEN administration. Lower overall activity scores were moderately correlated with older age (rs(142)=–0.57; P<.001), and older age was also associated with increased total time to complete the SCREEN (rs(142)=0.49; P<.001). There was also a moderate inverse relationship between overall activity score and total time to complete the SCREEN (rs(142)=–0.67; P<.001), whereby better performance was associated with quicker task completion. There were weak positive associations between overall activity score and increased technology use (rs(142)=0.34; P<.001) and higher level of education (rs(142)=0.21; P=.01).
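
A Spearman rank correlation of this kind can be computed with SciPy, as in the sketch below; the paired observations are entirely hypothetical.

    from scipy.stats import spearmanr

    # Hypothetical paired observations: overall activity score and age.
    activity_score = [82, 75, 69, 90, 64, 71, 88, 60]
    age = [61, 70, 78, 58, 83, 74, 60, 86]

    rho, p = spearmanr(activity_score, age)
    print(f"rs = {rho:.2f}, P = {p:.3f}")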

A logistic regression model was used to predict the SCREEN result using data from 144 observations. The model’s predictors explain approximately 21.33% of the variance in the outcome variable. The likelihood ratio test indicates that the model provides a significantly better fit to the data than a model without predictors ( P <.001).

The SCREEN outcome variable ( healthy vs unhealthy ) was associated with the predictor variables sex and total time to complete the SCREEN. More specifically, female participants were more likely to obtain healthy SCREEN outcomes ( P =.007; 95% CI 0.32-2.05). For all participants, the longer it took to complete the SCREEN, the less likely they were to achieve a healthy SCREEN outcome ( P =.002; 95% CI –0.33 to –0.07). Age ( P =.25; 95% CI –0.09 to 0.02), medication use ( P =.96; 95% CI –0.9 to 0.94), technology use ( P =.44; 95% CI –0.28 to 0.65), level of education ( P =.14; 95% CI –0.09 to 0.64), level of anxiety ( P =.26; 95% CI –1.13 to 0.3), and hours of sleep ( P =.08; 95% CI –0.06 to 0.93) were not significant.
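
A logistic regression of this form, reporting a pseudo-R², a likelihood ratio test, and 95% CIs for the coefficients, could be fit as in the following sketch using statsmodels; the data are simulated and the variable names are assumptions, not the authors' model code.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Simulated data: binary SCREEN outcome (1 = healthy) and two predictors.
    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "healthy": rng.integers(0, 2, size=144),
        "sex_female": rng.integers(0, 2, size=144),
        "total_time_min": rng.normal(35, 8, size=144),
    })

    X = sm.add_constant(df[["sex_female", "total_time_min"]])
    model = sm.Logit(df["healthy"], X).fit(disp=False)

    print("McFadden pseudo R2:", round(model.prsquared, 4))
    print("Likelihood ratio test P value:", model.llr_pvalue)
    print(model.conf_int())  # 95% CIs for the coefficients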

Impact of Practice Effects

Administrations of the SCREEN were approximately 3 months apart, and separate paired-sample t tests were performed to compare SCREEN outcomes between visits 1 and 2 (76/147, 51.7%; Table 4 ), visits 2 and 3 (22/147, 15%), and visits 3 and 4 (13/147, 8.8%). The declining number of participants across visits was partially attributable to the early shutdown of data collection due to the COVID-19 pandemic; therefore, comparisons between visits 2 and 3 or visits 3 and 4 are not reported. Compared to participants’ SCREEN performance on visit 1, their overall mean activity score and overall processing time improved on their second administration of the SCREEN (score: t(75)=–2.86 and P=.005; processing time: t(75)=–2.98 and P=.004). Even though the 7 task-specific activity scores on the SCREEN also increased between visits 1 and 2, these improvements were not significant, indicating that the difference in overall activity scores was cumulative and not attributable to a specific task ( Table 4 ).
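
A paired-sample t test comparing the same participants' scores on two visits can be run with SciPy, as sketched below with hypothetical values.

    from scipy.stats import ttest_rel

    # Hypothetical overall activity scores for the same participants on visits 1 and 2.
    visit1_scores = [68, 72, 75, 61, 80, 70, 66, 74]
    visit2_scores = [71, 74, 78, 63, 82, 73, 69, 75]

    t_stat, p_value = ttest_rel(visit1_scores, visit2_scores)
    print(f"t = {t_stat:.2f}, P = {p_value:.3f}")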

Principal Findings

Our study aimed to evaluate the effectiveness and reliability of the BrainFx SCREEN in detecting MCI in primary care settings. The research took place during the COVID-19 pandemic, which influenced the study’s execution and timeline. Despite these challenges, the findings offer valuable insights into cognitive impairment screening.

Brief MCI screening tools help time-strapped primary care physicians determine whether referral for a definitive battery of more time-consuming and expensive tests is warranted. These tools must balance the need for time efficiency with being psychometrically valid and easy to administer [ 82 ]. The importance of brevity depends on a number of factors, including the clinical setting. Screens that can be completed in approximately 5 minutes or less [ 13 ] are recommended for faster-paced clinical settings (eg, emergency rooms and preoperative screens), whereas those that take 5 to 10 minutes are better suited to primary care settings [ 82 - 84 ]. Identifying affordable, psychometrically tested screening tests for MCI that integrate into clinical workflows and are easy to consistently administer and complete may help with the following:

  • Initiating appropriate diagnostic tests for signs and symptoms at an earlier stage
  • Normalizing and destigmatizing cognitive testing for older adults
  • Expediting referrals
  • Allowing for timely access to programs and services that can support aging in place or delay institutionalization
  • Reducing risk
  • Improving the psychosocial well-being of patients and their care partners by increasing access to information and resources that aid with future planning and decision-making [ 85 , 86 ]

Various cognitive tests are commonly used for detecting MCI. These include the Addenbrooke’s Cognitive Examination–Revised, Consortium to Establish a Registry for Alzheimer’s Disease, Sunderland Clock Drawing Test, Informant Questionnaire on Cognitive Decline in the Elderly, Memory Alteration Test, MMSE, MoCA 8.1, and Qmci [ 61 , 87 ]. The Addenbrooke’s Cognitive Examination–Revised, Consortium to Establish a Registry for Alzheimer’s Disease, MoCA 8.1, Qmci, and Memory Alteration Test are reported to have similar diagnostic accuracy [ 61 , 88 ]. The HCPs participating in this study reported using the MoCA 8.1 as their primary screening tool for MCI along with other assessments such as the MMSE and Trail Making Test parts A and B.

Recent research highlights the growing use of digital tools [ 51 , 89 , 90 ], mobile technology [ 91 , 92 ], virtual reality [ 93 , 94 ], and artificial intelligence [ 95 ] to improve early identification of MCI. Demeyere et al [ 51 ] developed the tablet-based, 10-item Oxford Cognitive Screen–Plus to detect slight changes in cognitive impairment across 5 domains of cognition (memory, attention, number, praxis, and language), which has been validated among neurologically healthy older adults. Statsenko et al [ 96 ] have explored the use of artificial intelligence to improve the predictive capabilities of such tests. Similarly, there is an emerging focus on the use of machine learning techniques to detect dementia by leveraging routinely collected clinical data [ 97 , 98 ]. This progression signifies a shift toward more technologically advanced, efficient, and potentially more accurate diagnostic approaches in the detection of MCI.

Whatever the modality, screening tools should be quick to administer, demonstrate consistent results over time and between different evaluators, cover all major cognitive areas, and be straightforward to both administer and interpret [ 99 ]. However, highly sensitive tests such as those suggested for screening carry a significant risk of false-positive diagnoses [ 15 ]. Given the high potential for harm of false positives, it is important to validate the psychometric properties of screening tests across different populations and understand how factors such as age and education can influence the results [ 99 ].

Our study did not assess the face validity of the SCREEN, but participating occupational therapists were comfortable with the test regimen. Nonetheless, the research team noted the absence of verbal fluency and memory tests in the SCREEN, both of which McDonnell et al [ 100 ] identified as being more sensitive to the more commonly seen amnesic MCI. Two of the most sensitive tools for MCI screening, the MoCA 8.1 [ 76 ] and Qmci [ 25 ], assess memory, language, and verbal skills, and tests of verbal fluency and logical memory have been shown to be particularly sensitive to early cognitive changes [ 77 , 78 ].

The constructs included in the SCREEN ( Table 1 ) were selected based on a single non–peer-reviewed study [ 58 ] using the 360 and traumatic brain injury data (N=188) that identified the constructs as predictive of brain injury. The absence of tasks that measure verbal fluency or logical memory in the SCREEN appears to weaken claims of construct validity. The principal component analysis of the SCREEN assessment identified 2 components accounting for 52.12% of the total variance. The first component was strongly associated with abstract reasoning, constructive ability, and divided attention, whereas the second was primarily influenced by visual-spatial abilities. This indicates that constructs related to perception, attention, and memory are central to the SCREEN scores.

The SCREEN’s binary outcome (healthy or unhealthy) created by the research team was based on comparisons with the Qmci. However, the method of identifying areas of challenge in the SCREEN by comparing the individual’s mean score on each of the 7 tasks with the mean scores of a global or filtered cohort in the LBB introduces potential biases or errors. These could arise from a surge in additions to the LBB from patients with specific characteristics, self-selection of participants, poorly trained SCREEN administrators, inclusion of nonstandard test results, underuse of appropriate filters, and underreporting of clinical conditions or factors such as socioeconomic status that impact performance in standardized cognitive tests.

The proprietary method of analyzing and reporting SCREEN results complicates traditional sensitivity and specificity measurement. Our testing indicated a sensitivity of 63.25% and a specificity of 74.07% for identifying healthy (those without MCI) and unhealthy (those with MCI) individuals, respectively. The SCREEN’s Cronbach α of 0.63, below the 0.70 threshold generally considered acceptable and well below the 0.90 recommended for clinical instruments, together with reliability scores that were lower than ideal standards, suggests a higher-than-acceptable level of random measurement error in its constructs. The lower reliability may also stem from an inadequate sample size or a limited number of scale items.

The SCREEN’s results are less favorable compared to those of other digital MCI screening tools that similarly enable evaluation of specific cognitive domains but also provide validated, norm-referenced cutoff scores and methods for cumulative scoring in clinical settings (Oxford Cognitive Screen–Plus) [ 51 ] or of validated MCI screening tools used in primary care (eg, MoCA 8.1, Qmci, and MMSE) [ 51 , 87 ]. The SCREEN’s unique scoring algorithm and the dynamic denominator in data analysis necessitate caution in comparing these results to those of other tools with fixed scoring algorithms and known sensitivities [ 101 , 102 ]. We found the SCREEN to have lower-than-expected internal reliability, suggesting significant random measurement error. Test-retest reliability was weak for the healthy or unhealthy outcome but stronger for overall activity scores between tests. The variability in identifying areas of challenge could relate to technological difficulties or variability from comparisons with a growing database of test results.

Potential reasons for older adults’ poorer scores on timed tests include the impact of sensorimotor decline on touch screen sensation and reaction time [ 38 , 103 ], anxiety related to taking a computer-enabled test [ 104 - 106 ], or the anticipated consequences of a negative outcome [ 107 ]. However, these effects were unlikely to have influenced the results of this study. Practice effects were observed [ 29 , 108 ], but the SCREEN’s novelty suggests that familiarity was not gained through prior preparation or word of mouth, as this sample was self-selected and not randomized. Future research might also explore the impact of digital literacy and cultural differences in the interpretation of software constructs or icons on MCI screening in a randomized, older adult sample.

Limitations

This study had methodological limitations that warrant attention. The small sample size and the demographic distribution of the 147 participants aged ≥55 years, with most (96/147, 65.3%) being female and well educated, limit the generalizability of the findings to different populations. The study’s design, aiming to explore the sensitivity of the SCREEN for early detection of MCI, necessitated the exclusion of individuals with a previous diagnosis of MCI or dementia. This exclusion criterion might have impacted the study’s ability to thoroughly assess the SCREEN’s effectiveness in a more varied clinical context. The requirement for participants to read and comprehend English introduced another limitation to our study. This criterion potentially limited the SCREEN tool’s applicability across diverse linguistic backgrounds as individuals with language-based impairments or those not proficient in English may face challenges in completing the assessment [ 51 ]. Such limitations could impact the generalizability of our findings to non–English-speaking populations or to those with language impairments, underscoring the need for further research to evaluate the SCREEN tool’s effectiveness in broader clinical and linguistic contexts.

Financial constraints played a role in limiting the study’s scope. Due to funding limitations, it was not possible to include specialist assessments and a battery of neuropsychiatric tests generally considered the gold standard to confirm or rule out an MCI diagnosis. Therefore, the study relied on differential verification through 2 imperfect reference standards: a comparison with the Qmci (the tool with the highest published sensitivity to MCI in 2019, when the study was designed) and the clinical judgment of the administering HCP, particularly in decisions regarding referrals for further clinical assessment. Furthermore, while an economic feasibility assessment was considered, the research team determined that it should follow, not precede, an evaluation of the SCREEN’s validity and reliability.

The proprietary nature of the algorithm used for scoring the SCREEN posed another challenge. Without access to this algorithm, the research team had to use a novel comparative statistical approach, coding patient results into a binary variable: healthy (SCREEN=no areas of challenge OR Qmci≥67 out of 100) or unhealthy (SCREEN=one or more areas of challenge OR Qmci<67 out of 100). This may have introduced a higher level of error into our statistical analysis. Furthermore, the process for determining areas of challenge on the SCREEN involves comparing a participant’s result to the existing SCREEN results in the LBB at the time of testing. By the end of this study, the LBB contained 632 SCREEN results for adults aged ≥55 years, with this study contributing 258 of those results. The remaining 366 original SCREEN results, 64% of which were completed by individuals who self-identified as having a preexisting diagnosis or conditions associated with cognitive impairment (eg, traumatic brain injury, concussion, or stroke), could have led to an overestimation of the means and SDs of the study participants’ results at the outset of the study.

Unlike other cognitive screening tools, the SCREEN allows for filtering of results to compare different patient cohorts in the LBB using criteria such as age and education. However, at this stage of the LBB’s development, using such filters can significantly reduce the reliability of the results due to a smaller comparator population (ie, the denominator used to calculate the mean and SD). This, in turn, affects the significance of the results. Moreover, the constantly changing LBB data set makes it challenging to meaningfully compare an individual’s results over time as the evolving denominator affects the accuracy and relevance of these comparisons. Finally, the significant improvement in SCREEN scores between the first and second visits suggests the presence of practice effects, which could have influenced the reliability and validity of the findings.

Conclusions

In a primary care setting, where MCI screening tools are essential and recommended for those with concerns [ 85 ], certain criteria are paramount: time efficiency, ease of administration, and robust psychometric properties [ 82 ]. Our analysis of the BrainFx SCREEN suggests that, despite its innovative approach and digital delivery, it currently falls short in meeting these criteria. The SCREEN’s comparatively longer administration time and lower-than-expected reliability scores suggest that it may not be the most effective tool for MCI screening of older adults in a primary care setting at this time.

It is important to note that, in the wake of the COVID-19 pandemic, and with an aging population living and aging by design or necessity in a community setting, there is growing interest in digital solutions, including web-based applications and platforms to both collect digital biomarkers and deliver cognitive training and other interventions [ 109 , 110 ]. However, new normative standards are required when adapting cognitive tests to digital formats [ 92 ] as the change in medium can significantly impact test performance and results interpretation. Therefore, we recommend caution when interpreting our study results and encourage continued research and refinement of tools such as the SCREEN. This ongoing process will ensure that current and future MCI screening tools are effective, reliable, and relevant in meeting the needs of our aging population, particularly in primary care settings where early detection and intervention are key.

Acknowledgments

The researchers gratefully acknowledge the Ontario Centres of Excellence Health Technologies Fund for their financial support of this study; the executive directors and clinical leads in each of the Family Health Team study locations; the participants and their friends and families who took part in the study; and research assistants Sharmin Sharker, Kelly Zhu, and Muhammad Umair for their contributions to data management and statistical analysis.

Data Availability

The data sets generated during and analyzed during this study are available from the corresponding author on reasonable request.

Authors' Contributions

JM contributed to the conceptualization, methodology, validation, formal analysis, data curation, writing—original draft, writing—review and editing, visualization, supervision, and funding acquisition. AML contributed to the conceptualization, methodology, validation, investigation, formal analysis, data curation, writing—original draft, writing—review and editing, visualization, and project administration. WP contributed to the validation, formal analysis, data curation, writing—original draft, writing—review and editing, and visualization. Finally, PH contributed to conceptualization, methodology, writing—review and editing, supervision, and funding acquisition.

Conflicts of Interest

None declared.

  • Casagrande M, Marselli G, Agostini F, Forte G, Favieri F, Guarino A. The complex burden of determining prevalence rates of mild cognitive impairment: a systematic review. Front Psychiatry. 2022;13:960648. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Petersen RC, Caracciolo B, Brayne C, Gauthier S, Jelic V, Fratiglioni L. Mild cognitive impairment: a concept in evolution. J Intern Med. Mar 2014;275(3):214-228. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Knopman DS, Petersen RC. Mild cognitive impairment and mild dementia: a clinical perspective. Mayo Clin Proc. Oct 2014;89(10):1452-1459. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Anderson ND. State of the science on mild cognitive impairment (MCI). CNS Spectr. Feb 2019;24(1):78-87. [ CrossRef ] [ Medline ]
  • Tangalos EG, Petersen RC. Mild cognitive impairment in geriatrics. Clin Geriatr Med. Nov 2018;34(4):563-589. [ CrossRef ] [ Medline ]
  • Ng R, Maxwell C, Yates E, Nylen K, Antflick J, Jette N, et al. Brain disorders in Ontario: prevalence, incidence and costs from health administrative data. Institute for Clinical Evaluative Sciences. 2015. URL: https:/​/www.​ices.on.ca/​publications/​research-reports/​brain-disorders-in-ontario-prevalence-incidence-and-costs-from-health-administrative-data/​ [accessed 2024-04-01]
  • Centers for Disease Control and Prevention (CDC). Self-reported increased confusion or memory loss and associated functional difficulties among adults aged ≥ 60 years - 21 states, 2011. MMWR Morb Mortal Wkly Rep. May 10, 2013;62(18):347-350. [ FREE Full text ] [ Medline ]
  • Petersen RC, Lopez O, Armstrong MJ, Getchius TS, Ganguli M, Gloss D, et al. Practice guideline update summary: mild cognitive impairment: report of the guideline development, dissemination, and implementation subcommittee of the American Academy of Neurology. Neurology. Jan 16, 2018;90(3):126-135. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Song WX, Wu WW, Zhao YY, Xu HL, Chen GC, Jin SY, et al. Evidence from a meta-analysis and systematic review reveals the global prevalence of mild cognitive impairment. Front Aging Neurosci. 2023;15:1227112. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Chen Y, Denny KG, Harvey D, Farias ST, Mungas D, DeCarli C, et al. Progression from normal cognition to mild cognitive impairment in a diverse clinic-based and community-based elderly cohort. Alzheimers Dement. Apr 2017;13(4):399-405. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Langa KM, Levine DA. The diagnosis and management of mild cognitive impairment: a clinical review. JAMA. Dec 17, 2014;312(23):2551-2561. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Zhang Y, Natale G, Clouston S. Incidence of mild cognitive impairment, conversion to probable dementia, and mortality. Am J Alzheimers Dis Other Demen. 2021;36:15333175211012235. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Prince M, Bryce R, Ferri CP. World Alzheimer report 2011: the benefits of early diagnosis and intervention. Alzheimer’s Disease International. 2011. URL: https://www.alzint.org/u/WorldAlzheimerReport2011.pdf [accessed 2024-04-01]
  • Patnode CD, Perdue LA, Rossom RC, Rushkin MC, Redmond N, Thomas RG, et al. Screening for cognitive impairment in older adults: updated evidence report and systematic review for the US preventive services task force. JAMA. Feb 25, 2020;323(8):764-785. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Canadian Task Force on Preventive Health Care, Pottie K, Rahal R, Jaramillo A, Birtwhistle R, Thombs BD, et al. Recommendations on screening for cognitive impairment in older adults. CMAJ. Jan 05, 2016;188(1):37-46. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Tahami Monfared AA, Phan NT, Pearson I, Mauskopf J, Cho M, Zhang Q, et al. A systematic review of clinical practice guidelines for Alzheimer's disease and strategies for future advancements. Neurol Ther. Aug 2023;12(4):1257-1284. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Mattke S, Jun H, Chen E, Liu Y, Becker A, Wallick C. Expected and diagnosed rates of mild cognitive impairment and dementia in the U.S. medicare population: observational analysis. Alzheimers Res Ther. Jul 22, 2023;15(1):128. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Manly JJ, Tang MX, Schupf N, Stern Y, Vonsattel JP, Mayeux R. Frequency and course of mild cognitive impairment in a multiethnic community. Ann Neurol. Apr 2008;63(4):494-506. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Black CM, Ambegaonkar BM, Pike J, Jones E, Husbands J, Khandker RK. The diagnostic pathway from cognitive impairment to dementia in Japan: quantification using real-world data. Alzheimer Dis Assoc Disord. 2019;33(4):346-353. [ CrossRef ] [ Medline ]
  • Ritchie CW, Black CM, Khandker RK, Wood R, Jones E, Hu X, et al. Quantifying the diagnostic pathway for patients with cognitive impairment: real-world data from seven European and north American countries. J Alzheimers Dis. 2018;62(1):457-466. [ CrossRef ] [ Medline ]
  • Folstein MF, Folstein SE, McHugh PR. "Mini-mental state". A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. Nov 1975;12(3):189-198. [ CrossRef ] [ Medline ]
  • Tsoi KK, Chan JY, Hirai HW, Wong SY, Kwok TC. Cognitive tests to detect dementia: a systematic review and meta-analysis. JAMA Intern Med. Sep 2015;175(9):1450-1458. [ CrossRef ] [ Medline ]
  • Lopez MN, Charter RA, Mostafavi B, Nibut LP, Smith WE. Psychometric properties of the Folstein mini-mental state examination. Assessment. Jun 2005;12(2):137-144. [ CrossRef ] [ Medline ]
  • Nasreddine ZS, Phillips NA, Bédirian V, Charbonneau S, Whitehead V, Collin I, et al. The Montreal cognitive assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc. Apr 2005;53(4):695-699. [ CrossRef ] [ Medline ]
  • O'Caoimh R, Timmons S, Molloy DW. Screening for mild cognitive impairment: comparison of "MCI specific" screening instruments. J Alzheimers Dis. 2016;51(2):619-629. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Trzepacz PT, Hochstetler H, Wang S, Walker B, Saykin AJ, Alzheimer’s Disease Neuroimaging Initiative. Relationship between the Montreal cognitive assessment and mini-mental state examination for assessment of mild cognitive impairment in older adults. BMC Geriatr. Sep 07, 2015;15:107. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Nasreddine ZS, Phillips N, Chertkow H. Normative data for the Montreal Cognitive Assessment (MoCA) in a population-based sample. Neurology. Mar 06, 2012;78(10):765-766. [ CrossRef ] [ Medline ]
  • Monroe T, Carter M. Using the Folstein Mini Mental State Exam (MMSE) to explore methodological issues in cognitive aging research. Eur J Ageing. Sep 2012;9(3):265-274. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Damian AM, Jacobson SA, Hentz JG, Belden CM, Shill HA, Sabbagh MN, et al. The Montreal cognitive assessment and the mini-mental state examination as screening instruments for cognitive impairment: item analyses and threshold scores. Dement Geriatr Cogn Disord. 2011;31(2):126-131. [ CrossRef ] [ Medline ]
  • Kaufer DI, Williams CS, Braaten AJ, Gill K, Zimmerman S, Sloane PD. Cognitive screening for dementia and mild cognitive impairment in assisted living: comparison of 3 tests. J Am Med Dir Assoc. Oct 2008;9(8):586-593. [ CrossRef ] [ Medline ]
  • Gagnon C, Saillant K, Olmand M, Gayda M, Nigam A, Bouabdallaoui N, et al. Performances on the Montreal cognitive assessment along the cardiovascular disease continuum. Arch Clin Neuropsychol. Jan 17, 2022;37(1):117-124. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Cooley SA, Heaps JM, Bolzenius JD, Salminen LE, Baker LM, Scott SE, et al. Longitudinal change in performance on the Montreal cognitive assessment in older adults. Clin Neuropsychol. 2015;29(6):824-835. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • O'Caoimh R, Gao Y, McGlade C, Healy L, Gallagher P, Timmons S, et al. Comparison of the quick mild cognitive impairment (Qmci) screen and the SMMSE in screening for mild cognitive impairment. Age Ageing. Sep 2012;41(5):624-629. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • O'Caoimh R, Molloy DW. Comparing the diagnostic accuracy of two cognitive screening instruments in different dementia subtypes and clinical depression. Diagnostics (Basel). Aug 08, 2019;9(3):93. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Clarnette R, O'Caoimh R, Antony DN, Svendrovski A, Molloy DW. Comparison of the Quick Mild Cognitive Impairment (Qmci) screen to the Montreal Cognitive Assessment (MoCA) in an Australian geriatrics clinic. Int J Geriatr Psychiatry. Jun 2017;32(6):643-649. [ CrossRef ] [ Medline ]
  • Glynn K, Coen R, Lawlor BA. Is the Quick Mild Cognitive Impairment screen (QMCI) more accurate at detecting mild cognitive impairment than existing short cognitive screening tests? A systematic review of the current literature. Int J Geriatr Psychiatry. Dec 2019;34(12):1739-1746. [ CrossRef ] [ Medline ]
  • Lee MT, Chang WY, Jang Y. Psychometric and diagnostic properties of the Taiwan version of the quick mild cognitive impairment screen. PLoS One. 2018;13(12):e0207851. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Wallace SE, Donoso Brown EV, Simpson RC, D'Acunto K, Kranjec A, Rodgers M, et al. A comparison of electronic and paper versions of the Montreal cognitive assessment. Alzheimer Dis Assoc Disord. 2019;33(3):272-278. [ CrossRef ] [ Medline ]
  • Gagnon C, Olmand M, Dupuy EG, Besnier F, Vincent T, Grégoire CA, et al. Videoconference version of the Montreal cognitive assessment: normative data for Quebec-French people aged 50 years and older. Aging Clin Exp Res. Jul 2022;34(7):1627-1633. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Friemel TN. The digital divide has grown old: determinants of a digital divide among seniors. New Media & Society. Jun 12, 2014;18(2):313-331. [ CrossRef ]
  • Ventola CL. Mobile devices and apps for health care professionals: uses and benefits. P T. May 2014;39(5):356-364. [ FREE Full text ] [ Medline ]
  • Searles C, Farnsworth JL, Jubenville C, Kang M, Ragan B. Test–retest reliability of the BrainFx 360® performance assessment. Athl Train Sports Health Care. Jul 2019;11(4):183-191. [ CrossRef ]
  • Jones C, Miguel-Cruz A, Brémault-Phillips S. Technology acceptance and usability of the BrainFx SCREEN in Canadian military members and veterans with posttraumatic stress disorder and mild traumatic brain injury: mixed methods UTAUT study. JMIR Rehabil Assist Technol. May 13, 2021;8(2):e26078. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • McMurray J, Levy A, Holyoke P. Psychometric evaluation and workflow integration study of a tablet-based tool to detect mild cognitive impairment in older adults: protocol for a mixed methods study. JMIR Res Protoc. May 21, 2021;10(5):e25520. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Wilansky P, Eklund JM, Milner T, Kreindler D, Cheung A, Kovacs T, et al. Cognitive behavior therapy for anxious and depressed youth: improving homework adherence through mobile technology. JMIR Res Protoc. Nov 10, 2016;5(4):e209. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ammenwerth E, Iller C, Mahler C. IT-adoption and the interaction of task, technology and individuals: a fit framework and a case study. BMC Med Inform Decis Mak. Jan 09, 2006;6:3. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Goodhue DL, Thompson RL. Task-technology fit and individual performance. MIS Q. Jun 1995;19(2):213-236. [ CrossRef ]
  • Beuscher L, Grando VT. Challenges in conducting qualitative research with individuals with dementia. Res Gerontol Nurs. Jan 2009;2(1):6-11. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Howe E. Informed consent, participation in research, and the Alzheimer's patient. Innov Clin Neurosci. May 2012;9(5-6):47-51. [ FREE Full text ] [ Medline ]
  • Thorogood A, Mäki-Petäjä-Leinonen A, Brodaty H, Dalpé G, Gastmans C, Gauthier S, et al.; Global Alliance for Genomics and Health, Ageing and Dementia Task Team. Consent recommendations for research and international data sharing involving persons with dementia. Alzheimers Dement. Oct 2018;14(10):1334-1343. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Demeyere N, Haupt M, Webb SS, Strobel L, Milosevich ET, Moore MJ, et al. Introducing the tablet-based Oxford Cognitive Screen-Plus (OCS-Plus) as an assessment tool for subtle cognitive impairments. Sci Rep. Apr 12, 2021;11(1):8000. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Nasreddine ZS, Patel BB. Validation of Montreal cognitive assessment, MoCA, alternate French versions. Can J Neurol Sci. Sep 2016;43(5):665-671. [ CrossRef ] [ Medline ]
  • Mueller AE, Segal DL, Gavett B, Marty MA, Yochim B, June A, et al. Geriatric anxiety scale: item response theory analysis, differential item functioning, and creation of a ten-item short form (GAS-10). Int Psychogeriatr. Jul 2015;27(7):1099-1111. [ CrossRef ] [ Medline ]
  • Segal DL, June A, Payne M, Coolidge FL, Yochim B. Development and initial validation of a self-report assessment tool for anxiety among older adults: the Geriatric Anxiety Scale. J Anxiety Disord. Oct 2010;24(7):709-714. [ CrossRef ] [ Medline ]
  • Balsamo M, Cataldi F, Carlucci L, Fairfield B. Assessment of anxiety in older adults: a review of self-report measures. Clin Interv Aging. 2018;13:573-593. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Gatti A, Gottschling J, Brugnera A, Adorni R, Zarbo C, Compare A, et al. An investigation of the psychometric properties of the Geriatric Anxiety Scale (GAS) in an Italian sample of community-dwelling older adults. Aging Ment Health. Sep 2018;22(9):1170-1178. [ CrossRef ] [ Medline ]
  • Yochim BP, Mueller AE, June A, Segal DL. Psychometric properties of the Geriatric Anxiety Scale: comparison to the beck anxiety inventory and geriatric anxiety inventory. Clin Gerontol. Dec 06, 2010;34(1):21-33. [ CrossRef ]
  • Recent concussion (< 6 months ago) analysis result. Daisy Intelligence. 2016. URL: https://www.daisyintelligence.com/retail-solutions/ [accessed 2024-04-01]
  • Malloy DW, O'Caoimh R. The Quick Guide: Scoring and Administration Instructions for The Quick Mild Cognitive Impairment (Qmci) Screen. Waterford, Ireland. Newgrange Press; 2017.
  • O'Caoimh R, Gao Y, Svendovski A, Gallagher P, Eustace J, Molloy DW. Comparing approaches to optimize cut-off scores for short cognitive screening instruments in mild cognitive impairment and dementia. J Alzheimers Dis. 2017;57(1):123-133. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Breton A, Casey D, Arnaoutoglou NA. Cognitive tests for the detection of mild cognitive impairment (MCI), the prodromal stage of dementia: meta-analysis of diagnostic accuracy studies. Int J Geriatr Psychiatry. Feb 2019;34(2):233-242. [ CrossRef ] [ Medline ]
  • Umemneku Chikere CM, Wilson K, Graziadio S, Vale L, Allen AJ. Diagnostic test evaluation methodology: a systematic review of methods employed to evaluate diagnostic tests in the absence of gold standard - An update. PLoS One. 2019;14(10):e0223832. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Espinosa A, Alegret M, Boada M, Vinyes G, Valero S, Martínez-Lage P, et al. Ecological assessment of executive functions in mild cognitive impairment and mild Alzheimer's disease. J Int Neuropsychol Soc. Sep 2009;15(5):751-757. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hawkins DM, Garrett JA, Stephenson B. Some issues in resolution of diagnostic tests using an imperfect gold standard. Stat Med. Jul 15, 2001;20(13):1987-2001. [ CrossRef ] [ Medline ]
  • Hadgu A, Dendukuri N, Hilden J. Evaluation of nucleic acid amplification tests in the absence of a perfect gold-standard test: a review of the statistical and epidemiologic issues. Epidemiology. Sep 2005;16(5):604-612. [ CrossRef ] [ Medline ]
  • Marx RG, Menezes A, Horovitz L, Jones EC, Warren RF. A comparison of two time intervals for test-retest reliability of health status instruments. J Clin Epidemiol. Aug 2003;56(8):730-735. [ CrossRef ] [ Medline ]
  • Paiva CE, Barroso EM, Carneseca EC, de Pádua Souza C, Dos Santos FT, Mendoza López RV, et al. A critical analysis of test-retest reliability in instrument validation studies of cancer patients under palliative care: a systematic review. BMC Med Res Methodol. Jan 21, 2014;14:8. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Streiner DL, Kottner J. Recommendations for reporting the results of studies of instrument and scale development and testing. J Adv Nurs. Sep 2014;70(9):1970-1979. [ CrossRef ] [ Medline ]
  • Streiner DL. A checklist for evaluating the usefulness of rating scales. Can J Psychiatry. Mar 1993;38(2):140-148. [ CrossRef ] [ Medline ]
  • Peyre H, Leplège A, Coste J. Missing data methods for dealing with missing items in quality of life questionnaires. A comparison by simulation of personal mean score, full information maximum likelihood, multiple imputation, and hot deck techniques applied to the SF-36 in the French 2003 decennial health survey. Qual Life Res. Mar 2011;20(2):287-300. [ CrossRef ] [ Medline ]
  • Nevado-Holgado AJ, Kim CH, Winchester L, Gallacher J, Lovestone S. Commonly prescribed drugs associate with cognitive function: a cross-sectional study in UK Biobank. BMJ Open. Nov 30, 2016;6(11):e012177. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Moore AR, O'Keeffe ST. Drug-induced cognitive impairment in the elderly. Drugs Aging. Jul 1999;15(1):15-28. [ CrossRef ] [ Medline ]
  • Rogers J, Wiese BS, Rabheru K. The older brain on drugs: substances that may cause cognitive impairment. Geriatr Aging. 2008;11(5):284-289. [ FREE Full text ]
  • Marvanova M. Drug-induced cognitive impairment: effect of cardiovascular agents. Ment Health Clin. Jul 2016;6(4):201-206. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Espeland MA, Rapp SR, Manson JE, Goveas JS, Shumaker SA, Hayden KM, et al.; WHIMSY and WHIMS-ECHO Study Groups. Long-term effects on cognitive trajectories of postmenopausal hormone therapy in two age groups. J Gerontol A Biol Sci Med Sci. Jun 01, 2017;72(6):838-845. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Luis CA, Keegan AP, Mullan M. Cross validation of the Montreal cognitive assessment in community dwelling older adults residing in the Southeastern US. Int J Geriatr Psychiatry. Feb 2009;24(2):197-201. [ CrossRef ] [ Medline ]
  • Cunje A, Molloy DW, Standish TI, Lewis DL. Alternate forms of logical memory and verbal fluency tasks for repeated testing in early cognitive changes. Int Psychogeriatr. Feb 2007;19(1):65-75. [ CrossRef ] [ Medline ]
  • Molloy DW, Standish TI, Lewis DL. Screening for mild cognitive impairment: comparing the SMMSE and the ABCS. Can J Psychiatry. Jan 2005;50(1):52-58. [ CrossRef ] [ Medline ]
  • Streiner DL, Norman GR. Health Measurement Scales: A Practical Guide to Their Development and Use. 4th edition. Oxford, UK. Oxford University Press; 2008.
  • Koo TK, Li MY. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J Chiropr Med. Jun 2016;15(2):155-163. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • McHugh ML. Interrater reliability: the kappa statistic. Biochem Med (Zagreb). 2012;22(3):276-282. [ FREE Full text ] [ Medline ]
  • Zhuang L, Yang Y, Gao J. Cognitive assessment tools for mild cognitive impairment screening. J Neurol. May 2021;268(5):1615-1622. [ CrossRef ] [ Medline ]
  • Zhang J, Wang L, Deng X, Fei G, Jin L, Pan X, et al. Five-minute cognitive test as a new quick screening of cognitive impairment in the elderly. Aging Dis. Dec 2019;10(6):1258-1269. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Feldman HH, Jacova C, Robillard A, Garcia A, Chow T, Borrie M, et al. Diagnosis and treatment of dementia: 2. Diagnosis. CMAJ. Mar 25, 2008;178(7):825-836. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Sabbagh MN, Boada M, Borson S, Chilukuri M, Dubois B, Ingram J, et al. Early detection of mild cognitive impairment (MCI) in primary care. J Prev Alzheimers Dis. 2020;7(3):165-170. [ CrossRef ] [ Medline ]
  • Milne A. Dementia screening and early diagnosis: the case for and against. Health Risk Soc. Mar 05, 2010;12(1):65-76. [ CrossRef ]
  • Screening tools to identify adults with cognitive impairment associated with dementia: diagnostic accuracy. Canadian Agency for Drugs and Technologies in Health. 2014. URL: https:/​/www.​cadth.ca/​sites/​default/​files/​pdf/​htis/​nov-2014/​RB0752%20Cognitive%20Assessments%20for%20Dementia%20Final.​pdf [accessed 2024-04-01]
  • Chehrehnegar N, Nejati V, Shati M, Rashedi V, Lotfi M, Adelirad F, et al. Early detection of cognitive disturbances in mild cognitive impairment: a systematic review of observational studies. Psychogeriatrics. Mar 2020;20(2):212-228. [ CrossRef ] [ Medline ]
  • Chan JY, Yau ST, Kwok TC, Tsoi KK. Diagnostic performance of digital cognitive tests for the identification of MCI and dementia: a systematic review. Ageing Res Rev. Dec 2021;72:101506. [ CrossRef ] [ Medline ]
  • Cubillos C, Rienzo A. Digital cognitive assessment tests for older adults: systematic literature review. JMIR Ment Health. Dec 08, 2023;10:e47487. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Chen R, Foschini L, Kourtis L, Signorini A, Jankovic F, Pugh M, et al. Developing measures of cognitive impairment in the real world from consumer-grade multimodal sensor streams. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2019. Presented at: KDD '19; August 4-8, 2019; Anchorage, AK. p. 2145. URL: https://dl.acm.org/doi/10.1145/3292500.3330690 [ CrossRef ]
  • Koo BM, Vizer LM. Mobile technology for cognitive assessment of older adults: a scoping review. Innov Aging. Jan 2019;3(1):igy038. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Zygouris S, Ntovas K, Giakoumis D, Votis K, Doumpoulakis S, Segkouli S, et al. A preliminary study on the feasibility of using a virtual reality cognitive training application for remote detection of mild cognitive impairment. J Alzheimers Dis. 2017;56(2):619-627. [ CrossRef ] [ Medline ]
  • Liu Q, Song H, Yan M, Ding Y, Wang Y, Chen L, et al. Virtual reality technology in the detection of mild cognitive impairment: a systematic review and meta-analysis. Ageing Res Rev. Jun 2023;87:101889. [ CrossRef ] [ Medline ]
  • Fayemiwo MA, Olowookere TA, Olaniyan OO, Ojewumi TO, Oyetade IS, Freeman S, et al. Immediate word recall in cognitive assessment can predict dementia using machine learning techniques. Alzheimers Res Ther. Jun 15, 2023;15(1):111. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Statsenko Y, Meribout S, Habuza T, Almansoori TM, van Gorkom KN, Gelovani JG, et al. Patterns of structure-function association in normal aging and in Alzheimer's disease: screening for mild cognitive impairment and dementia with ML regression and classification models. Front Aging Neurosci. 2022;14:943566. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Roebuck-Spencer TM, Glen T, Puente AE, Denney RL, Ruff RM, Hostetter G, et al. Cognitive screening tests versus comprehensive neuropsychological test batteries: a national academy of neuropsychology education paper†. Arch Clin Neuropsychol. Jun 01, 2017;32(4):491-498. [ CrossRef ] [ Medline ]
  • Jammeh EA, Carroll CB, Pearson SW, Escudero J, Anastasiou A, Zhao P, et al. Machine-learning based identification of undiagnosed dementia in primary care: a feasibility study. BJGP Open. Jul 2018;2(2):bjgpopen18X101589. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Riello M, Rusconi E, Treccani B. The role of brief global cognitive tests and neuropsychological expertise in the detection and differential diagnosis of dementia. Front Aging Neurosci. 2021;13:648310. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • McDonnell M, Dill L, Panos S, Amano S, Brown W, Giurgius S, et al. Verbal fluency as a screening tool for mild cognitive impairment. Int Psychogeriatr. Sep 2020;32(9):1055-1062. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Wojtowicz A, Larner AJ. Diagnostic test accuracy of cognitive screeners in older people. Prog Neurol Psychiatry. Mar 20, 2017;21(1):17-21. [ CrossRef ]
  • Larner AJ. Cognitive screening instruments for the diagnosis of mild cognitive impairment. Prog Neurol Psychiatry. Apr 07, 2016;20(2):21-26. [ CrossRef ]
  • Heintz BD, Keenan KG. Spiral tracing on a touchscreen is influenced by age, hand, implement, and friction. PLoS One. 2018;13(2):e0191309. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Laguna K, Babcock RL. Computer anxiety in young and older adults: implications for human-computer interactions in older populations. Comput Human Behav. Aug 1997;13(3):317-326. [ CrossRef ]
  • Wild KV, Mattek NC, Maxwell SA, Dodge HH, Jimison HB, Kaye JA. Computer-related self-efficacy and anxiety in older adults with and without mild cognitive impairment. Alzheimers Dement. Nov 2012;8(6):544-552. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Wiechmann D, Ryan AM. Reactions to computerized testing in selection contexts. Int J Sel Assess. Jul 30, 2003;11(2-3):215-229. [ CrossRef ]
  • Gass CS, Curiel RE. Test anxiety in relation to measures of cognitive and intellectual functioning. Arch Clin Neuropsychol. Aug 2011;26(5):396-404. [ CrossRef ] [ Medline ]
  • Barbic D, Kim B, Salehmohamed Q, Kemplin K, Carpenter CR, Barbic SP. Diagnostic accuracy of the Ottawa 3DY and short blessed test to detect cognitive dysfunction in geriatric patients presenting to the emergency department. BMJ Open. Mar 16, 2018;8(3):e019652. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Owens AP, Ballard C, Beigi M, Kalafatis C, Brooker H, Lavelle G, et al. Implementing remote memory clinics to enhance clinical care during and after COVID-19. Front Psychiatry. 2020;11:579934. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Geddes MR, O'Connell ME, Fisk JD, Gauthier S, Camicioli R, Ismail Z, et al. Alzheimer Society of Canada Task Force on Dementia Care Best Practices for COVID‐19. Remote cognitive and behavioral assessment: report of the Alzheimer Society of Canada task force on dementia care best practices for COVID-19. Alzheimers Dement (Amst). 2020;12(1):e12111. [ FREE Full text ] [ CrossRef ] [ Medline ]

Abbreviations

Edited by G Eysenbach, T de Azevedo Cardoso; submitted 29.01.24; peer-reviewed by J Gao, MJ Moore; comments to author 20.02.24; revised version received 05.03.24; accepted 19.03.24; published 19.04.24.

©Josephine McMurray, AnneMarie Levy, Wei Pang, Paul Holyoke. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 19.04.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.


  • Open access
  • Published: 18 April 2024

A method for identifying different types of university research teams

  • Zhe Cheng   ORCID: orcid.org/0009-0002-5120-6124 1 ,
  • Yihuan Zou 1 &
  • Yueyang Zheng   ORCID: orcid.org/0000-0001-7751-2619 2  

Humanities and Social Sciences Communications volume 11, Article number: 523 (2024)


Identifying research teams constitutes a fundamental step in team science research, and universities harbor diverse types of such teams. This study introduces a method and proposes algorithms for team identification, encompassing the project-based research team (Pbrt), the individual-based research team (Ibrt), the backbone-based research group (Bbrg), and the representative research group (Rrg), scrutinizing aspects such as project, contribution, collaboration, and similarity. Drawing on two top universities in Materials Science and Engineering as case studies, this research reveals that university research teams predominantly manifest as backbone-based research groups. The distribution of members within these groups adheres to Price’s Law, indicating a concentration of research funding among a minority of research groups. Furthermore, the representative research groups in universities exhibit interdisciplinary characteristics. Notably, significant differences exist in collaboration mode and member structures among high-level backbone-based research groups across diverse cultural backgrounds.


Introduction

Team science has emerged as a burgeoning field of inquiry, attracting the attention of numerous scholars (e.g., Stokols et al., 2008; Bozeman & Youtie, 2018; Coles et al., 2022; Deng et al., 2022; Forscher et al., 2023), who endeavor to explore and summarize strategies for fostering effective research teams. Conducting team science research helps improve team efficacy. The National Institutes of Health in the USA pointed out that team science is a new interdisciplinary field that empirically examines the processes by which scientific teams, research centers, and institutes, both large and small, are structured (National Research Council, 2015). In accordance with this conceptualization, research teams can be delineated into various types based on their size and organizational form. Existing research also takes diverse teams as focal points when probing issues such as team construction and team performance. For example, Wu et al. (2019) and Abramo et al. (2017) regard the co-authors of a single paper as a team, discussing issues of research team innovation and benefits. Meanwhile, Zhao et al. (2014) and Lungeanu et al. (2014) consider project members to be a research team, exploring issues such as internal interest distribution and team performance. Boardman and Ponomariov (2014), Lee et al. (2008), and Okamoto and the Centers for Population Health and Health Disparities Evaluation Working Group (2015) view the university’s research center as a research group, investigating themes such as member collaboration, management, and knowledge management portals.

Regarding the definition of research teams, some researchers believe that a research team is a collection of people who work together to achieve a common goal and discover new phenomena through research by sharing information, resources, and professional expertise (Liu et al., 2020). Conversely, others argue that groups operating across distinct temporal and spatial contexts, such as virtual teams, do not meet the criteria for teams because they engage solely in collaboration between, rather than within, teams. According to this perspective, research teams should be individuals collaborating over an extended period (typically exceeding six months) (Barjak & Robinson, 2008). Contemporary discourse on team science tends to embrace a broad conceptualization wherein research teams include both small-scale teams comprising 2–10 individuals and larger groups consisting of more than 10 members (National Research Council, 2015). Research teams are typically formed to conduct a project or produce research papers, whereas research groups are formed to solve complex problems, drawing members from diverse departments or geographical locations.

Obviously, different research inquiries are linked to different types of research teams. Micro-level investigations, such as those probing the impact of international collaboration on citations, often regard co-authors of research papers as research teams. Conversely, meso-level inquiries, including those exploring factors impacting team organization and management, often view center-based researchers as research groups. Although various approaches can be adopted to identify research teams, such as retrieving names from research centers’ websites or obtaining lists of project-funded members, when the study involves a large sample size and requires more data to measure the performance of research teams, it becomes necessary to use bibliometric methods for team identification.

Existing literature on team identification uses social network analysis (Zhang et al., 2019), cohesive subgroup analysis (Dino et al., 2020), the faction algorithm (Imran et al., 2018), the FP algorithm (Liao, 2018), and other methods. However, these identification methods often target a single type of research team or fail to categorize the identified research teams. Moreover, existing studies mostly explore the evolution of specific disciplines (Wang et al., 2017), with limited attention devoted to identifying university research teams and the factors influencing team effectiveness. Therefore, this study develops algorithms to identify diverse university research teams, drawing insights from two universities with different cultural backgrounds. It aims to address two research questions:

How can we identify different types of university research teams?

What are the characteristics of research groups within universities?

Literature review

Why is it necessary to identify research teams? Existing research on scientific research teams mostly identifies team members first, through their names on funding project lists or institutions’ websites, and then conducts research through questionnaires or interviews. However, this methodology may compromise research validity for several reasons. Firstly, the mere inclusion of individuals on funding project lists does not guarantee genuine research team membership or substantive collaboration among members. Secondly, institutional websites generally announce only the important research team members, potentially overlooking auxiliary personnel or important members from external institutions. Thirdly, reliance solely on lists of research team members fails to capture nuanced information about the team, such as members’ research ability or communication intensity, thus hindering the exploration of team science-related issues.

Consequently, researchers have turned to co-authorship and citation to identify research teams using established software tools and customized algorithms. For example, Li and Tan ( 2012 ) applied UCINET and social network analysis to identify university research teams, while Hu et al. ( 2019 ) used Citespace to analyze research communities of four disciplines in China, the UK, and the US. Similarly, some researchers also identify the members and leaders of research teams by using and optimizing existing algorithms. For example, Liao ( 2018 ) applied the Fast-Unfolding algorithm to identify research teams in the field of solar cells, while Yu et al. ( 2020 ) and Li et al. ( 2017 ) employed the Louvain community discovery algorithm to identify research teams in artificial intelligence. Lv et al. ( 2016 ) applied the FP-GROWTH algorithm to identify core R&D teams. Yu et al. ( 2018 ) used the faction algorithm to identify research teams in intelligence. Dino et al. ( 2020 ) developed the CL-leader algorithm to confirm research teams and their leaders. Boyack and Klavans ( 2014 ) regard researchers engaged in the same research topic as research teams based on citation information. Notably, these community detection algorithms complement each other, offering versatile tools for identifying research teams.

Despite their utility, these identification methods are not without limitations. Fixed software algorithms are constrained by predefined rules, posing challenges for researchers seeking to customize identification criteria. Custom algorithms written in programming languages can achieve high accuracy, but they tend to overemphasize the connections between members while neglecting the definition of a research team. In addition, research based on co-authorship networks and community detection algorithms faces inherent problems: (1) ensuring temporal consistency in co-authorship networks is challenging because publication timelines vary, potentially undermining the temporal alignment of team member collaborations; (2) team identification results are unstable, since different identification standards produce different outcomes; (3) such methods assign each member to a single research team, whereas in practice researchers often participate in multiple teams in different roles, or the same members conduct research in different team combinations.

In summary, research teams in a specific field can be identified from co-authorship information by designing or adapting identification algorithms. However, more accurate identification requires attention to the nuanced definition of research teams. Therefore, this study focuses on university research teams, addressing temporal and spatial collaboration issues among team members by incorporating project information and first-author information. It addresses the classification of research team members by introducing Price’s Law and Everett’s Rule, and it handles members’ multiple affiliations through the Jaccard Similarity Coefficient and the Louvain Algorithm. Ultimately, this study aims to achieve classification-based recognition of university research teams.

Team identification method

An effective team identification method requires both fidelity to the definition of research teams and the ability to translate that definition into operable program code. University research teams, by definition, comprise researchers collaborating towards a shared objective. As a typical form of research team output, a co-authored scientific paper implies information exchange and interaction among team members. This study therefore uses co-authorship relationships within papers to reflect the collaborative relationships among research team members. In this section, novel algorithms for identifying research teams are proposed to address deficiencies observed in prior research.

Classification of research team members

A researcher might be part of multiple research teams, with varying roles within each. Members of the research team can be categorized according to how the research team is defined.

The original idea of team member classification

The prevailing notion of teams underscores the collaborative efforts between individual team members and their contributions toward achieving research objectives. This study similarly classifies team members based on these dual dimensions.

In terms of overall contributions, members who make substantial contributions are typically seen as pivotal figures within the research team, providing the primary impetus for the team’s productivity. Conversely, those with lesser input only contribute to specific facets of the team’s goals and engage in limited research activities, thus being regarded as standard team members.

In terms of collaboration, it is essential to recognize that a high level of contribution does not in itself denote a core position within a team. Collaboration among team members serves as an important indicator of their identity within the research team. Based on collaboration between members, this study regards researchers who contribute heavily and collaborate with many other high-contribution members as core members of the research team, whereas members who contribute heavily but collaborate with only a limited number of high-contribution members are identified as backbone members. Similarly, members with low contributions who nevertheless collaborate widely with high contributors are categorized as ordinary members, and those with low contributions and limited collaboration with high-contributing members are regarded as marginal members of the research team.

Establishment of team member classification criteria

This study introduces Price’s Law and Everett’s Rule to realize the idea of team member classification.

In terms of overall contribution, the well-known bibliometrician Derek de Solla Price, drawing on Lotka’s Law, deduced that a scientist qualifies as prolific when their publication count is at least 0.749 times the square root of the publication count of the most prolific scientist in the group. For example, if the most prolific member of a group published 100 papers, the threshold is 0.749 × √100 ≈ 7.5, so members with eight or more papers count as prolific. Existing research has also used this law to analyze the prolific authors of an organization. This study regards prolific authors who meet Price’s Law as the important members who contribute most to the research team.

In terms of collaboration, existing research mostly employs the concept of factions. A faction is a relationship in which members reciprocate ties and cannot readily join new groups without altering the reciprocal nature of their factional ties. In real-world settings, however, relationships with such strictly reciprocal characteristics are uncommon. Therefore, to ensure the applicability and stability of the faction concept, Seidman and Foster (1978) proposed the k-plex: in a group of size n, when every member has at least n − k direct connections within the group, the group is called a k-plex. As k increases, the stability of the faction decreases. Addressing this concern, the sociologist Martin Everett (2002), based on an empirical rule, proposed specific values of k and corresponding minimum group sizes, stipulating that the overall team size should not fall below 2k − 1 (Scott, 2017). The expression is:

$$n \geq 2k - 1$$

In other words, for a k-plex, the most acceptable definition of a faction is one in which each member of the team is directly connected to at least (n − 1)/2 other members. Applied to research teams, this empirical guideline requires that each team member maintain collaborative ties with at least half of the team.

Based on Price’s Law and Everett’s Empirical Rule, this study gives the criteria for distinguishing prolific authors, core members, backbone members, ordinary members, and marginal members of research teams. The specifics are shown in Table 1.
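
To make the classification concrete, the sketch below shows one way these criteria could be operationalized in Python with networkx. The 0.749 √(max count) threshold follows Price’s Law as described above, and the requirement of ties to at least (n − 1)/2 prolific members follows Everett’s rule; the exact cut-offs in Table 1 may differ, so this is an illustrative reading rather than the published implementation.

```python
import networkx as nx

def classify_members(G: nx.Graph, pub_counts: dict) -> dict:
    """Assign core/backbone/ordinary/marginal roles to authors.

    G is an undirected co-authorship network for one team's paper
    cluster; pub_counts maps each author in G to their paper count in
    that cluster. The thresholds are one plausible reading of Price's
    Law and Everett's rule, not necessarily the authors' Table 1.
    """
    # Price's Law: prolific authors publish at least 0.749 * sqrt(max count)
    threshold = 0.749 * max(pub_counts.values()) ** 0.5
    prolific = {a for a, n in pub_counts.items() if n >= threshold}

    # Everett's rule: direct ties to at least (n - 1)/2 of the prolific members
    min_ties = max(1, (len(prolific) - 1) / 2)

    roles = {}
    for author in G.nodes:
        ties = sum(1 for nb in G.neighbors(author) if nb in prolific)
        if author in prolific:
            roles[author] = "core" if ties >= min_ties else "backbone"
        else:
            roles[author] = "ordinary" if ties >= min_ties else "marginal"
    return roles
```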

Classification of research teams

Within universities, a diverse array of research teams exists, categorized by their scale, the characteristics of their funded projects, and the platforms they rely upon. This study proposes identification algorithms for project-based teams, individual-based teams, backbone-based groups, and representative groups.

Project-based research teams: identification based on research projects

Traditional methods for identifying research teams attribute co-authorship to collaboration among multiple authors without considering the time scope. However, in practice, collaborations vary in content and duration. Therefore, in the identification process, it is necessary to introduce appropriate standards to distinguish varying degrees of collaboration and content among scholars.

Research projects serve as evidence that researchers are engaged in the same research topic, indicating that the authors of the resulting papers belong to the same research team. Upon formal acceptance of a research paper, authors typically append funding information to it. Therefore, papers sharing the same funding information can be aggregated into paper clusters to identify the members of the research team that completed the funded project. The specific steps for identification based on a single research project fund are as follows.

Firstly, extract the funding number and regard all papers attached with the same funding number as a paper cluster. Secondly, construct a co-authorship network based on the paper cluster. Thirdly, identify the research team using the team member classification criteria.
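
As a minimal sketch of these steps, the snippet below groups paper records by funding number and builds one co-authorship network per cluster; the field names 'funding_numbers' and 'authors' are hypothetical stand-ins for whatever the exported records actually contain.

```python
import itertools
from collections import defaultdict
import networkx as nx

def project_based_networks(papers):
    """Cluster papers by funding number (step 1) and build one
    co-authorship network per cluster (step 2).

    `papers` is a list of dicts with hypothetical keys
    'funding_numbers' (list of grant IDs) and 'authors' (author names
    in order); real exported records may be structured differently.
    """
    clusters = defaultdict(list)
    for paper in papers:
        for grant in paper.get("funding_numbers", []):
            clusters[grant].append(paper)

    networks = {}
    for grant, cluster in clusters.items():
        G = nx.Graph()
        for paper in cluster:
            G.add_nodes_from(paper["authors"])
            # every pair of co-authors on one paper is a collaboration tie
            G.add_edges_from(itertools.combinations(paper["authors"], 2))
        # step 3 would apply the member classification criteria to each G
        networks[grant] = G
    return networks
```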

Individual-based research teams: team identification based on the first author

For research papers lacking project numbers, clustering can be performed based on the contributions and research experience of the authors. Each co-author of a research paper contributes differently to its content. In 2014, the Consortia Advancing Standards in Research Administration Information (CASRAI) proposed a classification of paper contributions comprising 14 roles: conceptualization, data curation, formal analysis, funding acquisition, investigation, methodology, project administration, resources, software, supervision, validation, visualization, writing (original draft), and writing (review and editing).

In this study, the first author of a paper lacking project funding is considered the initiator of the research, while the other authors are seen as contributors who advance and finalize it. For papers not affiliated with any project, the first author and all papers they published form a paper group for team identification. The procedure entails the following steps: first, gather the first author and all papers authored by them within the identification period to constitute a paper group; second, construct a co-authorship network from the papers within the group; third, identify the research team using the team member classification criteria.
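
The unfunded papers can be handled analogously; the sketch below, again using the hypothetical 'funding_numbers' and 'authors' fields, groups them by first author before the same network construction and classification steps are applied.

```python
import itertools
from collections import defaultdict
import networkx as nx

def first_author_networks(papers):
    """Group papers without funding numbers by their first author and
    build one co-authorship network per paper group."""
    groups = defaultdict(list)
    for paper in papers:
        if not paper.get("funding_numbers"):           # no project attached
            groups[paper["authors"][0]].append(paper)  # first author as initiator

    networks = {}
    for first_author, cluster in groups.items():
        G = nx.Graph()
        for paper in cluster:
            G.add_nodes_from(paper["authors"])
            G.add_edges_from(itertools.combinations(paper["authors"], 2))
        networks[first_author] = G
    return networks
```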

Backbone-based research group: merging based on project-based and individual-based research teams

Research teams can be identified either by a single project number or by individual researchers. Upon identification, it becomes evident that many research teams share similar members, because a research team may engage in multiple projects and some members collaborate without funding support. While these identification algorithms are suitable for evaluating the quality of a research article or a funded project, they may not suffice when assessing a research group or the key factors affecting its performance. To address this, highly similar individual-based or project-based research teams must be merged according to specific criteria. The merged unit is termed a group, as it encompasses multiple project-based and individual-based research teams.

In the pursuit of building world-class universities, governments worldwide often emphasize the necessity of fostering research teams led by discipline backbones. In this vein, this study further develops a backbone-based research group identification algorithm, which considers project-based and individual-based research teams.

Identification of university discipline backbone members

Previous studies have summarized the characteristics of university discipline backbones, revealing that these individuals often excel in indicators such as degree centrality, eigenvector centrality, and betweenness centrality. Each centrality indicator shows a strong positive correlation with an author’s output volume, indicating that highly productive researchers with more collaborators are more likely to be discipline backbones. Based on these characteristics, Price’s Law is applied, defining discipline backbone members as researchers whose publication count exceeds 0.749 times the square root of the highest publication count within the discipline.
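
Under this definition, backbone detection reduces to a discipline-wide application of the same Price threshold; the following sketch assumes a simple mapping from author to publication count within the discipline at one university.

```python
def discipline_backbones(pub_counts: dict) -> set:
    """Return the discipline backbone members of a university.

    pub_counts maps every author publishing in the discipline at the
    university to their publication count over the study window.
    """
    threshold = 0.749 * max(pub_counts.values()) ** 0.5
    return {author for author, n in pub_counts.items() if n >= threshold}

# e.g. if the most prolific author has 400 papers, the threshold is
# 0.749 * 20 ≈ 15, so authors with 15 or more papers qualify
```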

Team identification with discipline backbone members as the core

Following the identification of discipline backbones, this study consolidates paper groups wherein the discipline backbone serves as the core member of either individual-based or project-based research teams. Subsequently, backbone-based research groups are formed.

Merging based on similarity perspective

It should be noted that different discipline backbones may simultaneously participate as core members in the same individual-based or project-based research teams. Consequently, distinct backbone-based research groups may encompass duplicate project-based and individual-based research teams, necessitating the merging of backbone-based research groups.

To address this redundancy, this study introduces the concept of similarity from community identification. In the community identification process, existing algorithms often decide whether to incorporate members into a community based on their level of similarity. Among the various measures of similarity, the Jaccard coefficient is deemed to possess superior validity and robustness for merging nodes within network communities (Wang et al., 2020). Its calculation formula is as follows:

$$J(i, j) = \frac{|N_i \cap N_j|}{|N_i \cup N_j|}$$

N_i denotes the nodes within subset i, while N_j denotes the nodes within subset j; N_i ∩ N_j signifies the nodes present in both subsets, whereas N_i ∪ N_j encompasses all nodes in subsets i and j. Existing research shows that when the Jaccard coefficient equals or exceeds 0.5 (Guo et al., 2022), the community identification algorithm achieves optimal precision.

In the context of this study, N_i represents the core and backbone members of research group i, while N_j denotes the core and backbone members of research group j. If these two groups exhibit significant overlap in core and backbone members, the papers from both research groups are merged into a new set of papers to identify the research team.

Given the efficacy of the Jaccard similarity measure in identifying community networks and merging, this study employs this principle to merge backbone-based research groups. Specifically, groups are merged if the Jaccard similarity coefficient between their core and backbone members equals or exceeds 0.5. Subsequently, new research groups are formed based on the merged set of papers.
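
A minimal sketch of this merging rule is given below; it repeatedly merges any two groups whose core and backbone member sets reach the 0.5 Jaccard threshold. This is one straightforward way to realize the criterion, though the merge order used by the authors is not specified.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity coefficient of two member sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def merge_similar_groups(groups: dict, threshold: float = 0.5) -> dict:
    """Merge backbone-based groups with similar core/backbone members.

    `groups` maps a group ID to its set of core and backbone members;
    merging repeats until no pair reaches the threshold.
    """
    groups = dict(groups)
    changed = True
    while changed:
        changed = False
        ids = list(groups)
        for i, gi in enumerate(ids):
            for gj in ids[i + 1:]:
                if jaccard(groups[gi], groups[gj]) >= threshold:
                    groups[gi] |= groups.pop(gj)   # fold gj into gi
                    changed = True
                    break
            if changed:
                break
    return groups
```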

It is important to note that during the merging process, certain research teams within a backbone-based group may be used multiple times. Initially, merging is performed on the core and backbone members of the backbone-based research groups, according to the Jaccard coefficient criterion. However, because project-based or individual-based research teams within a backbone-based research group may be reused, producing similar sets of research papers across different groups, the study further tested for duplication among the merged paper sets of the various groups. During this process, it was found that the paper sets of different groups often exhibit similarity because they are associated with multiple funding projects. Therefore, a principle of “if connected, then merged” was adopted among groups with highly similar paper sets, ensuring the heterogeneity of papers within the final merged research groups.
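
The “if connected, then merged” rule can be read as taking connected components of a graph whose nodes are groups and whose edges link groups with heavily overlapping paper sets; the sketch below assumes each group is represented simply by the set of its paper IDs.

```python
import networkx as nx

def merge_connected_groups(group_papers: dict, threshold: float = 0.5):
    """Merge backbone-based groups whose paper sets overlap heavily.

    group_papers maps a group ID to the set of its paper IDs. Groups
    whose paper sets reach the Jaccard threshold are linked, and each
    connected component of linked groups becomes one merged group.
    """
    link_graph = nx.Graph()
    link_graph.add_nodes_from(group_papers)
    ids = list(group_papers)
    for i, gi in enumerate(ids):
        for gj in ids[i + 1:]:
            union = group_papers[gi] | group_papers[gj]
            if union and len(group_papers[gi] & group_papers[gj]) / len(union) >= threshold:
                link_graph.add_edge(gi, gj)   # "connected"

    return [set().union(*(group_papers[g] for g in comp))
            for comp in nx.connected_components(link_graph)]
```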

The generation process of the backbone-based research groups is illustrated in Fig. 1 below. Initially, university discipline backbones α, β, γ, θ, δ, and ε are each designated as core members within project-based or individual-based research teams A, B, C, D, E, and F. Among these, the core and backbone members of αβγ, γθ, θδ, and δε meet the Jaccard merging standard, so lines are generated between them. After the first merge, the Jaccard coefficients of the papers of αβγ, γθ, θδ, and δε are calculated, and lines are generated between γθ and θδ, and between θδ and δε, because of their highly duplicated papers. Finally, αβγ and γθδε are retained based on the merging rule.

Fig. 1. α, β, γ, θ, δ, and ε are core members of project-based or individual-based research teams; A, B, C, D, E, and F are project-based or individual-based research teams. From step 1 to step 2, research groups are merged according to the Jaccard coefficient between research team members; from step 2 to step 3, research groups are merged according to the Jaccard coefficient between research group papers.

In summary, identifying a backbone-based research group involves the following steps: (1) identify prolific authors within the university’s discipline by analyzing all papers published in the field, treating them as the discipline’s backbone members; (2) merge the project-based and individual-based research teams in which university discipline backbones are core members, thereby forming backbone-based research groups; (3) merge the backbone-based research groups from step (2) according to the Jaccard coefficient between their core and backbone members; (4) calculate the Jaccard coefficient of the papers of the groups merged in step (3), merge the groups with significant paper overlap, and generate the final backbone-based research groups.

The research groups identified through the above steps offer two advantages: Firstly, they integrate similar project-based and individual-based research teams, avoiding redundancy in team identification outcomes. Secondly, the same member may participate in different research teams, assuming distinct roles within each, thus better reflecting the complexity of scientific research practices.

Representative team: consolidation via backbone-based research group

When universities introduce their research groups to external parties, they typically highlight the most significant research members within the institution. Although the backbone-based research group has condensed the project-based and individual-based research teams, there may still be some overlap among members from different backbone-based research groups.

In order to create condensed and representative research groups that accurately reflect the development of the university’s discipline, this study extracts the core and backbone members identified in the backbone-based research groups. It then identifies the representative groups using the Louvain algorithm (Blondel et al., 2008), which is widely employed in research group identification. This algorithm facilitates the integration of important members from different backbone-based research groups while ensuring there is no redundancy among group members. The merging process is shown in Fig. 2.
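
A minimal sketch of this step with networkx is shown below; recent networkx releases ship a Louvain implementation as nx.community.louvain_communities, and the input graph is assumed to contain only the core and backbone members extracted from the backbone-based groups, with co-authorship edges between them.

```python
import networkx as nx

def representative_groups(core_backbone_graph: nx.Graph, seed: int = 42):
    """Partition the core/backbone co-authorship network into
    non-overlapping representative research groups via Louvain."""
    communities = nx.community.louvain_communities(core_backbone_graph, seed=seed)
    return [set(c) for c in communities]
```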

Fig. 2. Each pass consists of two phases: one in which modularity is optimized by allowing only local changes of communities, and one in which the communities found are aggregated to build a new network of communities. The passes are repeated iteratively until no further increase in modularity is possible.

Research team identification process and its pros and cons

Overall, the method of identifying university research teams proposed in this research encompasses four stages. Initially, papers are divided according to the information provided with them, distinguishing those supported by funding projects from those that are not, and project-based and individual-based research teams are identified accordingly. Subsequently, the prolific authors of each university are identified, individual-based and project-based research teams are combined, and backbone-based research groups are generated. Finally, representative research groups are established using the Louvain algorithm and the interrelations among members within the backbone-based research groups. The entire process is depicted in Fig. 3 below.

Fig. 3. Different university research teams are identified at different stages.

Each type of research team or group has its advantages and disadvantages, as shown in Table 2 below.

Validation of identification results

In order to verify the accuracy of the identification results, the method proposed by Boyack and Klavans ( 2014 ), which relies on citation analysis, is utilized. This method calculates the level of consistency regarding the main research areas of the core and backbone members, thereby verifying the validity of the identification method.

In the SciVal database, all research papers are clustered into relevant topic groups, providing insight into the research areas of individual authors. By examining the research topic clusters of team papers in SciVal, the predominant research areas of prolific authors can be determined. Authors within a university who share common research areas are regarded as constituting a research team. Given that authors often conduct research in various areas, this study focuses solely on the top three research areas of each author.

As demonstrated in Table 3 below, the top three research areas of the research team’s prolific authors A, B, C, D, and E collectively span five distinct fields. By calculating the highest consistency value among these research areas, it can be judged whether these researchers belong to the same research group. As depicted in Table 3, the main research areas of all prolific authors include Research Area 3, indicating that this field is among the three most important research areas for every prolific author. This consistency confirms that the main research areas of the five authors align, affirming their classification within the same research team.
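
In code, this consistency check amounts to counting how many prolific authors list the single most common research area among their top three; the sketch below mirrors the logic of Table 3, with the author-to-area mapping assumed to come from SciVal topic clusters and the area labels purely illustrative.

```python
from collections import Counter

def area_consistency(top_areas: dict) -> float:
    """Share of prolific authors whose top-three research areas include
    the single most common area across the team.

    top_areas maps each prolific author to an iterable of their top
    three research areas (e.g. SciVal topic clusters).
    """
    counts = Counter(area for areas in top_areas.values() for area in set(areas))
    if not counts:
        return 0.0
    _, most_shared = counts.most_common(1)[0]
    return most_shared / len(top_areas)

# Illustrative Table 3-style example: all five authors list
# "Research Area 3" among their top three, so consistency is 5/5 = 1.0
consistency = area_consistency({
    "A": {"Area 1", "Area 2", "Research Area 3"},
    "B": {"Area 2", "Research Area 3", "Area 4"},
    "C": {"Research Area 3", "Area 4", "Area 5"},
    "D": {"Area 1", "Research Area 3", "Area 5"},
    "E": {"Area 2", "Research Area 3", "Area 4"},
})
assert consistency == 1.0
```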

Data collection and preprocessing

In order to present the distinct characteristics of various types of scientific research teams as intuitively as possible, this study focuses on the field of materials science and selects Tsinghua University and Nanyang Technological University for analysis. The selection of these two institutions is driven by several considerations: (1) both universities perform exceptionally in materials science on a global scale, having consistently ranked within the top 10 worldwide for many years; (2) the scientific research systems of their respective countries differ significantly: China’s operates under a government-led funding model, whereas Singapore’s involves multi-party funding from the government, enterprises, and society, so examining universities from these distinct research cultures allows the study to validate the proposed methods and highlight disparities in the characteristics of their research teams; (3) materials science is inherently interdisciplinary, with contributions from researchers across various domains, so although the selected papers focus on materials science, they may also intersect with other disciplines, and the research teams investigated can to some extent represent interdisciplinary research teams.

The data utilized in this study are sourced from the Clarivate Analytics database, which categorizes scientific research papers according to its subject classification catalogs. To ensure consistency and reliability in identifying scientific research papers, this study focuses on papers published in the field of materials science by the two selected universities between 2017 and 2021. Additionally, considering the duration of funded projects, papers from the surrounding window (2011–2022) that are associated with projects appearing in the 2017–2021 set are also included, to enhance the precision of identification. To ensure the affiliation of a research team with the respective university, this study exclusively considers papers whose first author or corresponding author is affiliated with that university.

Throughout this process, attention must be paid to the author-name problem in identifying researchers: abbreviations, name order, and other name-related variations are cleaned and verified. Given that this study exports data using authors’ full names and restricts it to specific universities and disciplines, the cleaning process targets identification discrepancies arising from a minority of abbreviations and similar names. The specific cleaning procedures entail the following steps.

First, all occurrences of “-” are removed, and names are standardized by capitalization. Second, the Python dedupe module is employed to mitigate ambiguity in author names, facilitating the differentiation or unification of authors sharing the same surname, given name, or initials; all personnel names of each university in the discipline are listed and inspected in ascending order. Third, names and abbreviations are compared in reverse order, alongside their respective affiliations, and replaced in the identification data; for example, names such as “LONG, W.H”, “LONG, WEN, HUI”, and “LONG, WENHUI” are uniformly replaced with “LONG, WENHUI”. Fourth, similar names in both abbreviated and full forms are identified and compared, and their consistency is confirmed by scrutinizing their affiliations and collaborators; names found to be consistent are replaced accordingly, while those lacking uniformity remain unchanged. For example, “LI, W.D” and “LI, WEIDE”, which lack common affiliations and collaborators, are not considered the same person and thus remain distinct.
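
The first, mechanical part of this cleaning can be approximated by a simple normalization routine; the sketch below covers only hyphen removal, capitalization, and the joining of split given names, while the abbreviation matching and affiliation checks described above still require the dedupe-assisted comparison.

```python
import re

def normalize_author_name(raw: str) -> str:
    """First-pass normalization of exported author names.

    Drops hyphens, unifies capitalization, and joins split given names,
    so that variants such as "Long, Wen-Hui" and "LONG, WEN, HUI"
    reduce to the same key. Distinguishing genuinely different people
    (e.g. "LI, W.D" vs "LI, WEIDE") still needs affiliation and
    collaborator checks.
    """
    name = raw.replace("-", "").upper()
    if "," in name:
        surname, given = name.split(",", 1)
        # keep only letters in the given-name part: " WEN, HUI" -> "WENHUI"
        return f"{surname.strip()}, {re.sub(r'[^A-Z]', '', given)}"
    return re.sub(r"\s+", " ", name).strip()

# both variants map to "LONG, WENHUI"
assert normalize_author_name("Long, Wen-Hui") == normalize_author_name("LONG, WEN, HUI")
```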

The publication counts of the two universities in the field of Materials Science and Engineering across the two time periods are shown in Table 4 below.

Based on the publication count of papers authored by the first author or corresponding author from both universities, Tsinghua University demonstrates a significantly higher publication output than Nanyang Technological University, indicating a substantial disparity between the two institutions.

Subsequent to data preprocessing, this study uses Python to implement algorithms in accordance with the proposed principles, thereby identifying research teams and groups.

This study has identified several research teams through the sorting and analysis of original data. In order to provide a comprehensive overview of the identification results, this study begins by outlining the characteristics of the identification results and then analyzes the research teams affiliated with both universities, focusing on three aspects: scale, structure, and output.

Identification results of university research teams

The results reveal that both Tsinghua University and Nanyang Technological University have a considerable number of project-based research teams (Pbrts), indicating that most researchers at both universities have received funding support. A small number of individual-based research teams (Ibrts) have not received funding support, although their overall proportion is relatively low. The backbone-based research groups (Bbrgs) encompass the majority of the Ibrts and Pbrts, underscoring the significant influence of the discipline backbone members at both universities. Notably, the total count of representative research groups (Rrgs) across the two universities stands at 39, reflecting that many research groups support the construction of the materials discipline at the two universities (Table 5).

In order to validate the accuracy of the developed method, this study verifies the effectiveness of the identification algorithm. Given that the verification method emphasizes the main research areas of members, it is applied to the Bbrgs, which encompass the majority of the individual-based and project-based teams.

The analysis reveals that the consistency level of the most concentrated research area within the identified Bbrgs is 0.93. This signifies that, in a Bbrg comprising ten core or backbone members, roughly nine of them share the same main research area. Moreover, across Bbrgs of varying sizes, the average consistency level of the most concentrated research area also reaches 0.90, indicating that the algorithm proposed in this study is valid (Table 6).

Analysis of the characteristics of Bbrg in universities

The findings of the analysis show that the Bbrgs encompass the vast majority of Pbrts and Ibrts within universities. Consequently, this study further analyzes the scale, structure, and output of the Bbrgs to present the characteristics of university research teams.

Group scale

Upon scrutinizing the distribution of Bbrgs across the two universities, it is observed that the numbers of core members are similar. Bbrgs with 6–10 core members are the most prevalent, followed by those with 0–5 core members. There are also Bbrgs comprising 11–15 core members, while those with more than 15 are relatively rare. On average, Bbrgs have 7.08 core members. Tsinghua University has more Bbrgs than Nanyang Technological University, but a relatively lower average number of core members per group. Notably, core and backbone members together account for nearly 12% of group members, ranging from 11.22% to 13.88% (Table 7).

Group structure

The structural attributes of the research groups can be assessed through the network density among core members, among core and backbone members, and among all team members. Additionally, departmental distribution can be depicted based on the identified core members and their organizational affiliations. The formula for network density is as follows:

$$D = \frac{2R}{N(N-1)}$$

where R is the number of relationships and N is the number of members.
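
For reference, this density formula for an undirected co-authorship network corresponds directly to networkx’s built-in density function; a small sanity check under that assumption:

```python
import networkx as nx

def network_density(G: nx.Graph) -> float:
    """Density D = 2R / (N * (N - 1)) of an undirected network, where
    R is the number of relationships (edges) and N the number of
    members (nodes)."""
    n, r = G.number_of_nodes(), G.number_of_edges()
    return 0.0 if n < 2 else 2 * r / (n * (n - 1))

# a fully connected core of 5 members has density 1, matching nx.density
G = nx.complete_graph(5)
assert network_density(G) == nx.density(G) == 1.0
```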

Overall, the network density characteristics are consistent across both universities. Specifically, the network density among research group members tends to decrease as the group size expands. The density among core members is the highest, while that among all members is the lowest. Comparatively, the average values of the various network densities at Tsinghua University are lower than those at Nanyang Technological University, indicating a lesser degree of connectivity among members within Tsinghua University’s research groups. However, the network density among core members, and among core and backbone members, remains relatively high at both institutions. Notably, the network density of the backbone-based research groups exceeds 0.5, indicating close collaboration among the core and backbone members of these university research groups (Table 8).

The T-test analysis reveals no significant difference in the network density among core members between Tsinghua University and Nanyang Technological University, suggesting that core members of research groups in high-level disciplines tend to maintain close communication. However, for the network density among core and backbone members and among all members, the average values for Tsinghua University’s research groups are significantly lower than those of Nanyang Technological University. This implies less direct collaboration among prolific authors at Tsinghua University, with backbone members relying more on different core members of the group to carry out research.

To present the cooperative relationships among the core and backbone members of the Bbrgs, the prolific authors associated with the backbone-based research groups are extracted. The representative research groups affiliated with Nanyang Technological University and Tsinghua University are then identified using the fast-unfolding (Louvain) algorithm. The resulting collaboration network among prolific authors is depicted in Fig. 4, where each node color corresponds to a different representative research group of the respective university.

Fig. 4. Nodes (authors) and links (relations between authors) with the same color belong to the same representative research group.

The network connection diagram of Nanyang Technological University illustrates the presence of 39 Rrgs, including groups from the School of Materials Science and Engineering and the Singapore Centre for 3D Printing. Owing to the inherently interdisciplinary character of the materials discipline, its research groups are not confined to the School of Materials Science and Engineering; other academic units also host research groups engaged in materials science research.

Further insight into the distribution of research groups can be gained by examining the departments to which their primary members belong. Counting the departmental affiliations of the member with the highest centrality in each representative group reveals that, among the 39 Rrgs, the School of Materials Science and Engineering and the College of Engineering have the most affiliations, with nine core members of the research groups coming from these two departments, followed closely by the School of Physical and Mathematical Sciences. Notably, entities external to the university, such as the National Institute of Education and the Singapore Institute of Manufacturing Technology, also host important representative groups, underscoring the interdisciplinary nature of materials science. The distribution of Rrg affiliations is delineated in Table 9.

Similar to Nanyang Technological University, Tsinghua University also exhibits tightly woven connections within its backbone-based research groups in Materials Science and Engineering, comprising a total of 39 Rrgs. Compared with Nanyang Technological University, Tsinghua University has a larger cohort of core and backbone members. The collaboration network diagram of its representative groups is shown below (Fig. 5).

Fig. 5. Collaboration network of representative research groups at Tsinghua University.

Similar to Nanyang Technological University, representative research groups at Tsinghua University are distributed in different schools within the institution, with the School of Materials being the directly related department. In addition, the School of Medicine and the Center for Brain-like Computing also conduct research related to materials science (Table 10 ).

By summarizing the departmental affiliations of the research groups, it becomes evident that the Rrgs in Materials Science and Engineering at these universities span various academic departments, reflecting the interdisciplinary characteristics of the field. The network density of the research groups is also calculated, with Nanyang Technological University exhibiting a higher density (0.028) compared to Tsinghua University (0.022), indicating tighter connections within the representative research groups at Nanyang Technological University.

Group output

In order to control for the impact of scale, this study compares several metrics of the Bbrgs at the two universities: total publications, publications per capita among core and backbone members, the publication count of the most prolific author within each group, field-weighted citation impact, and citations per publication.

Regarding publications, the average number and the T-test results show that Tsinghua University significantly outperforms Nanyang Technological University, suggesting that the Bbrgs and prolific authors affiliated with Tsinghua University are more productive in terms of research output.

However, in terms of the field-weighted citation impact and citations per publication of the Bbrgs, the averages and T-test results show that Tsinghua University scores significantly lower than Nanyang Technological University, indicating that the research papers originating from the Bbrgs at Nanyang Technological University have greater academic influence (see Table 11).

Typical cases

To intuitively present the research groups identified, this study has selected the two Bbrgs with the highest number of published papers at Tsinghua University and Nanyang Technological University for analysis, aiming to offer insights for constructing research teams.

Basic information of the Bbrgs

Examining the basic information of the Bbrgs reveals that although Kang Feiyu’s group at Tsinghua University comprises fewer researchers than Liu Zheng’s group at Nanyang Technological University, Kang Feiyu’s group has a higher total number of published papers. To measure the performance of these two Bbrgs, the field-weighted citation impact of their research papers was queried using SciVal. The results show that the field-weighted citation impact of Kang Feiyu’s group at Tsinghua University is higher, indicating greater influence in the field of Materials Science and Engineering. Furthermore, comparing the two group leaders shows that Kang Feiyu, in addition to being a professor at Tsinghua University, holds an administrative position as dean of the Shenzhen Graduate School of Tsinghua University, while Liu Zheng serves as chairman of the Singapore Materials Society alongside his role as a professor (see Table 12).

Characteristics of team member network structure

In order to reflect the collaboration characteristics of research groups, this study calculates the network density of the two groups and utilizes VOSviewer to present the collaboration network diagrams of their members.

In terms of network density, both groups exhibit a density of 1 among core members, indicating that the collaboration between core members is tight. However, regarding the network density of core and backbone members, as well as all members, Liu Zheng’s group at Nanyang Technological University demonstrates a higher density. This indicates a stronger interconnectedness between the backbone and other members within the group (refer to Table 13 ).

The co-authorship network diagrams of group members reveal distinctive characteristics in the two Bbrgs. In Kang Feiyu’s team, the core members are prominent, with evident sub-team structures under each core member (Fig. 6). Conversely, while Liu Zheng’s team also features different core members, the centrality of each member is less pronounced (Fig. 7).

Fig. 6. Nodes (authors) and links (relations between authors) with the same color belong to the same sub-team.

Fig. 7. Co-authorship network of members in Liu Zheng’s group.

Discussion and conclusion

Distinguishing different research teams constitutes the foundational stage of team science research. In this study, we employ Price’s Law, Everett’s Rule, the Jaccard Similarity Coefficient, and the Louvain Algorithm to identify different research teams and groups at two world-leading universities in Materials Science and Engineering, and we examine the characteristics of these teams. The main findings are discussed as follows.

First, based on the co-authorship and project data from scholarly articles, this study develops a methodology for identifying research teams that distinguishes between different types of research teams or groups. In contrast to prior identification methods, our algorithms can identify different types of research teams and classify the members within each team, affording greater clarity regarding the timing and content of collaboration among team members. The validation of the identification results, conducted using the methodology proposed by Boyack and Klavans (2014), demonstrates the consistency of the main research areas among identified research group members, supporting the accuracy and efficacy of the proposed identification methodology.

Second, universities have different types of research teams or groups, encompassing both project-based research teams and individual-based research teams lacking project support. Among these, most research teams rely on projects to conduct research (Bloch & Sørensen, 2015 ). Concurrently, this research finds that university research groups predominantly coalesce around eminent scholars, with backbone-based research groups comprising the majority of both project-based and individual-based research teams. This phenomenon shows the concentration of research resources within a select few research groups and institutions, a concept previously highlighted by Mongeon et al. ( 2016 ), who pointed out that research funding tends to be concentrated among a minority of researchers. In this research, we not only corroborate this assertion but also observe that researchers with abundant funding collaborate to form research groups, thereby mutually supporting each other. In addition, based on the structures of research groups at Nanyang Technological University and Tsinghua University, one could posit that these institutions resemble what might be termed a “rich club” (Ma et al., 2015 ). However, despite the heightened productivity of relatively concentrated research groups at Tsinghua University in terms of research output, their academic influence pales compared to that of Nanyang Technological University. To enhance research influence, it seems that the funding agency should curtail funding allocations to these “rich” research groups and instead allocate resources to support more financially challenged research teams. This approach would serve to alleviate the trend of concentration in research project funding, as suggested by Aagaard et al. ( 2020 ).

Thirdly, research groups in Material Science and Engineering exhibit obvious interdisciplinary characteristics. Despite all research papers being classified under the Material Science and Engineering discipline, the distribution of research groups across various academic departments suggests a pervasive interdisciplinary nature. This phenomenon underscores the interconnectedness of Materials Science and Engineering with other disciplines and serves as evidence that members from diverse departments within high-caliber universities actively engage in collaborative efforts. Previous research conducted in the United Kingdom has revealed that interdisciplinary researchers from arts and humanities, biology, economics, engineering and physics, medicine, environmental sciences, and astronomy occupy a pivotal position in academic collaboration and can obtain more funding (Sun et al., 2021 ). In this research, similar conclusions are also found in Material Science and Engineering.

Fourth, the personnel structure of university research groups adheres to Price’s Law: prolific authors constitute a small share of group members, with approximately 20% of individuals contributing 80% of the work. Backbone-based research groups, which comprise most project-based and individual-based research teams in universities, typically exhibit a core and backbone member ratio of approximately 10–15%, in line with Price’s Law. Peterson (2018) also pointed out that Price’s Law is almost universally present in creative work; scientific research relies heavily on innovative thinking and collaboration among researchers, and this study confirms the phenomenon within university research groups. Moreover, systematic research activities require many participants, but only a few provide crucial intellectual support and contributions. In practical research endeavors, principal researchers such as professors and associate professors often exhibit higher levels of innovation and stability, while graduate students and external support staff tend to be more transient, engaging in foundational research tasks.

Fifth, regarding the research groups with the highest publication counts at the two universities, Tsinghua University’s group has more core members, highlighting a research model centered around a single authoritative scholar, while Nanyang Technological University exhibits a more dispersed distribution of researchers. This discrepancy may be attributed to differences in the two universities’ systems. In China, valuable scientific research often unfolds under the leadership of authoritative scholars who typically hold multiple administrative roles, producing hierarchical centralization within the group. This hierarchical structure aligns with Merton’s sociology of science (1973), which posits that the higher the position of scientists, the higher their status in the hierarchy, facilitating funding acquisition and research impact. Conversely, Singapore’s research system is closer to those of developed countries such as the UK and the US, fostering a more democratic culture in which communication among members is more open; such a relatively flat team culture is conducive to generating high-level research outcomes (Xu et al., 2022). However, for the highest-output groups compared here, the Chinese backbone-based research group outperforms its counterpart in both publication volume and academic influence, suggesting that this organizational model suits the Chinese context and is conducive to research with stronger academic influence.

The research teams and groups in these top two universities offer insights for constructing science teams: Firstly, the university should prioritize individual-based research teams to enhance the academic influence of their research. Secondly, intra-university research teams should foster collaboration across different departments to promote interdisciplinary research, contributing to the advancement of the discipline. Thirdly, emphasis should be placed on supporting core and backbone members who often generate innovative ideas and contribute more to the academic community. Fourth, the research team should cultivate a suitable research atmosphere according to their cultural background, whether centralized or democratic, to harness researchers’ strengths effectively.

This research proposes a method for identifying university research teams and analyzing the characteristics of such teams at the top two universities. In the future, further exploration into the role of different team members and the development of more effective research team construction strategies are warranted.

Data availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author upon reasonable request. The data about the information of research papers authored by the two universities and the identification results of the members of university research teams are shared.

Aagaard K, Kladakis A, Nielsen MW (2020) Concentration or dispersal of research funding? Quant Sci Stud 1(1):117–149. https://doi.org/10.1162/qss_a_00002

Abramo G, D’Angelo CA, Di Costa F (2017) Do interdisciplinary research teams deliver higher gains to science? Scientometrics 111:317–336. https://doi.org/10.1007/s11192-017-2253-x

Barjak F, Robinson S (2008) International collaboration, mobility and team diversity in the life sciences: impact on research performance. Soc Geogr 3(1):23–36. https://doi.org/10.5194/sg-3-23-2008

Boardman C, Ponomariov B (2014) Management knowledge and the organization of team science in university research centers. J Technol Transf 39:75–92. https://doi.org/10.1007/s10961-012-9271-x

Boyack KW, Klavans R (2014) Identifying and quantifying research strengths using market segmentation. In: Beyond bibliometrics: harnessing multidimensional indicators of scholarly impact. MIT Press, Cambridge, p 225

Bozeman B, Youtie J (2018) The strength in numbers: The new science of team science. Princeton University Press. https://doi.org/10.1515/9781400888610

Bloch C, Sørensen MP (2015) The size of research funding: Trends and implications. Sci Public Policy 42(1):30–43. https://doi.org/10.1093/scipol/scu019

Blondel VD, Guillaume JL, Lambiotte R, Lefebvre E (2008) Fast unfolding of communities in large networks. J Stat Mech Theory Exp 2008(10):P10008. https://doi.org/10.1088/1742-5468/2008/10/P10008

Coles NA, Hamlin JK, Sullivan LL, Parker TH, Altschul D (2022) Build up big-team science. Nature 601(7894):505–507. https://doi.org/10.1038/d41586-022-00150-2

Dino H, Yu S, Wan L, Wang M, Zhang K, Guo H, Hussain I (2020) Detecting leaders and key members of scientific teams in co-authorship networks. Comput Electr Eng 85:106703. https://doi.org/10.1016/j.compeleceng.2020.106703

Deng H, Breunig H, Apte J, Qin Y (2022) An early career perspective on the opportunities and challenges of team science. Environ Sci Technol 56(3):1478–1481. https://doi.org/10.1021/acs.est.1c08322

Everett M (2002) Social network analysis. In: Textbook at Essex Summer School in SSDA, 102, Essex Summer School in Social Science Data Analysis, United Kingdom

Forscher PS, Wagenmakers EJ, Coles NA, Silan MA, Dutra N, Basnight-Brown D, IJzerman H (2023) The benefits, barriers, and risks of big-team science. Perspect Psychological Sci 18(3):607–623. https://doi.org/10.1177/17456916221082970

Guo K, Huang X, Wu L, Chen Y (2022) Local community detection algorithm based on local modularity density. Appl Intell 52(2):1238–1253. https://doi.org/10.1007/s10489-020-02052-0

Hu Z, Lin A, Willett P (2019) Identification of research communities in cited and uncited publications using a co-authorship network. Scientometrics 118:1–19. https://doi.org/10.1007/s11192-018-2954-9

Imran F, Abbasi RA, Sindhu MA, Khattak AS, Daud A, Amjad T (2018) Finding research areas of academicians using clique percolation. In 2018 14th International Conference on Emerging Technologies (ICET). IEEE, pp 1–6. https://doi.org/10.1109/ICET.2018.8603549

Lee HJ, Kim JW, Koh J, Lee Y (2008) Relative Importance of Knowledge Portal Functionalities: A Contingent Approach on Knowledge Portal Design for R&D Teams. In Proceedings of the 41st Annual Hawaii International Conference on System Sciences (HICSS 2008). IEEE, pp 331–331, https://doi.org/10.1109/HICSS.2008.373

Liao Q (2018) Research Team Identification and Influence Factors Analysis of Team Performance. M. A. Thesis. Beijing Institute of Technology, Beijing

Li Y, Tan S (2012) Research on identification and network analysis of university research team. Sci Technol Prog policy 29(11):147–150

Li G, Liu M, Wu Q, Mao J (2017) A Research of Characters and Identifications of Roles Among Research Groups Based on the Bow-Tie Model. Libr Inf Serv 61(5):87–94

Liu Y, Wu Y, Rousseau S, Rousseau R (2020) Reflections on and a short review of the science of team science. Scientometrics 125:937–950. https://doi.org/10.1007/s11192-020-03513-6

Lungeanu A, Huang Y, Contractor NS (2014) Understanding the assembly of interdisciplinary teams and its impact on performance. J Informetr 8(1):59–70. https://doi.org/10.1016/j.joi.2013.10.006

Lv L, Zhao Y, Wang X, Zhao P (2016) Core R&D Team Recognition Method Based on Association Rules Mining. Sci Technol Manag Res 36(17):148–152

Ma A, Mondragón RJ, Latora V (2015) Anatomy of funded research in science. Proc Natl Acad Sci 112(48):14760–14765. https://doi.org/10.1073/pnas.1513651112

Merton RK (1973) The sociology of science: Theoretical and empirical investigations. University of Chicago Press, Chicago

Mongeon P, Brodeur C, Beaudry C, Larivière V (2016) Concentration of research funding leads to decreasing marginal returns. Res Eval 25(4):396–404. https://doi.org/10.1093/reseval/rvw007

National Research Council (2015) Enhancing the effectiveness of team science. The National Academies Press, Washington, DC

Okamoto J, Centers for Population Health and Health Disparities Evaluation Working Group (2015) Scientific collaboration and team science: a social network analysis of the centers for population health and health disparities. Transl Behav Med 5(1):12–23. https://doi.org/10.1007/s13142-014-0280-1

Peterson JB (2018) 12 rules for life: An antidote to chaos. Random House, Canada

Scott J (2017) Social network analysis. Sage Publications Ltd, London

Seidman SB, Foster BL (1978) A graph‐theoretic generalization of the clique concept. J Math Sociol 6(1):139–154. https://doi.org/10.1080/0022250X.1978.9989883

Sun Y, Livan G, Ma A, Latora V (2021) Interdisciplinary researchers attain better long-term funding performance. Commun Phys 4(1):263. https://doi.org/10.1038/s42005-021-00769-z

Stokols D, Hall KL, Taylor BK, Moser RP (2008) The science of team science: overview of the field and introduction to the supplement. Am J Prev Med 35(2):S77–S89. https://doi.org/10.1016/j.amepre.2008.05.002

Wang C, Cheng Z, Huang Z (2017) Analysis on the co-authoring in the field of management in China: based on social network analysis. Int J Emerg Technol Learn 12(6):149. https://doi.org/10.3991/ijet.v12i06.7091

Wang T, Chen S, Wang X, Wang J (2020) Label propagation algorithm based on node importance. Phys A Stat Mech Appl. 551:124137. https://doi.org/10.1016/j.physa.2020.124137

Wu L, Wang D, Evans JA (2019) Large teams develop and small teams disrupt science and technology. Nature 566(7744):378–382. https://doi.org/10.1038/s41586-019-0941-9

Xu F, Wu L, Evans J (2022) Flat teams drive scientific innovation. Proc. Natl Acad. Sci 119(23):e2200927119. https://doi.org/10.1073/pnas.2200927119

Yu H, Bai K, Zou B, Wang Y (2020) Identification and Extraction of Research Team in the Artificial Intelligence Field. Libr Inf Serv 64(20):4–13

Yu Y, Dong C, Han H, Li Z (2018) The method of research teams identification based on social network analysis: identifying research team leaders based on iterative betweenness centrality rank method. Inf Stud Theory Appl 41(7):105–110

Zhao L, Zhang Q, Wang L (2014) Benefit distribution mechanism in the team members’ scientific research collaboration network. Scientometrics 100:363–389. https://doi.org/10.1007/s11192-014-1322-7

Zhang M, Jia Y, Wang N, Ge S (2019) Using Relative Tie Strength to Identify Core Teams of Scientific Research. Int J Emerg Technol Learn 14(23):33–54. https://www.learntechlib.org/p/217243/

Author information

Authors and Affiliations

School of Education, Central China Normal University, Wuhan, PR China

Zhe Cheng & Yihuan Zou

Faculty of Education, The Chinese University of Hong Kong, Hong Kong SAR, PR China

Yueyang Zheng

Contributions

Zhe Cheng contributed to the study conception, research design, data collection, and data analysis. Zhe Cheng wrote the first draft of the manuscript. Yihuan Zou made the last revisions. Yihuan Zou and Yueyang Zheng supervised, proofread, and commented on previous versions of this manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Yueyang Zheng.

Ethics declarations

Competing interests.

The authors declare no competing interests.

Ethical approval

This article does not contain any studies with human participants performed by the authors.

Informed consent

This article does not contain any studies with human participants performed by any of the authors.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article.

Cheng, Z., Zou, Y. & Zheng, Y. A method for identifying different types of university research teams. Humanit Soc Sci Commun 11 , 523 (2024). https://doi.org/10.1057/s41599-024-03014-4

Received : 03 August 2023

Accepted : 28 March 2024

Published : 18 April 2024

DOI : https://doi.org/10.1057/s41599-024-03014-4


Transformations That Work

  • Michael Mankins
  • Patrick Litre

More than a third of large organizations have some type of transformation program underway at any given time, and many launch one major change initiative after another. Though they kick off with a lot of fanfare, most of these efforts fail to deliver. Only 12% produce lasting results, and that figure hasn’t budged in the past two decades, despite everything we’ve learned over the years about how to lead change.

Clearly, businesses need a new model for transformation. In this article the authors present one based on research with dozens of leading companies that have defied the odds, such as Ford, Dell, Amgen, T-Mobile, Adobe, and Virgin Australia. The successful programs, the authors found, employed six critical practices: treating transformation as a continuous process; building it into the company’s operating rhythm; explicitly managing organizational energy; using aspirations, not benchmarks, to set goals; driving change from the middle of the organization out; and tapping significant external capital to fund the effort from the start.

Lessons from companies that are defying the odds

Idea in Brief

The Problem

Although companies frequently engage in transformation initiatives, few are actually transformative. Research indicates that only 12% of major change programs produce lasting results.

Why It Happens

Leaders are increasingly content with incremental improvements. As a result, they experience fewer outright failures but also fewer real transformations.

The Solution

To deliver, change programs must treat transformation as a continuous process, build it into the company’s operating rhythm, explicitly manage organizational energy, state aspirations rather than set targets, drive change from the middle out, and be funded by serious capital investments.

Nearly every major corporation has embarked on some sort of transformation in recent years. By our estimates, at any given time more than a third of large organizations have a transformation program underway. When asked, roughly 50% of CEOs we’ve interviewed report that their company has undertaken two or more major change efforts within the past five years, with nearly 20% reporting three or more.

  • Michael Mankins is a leader in Bain’s Organization and Strategy practices and is a partner based in Austin, Texas. He is a coauthor of Time, Talent, Energy: Overcome Organizational Drag and Unleash Your Team’s Productive Power (Harvard Business Review Press, 2017).
  • Patrick Litre leads Bain’s Global Transformation and Change practice and is a partner based in Atlanta.

ORIGINAL RESEARCH article

Developing key indicators for sustainable food system: a comprehensive application of stakeholder consultations and Delphi method (provisionally accepted).

  • Institute for Population and Social Research, Mahidol University, Thailand

The final, formatted version of the article will be published soon.

The overall status of the food system in Thailand is currently unknown. Although several national and international reports describe Thailand's food system, they are not sufficiently accurate or relevant to inform policy. This study aims to develop indicators that measure the sustainability of Thailand's food system. We adopted the seven-dimensional metrics proposed by Gustafson to facilitate a comparative analysis of food systems, namely (1) food nutrient adequacy; (2) ecosystem stability; (3) food availability and affordability; (4) sociocultural well-being; (5) food safety; (6) resilience; and (7) waste and loss reduction. Three rounds of the Delphi method were convened in which 48 Thai stakeholders recruited from government, NGOs, and academia assessed the proposed indicators using the Item-Objective Congruence (IOC) index, a procedure used in test development to evaluate content validity at the item-development stage. In each round, the average IOC for each item was considered together with stakeholders' comments on whether to retain, remove, or add indicators. Materials were distributed by mail and email so that stakeholders could complete their assessments independently. A total of 88 and 73 indicators entered the first and second Delphi rounds, respectively, resulting in 62 final indicators after the third round. These 62 indicators and 190 sub-indicators are, however, too numerous for policy use. As indicator development continues, the 62 indicators will be field-tested in different settings to assess data feasibility. After the field tests, the final prioritized indicators will be submitted for policy decisions to support regular national monitoring and inform policy toward a sustainable food system in Thailand.
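
To make the IOC screening step concrete, here is a minimal Python sketch of how per-item IOC averages might be computed and turned into a retain-or-revise decision in one Delphi round. The indicator names, the five-member rating panel, and the 0.5 retention threshold are illustrative assumptions, not details taken from the study.

```python
# Hypothetical sketch of one Delphi screening round using Item-Objective Congruence (IOC).
# Indicator names, ratings, and the 0.5 cut-off are illustrative, not from the study.
from statistics import mean

# Each indicator maps to the ratings given by the expert panel:
# +1 = clearly measures the objective, 0 = unsure, -1 = clearly does not.
ratings = {
    "share of households with adequate nutrient intake": [1, 1, 0, 1, 1],
    "annual post-harvest food loss (%)":                 [1, 0, 0, 1, 1],
    "number of food-related festivals per province":     [0, -1, 0, 1, -1],
}

RETAIN_THRESHOLD = 0.5  # a common rule of thumb for IOC item screening

for indicator, scores in ratings.items():
    ioc = mean(scores)  # IOC is the mean of the panel's ratings for the item
    decision = "retain" if ioc >= RETAIN_THRESHOLD else "revise or remove"
    print(f"{indicator}: IOC = {ioc:+.2f} -> {decision}")
```

In the study itself this kind of screening was repeated over three rounds, with the IOC averages read alongside stakeholders' comments to decide whether items were retained, removed, or replaced.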

Keywords: Sustainable food system, indicator, Food security, resilience, Agriculture, Delphi method

Received: 08 Jan 2024; Accepted: 17 Apr 2024.

Copyright: © 2024 Rittirong, Chuenglertsiri, Nitnara and Phulkerd. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Dr. Jongjit Rittirong, Institute for Population and Social Research, Mahidol University, Salaya, Thailand

Computer Science > Computation and Language

Title: Leave No Context Behind: Efficient Infinite Context Transformers with Infini-Attention

Abstract: This work introduces an efficient method to scale Transformer-based Large Language Models (LLMs) to infinitely long inputs with bounded memory and computation. A key component in our proposed approach is a new attention technique dubbed Infini-attention. The Infini-attention incorporates a compressive memory into the vanilla attention mechanism and builds in both masked local attention and long-term linear attention mechanisms in a single Transformer block. We demonstrate the effectiveness of our approach on long-context language modeling benchmarks, 1M sequence length passkey context block retrieval and 500K length book summarization tasks with 1B and 8B LLMs. Our approach introduces minimal bounded memory parameters and enables fast streaming inference for LLMs.
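
Working only from the abstract's description (a compressive memory added to masked local attention, with long-term linear attention in the same block), the following single-head NumPy sketch shows how such a mechanism might be wired together. The ELU+1 feature map, the fixed 0.5 mixing gate, the simple additive memory update, and all dimensions are simplifying assumptions for illustration; they do not reproduce the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def elu_plus_one(x):
    # Positive feature map often used for linear attention (an assumption, not the paper's choice).
    return np.where(x > 0, x + 1.0, np.exp(x))

def infini_style_attention(segments, d_model, seed=0):
    """Toy single-head sketch: causal local attention within each segment plus a
    fixed-size compressive memory carried across segments (linear-attention style)."""
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv = (rng.normal(scale=d_model ** -0.5, size=(d_model, d_model)) for _ in range(3))
    memory = np.zeros((d_model, d_model))  # compressive memory, size independent of context length
    z = np.zeros(d_model)                  # normalization term for memory reads
    gate = 0.5                             # fixed mixing gate (learned per head in practice)
    outputs = []
    for seg in segments:                   # each seg: (seg_len, d_model)
        Q, K, V = seg @ Wq, seg @ Wk, seg @ Wv
        # 1) Read from long-term memory with a linear-attention retrieval.
        sQ = elu_plus_one(Q)
        mem_out = (sQ @ memory) / (sQ @ z + 1e-6)[:, None]
        # 2) Masked (causal) dot-product attention within the local segment.
        scores = Q @ K.T / np.sqrt(d_model)
        scores = np.where(np.tril(np.ones_like(scores)) > 0, scores, -1e9)
        local_out = softmax(scores) @ V
        # 3) Mix the memory read-out with local attention, then update the memory.
        outputs.append(gate * mem_out + (1 - gate) * local_out)
        sK = elu_plus_one(K)
        memory = memory + sK.T @ V
        z = z + sK.sum(axis=0)
    return np.concatenate(outputs, axis=0)

# Example: three segments of 8 tokens each, model dimension 16.
segs = [np.random.default_rng(i).normal(size=(8, 16)) for i in range(3)]
print(infini_style_attention(segs, d_model=16).shape)  # (24, 16)
```

The point of the construction is that the memory stays a fixed d_model × d_model matrix no matter how many segments stream through, which is what gives the bounded-memory, bounded-compute property the abstract emphasizes.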

Bull moose feeding in Central New Hampshire. Photo Credit: Remington Moll/UNH

A female cow moose looks into the camera near Pittsburg, NH. Photo Credit: Remington Moll/UNH

Visual representation of the audio frequencies of vocalizations, known as representative spectrograms, from a) cow, b) bull, and c) calf. Photo Credit: UNH

Key Research Finding

By analyzing online videos of moose vocalizations, researchers quantified moose calls and determined significant differences in calls by sex and age. 

Passive acoustic monitoring: The tracking and monitoring of animals and environments through recording sounds. Kloepper has previously used passive acoustic monitoring to estimate bat colony sizes and frog populations.

Sexual dimorphism: Notable differences in characteristics between sexes of the same species.

Drive around New Hampshire and many of the license plates will quickly show that the moose is iconic to the Granite State. As New England’s largest megafauna, moose serve as a major tourist draw for New Hampshire’s White Mountains, and their feeding patterns are also important to sustaining healthy forests and associated ecosystems. Moose populations can be affected by different forest and land management decisions and by environmental factors such as growing winter tick numbers, but tracking those impacts is difficult because moose are notoriously shy. A team of New Hampshire Agricultural Experiment Station scientists is developing methods to assess whether sound monitoring could be adapted to more cost-effectively and less invasively track moose behaviors.

Station scientist Laura Kloepper, an assistant professor of biological sciences at the UNH College of Life Sciences and Agriculture, is leading research that analyzes and quantifies moose calls, characterizing them by age and sex. In a recent study published in JASA Express Letters, Kloepper and her co-authors, including NHAES scientist Rem Moll, leverage audio from publicly available online videos to assess wild moose sounds in their natural environment and identify the distinct differences by age and sex. It’s a critical first step toward creating an acoustic sensor network in New Hampshire’s North Country to automatically detect and help determine moose population density and occupancy.

“Moose are such iconic wildlife for New Hampshire, so understanding how they’re using their landscape and how we can manage our forests while sharing the land with them is key to their conservation,” Kloepper said. “However, due to the moose’s wide roaming range and its low population densities, monitoring them is an ever-present challenge that can be aided by non-invasive technologies. So to accurately develop a moose acoustic sensor, we first needed to quantify a variety of moose calls — and these data were not available yet, so we crowdsourced it.”

“By tracking moose, scientists can also predict how forest habitat affects moose distribution...we can investigate how habitat disturbance, such as from timber management, affects where moose prefer to live. And we can also investigate if moose’s habitat preference changes with the seasons or time of day.” ~ Laura Kloepper, Assistant Professor of Biological Sciences

Moose have notable differences in characteristics between sexes, and the research team found that this dimorphism extends to vocal differences. Using online videos filmed by hunters and recreationists that captured more than 670 moose vocalizations from across the United States, the research team matched mouth or throat movement to sound. This allowed them to quantify moose calls as well as characterize the calls by age and sex. They found that female moose had calls with higher pitches and longer durations. Moose calves, which remain with their mother for one year and could therefore be identified by their proximity to a female moose, had the highest-pitched calls, with a duration approximately equivalent to that of a male.
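
As a rough illustration of how individual calls could be quantified by pitch and duration, the sketch below runs the librosa library's pYIN pitch tracker over a single audio clip. The file name, the 50–1000 Hz search range, and the use of librosa are hypothetical choices for illustration, not the authors' actual analysis pipeline.

```python
# Hypothetical sketch: estimate mean pitch and voiced duration of one call from an audio clip.
import librosa
import numpy as np

def call_features(path, fmin=50.0, fmax=1000.0):
    y, sr = librosa.load(path, sr=None, mono=True)
    # Fundamental-frequency (pitch) track via the pYIN estimator.
    f0, voiced_flag, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    voiced = f0[voiced_flag]
    hop_seconds = 512 / sr  # pyin's default hop length (frame_length // 4), in seconds
    return {
        "mean_pitch_hz": float(np.nanmean(voiced)) if voiced.size else float("nan"),
        "call_duration_s": float(voiced_flag.sum() * hop_seconds),
    }

# print(call_features("moose_call.wav"))  # "moose_call.wav" is a placeholder file name
```

Aggregated over many clips, features like these (pitch and call duration) are exactly the measurements the study reports as differing between cows, bulls, and calves.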

The study’s development of the first bioacoustic moose vocalizations method is an important first step to creating an acoustic network that can add significant value to ongoing efforts of tracking and monitoring moose populations and, in turn, the effectiveness and impacts of forest and land management practices.

“By tracking moose, scientists can also predict how forest habitat affects moose distribution,” said Kloepper. “Specifically, we can investigate how habitat disturbance, such as from timber management, affects where moose prefer to live. And we can also investigate if moose’s habitat preference changes with the seasons or time of day.”

Kloepper’s current NH Agricultural Experiment Station work is focused on creating an automated passive acoustic detector capable of determining moose sex and maturity from sound recordings. An initial set of acoustic recorders and monitoring work was set up alongside a portion of a statewide camera trap network established in 2021 for non-invasive wildlife tracking by Station scientist Moll, an assistant professor of natural resources and the environment at the UNH College of Life Sciences and Agriculture.

“From the five acoustic monitoring sites we set up, we were able to record 15 moose, including mother and calf grunts, breaths, sniffs and footsteps,” described Kloepper. “For the published paper, we supplemented these data with hunter-contributed video and audio recordings, which allowed us to better quantify the vocal characteristics of male and female moose and calves.”

Kloepper and her team will expand to 50 acoustic monitoring sites in forests in New Hampshire’s Coos County. The recorders will continuously capture sound activity in the morning and evening—times of peak moose activity.

“Although we specifically are interested in recording moose vocalizations, by recording all the sounds we can investigate how the overall acoustic environment—known as the soundscape—varies across forest type,” said Kloepper.

The bioacoustic sites will expand UNH’s passive wildlife monitoring research system, which already includes a network of 300 camera traps set up on private and public lands across New Hampshire by Moll.

“Each year, these cameras capture hundreds of thousands of wildlife images that we analyze to better understand and predict the distribution and abundance of many forest-dwelling wildlife species,” said Moll. “Work by UNH Emeritus Professor Peter Pekins showed that the primary drivers of regional moose population dynamics are the availability of young forest habitat and winter tick infestations, which can be exacerbated by climate change and locally high moose densities.”

He added, “Technologies like audio and camera stations provide critical population-level data that inform moose conservation and management in a changing world.”

This material is based on work supported by the NH Agricultural Experiment Station through joint funding from the USDA National Institute of Food and Agriculture (under Hatch award number 1024128) and the state of New Hampshire.

This work is co-authored by Alex Zager, Sonja Ahlberg, Olivia Boyan, Jocelyn Brierly, Valeria Eddington, Remington Moll and Laura Kloepper.

You can read the published article, Characteristics of wild moose (Alces alces) vocalizations, in JASA Express Letters.

Researchers resolve old mystery of how phages disarm pathogenic bacteria

New study details long-sought mechanisms and structures.

Depiction of bacteriophage PP7 (orange) at the cell surface of Pseudomonas aeruginosa detaching the bacterium's pilus (blue). The researchers identified protein structures and interactions using fluorescence microscopy, cryogenic-electron microscopy and computational simulations. The image is based on the team's findings. (Jirapat Thongchol/Texas A&M AgriLife)

Bacterial infections pose significant challenges to agriculture and medicine, especially as cases of antibiotic-resistant bacteria continue to rise. In response, scientists at Texas A&M AgriLife Research are elucidating the ways that bacteria-infecting viruses disarm these pathogens and ushering in the possibility of novel treatment methods.

In their recent study published in Science , Lanying Zeng, Ph.D., a professor, and Junjie Zhang, Ph.D., an associate professor, both in the Texas A&M College of Agriculture and Life Sciences Department of Biochemistry and Biophysics, detailed a precise mechanism by which phages disable bacteria.

The collaborative effort also involved:

  • Yiruo Lin, Ph.D., research assistant professor in the Texas A&M College of Engineering Department of Computer Science and Engineering.
  • Matthias Koch, Ph.D., assistant professor in the Texas A&M College of Arts and Sciences Department of Biology.
  • Zemer Gitai, Ph.D., and Joshua Shaevitz, Ph.D., professors in the Princeton University Department of Molecular Biology and Department of Physics, respectively.
  • Yinghao Wu, Ph.D., associate professor in the Albert Einstein College of Medicine Department of Systems and Computational Biology.

Together, the team worked to explain a series of interactions scientists have sought to understand since the early 1970s.

The need for new treatments

Pseudomonas aeruginosa is a type of bacteria that can cause infections in the blood, lungs and occasionally other parts of the body. These infections are especially common in healthcare settings, which often encounter drug-resistant bacteria. According to the Centers for Disease Control and Prevention, there were over 30,000 cases of multi-drug resistant P. aeruginosa infections among hospitalized patients in 2017.

The prevalence of antibiotic-resistant Pseudomonas infections makes them a practical point of focus for phage therapy, a type of treatment method using bacteriophages, or phages, that researchers at the Texas A&M Center for Phage Technology are exploring as an alternative to typical drugs.

Zeng and Zhang, co-directors at the center along with Jason Gill, Ph.D., associate professor in the Department of Animal Science, are exploring the usefulness of phages, even beyond phage therapy, by diving into the structures and mechanisms at play.

Targeting the pilus

One of the factors that allows P. aeruginosa to transmit antimicrobial-resistance genes to one another, as well as move around and create difficult-to-treat structures called biofilms, is an appendage called a pilus, named after the Latin word for hair. These cylindrical structures extend from the surface of bacteria.

Some phages make use of bacterial pili by attaching to them and allowing bacteria to reel the phage to the surface, where the phage can start infecting the bacteria.

In their study in Science , co-first authored by Texas A&M graduate students Jirapat Thongchol and Zihao Yu, the researchers studied this process step by step using fluorescence microscopy, cryogenic-electron microscopy and computational modeling. They observed how a phage called PP7 infects P. aeruginosa by attaching to the pilus, which then retracts and pulls the phage to the cell surface.

At the point of entry for the virus, the pilus bends and snaps off, and the loss of the pilus makes P. aeruginosa much less capable of infecting its own host.

Ongoing research

This work is a continuation of previous research published in 2020, when Zeng's team found a phage that can similarly break off the pili of E. coli cells, preventing the bacteria from sharing genes among each other -- a common way that antibiotic resistance spreads.

From left to right: Lanying Zeng, Ph.D., Junjie Zhang, Ph.D., Zihao Yu and Jirapat Thongchol. Along with others, these researchers at the Texas A&M Center for Phage Technology are searching for solutions to antibiotic-resistant bacterial infections and characterizing phage-bacterium interactions. (Zihao Yu/Texas A&M AgriLife)

The Science study on Pseudomonas is part of the team's recent suite of research studies. Last month, they published findings in Nature Communications on the interaction between another genus of bacteria, Acinetobacter, and a phage that infects it. Another study, expected to be published next month, will cover a third genus of bacteria and additional phage.

The team's progress in determining precise protein structures and molecular interactions has been made possible with AgriLife Research's new cryo-electron microscope, which opened at Texas A&M at the end of 2022 and can resolve structures at the atomic level.

"In our earlier study on E. coli, we did not really explore much about the mechanism," Zeng said. "In our study of Pseudomonas, we were able to explain much more about what exactly is going on, including the force and speed of pilus detachment, and understand why and how this happens."

Uses in medicine

The implications of this ongoing research could prove to be important in treating antimicrobial infections. Zhang said doctors wouldn't need to use phages to kill the bacteria -- as is done in phage therapy -- but could simply allow the viruses to disarm the bacteria, which may give the immune system the chance to fight the infection on its own or allow doctors to treat patients with lower doses of antibiotics.

"If you simply kill the bacteria, you break the cells, and they're going to release toxic material from inside the cell into the host," Zhang said. "Our approach is to use a particular type of phage that disarms the bacteria. We remove their ability to exchange drug-resistance genes or to move around by breaking off this appendage."

The team of phage scientists said they will continue looking for similar instances of phages dampening the virulence of pathogenic bacteria.

"We're taking a synergistic approach," Zhang said. "We're trying to understand a universal mechanism for this type of phage and how they're capable of affecting other types of bacteria. That's the overall aim of our collaborative effort: to try to tackle the problem of multi-drug resistant bacteria."

Story Source:

Materials provided by Texas A&M AgriLife Communications . Original written by Ashley Vargo. Note: Content may be edited for style and length.

Journal Reference :

  • Jirapat Thongchol, Zihao Yu, Laith Harb, Yiruo Lin, Matthias Koch, Matthew Theodore, Utkarsh Narsaria, Joshua Shaevitz, Zemer Gitai, Yinghao Wu, Junjie Zhang, Lanying Zeng. Removal of Pseudomonas type IV pili by a small RNA virus. Science, 2024; 384 (6691) DOI: 10.1126/science.adl0635

COMMENTS

  1. What Is a Research Methodology?

    Step 1: Explain your methodological approach. Step 2: Describe your data collection methods. Step 3: Describe your analysis method. Step 4: Evaluate and justify the methodological choices you made. Tips for writing a strong methodology chapter. Other interesting articles.

  2. A tutorial on methodological studies: the what, when, how and why

    Methodological studies - studies that evaluate the design, analysis or reporting of other research-related reports - play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

  3. 6. The Methodology

    Bem, Daryl J. Writing the Empirical Journal Article. Psychology Writing Center. University of Washington; Denscombe, Martyn. The Good Research Guide: For Small-Scale Social Research Projects. 5th edition. Buckingham, UK: Open University Press, 2014; Lunenburg, Frederick C. Writing a Successful Thesis or Dissertation: Tips and Strategies for Students in the Social and Behavioral Sciences.

  4. LibGuides: Scholarly Articles: How can I tell?: Methodology

    Methodology. The methodology section or methods section tells you how the author(s) went about doing their research. It should let you know a) what method they used to gather data (survey, interviews, experiments, etc.), why they chose this method, and what the limitations are to this method. The methodology section should be detailed enough ...

  5. PDF Methodology: What It Is and Why It Is So Important

    components of methodology one could add. For example, the historical roots of science and science and social policy are legitimate topics that could be covered as well. Yet, in developing an appreciation for methodology and the skills involved in many of the key facets of actually conducting research, the five will suffice.

  6. Literature review as a research methodology: An ...

    This is why the literature review as a research method is more relevant than ever. Traditional literature reviews often lack thoroughness and rigor and are conducted ad hoc, rather than following a specific methodology. Therefore, questions can be raised about the quality and trustworthiness of these types of reviews.

  7. A Comprehensive Guide to Methodology in Research

    Research methodology refers to the system of procedures, techniques, and tools used to carry out a research study. It encompasses the overall approach, including the research design, data collection methods, data analysis techniques, and the interpretation of findings. Research methodology plays a crucial role in the field of research, as it ...

  8. Research Methods

    Research methods are specific procedures for collecting and analyzing data. Developing your research methods is an integral part of your research design. When planning your methods, there are two key decisions you will make. First, decide how you will collect data. Your methods depend on what type of data you need to answer your research question:

  9. Reviewing the research methods literature: principles and strategies

    Overviews of methods are potentially useful means to increase clarity and enhance collective understanding of specific methods topics that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness. This type of review represents a distinct literature synthesis method, although to date, its methodology remains relatively undeveloped despite several aspects that demand ...

  10. What Is Research Methodology? Definition + Examples

    As we mentioned, research methodology refers to the collection of practical decisions regarding what data you'll collect, from whom, how you'll collect it and how you'll analyse it. Research design, on the other hand, is more about the overall strategy you'll adopt in your study. For example, whether you'll use an experimental design ...

  11. Methodology for research II

    The 'methodology' in a research strategy outlines the steps involved in the research process. The research problem is identified, aims and objectives are formulated, sample size is calculated; Ethics Committee approval and informed consent from the subject are taken; data collected are summarised. The research design is planned, and the ...

  12. Research Methodology

    Qualitative Research Methodology. This is a research methodology that involves the collection and analysis of non-numerical data such as words, images, and observations. This type of research is often used to explore complex phenomena, to gain an in-depth understanding of a particular topic, and to generate hypotheses.

  13. A Practical Guide to Writing Quantitative and Qualitative Research

    INTRODUCTION. Scientific research is usually initiated by posing evidence-based research questions which are then explicitly restated as hypotheses.1,2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results.3,4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the ...

  14. Planning Qualitative Research: Design and Decision Making for New

    While many books and articles guide various qualitative research methods and analyses, there is currently no concise resource that explains and differentiates among the most common qualitative approaches. We believe novice qualitative researchers, students planning the design of a qualitative study or taking an introductory qualitative research course, and faculty teaching such courses can ...

  15. What is Research Methodology? Definition, Types, and Examples

    Definition, Types, and Examples. Research methodology 1,2 is a structured and scientific approach used to collect, analyze, and interpret quantitative or qualitative data to answer research questions or test hypotheses. A research methodology is like a plan for carrying out research and helps keep researchers on track by limiting the scope of ...

  16. PDF Review Article Exploring Research Methodology: Review Article

    Research methodology is a way to systematically solve the research problem. It may be understood as a science of studying how research is done scientifically. In it we study the various steps that are generally adopted by a researcher in studying his research problem along with the logic behind them.

  17. Full article: Methodology or method? A critical review of qualitative

    Study design. The critical review method described by Grant and Booth (2009) was used, which is appropriate for the assessment of research quality, and is used for literature analysis to inform research and practice. This type of review goes beyond the mapping and description of scoping or rapid reviews, to include "analysis and conceptual innovation" (Grant & Booth, 2009 ...

  18. A tutorial on methodological studies: the what, when, how and why

    Background Methodological studies - studies that evaluate the design, analysis or reporting of other research-related reports - play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste. Main body We provide an overview of some of the key aspects of ...

  19. How to Write a Research Methodology for Your Academic Article

    The Methodology section portrays the reasoning for the application of certain techniques and methods in the context of the study. For your academic article, when you describe and explain your chosen methods it is very important to correlate them to your research questions and/or hypotheses. The description of the methods used should include ...

  20. Methodological Innovations: Sage Journals

    Methodological Innovations is an international, open access journal and the principal venue for publishing peer-reviewed, social-research methods articles. Methodological Innovations is the forum for methodological advances and debates in social research … | View full journal description. This journal is a member of the Committee on ...

  21. (PDF) Research Methodology

    A research approach is a plan of action that gives direction to conduct research systematically and efficiently. There are three main research approaches (Creswell 2009): i) quantitative ...

  22. (PDF) Research Methods and Methodology

    The following research methods were used in writing the article: analysis of literature on the problem, our own experience in teaching at the university. The result of the article is the ...

  23. Methodology for research I

    INTRODUCTION. Research is a process for acquiring new knowledge in a systematic approach involving diligent planning and interventions for discovery or interpretation of the newly gained information.[1,2] The outcome reliability and validity of a study would depend on a well-designed study with objective, reliable, repeatable methodology with appropriate conduct, data collection and its analysis ...

  24. Journal of Medical Internet Research

    Methods: A concurrent mixed methods, prospective study using a quasi-experimental design was conducted with 147 participants from 5 primary care Family Health Teams (FHTs; characterized by multidisciplinary practice and capitated funding) across southwestern Ontario, Canada. ... Interactive Journal of Medical Research 362 articles JMIRx Med 359 ...

  25. A method for identifying different types of university research teams

    Identifying research teams constitutes a fundamental step in team science research, and universities harbor diverse types of such teams. This study introduces a method and proposes algorithms for ...

  26. Transformations That Work

    In this article the authors present one based on research with dozens of leading companies that have defied the odds, such as Ford, Dell, Amgen, T-Mobile, Adobe, and Virgin Australia. The ...

  27. Frontiers

    The overall status of the food system in Thailand is currently unknown. Although several national and international reports describe Thailand food system, they are not accurate and relevant to inform policies. This study aims to develop indicators which measure Thailand's sustainable food system. We adopted seven-dimensional metrics proposed by Gustafson to facilitate a comparative analysis of ...

  28. [2404.07143] Leave No Context Behind: Efficient Infinite Context

    This work introduces an efficient method to scale Transformer-based Large Language Models (LLMs) to infinitely long inputs with bounded memory and computation. A key component in our proposed approach is a new attention technique dubbed Infini-attention. The Infini-attention incorporates a compressive memory into the vanilla attention mechanism and builds in both masked local attention and ...

  29. Listening to Moose Tracks

    Moose have notable differences in characteristics between sexes, and the research team found that this dimorphism extends to vocal differences. Using online videos filmed by hunters and recreationalists and which captured more than 670 moose vocalizations from across the United States, the research team matched the mouth or throat movement to ...

  30. Researchers resolve old mystery of how phages disarm pathogenic

    In response, scientists at Texas A&M AgriLife Research are elucidating the ways that bacteria-infecting viruses disarm these pathogens and ushering in the possibility of novel treatment methods.