What is Comparative Analysis and How to Conduct It? (+ Examples)

Appinio Research · 30.10.2023 · 36min read

Have you ever faced a complex decision, wondering how to make the best choice among multiple options? In a world filled with data and possibilities, the art of comparative analysis holds the key to unlocking clarity amidst the chaos.

In this guide, we'll demystify the power of comparative analysis, revealing its practical applications, methodologies, and best practices. Whether you're a business leader, researcher, or simply someone seeking to make more informed decisions, join us as we explore the intricacies of comparative analysis and equip you with the tools to chart your course with confidence.

What is Comparative Analysis?

Comparative analysis is a systematic approach used to evaluate and compare two or more entities, variables, or options to identify similarities, differences, and patterns. It involves assessing the strengths, weaknesses, opportunities, and threats associated with each entity or option to make informed decisions.

The primary purpose of comparative analysis is to provide a structured framework for decision-making by:

  • Facilitating Informed Choices: Comparative analysis equips decision-makers with data-driven insights, enabling them to make well-informed choices among multiple options.
  • Identifying Trends and Patterns: It helps identify recurring trends, patterns, and relationships among entities or variables, shedding light on underlying factors influencing outcomes.
  • Supporting Problem Solving: Comparative analysis aids in solving complex problems by systematically breaking them down into manageable components and evaluating potential solutions.
  • Enhancing Transparency: By comparing multiple options, comparative analysis promotes transparency in decision-making processes, allowing stakeholders to understand the rationale behind choices.
  • Mitigating Risks: It helps assess the risks associated with each option, allowing organizations to develop risk mitigation strategies and make risk-aware decisions.
  • Optimizing Resource Allocation: Comparative analysis assists in allocating resources efficiently by identifying areas where resources can be optimized for maximum impact.
  • Driving Continuous Improvement: By comparing current performance with historical data or benchmarks, organizations can identify improvement areas and implement growth strategies.

Importance of Comparative Analysis in Decision-Making

  • Data-Driven Decision-Making: Comparative analysis relies on empirical data and objective evaluation, reducing the influence of biases and subjective judgments in decision-making. It ensures decisions are based on facts and evidence.
  • Objective Assessment: It provides an objective and structured framework for evaluating options, allowing decision-makers to focus on key criteria and avoid making decisions solely based on intuition or preferences.
  • Risk Assessment: Comparative analysis helps assess and quantify risks associated with different options. This risk awareness enables organizations to make proactive risk management decisions.
  • Prioritization: By ranking options based on predefined criteria, comparative analysis enables decision-makers to prioritize actions or investments, directing resources to areas with the most significant impact.
  • Strategic Planning: It is integral to strategic planning, helping organizations align their decisions with overarching goals and objectives. Comparative analysis ensures decisions are consistent with long-term strategies.
  • Resource Allocation: Organizations often have limited resources. Comparative analysis assists in allocating these resources effectively, ensuring they are directed toward initiatives with the highest potential returns.
  • Continuous Improvement: Comparative analysis supports a culture of continuous improvement by identifying areas for enhancement and guiding iterative decision-making processes.
  • Stakeholder Communication: It enhances transparency in decision-making, making it easier to communicate decisions to stakeholders. Stakeholders can better understand the rationale behind choices when supported by comparative analysis.
  • Competitive Advantage: In business and competitive environments, comparative analysis can provide a competitive edge by identifying opportunities to outperform competitors or address weaknesses.
  • Informed Innovation: When evaluating new products, technologies, or strategies, comparative analysis guides the selection of the most promising options, reducing the risk of investing in unsuccessful ventures.

In summary, comparative analysis is a valuable tool that empowers decision-makers across various domains to make informed, data-driven choices, manage risks, allocate resources effectively, and drive continuous improvement. Its structured approach enhances decision quality and transparency, contributing to the success and competitiveness of organizations and research endeavors.

How to Prepare for Comparative Analysis?

1. Define Objectives and Scope

Before you begin your comparative analysis, clearly defining your objectives and the scope of your analysis is essential. This step lays the foundation for the entire process. Here's how to approach it:

  • Identify Your Goals: Start by asking yourself what you aim to achieve with your comparative analysis. Are you trying to choose between two products for your business? Are you evaluating potential investment opportunities? Knowing your objectives will help you stay focused throughout the analysis.
  • Define Scope: Determine the boundaries of your comparison. What will you include, and what will you exclude? For example, if you're analyzing market entry strategies for a new product, specify whether you're looking at a specific geographic region or a particular target audience.
  • Stakeholder Alignment: Ensure that all stakeholders involved in the analysis understand and agree on the objectives and scope. This alignment will prevent misunderstandings and ensure the analysis meets everyone's expectations.

2. Gather Relevant Data and Information

The quality of your comparative analysis heavily depends on the data and information you gather. Here's how to approach this crucial step:

  • Data Sources: Identify where you'll obtain the necessary data. Will you rely on primary sources, such as surveys and interviews, to collect original data? Or will you use secondary sources, like published research and industry reports, to access existing data? Consider the advantages and disadvantages of each source.
  • Data Collection Plan: Develop a plan for collecting data. This should include details about the methods you'll use, the timeline for data collection, and who will be responsible for gathering the data.
  • Data Relevance: Ensure that the data you collect is directly relevant to your objectives. Irrelevant or extraneous data can lead to confusion and distract from the core analysis.

3. Select Appropriate Criteria for Comparison

Choosing the right criteria for comparison is critical to a successful comparative analysis. Here's how to go about it:

  • Relevance to Objectives: Your chosen criteria should align closely with your analysis objectives. For example, if you're comparing job candidates, your criteria might include skills, experience, and cultural fit.
  • Measurability: Consider whether you can quantify the criteria. Measurable criteria are easier to analyze. If you're comparing marketing campaigns, you might measure criteria like click-through rates, conversion rates, and return on investment.
  • Weighting Criteria: Not all criteria are equally important. You'll need to assign weights to each criterion based on its relative importance. Weighting helps ensure that the most critical factors have a more significant impact on the final decision.

4. Establish a Clear Framework

Once you have your objectives, data, and criteria in place, it's time to establish a clear framework for your comparative analysis. This framework will guide your process and ensure consistency. Here's how to do it:

  • Comparative Matrix: Consider using a comparative matrix or spreadsheet to organize your data. Each row in the matrix represents an option or entity you're comparing, and each column corresponds to a criterion. This visual representation makes it easy to compare and contrast data.
  • Timeline: Determine the time frame for your analysis. Is it a one-time comparison, or will you conduct ongoing analyses? Having a defined timeline helps you manage the analysis process efficiently.
  • Define Metrics: Specify the metrics or scoring system you'll use to evaluate each criterion. For example, if you're comparing potential office locations, you might use a scoring system from 1 to 5 for factors like cost, accessibility, and amenities.
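
As a minimal sketch of the comparative matrix and 1-to-5 scoring system described above (the locations, criteria, and scores here are purely hypothetical), the whole framework fits in a few lines of Python:

```python
# Hypothetical comparative matrix: each row is an option, each column a criterion.
# Every criterion is scored on the 1-to-5 scale described above.
matrix = {
    "Location A": {"cost": 4, "accessibility": 3, "amenities": 5},
    "Location B": {"cost": 2, "accessibility": 5, "amenities": 4},
    "Location C": {"cost": 5, "accessibility": 2, "amenities": 3},
}

def total_score(scores):
    """Unweighted total across all criteria for one option."""
    return sum(scores.values())

# Rank options from highest to lowest total score.
ranking = sorted(matrix, key=lambda opt: total_score(matrix[opt]), reverse=True)
print(ranking)  # ['Location A', 'Location B', 'Location C']
```

Swapping in your own options and criteria keeps the side-by-side comparison mechanical and auditable, which is exactly what the framework is for.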

With your objectives, data, criteria, and framework established, you're ready to move on to the next phase of comparative analysis: data collection and organization.

Comparative Analysis Data Collection

Data collection and organization are critical steps in the comparative analysis process. We'll explore how to gather and structure the data you need for a successful analysis.

1. Utilize Primary Data Sources

Primary data sources involve gathering original data directly from the source. This approach offers unique advantages, allowing you to tailor your data collection to your specific research needs.

Some popular primary data sources include:

  • Surveys and Questionnaires: Design surveys or questionnaires and distribute them to collect specific information from individuals or groups. This method is ideal for obtaining firsthand insights, such as customer preferences or employee feedback.
  • Interviews: Conduct structured interviews with relevant stakeholders or experts. Interviews provide an opportunity to delve deeper into subjects and gather qualitative data, making them valuable for in-depth analysis.
  • Observations: Directly observe and record data from real-world events or settings. Observational data can be instrumental in fields like anthropology, ethnography, and environmental studies.
  • Experiments: In controlled environments, experiments allow you to manipulate variables and measure their effects. This method is common in scientific research and product testing.

When using primary data sources, consider factors like sample size, survey design, and data collection methods to ensure the reliability and validity of your data.

2. Harness Secondary Data Sources

Secondary data sources involve using existing data collected by others. These sources can provide a wealth of information and save time and resources compared to primary data collection.

Here are common types of secondary data sources:

  • Public Records: Government publications, census data, and official reports offer valuable information on demographics, economic trends, and public policies. They are often free and readily accessible.
  • Academic Journals: Scholarly articles provide in-depth research findings across various disciplines. They are helpful for accessing peer-reviewed studies and staying current with academic discourse.
  • Industry Reports: Industry-specific reports and market research publications offer insights into market trends, consumer behavior, and competitive landscapes. They are essential for businesses making strategic decisions.
  • Online Databases: Online platforms like Statista, PubMed, and Google Scholar provide a vast repository of data and research articles. They offer search capabilities and access to a wide range of data sets.

When using secondary data sources, critically assess the credibility, relevance, and timeliness of the data. Ensure that it aligns with your research objectives.

3. Ensure and Validate Data Quality

Data quality is paramount in comparative analysis. Poor-quality data can lead to inaccurate conclusions and flawed decision-making. Here's how to ensure data validation and reliability:

  • Cross-Verification: Whenever possible, cross-verify data from multiple sources. Consistency among different sources enhances the reliability of the data.
  • Sample Size: Ensure that your data sample size is statistically significant for meaningful analysis. A small sample may not accurately represent the population.
  • Data Integrity: Check for data integrity issues, such as missing values, outliers, or duplicate entries. Address these issues before analysis to maintain data quality.
  • Data Source Reliability: Assess the reliability and credibility of the data sources themselves. Consider factors like the reputation of the institution or organization providing the data.
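
The integrity checks above (missing values, duplicate entries) are easy to automate before analysis begins. A minimal sketch, using entirely made-up records:

```python
# Hypothetical records collected for analysis.
records = [
    {"id": 1, "revenue": 120.0},
    {"id": 2, "revenue": None},    # missing value
    {"id": 1, "revenue": 120.0},   # duplicate entry
]

def integrity_issues(records, key="id"):
    """Flag missing values and duplicate keys before analysis."""
    issues = []
    seen = set()
    for rec in records:
        if any(v is None for v in rec.values()):
            issues.append(("missing_value", rec))
        if rec[key] in seen:
            issues.append(("duplicate", rec))
        seen.add(rec[key])
    return issues

issues = integrity_issues(records)
print(issues)  # flags the record with the missing value and the duplicate
```

Running a check like this on every incoming dataset catches quality problems early, before they can distort the comparison.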

4. Organize Data Effectively

Structuring your data for comparison is a critical step in the analysis process. Organized data makes it easier to draw insights and make informed decisions. Here's how to structure data effectively:

  • Data Cleaning: Before analysis, clean your data to remove inconsistencies, errors, and irrelevant information. Data cleaning may involve data transformation, imputation of missing values, and removing outliers.
  • Normalization: Standardize data to ensure fair comparisons. Normalization adjusts data to a standard scale, making comparing variables with different units or ranges possible.
  • Variable Labeling: Clearly label variables and data points for easy identification. Proper labeling enhances the transparency and understandability of your analysis.
  • Data Organization: Organize data into a format that suits your analysis methods. For quantitative analysis, this might mean creating a matrix, while qualitative analysis may involve categorizing data into themes.
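
Normalization, as described above, rescales variables so that different units or ranges can be compared fairly. One common approach is min-max normalization; a sketch with hypothetical cost figures:

```python
def min_max_normalize(values):
    """Rescale a list of numbers to the 0-1 range so criteria
    measured in different units become directly comparable."""
    lo, hi = min(values), max(values)
    if hi == lo:                       # all values identical
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical example: annual office costs in dollars.
costs = [120_000, 95_000, 150_000]
normalized = min_max_normalize(costs)
print(normalized)  # cheapest maps to 0.0, most expensive to 1.0
```

After normalization, a cost in dollars and a satisfaction score on a 1-to-10 scale both live on the same 0-1 scale and can be combined in a single comparison.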

By paying careful attention to data collection, validation, and organization, you'll set the stage for a robust and insightful comparative analysis. Next, we'll explore various methodologies you can employ in your analysis, ranging from qualitative approaches to quantitative methods and examples.

Comparative Analysis Methods

When it comes to comparative analysis, various methodologies are available, each suited to different research goals and data types. In this section, we'll explore five prominent methodologies in detail.

Qualitative Comparative Analysis (QCA)

Qualitative Comparative Analysis (QCA) is a methodology often used when dealing with complex, non-linear relationships among variables. It seeks to identify patterns and configurations among factors that lead to specific outcomes.

  • Case-by-Case Analysis: QCA involves evaluating individual cases (e.g., organizations, regions, or events) rather than analyzing aggregate data. Each case's unique characteristics are considered.
  • Boolean Logic: QCA employs Boolean algebra to analyze data. Variables are categorized as either present or absent, allowing for the examination of different combinations and logical relationships.
  • Necessary and Sufficient Conditions: QCA aims to identify necessary and sufficient conditions for a specific outcome to occur. It helps answer questions like, "What conditions are necessary for a successful product launch?"
  • Fuzzy Set Theory: In some cases, QCA may use fuzzy set theory to account for degrees of membership in a category, allowing for more nuanced analysis.
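
The Boolean, case-by-case logic of crisp-set QCA can be illustrated with a small sketch (the cases and conditions below are invented for illustration, not drawn from any real study):

```python
# Hypothetical crisp-set cases: each condition is present (True) or absent (False).
cases = [
    {"strong_brand": True,  "low_price": True,  "success": True},
    {"strong_brand": True,  "low_price": False, "success": True},
    {"strong_brand": False, "low_price": True,  "success": False},
]

def is_necessary(condition, outcome, cases):
    """Necessary: the condition is present in every case where the outcome occurs."""
    return all(c[condition] for c in cases if c[outcome])

def is_sufficient(condition, outcome, cases):
    """Sufficient: the outcome occurs in every case where the condition is present."""
    return all(c[outcome] for c in cases if c[condition])

print(is_necessary("strong_brand", "success", cases))   # True
print(is_sufficient("low_price", "success", cases))     # False
```

Real QCA software handles many conditions and minimizes the resulting truth table, but the core questions it answers are exactly these two.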

QCA is particularly useful in fields such as sociology, political science, and organizational studies, where understanding complex interactions is essential.

Quantitative Comparative Analysis

Quantitative Comparative Analysis involves the use of numerical data and statistical techniques to compare and analyze variables. It's suitable for situations where data is quantitative, and relationships can be expressed numerically.

  • Statistical Tools: Quantitative comparative analysis relies on statistical methods like regression analysis, correlation, and hypothesis testing. These tools help identify relationships, dependencies, and trends within datasets.
  • Data Measurement: Ensure that variables are measured consistently using appropriate scales (e.g., ordinal, interval, ratio) for meaningful analysis. Variables may include numerical values like revenue, customer satisfaction scores, or product performance metrics.
  • Data Visualization: Create visual representations of data using charts, graphs, and plots. Visualization aids in understanding complex relationships and presenting findings effectively.
  • Statistical Significance: Assess the statistical significance of relationships. Statistical significance indicates whether observed differences or relationships are likely to be real rather than due to chance.
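
To make the statistical side concrete, here is a sketch of one such tool, the Pearson correlation coefficient, applied to invented campaign data (the figures are hypothetical):

```python
import math

def pearson_correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: ad spend vs. monthly revenue for six campaigns.
ad_spend = [10, 20, 30, 40, 50, 60]
revenue  = [25, 44, 68, 81, 105, 122]
r = pearson_correlation(ad_spend, revenue)
print(round(r, 3))  # close to 1.0: a strong positive relationship
```

A coefficient near +1 or -1 signals a strong linear relationship; in practice you would pair this with a significance test before drawing conclusions.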

Quantitative comparative analysis is commonly applied in economics, social sciences, and market research to draw empirical conclusions from numerical data.

Case Studies

Case studies involve in-depth examinations of specific instances or cases to gain insights into real-world scenarios. Comparative case studies allow researchers to compare and contrast multiple cases to identify patterns, differences, and lessons.

  • Narrative Analysis: Case studies often involve narrative analysis, where researchers construct detailed narratives of each case, including context, events, and outcomes.
  • Contextual Understanding: In comparative case studies, it's crucial to consider the context within which each case operates. Understanding the context helps interpret findings accurately.
  • Cross-Case Analysis: Researchers conduct cross-case analysis to identify commonalities and differences across cases. This process can lead to the discovery of factors that influence outcomes.
  • Triangulation: To enhance the validity of findings, researchers may use multiple data sources and methods to triangulate information and ensure reliability.

Case studies are prevalent in fields like psychology, business, and sociology, where deep insights into specific situations are valuable.

SWOT Analysis

SWOT Analysis is a strategic tool used to assess the Strengths, Weaknesses, Opportunities, and Threats associated with a particular entity or situation. While it's commonly used in business, it can be adapted for various comparative analyses.

  • Internal and External Factors: SWOT Analysis examines both internal factors (Strengths and Weaknesses), such as organizational capabilities, and external factors (Opportunities and Threats), such as market conditions and competition.
  • Strategic Planning: The insights from SWOT Analysis inform strategic decision-making. By identifying strengths and opportunities, organizations can leverage their advantages. Likewise, addressing weaknesses and threats helps mitigate risks.
  • Visual Representation: SWOT Analysis is often presented as a matrix or a 2x2 grid, making it visually accessible and easy to communicate to stakeholders.
  • Continuous Monitoring: SWOT Analysis is not a one-time exercise. Organizations use it periodically to adapt to changing circumstances and make informed decisions.

SWOT Analysis is versatile and can be applied in business, healthcare, education, and any context where a structured assessment of factors is needed.

Benchmarking

Benchmarking involves comparing an entity's performance, processes, or practices to those of industry leaders or best-in-class organizations. It's a powerful tool for continuous improvement and competitive analysis.

  • Identify Performance Gaps: Benchmarking helps identify areas where an entity lags behind its peers or industry standards. These performance gaps highlight opportunities for improvement.
  • Data Collection: Gather data on key performance metrics from both internal and external sources. This data collection phase is crucial for meaningful comparisons.
  • Comparative Analysis: Compare your organization's performance data with that of benchmark organizations. This analysis can reveal where you excel and where adjustments are needed.
  • Continuous Improvement: Benchmarking is a dynamic process that encourages continuous improvement. Organizations use benchmarking findings to set performance goals and refine their strategies.
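
The gap-identification step at the heart of benchmarking can be sketched in a few lines (all metric names and values below are hypothetical):

```python
# Hypothetical metrics: our values vs. a best-in-class benchmark.
ours = {"on_time_delivery": 0.87, "defect_rate": 0.04, "nps": 42}
benchmark = {"on_time_delivery": 0.95, "defect_rate": 0.01, "nps": 60}
# For these metrics, higher is better except defect_rate.
higher_is_better = {"on_time_delivery": True, "defect_rate": False, "nps": True}

def performance_gaps(ours, benchmark, higher_is_better):
    """Return the metrics where we trail the benchmark, with the gap size."""
    gaps = {}
    for metric, our_value in ours.items():
        best = benchmark[metric]
        trailing = our_value < best if higher_is_better[metric] else our_value > best
        if trailing:
            gaps[metric] = abs(our_value - best)
    return gaps

gaps = performance_gaps(ours, benchmark, higher_is_better)
print(gaps)  # every metric where we lag, and by how much
```

The output is a prioritized to-do list: the metrics with the largest gaps are the natural candidates for improvement goals.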

Benchmarking is widely used in business, manufacturing, healthcare, and customer service to drive excellence and competitiveness.

Each of these methodologies brings a unique perspective to comparative analysis, allowing you to choose the one that best aligns with your research objectives and the nature of your data. The choice between qualitative and quantitative methods, or a combination of both, depends on the complexity of the analysis and the questions you seek to answer.

How to Conduct Comparative Analysis?

Once you've prepared your data and chosen an appropriate methodology, it's time to dive into the process of conducting a comparative analysis. We will guide you through the essential steps to extract meaningful insights from your data.

1. Identify Key Variables and Metrics

Identifying key variables and metrics is the first crucial step in conducting a comparative analysis. These are the factors or indicators you'll use to assess and compare your options.

  • Relevance to Objectives: Ensure the chosen variables and metrics align closely with your analysis objectives. When comparing marketing strategies, relevant metrics might include customer acquisition cost, conversion rate, and retention.
  • Quantitative vs. Qualitative: Decide whether your analysis will focus on quantitative data (numbers) or qualitative data (descriptive information). In some cases, a combination of both may be appropriate.
  • Data Availability: Consider the availability of data. Ensure you can access reliable and up-to-date data for all selected variables and metrics.
  • KPIs: Key Performance Indicators (KPIs) are often used as the primary metrics in comparative analysis. These are metrics that directly relate to your goals and objectives.

2. Visualize Data for Clarity

Data visualization techniques play a vital role in making complex information more accessible and understandable. Effective data visualization allows you to convey insights and patterns to stakeholders. Consider the following approaches:

  • Charts and Graphs: Use various types of charts, such as bar charts, line graphs, and pie charts, to represent data. For example, a line graph can illustrate trends over time, while a bar chart can compare values across categories.
  • Heatmaps: Heatmaps are particularly useful for visualizing large datasets and identifying patterns through color-coding. They can reveal correlations, concentrations, and outliers.
  • Scatter Plots: Scatter plots help visualize relationships between two variables. They are especially useful for identifying trends, clusters, or outliers.
  • Dashboards: Create interactive dashboards that allow users to explore data and customize views. Dashboards are valuable for ongoing analysis and reporting.
  • Infographics: For presentations and reports, consider using infographics to summarize key findings in a visually engaging format.

Effective data visualization not only enhances understanding but also aids in decision-making by providing clear insights at a glance.

3. Establish Clear Comparative Frameworks

A well-structured comparative framework provides a systematic approach to your analysis. It ensures consistency and enables you to make meaningful comparisons. Here's how to create one:

  • Comparison Matrices: Consider using matrices or spreadsheets to organize your data. Each row represents an option or entity, and each column corresponds to a variable or metric. This matrix format allows for side-by-side comparisons.
  • Decision Trees: In complex decision-making scenarios, decision trees help map out possible outcomes based on different criteria and variables. They visualize the decision-making process.
  • Scenario Analysis: Explore different scenarios by altering variables or criteria to understand how changes impact outcomes. Scenario analysis is valuable for risk assessment and planning.
  • Checklists: Develop checklists or scoring sheets to systematically evaluate each option against predefined criteria. Checklists ensure that no essential factors are overlooked.

A well-structured comparative framework simplifies the analysis process, making it easier to draw meaningful conclusions and make informed decisions.

4. Evaluate and Score Criteria

Evaluating and scoring criteria is a critical step in comparative analysis, as it quantifies the performance of each option against the chosen criteria.

  • Scoring System: Define a scoring system that assigns values to each criterion for every option. Common scoring systems include numerical scales, percentage scores, or qualitative ratings (e.g., high, medium, low).
  • Consistency: Ensure consistency in scoring by defining clear guidelines for each score. Provide examples or descriptions to help evaluators understand what each score represents.
  • Data Collection: Collect data or information relevant to each criterion for all options. This may involve quantitative data (e.g., sales figures) or qualitative data (e.g., customer feedback).
  • Aggregation: Aggregate the scores for each option to obtain an overall evaluation. This can be done by summing the individual criterion scores or applying weighted averages.
  • Normalization: If your criteria have different measurement scales or units, consider normalizing the scores to create a level playing field for comparison.
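
The score-then-aggregate steps above reduce to a weighted average. A minimal sketch, with hypothetical vendors, criteria, and weights (the weights sum to 1.0):

```python
# Hypothetical criterion scores (1-5 scale) and weights (sum to 1.0).
weights = {"cost": 0.5, "quality": 0.3, "support": 0.2}
options = {
    "Vendor A": {"cost": 3, "quality": 5, "support": 4},
    "Vendor B": {"cost": 5, "quality": 3, "support": 3},
}

def weighted_score(scores, weights):
    """Weighted average: each criterion contributes in proportion to its weight."""
    return sum(scores[c] * w for c, w in weights.items())

for name, scores in options.items():
    print(name, round(weighted_score(scores, weights), 2))
# Vendor A: 3*0.5 + 5*0.3 + 4*0.2 = 3.8
# Vendor B: 5*0.5 + 3*0.3 + 3*0.2 = 4.0
```

With these particular weights, Vendor B's cost advantage outweighs Vendor A's quality edge; change the weights and the answer can change, which is why the next step matters.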

5. Assign Importance to Criteria

Not all criteria are equally important in a comparative analysis. Weighting criteria allows you to reflect their relative significance in the final decision-making process.

  • Relative Importance: Assess the importance of each criterion in achieving your objectives. Criteria directly aligned with your goals may receive higher weights.
  • Weighting Methods: Choose a weighting method that suits your analysis. Common methods include expert judgment, analytic hierarchy process (AHP), or data-driven approaches based on historical performance.
  • Impact Analysis: Consider how changes in the weights assigned to criteria would affect the final outcome. This sensitivity analysis helps you understand the robustness of your decisions.
  • Stakeholder Input: Involve relevant stakeholders or decision-makers in the weighting process. Their input can provide valuable insights and ensure alignment with organizational goals.
  • Transparency: Clearly document the rationale behind the assigned weights to maintain transparency in your analysis.
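
The impact analysis described above can be automated as a weight sweep: vary one weight and watch for the point where the ranking flips. A sketch with hypothetical vendors and scores:

```python
options = {
    "Vendor A": {"cost": 3, "quality": 5, "support": 4},
    "Vendor B": {"cost": 5, "quality": 3, "support": 3},
}

def weighted_score(scores, weights):
    """Weighted average of criterion scores."""
    return sum(scores[c] * w for c, w in weights.items())

def winner(cost_weight):
    """Split the remaining weight evenly between quality and support,
    then return the option with the highest weighted score."""
    rest = (1.0 - cost_weight) / 2
    weights = {"cost": cost_weight, "quality": rest, "support": rest}
    return max(options, key=lambda o: weighted_score(options[o], weights))

# Sweep the cost weight to see where the ranking flips.
for w in (0.2, 0.4, 0.6):
    print(w, winner(w))  # Vendor A wins at low cost weights, Vendor B at high ones
```

If the winner is stable across a wide range of plausible weights, the decision is robust; if it flips near your chosen weights, the weighting deserves more scrutiny and stakeholder discussion.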

By weighting criteria, you ensure that the most critical factors have a more significant influence on the final evaluation, aligning the analysis more closely with your objectives and priorities.

With these steps in place, you're well-prepared to conduct a comprehensive comparative analysis. The next phase involves interpreting your findings, drawing conclusions, and making informed decisions based on the insights you've gained.

Comparative Analysis Interpretation

Interpreting the results of your comparative analysis is a crucial phase that transforms data into actionable insights. We'll delve into various aspects of interpretation and how to make sense of your findings.

  • Contextual Understanding: Before diving into the data, consider the broader context of your analysis. Understand the industry trends, market conditions, and any external factors that may have influenced your results.
  • Drawing Conclusions: Summarize your findings clearly and concisely. Identify trends, patterns, and significant differences among the options or variables you've compared.
  • Quantitative vs. Qualitative Analysis: Depending on the nature of your data and analysis, you may need to balance both quantitative and qualitative interpretations. Qualitative insights can provide context and nuance to quantitative findings.
  • Comparative Visualization: Visual aids such as charts, graphs, and tables can help convey your conclusions effectively. Choose visual representations that align with the nature of your data and the key points you want to emphasize.
  • Outliers and Anomalies: Identify and explain any outliers or anomalies in your data. Understanding these exceptions can provide valuable insights into unusual cases or factors affecting your analysis.
  • Cross-Validation: Validate your conclusions by comparing them with external benchmarks, industry standards, or expert opinions. Cross-validation helps ensure the reliability of your findings.
  • Implications for Decision-Making: Discuss how your analysis informs decision-making. Clearly articulate the practical implications of your findings and their relevance to your initial objectives.
  • Actionable Insights: Emphasize actionable insights that can guide future strategies, policies, or actions. Make recommendations based on your analysis, highlighting the steps needed to capitalize on strengths or address weaknesses.
  • Continuous Improvement: Encourage a culture of continuous improvement by using your analysis as a feedback mechanism. Suggest ways to monitor and adapt strategies over time based on evolving circumstances.

Comparative Analysis Applications

Comparative analysis is a versatile methodology that finds application in various fields and scenarios. Let's explore some of the most common and impactful applications.

Business Decision-Making

Comparative analysis is widely employed in business to inform strategic decisions and drive success. Key applications include:

Market Research and Competitive Analysis

  • Objective: To assess market opportunities and evaluate competitors.
  • Methods: Analyzing market trends, customer preferences, competitor strengths and weaknesses, and market share.
  • Outcome: Informed product development, pricing strategies, and market entry decisions.

Product Comparison and Benchmarking

  • Objective: To compare the performance and features of products or services.
  • Methods: Evaluating product specifications, customer reviews, and pricing.
  • Outcome: Identifying strengths and weaknesses, improving product quality, and setting competitive pricing.

Financial Analysis

  • Objective: To evaluate financial performance and make investment decisions.
  • Methods: Comparing financial statements, ratios, and performance indicators of companies.
  • Outcome: Informed investment choices, risk assessment, and portfolio management.

Healthcare and Medical Research

In the healthcare and medical research fields, comparative analysis is instrumental in understanding diseases, treatment options, and healthcare systems.

Clinical Trials and Drug Development

  • Objective: To compare the effectiveness of different treatments or drugs.
  • Methods: Analyzing clinical trial data, patient outcomes, and side effects.
  • Outcome: Informed decisions about drug approvals, treatment protocols, and patient care.

Health Outcomes Research

  • Objective: To assess the impact of healthcare interventions.
  • Methods: Comparing patient health outcomes before and after treatment or between different treatment approaches.
  • Outcome: Improved healthcare guidelines, cost-effectiveness analysis, and patient care plans.

Healthcare Systems Evaluation

  • Objective: To assess the performance of healthcare systems.
  • Methods: Comparing healthcare delivery models, patient satisfaction, and healthcare costs.
  • Outcome: Informed healthcare policy decisions, resource allocation, and system improvements.

Social Sciences and Policy Analysis

Comparative analysis is a fundamental tool in social sciences and policy analysis, aiding in understanding complex societal issues.

Educational Research

  • Objective: To compare educational systems and practices.
  • Methods: Analyzing student performance, curriculum effectiveness, and teaching methods.
  • Outcome: Informed educational policies, curriculum development, and school improvement strategies.

Political Science

  • Objective: To study political systems, elections, and governance.
  • Methods: Comparing election outcomes, policy impacts, and government structures.
  • Outcome: Insights into political behavior, policy effectiveness, and governance reforms.

Social Welfare and Poverty Analysis

  • Objective: To evaluate the impact of social programs and policies.
  • Methods: Comparing the well-being of individuals or communities with and without access to social assistance.
  • Outcome: Informed policymaking, poverty reduction strategies, and social program improvements.

Environmental Science and Sustainability

Comparative analysis plays a pivotal role in understanding environmental issues and promoting sustainability.

Environmental Impact Assessment

  • Objective: To assess the environmental consequences of projects or policies.
  • Methods: Comparing ecological data, resource use, and pollution levels.
  • Outcome: Informed environmental mitigation strategies, sustainable development plans, and regulatory decisions.

Climate Change Analysis

  • Objective: To study climate patterns and their impacts.
  • Methods: Comparing historical climate data, temperature trends, and greenhouse gas emissions.
  • Outcome: Insights into climate change causes, adaptation strategies, and policy recommendations.

Ecosystem Health Assessment

  • Objective: To evaluate the health and resilience of ecosystems.
  • Methods: Comparing biodiversity, habitat conditions, and ecosystem services.
  • Outcome: Conservation efforts, restoration plans, and ecological sustainability measures.

Technology and Innovation

Comparative analysis is crucial in the fast-paced world of technology and innovation.

Product Development and Innovation

  • Objective: To assess the competitiveness and innovation potential of products or technologies.
  • Methods: Comparing research and development investments, technology features, and market demand.
  • Outcome: Informed innovation strategies, product roadmaps, and patent decisions.

User Experience and Usability Testing

  • Objective: To evaluate the user-friendliness of software applications or digital products.
  • Methods: Comparing user feedback, usability metrics, and user interface designs.
  • Outcome: Improved user experiences, interface redesigns, and product enhancements.

Technology Adoption and Market Entry

  • Objective: To analyze market readiness and risks for new technologies.
  • Methods: Comparing market conditions, regulatory landscapes, and potential barriers.
  • Outcome: Informed market entry strategies, risk assessments, and investment decisions.

These diverse applications of comparative analysis highlight its flexibility and importance in decision-making across various domains. Whether in business, healthcare, social sciences, environmental studies, or technology, comparative analysis empowers researchers and decision-makers to make informed choices and drive positive outcomes.

Comparative Analysis Best Practices

Successful comparative analysis relies on following best practices and avoiding common pitfalls. Implementing these practices enhances the effectiveness and reliability of your analysis.

  • Clearly Defined Objectives: Start with well-defined objectives that outline what you aim to achieve through the analysis. Clear objectives provide focus and direction.
  • Data Quality Assurance: Ensure data quality by validating, cleaning, and normalizing your data. Poor-quality data can lead to inaccurate conclusions.
  • Transparent Methodologies: Clearly explain the methodologies and techniques you've used for analysis. Transparency builds trust and allows others to assess the validity of your approach.
  • Consistent Criteria: Maintain consistency in your criteria and metrics across all options or variables. Inconsistent criteria can lead to biased results.
  • Sensitivity Analysis: Conduct sensitivity analysis by varying key parameters, such as weights or assumptions, to assess the robustness of your conclusions.
  • Stakeholder Involvement: Involve relevant stakeholders throughout the analysis process. Their input can provide valuable perspectives and ensure alignment with organizational goals.
  • Critical Evaluation of Assumptions: Identify and critically evaluate any assumptions made during the analysis. Assumptions should be explicit and justifiable.
  • Holistic View: Take a holistic view of the analysis by considering both short-term and long-term implications. Avoid focusing solely on immediate outcomes.
  • Documentation: Maintain thorough documentation of your analysis, including data sources, calculations, and decision criteria. Documentation supports transparency and facilitates reproducibility.
  • Continuous Learning: Stay updated with the latest analytical techniques, tools, and industry trends. Continuous learning helps you adapt your analysis to changing circumstances.
  • Peer Review: Seek peer review or expert feedback on your analysis. External perspectives can identify blind spots and enhance the quality of your work.
  • Ethical Considerations: Address ethical considerations, such as privacy and data protection, especially when dealing with sensitive or personal data.

By adhering to these best practices, you'll not only improve the rigor of your comparative analysis but also ensure that your findings are reliable, actionable, and aligned with your objectives.

Comparative Analysis Examples

To illustrate the practical application and benefits of comparative analysis, let's explore several real-world examples across different domains. These examples showcase how organizations and researchers leverage comparative analysis to make informed decisions, solve complex problems, and drive improvements:

Retail Industry - Price Competitiveness Analysis

Objective: A retail chain aims to assess its price competitiveness against competitors in the same market.

Methodology:

  • Collect pricing data for a range of products offered by the retail chain and its competitors.
  • Organize the data into a comparative framework, categorizing products by type and price range.
  • Calculate price differentials, averages, and percentiles for each product category.
  • Analyze the findings to identify areas where the retail chain's prices are higher or lower than competitors.

Outcome: The analysis reveals that the retail chain's prices are consistently lower in certain product categories but higher in others. This insight informs pricing strategies, allowing the retailer to adjust prices to remain competitive in the market.
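As a quick sketch of the methodology above, the price-differential calculation takes only a few lines of Python. The categories and prices below are invented for illustration, not real market data:

```python
from statistics import mean

# Hypothetical shelf prices (EUR) for matched products at our chain
# and at a competitor, grouped by product category.
prices = {
    "dairy":  {"ours": [1.09, 2.49, 3.99], "competitor": [1.19, 2.59, 3.89]},
    "snacks": {"ours": [2.99, 1.79, 0.99], "competitor": [2.79, 1.69, 0.89]},
}

def price_differentials(data):
    """Average price gap (ours vs. competitor) per category, in percent."""
    return {
        category: round(100 * (mean(p["ours"]) - mean(p["competitor"]))
                        / mean(p["competitor"]), 1)
        for category, p in data.items()
    }

print(price_differentials(prices))
```

A negative figure flags a category where the chain undercuts the competitor; a positive one flags a category where it is more expensive.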

Healthcare - Comparative Effectiveness Research

Objective: Researchers aim to compare the effectiveness of two different treatment methods for a specific medical condition.

Methodology:

  • Recruit patients with the medical condition and randomly assign them to two treatment groups.
  • Collect data on treatment outcomes, including symptom relief, side effects, and recovery times.
  • Analyze the data using statistical methods to compare the treatment groups.
  • Consider factors like patient demographics and baseline health status as potential confounding variables.

Outcome: The comparative analysis reveals that one treatment method is statistically more effective than the other in relieving symptoms and has fewer side effects. This information guides medical professionals in recommending the more effective treatment to patients.
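The statistical comparison in the third step is often a two-sample test. A minimal standard-library sketch, with invented symptom-relief scores, computes Welch's t statistic; a real trial would use a full statistics package and report p-values, confidence intervals, and adjustments for confounders:

```python
from statistics import mean, stdev

# Hypothetical symptom-relief scores (0-100) for two treatment arms.
treatment_a = [72, 68, 75, 80, 66, 74, 71, 78]
treatment_b = [61, 59, 70, 64, 58, 66, 62, 65]

def welch_t(x, y):
    """Welch's t statistic for two independent samples (unequal variances)."""
    var_x, var_y = stdev(x) ** 2, stdev(y) ** 2
    return (mean(x) - mean(y)) / (var_x / len(x) + var_y / len(y)) ** 0.5

t = welch_t(treatment_a, treatment_b)
print(round(t, 2))  # a large |t| suggests a real difference between group means
```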

Environmental Science - Carbon Emission Analysis

Objective: An environmental organization seeks to compare carbon emissions from various transportation modes in a metropolitan area.

Methodology:

  • Collect data on the number of vehicles, their types (e.g., cars, buses, bicycles), and fuel consumption for each mode of transportation.
  • Calculate the total carbon emissions for each mode based on fuel consumption and emission factors.
  • Create visualizations such as bar charts and pie charts to represent the emissions from each transportation mode.
  • Consider factors like travel distance, occupancy rates, and the availability of alternative fuels.

Outcome: The comparative analysis reveals that public transportation generates significantly lower carbon emissions per passenger mile compared to individual car travel. This information supports advocacy for increased public transit usage to reduce carbon footprint.
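The per-passenger-mile normalization described above can be sketched as follows. The emission factors and occupancy figures are illustrative assumptions, not measured data:

```python
# Hypothetical emission factors (kg CO2 per vehicle-mile) and average
# occupancy (passengers per vehicle) for each transport mode.
modes = {
    "car": {"kg_co2_per_vehicle_mile": 0.40, "avg_occupancy": 1.5},
    "bus": {"kg_co2_per_vehicle_mile": 2.60, "avg_occupancy": 40.0},
}

def emissions_per_passenger_mile(mode):
    """Normalize vehicle emissions by how many people the vehicle carries."""
    m = modes[mode]
    return m["kg_co2_per_vehicle_mile"] / m["avg_occupancy"]

for mode in modes:
    print(mode, round(emissions_per_passenger_mile(mode), 3))
```

Dividing by occupancy is the key step: a bus emits more per vehicle-mile than a car, but far less per passenger-mile once it carries dozens of riders.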

Technology Industry - Feature Comparison for Software Development Tools

Objective: A software development team needs to choose the most suitable development tool for an upcoming project.

Methodology:

  • Create a list of essential features and capabilities required for the project.
  • Research and compile information on available development tools in the market.
  • Develop a comparative matrix or scoring system to evaluate each tool's features against the project requirements.
  • Assign weights to features based on their importance to the project.

Outcome: The comparative analysis highlights that Tool A excels in essential features critical to the project, such as version control integration and debugging capabilities. The development team selects Tool A as the preferred choice for the project.
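A comparative matrix with weighted scores, as described in the methodology, might look like the sketch below; the feature names, weights, and scores are all hypothetical:

```python
# Hypothetical weights (summing to 1) for the project's must-have features,
# and 0-10 scores per candidate tool; every figure here is invented.
weights = {"version_control": 0.40, "debugging": 0.35, "ci_integration": 0.25}
scores = {
    "Tool A": {"version_control": 9, "debugging": 8, "ci_integration": 6},
    "Tool B": {"version_control": 6, "debugging": 7, "ci_integration": 9},
}

def weighted_score(tool):
    """Sum of feature scores, each multiplied by its importance weight."""
    return sum(weights[f] * scores[tool][f] for f in weights)

ranking = sorted(scores, key=weighted_score, reverse=True)
print([(t, round(weighted_score(t), 2)) for t in ranking])
```

Re-running the ranking with slightly perturbed weights is a cheap sensitivity check: if the winner flips under small changes, the decision is fragile.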

Educational Research - Comparative Study of Teaching Methods

Objective: A school district aims to improve student performance by comparing the effectiveness of traditional classroom teaching with online learning.

Methodology:

  • Randomly assign students to two groups: one taught using traditional methods and the other through online courses.
  • Administer pre- and post-course assessments to measure knowledge gain.
  • Collect feedback from students and teachers on the learning experiences.
  • Analyze assessment scores and feedback to compare the effectiveness and satisfaction levels of both teaching methods.

Outcome: The comparative analysis reveals that online learning leads to similar knowledge gains as traditional classroom teaching. However, students report higher satisfaction and flexibility with the online approach. The school district considers incorporating online elements into its curriculum.
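The pre/post comparison boils down to the mean knowledge gain per group. A minimal sketch with invented assessment scores:

```python
from statistics import mean

# Hypothetical pre- and post-course assessment scores for the two groups.
traditional = {"pre": [55, 60, 48, 70, 62], "post": [68, 74, 60, 82, 75]}
online = {"pre": [52, 63, 50, 68, 59], "post": [66, 76, 61, 80, 73]}

def mean_gain(group):
    """Average per-student improvement from pre-test to post-test."""
    return mean(post - pre for pre, post in zip(group["pre"], group["post"]))

print(round(mean_gain(traditional), 1), round(mean_gain(online), 1))
```

With these toy numbers the two groups gain the same amount on average, mirroring the "similar knowledge gains" finding; satisfaction data would be analyzed separately.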

These examples illustrate the diverse applications of comparative analysis across industries and research domains. Whether optimizing pricing strategies in retail, evaluating treatment effectiveness in healthcare, assessing environmental impacts, choosing the right software tool, or improving educational methods, comparative analysis empowers decision-makers with valuable insights for informed choices and positive outcomes.

Conclusion for Comparative Analysis

Comparative analysis is your compass in the world of decision-making. It helps you see the bigger picture, spot opportunities, and navigate challenges. By defining your objectives, gathering data, applying methodologies, and following best practices, you can harness the power of comparative analysis to make informed choices and drive positive outcomes.

Remember, comparative analysis is not just a tool; it's a mindset that empowers you to transform data into insights and uncertainty into clarity. So, whether you're steering a business, conducting research, or facing life's choices, embrace comparative analysis as your trusted guide on the journey to better decisions. With it, you can chart your course, make impactful choices, and set sail toward success.

How to Conduct Comparative Analysis in Minutes?

Are you ready to revolutionize your approach to market research and comparative analysis? Appinio, a real-time market research platform, empowers you to harness the power of real-time consumer insights for swift, data-driven decisions. Here's why you should choose Appinio:

  • Speedy Insights: Get from questions to insights in minutes, enabling you to conduct comparative analysis without delay.
  • User-Friendly: No need for a PhD in research – our intuitive platform is designed for everyone, making it easy to collect and analyze data.
  • Global Reach: With access to over 90 countries and the ability to define your target group from 1200+ characteristics, Appinio provides a worldwide perspective for your comparative analysis.


Social Sci LibreTexts

2.3: Case Selection (Or, How to Use Cases in Your Comparative Analysis)


  • Dino Bozonelos, Julia Wendt, Charlotte Lee, Jessica Scarffe, Masahiro Omae, Josh Franco, Byran Martin, & Stefan Veldhuis
  • Victor Valley College, Berkeley City College, Allan Hancock College, San Diego City College, Cuyamaca College, Houston Community College, and Long Beach City College via ASCCC Open Educational Resources Initiative (OERI)


Learning Objectives

By the end of this section, you will be able to:

  • Discuss the importance of case selection in case studies.
  • Consider the implications of poor case selection.

Introduction

Case selection is an important part of any research design. Deciding how many cases, and which cases, to include will help determine the outcome of our results. If we select a high number of cases, we say that we are conducting large-N research. Large-N research involves a number of observations or cases large enough that we need mathematical, usually statistical, techniques to discover and interpret any correlations or causations. For a large-N analysis to yield relevant findings, a number of conventions need to be observed. First, the sample needs to be representative of the studied population. Thus, if we wanted to understand the long-term effects of COVID, we would need to know the approximate characteristics of those who contracted the virus. Once we know the parameters of the population, we can determine a sample that represents the larger population. For example, if women make up 55% of all long-term COVID survivors, then any sample we generate needs to be approximately 55% women.

Second, some kind of randomization technique needs to be involved in large-N research. Not only must your sample be representative; the people in it must also be randomly selected. In other words, we must have a large pool of people who fit the population criteria, and then select randomly from that pool. Randomization helps reduce bias in the study: when cases (here, people with long-term COVID) are randomly chosen, the sample tends to represent the studied population more fairly. Third, your sample needs to be large enough (hence the large-N designation) for any conclusions to have external validity. Generally speaking, the more observations or cases in the sample, the more validity the study has. There is no magic number, but using the above example, our sample of long-term COVID patients should be at least 750 people, with an aim of around 1,200 to 1,500 people.
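The representativeness requirement can be operationalized as quota targets per stratum. In the minimal sketch below, only the 55% share of women comes from the example above; the total sample size and the remaining share are assumed for illustration:

```python
# Population shares per stratum; only the 55% share of women comes from
# the example in the text, the rest is assumed for illustration.
population_shares = {"women": 0.55, "men": 0.45}

def stratum_quotas(shares, sample_size):
    """How many respondents to recruit per stratum for a quota sample."""
    return {group: round(share * sample_size) for group, share in shares.items()}

print(stratum_quotas(population_shares, 1200))
```

In practice, respondents would also be selected randomly within each stratum, in line with the randomization requirement.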

When it comes to comparative politics, we rarely reach the numbers typical of large-N research. There are about 200 fully recognized countries, about a dozen partially recognized countries, and even fewer areas or regions of study, such as Europe or Latin America. Given this, what is the strategy when one case, or only a few cases, are being studied? What happens if we only want to know the COVID-19 response in the United States, and not the rest of the world? How do we randomize this to ensure our results are unbiased and representative? These and other questions are legitimate issues that many comparativist scholars face when conducting research. Does randomization work with case studies? Gerring suggests that it does not, as "any given sample may be widely unrepresentative" (pg. 87). Thus, random sampling is not a reliable approach when it comes to case studies. And even if the randomized sample happens to be representative, there is no guarantee that the gathered evidence will be reliable.

One can argue that case selection is not as important in large-N studies as it is in small-N studies. In large-N research, potential errors and biases may be ameliorated, especially if the sample is large enough. That is not guaranteed: errors and biases can certainly exist in large-N research. However, incorrect or biased inferences are less of a worry when we have 1,500 cases than when we have 15. In small-N research, case selection simply matters much more.

This is why Blatter and Haverland (2012) write that "case studies are 'case-centered', whereas large-N studies are 'variable-centered'". In large-N studies we are more concerned with the conceptualization and operationalization of variables. Thus, we want to focus on which data to include in the analysis of long-term COVID patients. If we wanted to survey them, we would want to construct the questions in appropriate ways. For almost all survey-based large-N research, the question responses themselves become the coded variables used in the statistical analysis.

Case selection in comparative politics can be driven by a number of factors, the first two below being the more traditional. First, it can derive from the interests of the researcher(s). For example, a researcher who lives in Germany may want to study the spread of COVID-19 within the country, possibly using a subnational approach that compares infection rates among German states. Second, case selection may be driven by area studies. This is still rooted in the researcher's interests, as scholars generally pick areas of study out of personal interest. For example, the same researcher might compare COVID-19 infection rates among European Union member states. Finally, case selection may be driven by the type of case study being utilized. In this approach, cases are selected because they allow researchers to compare their similarities or their differences; alternatively, a case might be selected because it is typical of most cases or, in contrast, because it deviates from the norm. We discuss types of case studies and their impact on case selection below.

Types of Case Studies: Descriptive vs. Causal

There are a number of different ways to categorize case studies. One of the most recent comes from John Gerring, who has written two editions of a book on case study research (most recently 2017) and posits that the central question posed by the researcher dictates the aim of the case study. Is the study meant to be descriptive? If so, what is the researcher looking to describe? How many cases (countries, incidents, events) are there? Or is the study meant to be causal, where the researcher is looking for a cause and effect? On this basis, Gerring categorizes case studies into two types: descriptive and causal.

Descriptive case studies are “not organized around a central, overarching causal hypothesis or theory” (pg. 56). Most case studies are descriptive in nature: the researchers simply seek to describe what they observe. They are useful for transmitting information about the studied political phenomenon. For a descriptive case study, a scholar might choose a case considered typical of the population. An example could involve researching the effects of the pandemic on a medium-sized city in the US. The chosen city would have to exhibit the tendencies of medium-sized cities throughout the entire country. First, we would have to conceptualize what we mean by ‘a medium-sized city’. Second, we would have to establish the characteristics of medium-sized US cities so that our case selection is appropriate. Alternatively, cases could be chosen for their diversity. In keeping with our example, maybe we want to look at the effects of the pandemic on a range of US cities, from small rural towns to medium-sized suburban cities to large urban areas.

Causal case studies are “organized around a central hypothesis about how X affects Y” (pg. 63). In causal case studies, the context around a specific political phenomenon is important, as it allows researchers to identify the aspects that set up the conditions, the mechanisms, for that outcome to occur. Scholars refer to this as the causal mechanism, which Falleti & Lynch (2009) define as “portable concepts that explain how and why a hypothesized cause, in a given context, contributes to a particular outcome”. Remember, causality is when a change in one variable verifiably causes an effect or change in another variable. For causal case studies that employ causal mechanisms, Gerring distinguishes exploratory, estimating, and diagnostic case selection. The differences revolve around how the central hypothesis is utilized in the study.

Exploratory case studies are used to identify a potential causal hypothesis. Researchers will single out the independent variables that seem to affect the outcome, or dependent variable, the most. The goal is to build up to what the causal mechanism might be by providing the context. This is also referred to as hypothesis generating as opposed to hypothesis testing. Case selection can vary widely depending on the goal of the researcher. For example, if the scholar is looking to develop an ‘ideal-type’, they might seek out an extreme case. An ideal-type is defined as a “conception or a standard of something in its highest perfection” (New Webster Dictionary). Thus, if we want to understand the ideal-type capitalist system, we want to investigate a country that practices a pure or ‘extreme’ form of the economic system.

Estimating case studies start with a hypothesis already in place. The goal is to test the hypothesis through collected data and evidence. Researchers seek to estimate the ‘causal effect’: determining whether the relationship between the independent and dependent variables is positive, negative, or whether no relationship exists at all. Finally, diagnostic case studies are important as they help to “confirm, disconfirm, or refine a hypothesis” (Gerring 2017). Case selection can also vary in diagnostic case studies. For example, scholars can choose a least-likely case, one where the hypothesis is confirmed even though the context would suggest otherwise. A good example is Indian democracy, which has existed for over 70 years. India has a high level of ethnolinguistic diversity, is relatively underdeveloped economically, and has a low level of modernization across large swaths of the country. All of these factors strongly suggest that India should not have democratized, should have failed to remain a democracy in the long term, or should have disintegrated as a country.

Most Similar/Most Different Systems Approach

The discussion in the previous subsection tends to focus on case selection when it comes to a single case. Single case studies are valuable as they provide an opportunity for in-depth research on a topic that requires it. However, in comparative politics, our approach is to compare. Given this, we are required to select more than one case. This presents a different set of challenges. First, how many cases do we pick? This is a tricky question we addressed earlier. Second, how do we apply the previously mentioned case selection techniques, descriptive vs. causal? Do we pick two extreme cases if we used an exploratory approach, or two least-likely cases if choosing a diagnostic case approach?

Thankfully, an English scholar by the name of John Stuart Mill provided some insight on how we should proceed. He developed several approaches to comparison with the explicit goal of isolating a cause within a complex environment. Two of these methods, the 'method of agreement' and the 'method of difference' have influenced comparative politics. In the 'method of agreement' two or more cases are compared for their commonalities. The scholar looks to isolate the characteristic, or variable, they have in common, which is then established as the cause for their similarities. In the 'method of difference' two or more cases are compared for their differences. The scholar looks to isolate the characteristic, or variable, they do not have in common, which is then identified as the cause for their differences. From these two methods, comparativists have developed two approaches.

[Image: Book cover of John Stuart Mill's A System of Logic, Ratiocinative and Inductive, 1843]

What Is the Most Similar Systems Design (MSSD)?

This approach is derived from Mill’s ‘method of difference’. In a Most Similar Systems Design, the cases selected for comparison are similar to each other, but their outcomes differ. In this approach we are interested in keeping as many variables as possible the same across the selected cases, which in comparative politics often means countries. Remember, the independent variable is the factor that does not depend on changes in other variables; it is the potential ‘cause’ in the cause-and-effect model. The dependent variable is the variable affected by, or dependent on, the presence of the independent variable; it is the ‘effect’. In a most similar systems approach, the shared background characteristics are held constant, so that the variable that does differ between the cases can be isolated as the likely cause of the differing outcomes.

A good example involves the lack of a national healthcare system in the US. Other countries, such as New Zealand, Australia, Ireland, UK and Canada, all have robust, publicly accessible national health systems. However, the US does not. These countries all have similar systems: English heritage and language use, liberal market economies, strong democratic institutions, and high levels of wealth and education. Yet, despite these similarities, the end results vary. The US does not look like its peer countries. In other words, why do we have similar systems producing different outcomes?

What Is the Most Different Systems Design (MDSD)?

This approach is derived from Mill’s ‘method of agreement’. In a Most Different Systems Design, the cases selected are different from each other, but result in the same outcome. In this approach, we are interested in selecting cases that are quite different from one another yet arrive at the same outcome; thus, the dependent variable is the same. Different independent variables exist between the cases, such as a democratic vs. an authoritarian regime, or a liberal vs. a non-liberal market economy. Other variables could include societal homogeneity (uniformity) vs. societal heterogeneity (diversity), where a country may find itself unified ethnically, religiously, or racially, or fragmented along those same lines.

A good example involves the countries classified as economically liberal. The Heritage Foundation lists countries such as Singapore, Taiwan, Estonia, Australia, New Zealand, Switzerland, Chile and Malaysia as either free or mostly free. These countries differ greatly from one another. Singapore and Malaysia are considered flawed or illiberal democracies (see chapter 5 for more discussion), whereas Estonia is still classified as a developing country. Australia and New Zealand are wealthy; Malaysia is not. Chile and Taiwan became economically free under authoritarian military regimes, which was not the case for Switzerland. In other words, why do we have different systems producing the same outcome?


Comparative Case Study


A comparative case study (CCS) is defined as ‘the systematic comparison of two or more data points (“cases”) obtained through use of the case study method’ (Kaarbo and Beasley 1999, p. 372). A case may be a participant, an intervention site, a programme or a policy. Case studies have a long history in the social sciences, yet for a long time, they were treated with scepticism (Harrison et al. 2017). The advent of grounded theory in the 1960s led to a revival in the use of case-based approaches. From the early 1980s, the increase in case study research in the field of political sciences led to the integration of formal, statistical and narrative methods, as well as the use of empirical case selection and causal inference (George and Bennett 2005), which contributed to its methodological advancement. Now, as Harrison and colleagues (2017) note, CCS:

“Has grown in sophistication and is viewed as a valid form of inquiry to explore a broad scope of complex issues, particularly when human behavior and social interactions are central to understanding topics of interest.”

It is claimed that CCS can be applied to detect causal attribution and contribution when the use of a comparison or control group is not feasible (or not preferred). Comparing cases enables evaluators to tackle causal inference through assessing regularity (patterns) and/or by excluding other plausible explanations. In practical terms, CCS involves proposing, analysing and synthesising patterns (similarities and differences) across cases that share common objectives.

What is involved?

Goodrick (2014) outlines the steps to be taken in undertaking CCS.

Key evaluation questions and the purpose of the evaluation: The evaluator should explicitly articulate the adequacy and purpose of using CCS (guided by the evaluation questions) and define the primary interests. Formulating key evaluation questions allows the selection of appropriate cases to be used in the analysis.

Propositions based on the Theory of Change: Theories and hypotheses that are to be explored should be derived from the Theory of Change (or, alternatively, from previous research around the initiative, existing policy or programme documentation).

Case selection: Advocates of CCS draw an important distinction between case-oriented small-n studies and (typically large-n) statistical, variable-focused approaches in how cases are selected: in case-based methods, selection is iterative and cannot rely on convenience or accessibility. 'Initial' cases should be identified in advance, but case selection may continue as evidence is gathered. Various case-selection criteria can be applied depending on the analytic purpose (Vogt et al., 2011). These may include:

  • Very similar cases
  • Very different cases
  • Typical or representative cases
  • Extreme or unusual cases
  • Deviant or unexpected cases
  • Influential or emblematic cases
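When quantitative screening data exist for candidate cases, some of these criteria can be operationalized as distance from the average. A minimal sketch, with hypothetical case names and scores:

```python
import statistics

# Hypothetical outcome scores for five candidate cases.
scores = {"C1": 52, "C2": 49, "C3": 95, "C4": 50, "C5": 12}

mean = statistics.mean(scores.values())
sd = statistics.stdev(scores.values())

# Standardized distance of each case from the average.
z = {name: (value - mean) / sd for name, value in scores.items()}

# A "typical" case sits closest to the mean; an "extreme" case farthest.
typical = min(z, key=lambda name: abs(z[name]))
extreme = max(z, key=lambda name: abs(z[name]))

print("typical case:", typical, "extreme case:", extreme)
```

Selection on the other criteria (deviant, influential, emblematic) usually requires theory or prior findings rather than a formula, which is one reason the text stresses that case selection is iterative.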

Identify how evidence will be collected, analysed and synthesised: CCS often applies mixed methods.

Test alternative explanations for outcomes: Following the identification of patterns and relationships, the evaluator may wish to test the established propositions in a follow-up exploratory phase. Approaches applied here may involve triangulation, selecting contradicting cases or using an analytical approach such as Qualitative Comparative Analysis (QCA).
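QCA, mentioned above, works from a truth table: each configuration of conditions is checked for whether it consistently leads to the same outcome. A minimal crisp-set sketch with invented binary conditions A and B and outcome Y:

```python
from collections import defaultdict

# Invented crisp-set data: each case scores 0/1 on conditions A and B
# and on the outcome Y.
cases = [
    {"A": 1, "B": 1, "Y": 1},
    {"A": 1, "B": 0, "Y": 1},
    {"A": 0, "B": 1, "Y": 0},
    {"A": 1, "B": 1, "Y": 1},
    {"A": 0, "B": 0, "Y": 0},
]

# Truth table: configuration of conditions -> set of observed outcomes.
table = defaultdict(set)
for case in cases:
    table[(case["A"], case["B"])].add(case["Y"])

# A row is consistent if all cases sharing a configuration agree on Y;
# contradictory rows signal that a relevant condition may be missing.
for config in sorted(table):
    status = "consistent" if len(table[config]) == 1 else "contradictory"
    print("A,B =", config, "->", table[config], status)
```

In this toy table Y is present exactly when A = 1, so Boolean minimization would reduce the solution to the single condition A; dedicated QCA software (e.g., QCA packages for R) also handles fuzzy sets and consistency thresholds.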

Useful resources

A webinar shared by Better Evaluation with an overview of using CCS for evaluation.

A short overview describing how to apply CCS for evaluation:

Goodrick, D. (2014). Comparative Case Studies, Methodological Briefs: Impact Evaluation 9, UNICEF Office of Research, Florence.

An extensively used book that provides a comprehensive critical examination of case-based methods:

Byrne, D. and Ragin, C. C. (2009). The Sage handbook of case-based methods. Sage Publications.

What Is a Case Study? | Definition, Examples & Methods

Published on May 8, 2019 by Shona McCombes . Revised on November 20, 2023.

A case study is a detailed study of a specific subject, such as a person, group, place, event, organization, or phenomenon. Case studies are commonly used in social, educational, clinical, and business research.

A case study research design usually involves qualitative methods, but quantitative methods are sometimes also used. Case studies are good for describing, comparing, evaluating and understanding different aspects of a research problem.

When to do a case study

A case study is an appropriate research design when you want to gain concrete, contextual, in-depth knowledge about a specific real-world subject. It allows you to explore the key characteristics, meanings, and implications of the case.

Case studies are often a good choice in a thesis or dissertation. They keep your project focused and manageable when you don’t have the time or resources to do large-scale research.

You might use just one complex case study where you explore a single subject in depth, or conduct multiple case studies to compare and illuminate different aspects of your research problem.

Case study examples

  • Research question: What are the ecological effects of wolf reintroduction? Case study: Wolf reintroduction in Yellowstone National Park
  • Research question: How do populist politicians use narratives about history to gain support? Case studies: Hungarian prime minister Viktor Orbán and US president Donald Trump
  • Research question: How can teachers implement active learning strategies in mixed-level classrooms? Case study: A local school that promotes active learning
  • Research question: What are the main advantages and disadvantages of wind farms for rural communities? Case studies: Three rural wind farm development projects in different parts of the country
  • Research question: How are viral marketing strategies changing the relationship between companies and consumers? Case study: The iPhone X marketing campaign
  • Research question: How do experiences of work in the gig economy differ by gender, race and age? Case studies: Deliveroo and Uber drivers in London


Step 1: Select a case

Once you have developed your problem statement and research questions, you should be ready to choose the specific case that you want to focus on. A good case study should have the potential to:

  • Provide new or unexpected insights into the subject
  • Challenge or complicate existing assumptions and theories
  • Propose practical courses of action to resolve a problem
  • Open up new directions for future research

Tip: If your research is more practical in nature and aims to simultaneously investigate an issue as you solve it, consider conducting action research instead.

Unlike quantitative or experimental research, a strong case study does not require a random or representative sample. In fact, case studies often deliberately focus on unusual, neglected, or outlying cases which may shed new light on the research problem.

Example of an outlying case study: In the 1960s the town of Roseto, Pennsylvania was discovered to have extremely low rates of heart disease compared to the US average. It became an important case study for understanding previously neglected causes of heart disease.

However, you can also choose a more common or representative case to exemplify a particular category, experience or phenomenon.

Example of a representative case study: In the 1920s, two sociologists used Muncie, Indiana as a case study of a typical American city that supposedly exemplified the changing culture of the US at the time.

Step 2: Build a theoretical framework

While case studies focus more on concrete details than general theories, they should usually have some connection with theory in the field. This way the case study is not just an isolated description, but is integrated into existing knowledge about the topic. It might aim to:

  • Exemplify a theory by showing how it explains the case under investigation
  • Expand on a theory by uncovering new concepts and ideas that need to be incorporated
  • Challenge a theory by exploring an outlier case that doesn’t fit with established assumptions

To ensure that your analysis of the case has a solid academic grounding, you should conduct a literature review of sources related to the topic and develop a theoretical framework. This means identifying key concepts and theories to guide your analysis and interpretation.

Step 3: Collect your data

There are many different research methods you can use to collect data on your subject. Case studies tend to focus on qualitative data using methods such as interviews, observations, and analysis of primary and secondary sources (e.g., newspaper articles, photographs, official records). Sometimes a case study will also collect quantitative data.

Example of a mixed methods case study: For a case study of a wind farm development in a rural area, you could collect quantitative data on employment rates and business revenue, collect qualitative data on local people’s perceptions and experiences, and analyze local and national media coverage of the development.

The aim is to gain as thorough an understanding as possible of the case and its context.

Step 4: Describe and analyze the case

In writing up the case study, you need to bring together all the relevant aspects to give as complete a picture as possible of the subject.

How you report your findings depends on the type of research you are doing. Some case studies are structured like a standard scientific paper or thesis, with separate sections or chapters for the methods, results and discussion.

Others are written in a more narrative style, aiming to explore the case from various angles and analyze its meanings and implications (for example, by using textual analysis or discourse analysis).

In all cases, though, make sure to give contextual details about the case, connect it back to the literature and theory, and discuss how it fits into wider patterns or debates.


Comparative Case Study Research

  • Lesley Bartlett, University of Wisconsin–Madison, and Frances Vavrus, University of Minnesota
  • https://doi.org/10.1093/acrefore/9780190264093.013.343
  • Published online: 26 March 2019

Case studies in the field of education often eschew comparison. However, when scholars forego comparison, they are missing an important opportunity to bolster case studies’ theoretical generalizability. Scholars must examine how disparate epistemologies lead to distinct kinds of qualitative research and different notions of comparison. Expanded notions of comparison include not only the usual logic of contrast or juxtaposition but also a logic of tracing, in order to embrace approaches to comparison that are coherent with critical, constructivist, and interpretive qualitative traditions. Finally, comparative case study researchers consider three axes of comparison: the vertical, which pays attention across levels or scales, from the local through the regional, state, federal, and global; the horizontal, which examines how similar phenomena or policies unfold in distinct locations that are socially produced; and the transversal, which compares over time.

  • comparative case studies
  • case study research
  • comparative case study approach
  • epistemology


Comparative Case Studies

  • L. Bartlett
  • Published 2017

15 - COMPARATIVE RESEARCH METHODS

Published online by Cambridge University Press:  05 June 2012

INTRODUCTION

In contrast to the chapters on survey research, experimentation, or content analysis that described a distinct set of skills, in this chapter, a variety of comparative research techniques are discussed. What makes a study comparative is not the particular techniques employed but the theoretical orientation and the sources of data. All the tools of the social scientist, including historical analysis, fieldwork, surveys, and aggregate data analysis, can be used to achieve the goals of comparative research. So, there is plenty of room for the research imagination in the choice of data collection strategies. There is a wide divide between quantitative and qualitative approaches in comparative work. Most studies are either exclusively qualitative (e.g., individual case studies of a small number of countries) or exclusively quantitative, most often using many cases and a cross-national focus (Ragin, 1991:7). Ideally, increasing numbers of studies in the future will use both traditions, as the skills, tools, and quality of data in comparative research continue to improve.

In almost all social research, we look at how social processes vary and are experienced in different settings to develop our knowledge of the causes and effects of human behavior. This holds true if we are trying to explain the behavior of nations or individuals. So, it may then seem redundant to include a chapter in this book specifically dedicated to comparative research methods when all the other methods discussed are ultimately comparative.

  • COMPARATIVE RESEARCH METHODS
  • Paul S. Gray, Boston College, Massachusetts; John B. Williamson, Boston College, Massachusetts; David A. Karp, Boston College, Massachusetts; John R. Dalphin
  • Book: The Research Imagination
  • Online publication: 05 June 2012
  • Chapter DOI: https://doi.org/10.1017/CBO9780511819391.016



Writing a Comparative Case Study: Effective Guide

As a researcher or student, you may be required to write a comparative case study at some point in your academic journey. A comparative study is an analysis of two or more cases, where the aim is to compare and contrast them based on specific criteria. We created this guide to help you understand how to write a comparative case study. This article will discuss what a comparative study is, the elements of a comparative study, and how to write an effective one. We also include samples to help you get started.

What is a Comparative Case Study?

A comparative study is a research method that involves comparing two or more cases to analyze their similarities and differences. These cases can be individuals, organizations, events, or any other unit of analysis. A comparative study aims to gain a deeper understanding of the subject matter by exploring the differences and similarities between the cases.

Elements of a Comparative Study

Before diving into the writing process, it’s essential to understand the key elements that make up a comparative study. These elements include:

  • Research Question: This is the central question the study seeks to answer. It should be specific and clear, and it should form the basis of the comparison.
  • Cases : The cases being compared should be chosen based on their significance to the research question. They should also be similar in some ways to facilitate comparison.
  • Data Collection : Data collection should be comprehensive and systematic. Data collected can be qualitative, quantitative, or both.
  • Analysis : The analysis should be based on the research question and collected data. The data should be analyzed for similarities and differences between the cases.
  • Conclusion : The conclusion should summarize the findings and answer the research question. It should also provide recommendations for future research.

How to Write a Comparative Study

Now that we have established the elements of a comparative study, let’s dive into the writing process. Here is a detailed approach on how to write a comparative study:

Choose a Topic

The first step in writing a comparative study is to choose a topic relevant to your field of study. It should be a topic that you are familiar with and interested in.

Define the Research Question

Once you have chosen a topic, define your research question. The research question should be specific and clear.

Choose Cases

The next step is to choose the cases you will compare. The cases should be relevant to your research question and have similarities to facilitate comparison.

Collect Data

Collect data on each case using qualitative, quantitative, or both methods. The data collected should be comprehensive and systematic.

Analyze Data

Analyze the data collected for each case. Look for similarities and differences between the cases. The analysis should be based on the research question.

Write the Introduction

The introduction should provide background information on the topic and state the research question.

Write the Literature Review

The literature review should give a summary of the research that has been conducted on the topic.

Write the Methodology

The methodology should describe the data collection and analysis methods used.

Present Findings

Present the findings of the analysis. The results should be organized based on the research question.

Conclusion and Recommendations

Summarize the findings and answer the research question. Provide recommendations for future research.

Sample of Comparative Case Study

To provide a better understanding of how to write a comparative study, here is an example:

Comparative Study of Two Leading Airlines: ABC and XYZ

Introduction

The airline industry is highly competitive, with companies constantly seeking new ways to improve customer experiences and increase profits. ABC and XYZ are two of the world’s leading airlines, each with a distinct approach to business. This comparative case study will examine the similarities and differences between the two airlines and provide insights into what works well in the airline industry.

Research Questions

What are the similarities and differences between ABC and XYZ regarding their approach to business, customer experience, and profitability?

Data Collection and Analysis

To collect data for this comparative study, we will use a combination of primary and secondary sources. Primary sources will include interviews with customers and employees of both airlines, while secondary sources will include financial reports, marketing materials, and industry research. After collecting the data, we will use a systematic and comprehensive approach to data analysis. We will use a framework to compare and contrast the data, looking for similarities and differences between the two airlines. We will then organize the data into categories: customer experience, revenue streams, and operational efficiency.
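The framework described above, coding findings into categories for each case, can be sketched as a small comparison table. The entries below are placeholders summarizing the hypothetical airline example, not real data:

```python
# Hypothetical coded findings, organized by analysis category.
findings = {
    "customer experience": {"ABC": "luxury focus", "XYZ": "reliability focus"},
    "revenue streams": {"ABC": "mostly international", "XYZ": "domestic + international"},
    "management structure": {"ABC": "centralized", "XYZ": "decentralized"},
}

# Render a simple side-by-side view of the two cases.
print(f"{'category':<22}{'ABC':<22}{'XYZ'}")
for category, by_airline in findings.items():
    print(f"{category:<22}{by_airline['ABC']:<22}{by_airline['XYZ']}")
```

Laying the coded data out this way makes the similarities and differences between cases immediately visible before writing them up.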

After analyzing the data, we found several similarities and differences between ABC and XYZ.

Similarities: Both airlines offer a high level of customer service, with attentive flight attendants, comfortable seating, and in-flight entertainment. They also strongly focus on safety, with rigorous training and maintenance protocols in place.

Differences: ABC has a reputation for luxury, with features such as private suites and shower spas in first class. On the other hand, XYZ has a reputation for reliability and efficiency, with a strong emphasis on on-time departures and arrivals. In terms of revenue streams, ABC derives a significant portion of its revenue from international travel, while XYZ has a more diverse revenue stream, focusing on both domestic and international travel. ABC also has a more centralized management structure, with decision-making authority concentrated at the top, whereas XYZ has a more decentralized structure, with decision-making authority distributed throughout the organization.

This comparative case study provides valuable insights into the airline industry and the approaches taken by two leading airlines, ABC and XYZ. By comparing and contrasting the two airlines, we can see the strengths and weaknesses of each approach and identify potential strategies for improving the airline industry as a whole. Ultimately, this study shows that there is no one-size-fits-all approach to doing business in the airline industry, and that success depends on a combination of factors, including customer experience, operational efficiency, and revenue streams.

Wrapping Up

A comparative study is an effective research method for analyzing similarities and differences between cases. Writing one can be daunting, but with proper planning and organization it becomes manageable. Define your research question, choose relevant cases, collect and analyze comprehensive data, and present the findings. The steps detailed in this blog post will help you create a compelling comparative study that provides valuable insights into your research topic. Remember to stay focused on your research question and use the data collected to provide a clear and concise analysis of the cases being compared.

Abir Ghenaiet

Abir is a data analyst and researcher. Among her interests are artificial intelligence, machine learning, and natural language processing. As a humanitarian and educator, she actively supports women in tech and promotes diversity.


NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.


Chapter 10: Methods for Comparative Studies

Francis Lau and Anne Holbrook.

10.1. Introduction

In eHealth evaluation, comparative studies aim to find out whether group differences in eHealth system adoption make a difference in important outcomes. These groups may differ in their composition, the type of system in use, and the setting where they work over a given time duration. The comparisons are to determine whether significant differences exist for some predefined measures between these groups, while controlling for as many of the conditions as possible such as the composition, system, setting and duration.

According to the typology by Friedman and Wyatt (2006), comparative studies take on an objective view where events such as the use and effect of an eHealth system can be defined, measured and compared through a set of variables to prove or disprove a hypothesis. For comparative studies, the design options are experimental versus observational and prospective versus retrospective. The quality of eHealth comparative studies depends on such aspects of methodological design as the choice of variables, sample size, sources of bias, confounders, and adherence to quality and reporting guidelines.

In this chapter we focus on experimental studies as one type of comparative study and their methodological considerations that have been reported in the eHealth literature. Also included are three case examples to show how these studies are done.

10.2. Types of Comparative Studies

Experimental studies are one type of comparative study where a sample of participants is identified and assigned to different conditions for a given time duration, then compared for differences. An example is a hospital with two care units where one is assigned a CPOE (computerized provider order entry) system to process medication orders electronically while the other continues its usual practice without a CPOE system. The participants in the unit assigned to the CPOE system are called the intervention group and those assigned to usual practice are the control group. The comparison can be performance or outcome focused, such as the ratio of correct orders processed or the occurrence of adverse drug events in the two groups during the given time period. Experimental studies can take on a randomized or non-randomized design. These are described below.
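For a performance-focused comparison like the medication-order example, the two groups' error proportions can be compared statistically. A sketch with illustrative counts, using a pooled two-proportion z-test (one common choice; the chapter itself does not prescribe a specific test):

```python
import math

# Illustrative counts: erroneous orders out of total orders per unit.
errors_cpoe, orders_cpoe = 12, 400    # intervention unit (CPOE)
errors_usual, orders_usual = 30, 380  # control unit (usual practice)

p1 = errors_cpoe / orders_cpoe
p2 = errors_usual / orders_usual

# Pooled two-proportion z-test for the difference in error rates.
p_pool = (errors_cpoe + errors_usual) / (orders_cpoe + orders_usual)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / orders_cpoe + 1 / orders_usual))
z = (p1 - p2) / se

print(f"error rate CPOE: {p1:.3f}, usual: {p2:.3f}, z = {z:.2f}")
```

A |z| above roughly 1.96 corresponds to p < 0.05 in a two-sided test; note that when whole units rather than individual orders are allocated, a plain z-test understates the uncertainty.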

10.2.1. Randomized Experiments

In a randomized design, the participants are randomly assigned to two or more groups using a known randomization technique such as a random number table. The design is prospective in nature since the groups are assigned concurrently, after which the intervention is applied then measured and compared. Three types of experimental designs seen in eHealth evaluation are described below (Friedman & Wyatt, 2006; Zwarenstein & Treweek, 2009).

Randomized controlled trials (RCTs) – In RCTs participants are randomly assigned to an intervention or a control group. The randomization can occur at the patient, provider or organization level, which is known as the unit of allocation. For instance, at the patient level one can randomly assign half of the patients to receive EMR reminders while the other half do not. At the provider level, one can assign half of the providers to receive the reminders while the other half continues with their usual practice. At the organization level, such as a multisite hospital, one can randomly assign EMR reminders to some of the sites but not others.

Cluster randomized controlled trials (CRCTs) – In CRCTs, clusters of participants are randomized rather than individual participants, since the participants are found in naturally occurring groups such as living in the same communities. For instance, clinics in one city may be randomized as a cluster to receive EMR reminders while clinics in another city continue their usual practice.

Pragmatic trials – Unlike RCTs that seek to find out if an intervention such as a CPOE system works under ideal conditions, pragmatic trials are designed to find out if the intervention works under usual conditions. The goal is to make the design and findings relevant to and practical for decision-makers to apply in usual settings. As such, pragmatic trials have few criteria for selecting study participants, flexibility in implementing the intervention, usual practice as the comparator, the same compliance and follow-up intensity as usual practice, and outcomes that are relevant to decision-makers.
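The unit-of-allocation distinction can be made concrete with a short sketch (all identifiers are invented): patient-level randomization assigns each patient independently, while cluster randomization assigns whole clinics, so everyone in a clinic shares its arm.

```python
import random

random.seed(7)  # fixed seed so the illustrative assignment is reproducible
ARMS = ["intervention", "control"]

# Patient-level RCT: each patient is randomized independently.
patients = [f"patient_{i}" for i in range(6)]
patient_arm = {p: random.choice(ARMS) for p in patients}

# Cluster RCT: clinics are the unit of allocation; every patient in a
# clinic inherits that clinic's arm.
clinics = {"clinic_north": ["p1", "p2", "p3"],
           "clinic_south": ["p4", "p5", "p6"]}
clinic_arm = {c: random.choice(ARMS) for c in clinics}
cluster_assignment = {p: clinic_arm[c] for c, members in clinics.items()
                      for p in members}

print(patient_arm)
print(cluster_assignment)
```

Real trials would use a prespecified allocation scheme (e.g., blocked or stratified randomization) rather than independent coin flips, but the unit-of-allocation contrast is the same.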

10.2.2. Non-randomized Experiments

A non-randomized design is used when it is not feasible or ethical to randomize participants into groups for comparison. It is sometimes referred to as a quasi-experimental design. The design can involve the use of prospective or retrospective data from the same or different participants as the control group. Three types of non-randomized designs are described below (Harris et al., 2006).

  • Intervention group only with pretest and post-test design – This design involves only one group, where a pretest or baseline measure is taken as the control period, the intervention is implemented, and a post-test measure is taken as the intervention period for comparison. For example, one can compare the rates of medication errors before and after the implementation of a CPOE system in a hospital. To increase study quality, one can add a second pretest period to decrease the probability that the pretest and post-test difference is due to chance, such as an unusually low medication error rate in the first pretest period. Other ways to increase study quality include adding an unrelated outcome such as patient case-mix that should not be affected, removing the intervention to see if the difference remains, and removing then re-implementing the intervention to see if the differences vary accordingly.
  • Intervention and control groups with post-test design – This design involves two groups, where the intervention is implemented in one group and compared with a second group without the intervention, based on a post-test measure from both groups. For example, one can implement a CPOE system in one care unit as the intervention group with a second unit as the control group and compare the post-test medication error rates in both units over six months. To increase study quality, one can add one or more pretest periods to both groups, or implement the intervention in the control group at a later time to measure for similar but delayed effects.
  • Interrupted time series (ITS) design – In an ITS design, multiple measures are taken from one group at equal time intervals, interrupted by the implementation of the intervention. The multiple pretest and post-test measures decrease the probability that the differences detected are due to chance or unrelated effects. An example is to take six consecutive monthly medication error rates as the pretest measures, implement the CPOE system, then take another six consecutive monthly medication error rates as the post-test measures for comparison of error rate differences over 12 months. To increase study quality, one may add a concurrent control group for comparison to be more convinced that the intervention produced the change.
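The ITS comparison described above is commonly analyzed with segmented regression, estimating the level change at the implementation point. The sketch below uses made-up monthly error rates; the model and figures are illustrative assumptions, not data from the text.

```python
import numpy as np

# 6 pretest months, CPOE implemented, then 6 post-test months.
months = np.arange(12)
post = (months >= 6).astype(float)         # 1 after implementation
rates = 10.0 - 0.2 * months - 3.0 * post   # synthetic: trend plus a level drop

# Segmented regression: rate = b0 + b1*month + b2*post
X = np.column_stack([np.ones(12), months, post])
b0, b1, b2 = np.linalg.lstsq(X, rates, rcond=None)[0]
print(round(b2, 1))  # estimated level change at implementation: -3.0
```

Separating the underlying time trend (b1) from the level change (b2) is what distinguishes an ITS analysis from a simple before-after comparison of means.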

10.3. Methodological Considerations

The quality of comparative studies is dependent on their internal and external validity. Internal validity refers to the extent to which conclusions can be drawn correctly from the study setting, participants, intervention, measures, analysis and interpretations. External validity refers to the extent to which the conclusions can be generalized to other settings. The major factors that influence validity are described below.

10.3.1. Choice of Variables

Variables are specific measurable features that can influence validity. In comparative studies, the choice of dependent and independent variables, and whether they are categorical and/or continuous in value, can affect the type of questions, study design and analysis to be considered. These are described below (Friedman & Wyatt, 2006).

  • Dependent variables – These are the outcomes of interest; they are also known as outcome variables. An example is the rate of medication errors as an outcome in determining whether CPOE can improve patient safety.
  • Independent variables – These are variables that can explain the measured values of the dependent variables. For instance, the characteristics of the setting, participants and intervention can influence the effects of CPOE.
  • Categorical variables – These are variables with measured values in discrete categories or levels. Examples are the type of provider (e.g., nurses, physicians and pharmacists), the presence or absence of a disease, and a pain scale (e.g., 0 to 10 in increments of 1). Categorical variables are analyzed using non-parametric methods such as chi-square and odds ratios.
  • Continuous variables – These are variables that can take on infinite values within an interval limited only by the desired precision. Examples are blood pressure, heart rate and body temperature. Continuous variables are analyzed using parametric methods such as the t-test, analysis of variance or multiple regression.
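The distinction drives the choice of statistical test. A minimal sketch with hypothetical data, assuming SciPy is available:

```python
from scipy import stats

# Continuous outcome (e.g., systolic blood pressure) -> parametric t-test.
bp_intervention = [128, 132, 125, 130, 127, 129]
bp_control = [140, 138, 142, 136, 139, 141]
t_stat, p_cont = stats.ttest_ind(bp_intervention, bp_control)

# Categorical outcome (e.g., medication error yes/no) -> chi-square test.
#         error  no error
table = [[12, 88],   # intervention
         [25, 75]]   # control
chi2, p_cat, dof, expected = stats.chi2_contingency(table)

print(p_cont < 0.05, p_cat < 0.05)  # both differences are significant here
```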

10.3.2. Sample Size

Sample size is the number of participants to include in a study. It can refer to patients, providers or organizations depending on how the unit of allocation is defined. There are four parts to calculating sample size, described below (Noordzij et al., 2010).

  • Significance level – This refers to the probability that a positive finding is due to chance alone. It is usually set at 0.05, which means having a less than 5% chance of drawing a false positive conclusion.
  • Power – This refers to the ability to detect the true effect based on a sample from the population. It is usually set at 0.8, which means having at least an 80% chance of drawing a correct conclusion.
  • Effect size – This refers to the minimal clinically relevant difference that can be detected between comparison groups. For continuous variables, the effect is a numerical value, such as a 10-kilogram weight difference between two groups. For categorical variables, it is a percentage, such as a 10% difference in medication error rates.
  • Variability – This refers to the population variance of the outcome of interest, which is often unknown and is estimated by way of the standard deviation (SD) from pilot or previous studies for continuous outcomes.
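The significance level and power enter the sample size equations as multipliers taken from the standard normal distribution. They can be reproduced with the Python standard library:

```python
from statistics import NormalDist

alpha, power = 0.05, 0.80
a = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed multiplier for alpha
b = NormalDist().inv_cdf(power)          # multiplier for power

print(round(a, 2), round(b, 3))  # 1.96 0.842
```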

Table 10.1. Sample Size Equations for Comparing Two Groups with Continuous and Categorical Outcome Variables.


An example of a sample size calculation for an RCT to examine the effect of CDS on improving systolic blood pressure of hypertensive patients is provided in the Appendix. Refer to the Biomath website from Columbia University (n.d.) for a simple Web-based sample size/power calculator.

10.3.3. Sources of Bias

There are five common sources of bias in comparative studies: selection, performance, detection, attrition and reporting biases (Higgins & Green, 2011). These biases, and ways to minimize them, are described below (Vervloet et al., 2012).

  • Selection or allocation bias – This refers to differences between the composition of comparison groups in terms of the response to the intervention. An example is having sicker or older patients in the control group than in the intervention group when evaluating the effect of EMR reminders. To reduce selection bias, one can apply randomization and concealment when assigning participants to groups and ensure their compositions are comparable at baseline.
  • Performance bias – This refers to differences between groups in the care they received, aside from the intervention being evaluated. An example is the different ways by which reminders are triggered and used within and across groups, such as electronic, paper and phone reminders for patients and providers. To reduce performance bias, one may standardize the intervention and blind participants from knowing whether an intervention was received and which intervention was received.
  • Detection or measurement bias – This refers to differences between groups in how outcomes are determined. An example is where outcome assessors pay more attention to outcomes of patients known to be in the intervention group. To reduce detection bias, one may blind assessors from participants when measuring outcomes and ensure the same timing for assessment across groups.
  • Attrition bias – This refers to differences between groups in the ways participants are withdrawn from the study. An example is a low rate of participant response in the intervention group despite having received reminders for follow-up care. To reduce attrition bias, one needs to acknowledge the dropout rate and analyze data according to an intent-to-treat principle (i.e., include data from those who dropped out in the analysis).
  • Reporting bias – This refers to differences between reported and unreported findings. Examples include biases in publication, time lag, citation, language and outcome reporting depending on the nature and direction of the results. To reduce reporting bias, one may make the study protocol available with all pre-specified outcomes and report all expected outcomes in published results.

10.3.4. Confounders

Confounders are factors other than the intervention of interest that can distort the effect because they are associated with both the intervention and the outcome. For instance, in a study to demonstrate whether the adoption of a medication order entry system led to lower medication costs, there can be a number of potential confounders that affect the outcome. These may include the severity of illness of the patients, provider knowledge of and experience with the system, and hospital policy on prescribing medications (Harris et al., 2006). Another example is the evaluation of the effect of an antibiotic reminder system on the rate of post-operative deep venous thromboses (DVTs). The confounders can be general improvements in clinical practice during the study, such as prescribing patterns and post-operative care, that are not related to the reminders (Friedman & Wyatt, 2006).

To control for confounding effects, one may consider the use of matching, stratification and modelling. Matching involves the selection of similar groups with respect to their composition and behaviours. Stratification involves the division of participants into subgroups by selected variables, such as comorbidity index to control for severity of illness. Modelling involves the use of statistical techniques such as multiple regression to adjust for the effects of specific variables such as age, sex and/or severity of illness ( Higgins & Green, 2011 ).
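Stratification can be illustrated with a small, entirely hypothetical data set: comparing medication costs within severity strata removes the distortion that arises when one group happens to contain sicker patients.

```python
from statistics import mean

# (group, severity stratum, medication cost) -- hypothetical values
costs = [
    ("intervention", "mild", 100), ("intervention", "mild", 110),
    ("intervention", "severe", 300), ("intervention", "severe", 310),
    ("control", "mild", 130), ("control", "mild", 140),
    ("control", "severe", 330), ("control", "severe", 340),
]

def stratum_diff(stratum):
    """Mean cost difference (intervention - control) within one stratum."""
    tx = [c for g, s, c in costs if g == "intervention" and s == stratum]
    ct = [c for g, s, c in costs if g == "control" and s == stratum]
    return mean(tx) - mean(ct)

# Within each stratum the intervention saves about 30, regardless of how
# severity happens to be distributed across the two groups.
print(stratum_diff("mild"), stratum_diff("severe"))
```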

10.3.5. Guidelines on Quality and Reporting

There are guidelines on the quality and reporting of comparative studies. The GRADE (Grading of Recommendations Assessment, Development and Evaluation) guidelines provide explicit criteria for rating the quality of studies in randomized trials and observational studies (Guyatt et al., 2011). The extended CONSORT (Consolidated Standards of Reporting Trials) Statements for non-pharmacologic trials (Boutron, Moher, Altman, Schulz, & Ravaud, 2008), pragmatic trials (Zwarenstein et al., 2008), and eHealth interventions (Baker et al., 2010) provide reporting guidelines for randomized trials.

The GRADE guidelines offer a system for rating the quality of evidence in systematic reviews and guidelines. In this approach, RCTs start as high-quality evidence and observational studies as low-quality evidence in support of estimates of intervention effects. For each outcome in a study, five factors may rate down the quality of evidence. The final quality of evidence for each outcome falls into one of four levels: high, moderate, low or very low. These factors are listed below (for more details on the rating system, refer to Guyatt et al., 2011).

  • Design limitations – For RCTs these cover the lack of allocation concealment, lack of blinding, large loss to follow-up, trials stopped early, or selective outcome reporting.
  • Inconsistency of results – Variations in outcomes due to unexplained heterogeneity. An example is the unexpected variation of effects across subgroups of patients by severity of illness in the use of preventive care reminders.
  • Indirectness of evidence – Reliance on indirect comparisons due to restrictions in study populations, intervention, comparator or outcomes. An example is the 30-day readmission rate as a surrogate outcome for the quality of computer-supported emergency care in hospitals.
  • Imprecision of results – Studies with small sample sizes and few events typically have wide confidence intervals and are considered of low quality.
  • Publication bias – The selective reporting of results at the individual study level is already covered under design limitations, but is included here for completeness as it is relevant when rating the quality of evidence across studies in systematic reviews.

The original CONSORT Statement has 22 checklist items for reporting RCTs. For non-pharmacologic trials, extensions have been made to 11 items; for pragmatic trials, to eight items. These items are listed below. For further details, readers can refer to Boutron and colleagues (2008) and the CONSORT website (CONSORT, n.d.).

  • Title and abstract – one item on the means of randomization used.
  • Introduction – one item on background, rationale, and the problem addressed by the intervention.
  • Methods – 10 items on participants, interventions, objectives, outcomes, sample size, randomization (sequence generation, allocation concealment, implementation), blinding (masking), and statistical methods.
  • Results – seven items on participant flow, recruitment, baseline data, numbers analyzed, outcomes and estimation, ancillary analyses, and adverse events.
  • Discussion – three items on interpretation, generalizability, and overall evidence.

The CONSORT Statement for eHealth interventions describes the relevance of the CONSORT recommendations to the design and reporting of eHealth studies, with an emphasis on Internet-based interventions for direct use by patients, such as online health information resources, decision aids and PHRs. Of particular importance is the need to clearly define the intervention components, their role in the overall care process, the target population, the implementation process, primary and secondary outcomes, denominators for outcome analyses, and real-world potential (for details refer to Baker et al., 2010).

10.4. Case Examples

10.4.1. Pragmatic RCT in Vascular Risk Decision Support

Holbrook and colleagues (2011) conducted a pragmatic RCT to examine the effects of a CDS intervention on vascular care and outcomes for older adults. The study is summarized below.

  • Setting – Community-based primary care practices with EMRs in one Canadian province.
  • Participants – English-speaking patients 55 years of age or older with diagnosed vascular disease, no cognitive impairment and not living in a nursing home, who had a provider visit in the past 12 months.
  • Intervention – A Web-based individualized vascular tracking and advice CDS system covering eight top vascular risk factors and two diabetic risk factors, for use by both providers and patients and their families. Providers and staff could update the patient's profile at any time, and the CDS algorithm ran nightly to update recommendations and the colour highlighting used in the tracker interface. Intervention patients had Web access to the tracker, a print version mailed to them prior to the visit, and telephone support on advice.
  • Design – Pragmatic, one-year, two-arm, multicentre RCT, with randomization upon patient consent by phone using an allocation-concealed online program. Randomization was by patient, with stratification by provider using a block size of six. Trained reviewers examined EMR data and conducted patient telephone interviews to collect risk factors, vascular history, and vascular events. Providers completed questionnaires on the intervention at study end. Patients had final 12-month lab checks on urine albumin, low-density lipoprotein cholesterol, and A1c levels.
  • Outcomes – The primary outcome was based on change in a process composite score (PCS), computed as the sum of frequency-weighted process scores for each of the eight main risk factors, with a maximum score of 27. A process was considered met if the risk factor had been checked. PCS was measured at baseline and study end, with the difference as the individual primary outcome score. The main secondary outcome was a clinical composite score (CCS) based on the same eight risk factors, compared in two ways: a comparison of the mean number of clinical variables on target, and the percentage of patients with improvement between the two groups. Other secondary outcomes were actual vascular event rates, individual PCS and CCS components, ratings of usability, continuity of care, patient ability to manage vascular risk, and quality of life using the EuroQol five-dimension questionnaire (EQ-5D).
  • Analysis – 1,100 patients were needed to achieve 90% power in detecting a one-point PCS difference between groups, assuming a standard deviation of five points, a two-tailed t-test for mean difference at the 5% significance level, and a withdrawal rate of 10%. The PCS, CCS and EQ-5D scores were analyzed using a generalized estimating equation accounting for clustering within providers. Descriptive statistics and χ² tests or exact tests were used for the other outcomes.
  • Findings – 1,102 patients and 49 providers enrolled in the study. The intervention group with 545 patients had significant PCS improvement, with a difference of 4.70 (p < .001) on a 27-point scale. The intervention group also had significantly higher odds of rating improvements in their continuity of care (4.178, p < .001) and in their ability to improve their vascular health (3.07, p < .001). There was no significant change in vascular events, clinical variables or quality of life. Overall the CDS intervention led to reduced vascular risks but not to improved clinical outcomes over a one-year follow-up.

10.4.2. Non-randomized Experiment in Antibiotic Prescribing in Primary Care

Mainous, Lambourne, and Nietert (2013) conducted a prospective non-randomized trial to examine the impact of a CDS system on antibiotic prescribing for acute respiratory infections (ARIs) in primary care. The study is summarized below.

  • Setting – A primary care research network in the United States whose members use a common EMR and pool data quarterly for quality improvement and research studies.
  • Participants – An intervention group with nine practices across nine states, and a control group with 61 practices.
  • Intervention – A point-of-care CDS tool provided as customizable progress note templates based on existing EMR features. The CDS recommendations reflect Centers for Disease Control and Prevention (CDC) guidelines based on a patient's predominant presenting symptoms and age. The CDS was used to assist in ARI diagnosis, prompt antibiotic use, record diagnosis and treatment decisions, and access printable patient and provider education resources from the CDC.
  • Design – The intervention group received a multi-method intervention to facilitate provider CDS adoption that included quarterly audit and feedback, best practice dissemination meetings, academic detailing site visits, performance review and CDS training. The control group did not receive information on the intervention, the CDS or the education. Baseline data collection lasted three months, with follow-up for 15 months after CDS implementation.
  • Outcomes – The outcomes were the frequency of inappropriate prescribing during an ARI episode, broad-spectrum antibiotic use, and diagnostic shift. Inappropriate prescribing was computed by dividing the number of ARI episodes with diagnoses in the inappropriate category that had an antibiotic prescription by the total number of ARI episodes with diagnoses for which antibiotics are inappropriate. Broad-spectrum antibiotic use was computed by dividing the number of ARI episodes with a broad-spectrum antibiotic prescription by the total number of ARI episodes with an antibiotic prescription. Antibiotic drift was computed in two ways: dividing the number of ARI episodes with diagnoses where antibiotics are appropriate by the total number of ARI episodes with an antibiotic prescription; and dividing the number of ARI episodes where antibiotics were inappropriate by the total number of ARI episodes. Process measures included the frequency of CDS template use and whether the outcome measures differed by CDS usage.
  • Analysis – Outcomes were measured quarterly for each practice, weighted by the number of ARI episodes during the quarter to assign greater weight to practices and periods with greater numbers of relevant episodes. Weighted means and 95% CIs were computed separately for adult and pediatric (less than 18 years of age) patients for each time period for both groups. Baseline means in outcome measures were compared between the two groups using weighted independent-sample t-tests. Linear mixed models were used to compare changes over the 18-month period. The models included time and intervention status, and were adjusted for practice characteristics such as specialty, size, region and baseline ARIs. Random practice effects were included to account for clustering of repeated measures on practices over time. P-values of less than 0.05 were considered significant.
  • Findings – For adult patients, inappropriate prescribing in ARI episodes declined more in the intervention group (-0.6%) than in the control group (4.2%) (p = 0.03), and prescribing of broad-spectrum antibiotics declined by 16.6% in the intervention group versus an increase of 1.1% in the control group (p < 0.0001). For pediatric patients, there was a similar decline of 19.7% in the intervention group versus an increase of 0.9% in the control group (p < 0.0001). In summary, the CDS had a modest effect in reducing inappropriate prescribing for adults, but a substantial effect in reducing the prescribing of broad-spectrum antibiotics in adult and pediatric patients.
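The episode-weighted means used in the analysis can be sketched as follows; the rates and episode counts are hypothetical, not taken from the study.

```python
# Quarterly inappropriate-prescribing rates for three practices, weighted
# by the number of ARI episodes so that busier practices count for more.
rates = [0.30, 0.25, 0.40]
episodes = [200, 50, 10]

weighted_mean = sum(r * n for r, n in zip(rates, episodes)) / sum(episodes)
unweighted = sum(rates) / len(rates)

print(round(weighted_mean, 3), round(unweighted, 3))  # 0.294 0.317
```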

10.4.3. Interrupted Time Series on EHR Impact in Nursing Care

Dowding, Turley, and Garrido (2012) conducted a prospective ITS study to examine the impact of EHR implementation on nursing care processes and outcomes. The study is summarized below.

  • Setting – Kaiser Permanente (KP), a large not-for-profit integrated healthcare organization in the United States.
  • Participants – 29 KP hospitals in the northern and southern regions of California.
  • Intervention – An integrated EHR system implemented at all hospitals with CPOE, nursing documentation and risk assessment tools. The nursing component for risk assessment documentation of pressure ulcers and falls was consistent across hospitals and developed by clinical nurses and informaticists by consensus.
  • Design – ITS design with monthly data on pressure ulcers and quarterly data on fall rates and risk collected over seven years between 2003 and 2009. All data were collected at the unit level for each hospital.
  • Outcomes – Process measures were the proportion of patients with a fall risk assessment done, and the proportion with a hospital-acquired pressure ulcer (HAPU) risk assessment done, within 24 hours of admission. Outcome measures were fall and HAPU rates as part of the unit-level nursing care process and nursing-sensitive outcome data collected routinely for all California hospitals. Fall rate was defined as the number of unplanned descents to the floor per 1,000 patient days, and HAPU rate as the percentage of patients with stage I-IV or unstageable ulcers on the day of data collection.
  • Analysis – Fall and HAPU risk data were synchronized using the month in which the EHR was implemented at each hospital as time zero, and aggregated across hospitals for each time period. Multivariate regression analysis was used to examine the effect of time, region and EHR.
  • Findings – The EHR was associated with a significant increase in documentation rates for HAPU risk (2.21; 95% CI 0.67 to 3.75) and a non-significant increase for fall risk (0.36; -3.58 to 4.30). The EHR was associated with a 13% decrease in HAPU rates (-0.76; -1.37 to -0.16) but no change in fall rates (-0.091; -0.29 to 0.11). Hospital region was a significant predictor of variation for HAPU (0.72; 0.30 to 1.14) and fall rates (0.57; 0.41 to 0.72). During the study period, HAPU rates decreased significantly (-0.16; -0.20 to -0.13) but fall rates did not (0.0052; -0.01 to 0.02). In summary, EHR implementation was associated with a reduction in the number of HAPUs but not patient falls, and changes over time and hospital region also affected outcomes.

10.5. Summary

In this chapter we introduced randomized and non-randomized experimental designs as two types of comparative studies used in eHealth evaluation. Randomization is the highest quality design as it reduces bias, but it is not always feasible. The methodological issues addressed include choice of variables, sample size, sources of biases, confounders, and adherence to reporting guidelines. Three case examples were included to show how eHealth comparative studies are done.

  • Baker T. B., Gustafson D. H., Shaw B., Hawkins R., Pingree S., Roberts L., Strecher V. Relevance of CONSORT reporting criteria for research on eHealth interventions. Patient Education and Counseling. 2010;81(suppl. 7):77–86.
  • Boutron I., Moher D., Altman D. G., Schulz K. F., Ravaud P., CONSORT Group. Extending the CONSORT statement to randomized trials of nonpharmacologic treatment: Explanation and elaboration. Annals of Internal Medicine. 2008;148(4):295–309.
  • Columbia University. (n.d.). Statistics: sample size / power calculation. Biomath (Division of Biomathematics/Biostatistics), Department of Pediatrics. New York: Columbia University Medical Centre. Retrieved from http://www.biomath.info/power/index.htm
  • CONSORT Group. (n.d.). The CONSORT statement. Retrieved from http://www.consort-statement.org/
  • Dowding D. W., Turley M., Garrido T. The impact of an electronic health record on nurse sensitive patient outcomes: an interrupted time series analysis. Journal of the American Medical Informatics Association. 2012;19(4):615–620.
  • Friedman C. P., Wyatt J. C. Evaluation methods in biomedical informatics. 2nd ed. New York: Springer Science + Business Media, Inc.; 2006.
  • Guyatt G., Oxman A. D., Akl E. A., Kunz R., Vist G., Brozek J., … Schunemann H. J. GRADE guidelines: 1. Introduction – GRADE evidence profiles and summary of findings tables. Journal of Clinical Epidemiology. 2011;64(4):383–394.
  • Harris A. D., McGregor J. C., Perencevich E. N., Furuno J. P., Zhu J., Peterson D. E., Finkelstein J. The use and interpretation of quasi-experimental studies in medical informatics. Journal of the American Medical Informatics Association. 2006;13(1):16–23.
  • Higgins J. P. T., Green S., editors. Cochrane handbook for systematic reviews of interventions (Version 5.1.0, updated March 2011). London: The Cochrane Collaboration; 2011. Retrieved from http://handbook.cochrane.org/
  • Holbrook A., Pullenayegum E., Thabane L., Troyan S., Foster G., Keshavjee K., … Curnew G. Shared electronic vascular risk decision support in primary care: Computerization of medical practices for the enhancement of therapeutic effectiveness (COMPETE III) randomized trial. Archives of Internal Medicine. 2011;171(19):1736–1744.
  • Mainous A. G. III, Lambourne C. A., Nietert P. J. Impact of a clinical decision support system on antibiotic prescribing for acute respiratory infections in primary care: quasi-experimental trial. Journal of the American Medical Informatics Association. 2013;20(2):317–324.
  • Noordzij M., Tripepi G., Dekker F. W., Zoccali C., Tanck M. W., Jager K. J. Sample size calculations: basic principles and common pitfalls. Nephrology Dialysis Transplantation. 2010;25(5):1388–1393.
  • Vervloet M., Linn A. J., van Weert J. C. M., de Bakker D. H., Bouvy M. L., van Dijk L. The effectiveness of interventions using electronic reminders to improve adherence to chronic medication: A systematic review of the literature. Journal of the American Medical Informatics Association. 2012;19(5):696–704.
  • Zwarenstein M., Treweek S. What kind of randomized trials do we need? Canadian Medical Association Journal. 2009;180(10):998–1000.
  • Zwarenstein M., Treweek S., Gagnier J. J., Altman D. G., Tunis S., Haynes B., Oxman A. D., Moher D., for the CONSORT and Pragmatic Trials in Healthcare (Practihc) groups. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. British Medical Journal. 2008;337:a2390.

Appendix. Example of Sample Size Calculation

This is an example of a sample size calculation for an RCT that examines the effect of a CDS system on reducing systolic blood pressure in hypertensive patients. The case is adapted from the example described in Noordzij et al. (2010).

(a) Systolic blood pressure as a continuous outcome measured in mmHg

Based on similar studies in the literature with similar patients, the systolic blood pressure values from the comparison groups are expected to be normally distributed with a standard deviation of 20 mmHg. The evaluator wishes to detect a clinically relevant difference of 15 mmHg in systolic blood pressure as the outcome between the intervention group with CDS and the control group without CDS. Assuming a significance level (alpha) of 0.05 for a two-tailed t-test and a power of 0.80, the corresponding multipliers¹ are 1.96 and 0.842, respectively. Using the sample size equation for a continuous outcome below, we can calculate the sample size needed for the above study.

n = 2[(a + b)²σ²]/(μ1 − μ2)², where

n = sample size for each group

μ1 = population mean of systolic blood pressure in the intervention group

μ2 = population mean of systolic blood pressure in the control group

μ1 − μ2 = desired difference in mean systolic blood pressure between the groups

σ = population standard deviation

a = multiplier for significance level (alpha)

b = multiplier for power (1 − beta)

Substituting the values into the equation gives a sample size (n) of 28 per group:

n = 2[(1.96 + 0.842)²(20²)]/15² ≈ 27.9, or 28 samples per group

(b) Systolic blood pressure as a categorical outcome measured as below or above 140 mmHg (i.e., hypertension yes/no)

In this example, a systolic blood pressure above 140 mmHg is considered an event, i.e., the patient has hypertension. Based on published literature, the proportion of patients in the general population with hypertension is 30%. The evaluator wishes to detect a clinically relevant difference of 10% in the proportion of hypertensive patients between the intervention group with CDS and the control group without CDS. This means the expected proportion of patients with hypertension is 20% (p1 = 0.2) in the intervention group and 30% (p2 = 0.3) in the control group. Assuming a significance level (alpha) of 0.05 for a two-tailed test and a power of 0.80, the corresponding multipliers are 1.96 and 0.842, respectively. Using the sample size equation for a categorical outcome below, we can calculate the sample size needed for the above study.

n = [(a + b)²(p1q1 + p2q2)]/x², where

p1 = proportion of patients with hypertension in the intervention group

q1 = proportion of patients without hypertension in the intervention group (1 − p1)

p2 = proportion of patients with hypertension in the control group

q2 = proportion of patients without hypertension in the control group (1 − p2)

x = desired difference in the proportion of hypertensive patients between the two groups

Substituting these values into the equation gives a sample size (n) of 291 per group:

n = [(1.96 + 0.842)² × ((0.2)(0.8) + (0.3)(0.7))] / (0.1)² ≈ 291 samples per group
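The categorical-outcome calculation follows the same pattern; again, a minimal sketch with a function name of our own choosing:

```python
import math

def sample_size_categorical(p1, p2, a=1.96, b=0.842):
    """Sample size per group for a categorical (yes/no) outcome.

    p1, p2 -- expected event proportions in the two groups
    a, b   -- multipliers for alpha (two-tailed 0.05) and power (0.80)
    """
    q1, q2 = 1 - p1, 1 - p2
    diff = p1 - p2  # desired difference in proportions (chi in the text)
    n = (a + b) ** 2 * (p1 * q1 + p2 * q2) / diff ** 2
    return math.ceil(n)  # round up to whole patients per group

# Example from the text: 20% vs. 30% expected hypertension
print(sample_size_categorical(0.2, 0.3))  # -> 291
```

Note how much larger the required sample is here (291 vs. 28 per group): dichotomizing a continuous measure discards information, so detecting an effect takes many more patients.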

From Table 3 on p. 1392 of Noordzij et al. (2010).

This publication is licensed under a Creative Commons License, Attribution-Noncommercial 4.0 International License (CC BY-NC 4.0): see https://creativecommons.org/licenses/by-nc/4.0/

Cite this page: Lau F, Holbrook A. Chapter 10: Methods for Comparative Studies. In: Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach. Victoria (BC): University of Victoria; 2017 Feb 27.

Teaching Case Study Methods in Comparative Education

First Online: 14 September 2019

Lesley Bartlett, Diana Famakinwa & Regina Fuller

Introduction

What are the limitations in how case studies are conceptualized and conducted in the field of Comparative and International Education? How might comparative case studies (CCS) address those limitations? This entry addresses those questions, reviews the basic tenets of comparative case studies, and offers two examples of CCS to show the opportunities and constraints of the methodological approach. The entry concludes with implications for teaching case study methods in comparative education.

Case Studies and Comparison

Case studies are widely used in educational research and by scholars who embrace very different epistemological positions. However, the methodological literature on case studies does not reflect these divergent theories of knowledge. While those working in the neo- or post-positivist paradigm may emulate social science traditions, other qualitative approaches on the continuum of paradigms, particularly critical and constructivist ones, adhere to a more inductive...


Bartlett, L., Famakinwa, D., Fuller, R. (2019). Teaching Case Study Methods in Comparative Education. In: Peters, M. (eds) Encyclopedia of Teacher Education. Springer, Singapore. https://doi.org/10.1007/978-981-13-1179-6_316-1

Comparative Analysis: What It Is & How to Conduct It

Comparative analysis compares your site, product, or tool with those of your competitors: it is always better to know what your competitors have to offer.

When a business wants to start a marketing campaign or grow, a comparative analysis can provide the information it needs to make crucial decisions. The analysis gathers different data sets and compares the available options so a business can make good decisions for its customers and itself. If you or your business want to make better decisions, learning about comparative analysis can help.

In this article, we'll explain what comparative analysis is and why it matters, and we'll walk through how to conduct a good in-depth analysis.

What is comparative analysis?

Comparative analysis is a way to look at two or more similar things to see how they are different and what they have in common. 

It is used in many ways and fields to help people understand the similarities and differences between products better. It can help businesses make good decisions about key issues.

One meaningful use is with scientific data: information gathered through scientific research for a specific purpose.

Applied to scientific data, comparative analysis determines how consistent and reliable the data is, and helps scientists ensure their data is accurate and valid.

Importance of comparative analysis 

Comparative analyses are important if you want to understand a problem better or find answers to important questions. Here are the main goals businesses want to reach through comparative analysis.

  • It is a part of the diagnostic phase of business analytics. It can answer many of the most important questions a company may have and help you figure out how to fix problems at the company’s core to improve performance and even make more money.
  • It encourages a deep understanding of the opportunities that apply to specific processes, departments, or business units, and it ensures that the real reasons for performance gaps are being addressed.
  • It is used a lot because it helps people understand the challenges an organization has faced in the past and the ones it faces now. This method gives objective, fact-based information about performance and ways to improve it.

How to successfully conduct it

Consider using the advice below to carry out a successful comparative analysis:

Conduct research

Before doing an analysis, it’s important to do a lot of research. Research not only gives you evidence to back up your conclusions, but it might also show you something you hadn’t thought of before.

Research could also tell you how your competitors might handle a problem.

Make a list of what’s different and what’s the same

When comparing two things in a comparative analysis, you need to make a detailed list of the similarities and differences.

Try to figure out how a change to one thing might affect another, such as how increasing the number of vacation days affects sales, production, or costs.

A comparative analysis can also help you find outside causes, such as economic conditions or environmental problems.

Describe both sides

Comparative analysis may try to show that one argument or idea is better, but the analysis must cover both sides equally. The analysis shows both sides of the main arguments and claims. 

For example, to compare the benefits and drawbacks of starting a recycling program, one might examine both the positive effects, such as corporate responsibility, and the potential negative effects, such as high implementation costs, in order to make wise, practical decisions or come up with alternative solutions.

Include variables

A thorough comparative analysis is usually more than just a list of pros and cons, because it considers the variables that affect both sides.

Variables can be factors that can’t be changed, like how summer weather affects shipping speeds, and factors that can, like when to work with a local shipper.

Do analyses regularly

Comparative analyses are important for any business practice. Consider the different areas and factors that a comparative analysis looks at:

  • Competitors
  • Stock performance
  • Financial position
  • Profitability
  • Dividends and revenue
  • Research and development

Because a comparative analysis can help more than one department in a company, doing them often can help you keep up with market changes and stay relevant.

We’ve talked about how useful a comparative analysis is for your business, but everything has two sides. It is a good shortcut, yet you should still run your own user interviews or user tests if you can.

We hope you enjoy doing comparative analyses! The point of learning from competitors is not simply to follow them but to add your own ideas, so that you are learning and creating rather than just imitating.

QuestionPro can help you with your analysis process, create and design a survey to meet your goals, and analyze data for your business’s comparative analysis.

At QuestionPro, we give researchers tools for collecting data, like our survey software and a library of insights for all kinds of long-term research. If you want to book a demo or learn more about our platform, just click here.



Comparative case studies

Comparative case studies can be useful for checking variation in program implementation.

Comparative case studies are another way of checking whether results match the program theory. Each context and environment is different, and a comparative case study can help the evaluator check whether the program theory holds in each one. If implementation differs, the reasons and results can be recorded. The opposite is also true: similar patterns across sites can increase confidence in the results.

Evaluators used a comparative case study method for the National Cancer Institute’s (NCI’s) Community Cancer Centers Program (NCCCP). The aim of this program was to expand cancer research and deliver the latest, most advanced cancer care to a greater number of Americans in the communities in which they live via community hospitals. The evaluation examined each of the program components (listed below) at each program site. The six program components were:

  • increasing capacity to collect biospecimens per NCI’s best practices;
  • enhancing clinical trials (CT) research;
  • reducing disparities across the cancer continuum;
  • improving the use of information technology (IT) and electronic medical records (EMRs) to support improvements in research and care delivery;
  • improving quality of cancer care and related areas, such as the development of integrated, multidisciplinary care teams; and
  • placing greater emphasis on survivorship and palliative care.

The evaluators’ use of this method helped them provide recommendations at the program level as well as for each specific program site.

Advice for choosing this method

  • Compare cases with the same outcome but differences in an intervention (known as MDD, most different design)
  • Compare cases with the same intervention but differences in outcomes (known as MSD, most similar design)

Advice for using this method

  • Consider the variables of each case, and which cases can be matched for comparison.
  • Provide the evaluator with as much detail and background on each case as possible. Provide advice on possible criteria for matching.

National Cancer Institute. (2007). NCI Community Cancer Centers Program Evaluation (NCCCP). Retrieved from https://digitalscholarship.unlv.edu/jhdrp/vol8/iss1/4/



How do I write a comparative analysis?

A comparative analysis is an essay in which two things are compared and contrasted. You may have done a "compare and contrast" paper in your English class, and a comparative analysis is the same general idea, but as a graduate student you are expected to produce a higher level of analysis in your writing. You can follow these guidelines to get started. 

  • Conduct your research. Need help? Ask a Librarian!
  • Brainstorm a list of similarities and differences. A Double Bubble map can be helpful for this step.
  • Write your thesis. This will be based on what you have discovered regarding the weight of similarities and differences between the things you are comparing.
  • Write the body of your paper. There are two main approaches to organizing a comparative analysis:
    • Alternating (point-by-point) method: Find similar points between each subject and alternate writing about each of them.
    • Block (subject-by-subject) method: Discuss all of the first subject and then all of the second.
    The University of Toronto’s comparative essay guide gives some great examples of when each of these is most effective.
  • Don’t forget to cite your sources!

Visvis, V., & Plotnik, J. (n.d.). The comparative essay . University of Toronto. https://advice.writing.utoronto.ca/types-of-writing/comparative-essay/

Walk, K. (1998). How to write a comparative analysis . Harvard University. https://writingcenter.fas.harvard.edu/pages/how-write-comparative-analysis

Last Updated Sep 06, 2023 · Answered By Kerry Louvier


