Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes. Revised on 20 March 2023.

A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions

Step 1: Consider your aims and approach

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.


Qualitative research designs tend to be more flexible and inductive, allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive, with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics.

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval?

At each stage of the research design process, make sure that your choices are practically feasible.


Step 2: Choose a type of research design

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

  • Experimental: Tests cause-and-effect relationships by manipulating an independent variable and randomly assigning participants to conditions.
  • Quasi-experimental: Tests cause-and-effect relationships using existing groups rather than random assignment.
  • Correlational: Measures the relationship between two or more variables without manipulating them.
  • Descriptive: Measures and describes the characteristics of a population or phenomenon without testing relationships.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analysing the data.

  • Grounded theory: Aims to develop a theory inductively by systematically collecting and analysing data.
  • Phenomenology: Aims to understand and describe participants’ lived experience of a phenomenon.

Step 3: Identify your population and sampling method

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling. The sampling method you use affects how confidently you can generalise your results to the population as a whole.

  • Probability sampling: Every member of the population has a known, non-zero chance of being selected, usually through random selection.
  • Non-probability sampling: Individuals are selected based on non-random criteria, such as convenience or voluntary self-selection.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
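
To make the distinction concrete, here is a minimal Python sketch (not part of the original article) contrasting a simple random sample with a crude convenience sample drawn from a hypothetical sampling frame of 2,000 students; the frame, sample size, and seed are all invented for illustration.

```python
import random

# Hypothetical sampling frame: student IDs for the whole population of interest.
population = [f"student_{i:04d}" for i in range(1, 2001)]

random.seed(42)  # fixed seed so the illustration is reproducible

# Probability sampling: simple random sampling gives every student
# an equal, known chance of selection.
probability_sample = random.sample(population, k=100)

# Non-probability (convenience) sampling: e.g., taking the first 100 students
# who happen to respond -- crudely simulated here by slicing, which is exactly
# why such samples risk systematic bias.
convenience_sample = population[:100]

print(len(probability_sample), len(convenience_sample))
```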

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Step 4: Choose your data collection methods

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

  • Questionnaires: Respondents answer a fixed set of written questions, on paper or online, with limited opportunity for follow-up.
  • Interviews: A researcher asks questions verbally, in a structured, semi-structured, or unstructured format, allowing follow-up and clarification.

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

  • Quantitative observation: Systematically counting or measuring pre-defined behaviours or events to produce numerical data.
  • Qualitative observation: Recording detailed descriptions and field notes about behaviours, interactions, and context.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

Examples of data collection methods by field:

  • Media & communication: Collecting a sample of texts (e.g., speeches, articles, or social media posts) for data on cultural norms and narratives
  • Psychology: Using technologies like neuroimaging, eye-tracking, or computer-based tasks to collect data on things like attention, emotional response, or reaction time
  • Education: Using tests or assignments to collect data on knowledge and skills
  • Physical sciences: Using scientific instruments to collect data on things like weight, blood pressure, or chemical composition

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

Step 5: Plan your data collection procedures

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations, which events or actions will you count?

If you’re using surveys, which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity have already been established.
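
For example, a fuzzy concept like job satisfaction might be operationalised as the mean of several Likert-scale items. The short sketch below only illustrates that idea; the item names and the 1–5 response scale are assumptions, not items from a validated instrument.

```python
# One respondent's answers to hypothetical 1-5 Likert items measuring "satisfaction".
responses = {
    "enjoys_work": 4,
    "would_recommend_employer": 5,
    "feels_valued": 3,
}

# Operationalised indicator: the mean item score (higher = more satisfied).
satisfaction_score = sum(responses.values()) / len(responses)
print(round(satisfaction_score, 2))  # 4.0
```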

Reliability and validity

Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.


For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.
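
One common internal-consistency check you might run on pilot data is Cronbach’s alpha. The sketch below is a minimal illustration: it computes alpha from scratch with NumPy on an invented participants-by-items matrix (values of roughly 0.7 or above are conventionally read as acceptable).

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal-consistency reliability for a participants x items matrix."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of participants' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up pilot data: 6 participants answering 4 Likert items.
pilot = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 3, 3, 3],
])
print(round(cronbach_alpha(pilot), 2))
```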

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.

Step 6: Decide on your data analysis strategies

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis. With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics, you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
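
As a concrete (hypothetical) illustration, the sketch below uses Python’s built-in statistics module to summarise an invented set of test scores in terms of the three aspects listed above.

```python
from collections import Counter
import statistics

# Hypothetical test scores from a sample of 12 students.
scores = [55, 60, 60, 65, 70, 70, 70, 75, 80, 80, 85, 90]

distribution = Counter(scores)        # frequency of each score
mean_score = statistics.mean(scores)  # central tendency
sd_score = statistics.stdev(scores)   # variability (sample standard deviation)

print(distribution)
print(round(mean_score, 1), round(sd_score, 1))
```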

Using inferential statistics, you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
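
To illustrate, here is a minimal sketch using SciPy on simulated data: a Pearson correlation as an example of an association test and an independent-samples t test as an example of a comparison test. The variable names and data are invented; in practice your choice of test would follow the considerations above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Association test: is study time related to exam score? (simulated data)
study_hours = rng.uniform(0, 10, size=50)
exam_scores = 50 + 4 * study_hours + rng.normal(0, 5, size=50)
r, p_corr = stats.pearsonr(study_hours, exam_scores)

# Comparison test: do two teaching methods produce different mean scores?
group_a = rng.normal(70, 8, size=30)
group_b = rng.normal(75, 8, size=30)
t, p_ttest = stats.ttest_ind(group_a, group_b)

print(f"correlation r={r:.2f}, p={p_corr:.3f}")
print(f"t test t={t:.2f}, p={p_ttest:.3f}")
```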

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis.

  • Thematic analysis: Identifies and interprets recurring patterns of meaning (themes) across the data.
  • Discourse analysis: Examines how language is used in context, focusing on communication, power, and social meaning.

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

Frequently asked questions

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data, it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question.

  • If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts, and meanings, use qualitative methods.
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables, use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.


Organizing Your Social Sciences Research Paper: Types of Research Designs


Introduction

Before beginning your paper, you need to decide how you plan to design the study.

The research design refers to the overall strategy that you choose to integrate the different components of the study in a coherent and logical way, thereby ensuring you will effectively address the research problem; it constitutes the blueprint for the collection, measurement, and analysis of data. Note that your research problem determines the type of design you should use, not the other way around!

De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Trochim, William M.K. Research Methods Knowledge Base . 2006.

General Structure and Writing Style

The function of a research design is to ensure that the evidence obtained enables you to effectively address the research problem logically and as unambiguously as possible. In social sciences research, obtaining information relevant to the research problem generally entails specifying the type of evidence needed to test a theory, to evaluate a program, or to accurately describe and assess meaning related to an observable phenomenon.

With this in mind, a common mistake made by researchers is that they begin their investigations far too early, before they have thought critically about what information is required to address the research problem. Without attending to these design issues beforehand, the overall research problem will not be adequately addressed and any conclusions drawn will run the risk of being weak and unconvincing. As a consequence, the overall validity of the study will be undermined.

The length and complexity of describing research designs in your paper can vary considerably, but any well-developed design will achieve the following:

  • Identify the research problem clearly and justify its selection, particularly in relation to any valid alternative designs that could have been used,
  • Review and synthesize previously published literature associated with the research problem,
  • Clearly and explicitly specify hypotheses [i.e., research questions] central to the problem,
  • Effectively describe the data which will be necessary for an adequate testing of the hypotheses and explain how such data will be obtained, and
  • Describe the methods of analysis to be applied to the data in determining whether or not the hypotheses are true or false.

The research design is usually incorporated into the introduction and varies in length depending on the type of design you are using. However, you can get a sense of what to do by reviewing the literature of studies that have utilized the same research design. This can provide an outline to follow for your own paper.

NOTE: Use the SAGE Research Methods Online and Cases and the SAGE Research Methods Videos databases to search for scholarly resources on how to apply specific research designs and methods. The Research Methods Online database contains links to more than 175,000 pages of SAGE publisher's book, journal, and reference content on quantitative, qualitative, and mixed research methodologies. Also included is a collection of case studies of social research projects that can be used to help you better understand abstract or complex methodological concepts. The Research Methods Videos database contains hours of tutorials, interviews, video case studies, and mini-documentaries covering the entire research process.

Creswell, John W. and J. David Creswell. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 5th edition. Thousand Oaks, CA: Sage, 2018; De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Leedy, Paul D. and Jeanne Ellis Ormrod. Practical Research: Planning and Design . Tenth edition. Boston, MA: Pearson, 2013; Vogt, W. Paul, Dianna C. Gardner, and Lynne M. Haeffele. When to Use What Research Design . New York: Guilford, 2012.


Causal Design

Definition and Purpose

Causality studies may be thought of as understanding a phenomenon in terms of conditional statements in the form, “If X, then Y.” This type of research is used to measure what impact a specific change will have on existing norms and assumptions. Most social scientists seek causal explanations that reflect tests of hypotheses. Causal effect (nomothetic perspective) occurs when variation in one phenomenon, an independent variable, leads to or results, on average, in variation in another phenomenon, the dependent variable.

Conditions necessary for determining causality:

  • Empirical association -- a valid conclusion is based on finding an association between the independent variable and the dependent variable.
  • Appropriate time order -- to conclude that causation was involved, one must see that cases were exposed to variation in the independent variable before variation in the dependent variable.
  • Nonspuriousness -- a relationship between two variables that is not due to variation in a third variable.

What do these studies tell you?

  • Causality research designs assist researchers in understanding why the world works the way it does through the process of proving a causal link between variables and by the process of eliminating other possibilities.
  • Replication is possible.
  • There is greater confidence the study has internal validity due to the systematic subject selection and equity of groups being compared.

What don't these studies tell you?

  • Not all relationships are causal! The possibility always exists that, by sheer coincidence, two unrelated events appear to be related [e.g., Punxsutawney Phil could accurately predict the duration of winter for five consecutive years but, the fact remains, he's just a big, furry rodent].
  • Conclusions about causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. This means causality can only be inferred, never proven.
  • If one variable causes another, the cause must come before the effect. However, even though two variables might be causally related, it can sometimes be difficult to determine which variable comes first and, therefore, to establish which variable is the actual cause and which is the actual effect.

Beach, Derek and Rasmus Brun Pedersen. Causal Case Study Methods: Foundations and Guidelines for Comparing, Matching, and Tracing . Ann Arbor, MI: University of Michigan Press, 2016; Bachman, Ronet. The Practice of Research in Criminology and Criminal Justice . Chapter 5, Causation and Research Designs. 3rd ed. Thousand Oaks, CA: Pine Forge Press, 2007; Brewer, Ernest W. and Jennifer Kubn. “Causal-Comparative Design.” In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 125-132; Causal Research Design: Experimentation. Anonymous SlideShare Presentation ; Gall, Meredith. Educational Research: An Introduction . Chapter 11, Nonexperimental Research: Correlational Designs. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Trochim, William M.K. Research Methods Knowledge Base . 2006.

Cohort Design

Often used in the medical sciences, but also found in the applied social sciences, a cohort study generally refers to a study conducted over a period of time involving members of a population from which the subject or representative member comes, and who are united by some commonality or similarity. Using a quantitative framework, a cohort study makes note of statistical occurrence within a specialized subgroup, united by the same or similar characteristics that are relevant to the research problem being investigated, rather than studying statistical occurrence within the general population. Using a qualitative framework, cohort studies generally gather data using methods of observation. Cohorts can be either "open" or "closed."

  • Open Cohort Studies [dynamic populations, such as the population of Los Angeles] involve a population that is defined just by the state of being a part of the study in question (and being monitored for the outcome). Date of entry and exit from the study is individually defined; therefore, the size of the study population is not constant. In open cohort studies, researchers can only calculate rate-based data, such as incidence rates and variants thereof.
  • Closed Cohort Studies [static populations, such as patients entered into a clinical trial] involve participants who enter into the study at one defining point in time and where it is presumed that no new participants can enter the cohort. Given this, the number of study participants remains constant (or can only decrease).
  • The use of cohorts is often mandatory because a randomized control study may be unethical. For example, you cannot deliberately expose people to asbestos, you can only study its effects on those who have already been exposed. Research that measures risk factors often relies upon cohort designs.
  • Because cohort studies measure potential causes before the outcome has occurred, they can demonstrate that these “causes” preceded the outcome, thereby avoiding the debate as to which is the cause and which is the effect.
  • Cohort analysis is highly flexible and can provide insight into effects over time and related to a variety of different types of changes [e.g., social, cultural, political, economic, etc.].
  • Either original data or secondary data can be used in this design.
  • In cases where a comparative analysis of two cohorts is made [e.g., studying the effects of one group exposed to asbestos and one that has not], a researcher cannot control for all other factors that might differ between the two groups. These factors are known as confounding variables.
  • Cohort studies can end up taking a long time to complete if the researcher must wait for the conditions of interest to develop within the group. This also increases the chance that key variables change during the course of the study, potentially impacting the validity of the findings.
  • Due to the lack of randomization in the cohort design, its external validity is lower than that of study designs where the researcher randomly assigns participants.

Healy P, Devane D. “Methodological Considerations in Cohort Study Designs.” Nurse Researcher 18 (2011): 32-36; Glenn, Norval D, editor. Cohort Analysis . 2nd edition. Thousand Oaks, CA: Sage, 2005; Levin, Kate Ann. Study Design IV: Cohort Studies. Evidence-Based Dentistry 7 (2003): 51–52; Payne, Geoff. “Cohort Study.” In The SAGE Dictionary of Social Research Methods . Victor Jupp, editor. (Thousand Oaks, CA: Sage, 2006), pp. 31-33; Study Design 101 . Himmelfarb Health Sciences Library. George Washington University, November 2011; Cohort Study . Wikipedia.

Cross-Sectional Design

Cross-sectional research designs have three distinctive features: no time dimension; a reliance on existing differences rather than change following intervention; and, groups are selected based on existing differences rather than random allocation. The cross-sectional design can only measure differences between or from among a variety of people, subjects, or phenomena rather than a process of change. As such, researchers using this design can only employ a relatively passive approach to making causal inferences based on findings.

  • Cross-sectional studies provide a clear 'snapshot' of the outcome and the characteristics associated with it, at a specific point in time.
  • Unlike an experimental design, where there is an active intervention by the researcher to produce and measure change or to create differences, cross-sectional designs focus on studying and drawing inferences from existing differences between people, subjects, or phenomena.
  • Entails collecting data at and concerning one point in time. While longitudinal studies involve taking multiple measures over an extended period of time, cross-sectional research is focused on finding relationships between variables at one moment in time.
  • Groups identified for study are purposely selected based upon existing differences in the sample rather than seeking random sampling.
  • Cross-sectional studies are capable of using data from a large number of subjects and, unlike observational studies, are not geographically bound.
  • Can estimate prevalence of an outcome of interest because the sample is usually taken from the whole population.
  • Because cross-sectional designs generally use survey techniques to gather data, they are relatively inexpensive and take up little time to conduct.
  • Finding people, subjects, or phenomena to study that are very similar except in one specific variable can be difficult.
  • Results are static and time bound and, therefore, give no indication of a sequence of events or reveal historical or temporal contexts.
  • Studies cannot be utilized to establish cause and effect relationships.
  • This design only provides a snapshot of analysis so there is always the possibility that a study could have differing results if another time-frame had been chosen.
  • There is no follow up to the findings.

Bethlehem, Jelke. "7: Cross-sectional Research." In Research Methodology in the Social, Behavioural and Life Sciences . Herman J Adèr and Gideon J Mellenbergh, editors. (London, England: Sage, 1999), pp. 110-43; Bourque, Linda B. “Cross-Sectional Design.” In  The SAGE Encyclopedia of Social Science Research Methods . Michael S. Lewis-Beck, Alan Bryman, and Tim Futing Liao. (Thousand Oaks, CA: 2004), pp. 230-231; Hall, John. “Cross-Sectional Survey Design.” In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 173-174; Helen Barratt, Maria Kirwan. Cross-Sectional Studies: Design, Application, Strengths and Weaknesses of Cross-Sectional Studies . Healthknowledge, 2009. Cross-Sectional Study . Wikipedia.

Descriptive Design

Descriptive research designs help provide answers to the questions of who, what, when, where, and how associated with a particular research problem; a descriptive study cannot conclusively ascertain answers to why. Descriptive research is used to obtain information concerning the current status of the phenomena and to describe "what exists" with respect to variables or conditions in a situation.

  • The subject is being observed in a completely natural and unchanged environment. True experiments, whilst giving analyzable data, often adversely influence the normal behavior of the subject [a.k.a., the Heisenberg effect, whereby measurements of certain systems cannot be made without affecting the systems].
  • Descriptive research is often used as a precursor to more quantitative research designs, with the general overview giving some valuable pointers as to what variables are worth testing quantitatively.
  • If the limitations are understood, they can be a useful tool in developing a more focused study.
  • Descriptive studies can yield rich data that lead to important recommendations in practice.
  • The approach collects a large amount of data for detailed analysis.
  • The results from descriptive research cannot be used to discover a definitive answer or to disprove a hypothesis.
  • Because descriptive designs often utilize observational methods [as opposed to quantitative methods], the results cannot be replicated.
  • The descriptive function of research is heavily dependent on instrumentation for measurement and observation.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 5, Flexible Methods: Descriptive Research. 2nd ed. New York: Columbia University Press, 1999; Given, Lisa M. "Descriptive Research." In Encyclopedia of Measurement and Statistics . Neil J. Salkind and Kristin Rasmussen, editors. (Thousand Oaks, CA: Sage, 2007), pp. 251-254; McNabb, Connie. Descriptive Research Methodologies . Powerpoint Presentation; Shuttleworth, Martyn. Descriptive Research Design , September 26, 2008. Explorable.com website.

Experimental Design

A blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur. Experimental research is often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great. The classic experimental design specifies an experimental group and a control group. The independent variable is administered to the experimental group and not to the control group, and both groups are measured on the same dependent variable. Subsequent experimental designs have used more groups and more measurements over longer periods. True experiments must have control, randomization, and manipulation.

  • Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer the question, “What causes something to occur?”
  • Permits the researcher to identify cause and effect relationships between variables and to distinguish placebo effects from treatment effects.
  • Experimental research designs support the ability to limit alternative explanations and to infer direct causal relationships in the study.
  • Approach provides the highest level of evidence for single studies.
  • The design is artificial, and results may not generalize well to the real world.
  • The artificial settings of experiments may alter the behaviors or responses of participants.
  • Experimental designs can be costly if special equipment or facilities are needed.
  • Some research problems cannot be studied using an experiment because of ethical or technical reasons.
  • Difficult to apply ethnographic and other qualitative methods to experimentally designed studies.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 7, Flexible Methods: Experimental Research. 2nd ed. New York: Columbia University Press, 1999; Chapter 2: Research Design, Experimental Designs . School of Psychology, University of New England, 2000; Chow, Siu L. "Experimental Design." In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 448-453; "Experimental Design." In Social Research Methods . Nicholas Walliman, editor. (London, England: Sage, 2006), pp. 101-110; Experimental Research . Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Kirk, Roger E. Experimental Design: Procedures for the Behavioral Sciences . 4th edition. Thousand Oaks, CA: Sage, 2013; Trochim, William M.K. Experimental Design . Research Methods Knowledge Base. 2006; Rasool, Shafqat. Experimental Research . Slideshare presentation.

Exploratory Design

An exploratory design is conducted about a research problem when there are few or no earlier studies to refer to or rely upon to predict an outcome. The focus is on gaining insights and familiarity for later investigation, or it is undertaken when research problems are in a preliminary stage of investigation. Exploratory designs are often used to establish an understanding of how best to proceed in studying an issue or what methodology would effectively apply to gathering information about the issue.

The goals of exploratory research are intended to produce the following possible insights:

  • Familiarity with basic details, settings, and concerns.
  • Well grounded picture of the situation being developed.
  • Generation of new ideas and assumptions.
  • Development of tentative theories or hypotheses.
  • Determination about whether a study is feasible in the future.
  • Issues get refined for more systematic investigation and formulation of new research questions.
  • Direction for future research and techniques get developed.
  • Design is a useful approach for gaining background information on a particular topic.
  • Exploratory research is flexible and can address research questions of all types (what, why, how).
  • Provides an opportunity to define new terms and clarify existing concepts.
  • Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
  • In the policy arena or applied to practice, exploratory studies help establish research priorities and where resources should be allocated.
  • Exploratory research generally utilizes small sample sizes and, thus, findings are typically not generalizable to the population at large.
  • The exploratory nature of the research inhibits the ability to make definitive conclusions about the findings; it provides insight rather than definitive answers.
  • The research process underpinning exploratory studies is flexible but often unstructured, leading to only tentative results that have limited value to decision-makers.
  • Design lacks rigorous standards applied to methods of data gathering and analysis because one of the areas for exploration could be to determine what method or methodologies could best fit the research problem.

Cuthill, Michael. “Exploratory Research: Citizen Participation, Local Government, and Sustainable Development in Australia.” Sustainable Development 10 (2002): 79-89; Streb, Christoph K. "Exploratory Case Study." In Encyclopedia of Case Study Research . Albert J. Mills, Gabrielle Durepos and Eiden Wiebe, editors. (Thousand Oaks, CA: Sage, 2010), pp. 372-374; Taylor, P. J., G. Catalano, and D.R.F. Walker. “Exploratory Analysis of the World City Network.” Urban Studies 39 (December 2002): 2377-2394; Exploratory Research . Wikipedia.

Historical Design

The purpose of a historical research design is to collect, verify, and synthesize evidence from the past to establish facts that defend or refute a hypothesis. It uses secondary sources and a variety of primary documentary evidence, such as diaries, official records, reports, archives, and non-textual information [maps, pictures, audio and visual recordings]. The limitation is that the sources must be both authentic and valid.

  • The historical research design is unobtrusive; the act of research does not affect the results of the study.
  • The historical approach is well suited for trend analysis.
  • Historical records can add important contextual background required to more fully understand and interpret a research problem.
  • There is often no possibility of researcher-subject interaction that could affect the findings.
  • Historical sources can be used over and over to study different research problems or to replicate a previous study.
  • The ability to fulfill the aims of your research is directly related to the amount and quality of documentation available to understand the research problem.
  • Since historical research relies on data from the past, there is no way to manipulate it to control for contemporary contexts.
  • Interpreting historical sources can be very time consuming.
  • The sources of historical materials must be archived consistently to ensure access. This may be especially challenging for digital or online-only sources.
  • Original authors bring their own perspectives and biases to the interpretation of past events and these biases are more difficult to ascertain in historical resources.
  • Due to the lack of control over external variables, historical research is very weak with regard to the demands of internal validity.
  • It is rare that the entirety of historical documentation needed to fully address a research problem is available for interpretation, therefore, gaps need to be acknowledged.

Howell, Martha C. and Walter Prevenier. From Reliable Sources: An Introduction to Historical Methods . Ithaca, NY: Cornell University Press, 2001; Lundy, Karen Saucier. "Historical Research." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 396-400; Marius, Richard. and Melvin E. Page. A Short Guide to Writing about History . 9th edition. Boston, MA: Pearson, 2015; Savitt, Ronald. “Historical Research in Marketing.” Journal of Marketing 44 (Autumn, 1980): 52-58;  Gall, Meredith. Educational Research: An Introduction . Chapter 16, Historical Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007.

Longitudinal Design

A longitudinal study follows the same sample over time and makes repeated observations. For example, with longitudinal surveys, the same group of people is interviewed at regular intervals, enabling researchers to track changes over time and to relate them to variables that might explain why the changes occur. Longitudinal research designs describe patterns of change and help establish the direction and magnitude of causal relationships. Measurements are taken on each variable over two or more distinct time periods. This allows the researcher to measure change in variables over time. It is a type of observational study sometimes referred to as a panel study.

  • Longitudinal data facilitate the analysis of the duration of a particular phenomenon.
  • Enables survey researchers to get close to the kinds of causal explanations usually attainable only with experiments.
  • The design permits the measurement of differences or change in a variable from one period to another [i.e., the description of patterns of change over time].
  • Longitudinal studies facilitate the prediction of future outcomes based upon earlier factors.
  • The data collection method may change over time.
  • Maintaining the integrity of the original sample can be difficult over an extended period of time.
  • It can be difficult to show more than one variable at a time.
  • This design often needs qualitative research data to explain fluctuations in the results.
  • A longitudinal research design assumes present trends will continue unchanged.
  • It can take a long period of time to gather results.
  • There is a need to have a large sample size and accurate sampling to reach representativeness.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 6, Flexible Methods: Relational and Longitudinal Research. 2nd ed. New York: Columbia University Press, 1999; Forgues, Bernard, and Isabelle Vandangeon-Derumez. "Longitudinal Analyses." In Doing Management Research . Raymond-Alain Thiétart and Samantha Wauchope, editors. (London, England: Sage, 2001), pp. 332-351; Kalaian, Sema A. and Rafa M. Kasim. "Longitudinal Studies." In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 440-441; Menard, Scott, editor. Longitudinal Research . Thousand Oaks, CA: Sage, 2002; Ployhart, Robert E. and Robert J. Vandenberg. "Longitudinal Research: The Theory, Design, and Analysis of Change.” Journal of Management 36 (January 2010): 94-120; Longitudinal Study . Wikipedia.

Mixed-Method Design

  • Narrative and non-textual information can add meaning to numeric data, while numeric data can add precision to narrative and non-textual information.
  • Can utilize existing data while at the same time generating and testing a grounded theory approach to describe and explain the phenomenon under study.
  • A broader, more complex research problem can be investigated because the researcher is not constrained by using only one method.
  • The strengths of one method can be used to overcome the inherent weaknesses of another method.
  • Can provide stronger, more robust evidence to support a conclusion or set of recommendations.
  • May generate new knowledge or uncover hidden insights, patterns, or relationships that a single methodological approach might not reveal.
  • Produces more complete knowledge and understanding of the research problem that can be used to increase the generalizability of findings applied to theory or practice.
  • A researcher must be proficient in understanding how to apply multiple methods to investigating a research problem as well as be proficient in optimizing how to design a study that coherently melds them together.
  • Can increase the likelihood of conflicting results or ambiguous findings that inhibit drawing a valid conclusion or setting forth a recommended course of action [e.g., sample interview responses do not support existing statistical data].
  • Because the research design can be very complex, reporting the findings requires a well-organized narrative, clear writing style, and precise word choice.
  • Design invites collaboration among experts. However, merging different investigative approaches and writing styles requires more attention to the overall research process than studies conducted using only one methodological paradigm.
  • Concurrent merging of quantitative and qualitative research requires greater attention to having adequate sample sizes, using comparable samples, and applying a consistent unit of analysis. For sequential designs where one phase of qualitative research builds on the quantitative phase or vice versa, decisions about what results from the first phase to use in the next phase, the choice of samples and estimating reasonable sample sizes for both phases, and the interpretation of results from both phases can be difficult.
  • Due to multiple forms of data being collected and analyzed, this design requires extensive time and resources to carry out the multiple steps involved in data gathering and interpretation.

Burch, Patricia and Carolyn J. Heinrich. Mixed Methods for Policy Research and Program Evaluation . Thousand Oaks, CA: Sage, 2016; Creswell, John W. et al. Best Practices for Mixed Methods Research in the Health Sciences . Bethesda, MD: Office of Behavioral and Social Sciences Research, National Institutes of Health, 2010; Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 4th edition. Thousand Oaks, CA: Sage Publications, 2014; Domínguez, Silvia, editor. Mixed Methods Social Networks Research . Cambridge, UK: Cambridge University Press, 2014; Hesse-Biber, Sharlene Nagy. Mixed Methods Research: Merging Theory with Practice . New York: Guilford Press, 2010; Niglas, Katrin. “How the Novice Researcher Can Make Sense of Mixed Methods Designs.” International Journal of Multiple Research Approaches 3 (2009): 34-46; Onwuegbuzie, Anthony J. and Nancy L. Leech. “Linking Research Questions to Mixed Methods Data Analysis Procedures.” The Qualitative Report 11 (September 2006): 474-498; Tashakorri, Abbas and John W. Creswell. “The New Era of Mixed Methods.” Journal of Mixed Methods Research 1 (January 2007): 3-7; Zhanga, Wanqing. “Mixed Methods Application in Health Intervention Research: A Multiple Case Study.” International Journal of Multiple Research Approaches 8 (2014): 24-35.

Observational Design

This type of research design draws a conclusion by comparing subjects against a control group, in cases where the researcher has no control over the experiment. There are two general types of observational designs. In direct observations, people know that you are watching them. Unobtrusive measures involve any method for studying behavior where individuals do not know they are being observed. An observational study allows a useful insight into a phenomenon and avoids the ethical and practical difficulties of setting up a large and cumbersome research project.

  • Observational studies are usually flexible and do not necessarily need to be structured around a hypothesis about what you expect to observe [data is emergent rather than pre-existing].
  • The researcher is able to collect in-depth information about a particular behavior.
  • Can reveal interrelationships among multifaceted dimensions of group interactions.
  • You can generalize your results to real life situations.
  • Observational research is useful for discovering what variables may be important before applying other methods like experiments.
  • Observation research designs account for the complexity of group behaviors.
  • Reliability of data is low because observing behaviors over and over again is time consuming and difficult to replicate.
  • In observational research, findings may only reflect a unique sample population and, thus, cannot be generalized to other groups.
  • There can be problems with bias as the researcher may only "see what they want to see."
  • There is no possibility to determine "cause and effect" relationships since nothing is manipulated.
  • Sources or subjects may not all be equally credible.
  • Any group that is knowingly studied is altered to some degree by the presence of the researcher, therefore, potentially skewing any data collected.

Atkinson, Paul and Martyn Hammersley. “Ethnography and Participant Observation.” In Handbook of Qualitative Research . Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 248-261; Observational Research . Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Patton, Michael Quinn. Qualitative Research and Evaluation Methods . Chapter 6, Fieldwork Strategies and Observational Methods. 3rd ed. Thousand Oaks, CA: Sage, 2002; Payne, Geoff and Judy Payne. "Observation." In Key Concepts in Social Research . The SAGE Key Concepts series. (London, England: Sage, 2004), pp. 158-162; Rosenbaum, Paul R. Design of Observational Studies . New York: Springer, 2010; Williams, J. Patrick. "Nonparticipant Observation." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 562-563.


Research Design 101

Everything You Need To Get Started (With Examples)

By: Derek Jansen (MBA) | Reviewers: Eunice Rautenbach (DTech) & Kerryn Warren (PhD) | April 2023


Navigating the world of research can be daunting, especially if you’re a first-time researcher. One concept you’re bound to run into fairly early in your research journey is that of “research design”. Here, we’ll guide you through the basics using practical examples, so that you can approach your research with confidence.

Overview: Research Design 101

  • What is research design?

  • Research design types for quantitative studies
  • Video explainer: quantitative research design
  • Research design types for qualitative studies
  • Video explainer: qualitative research design
  • How to choose a research design
  • Key takeaways

Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final data analysis. A good research design serves as the blueprint for how you, as the researcher, will collect and analyse data while ensuring consistency, reliability and validity throughout your study.

Understanding different types of research designs is essential as it helps ensure that your approach is suitable given your research aims, objectives and questions, as well as the resources you have available to you. Without a clear big-picture view of how you’ll design your research, you run the risk of making misaligned choices in terms of your methodology – especially your sampling, data collection and data analysis decisions.

The problem with defining research design…

One of the reasons students struggle with a clear definition of research design is that the term is used very loosely across the internet, and even within academia.

Some sources claim that the three research design types are qualitative, quantitative and mixed methods, which isn’t quite accurate (these just refer to the type of data that you’ll collect and analyse). Other sources state that research design refers to the sum of all your design choices, suggesting it’s more like a research methodology. Others run off on other less common tangents. No wonder there’s confusion!

In this article, we’ll clear up the confusion. We’ll explain the most common research design types for both qualitative and quantitative research projects, whether that is for a full dissertation or thesis, or a smaller research paper or article.


Research Design: Quantitative Studies

Quantitative research involves collecting and analysing data in a numerical form. Broadly speaking, there are four types of quantitative research designs: descriptive, correlational, experimental, and quasi-experimental.

Descriptive Research Design

As the name suggests, descriptive research design focuses on describing existing conditions, behaviours, or characteristics by systematically gathering information without manipulating any variables. In other words, there is no intervention on the researcher’s part – only data collection.

For example, if you’re studying smartphone addiction among adolescents in your community, you could deploy a survey to a sample of teens asking them to rate their agreement with certain statements that relate to smartphone addiction. The collected data would then provide insight regarding how widespread the issue may be – in other words, it would describe the situation.

The key defining attribute of this type of research design is that it purely describes the situation. In other words, descriptive research design does not explore potential relationships between different variables or the causes that may underlie those relationships. Therefore, descriptive research is useful for generating insight into a research problem by describing its characteristics. By doing so, it can provide valuable insights and is often used as a precursor to other research design types.

Correlational Research Design

Correlational design is a popular choice for researchers aiming to identify and measure the relationship between two or more variables without manipulating them. In other words, this type of research design is useful when you want to know whether a change in one thing tends to be accompanied by a change in another thing.

For example, if you wanted to explore the relationship between exercise frequency and overall health, you could use a correlational design to help you achieve this. In this case, you might gather data on participants’ exercise habits, as well as records of their health indicators like blood pressure, heart rate, or body mass index. Thereafter, you’d use a statistical test to assess whether there’s a relationship between the two variables (exercise frequency and health).

As you can see, correlational research design is useful when you want to explore potential relationships between variables that cannot be manipulated or controlled for ethical, practical, or logistical reasons. It is particularly helpful in terms of developing predictions, and given that it doesn’t involve the manipulation of variables, it can be implemented at a large scale more easily than experimental designs (which we’ll look at next).

That said, it’s important to keep in mind that correlational research design has limitations – most notably that it cannot be used to establish causality. In other words, correlation does not equal causation. To establish causality, you’ll need to move into the realm of experimental design, coming up next…

Experimental Research Design

Experimental research design is used to determine if there is a causal relationship between two or more variables. With this type of research design, you, as the researcher, manipulate one or more variables (the independent variables) while holding other factors constant, and then measure the effect on an outcome (the dependent variable). Doing so allows you to observe the effect of the former on the latter and draw conclusions about potential causality.

For example, if you wanted to measure if/how different types of fertiliser affect plant growth, you could set up several groups of plants, with each group receiving a different type of fertiliser, as well as one with no fertiliser at all. You could then measure how much each plant group grew (on average) over time and compare the results from the different groups to see which fertiliser was most effective.
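If the growth measurements were in hand, a one-way ANOVA is one common way to compare the group means. The sketch below uses invented growth figures purely to show the shape of the analysis; it is not a complete experimental workflow.

```python
from scipy import stats

# Hypothetical growth (cm) after eight weeks for each group of plants
fertiliser_a = [12.1, 13.4, 11.8, 12.9, 13.0]
fertiliser_b = [14.2, 15.1, 13.8, 14.7, 15.0]
no_fertiliser = [9.8, 10.4, 9.5, 10.1, 10.0]

f_stat, p_value = stats.f_oneway(fertiliser_a, fertiliser_b, no_fertiliser)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests at least one group's mean growth differs; post-hoc
# comparisons would then identify which fertiliser drove the difference.
```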

Overall, experimental research design provides researchers with a powerful way to identify and measure causal relationships (and the direction of causality) between variables. However, developing a rigorous experimental design can be challenging as it’s not always easy to control all the variables in a study. This often results in smaller sample sizes , which can reduce the statistical power and generalisability of the results.

Moreover, experimental research design requires random assignment . This means that the researcher needs to assign participants to different groups or conditions in a way that each participant has an equal chance of being assigned to any group (note that this is not the same as random sampling ). Doing so helps reduce the potential for bias and confounding variables . This need for random assignment can lead to ethics-related issues . For example, withholding a potentially beneficial medical treatment from a control group may be considered unethical in certain situations.
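Random assignment itself is straightforward to implement. The snippet below shuffles a made-up list of participant IDs and splits them into treatment and control groups, so that every participant has an equal chance of landing in either group.

```python
import random

# Twenty hypothetical participant IDs
participants = [f"P{i:02d}" for i in range(1, 21)]

random.shuffle(participants)         # each participant has an equal chance of any slot
treatment_group = participants[:10]  # first half -> treatment
control_group = participants[10:]    # second half -> control

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```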

Quasi-Experimental Research Design

Quasi-experimental research design is used when the research aims involve identifying causal relations , but one cannot (or doesn’t want to) randomly assign participants to different groups (for practical or ethical reasons). Instead, with a quasi-experimental research design, the researcher relies on existing groups or pre-existing conditions to form groups for comparison.

For example, if you were studying the effects of a new teaching method on student achievement in a particular school district, you may be unable to randomly assign students to either group and instead have to choose classes or schools that already use different teaching methods. This way, you still achieve separate groups, without having to assign participants to specific groups yourself.
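Analytically, the comparison often looks much like an experiment. Below is a sketch with invented end-of-term scores for two pre-existing classes; the crucial difference lies in how the groups came about, not in the test itself.

```python
from scipy import stats

# Hypothetical end-of-term scores from two pre-existing classes
new_method_class = [72, 68, 75, 80, 71, 77, 74, 69]
traditional_class = [65, 70, 62, 68, 66, 71, 63, 67]

t_stat, p_value = stats.ttest_ind(new_method_class, traditional_class)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Because students were not randomly assigned, any difference may reflect
# pre-existing differences between the classes rather than the teaching method.
```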

Naturally, quasi-experimental research designs have limitations when compared to experimental designs. Given that participant assignment is not random, it’s more difficult to confidently establish causality between variables, and, as a researcher, you have less control over other variables that may impact findings.

All that said, quasi-experimental designs can still be valuable in research contexts where random assignment is not possible and can often be undertaken on a much larger scale than experimental research, thus increasing the statistical power of the results. What’s important is that you, as the researcher, understand the limitations of the design and conduct your quasi-experiment as rigorously as possible, paying careful attention to any potential confounding variables .

The four most common quantitative research design types are descriptive, correlational, experimental and quasi-experimental.

Research Design: Qualitative Studies

There are many different research design types when it comes to qualitative studies, but here we’ll narrow our focus to explore the “Big 4”. Specifically, we’ll look at phenomenological design, grounded theory design, ethnographic design, and case study design.

Phenomenological Research Design

Phenomenological design involves exploring the meaning of lived experiences and how they are perceived by individuals. This type of research design seeks to understand people’s perspectives , emotions, and behaviours in specific situations. Here, the aim for researchers is to uncover the essence of human experience without making any assumptions or imposing preconceived ideas on their subjects.

For example, you could adopt a phenomenological design to study why cancer survivors have such varied perceptions of their lives after overcoming their disease. This could be achieved by interviewing survivors and then analysing the data using a qualitative analysis method such as thematic analysis to identify commonalities and differences.

Phenomenological research design typically involves in-depth interviews or open-ended questionnaires to collect rich, detailed data about participants’ subjective experiences. This richness is one of the key strengths of phenomenological research design but, naturally, it also has limitations. These include potential biases in data collection and interpretation and the lack of generalisability of findings to broader populations.

Grounded Theory Research Design

Grounded theory (also referred to as “GT”) aims to develop theories by continuously and iteratively analysing and comparing data collected from a relatively large number of participants in a study. It takes an inductive (bottom-up) approach, with a focus on letting the data “speak for itself”, without being influenced by preexisting theories or the researcher’s preconceptions.

As an example, let’s assume your research aims involved understanding how people cope with chronic pain from a specific medical condition, with a view to developing a theory around this. In this case, grounded theory design would allow you to explore this concept thoroughly without preconceptions about what coping mechanisms might exist. You may find that some patients prefer cognitive-behavioural therapy (CBT) while others prefer to rely on herbal remedies. Based on multiple, iterative rounds of analysis, you could then develop a theory in this regard, derived directly from the data (as opposed to other preexisting theories and models).

Grounded theory typically involves collecting data through interviews or observations and then analysing it to identify patterns and themes that emerge from the data. These emerging ideas are then validated by collecting more data until a saturation point is reached (i.e., no new information can be squeezed from the data). From that base, a theory can then be developed .
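Purely as a toy illustration of the saturation idea (not a real grounded theory workflow), imagine tracking the codes that emerge from each successive interview and stopping once an interview adds nothing new. The codes below are invented.

```python
# Hypothetical codes assigned to each successive interview transcript
coded_interviews = [
    {"CBT", "family support"},
    {"herbal remedies", "CBT"},
    {"pacing activities", "family support"},
    {"CBT", "herbal remedies"},
    {"family support"},
]

themes = set()
for i, codes in enumerate(coded_interviews, start=1):
    new_codes = codes - themes
    themes |= codes
    if not new_codes:  # this interview added no new information
        print(f"Saturation reached after interview {i}")
        break

print("Themes identified:", sorted(themes))
```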

As you can see, grounded theory is ideally suited to studies where the research aims involve theory generation , especially in under-researched areas. Keep in mind though that this type of research design can be quite time-intensive , given the need for multiple rounds of data collection and analysis.

Ethnographic Research Design

Ethnographic design involves observing and studying a culture-sharing group of people in their natural setting to gain insight into their behaviours, beliefs, and values. The focus here is on observing participants in their natural environment (as opposed to a controlled environment). This typically involves the researcher spending an extended period of time with the participants in their environment, carefully observing and taking field notes .

All of this is not to say that ethnographic research design relies purely on observation. On the contrary, this design typically also involves in-depth interviews to explore participants’ views, beliefs, etc. However, unobtrusive observation is a core component of the ethnographic approach.

As an example, an ethnographer may study how different communities celebrate traditional festivals or how individuals from different generations interact with technology differently. This may involve a lengthy period of observation, combined with in-depth interviews to further explore specific areas of interest that emerge as a result of the observations that the researcher has made.

As you can probably imagine, ethnographic research design has the ability to provide rich, contextually embedded insights into the socio-cultural dynamics of human behaviour within a natural, uncontrived setting. Naturally, however, it does come with its own set of challenges, including researcher bias (since the researcher can become quite immersed in the group), participant confidentiality and, predictably, ethical complexities . All of these need to be carefully managed if you choose to adopt this type of research design.

Case Study Design

With case study research design, you, as the researcher, investigate a single individual (or a single group of individuals) to gain an in-depth understanding of their experiences, behaviours or outcomes. Unlike other research designs that are aimed at larger sample sizes, case studies offer a deep dive into the specific circumstances surrounding a person, group of people, event or phenomenon, generally within a bounded setting or context .

As an example, a case study design could be used to explore the factors influencing the success of a specific small business. This would involve diving deeply into the organisation to explore and understand what makes it tick – from marketing to HR to finance. In terms of data collection, this could include interviews with staff and management, review of policy documents and financial statements, surveying customers, etc.

While the above example is focused squarely on one organisation, it’s worth noting that case study research designs can take different variations, including single-case, multiple-case and longitudinal designs. As you can see in the example, a single-case design involves intensely examining a single entity to understand its unique characteristics and complexities. Conversely, in a multiple-case design, multiple cases are compared and contrasted to identify patterns and commonalities. Lastly, in a longitudinal case design, a single case or multiple cases are studied over an extended period of time to understand how factors develop over time.

As you can see, a case study research design is particularly useful where a deep and contextualised understanding of a specific phenomenon or issue is desired. However, this strength is also its weakness. In other words, you can’t generalise the findings from a case study to the broader population. So, keep this in mind if you’re considering going the case study route.

Case study design often involves investigating an individual to gain an in-depth understanding of their experiences, behaviours or outcomes.

How To Choose A Research Design

Having worked through all of these potential research designs, you’d be forgiven for feeling a little overwhelmed and wondering, “ But how do I decide which research design to use? ”. While we could write an entire post covering that alone, here are a few factors to consider that will help you choose a suitable research design for your study.

Data type: The first determining factor is naturally the type of data you plan to be collecting – i.e., qualitative or quantitative. This may sound obvious, but we have to be clear about this – don’t try to use a quantitative research design on qualitative data (or vice versa)!

Research aim(s) and question(s): As with all methodological decisions, your research aim and research questions will heavily influence your research design. For example, if your research aims involve developing a theory from qualitative data, grounded theory would be a strong option. Similarly, if your research aims involve identifying and measuring relationships between variables, a correlational design would likely be a good fit, while testing causal relationships would point you towards an experimental (or quasi-experimental) design.

Time: It’s essential that you consider any time constraints you have, as this will impact the type of research design you can choose. For example, if you’ve only got a month to complete your project, a lengthy design such as ethnography wouldn’t be a good fit.

Resources: Take into account the resources realistically available to you, as these need to factor into your research design choice. For example, if you require highly specialised lab equipment to execute an experimental design, you need to be sure that you’ll have access to that before you make a decision.

Keep in mind that when it comes to research, it’s important to manage your risks and play as conservatively as possible. If your entire project relies on you achieving a huge sample, having access to niche equipment or holding interviews with very difficult-to-reach participants, you’re creating risks that could kill your project. So, be sure to think through your choices carefully and make sure that you have backup plans for any existential risks. Remember that a relatively simple methodology executed well will typically earn better marks than a highly complex methodology executed poorly.

Recap: Key Takeaways

We’ve covered a lot of ground here. Let’s recap by looking at the key takeaways:

  • Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data.
  • Research designs for quantitative studies include descriptive , correlational , experimental and quasi-experimental designs.
  • Research designs for qualitative studies include phenomenological , grounded theory , ethnographic and case study designs.
  • When choosing a research design, you need to consider a variety of factors, including the type of data you’ll be working with, your research aims and questions, your time and the resources available to you.


Types of Research Designs Compared | Guide & Examples

Published on June 20, 2019 by Shona McCombes . Revised on June 22, 2023.

When you start planning a research project, developing research questions and creating a  research design , you will have to make various decisions about the type of research you want to do.

There are many ways to categorize different types of research. The words you use to describe your research depend on your discipline and field. In general, though, the form your research design takes will be shaped by:

  • The type of knowledge you aim to produce
  • The type of data you will collect and analyze
  • The sampling methods , timescale and location of the research

This article takes a look at some common distinctions made between different types of research and outlines the key differences between them.

Table of contents

  • Types of research aims
  • Types of research data
  • Types of sampling, timescale, and location

The first thing to consider is what kind of knowledge your research aims to contribute.

  • Basic vs. applied: Basic research aims to expand scientific knowledge and understanding, while applied research aims to solve a practical problem. What to consider: do you want to expand scientific understanding or solve a practical problem?
  • Exploratory vs. explanatory: Exploratory research aims to explore a problem that is not yet well understood, while explanatory research aims to explain the causes and effects of a well-defined problem. What to consider: how much is already known about your research problem? Are you conducting initial research on a newly-identified issue, or seeking precise conclusions about an established issue?
  • Inductive vs. deductive: Inductive research aims to develop new theories from the data you collect, while deductive research aims to test existing theories. What to consider: is there already some theory on your research problem that you can use to develop hypotheses, or do you want to propose new theories based on your findings?

The next thing to consider is what type of data you will collect. Each kind of data is associated with a range of specific research methods and procedures.

  • Primary research vs. secondary research: Primary data is collected first-hand by the researcher (e.g., through surveys or experiments), while secondary data has already been collected by someone else (e.g., in government sources or scientific publications). What to consider: how much data is already available on your topic? Do you want to collect original data or analyze existing data (e.g., through a systematic review)?
  • Qualitative vs. quantitative: Qualitative data is expressed in words, while quantitative data is expressed in numbers. What to consider: is your research more concerned with measuring something or interpreting something? You can also create a research design that has elements of both.
  • Descriptive vs. experimental: Descriptive research gathers data without intervening or controlling variables, while experimental research manipulates variables under controlled conditions. What to consider: do you want to identify characteristics, patterns and correlations, or test causal relationships between variables?

Finally, you have to consider three closely related questions: how will you select the subjects or participants of the research? When and how often will you collect data from your subjects? And where will the research take place?

Keep in mind that the methods that you choose bring with them different risk factors and types of research bias . Biases aren’t completely avoidable, but can heavily impact the validity and reliability of your findings if left unchecked.

  • Probability sampling vs. non-probability sampling: Probability sampling allows you to generalize your findings to a broader population, while non-probability sampling allows you to draw conclusions only about the specific cases you study. What to consider: do you want to produce knowledge that applies to many contexts or detailed knowledge about a specific context (e.g. in a case study)?
  • Cross-sectional vs. longitudinal: Cross-sectional studies collect data at a single point in time, while longitudinal studies collect data from the same subjects repeatedly over an extended period. What to consider: is your research question focused on understanding the current situation or tracking changes over time?
  • Field research vs. laboratory research: Field research takes place in real-world settings, while laboratory research takes place in controlled environments. What to consider: do you want to find out how something occurs in the real world or draw firm conclusions about cause and effect? Laboratory experiments have higher internal validity but lower external validity.
  • Fixed design vs. flexible design: In a fixed research design the subjects, timescale and location are set before data collection begins, while in a flexible design these aspects may evolve as the research progresses. What to consider: do you want to test hypotheses and establish generalizable facts, or explore concepts and develop understanding? For measuring, testing and making generalizations, a fixed research design has higher validity.

Choosing between all these different research types is part of the process of creating your research design , which determines exactly how your research will be conducted. But the type of research is only the first step: next, you have to make more concrete decisions about your research methods and the details of the study.


Organizing Your Social Sciences Research Assignments


Definition and Introduction

Journal article analysis assignments require you to summarize and critically assess the quality of an empirical research study published in a scholarly [a.k.a., academic, peer-reviewed] journal. The article may be assigned by the professor, chosen from course readings listed in the syllabus, or you must locate an article on your own, usually with the requirement that you search using a reputable library database, such as, JSTOR or ProQuest . The article chosen is expected to relate to the overall discipline of the course, specific course content, or key concepts discussed in class. In some cases, the purpose of the assignment is to analyze an article that is part of the literature review for a future research project.

Analysis of an article can be assigned to students individually or as part of a small group project. The final product is usually in the form of a short paper [typically 1-6 double-spaced pages] that addresses key questions the professor uses to guide your analysis or that assesses specific parts of a scholarly research study [e.g., the research problem, methodology, discussion, conclusions or findings]. The analysis paper may be shared on a digital course management platform and/or presented to the class for the purpose of promoting a wider discussion about the topic of the study. Although this assignment may appear at any level of undergraduate and graduate coursework in the social and behavioral sciences, professors frequently include it in upper division courses to help students learn how to effectively identify, read, and analyze empirical research within their major.

Franco, Josue. “Introducing the Analysis of Journal Articles.” Prepared for presentation at the American Political Science Association’s 2020 Teaching and Learning Conference, February 7-9, 2020, Albuquerque, New Mexico; Sego, Sandra A. and Anne E. Stuart. "Learning to Read Empirical Articles in General Psychology." Teaching of Psychology 43 (2016): 38-42; Kershaw, Trina C., Jordan P. Lippman, and Jennifer Fugate. "Practice Makes Proficient: Teaching Undergraduate Students to Understand Published Research." Instructional Science 46 (2018): 921-946; Woodward-Kron, Robyn. "Critical Analysis and the Journal Article Review Assignment." Prospect 18 (August 2003): 20-36; MacMillan, Margy and Allison MacKenzie. "Strategies for Integrating Information Literacy and Academic Literacy: Helping Undergraduate Students make the most of Scholarly Articles." Library Management 33 (2012): 525-535.

Benefits of Journal Article Analysis Assignments

Analyzing and synthesizing a scholarly journal article is intended to help students obtain the reading and critical thinking skills needed to develop and write their own research papers. This assignment also supports workplace skills where you could be asked to summarize a report or other type of document and report it, for example, during a staff meeting or for a presentation.

There are two broadly defined ways that analyzing a scholarly journal article supports student learning:

Improve Reading Skills

Conducting research requires an ability to review, evaluate, and synthesize prior research studies. Reading prior research requires an understanding of the academic writing style , the type of epistemological beliefs or practices underpinning the research design, and the specific vocabulary and technical terminology [i.e., jargon] used within a discipline. Reading scholarly articles is important because academic writing is unfamiliar to most students: they have had limited exposure to peer-reviewed journal articles before entering college, or they have yet to encounter the specific academic writing style of their disciplinary major. Learning how to read scholarly articles also requires careful and deliberate concentration on how authors use specific language and phrasing to convey their research, the problem it addresses, its relationship to prior research, its significance, its limitations, and how authors connect methods of data gathering to the results so as to develop recommended solutions derived from the overall research process.

Improve Comprehension Skills

In addition to knowing how to read scholarly journal articles, students must learn how to effectively interpret what the scholar(s) are trying to convey. Academic writing can be dense, multi-layered, and non-linear in how information is presented. In addition, scholarly articles contain footnotes or endnotes, references to sources, multiple appendices, and, in some cases, non-textual elements [e.g., graphs, charts] that can break up the reader’s experience with the narrative flow of the study. Analyzing articles helps students practice comprehending these elements of writing, critiquing the arguments being made, reflecting upon the significance of the research, and how it relates to building new knowledge and understanding or applying new approaches to practice. Comprehending scholarly writing also involves thinking critically about where you fit within the overall dialogue among scholars concerning the research problem, finding possible gaps in the research that require further analysis, or identifying where the author(s) has failed to examine fully any specific elements of the study.

In addition, journal article analysis assignments are used by professors to strengthen discipline-specific information literacy skills, either alone or in relation to other tasks, such as, giving a class presentation or participating in a group project. These benefits can include the ability to:

  • Effectively paraphrase text, which leads to a more thorough understanding of the overall study;
  • Identify and describe strengths and weaknesses of the study and their implications;
  • Relate the article to other course readings and in relation to particular research concepts or ideas discussed during class;
  • Think critically about the research and summarize complex ideas contained within;
  • Plan, organize, and write an effective inquiry-based paper that investigates a research study, evaluates evidence, expounds on the author’s main ideas, and presents an argument concerning the significance and impact of the research in a clear and concise manner;
  • Model the type of source summary and critique you should do for any college-level research paper; and,
  • Increase interest and engagement with the research problem of the study as well as with the discipline.

Kershaw, Trina C., Jennifer Fugate, and Aminda J. O'Hare. "Teaching Undergraduates to Understand Published Research through Structured Practice in Identifying Key Research Concepts." Scholarship of Teaching and Learning in Psychology . Advance online publication, 2020; Franco, Josue. “Introducing the Analysis of Journal Articles.” Prepared for presentation at the American Political Science Association’s 2020 Teaching and Learning Conference, February 7-9, 2020, Albuquerque, New Mexico; Sego, Sandra A. and Anne E. Stuart. "Learning to Read Empirical Articles in General Psychology." Teaching of Psychology 43 (2016): 38-42; Woodward-Kron, Robyn. "Critical Analysis and the Journal Article Review Assignment." Prospect 18 (August 2003): 20-36; MacMillan, Margy and Allison MacKenzie. "Strategies for Integrating Information Literacy and Academic Literacy: Helping Undergraduate Students make the most of Scholarly Articles." Library Management 33 (2012): 525-535; Kershaw, Trina C., Jordan P. Lippman, and Jennifer Fugate. "Practice Makes Proficient: Teaching Undergraduate Students to Understand Published Research." Instructional Science 46 (2018): 921-946.

Structure and Organization

A journal article analysis paper should be written in paragraph format and include an introduction to the study, your analysis of the research, and a conclusion that provides an overall assessment of the author's work, along with an explanation of what you believe is the study's overall impact and significance. Unless the purpose of the assignment is to examine foundational studies published many years ago, you should select articles that have been published relatively recently [e.g., within the past few years].

Since the research has been completed, reference to the study in your paper should be written in the past tense, with your analysis stated in the present tense [e.g., “The author portrayed access to health care services in rural areas as primarily a problem of having reliable transportation. However, I believe the author is overgeneralizing this issue because...”].

Introduction Section

The first section of a journal analysis paper should describe the topic of the article and highlight the author’s main points. This includes describing the research problem and theoretical framework, the rationale for the research, the methods of data gathering and analysis, the key findings, and the author’s final conclusions and recommendations. The narrative should focus on the act of describing rather than analyzing. Think of the introduction as a more comprehensive and detailed descriptive abstract of the study.

Possible questions to help guide your writing of the introduction section may include:

  • Who are the authors, and what credentials do they hold that contribute to the validity of the study?
  • What was the research problem being investigated?
  • What type of research design was used to investigate the research problem?
  • What theoretical idea(s) and/or research questions were used to address the problem?
  • What was the source of the data or information used as evidence for analysis?
  • What methods were applied to investigate this evidence?
  • What were the author's overall conclusions and key findings?

Critical Analysis Section

The second section of a journal analysis paper should describe the strengths and weaknesses of the study and analyze its significance and impact. This section is where you shift the narrative from describing to analyzing. Think critically about the research in relation to other course readings, what has been discussed in class, or based on your own life experiences. If you are struggling to identify any weaknesses, explain why you believe this to be true. However, no study is perfect, regardless of how laudable its design may be. Given this, think about the repercussions of the choices made by the author(s) and how you might have conducted the study differently. Examples can include contemplating the choice of what sources were included or excluded in support of examining the research problem, the choice of the method used to analyze the data, or the choice to highlight specific recommended courses of action and/or implications for practice over others. Another strategy is to place yourself within the research study itself by thinking reflectively about what may be missing if you had been a participant in the study or if the recommended courses of action specifically targeted you or your community.

Possible questions to help guide your writing of the analysis section may include:

Introduction

  • Did the author clearly state the problem being investigated?
  • What was your reaction to and perspective on the research problem?
  • Was the study’s objective clearly stated? Did the author clearly explain why the study was necessary?
  • How well did the introduction frame the scope of the study?
  • Did the introduction conclude with a clear purpose statement?

Literature Review

  • Did the literature review lay a foundation for understanding the significance of the research problem?
  • Did the literature review provide enough background information to understand the problem in relation to relevant contexts [e.g., historical, economic, social, cultural]?
  • Did the literature review effectively place the study within the domain of prior research? Is anything missing?
  • Was the literature review organized by conceptual categories, or did the author simply list and describe sources?

Methods

  • Did the author accurately explain how the data or information were collected?
  • Was the data used sufficient in supporting the study of the research problem?
  • Was there another methodological approach that could have been more illuminating?
  • What is your overall evaluation of the methods used in this article? How much trust would you place in the findings they generated?

Results and Discussion

  • Were the results clearly presented?
  • Did you feel that the results support the theoretical and interpretive claims of the author? Why?
  • What did the author(s) do especially well in describing or analyzing their results?
  • Was the author's evaluation of the findings clearly stated?
  • How well did the discussion of the results relate to what is already known about the research problem?
  • Was the discussion of the results free of repetition and redundancies?
  • What interpretations did the authors make that you think are incomplete, unwarranted, or overstated?

Conclusion

  • Did the conclusion effectively capture the main points of the study?
  • Did the conclusion address the research questions posed? Do the conclusions seem reasonable?
  • Were the author’s conclusions consistent with the evidence and arguments presented?
  • Has the author explained how the research added new knowledge or understanding?

Overall Writing Style

  • If the article included tables, figures, or other non-textual elements, did they contribute to understanding the study?
  • Were ideas developed and related in a logical sequence?
  • Were transitions between sections of the article smooth and easy to follow?

Overall Evaluation Section

The final section of a journal analysis paper should bring your thoughts together into a coherent assessment of the value of the research study . This section is where the narrative flow transitions from analyzing specific elements of the article to critically evaluating the overall study. Explain what you view as the significance of the research in relation to the overall course content and any relevant discussions that occurred during class. Think about how the article contributes to understanding the overall research problem, how it fits within existing literature on the topic, how it relates to the course, and what it means to you as a student researcher. In some cases, your professor will also ask you to describe your experiences writing the journal article analysis paper as part of a reflective learning exercise.

Possible questions to help guide your writing of the conclusion and evaluation section may include:

  • Was the structure of the article clear and well organized?
  • Was the topic of current or enduring interest to you?
  • What were the main weaknesses of the article? [this does not refer to limitations stated by the author, but what you believe are potential flaws]
  • Was any of the information in the article unclear or ambiguous?
  • What did you learn from the research? If nothing stood out to you, explain why.
  • Assess the originality of the research. Did you believe it contributed new understanding of the research problem?
  • Were you persuaded by the author’s arguments?
  • If the author made any final recommendations, will they be impactful if applied to practice?
  • In what ways could future research build off of this study?
  • What implications does the study have for daily life?
  • Was the use of non-textual elements, footnotes or endnotes, and/or appendices helpful in understanding the research?
  • What lingering questions do you have after analyzing the article?

NOTE: Avoid using quotes. One of the main purposes of writing an article analysis paper is to learn how to effectively paraphrase and use your own words to summarize a scholarly research study and to explain what the research means to you. Using and citing a direct quote from the article should only be done to help emphasize a key point or to underscore an important concept or idea.

Business: The Article Analysis . Fred Meijer Center for Writing, Grand Valley State University; Bachiochi, Peter et al. "Using Empirical Article Analysis to Assess Research Methods Courses." Teaching of Psychology 38 (2011): 5-9; Brosowsky, Nicholaus P. et al. “Teaching Undergraduate Students to Read Empirical Articles: An Evaluation and Revision of the QALMRI Method.” PsyArXi Preprints , 2020; Holster, Kristin. “Article Evaluation Assignment”. TRAILS: Teaching Resources and Innovations Library for Sociology . Washington DC: American Sociological Association, 2016; Kershaw, Trina C., Jennifer Fugate, and Aminda J. O'Hare. "Teaching Undergraduates to Understand Published Research through Structured Practice in Identifying Key Research Concepts." Scholarship of Teaching and Learning in Psychology . Advance online publication, 2020; Franco, Josue. “Introducing the Analysis of Journal Articles.” Prepared for presentation at the American Political Science Association’s 2020 Teaching and Learning Conference, February 7-9, 2020, Albuquerque, New Mexico; Reviewer's Guide . SAGE Reviewer Gateway, SAGE Journals; Sego, Sandra A. and Anne E. Stuart. "Learning to Read Empirical Articles in General Psychology." Teaching of Psychology 43 (2016): 38-42; Kershaw, Trina C., Jordan P. Lippman, and Jennifer Fugate. "Practice Makes Proficient: Teaching Undergraduate Students to Understand Published Research." Instructional Science 46 (2018): 921-946; Gyuris, Emma, and Laura Castell. "To Tell Them or Show Them? How to Improve Science Students’ Skills of Critical Reading." International Journal of Innovation in Science and Mathematics Education 21 (2013): 70-80; Woodward-Kron, Robyn. "Critical Analysis and the Journal Article Review Assignment." Prospect 18 (August 2003): 20-36; MacMillan, Margy and Allison MacKenzie. "Strategies for Integrating Information Literacy and Academic Literacy: Helping Undergraduate Students Make the Most of Scholarly Articles." Library Management 33 (2012): 525-535.

Writing Tip

Not All Scholarly Journal Articles Can Be Critically Analyzed

There are a variety of articles published in scholarly journals that do not fit within the guidelines of an article analysis assignment. This is because the work cannot be empirically examined or it does not generate new knowledge in a way which can be critically analyzed.

If you are required to locate a research study on your own, avoid selecting these types of journal articles:

  • Theoretical essays which discuss concepts, assumptions, and propositions, but report no empirical research;
  • Statistical or methodological papers that may analyze data, but the bulk of the work is devoted to refining a new measurement, statistical technique, or modeling procedure;
  • Articles that review, analyze, critique, and synthesize prior research, but do not report any original research;
  • Brief essays devoted to research methods and findings;
  • Articles written by scholars in popular magazines or industry trade journals;
  • Academic commentary that discusses research trends or emerging concepts and ideas, but does not contain citations to sources; and
  • Pre-print articles that have been posted online, but may undergo further editing and revision by the journal's editorial staff before final publication. An indication that an article is a pre-print is that it has no volume, issue, or page numbers assigned to it.

Journal Analysis Assignment - Myers . Writing@CSU, Colorado State University; Franco, Josue. “Introducing the Analysis of Journal Articles.” Prepared for presentation at the American Political Science Association’s 2020 Teaching and Learning Conference, February 7-9, 2020, Albuquerque, New Mexico; Woodward-Kron, Robyn. "Critical Analysis and the Journal Article Review Assignment." Prospect 18 (August 2003): 20-36.


Faculty Toolkit: Designing Research Assignments

How Librarians Can Help

Librarians are available to consult with faculty and instructors to create or revise effective research assignments and classroom activities that foster critical thinking, evaluation skills, and promote lifelong learning.

Librarians can help you:

  • Understand students' research capabilities.
  • Create, revise, or offer suggestions on your research-based assignments.
  • Talk about alternatives to traditional research papers or presentations.
  • Identify and discuss library resources suitable for an online class research guide
  • Provide individualized training on library resources.

Provide Tools & Support

  • Provide copies of research assignments to your librarian so we are better prepared to assist your students when they need help.
  • Consider putting materials on reserve that will be needed by large numbers of students to ensure all students will have access to them.

Consider Alternatives to the Research Paper

  • Explore the library as an "ethnographer" (a Library Discovery Tour, not to be confused with a scavenger hunt)
  • Generate a shared bibliography of readings (see " How to get students to find and read 94 articles before the next class ")
  • Compare disciplinary perspectives on the same topic
  • Find and compare articles on oil spills in the news and the scientific literature
  • Read a short article from the popular press (provided by professor) dealing with results of original research. Locate the original research findings on which the article was based, discuss the relationship between the popular article and the original research, and critique the accuracy of the popular article
  • Find facts to support or contradict an editorial
  • Research the publications and career of a prominent scholar
  • Compile an annotated bibliography
  • Prepare a literature review
  • Find book reviews on a text used in class
  • Evaluate a web site
  • Find and summarize recent news related to a class topic, discuss in class (one-time or recurring).
  • Research a topic and present findings as a poster session for classmates or larger group.
  • Research a topic or event using information published in different decades. Compare and discuss what changes occurred in the literature and why.

Tips for Designing Library Research Assignments

  • Address Learning Goals Related to the Research Process . Consider what research skills you would like students to develop in completing the assignment and discuss with your students the importance of developing those skills.
  • Be Clear about Your Expectations . Remember that your students may not have prior experience with scholarly journals, monographs, or academic libraries. Spend time in class discussing how research is produced and disseminated in your discipline and how you expect your students to participate in academic discourse in the context of your class.
  • Scaffolding your Assignment Brings Focus to the Research Process . Breaking a complex research assignment down into a sequence of smaller, more manageable parts has a number of benefits: it models how to approach a research question and effective time management, it gives students the opportunity to focus on and master key research skills, it provides opportunities for feedback, and it can be an effective deterrent to plagiarism.
  • Devote Class Time to Discussion of the Assignment in Progress . Periodic discussions in class can help students reflect on the research process and its importance, encourage questions, and help students develop a sense that what they are doing is a transferable process that they can use for other assignments.
  • Criteria for Assessment . In your criteria for assessment (i.e. written instructions, rubrics), make expectations related to the research process explicit. For example, are there specific expectations for the types of resources students should use and how they should be cited? Research shows that students tend to use more scholarly sources when faculty provide them with clear guidelines regarding the types of sources that should be used.
  • Test Your Assignment . In testing an assignment yourself, you may uncover practical roadblocks (e.g., too few copies of a book for too many students, a source is no longer available online). Librarians can help with testing your assignment, suggest strategies for mitigating roadblocks (i.e. place books on reserve for your students, suggest other resources), or design customized supporting materials (i.e. handouts or web pages).
  • Collaborate with Librarians . Librarians can help you design an effective research assignment that helps students develop the research skills you value and introduces your students to the most useful resources. We also can work with you to develop and teach a library instruction session for your students that will help them learn the strategies they will need in order to complete your assignment.
  • Make sure they know how and where to get help from librarians.
  • Librarians will meet with students to help them develop their topics and teach them how to find and evaluate sources.

Some content is adapted from University of Wisconsin - Madison Libraries

Common Problems to Avoid

  • Waiting until a couple days before the class to ask for an instruction session doesn't allow librarians adequate time to prepare and reserve a classroom.
  • Sending (or bringing) an entire class to the library for research time without notice. The Tioga Library Building is designated for quiet study, and the Snoqualmie Building has only a limited number of computer workstations and small group study spaces. Reference desk staffing cannot adequately accommodate working with entire classes.
  • Assigning scavenger hunts. Roaming around the library looking for trivia is not research, and students often see it as busy work disconnected from their research assignments.
  • Be sure the library has the resources your students need!  Avoid requiring students to use resources the library does not own or have in your preferred format (e.g. print journal articles) and cannot obtain within a reasonable timeframe.
  • Avoid having each student research the same topic.  This tends to stretch library resources too thin, especially when printed materials or limited connections to a key database are involved.

Academic Evaluations

In our daily lives, we are continually evaluating objects, people, and ideas in our immediate environments. We pass judgments in conversation, while reading, while shopping, while eating, and while watching television or movies, often being unaware that we are doing so. Evaluation is an equally fundamental writing process, and writing assignments frequently ask us to make and defend value judgments.

Evaluation is an important step in almost any writing process, since we are constantly making value judgments as we write. When we write an "academic evaluation," however, this type of value judgment is the focus of our writing.

A Definition of Evaluation

Kate Kiefer, English Professor: Like most specific assignments that teachers give, writing evaluations mirrors what happens so often in our day-to-day lives. Every day we decide whether the temperature is cold enough to need a light or heavy jacket; whether we're willing to spend money on a good book or a good movie; whether the prices at the grocery store tell us to keep shopping at the same place or somewhere else for a better value. Academic tasks rely on evaluation just as often. Is a source reliable? Does an argument convince? Is the article worth reading? So writing evaluation helps students make this often unconscious daily task more overt and prepares them to examine ideas, facts, arguments, and so on more critically.

To evaluate is to assess or appraise. Evaluation is the process of examining a subject and rating it based on its important features. We determine how much or how little we value something, arriving at our judgment on the basis of criteria that we can define.

We evaluate when we write primarily because it is almost impossible to avoid doing so. If right now you were asked to write for five minutes on any subject and were asked to keep your writing completely value-free, you would probably find such an assignment difficult. Readers come to evaluative writing in part because they seek the opinions of other people for one reason or another.

Uses for Evaluation

Consider a time recently when you decided to watch a movie. There were at least two kinds of evaluation available to you through the media: the rating system and critical reviews.

Newspapers and magazines, radio and TV programs all provide critical evaluations for their readers and viewers. Many movie-goers consult more than one media reviewer to adjust for bias. Most movie-goers also consider the rating system, especially if they are deciding to take children to a movie. In addition, most people will also ask for recommendations from friends who have already seen the movie.

Whether professional or personal, judgments like these are based on the process of evaluation. The terminology associated with the elements of this process--criteria, evidence, and judgment--might seem alien to you, but you have undoubtedly used these elements almost every time you have expressed an opinion on something.

Types of Written Evaluation

Quite a few of the assignments writers are given at the university and in the workplace involve the process of evaluation.

One type of written evaluation that most people are familiar with is the review. Reviewers will attend performances, events, or places (like restaurants, movies, or concerts), basing their evaluations on their observations. Reviewers typically use a particular set of criteria they establish for themselves, and their reviews most often appear in newspapers and magazines.

Critical Writing

Reviews are a type of critical writing, but there are other types of critical writing which focus on objects (like works of art or literature) rather than on events and performances. Literary criticism, for instance, is a way of establishing the worth or literary merit of a text on the basis of certain established criteria. When we write about literary texts, we do so using one of many critical "lenses," viewing the text as it addresses matters like form, culture, historical context, gender, and class (to name a few). Deciding whether a text is "good" or "bad" is a matter of establishing which "lens" you are viewing that text through, and using the appropriate set of criteria to do so. For example, we might say that a poem by an obscure Nineteenth Century African American poet is not "good" or "useful" in terms of formal characteristics like rhyme, meter, or diction, but we might judge that same text as "good" or "useful" in terms of the way it addresses cultural and political issues historically.

Response Essays

One very common type of academic writing is the response essay. In many different disciplines, we are asked to respond to something that we read or observe. Some types of response, like the interpretive response, simply ask us to explain a text. However, there are other types of response (like agree/disagree and analytical response) which demand that we make some sort of judgment based on careful consideration of the text, object, or event in question.

Problem Solving Essays

In writing assignments which focus on issues, policies, or phenomena, we are often asked to propose possible solutions for identifiable problems. This type of essay requires evaluation on two levels. First of all, it demands that we use evaluation in order to determine that there is a legitimate problem. And secondly, it demands that we take more than one policy or solution into consideration to determine which will be the most feasible, viable, or effective one, given that problem.

Arguing Essays

Written argument is a type of evaluative writing, particularly when it focuses on a claim of value (like "The death penalty is cruel and ineffective") or policy claim (like "Oakland's Ebonics program is an effective way of addressing standard English deficiencies among African American students in public schools"). In written argument, we advance a claim like one of the above, then support this claim with solid reasons and evidence.

Process Analysis

In scientific or investigative writing, in which experiments are conducted and processes or phenomena are observed or studied, evaluation plays a part in the writer's discussion of findings. Often, these findings need to be both interpreted and analyzed by way of criteria established by the writer.

Source Evaluation

Although not a form of written evaluation in and of itself, source evaluation is a process that is involved in many other types of academic writing, like argument, investigative and scientific writing, and research papers. When we conduct research, we quickly learn that not every source is a good source and that we need to be selective about the quality of the evidence we transplant into our own writing.

Relevance to the Topic

When you conduct research, you naturally look for sources that are relevant to your topic. However, writers also often fall prey to the tendency to accept sources that are just relevant enough . For example, if you were writing an essay on Internet censorship, you might find that your research yielded quite a few sources on music censorship, art censorship, or censorship in general. Though these sources could possibly be marginally useful in an essay on Internet censorship, you will probably want to find more directly relevant sources to serve a more central role in your essay.

Perspective on the Topic

Another point to consider is that even though you want sources relevant to your topic, you might not necessarily want an exclusive collection of sources which agree with your own perspective on that topic. For example, if you are writing an essay on Internet censorship from an anti-censorship perspective, you will want to include in your research sources which also address the pro-censorship side. In this way, your essay will be able to fully address perspectives other than (and sometimes in opposition to) your own.

Credibility

One of the questions you want to ask yourself when you consider using a source is "How credible will my audience consider this source to be?" You will want to ask this question not only of the source itself (the book, journal, magazine, newspaper, home page, etc.) but also of the author. To use an extreme example, for most academic writing assignments you would probably want to steer clear of using a source like the National Enquirer or your eight-year-old brother, even though we could imagine certain writing situations in which such sources would be entirely appropriate. The key to determining the credibility of a source/author is to decide not only whether you think the source is reliable, but also whether your audience will find it so, given the purpose of your writing.

Currency of Publication

Unless you are doing research with an historical emphasis, you will generally want to choose sources which have been published recently. Sometimes research and statistics maintain their authority for a very long time, but the more common trend in most fields is that the more recent a study is, the more comprehensive and accurate it is.

Accessibility

When sorting through research, it is best to select sources that are readable and accessible both for you and for your intended audience. If a piece of writing is laden with incomprehensible jargon and incoherent structure or style, you will want to think twice about directing it toward an audience unfamiliar with that type of jargon, structure, or style. In short, it is a good rule of thumb to avoid using any source which you yourself do not understand and are not able to interpret for your audience.

Quality of Writing

When choosing sources, consider the quality of writing in the texts themselves. It is possible to paraphrase from sources that are sloppily written, but quoting from such a source would serve only to diminish your own credibility in the eyes of your audience.

Understanding of Biases

Few sources are truly objective or unbiased. Trying to eliminate bias from your sources will be nearly impossible, but all writers can try to understand and recognize the biases of their sources. For instance, if you were doing a comparative study of 1/2-ton pickup trucks on the market, you might consult the Ford home page. However, you would also need to be aware that this source would have some very definite biases. Likewise, it would not be unreasonable to use an article from Catholic World in an anti-abortion argument, but you would want to understand how your audience would be likely to view that source. Although there is no foolproof way to determine the bias of a particular journal or newspaper, you can normally sleuth this out by looking at the language in the article itself or in the surrounding articles.

Use of Research

In evaluating a source, you will need to examine the sources that it in turn uses. Looking at the research used by the author of your source, what biases can you recognize? What are the quantity and quality of evidence and statistics included? How reliable and readable do the excerpts cited seem to be?

Considering Purpose and Audience

We typically think of "values" as being personal matters. But in our writing, as in other areas of our lives, values often become matters of public and political concern. Therefore, it is important when we evaluate to consider why we are making judgments on a subject (purpose) and who we hope to affect with our judgments (audience).

Purposes of Evaluation

Your purpose in written evaluation is not only to express your opinion or judgment about a subject, but also to convince, persuade, or otherwise influence an audience by way of that judgment. In this way, evaluation is a type of argument, in which you as a writer are attempting consciously to have an effect on your readers' ways of thinking or acting. If, for example, you are writing an evaluation in which you make a judgment that Mountain Bike A is a better buy than Mountain Bike B, you are doing more than expressing your approval of the merits of Bike A; you are attempting to convince your audience that Bike A is the better buy and, ultimately, to persuade them to buy Bike A rather than Bike B.

Effects of Audience

Kate Kiefer, English Professor

When we evaluate for ourselves, we don't usually take the time to articulate criteria and detail evidence. Our thought processes work fast enough that we often seem to make split-second decisions. Even when we spend time thinking over a decision--like which expensive toy (car, stereo, skis) to buy--we don't often lay out the criteria explicitly. We can't take that shortcut when we write to other folks, though. If we want readers to accept our judgment, then we need to be clear about the criteria we use and the evidence that helps us determine value for each criterion. After all, why should I agree with you to eat at the Outback Steak House if you care only about cost but I care about taste and safe food handling?

To write an effective evaluation, you need to figure out what your readers care about and then match your criteria to their concerns. Similarly, you can overwhelm readers with too much detail when they don't have the background knowledge to care about that level of detail. Or you can ignore the expertise of your readers (at your peril) and not give enough detail. Then, as a writer, you come across as condescending, or worse. So targeting an audience is really key to successful evaluation.

In written evaluation, it is important to keep in mind not only your own system of value, but also that of your audience. Writers do not evaluate in a vacuum. Giving some thought to the audience you are attempting to influence will help you to determine what criteria are important to them and what evidence they will require in order to be convinced or persuaded by your evaluative argument. In order to evaluate effectively, it is important that you consider what motivates and concerns your audience.

Criteria and Audience Considerations

The first step in deciding which criteria will be effective in your evaluation is determining which criteria your audience considers important. For example, if you are writing a review of a Mexican restaurant to an audience comprised mainly of senior citizens from the midwest, it is unlikely that "large portions" and "fiery green chile" will be the criteria most important to them. They might be more concerned, rather, with "quality of service" or "availability of heart smart menu items." Trying to anticipate and address your audience's values is an indispensable step in writing a persuasive evaluative argument. Your next step in suiting your criteria to your audience is to determine how you will explain and/or defend not only your judgments, but the criteria supporting them as well. For example, if you are arguing that a Mexican restaurant is excellent because, among other reasons, the texture of the food is appealing, you might need to explain to your audience why texture is a significant criterion in evaluating Mexican food.

Evidence and Audience Considerations

The amount and type of evidence you use to support your judgments will depend largely on the demands of your audience. Common sense tells us that the more oppositional an audience is, the more evidence will be needed to convince them of the validity of a judgment. For instance, if you were writing a favorable review of La Cocina on the basis of their fiery green chile, you might not need to use a great deal of evidence for an audience of people who like spicy food but have not tried any of the Mexican restaurants in town. However, if you are addressing an audience who is deeply devoted to the green chile at Manuel's, you will need to provide a fair amount of solid evidence in order to persuade them to try another restaurant.

Parts of an Evaluation

When we evaluate, we make an overall value claim about a subject, using criteria to make judgments based on evidence. Often, we also make use of comparison and contrast as strategies for determining the relative worth of the subject we are considering. This section examines these parts of an evaluation and shows how each functions in a successful evaluation.

Overall Claim

An overall claim or judgment is an evaluator's final decision about worth. When we evaluate, we make a general statement about the worth of objects, goods, services, or solutions to problems.

An overall claim or judgment in an evaluation can be as simple as "See this movie!" or "Brand X is a better buy than the name brand." It can also be complex, particularly when the evaluator recognizes certain conditions that affect the judgment: If citizens of our community want to improve air and water quality and are willing to forego 300 additional jobs, then we should not approve the new plant Acme is hoping to build here.

Qualifications

An overall claim or judgment usually requires qualification so that it seems balanced. If judgments are weighted too much to one side, they will sometimes mar the credibility of your argument. If your overall judgment is wholly positive, your evaluation will wind up sounding like propaganda or advertisement. If it is wholly negative, you might present yourself as overly critical, unfair, or undiplomatic. An example of a qualified claim or judgment might be the following: Although La Cocina is not without its faults, it is the best Mexican restaurant in town. Qualifications are almost always positive additions to evaluative arguments, but writers must learn not to overuse them. If you make too many qualifications, your audience will be unable to determine your final position on your subject, and you will appear to be "waffling."

Example Text

Creating more parking lots is a possible solution to the horrendous traffic congestion in Taiwan's major cities. When a new building permit is issued, each building must include a certain number of spaces for parking. However, new construction takes time, and results will be seen only as new buildings are erected. This solution alone is inadequate for most of Taiwan's problem areas, which need a solution whose results will be noticed immediately.

Comment: Notice how this sentence at the end of the paragraph seems to be a formal "thesis" or "claim" which might drive the rest of the essay. Based on this claim, we would assume that the remainder of the essay will deal with the reasons why the proposed policy alone is "inadequate," and will address other possible solutions.

Supporting Judgments

In academic evaluations, the overall claim or judgment is backed up by smaller, more detailed judgments about aspects of a subject being evaluated. Supporting judgments function in the same way that "reasons" function in most arguments. They provide structure and justification for a more general claim. For example, if your overall claim or judgment in your evaluation is

"Although La Cocina is not without its faults, it is the best Mexican restaurant in town,"

one supporting judgment might be

"La Cocina's green chile is superb."

This judgment would be based on criteria you have established, and it would be supported by evidence.

Providing more parking spaces near buildings is not the only act necessary to solve Taiwan's parking problems. A combination of more parking spaces, increased fines, and lowered traffic volume may be necessary to eliminate the nightmare of driving in the cities. In fact, until laws are enforced and fines increased, no number of new parking spaces will impact the congestion seen in downtown areas.

Comment: There are arguably three supporting judgments being made here, as three possible solutions are being suggested to rectify this problem of parking in Taiwan. If we were reading these supporting judgments at the beginning of an essay, we would expect the essay to discuss them in depth, pointing out evidence that these proposed solutions would be effective.

When we write evaluations, we consciously adopt certain standards of measurement, or criteria.

Criteria can be concrete standards, like size or speed, or can be abstract, like practicality. When we write evaluations in an academic context, we typically avoid using criteria that are wholly personal, and rely instead on those that are less "subjective" and more likely to be shared by the majority of the audience we are addressing. Choosing appropriate criteria often involves careful consideration of audience demands, values, and concerns.

As an evaluator, you will sometimes discover that you will need to explain and/or defend not only your judgments, but also the criteria informing those judgments. For example, if you are arguing that a Mexican restaurant is excellent because (among other reasons) the texture of the food is appealing, you might need to explain to your audience why texture is a significant criterion in evaluating Mexican food.

Types of Criteria

If you are evaluating a concrete canoe for an engineering class, you will use concrete criteria such as float time, cost of materials, hydrodynamic design, and so on. If you are evaluating the suitability of a textbook for a history class, you will probably rely on more abstract criteria such as readability, length, and controversial vs. mainstream interpretation of history.

In evaluation, we often rely on concrete , measurable standards according to which subjects (usually objects) may be evaluated. For example, cars may be evaluated according to the criteria of size, speed, or cost.

Many academic evaluations, however, don't focus on objects that we can measure in terms of size, speed, or cost. Rather, they look at somewhat more abstract concepts (problems and solutions often), which we might measure in terms of "effectiveness," "feasibility," or other abstract criteria. When writing this kind of evaluation, it is vital to be as clear as possible when articulating, defining, and using your criteria, since not all readers are likely to understand and agree with these criteria as readily as they would understand and agree with concrete criteria.

Related Information: Abstract Criteria

Abstract criteria are not easily measurable, and they are usually less self-evident and more in need of definition than concrete criteria. Even though criteria may be abstract, they should not be imprecise: always state your criteria as clearly and precisely as possible. "Feasibility" is one example of an abstract criterion that a writer might use to evaluate a solution to a problem. Feasibility is the degree of likelihood that something like a plan of action or a solution to a problem will succeed. "Capability of being implemented" is one way to look at feasibility in terms of solutions to problems; the relative ease with which a solution could be adopted is another.

The following example states its criteria directly (affordability, suitability for all climates, and reduced deforestation):

Fire prevention should be the major consideration of a family building a home. By using concrete, the risk of fire is significantly decreased. But that is not all that concrete provides. It is affordable, suitable for all climates, and helps reduce deforestation. Since all of these factors are important, concrete should be demanded more than it is, and it should certainly be used more than wood for homebuilding.

Related Information: Concrete Criteria

Concrete criteria are measurable standards which most people are likely to understand and (usually) to agree with. For example, a person might make use of criteria like "size," "speed," and "cost" when buying a car.

If size is your main criterion, then something with a larger size will receive a more favorable evaluation.

Perhaps the only quality that you desire in a car is low initial cost. You don't need to take into account anything else. In this case, you can judge these three cars in the local used car lot:



  • Nissan: $1,000
  • Toyota: $1,200
  • Saab: $3,000

Because the Nissan has the lowest initial price, it receives the most favorable judgment. The evidence is found on the price tag. Each car is compared by way of a single criterion: cost.

Using Clear and Well-defined Criteria

When we evaluate informally (passing judgments during the course of conversation, for instance), we typically assume that our criteria are self-evident and require no explanation. However, in written evaluation, it is often necessary that we clarify and define our criteria in order to make a persuasive evaluative argument.

Criteria That Are Too Vague or Personal

Although we frequently find ourselves needing to use abstract criteria like "feasibility" or "effectiveness," we also must avoid using criteria that are overly vague or personal and difficult to support with evidence. As evaluators, we must steer clear of criteria that are matters of taste, belief, or personal preference. For example, the "best" lamp might simply be the one that you think looks prettiest in your home. If you depend on a criterion like "pretty in my home," and neglect to use more common, shared criteria like "brightness," "cost," and "weight," you are probably relying on a criterion that is too specific to your own personal preferences. To make "pretty in my home" an effective criterion, you would need to explain what "pretty in my home" means and how it might relate to other people's value systems. (For example: "Lamp A is attractive because it is an inoffensive style and color that would be appropriate for many people's decorating tastes.")

Using Criteria Based on the Appropriate "Class" of Subjects

When you make judgments, it is important that you use criteria that are appropriate to the type of object, person, policy, etc. that you are examining. If you are evaluating Steven Spielberg's film Schindler's List, for instance, it is unfair to criticize it because it isn't a knee-slapper. Because Schindler's List is a drama and not a comedy, using the criterion of "humor" is inappropriate.

Weighing Criteria

Once you have established criteria for your evaluation of a subject, it is necessary to decide which of these criteria are most important. For example, if you are evaluating a Mexican restaurant and you have arrived at several criteria (variety of items on the menu, spiciness of the food, size of the portions, decor, and service), you need to decide which of these criteria are most critical to your evaluation. If the size of the portions is good, but the service is bad, can you give the restaurant a good rating? What about if the decor is attractive, but the food is bland? Once you have placed your criteria in a hierarchy of importance, it is much easier to make decisions like these.

When we evaluate, we must consider the audience we hope to influence with our judgments. This is particularly true when we decide which criteria are informing (and should inform) these judgments.

After establishing some criteria for your evaluation, it is important to ask yourself whether or not your audience is likely to accept those criteria. It is crucial that they do accept the criteria if, in turn, you expect them to accept the supporting judgments and overall claim or judgment built on them.

Related Information: Explaining and Defending Criteria

The first step in deciding which criteria will be effective in your evaluation is determining which criteria your audience considers important. For example, if you are writing a review of a Mexican restaurant to an audience comprised mainly of senior citizens from the midwest, it is unlikely that "large portions" and "fiery green chile" will be the criteria most important to them. They might be more concerned, rather, with "quality of service" or "availability of heart smart menu items." Trying to anticipate and address your audience's values is an indispensable step in writing a persuasive evaluative argument.

Related Information: Understanding Audience Criteria

How Background Experience Influences Criteria

Laura Thomas, Composition Lecturer

Your background experience influences the criteria that you use in evaluation. If you know a lot about something, you will have a good idea of what criteria should govern your judgments. On the other hand, it's hard if you don't know enough about what you're judging. Sometimes you have to research first in order to come up with useful criteria. For example, I recently went shopping for a new pair of skis for the first time in fifteen years. When I began shopping, I realized that I didn't even know what questions to ask anymore. The last time I had bought skis, you judged them according to whether they had a foam core or a wood core. But I had no idea what the important considerations were anymore.

Evidence consists of the specifics you use to reach your conclusion or judgment. For example, if you judge that "La Cocina's green chile is superb" on the basis of the criterion, "Good green chile is so fiery that you can barely eat it," you might offer evidence like the following:

"I drank an entire pitcher of water on my own during the course of the meal."
"Though my friend wouldn't admit that the chile was challenging for him, I saw beads of sweat form on his brow."

Related Information: Example Text

In the following paragraph, notice how specific evidence supports the judgment about captive killer whales, and how the reference to the New York Times backs up the evidence offered in the previous sentence:

Since killer whales have small lymphatic systems, they catch infections more easily when held captive (Obee 23). The orca from the movie "Free Willy," Keiko, developed a skin disorder because the water he was living in was not cold enough. This infection was a result of the combination of tank conditions and the animal's immune system, according to a New York Times article.

Types of Evidence

Evidence for academic evaluations is usually of two types: concrete detail and analytic detail. Analytic detail comes from critical thinking about abstract elements of the thing being evaluated. It will also include quotations from experts. Concrete detail comes from sense perceptions and measurements--facts about color, speed, size, texture, smell, taste, and so on. Concrete details are more likely to support concrete criteria (as opposed to abstract criteria) used in judging objects. Analytic detail will more often support abstract criteria (as opposed to concrete criteria), like the criterion "feasibility," discussed in the section on criteria. Analytic detail also appears most often in academic evaluations of solutions to problems, although such solutions can also sometimes be evaluated according to concrete criteria.

What Kinds of Evidence Work

Good evidence ranges from personal experience to interviews with experts to published sources. The kind of evidence that works best for you will depend on your audience and often on the writing assignment you have been given.

Evidence and the Writing Assignment

When you choose evidence to support the judgments you are making in an evaluation, it will be important to consider what type of evaluation you are being asked to do. If, for instance, you are being asked to review a play you have attended, your evidence will most likely consist primarily of your own observations. However, if your assignment asks you to compare and contrast two potential national health care policies (toward deciding which is the better one), your evidence will need to be more statistical, more dependent on reputable sources, and more directed toward possible effects or outcomes of your judgment.

Comparison and Contrast

Comparison and contrast is the process of positioning an item or concept being evaluated among other like items or concepts. We are all familiar with this technique as it's used in the marketing of products: soft drink "taste tests," comparisons of laundry detergent effectiveness, and the like. It is a way of determining the value of something in relation to comparable things. For example, if you have made the judgment that "La Cocina's green chile is superb" and you have offered evidence of the spiciness and the flavor of the chile, you might also use comparison by giving your audience a scale on which to base judgment: "La Cocina's chile is even more fiery and flavorful than Manuel's, which is by no means a walk in the park."

In this case, the writer compares limestone with wood to show that limestone is a better building material. Although this comparison could be developed much more, it still begins to point out the relative merits of limestone:

Concrete is a feasible substitute for wood as a building material. Concrete comes from a rock called limestone. Limestone is found all over the United States. By using limestone instead of wood, the dependence on dwindling forest reserves would decrease. There are more sedimentary rocks than there are forests left in this country, and they are more evenly distributed. For this reason, it is quite possible to switch from wood to concrete as the primary building material for residential construction.

Determining Relative Worth

Comparing and contrasting rarely means placing the item or concept being evaluated in relation to another item or concept that is obviously grossly inferior. For instance, if you are attempting to demonstrate the value of a Cannondale mountain bike, it would be foolish to compare it with a Huffy. However, it would be useful to compare it with a Klein, arguably a similar bicycle. In this type of maneuver, you are not comparing good with bad; rather, you are deciding which bike is better and which bike is worse. In order to determine relative worth in this way, you will need to be very careful in defining the criteria you are using to make the comparison.

Using Comparison and Contrast Effectively

In order to make comparison and contrast function well in evaluation, it is necessary to be attentive to two things: 1) keeping the focus on the item or concept under consideration, and 2) supporting comparative judgments with evidence.

When using comparison and contrast, writers must remember that they are using comparable items or concepts only as a way of demonstrating the worth of the main item or concept under consideration. It is easy to lose focus when using this technique, because of the temptation to evaluate two (or more) items or concepts rather than just the one under consideration.

Judgments made on the basis of comparison and contrast also need to be supported with evidence. It is not enough to assert that "La Cocina's chile is even more fiery and flavorful than Manuel's." It will be necessary to support this judgment with evidence, showing in what ways La Cocina's chile is more flavorful: "Manuel's chile relies heavily on a tomato base, giving it an Italian flavor. La Cocina follows a more traditional recipe which uses little tomato and instead flavors the chile with shredded pork, a dash of vinegar, and a bit of red chile to give it a piquant taste."

The Process of Writing an Evaluation

A variety of writing assignments call for evaluation. Bearing in mind the various approaches that might be demanded by those particular assignments, this section offers some general strategies for formulating a written evaluation.

Choosing a Topic for Evaluation

Sometimes your topic for evaluation will be dictated by the writing assignment you have been given. Other times, though, you will be required to choose your own topic. Common sense tells you that it is best to choose something about which you already have some base knowledge. For instance, if you are a skier, you might want to evaluate a particular model of skis. In addition, it is best to choose something that is tangible, observable, and/or researchable. For example, if you chose a topic like "methods of sustainable management of forests," you would know that there would be research to support your evaluation. Likewise, if you chose to evaluate a film like Pulp Fiction, you could rent the video and watch it several times in order to get the evidence you needed. However, you would have fewer options if you were to choose an abstract concept like "loyalty" or "faith." When evaluating, it is usually best to steer clear of abstractions like these as much as possible.

Brainstorming Possible Judgments

Once you have chosen a topic, you might begin your evaluation by thinking about what you already know about the topic. In doing this, you will be coming up with possible judgments to include in your evaluation. Begin with a tentative overall judgment or claim. Then decide what supporting judgments you might make to back that claim. Keep in mind that your judgments will likely change as you collect evidence for your evaluation.

Determining a Tentative Overall Judgment

Start by making an overall judgment on the topic in question, based on what you already know. For instance, if you were writing an evaluation of sustainable management practices in forestry, your tentative overall judgment might be: "Sustainable management is a viable way of dealing with deforestation in old growth forests."

Brainstorming Possible Supporting Judgments

With a tentative overall judgment in mind, you can begin to brainstorm judgments (or reasons) that could support your overall judgment by asking the question, "Why?" For example, asking "Why?" of the tentative overall judgment "Sustainable management is a viable way of dealing with deforestation in old growth forests" might yield the following supporting judgments:

  • Sustainable management allows for continued support of the logging industry.
  • It eliminates much unnecessary waste.
  • It is much better for the environment than unrestricted, traditional forestry methods.
  • It is less expensive than these traditional methods.

Anticipating Changes to Your Judgments After Collecting Evidence

When brainstorming possible judgments this early in the writing process, it is necessary to keep an open mind as you enter into the stage in which you collect evidence. Once you have done observations, analysis, or research, you might find that you are unable to advance your tentative overall judgment. Or you might find that some of the supporting judgments you came up with are not true or are not supportable. Your findings might also point you toward other judgments you can make in addition to the ones you are already making.

Defining Criteria

To prepare to organize and write your evaluation, it is important to clearly define the criteria you are using to make your judgments. These criteria govern the direction of the evaluation and provide structure and justification for the judgments you make.

Looking at the Criteria Informing Your Judgments (Working Backwards)

We often work backwards from the judgments we make, discovering what criteria we are using on the basis of what our judgments look like. For instance, our tentative judgments about sustainable management practices are as follows:

  • Sustainable management allows for continued support of the logging industry.
  • It eliminates much unnecessary waste.
  • It is much better for the environment than unrestricted, traditional forestry methods.
  • It is less expensive than these traditional methods.

If we were to analyze these judgments, asking ourselves why we made them, we would see that we used the following criteria: wellbeing of the logging industry, conservation of resources, wellbeing of the environment, and cost.

Thinking of Additional Criteria

Once you have identified the criteria informing your initial judgments, you will want to determine what other criteria should be included in your evaluation. For example, in addition to the criteria you've already come up with (wellbeing of the logging industry, conservation of resources, wellbeing of the environment, and cost), you might include the criterion of preservation of the old growth forests.

Comparing Your Criteria with Those of Your Audience

In deciding which criteria are most important to include in your evaluation, it is necessary to consider the criteria your audience is likely to find important. Let's say we are directing our evaluation of sustainable management methods toward an audience of loggers. If we look at our list of criteria--wellbeing of the logging industry, conservation of resources, wellbeing of the environment, cost, and preservation of the old growth forests--we might decide that wellbeing of the logging industry and cost are the criteria most important to loggers. At this point, we would also want to identify additional criteria the audience might expect us to address: perhaps feasibility, labor requirements, and efficiency.

Deciding Which Criteria Are Most Important

Once you have developed a long list of possible criteria for judging your subject (in this case, sustainable management methods), you will need to narrow the list, since it is impractical and ineffective to use all possible criteria in your essay. To decide which criteria to address, determine which are least dispensable, both to you and to your audience. Your own criteria were: wellbeing of the logging industry, conservation of resources, wellbeing of the environment, cost, and preservation of the old growth forests. Those you anticipated for your audience were: feasibility, labor requirements, and efficiency. In the written evaluation, you might choose to address those criteria most important to your audience, with a couple of your own included. For example, your list of indispensable criteria might look like this: wellbeing of the logging industry, cost, labor requirements, efficiency, conservation of resources, and preservation of the old growth forests.

Criteria and Assumptions

Stephen Reid, English Professor

Warrants (to use a term from argumentation) come on the scene when we ask why a given criterion should be used or should be acceptable in evaluating the particular text, product, or performance in question. When we ask WHY a particular criterion should be important (let's say, strong performance in an automobile engine, quickly moving plot in a murder mystery, outgoing personality in a teacher), we are getting at the assumptions (i.e., the warrant) behind why the data is relevant to the claim of value we are about to make. Strong performance in an automobile engine might be a positive criterion in an urban, industrialized environment, where traveling at highway speeds on American interstates is important. But we might disagree about whether strong performance (accompanied by lower mileage) might be important in a rural European environment where gas costs are several dollars a litre. Similarly, an outgoing personality for a teacher might be an important standard of judgment or criterion in a teacher-centered classroom, but we could imagine another kind of decentered class where interpersonal skills are more important than teacher personality.

By QUESTIONING the validity and appropriateness of a given criterion in a particular situation, we are probing for the ASSUMPTIONS or WARRANTS we are making in using that criterion in that particular situation. Thus, criteria are important, but it is often equally important for writers to discuss the assumptions that they are making in choosing the major criteria in their evaluations.

Collecting Evidence

Once you have established the central criteria you will use in your evaluation, you will investigate your subject in terms of these criteria. In order to investigate the subject of sustainable management methods, you would more than likely have to research whether these methods stand up to the criteria you have established: wellbeing of the logging industry, cost, labor requirements, efficiency, conservation of resources, and preservation of the old growth forests. However, library research is only one of the techniques evaluators use. Depending on the type of evaluation being made, the evaluator might use such methods as observation, field research, and analysis.

Thinking About What You Already Know

The best place to start looking for evidence is with the knowledge you already possess. To do this, you might try brainstorming, clustering, or freewriting ideas.

Library Research

When you are evaluating policies, issues, or products, you will usually need to conduct library research to find the evidence your evaluation requires. It is always a good idea to check journals, databases, and bibliographies relevant to your subject when you begin research. It is also helpful to speak with a reference librarian about how to get started.

Observation

When you are asked to evaluate a performance, event, place, object, or person, one of the best methods available is simple observation. What makes observation not so simple is the need to focus on criteria you have developed ahead of time. If, for instance, you are reviewing a student production of Hamlet , you will want to review your list of criteria (perhaps quality of acting, costumes, faithfulness to the text, set design, lighting, and length of time before intermission) before attending the play. During or after the play, you will want to take as many notes as possible, keeping these criteria in mind.

Field Research

To expand your evaluation beyond your personal perspective or the perspective of your sources, you might conduct your own field research. Typical field research techniques include interviewing, taking a survey, administering a questionnaire, and conducting an experiment. These methods can help you support your judgment and can sometimes help you determine whether or not your judgment is valid.

Analysis

When you are asked to evaluate a text, analysis is often the technique you will use in collecting evidence. If you are analyzing an argument, you might use the Toulmin Method. Other texts might not require such a structured analysis but might be better addressed by more general critical reading strategies.

Applying Criteria

After developing a list of indispensable criteria, you will need to "test" the subject according to these criteria. At this point, it will probably be necessary to collect evidence (through research, analysis, or observation) to determine, for example, whether sustainable management methods would hold up to the criteria you have established: wellbeing of the logging industry, cost, labor requirements, efficiency, conservation of resources, and preservation of the old growth forests. One way of recording the results of this "test" is by putting your notes in a three-column log.
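
For instance, a three-column log for the sustainable management example might look like the sketch below. The entries here are illustrative placeholders only, not actual research findings:

Criterion | Evidence collected | Tentative judgment
Cost | figures comparing harvesting costs under both methods | holds up / does not hold up
Conservation of resources | data on waste reduced by selective cutting | holds up / does not hold up
Preservation of old growth forests | acreage left standing under each approach | holds up / does not hold up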

Organizing the Evaluation

One of the best ways to organize your information in preparation for writing is to construct an informal outline of sorts. Outlines might be arranged according to criteria, comparison and contrast, chronological order, or causal analysis. They also might follow what Robert K. Miller and Suzanne S. Webb refer to in their book, Motives for Writing (2nd ed.) as "the pattern of classical oration for evaluations" (286). In addition to deciding on a general structure for your evaluation, it will be necessary to determine the most appropriate placement for your overall claim or judgment.

Placement of the Overall Claim or Judgment

Writers can state their final position at the beginning or the end of an essay. The same is true of the overall claim or judgment in a written evaluation.

When you place your overall claim or judgment at the end of your written evaluation, you are able to build up to it and to demonstrate how your evaluative argument (evidence, explanation of criteria, etc.) has led to that judgment.

Writers of academic evaluations normally don't need to keep readers in suspense about their judgments. By stating the overall claim or judgment early in the paper, writers help readers both to see the structure of the essay and to accept the evidence as convincing proof of the judgment. (Writers of evaluations should remember, of course, that there is no rule against stating the overall claim or judgment at both the beginning and the end of the essay.)

Organization by Criteria

The following is an example from Stephen Reid's The Prentice Hall Guide for College Writers (4th ed.), showing how a writer might arrange an evaluation according to criteria:

Introductory paragraphs: information about the restaurant (location, hours, prices), general description of Chinese restaurants today, and overall claim: The Hunan Dynasty is reliable, a good value, and versatile.
Criterion #1/Judgment: Good restaurants should have an attractive setting and atmosphere/Hunan Dynasty is attractive.
Criterion #2/Judgment: Good restaurants should give strong priority to service/Hunan Dynasty has, despite an occasional glitch, expert service.
Criterion #3/Judgment: Restaurants that serve modestly priced food should have quality main dishes/Main dishes at Hunan Dynasty are generally good but not often memorable. (Note: The most important criterion--the quality of the main dishes--is saved for last.)
Concluding paragraphs: Hunan Dynasty is a top-flight neighborhood restaurant (338).

Organization by Comparison and Contrast

Sometimes comparison and contrast is not merely a strategy used in part of an evaluation, but is the strategy governing the organization of the entire essay. The following are examples from Stephen Reid's The Prentice Hall Guide for College Writers (4th ed.), showing two ways that a writer might organize an evaluation according to comparison and contrast: the first pattern moves point by point through the criteria, while the second takes up each restaurant in turn.

Introductory paragraph(s)

Thesis [or overall claim/judgment]: Although several friends recommended the Yakitori, we preferred the Unicorn for its more authentic atmosphere, courteous service, and well-prepared food. [Notice that the criteria are stated in this thesis.]

Authentic atmosphere: Yakitori vs. Unicorn

Courteous service: Yakitori vs. Unicorn

Well-prepared food: Yakitori vs. Unicorn

Concluding paragraph(s) (Reid 339)

The Yakitori: atmosphere, service, and food

The Unicorn: atmosphere, service, and food as compared to the Yakitori

Concluding paragraph(s) (Reid 339).

Organization by Chronological Order

Writers often follow chronological order when evaluating or reviewing events or performances. This method of organization allows the writer to evaluate portions of the event or performance in the order in which they occur.

Organization by Causal Analysis

When using analysis to evaluate places, objects, events, or policies, writers often focus on causes or effects. The following is an example from Stephen Reid's The Prentice Hall Guide for College Writers (4th ed.), showing how one writer organizes an evaluation of a Goya painting by discussing its effects on the viewer.

Criterion #1/Judgment: The iconography, or use of symbols, contributes to the powerful effect of this picture on the viewer.

Evidence: The church as a symbol of hopefulness contrasts with the cruelty of the execution. The spire on the church emphasizes for the viewer how powerless the Church is to save the victims.

Criterion #2/Judgment: The use of light contributes to the powerful effect of the picture on the viewer.

Evidence: The light casts an intense glow on the scene, and its glaring, lurid, and artificial qualities create the same effect on the viewer that modern art sometimes does.

Criterion #3/Judgment: The composition or use of formal devices contributes to the powerful effect of the picture on the viewer.

Evidence: The diagonal lines scissor the picture into spaces that give the viewer a claustrophobic feeling. The corpse is foreshortened, so that it looks as though the dead man is bidding the viewer welcome (Reid 340).

Pattern of Classical Oration for Evaluations

Robert K. Miller and Suzanne S. Webb, in their book Motives for Writing (2nd ed.), discuss what they call "the pattern of classical oration for evaluations," which incorporates opposing evaluations as well as supporting reasons and judgments. This pattern is as follows:

Present your subject. (This discussion includes any background information, description, acknowledgement of weaknesses, and so forth.)

State your criteria. (If your criteria are controversial, be sure to justify them.)

Make your judgment. (State it as clearly and emphatically as possible.)

Give your reasons. (Be sure to present good evidence for each reason.)

Refute opposing evaluations. (Let your reader know you have given thoughtful consideration to opposing views, since such views exist.)

State your conclusion. (You may restate or summarize your judgment.) (Miller and Webb 286-7)

Example: Part of an Outline for an Evaluation

The following is a portion of an outline for an evaluation, organized by way of supporting judgments or reasons. Notice that this pattern would need to be repeated (using criteria other than the fieriness of the green chile) in order to constitute a complete evaluation proving that "Although La Cocina is not without its faults, it is the best Mexican restaurant in town."

Evaluation of La Cocina, a Mexican Restaurant

Intro Paragraph Leading to Overall Judgment: "Although La Cocina is not without its faults, it is the best Mexican restaurant in town."

Supporting Judgment: "La Cocina's green chile is superb."

Criterion used to make this judgment: "Good green chile is so fiery that you can barely eat it."

Evidence in support of this judgment: "I drank an entire pitcher of water on my own during the course of the meal" or "Though my friend wouldn't admit that the chile was challenging for him, I saw beads of sweat form on his brow."

Supporting Judgment made by way of Comparison and Contrast: "La Cocina's chile is even more fiery and flavorful than Manuel's, which is by no means a walk in the park itself."

Evidence in support of this judgment: "Manuel's chile relies heavily on a tomato base, giving it an Italian flavor. La Cocina follows a more traditional recipe which uses little tomato, and instead flavors the chile with shredded pork, a dash of vinegar, and a bit of red chile to give it a piquant taste."

Writing the Draft

If you have an outline to follow, writing a draft of a written evaluation is simple. Stephen Reid, in his Prentice Hall Guide for College Writers , recommends that writers maintain focus on both the audience they are addressing and the central criteria they want to include. Such a focus will help writers remember what their audience expects and values and what is most important in constructing an effective and persuasive evaluation.

Guidelines for Revision

In his Prentice Hall Guide for College Writers , 4th ed., Stephen Reid offers some helpful tips for revising written evaluations. These guidelines are reproduced here and grouped as follows:

Examining Criteria

Criteria are standards of value. They contain categories and judgments, as in "good fuel economy," "good reliability," or "powerful use of light and shade in painting." Some categories, such as "price," have clearly implied judgments ("low price"), but make sure that your criteria refer implicitly or explicitly to a standard of value.

Examine your criteria from your audience's point of view. Which criteria are most important in evaluating your subject? Will your readers agree that the criteria you select are indeed the most important ones? Will changing the order in which you present your criteria make your evaluation more convincing? (Reid 342)

Balancing the Evaluation

Include both positive and negative evaluations of your subject. If all of your judgments are positive, your evaluation will sound like an advertisement. If all of your judgments are negative, your readers may think you are too critical (Reid 342).

Using Evidence

Be sure to include supporting evidence for each criterion. Without any data or support, your evaluation will be just an opinion that will not persuade your reader.

If you need additional evidence to persuade your readers, [go back to the "Collecting" stage of this process] (Reid 343).

Avoiding Overgeneralization

Avoid overgeneralizing your claims. If you are evaluating only three software programs, you cannot say that Lotus 1-2-3 is the best business program around. You can say only that it is the best among the group or the best in the particular class that you measured (Reid 343).

Making Appropriate Comparisons

Unless your goal is humor or irony, compare subjects that belong in the same class. Comparing a Yugo to a BMW is absurd because they are not similar cars in terms of cost, design, or purpose (Reid 343).

Checking for Accuracy

If you are citing other people's data or quoting sources, check to make sure your summaries and data are accurate (Reid 343).

Working on Transitions, Clarity, and Style

Signal the major divisions in your evaluation to your reader using clear transitions, key words, and paragraph hooks. At the beginning of new paragraphs or sections of your essay, let your reader know where you are going.

Revise sentences for directness and clarity.

Edit your evaluation for correct spelling, appropriate word choice, punctuation, usage, and grammar (343).

Nesbitt, Laurel, Kathy Northcut, & Kate Kiefer. (1997). Academic Evaluations. Writing@CSU. Colorado State University. https://writing.colostate.edu/guides/guide.cfm?guideid=47

Designing Research Assignments

According to the Association of College and Research Libraries, an information-literate student is able to:

  • Recognize when information is required
  • Determine the extent of information needed
  • Access the needed information effectively and efficiently
  • Evaluate information and its sources critically
  • Incorporate selected information into one's knowledge base
  • Use information effectively to accomplish a specific purpose
  • Understand the economic, legal, and social issues surrounding the use of information, and access and use information ethically and legally *

Effective research assignments:

  • are relevant to the course, and provide enriching material for students
  • encourage students to think about the type of information they need (factual, background, evaluative), and the form in which they're most likely to find it
  • include retrieval of information through some finding tool such as an index, catalog, database or search engine
  • ask students to look at information critically -- to evaluate it, to compare it with other information, to synthesize information from different sources, to identify the most crucial pieces of information available

Questions to ask when designing assignments:

  • Does this assignment help to achieve the learning goals of the course?
  • What core research skill is being addressed in this assignment, and how?
  • Is this assignment integrated into the course, providing material to be used in other work within the course?
  • Will this assignment serve to bring in enriching material for the students?
  • Does this assignment encourage my students to think about the type of information they need (factual, background, evaluative), and the form in which they're most likely to find it?
  • Does this assignment help my students distinguish among various types of information sources:  magazine articles, books, academic or research journals, personal web sites, etc?
  • Does this assignment include retrieval of information through some major finding tool such as an index, catalog, database or search engine?
  • Does this assignment provide meaningful practice in using tools in ways that might be helpful in other contexts?
  • Does this assignment ask students to look at information critically -- to evaluate it, to compare it with other information, to synthesize information from different sources, to identify the most crucial pieces of information available?
  • Is this assignment designed so that student success is feasible? Are the likely obstacles, however salutary, also surmountable?

Assignment suggestions:

  • Prepare brief annotated bibliographies. This assignment may ask students to retrieve a variety of sources - articles, books, personal accounts, web sites - and describe the contribution of each source to an understanding of the topic. This can help students develop a sense of the scholarly conversation around a topic.
  • Retrieve and compare two sources of information on the same topic. This helps students become aware of the impact that the author's background, intent and audience may have on the information presented, and may highlight the differences among various disciplines. It works particularly well when students are asked to locate deliberately disparate sources, such as an article from a popular magazine and another from an academic journal 1 , articles from conservative and liberal sources 2 , articles from different disciplines, journal articles and web sites, or a personal and an organizational web site.
  • Look at the treatment of a topic over time. This can build students' awareness of the process of scholarship on a topic -- what do researchers now know that they didn't know before, how might the social context of research have had impact on a topic, etc. It can work for timespans as limited as two years and as wide as a century. It may also heighten awareness that it is not enough to search the last six months in a database!
  • Starting with a significant publication or event within the field, prepare a report on the people or issues involved. 3 This helps students contextualize some of the material, and begins to focus them on the research in the discipline.
  • Review a major journal in the field over time. Through tracing shifts in who is published, what topics are considered of interest, what methodology is used, students develop a sense of a discipline as an evolving entity.
  • Compare items retrieved by searches using two different search engines or databases. 4 Students learn that indexes, databases and even search engines may have different foci and functions. This helps them learn to make deliberate choices about which finding tool to use to locate information in various fields, at differing levels, or in differing formats. Searching a general database such as Academic Search Premier and the standard indexing tool within your discipline might yield some interesting results. (Is the general database useful for an interdisciplinary approach? Are its articles more accessible? Does the specialized index do better for narrow searches?)
  • Starting with a short article or announcement in the popular press, locate the original research on which the popular article was based. Evaluate the accuracy of the announcement. 5 This highlights the distinction between popular and scholarly press, and helps students understand the differences in audience and level of authority.
  • Locate and evaluate reviews of books used in the course. The focus here is on analyzing the reception of a piece of research within a field. Students can gain a sense of the conversation within a discipline by reading scholarly critiques of the material they are reading for class. The retrieval skills it teaches are fairly mechanical and straightforward, but it will acquaint students with local resources, including the basics of finding journals, etc.
  • Locate and compare two contemporary accounts of an event. Heightens awareness of difference in perspective between the immediacy and detail of the contemporary account and the treatment of the event by later scholars. Students are often intrigued with old newspapers and magazines, and finding a topic, then using an index to find another article, helps them understand the use of indexes.  
  • Locate and evaluate the "best" and the "worst" web site on a topic, describing the criteria used and recommending improvements for the "worst" site. Students use search engines or directories to locate web sites, and must develop criteria for judging the pertinence and reliability of the information found.
  • Debates requiring outside research. This works well with controversial topics, encouraging students to support their opinions with analyses and data from the field. Requiring a bibliography of the sources they used gives practice in the mechanics of citation, and helps the instructor assess the range of materials they consulted.
  • Present brief factual background to the class, introducing a new topic. Helps students identify when consulting a reference work (print or electronic) is more efficient than looking for articles or books, and helps students invest in the process of the course itself. It also can mesh well with the oral components of the seminar.
  • Write or present a brief intellectual biography of a scholar identified or read in the course. Although care must be taken to select scholars who are prolific enough to leave a traceable trail, students can locate dissertations, articles and books by the individual, and trace shifts or developments in his/her interests or understanding of the field. This might be combined with checking book reviews of a scholar's work over the course of his/her career.
  • Write a newspaper article on an event. The entire class can research an event, with each individual writing a news story on it. In addition to encouraging students to identify important elements and to summarize, the differences among the stories may alert students to the impact a writer's perspective has on writing.
  • Prepare for a news conference with a scholar read in class, or with a figure involved in some significant event in history. 6 Students must research the scholar's or historical figure's general context to decide what questions they would want to ask, and perhaps prepare questions someone from another culture or time period might pose.
  • Write a proposal for an extended research project. This asks students to do almost everything involved with writing a paper, except the actual writing: they must locate and retrieve information in the field, and analyze how it fits together and perhaps where it does not.
  • Create an anthology of readings on a topic. Select a variety of resources on a topic, and write an introduction that explains how they fit together. Another twist on this (from Wesleyan University Library) is to have students assume they're unable to obtain copyright permission, and so must have a secondary list of resources, with justification for not including them in their optimal collection.
  • Compare the treatment of the same topic in two different disciplines. This helps students both practice physically locating material and learn to identify the perspectives and approaches of different disciplines.  
  • Locate and summarize information to support an editorial on a topic within the course. 7 This helps students identify information needs that might arise outside class, and highlights the importance of approaching opinions critically.
  • Locate two scholarly articles on a topic, and compare and evaluate their bibliographies. 8 Students observe both common and unique sources across the articles, and think about the impact the quality of sources can have on the authority of the article.
  • Create a profile of a species, or of a chemical compound found in a household product. 9 Familiarizes students with the common scientific reference tools, and can introduce them to scientific literature.
* Association of College and Research Libraries/ALA, Information literacy competency standards for higher education, endorsed by the American Association for Higher Education, Council of Independent Colleges, and Middle States Commission on Higher Education, 2000.
1. "Effective library assignments," <www.bgsu.edu/colleges/library/infoserv/lue/effectiveassignments.html>, 5/6/04.
2. K. Huber and P. Lewis, "Tired of Term Papers?" Research Strategies 2 (1984), 192-199.
3. Miriam E. Joseph, "Term Paper Alternatives," <http://www.lib.berkeley.edu/TeachingLib/PaperAlternatives.html>, 5/6/04.
4. "Creating assignments," <www.lib.unb.ca/instruction/assignments.html>, 5/6/04.
5. V. T. Sapziano and J. L. Gibbons, "Brain chemistry and behavior: A new interdisciplinary course," Journal of Chemical Education 63 (1986), 398-399.
6. "Ideas for library assignments," <library.ups.edu/instruct/assign.htm>, 5/6/04.
7. "Creating assignments," <www.lib.unb.ca/instruction/assignments.html>, 5/6/04.
8. "Alternative assignments," <www.library.ohiou.edu/libinfo/depts/refdept/bi/alternatives.htm>, 5/6/04.
9. "Library assignments for lower-division science courses," <[email protected]>.

Designing Writing Assignments


As you think about creating writing assignments, use these five principles:

  • Tie the writing task to specific pedagogical goals.
  • Note rhetorical aspects of the task, i.e., audience, purpose, writing situation.
  • Make all elements of the task clear.
  • Include grading criteria on the assignment sheet.
  • Break down the task into manageable steps.

You'll find discussions of these principles in the following sections of this guide.

Writing Should Meet Teaching Goals


To guarantee that writing tasks tie directly to the teaching goals for your class, ask yourself questions such as the following:

  • What specific course objectives will the writing assignment meet?
  • Will informal or formal writing better meet my teaching goals?
  • Will students be writing to learn course material, to master writing conventions in this discipline, or both?
  • Does the assignment make sense?

Although it might seem awkward at first, working backwards from what you hope the final papers will look like often produces the best assignment sheets. We recommend jotting down several points that will help you with this step in writing your assignments:

  • Why should students write in your class? State your goals for the final product as clearly and concretely as possible.
  • Determine what writing products will meet these goals and fit your teaching style/preferences.
  • Note specific skills that will contribute to the final product.
  • Sequence activities (reading, researching, writing) to build toward the final product.

Successful writing assignments depend on preparation, careful and thorough instructions, and explicit criteria for evaluation. Although your experience with a given assignment will suggest ways of improving a specific paper in your class, the following guidelines should help you anticipate many potential problems and considerably reduce your grading time.

I. Preparation

  • Explain the purpose of the writing assignment.
  • Make the format of the writing assignment fit the purpose (format: research paper, position paper, brief or abstract, lab report, problem-solving paper, etc.).

II. The assignment

  • Provide complete written instructions.
  • Provide format models where possible.
  • Discuss sample strong, average, and weak papers.

III. Revision of written drafts

Where appropriate, peer group workshops on rough drafts of papers may improve the overall quality of papers. For example, have students critique each other's papers one week before the due date for format, organization, or mechanics. For these workshops, outline specific and limited tasks on a checksheet. These workshops also give you an opportunity to make sure that all the students are progressing satisfactorily on the project.

IV. Evaluation

On a grading sheet, indicate the percentage of the grade devoted to content and the percentage devoted to writing skills (expression, punctuation, spelling, mechanics). The grading sheet should indicate the important content features as well as the writing skills you consider significant.


Checksheet 1 (thanks to Kate Kiefer and Donna Lecourt): Have I . . .

  • written out the assignment so that students can take away a copy of the precise task?
  • made clear which course goals this writing task helps students meet?
  • specified the audience and purpose of the assignment?
  • outlined clearly all required sub-parts of the assignment (if any)?
  • included my grading criteria on the assignment sheet?
  • pointed students toward appropriate prewriting activities or sources of information?
  • specified the format of the final paper (including documentation, headings or sections, page layout)?
  • given students models or appropriate samples?
  • set a schedule that will encourage students to review each other's drafts and revise their papers?

Checksheet 2: (thanks to Jean Wyrick)

  • Is the assignment written clearly on the board or on a handout?
  • Do the instructions explain the purpose(s) of the assignment?
  • Does the assignment fit the purpose?
  • Is the assignment stated in precise language that cannot be misunderstood?
  • If choices are possible, are these options clearly marked?
  • Are there instructions for the appropriate format? (examples: length? typed? cover sheet? type of paper?)
  • Are there any special instructions, such as use of a particular citation format or kinds of headings? If so, are these clearly stated?
  • Is the due date clearly visible? (Are late assignments accepted? If so, any penalty?)
  • Are any potential problems anticipated and explained?
  • Are the grading criteria spelled out as specifically as possible? How much does content count? Organization? Writing skills? One grade or separate grades on form and content? Etc.
  • Does the grading criteria section specifically indicate which writing skills the teacher considers important as well as the various aspects of content?
  • What part of the course grade is this assignment?
  • Does the assignment include use of models (strong, average, weak) or sample outlines?

Sample Full-Semester Assignment from Ag Econ 4XX

Good analytical writing is a rigorous and difficult task. It involves a process of editing and rewriting, and it is common to do a half dozen or more drafts. Because of the difficulty of analytical writing and the need for drafting, we will be completing the assignment in four stages. A draft of each of the sections described below is due when we finish the class unit related to that topic (see due dates on syllabus). I will read the drafts of each section and provide comments; these drafts will not be graded but failure to pass in a complete version of a section will result in a deduction in your final paper grade. Because of the time both you and I are investing in the project, it will constitute one-half of your semester grade.

Content, Concepts and Substance

Papers will focus on the peoples and policies related to population, food, and the environment of your chosen country. As well as exploring each of these subsets, papers need to highlight the interrelations among them. These interrelations should form part of your revision focus for the final draft. Important concepts relevant to the papers will be covered in class; therefore, your research should be focused on the collection of information on your chosen country or region to substantiate your themes. Specifically, the paper needs to address the following questions.

  • Population - Developing countries have undergone large changes in population. Explain the dynamic nature of this continuing change in your country or region and the forces underlying the changes. Better papers will go beyond description and analyze the situation at hand. That is, go behind the numbers to explain what is happening in your country with respect to the underlying population dynamics: structure of growth, population momentum, rural/urban migration, age structure of population, unanticipated population shocks, etc. DUE: WEEK 4.
  • Food - What is the nature of food consumption in your country or region? Is the average daily consumption below recommended levels? Is food consumption increasing with economic growth? What is the income elasticity of demand? Use Engel's law to discuss this behavior. Is production able to keep pace with demand given these trends? What is the nature of agricultural production: traditional agriculture or green revolution technology? Is the trend in food production towards self-sufficiency? If not, can comparative advantage explain this? Does the country import or export food? Is the politico-economic regime supportive of a progressive agricultural sector? DUE: WEEK 8.
  • Environment - This is the third issue to be covered in class. It is crucial to show in your paper the environmental impact of agricultural production techniques as well as any direct impacts from population changes. This is especially true in countries that have evolved from traditional agriculture to green revolution techniques in the wake of population pressures. While there are private benefits to increased production, the use of petroleum-based inputs leads to environmental and human health related social costs which are exacerbated by poorly defined property rights. Use the concepts of technological externalities, assimilative capacity, property rights, etc. to explain the nature of this situation in your country or region. What other environmental problems are evident? Discuss the problems and methods for economically measuring environmental degradation. DUE: WEEK 12.
  • Final Draft - The final draft of the project should consider the economic situation of agriculture in your specified country or region from the three perspectives outlined above. Key to such an analysis are the interrelationships of the three perspectives. How does each factor contribute to an overall analysis of the successes and problems in agricultural policy and production of your chosen country or region? The paper may conclude with recommendations, but, at the very least, it should provide a clear summary statement about the challenges facing your country or region. DUE: WEEK 15.

Landscape Architecture 3XX: Design Critique

Critical yet often overlooked components of the landscape architect's professional skills are the ability to evaluate existing designs critically and the ability to express oneself eloquently in writing. To develop your skills in these fundamental areas, you are to write a professional critique of a built project with which you are personally and directly familiar. The critique is intended for the "informed public," such as the readers of similar features in The New York Times or Columbus Monthly; therefore, it should be insightful and professionally valid, yet also entertaining and eloquent. It should reflect a sophisticated knowledge of the subject without being burdened with professional jargon.

As in most critiques or reviews, you are attempting not only to identify the project's good and bad features but also to interpret the project's significance and meaning. As such, the critique should have a clear "point of view" or thesis that is then supported by evidence (your description of the place) that persuades the reader that your thesis is valid. Note, however, that your primary goal is not to force the reader to agree with your point of view but rather to present a valid discussion that enriches and broadens the reader's understanding of the project.

To assist in the development of the best possible paper, you are to submit a typed draft by 1:00 pm, Monday, February 10th. The drafts will be reviewed as a set and will then serve as the basis of an in-class writing improvement seminar on Friday, February 14th. The seminar will focus on problems identified in the set of drafts, so individual papers will not have been commented on or marked. You may also submit a typed draft of your paper to the course instructor for review and comment at any time prior to the final submission.

Final papers are due at 2:00 pm, Friday, February 23rd.

Animal/Dairy/Poultry Science 2XX: Comparative Animal Nutrition

Purpose: Students should be able to integrate lecture and laboratory material, relate class material to industry situations, and improve their problem-solving abilities.

Assignment 1: Weekly laboratory reports (50 points)

For the first laboratory, students will be expected to provide depth and breadth of knowledge, creativity, and proper writing format in a one-page, typed, double-spaced report. Thus, conciseness will be stressed. Five points total will be possible for the first draft, another five points will be available to a student peer-reviewer of the draft, and five final points will be available for a second draft. This assignment, in its entirety, will be due before the first midterm (class 20). Any major writing flaws will be addressed early so that students can grasp concepts stressed by the instructors without major impact on their grades. Additional objectives are to provide students with skills in critically reviewing papers and to acquaint writers and reviewers with the instructors' expectations for assignments 2 and 3, which are weighted much more heavily.

Students will submit seven one-page handwritten reports, each covering the previous week's laboratory. These reports will cover laboratory classes 2-9; note that one report can be dropped and week 10 has no laboratory. Reports will be graded (5 points each) by the instructors for integration of relevant lecture material or prior experience with the current laboratory.

Assignment 2: Group problem-solving approach to a nutritional problem in the animal industry (50 points)

Students will be divided into groups of four. Several problems will be offered by the instructors, but a group can choose an alternative, approved topic. Students should propose a solution to the problem. Because most real-life problems are solved by groups of employees and (or) consultants, this exercise should provide students an opportunity to practice skills they will need after graduation. Groups will divide the assignment as they see fit. However, 25 points will be based on an individual's separate assignment (1-2 typed pages), and 25 points will be based on the group's total document. Thus, it is assumed that papers will be peer-reviewed. The audience intended will be marketing directors, who will need suitable background, illustrations, etc., to help their salespersons sell more products. This assignment will be started in about the second week of class and will be due by class 28.

Assignment 3: Students will develop a topic of their own choosing (approved by instructors) to be written for two audiences (100 points).

The first assignment (25 points) will be written in "common language," e.g., to farmers or salespersons. High clarity of presentation will be expected. It also will be graded for content to assure that the student has developed the topic adequately. This assignment will be due by class 38.

Concomitant with this assignment will be a first draft of a scientific term paper on the same subject. Ten scientific articles and five typed, double-spaced pages are minimum requirements. Basic knowledge of scientific principles will be incorporated into this term paper written to an audience of alumni of this course working in a nutrition-related field. This draft (25 points) will be due by class 38. It will be reviewed by a peer who will receive up to 25 points for his/her critique. It will be returned to the student and instructor by class 43. The final draft, worth an additional 25 points, will be due before class 50 and will be returned to the student during the final exam period.

Integration Papers - HD 3XX

Two papers will be assigned for the semester, each to be no more than three typewritten pages in length. Each paper will be worth 50 points.

Purpose:   The purpose of this assignment is to aid the student in learning skills necessary in forming policy-making decisions and to encourage the student to consider the integral relationship between theory, research, and social policy.

Format:   The student may choose any issue of interest that is appropriate to the socialization focus of the course, but the issue must be clearly stated and the student is advised to carefully limit the scope of the issue question.

There are three sections to the paper:

First:   One page will summarize two conflicting theoretical approaches to the chosen issue. Summarize only what the selected theories may or would say about the particular question you've posed; do not try to summarize the entire theory. Make clear to a reader in what way the two theories disagree or contrast. Your text should provide you with the basic information to do this section.

Second:   On the second page, summarize (abstract) one relevant piece of current research. The research article must be chosen from a professional journal (not a secondary source) written within the last five years. The article should be abstracted and then the student should clearly show how the research relates to the theoretical position(s) stated earlier, in particular, and to the socialization issue chosen in general. Be sure the subjects used, methodology, and assumptions can be reasonably extended to your concern.

Third:   On the third page, the student will present a policy guideline (for example, the Colorado courts should be required to include, on the child's behalf, a child development specialist's testimony at all custody hearings) that can be supported by the information gained and presented in the first two pages. My advice is that you picture a specific audience and the final purpose or use of such a policy guideline. For example, perhaps as a child development specialist you have been requested to present an informed opinion to a federal or state committee whose charge is to develop a particular type of human development program or service. Be specific about your hypothetical situation and this will help you write a realistic policy guideline.

Sample papers will be available in the department reading room.

SP3XX Short Essay Grading Criteria

A (90-100): Thesis is clearly presented in first paragraph. Every subsequent paragraph contributes significantly to the development of the thesis. Final paragraph "pulls together" the body of the essay and demonstrates how the essay as a whole has supported the thesis. In terms of both style and content, the essay is a pleasure to read; ideas are brought forth with clarity and follow each other logically and effortlessly. Essay is virtually free of misspellings, sentence fragments, fused sentences, comma splices, semicolon errors, wrong word choices, and paragraphing errors.

B (80-89): Thesis is clearly presented in first paragraph. Every subsequent paragraph contributes significantly to the development of the thesis. Final paragraph "pulls together" the body of the essay and demonstrates how the essay as a whole has supported the thesis. In terms of style and content, the essay is still clear and progresses logically, but the essay is somewhat weaker due to awkward word choice, sentence structure, or organization. Essay may have a few (approximately 3) instances of misspellings, sentence fragments, fused sentences, comma splices, semicolon errors, wrong word choices, and paragraphing errors.

C (70-79): There is a thesis, but the reader may have to hunt for it a bit. All the paragraphs contribute to the thesis, but the organization of these paragraphs is less than clear. Final paragraph simply summarizes essay without successfully integrating the ideas presented into a unified support for thesis. In terms of style and content, the reader is able to discern the intent of the essay and the support for the thesis, but some amount of mental gymnastics and "reading between the lines" is necessary; the essay is not easy to read, but it still has said some important things. Essay may have instances (approximately 6) of misspellings, sentence fragments, fused sentences, comma splices, semicolon errors, wrong word choices, and paragraphing errors.

D (60-69): Thesis is not clear. Individual paragraphs may have interesting insights, but the paragraphs do not work together well in support of the thesis. In terms of style and content, the essay is difficult to read and to understand, but the reader can see there was a (less than successful) effort to engage a meaningful subject. Essay may have several instances (approximately 6) of misspellings, sentence fragments, fused sentences, comma splices, semicolon errors, wrong word choices, and paragraphing errors.

Teacher Comments

Patrick Fitzhorn, Mechanical Engineering: My expectations for freshmen are relatively high. I'm jaded with the seniors, who keep disappointing me. Often, we don't agree on the grading criteria.

There are three parts to our writing in engineering. The first part is the assignment itself.

There are four types: lab reports, technical papers, design reports, and proposals. The second part is our expectation of growth in writing style at each level of our curriculum, and getting students to understand that high school writing is not acceptable from a senior in college. Third is how we transform our expectations into justifiable grades that give real feedback to the students.

To the freshmen, I might give a page to a page and a half on how I want the design report done. To the seniors, it was three pages long. We try to capture how our expectations change from freshman to senior. I bet the structure is almost identical...

We always give them pretty rigorous outlines. Often, the way students write is to take the outline we give them and write each chunk in turn. For virtually every writing assignment we give, we provide an outline of the writing style we want. These patterns are then used in industry. One organizational pattern works for each of the writing types. Between faculty, some minute details of organization may change, but there is a standard for writers to follow.

Interviewer: How do students determine purpose?

Ken Reardon, Chemical Engineering: Students usually respond to an assignment. That tells them what the purpose is. . . . I think it's something they infer from the assignment sheet.

Interviewer: What types of purposes are there?

Ken Reardon: Persuading is the case with proposals. And informing with progress and the final results. Informing is to just "Here are the results of analysis; here's the answer to the question." It's presenting information. Persuasion is analyzing some information and coming to a conclusion. More of the writing I've seen engineers do is a soft version of persuasion, where they're not trying to sell. "Here's my analysis, here's how I interpreted those results and so here's what I think is worthwhile." Justifying.

Interviewer: Why do students need to be aware of this concept?

Ken Reardon: It helps to tell the reader what they're reading. Without it, readers don't know how to read.

Kate Kiefer. (2018). Designing Writing Assignments. The WAC Clearinghouse. Retrieved from https://wac.colostate.edu/repository/teaching/guides/designing-assignments/. Originally developed for Writing@CSU (https://writing.colostate.edu).


National Research Council (US) Panel on the Evaluation of AIDS Interventions; Coyle SL, Boruch RF, Turner CF, editors. Evaluating AIDS Prevention Programs: Expanded Edition. Washington (DC): National Academies Press (US); 1991.


1 Design and Implementation of Evaluation Research

Evaluation has its roots in the social, behavioral, and statistical sciences, and it relies on their principles and methodologies of research, including experimental design, measurement, statistical tests, and direct observation. What distinguishes evaluation research from other social science is that its subjects are ongoing social action programs that are intended to produce individual or collective change. This setting usually engenders a great need for cooperation between those who conduct the program and those who evaluate it. This need for cooperation can be particularly acute in the case of AIDS prevention programs because those programs have been developed rapidly to meet the urgent demands of a changing and deadly epidemic.

Although the characteristics of AIDS intervention programs place some unique demands on evaluation, the techniques for conducting good program evaluation do not need to be invented. Two decades of evaluation research have provided a basic conceptual framework for undertaking such efforts (see, e.g., Campbell and Stanley [1966] and Cook and Campbell [1979] for discussions of outcome evaluation; see Weiss [1972] and Rossi and Freeman [1982] for process and outcome evaluations); in addition, similar programs, such as the antismoking campaigns, have been subject to evaluation, and they offer examples of the problems that have been encountered.

In this chapter the panel provides an overview of the terminology, types, designs, and management of research evaluation. The following chapter provides an overview of program objectives and the selection and measurement of appropriate outcome variables for judging the effectiveness of AIDS intervention programs. These issues are discussed in detail in the subsequent, program-specific Chapters 3-5.

Types of Evaluation

The term evaluation implies a variety of different things to different people. The recent report of the Committee on AIDS Research and the Behavioral, Social, and Statistical Sciences defines the area through a series of questions (Turner, Miller, and Moses, 1989:317-318):

Evaluation is a systematic process that produces a trustworthy account of what was attempted and why; through the examination of results—the outcomes of intervention programs—it answers the questions, "What was done?" "To whom, and how?" and "What outcomes were observed?" Well-designed evaluation permits us to draw inferences from the data and addresses the difficult question: "What do the outcomes mean?"

These questions differ in the degree of difficulty of answering them. An evaluation that tries to determine the outcomes of an intervention and what those outcomes mean is a more complicated endeavor than an evaluation that assesses the process by which the intervention was delivered. Both kinds of evaluation are necessary because they are intimately connected: to establish a project's success, an evaluator must first ask whether the project was implemented as planned and then whether its objective was achieved. Questions about a project's implementation usually fall under the rubric of process evaluation . If the investigation involves rapid feedback to the project staff or sponsors, particularly at the earliest stages of program implementation, the work is called formative evaluation . Questions about effects or effectiveness are often variously called summative evaluation, impact assessment, or outcome evaluation, the term the panel uses.

Formative evaluation is a special type of early evaluation that occurs during and after a program has been designed but before it is broadly implemented. Formative evaluation is used to understand the need for the intervention and to make tentative decisions about how to implement or improve it. During formative evaluation, information is collected and then fed back to program designers and administrators to enhance program development and maximize the success of the intervention. For example, formative evaluation may be carried out through a pilot project before a program is implemented at several sites. A pilot study of a community-based organization (CBO), for example, might be used to gather data on problems involving access to and recruitment of targeted populations and the utilization and implementation of services; the findings of such a study would then be used to modify (if needed) the planned program.

Another example of formative evaluation is the use of a "story board" design of a TV message that has yet to be produced. A story board is a series of text and sketches of camera shots that are to be produced in a commercial. To evaluate the effectiveness of the message and forecast some of the consequences of actually broadcasting it to the general public, an advertising agency convenes small groups of people to react to and comment on the proposed design.

Once an intervention has been implemented, the next stage of evaluation is process evaluation, which addresses two broad questions: "What was done?" and "To whom, and how?" Ordinarily, process evaluation is carried out at some point in the life of a project to determine how and how well the delivery goals of the program are being met. When intervention programs continue over a long period of time (as is the case for some of the major AIDS prevention programs), measurements at several times are warranted to ensure that the components of the intervention continue to be delivered by the right people, to the right people, in the right manner, and at the right time. Process evaluation can also play a role in improving interventions by providing the information necessary to change delivery strategies or program objectives in a changing epidemic.

Research designs for process evaluation include direct observation of projects, surveys of service providers and clients, and the monitoring of administrative records. The panel notes that the Centers for Disease Control (CDC) is already collecting some administrative records on its counseling and testing program and community-based projects. The panel believes that this type of evaluation should be a continuing and expanded component of intervention projects to guarantee the maintenance of the projects' integrity and responsiveness to their constituencies.

The purpose of outcome evaluation is to identify consequences and to establish that consequences are, indeed, attributable to a project. This type of evaluation answers the questions, "What outcomes were observed?" and, perhaps more importantly, "What do the outcomes mean?" Like process evaluation, outcome evaluation can also be conducted at intervals during an ongoing program, and the panel believes that such periodic evaluation should be done to monitor goal achievement.

The panel believes that these stages of evaluation (i.e., formative, process, and outcome) are essential to learning how AIDS prevention programs contribute to containing the epidemic. After a body of findings has been accumulated from such evaluations, it may be fruitful to launch another stage of evaluation: cost-effectiveness analysis (see Weinstein et al., 1989). Like outcome evaluation, cost-effectiveness analysis also measures program effectiveness, but it extends the analysis by adding a measure of program cost. The panel believes that consideration of cost-effectiveness analysis should be postponed until more experience is gained with formative, process, and outcome evaluation of the CDC AIDS prevention programs.

Evaluation Research Design

Process and outcome evaluations require different types of research designs, as discussed below. Formative evaluations, which are intended to both assess implementation and forecast effects, use a mix of these designs.

Process Evaluation Designs

To conduct process evaluations on how well services are delivered, data need to be gathered on the content of interventions and on their delivery systems. Suggested methodologies include direct observation, surveys, and record keeping.

Direct observation designs include case studies, in which participant-observers unobtrusively and systematically record encounters within a program setting, and nonparticipant observation, in which long, open-ended (or "focused") interviews are conducted with program participants. 1 For example, "professional customers" at counseling and testing sites can act as project clients to monitor activities unobtrusively; 2 alternatively, nonparticipant observers can interview both staff and clients. Surveys —either censuses (of the whole population of interest) or samples—elicit information through interviews or questionnaires completed by project participants or potential users of a project. For example, surveys within community-based projects can collect basic statistical information on project objectives, what services are provided, to whom, when, how often, for how long, and in what context.

Record keeping consists of administrative or other reporting systems that monitor use of services. Standardized reporting ensures consistency in the scope and depth of data collected. To use the media campaign as an example, the panel suggests using standardized data on the use of the AIDS hotline to monitor public attentiveness to the advertisements broadcast by the media campaign.

These designs are simple to understand, but they require expertise to implement. For example, observational studies must be conducted by people who are well trained in how to carry out on-site tasks sensitively and to record their findings uniformly. Observers can either complete narrative accounts of what occurred in a service setting or they can complete some sort of data inventory to ensure that multiple aspects of service delivery are covered. These types of studies are time consuming and benefit from corroboration among several observers. The use of surveys in research is well-understood, although they, too, require expertise to be well implemented. As the program chapters reflect, survey data collection must be carefully designed to reduce problems of validity and reliability and, if samples are used, to design an appropriate sampling scheme. Record keeping or service inventories are probably the easiest research designs to implement, although preparing standardized internal forms requires attention to detail about salient aspects of service delivery.

Outcome Evaluation Designs

Research designs for outcome evaluations are meant to assess principal and relative effects. Ideally, to assess the effect of an intervention on program participants, one would like to know what would have happened to the same participants in the absence of the program. Because it is not possible to make this comparison directly, inference strategies that rely on proxies have to be used. Scientists use three general approaches to construct proxies for use in the comparisons required to evaluate the effects of interventions: (1) nonexperimental methods, (2) quasi-experiments, and (3) randomized experiments. The first two are discussed below, and randomized experiments are discussed in the subsequent section.

Nonexperimental and Quasi-Experimental Designs 3

The most common form of nonexperimental design is a before-and-after study. In this design, pre-intervention measurements are compared with equivalent measurements made after the intervention to detect change in the outcome variables that the intervention was designed to influence.
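To make the comparison concrete, the following minimal sketch (with entirely hypothetical data, not drawn from any study cited here) shows how a before-and-after analysis typically reduces to a paired comparison of the same participants' pre- and postintervention measurements:

    # Minimal sketch of a before-and-after comparison on hypothetical data.
    # A paired test can detect pre/post change in an outcome, but it cannot
    # by itself attribute that change to the intervention.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical risk-behavior scores for 40 participants, before and after.
    pre = rng.normal(loc=50, scale=10, size=40)
    post = pre - rng.normal(loc=4, scale=8, size=40)   # apparent improvement

    t_stat, p_value = stats.ttest_rel(pre, post)       # paired t-test
    print(f"mean change = {np.mean(pre - post):.2f}, p = {p_value:.3f}")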

Although the panel finds that before-and-after studies frequently provide helpful insights, the panel believes that these studies do not provide sufficiently reliable information to be the cornerstone for evaluation research on the effectiveness of AIDS prevention programs. The panel's conclusion follows from the fact that the postintervention changes cannot usually be attributed unambiguously to the intervention. 4 Plausible competing explanations for differences between pre- and postintervention measurements will often be numerous, including not only the possible effects of other AIDS intervention programs, news stories, and local events, but also the effects that may result from the maturation of the participants and the educational or sensitizing effects of repeated measurements, among others.

Quasi-experimental and matched control designs provide a separate comparison group. In these designs, the control group may be selected by matching nonparticipants to participants in the treatment group on the basis of selected characteristics. It is difficult to ensure the comparability of the two groups even when they are matched on many characteristics because other relevant factors may have been overlooked or mismatched or they may be difficult to measure (e.g., the motivation to change behavior). In some situations, it may simply be impossible to measure all of the characteristics of the units (e.g., communities) that may affect outcomes, much less demonstrate their comparability.

Matched control designs require extraordinarily comprehensive scientific knowledge about the phenomenon under investigation in order for evaluators to be confident that all of the relevant determinants of outcomes have been properly accounted for in the matching. Three types of information or knowledge are required: (1) knowledge of intervening variables that also affect the outcome of the intervention and, consequently, need adjustment to make the groups comparable; (2) measurements on all intervening variables for all subjects; and (3) knowledge of how to make the adjustments properly, which in turn requires an understanding of the functional relationship between the intervening variables and the outcome variables. Satisfying each of these information requirements is likely to be more difficult than answering the primary evaluation question, "Does this intervention produce beneficial effects?"

Given the size and the national importance of AIDS intervention programs and given the state of current knowledge about behavior change in general and AIDS prevention, in particular, the panel believes that it would be unwise to rely on matching and adjustment strategies as the primary design for evaluating AIDS intervention programs. With differently constituted groups, inferences about results are hostage to uncertainty about the extent to which the observed outcome actually results from the intervention and is not an artifact of intergroup differences that may not have been removed by matching or adjustment.

Randomized Experiments

A remedy to the inferential uncertainties that afflict nonexperimental designs is provided by randomized experiments . In such experiments, one singly constituted group is established for study. A subset of the group is then randomly chosen to receive the intervention, with the other subset becoming the control. The two groups are not identical, but they are comparable. Because they are two random samples drawn from the same population, they are not systematically different in any respect, which is important for all variables—both known and unknown—that can influence the outcome. Dividing a singly constituted group into two random and therefore comparable subgroups cuts through the tangle of causation and establishes a basis for the valid comparison of respondents who do and do not receive the intervention. Randomized experiments provide for clear causal inference by solving the problem of group comparability, and may be used to answer the evaluation questions "Does the intervention work?" and "What works better?"

Which question is answered depends on whether the controls receive an intervention or not. When the object is to estimate whether a given intervention has any effects, individuals are randomly assigned to the project or to a zero-treatment control group. The control group may be put on a waiting list or simply not get the treatment. This design addresses the question, "Does it work?"

When the object is to compare variations on a project—e.g., individual counseling sessions versus group counseling—then individuals are randomly assigned to these two regimens, and there is no zero-treatment control group. This design addresses the question, "What works better?" In either case, the control groups must be followed up as rigorously as the experimental groups.
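As an illustration of this logic (a minimal sketch using hypothetical outcome data, not a prescription for any particular program), randomly splitting one singly constituted group into treatment and control arms yields groups that differ only by chance, so a simple difference in mean outcomes estimates the intervention's effect:

    # Minimal sketch of random assignment in a two-arm trial (hypothetical data).
    import numpy as np

    rng = np.random.default_rng(42)
    n = 200  # participants recruited from one singly constituted group

    # Random assignment: exactly half to treatment, half to control.
    assignment = rng.permutation(np.repeat([True, False], n // 2))

    # Hypothetical outcomes: a baseline score plus a true treatment effect of 3 points.
    baseline = rng.normal(loc=60, scale=12, size=n)
    outcome = baseline + np.where(assignment, 3.0, 0.0) + rng.normal(scale=5, size=n)

    # Because assignment is random, the difference in means is an unbiased
    # estimate of the effect, for known and unknown background variables alike.
    effect_estimate = outcome[assignment].mean() - outcome[~assignment].mean()
    print(f"estimated effect = {effect_estimate:.2f}")

Replacing the zero-treatment arm with an alternative version of the intervention turns the same design into a "What works better?" comparison.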

A randomized experiment requires that individuals, organizations, or other treatment units be randomly assigned to one of two or more treatments or program variations. Random assignment ensures that the estimated differences between the groups so constituted are statistically unbiased; that is, that any differences in effects measured between them are a result of treatment. The absence of statistical bias in groups constituted in this fashion stems from the fact that random assignment ensures that there are no systematic differences between them, differences that can and usually do affect groups composed in ways that are not random. 5 The panel believes this approach is far superior for outcome evaluations of AIDS interventions than the nonrandom and quasi-experimental approaches. Therefore,

To improve interventions that are already broadly implemented, the panel recommends the use of randomized field experiments of alternative or enhanced interventions.

Under certain conditions, the panel also endorses randomized field experiments with a nontreatment control group to evaluate new interventions. In the context of a deadly epidemic, ethics dictate that treatment not be withheld simply for the purpose of conducting an experiment. Nevertheless, there may be times when a randomized field test of a new treatment with a no-treatment control group is worthwhile. One such time is during the design phase of a major or national intervention.

Before a new intervention is broadly implemented, the panel recommends that it be pilot tested in a randomized field experiment.

The panel considered the use of experiments with delayed rather than no treatment. A delayed-treatment control group strategy might be pursued when resources are too scarce for an intervention to be widely distributed at one time. For example, a project site that is waiting to receive funding for an intervention would be designated as the control group. If it is possible to randomize which projects in the queue receive the intervention, an evaluator could measure and compare outcomes after the experimental group had received the new treatment but before the control group received it. The panel believes that such a design can be applied only in limited circumstances, such as when groups would have access to related services in their communities and when conducting the study would be likely to lead to greater access or better services. For example, a study cited in Chapter 4 used a randomized delayed-treatment experiment to measure the effects of a community-based risk reduction program. However, such a strategy may be impractical for several reasons, including:

  • sites waiting for funding for an intervention might seek resources from another source;
  • it might be difficult to enlist the nonfunded site and its clients to participate in the study;
  • there could be an appearance of favoritism toward projects whose funding was not delayed.

Although randomized experiments have many benefits, the approach is not without pitfalls. In the planning stages of evaluation, it is necessary to contemplate certain hazards, such as the Hawthorne effect 6 and differential project dropout rates. Precautions must be taken either to prevent these problems or to measure their effects. Fortunately, there is some evidence suggesting that the Hawthorne effect is usually not very large (Rossi and Freeman, 1982:175-176).

Attrition is potentially more damaging to an evaluation, and it must be limited if the experimental design is to be preserved. If sample attrition is not limited in an experimental design, it becomes necessary to account for the potentially biasing impact of the loss of subjects in the treatment and control conditions of the experiment. The statistical adjustments required to make inferences about treatment effectiveness in such circumstances can introduce uncertainties that are as worrisome as those afflicting nonexperimental and quasi-experimental designs. Thus, the panel's recommendation of the selective use of randomized design carries an implicit caveat: To realize the theoretical advantages offered by randomized experimental designs, substantial efforts will be required to ensure that the designs are not compromised by flawed execution.

Another pitfall to randomization is its appearance of unfairness or unattractiveness to participants and the controversial legal and ethical issues it sometimes raises. Often, what is being criticized is the control of project assignment of participants rather than the use of randomization itself. In deciding whether random assignment is appropriate, it is important to consider the specific context of the evaluation and how participants would be assigned to projects in the absence of randomization. The Federal Judicial Center (1981) offers five threshold conditions for the use of random assignment.

  • Does present practice or policy need improvement?
  • Is there significant uncertainty about the value of the proposed regimen?
  • Are there acceptable alternatives to randomized experiments?
  • Will the results of the experiment be used to improve practice or policy?
  • Is there a reasonable protection against risk for vulnerable groups (i.e., individuals within the justice system)?

The parent committee has argued that these threshold conditions apply in the case of AIDS prevention programs (see Turner, Miller, and Moses, 1989:331-333).

Although randomization may be desirable from an evaluation and ethical standpoint, and acceptable from a legal standpoint, it may be difficult to implement from a practical or political standpoint. Again, the panel emphasizes that questions about the practical or political feasibility of the use of randomization may in fact refer to the control of program allocation rather than to the issues of randomization itself. In fact, when resources are scarce, it is often more ethical and politically palatable to randomize allocation rather than to allocate on grounds that may appear biased.

It is usually easier to defend the use of randomization when the choice has to do with assignment to groups receiving alternative services than when the choice involves assignment to groups receiving no treatment. For example, in comparing a testing and counseling intervention that offered a special "skills training" session in addition to its regular services with a counseling and testing intervention that offered no additional component, random assignment of participants to one group rather than another may be acceptable to program staff and participants because the relative values of the alternative interventions are unknown.

The more difficult issue is the introduction of new interventions that are perceived to be needed and effective in a situation in which there are no services. An argument that is sometimes offered against the use of randomization in this instance is that interventions should be assigned on the basis of need (perhaps as measured by rates of HIV incidence or of high-risk behaviors). But this argument presumes that the intervention will have a positive effect—which is unknown before evaluation—and that relative need can be established, which is a difficult task in itself.

The panel recognizes that community and political opposition to randomization to zero treatments may be strong and that enlisting participation in such experiments may be difficult. This opposition and reluctance could seriously jeopardize the production of reliable results if it is translated into noncompliance with a research design. The feasibility of randomized experiments for AIDS prevention programs has already been demonstrated, however (see the review of selected experiments in Turner, Miller, and Moses, 1989:327-329). The substantial effort involved in mounting randomized field experiments is repaid by the fact that they can provide unbiased evidence of the effects of a program.

Unit of Assignment.

The unit of assignment of an experiment may be an individual person, a clinic (i.e., the clientele of the clinic), or another organizational unit (e.g., the community or city). The treatment unit is selected at the earliest stage of design. Variations of units are illustrated in the following four examples of intervention programs.

(1) Two different pamphlets (A and B) on the same subject (e.g., testing) are distributed in an alternating sequence to individuals calling an AIDS hotline. The outcome to be measured is whether the recipient returns a card asking for more information.

(2) Two instruction curricula (A and B) about AIDS and HIV infections are prepared for use in high school driver education classes. The outcome to be measured is a score on a knowledge test.

(3) Of all clinics for sexually transmitted diseases (STDs) in a large metropolitan area, some are randomly chosen to introduce a change in the fee schedule. The outcome to be measured is the change in patient load.

(4) A coordinated set of community-wide interventions—involving community leaders, social service agencies, the media, community associations and other groups—is implemented in one area of a city. Outcomes are knowledge as assessed by testing at drug treatment centers and STD clinics and condom sales in the community's retail outlets.

In example (1), the treatment unit is an individual person who receives pamphlet A or pamphlet B. If either "treatment" is applied again, it would be applied to a person. In example (2), the high school class is the treatment unit; everyone in a given class experiences either curriculum A or curriculum B. If either treatment is applied again, it would be applied to a class. The treatment unit is the clinic in example (3), and in example (4), the treatment unit is a community.

The consistency of the effects of a particular intervention across repetitions justly carries a heavy weight in appraising the intervention. It is important to remember that repetitions of a treatment or intervention are the number of treatment units to which the intervention is applied. This is a salient principle in the design and execution of intervention programs as well as in the assessment of their results.

The adequacy of the proposed sample size (number of treatment units) has to be considered in advance. Adequacy depends mainly on two factors:

  • How much variation occurs from unit to unit among units receiving a common treatment? If that variation is large, then the number of units needs to be large.
  • What is the minimum size of a possible treatment difference that, if present, would be practically important? That is, how small a treatment difference is it essential to detect if it is present? The smaller this quantity, the larger the number of units that are necessary.

Many formal methods for considering and choosing sample size exist (see, e.g., Cohen, 1988). Practical circumstances occasionally allow choosing between designs that involve units at different levels; thus, a classroom might be the unit if the treatment is applied in one way, but an entire school might be the unit if the treatment is applied in another. When both approaches are feasible, the use of a power analysis for each approach may lead to a reasoned choice.
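For example, under the usual normal-approximation formula for comparing two group means, the two factors above translate directly into a required number of units per arm. The sketch below uses hypothetical planning values for the unit-to-unit standard deviation (sigma) and the smallest difference worth detecting (delta); these are illustrative assumptions, not figures from the panel's report:

    # Minimal sketch of a sample-size calculation for a two-arm comparison.
    import math
    from scipy.stats import norm

    def n_per_group(sigma, delta, alpha=0.05, power=0.80):
        """Approximate units needed per arm to detect a mean difference of
        delta when the outcome's unit-to-unit standard deviation is sigma."""
        z_alpha = norm.ppf(1 - alpha / 2)
        z_beta = norm.ppf(power)
        return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

    # Larger unit-to-unit variation, or a smaller difference that must be
    # detected, both push the required number of units up.
    print(n_per_group(sigma=10, delta=4))   # 99 units per arm
    print(n_per_group(sigma=10, delta=2))   # 393 units per arm

When the treatment unit could be either a classroom or an entire school, running this kind of power calculation for each candidate unit can support the reasoned choice mentioned above.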

Choice of Methods

There is some controversy about the advantages of randomized experiments in comparison with other evaluative approaches. It is the panel's belief that when a (well executed) randomized study is feasible, it is superior to alternative kinds of studies in the strength and clarity of whatever conclusions emerge, primarily because the experimental approach avoids selection biases. 7 Other evaluation approaches are sometimes unavoidable, but ordinarily the accumulation of valid information will go more slowly and less securely than in randomized approaches.

Experiments in medical research shed light on the advantages of carefully conducted randomized experiments. The Salk vaccine trials are a successful example of a large, randomized study. In a double-blind test of the polio vaccine, 8 children in various communities were randomly assigned to two treatments, either the vaccine or a placebo. By this method, the effectiveness of Salk vaccine was demonstrated in one summer of research (Meier, 1957).

A sufficient accumulation of relevant observational information, especially when collected in studies using different procedures and sample populations, may also clearly demonstrate the effectiveness of a treatment or intervention. The process of accumulating such information can be a long one, however. When a (well-executed) randomized study is feasible, it can provide evidence that is subject to less uncertainty in its interpretation, and it can often do so in a more timely fashion. In the midst of an epidemic, the panel believes it proper that randomized experiments be one of the primary strategies for evaluating the effectiveness of AIDS prevention efforts. In making this recommendation, however, the panel also wishes to emphasize that the advantages of the randomized experimental design can be squandered by poor execution (e.g., by compromised assignment of subjects, significant subject attrition rates, etc.). To achieve the advantages of the experimental design, care must be taken to ensure that the integrity of the design is not compromised by poor execution.

In proposing that randomized experiments be one of the primary strategies for evaluating the effectiveness of AIDS prevention programs, the panel also recognizes that there are situations in which randomization will be impossible or, for other reasons, cannot be used. In its next report the panel will describe at length appropriate nonexperimental strategies to be considered in situations in which an experiment is not a practical or desirable alternative.

The Management of Evaluation

Conscientious evaluation requires a considerable investment of funds, time, and personnel. Because the panel recognizes that resources are not unlimited, it suggests that they be concentrated on the evaluation of a subset of projects to maximize the return on investment and to enhance the likelihood of high-quality results.

Project Selection

Deciding which programs or sites to evaluate is by no means a trivial matter. Selection should be carefully weighed so that projects that are not replicable or that have little chance for success are not subjected to rigorous evaluations.

The panel recommends that any intensive evaluation of an intervention be conducted on a subset of projects selected according to explicit criteria. These criteria should include the replicability of the project, the feasibility of evaluation, and the project's potential effectiveness for prevention of HIV transmission.

Replicability means that the particular circumstances of service delivery in a project can be duplicated: for CBOs and counseling and testing projects, the content and setting of an intervention can be reproduced across sites. Feasibility of evaluation means that, as a practical matter, the research can be done: the research design is adequate to control for rival hypotheses, it is not excessively costly, and the project is acceptable to the community and the sponsor. Potential effectiveness for HIV prevention means that, if the intervention has not already been found to be effective in related circumstances, it is at least based on a reasonable theory (or mix of theories) of behavioral change, such as social learning theory (Bandura, 1977) or the health belief model (Janz and Becker, 1984).

In addition, since it is important to ensure that the results of evaluations will be broadly applicable,

The panel recommends that evaluation be conducted and replicated across major types of subgroups, programs, and settings. Attention should be paid to geographic areas with low and high AIDS prevalence, as well as to subpopulations at low and high risk for AIDS.

Research Administration

The sponsoring agency interested in evaluating an AIDS intervention should consider the mechanisms through which the research will be carried out as well as the desirability of both independent oversight and agency in-house conduct and monitoring of the research. The appropriate entities and mechanisms for conducting evaluations depend to some extent on the kinds of data being gathered and the evaluation questions being asked.

Oversight and monitoring are important to keep projects fully informed about the other evaluations relevant to their own and to render assistance when needed. Oversight and monitoring are also important because evaluation is often a sensitive issue for project and evaluation staff alike. The panel is aware that evaluation may appear threatening to practitioners and researchers because of the possibility that evaluation research will show that their projects are not as effective as they believe them to be. These needs and vulnerabilities should be taken into account as evaluation research management is developed.

Conducting the Research

To conduct some aspects of a project's evaluation, it may be appropriate to involve project administrators, especially when the data will be used to evaluate delivery systems (e.g., to determine when and which services are being delivered). To evaluate outcomes, the services of an outside evaluator 9 or evaluation team are almost always required because few practitioners have the professional experience, time, or resources necessary to do evaluation. The outside evaluator must have relevant expertise in evaluation research methodology and must also be sensitive to the fears, hopes, and constraints of project administrators.

Several evaluation management schemes are possible. For example, a prospective AIDS prevention project group (the contractor) can bid on a contract for project funding that includes an intensive evaluation component. The actual evaluation can be conducted either by the contractor alone or by the contractor working in concert with an outside independent collaborator. This mechanism has the advantage of involving project practitioners in the work of evaluation as well as building separate but mutually informing communities of experts around the country. Alternatively, a contract can be let with a single evaluator or evaluation team that will collaborate with the subset of sites that is chosen for evaluation. This variation would be managerially less burdensome than awarding separate contracts, but it would require greater dependence on the expertise of a single investigator or investigative team. (Appendix A discusses contracting options in greater depth.) Both of these approaches accord with the parent committee's recommendation that collaboration between practitioners and evaluation researchers be ensured. Finally, in the more traditional evaluation approach, independent principal investigators or investigative teams may respond to a request for proposal (RFP) issued to evaluate individual projects. Such investigators are frequently university-based or are members of a professional research organization, and they bring to the task a variety of research experiences and perspectives.

Independent Oversight

The panel believes that coordination and oversight of multisite evaluations is critical because of the variability in investigators' expertise and in the results of the projects being evaluated. Oversight can provide quality control for individual investigators and can be used to review and integrate findings across sites for developing policy. The independence of an oversight body is crucial to ensure that project evaluations do not succumb to the pressures for positive findings of effectiveness.

When evaluation is to be conducted by a number of different evaluation teams, the panel recommends establishing an independent scientific committee to oversee project selection and research efforts, corroborate the impartiality and validity of results, conduct cross-site analyses, and prepare reports on the progress of the evaluations.

The composition of such an independent oversight committee will depend on the research design of a given program. For example, the committee ought to include statisticians and other specialists in randomized field tests when that approach is being taken. Specialists in survey research and case studies should be recruited if either of those approaches is to be used. Appendix B offers a model for an independent oversight group that has been successfully implemented in other settings—a project review team, or advisory board.

Agency In-House Team

As the parent committee noted in its report, evaluations of AIDS interventions require skills that may be in short supply for agencies invested in delivering services (Turner, Miller, and Moses, 1989:349). Although this situation can be partly alleviated by recruiting professional outside evaluators and retaining an independent oversight group, the panel believes that an in-house team of professionals within the sponsoring agency is also critical. The in-house experts will interact with the outside evaluators and provide input into the selection of projects, outcome objectives, and appropriate research designs; they will also monitor the progress and costs of evaluation. These functions require not just bureaucratic oversight but appropriate scientific expertise.

This is not intended to preclude the direct involvement of CDC staff in conducting evaluations. However, given the great amount of work to be done, it is likely that a considerable portion of it will have to be contracted out. The quality and usefulness of the evaluations done under contract can be greatly enhanced by ensuring that an adequate number of CDC staff trained in evaluation research methods are available to monitor these contracts.

The panel recommends that CDC recruit and retain behavioral, social, and statistical scientists trained in evaluation methodology to facilitate the implementation of the evaluation research recommended in this report.

Interagency Collaboration

The panel believes that the federal agencies that sponsor the design of basic research, intervention programs, and evaluation strategies would profit from greater interagency collaboration. The evaluation of AIDS intervention programs would benefit from a coherent program of studies that should provide models of efficacious and effective interventions to prevent further HIV transmission, the spread of other STDs, and unwanted pregnancies (especially among adolescents). A marriage could then be made of basic and applied science, from which the best evaluation is born. Exploring the possibility of interagency collaboration and CDC's role in such collaboration is beyond the scope of this panel's task, but it is an important issue that we suggest be addressed in the future.

Costs of Evaluation

In view of the dearth of current evaluation efforts, the panel believes that vigorous evaluation research must be undertaken over the next few years to build up a body of knowledge about what interventions can and cannot do. Dedicating no resources to evaluation will virtually guarantee that high-quality evaluations will be infrequent and that the data needed for policy decisions will be sparse or absent. Yet evaluating every project is not feasible, simply because there are not enough resources, and in many cases it is not necessary for good science or good policy.

The panel believes that evaluating only some of a program's sites or projects, selected under the criteria noted in Chapter 4 , is a sensible strategy. Although we recommend that intensive evaluation be conducted on only a subset of carefully chosen projects, we believe that high-quality evaluation will require a significant investment of time, planning, personnel, and financial support. The panel's aim is to be realistic—not discouraging—when it notes that the costs of program evaluation should not be underestimated. Many of the research strategies proposed in this report require investments that are perhaps greater than has been previously contemplated. This is particularly the case for outcome evaluations, which are ordinarily more difficult and expensive to conduct than formative or process evaluations. And those costs will be additive with each type of evaluation that is conducted.

Panel members have found that the cost of an outcome evaluation sometimes equals or even exceeds the cost of actual program delivery. For example, it was reported to the panel that randomized studies used to evaluate recent manpower training projects cost as much as the projects themselves (see Cottingham and Rodriguez, 1987). In another case, the principal investigator of an ongoing AIDS prevention project told the panel that the cost of randomized experimentation was approximately three times higher than the cost of delivering the intervention (albeit the study was quite small, involving only 104 participants) (Kelly et al., 1989). Fortunately, only a fraction of a program's projects or sites need to be intensively evaluated to produce high-quality information, and not all will require randomized studies.

Because of the variability in the kinds of evaluation that will be done as well as in the costs involved, there is no set standard or rule for judging what fraction of a total program budget should be invested in evaluation. Based upon very limited data 10 and assuming that only a small sample of projects would be evaluated, the panel suspects that program managers might reasonably anticipate spending 8 to 12 percent of their intervention budgets to conduct high-quality evaluations (i.e., formative, process, and outcome evaluations). 11 Larger investments seem politically infeasible and unwise in view of the need to put resources into program delivery. Smaller investments in evaluation risk studying an inadequate sample of program types and may also invite compromises in research quality.
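
As a rough illustration of this rule of thumb (the intervention budget in the example is hypothetical), the implied planning range can be computed directly:

    def evaluation_budget_range(intervention_budget, low=0.08, high=0.12):
        """Planning range implied by the 8-12 percent guideline."""
        return intervention_budget * low, intervention_budget * high

    # Hypothetical intervention budget of $5 million.
    low, high = evaluation_budget_range(5_000_000)
    print(f"${low:,.0f} to ${high:,.0f}")   # $400,000 to $600,000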

The nature of the HIV/AIDS epidemic mandates an unwavering commitment to prevention programs, and the prevention activities require a similar commitment to the evaluation of those programs. The magnitude of what can be learned from doing good evaluations will more than balance the magnitude of the costs required to perform them. Moreover, it should be realized that the costs of shoddy research can be substantial, both in their direct expense and in the lost opportunities to identify effective strategies for AIDS prevention. Once the investment has been made, however, and a reservoir of findings and practical experience has accumulated, subsequent evaluations should be easier and less costly to conduct.

  • Bandura, A. (1977) Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review 84:191-215.
  • Campbell, D. T., and Stanley, J. C. (1966) Experimental and Quasi-Experimental Design and Analysis. Boston: Houghton-Mifflin.
  • Centers for Disease Control (CDC) (1988) Sourcebook presented at the National Conference on the Prevention of HIV Infection and AIDS Among Racial and Ethnic Minorities in the United States (August).
  • Cohen, J. (1988) Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, N.J.: L. Erlbaum Associates.
  • Cook, T., and Campbell, D. T. (1979) Quasi-Experimentation: Design and Analysis for Field Settings. Boston: Houghton-Mifflin.
  • Federal Judicial Center (1981) Experimentation in the Law. Washington, D.C.: Federal Judicial Center.
  • Janz, N. K., and Becker, M. H. (1984) The health belief model: A decade later. Health Education Quarterly 11(1):1-47.
  • Kelly, J. A., St. Lawrence, J. S., Hood, H. V., and Brasfield, T. L. (1989) Behavioral intervention to reduce AIDS risk activities. Journal of Consulting and Clinical Psychology 57:60-67.
  • Meier, P. (1957) Safety testing of poliomyelitis vaccine. Science 125(3257):1067-1071.
  • Roethlisberger, F. J., and Dickson, W. J. (1939) Management and the Worker. Cambridge, Mass.: Harvard University Press.
  • Rossi, P. H., and Freeman, H. E. (1982) Evaluation: A Systematic Approach. 2nd ed. Beverly Hills, Cal.: Sage Publications.
  • Turner, C. F., Miller, H. G., and Moses, L. E., eds. (1989) AIDS, Sexual Behavior, and Intravenous Drug Use. Report of the NRC Committee on AIDS Research and the Behavioral, Social, and Statistical Sciences. Washington, D.C.: National Academy Press.
  • Weinstein, M. C., Graham, J. D., Siegel, J. E., and Fineberg, H. V. (1989) Cost-effectiveness analysis of AIDS prevention programs: Concepts, complications, and illustrations. In C. F. Turner, H. G. Miller, and L. E. Moses, eds., AIDS, Sexual Behavior, and Intravenous Drug Use. Report of the NRC Committee on AIDS Research and the Behavioral, Social, and Statistical Sciences. Washington, D.C.: National Academy Press.
  • Weiss, C. H. (1972) Evaluation Research. Englewood Cliffs, N.J.: Prentice-Hall, Inc.

1. On occasion, nonparticipants observe behavior during or after an intervention. Chapter 3 introduces this option in the context of formative evaluation.

2. The use of professional customers can raise serious concerns in the eyes of project administrators at counseling and testing sites. The panel believes that site administrators should receive advance notification that professional customers may visit their sites for testing and counseling services and should provide their consent before this method of data collection is used.

3. Parts of this section are adapted from Turner, Miller, and Moses (1989:324-326).

4. This weakness has been noted by CDC in a sourcebook provided to its HIV intervention project grantees (CDC, 1988:F-14).

5. The significance tests applied to experimental outcomes calculate the probability that any observed differences between the sample estimates might result from random variation between the groups.

6. Research participants' knowledge that they were being observed had a positive effect on their responses in a series of famous studies conducted at Western Electric's Hawthorne Works near Chicago (Roethlisberger and Dickson, 1939); the phenomenon is referred to as the Hawthorne effect.

7. Participants who self-select into a program are likely to differ from nonrandom comparison groups in interests, motivations, values, abilities, and other attributes that can bias the outcomes.

8. A double-blind test is one in which neither the person receiving the treatment nor the person administering it knows which treatment (or whether any treatment) is being given.

9. As discussed under "Agency In-House Team," the outside evaluator might be one of CDC's personnel. However, given the large amount of research to be done, it is likely that non-CDC evaluators will also need to be used.

10. See, for example, Chapter 3, which presents cost estimates for evaluations of media campaigns. Similar estimates are not readily available for other program types.

11. For example, the U.K. Health Education Authority (that country's primary agency for AIDS education and prevention programs) allocates 10 percent of its AIDS budget for research and evaluation of its AIDS programs (D. McVey, Health Education Authority, personal communication, June 1990). This allocation covers both process and outcome evaluation.
