Investigating Ideas and Solutions


This page continues working through the stages of problem solving as laid out in Problem Solving - An Introduction.

This page provides detailed information on 'Stage Three' of the problem solving process - finding possible solutions to problems. In group situations, this involves finding ways to involve everybody actively: encouraging participation and generating as many ideas and possible solutions as you can.

Stage Three: Possible Solutions

Brainstorming

Brainstorming is one of the most commonly used techniques for generating a large number of ideas in a short period of time.  Whilst it can be done individually, it is more often practised in groups.

Before a brainstorming session begins, the leader or facilitator encourages everyone to contribute as many ideas as possible, no matter how irrelevant or absurd they may seem.

There should be lots of large sheets of paper, Post-It notes and/or flip charts available, so that any ideas generated can be written down in such a way that everyone present can see them.

The Rules of Brainstorming

The facilitator should explain the purpose of the brainstorming session (outline the problem/s), and emphasise the four rules of brainstorming that must be adhered to:

  • Absolutely no criticism of any suggestion or person is allowed.  Positive feedback for all ideas should be encouraged.
  • The aim is to produce as many ideas as possible.
  • The aim is to generate a sense of creative momentum.  There should be a feeling of excitement in the group, with ideas being produced at a rapid pace.  All ideas should be encouraged, regardless of how irrelevant, 'stupid' or 'off the mark' they might seem.
  • Ideas should cross-fertilise each other: everyone should continually look at the suggestions of the rest of the group and see if these spark any new ideas, each person feeding off the ideas of the others.

Warming-up exercises encourage participants to get in the right frame of mind for creative thinking.  The exercises should be fun and exciting, with the facilitator encouraging everyone to think up wild and creative ideas in rapid succession.  Possible topics could be: 'What would you wish to have with you if you were stranded on a desert island?' or 'Design a better mousetrap!'

It is better if the warm-up problems are somewhat absurd, as this will encourage the uncritical, free-flowing creativity needed to confront the later, real problem.  A time limit of ten minutes is useful for the group to come up with as many ideas as possible, each being written down for all to see.  Remember, the aim is to develop an uncritical, creative momentum in the group.

The definition of the problem arrived at earlier in the problem solving process should be written up, so that everyone is clearly focused on the problem in hand.  Sometimes it may be useful to have more than one definition.

As in the warm-up exercises, a time limit is usually set for the group to generate their ideas, each one being written up without comment from the facilitator.  It helps to keep them in order so the progression of ideas can be seen later.  If the brainstorming session seems productive, it is as well to let it continue until all possible avenues have been explored. However, setting a time limit may also instil a sense of urgency and may result in a flurry of new ideas a few minutes before the time runs out.

At the end of the session, time is given to reflect on and to discuss the suggestions, perhaps to clarify some of the ideas and then consider how to deal with them.  Perhaps further brainstorming sessions may be valuable in order to consider some of the more fruitful ideas.

See our page Brainstorming Techniques for lots more ideas on how to use brainstorming effectively.

Divergent and Convergent Thinking

Divergent thinking:

Divergent thinking is the process of recalling possible solutions from past experience, or inventing new ones.  Thoughts spread out or 'diverge' along a number of paths to a range of possible solutions.  It is the process from which many of the following creative problem solving techniques have been designed.

Convergent thinking:

Convergent thinking is the subsequent process of narrowing down the possibilities to 'converge' on the most appropriate form of action.

The elements necessary for divergent thinking include:

  • Releasing the mind from old patterns of thought and other inhibiting influences.
  • Bringing the elements of a problem into new combinations.
  • Not rejecting any ideas during the creative, problem solving period.
  • Actively practicing, encouraging and rewarding the creation of new ideas.

Techniques of Divergent Thinking:

Often when people get stuck in trying to find a solution to a problem, it is because they are continually trying to approach it from the same starting point.  The same patterns of thinking are continually followed over and over again, with reliance placed on familiar solutions or strategies.

If problems can be thought of in different ways - a fresh approach - then previous patterns of thought, biases and cycles may be avoided.

Three techniques of divergent thinking are to:

  • Bring in someone else from a different area.
  • Question any assumptions being made.
  • Use creative problem solving techniques such as 'brainstorming'.

Bring in Someone Else From a Different Area:

While it is obviously helpful to involve people who are more knowledgeable about the issues involved in a problem, sometimes non-experts can be equally valuable, or more so. This is because they do not know what the 'common solutions' are and can therefore tackle the problem with a more open mind, introducing a fresh perspective.

Another advantage of having non-experts on the team is that it forces the 'experts' to explain their reasoning in simple terms.  This very act of explanation can often help them to clarify their own thinking, and it sometimes uncovers inconsistencies and errors in their reasoning.

Another way of gaining a fresh viewpoint, if the problem is not urgent, is to put it aside for a while and then return to it at a later date and tackle it afresh. It is important not to look at any of your old solutions or ideas during this second look in order to maintain this freshness of perspective.

Questioning Assumptions:

Sometimes problem solving runs into difficulties because it is based on the wrong assumptions.  For example, if a new sandwich shop is unsuccessful in attracting customers, has it been questioned whether there are sufficient office workers or shoppers in the local area?  Great effort might be spent in attempting to improve the range and quality of the sandwiches, when questioning this basic assumption might reveal a better, if perhaps unpopular, solution.

Listing assumptions is a good starting point.  However, this is not as easy as it first appears, for many basic assumptions may not be clearly understood or may seem so obvious that they are not questioned. Again, someone totally unconnected with the problem is often able to offer a valuable contribution to this questioning process, acting as 'devil's advocate', i.e. questioning the most obvious of assumptions.

Such questions could include:

  • What has been done in similar circumstances in the past?  Why was it done that way? Is it the best/only way?
  • What is the motivation for solving the problem? Are there any influences such as prejudices or emotions involved?

Of course, many assumptions that need to be questioned are specific to a particular problem.


What is the Scientific Method: How does it work and why is it important?

The scientific method is a systematic process involving steps like defining questions, forming hypotheses, conducting experiments, and analyzing data. It minimizes biases and enables replicable research, leading to groundbreaking discoveries like Einstein's theory of relativity, penicillin, and the structure of DNA. This ongoing approach promotes reason, evidence, and the pursuit of truth in science.


Beginning in elementary school, we are exposed to the scientific method and taught how to put it into practice. As a tool for learning, it prepares children to think logically and use reasoning when seeking answers to questions.

Rather than jumping to conclusions, the scientific method gives us a recipe for exploring the world through observation and trial and error. We use it regularly, sometimes knowingly in academics or research, and sometimes subconsciously in our daily lives.

In this article we will refresh our memories on the particulars of the scientific method, discussing where it comes from, the elements that make it up, and how it is put into practice. Then, we will consider the importance of the scientific method, who uses it, and under what circumstances.

What is the scientific method?

The scientific method is a dynamic process that involves objectively investigating questions through observation and experimentation. Applicable to all scientific disciplines, this systematic approach to answering questions is more accurately described as a flexible set of principles than as a fixed series of steps.

The following representations of the scientific method illustrate how it can be both condensed into broad categories and also expanded to reveal more and more details of the process. These graphics capture the adaptability that makes this concept universally valuable as it is relevant and accessible not only across age groups and educational levels but also within various contexts.

[Image: a graph of the scientific method]

Steps in the scientific method

While the scientific method is versatile in form and function, it encompasses a collection of principles that create a logical progression to the process of problem solving:

  • Define a question: Constructing a clear and precise problem statement that identifies the main question or goal of the investigation is the first step. The wording must lend itself to experimentation by posing a question that is both testable and measurable.
  • Gather information and resources: Researching the topic in question to find out what is already known and what types of related questions others are asking is the next step in this process. This background information is vital to gaining a full understanding of the subject and in determining the best design for experiments.
  • Form a hypothesis: Composing a concise statement that identifies specific variables and potential results, which can then be tested, is a crucial step that must be completed before any experimentation. An imperfection in the composition of a hypothesis can result in weaknesses to the entire design of an experiment.
  • Perform the experiments: Testing the hypothesis by performing replicable experiments and collecting resultant data is another fundamental step of the scientific method. By controlling some elements of an experiment while purposely manipulating others, cause and effect relationships are established.
  • Analyze the data: Interpreting the experimental process and results by recognizing trends in the data is a necessary step for comprehending its meaning and supporting the conclusions. Drawing inferences through this systematic process lends substantive evidence for either supporting or rejecting the hypothesis.
  • Report the results: Sharing the outcomes of an experiment, through an essay, presentation, graphic, or journal article, is often regarded as a final step in this process. Detailing the project's design, methods, and results not only promotes transparency and replicability but also adds to the body of knowledge for future research.
  • Retest the hypothesis: Repeating experiments to see if a hypothesis holds up in all cases is a step that is manifested through varying scenarios. Sometimes a researcher immediately checks their own work or replicates it at a future time, or another researcher will repeat the experiments to further test the hypothesis.

[Image: a chart of the scientific method]
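
To make the progression concrete, here is a minimal sketch of these steps as a runnable loop, using a toy question: does fertilizer increase plant height? Everything here (the names, numbers, and simulated data) is an illustrative assumption, not part of the article or any real study.

```python
# A toy walk through the scientific-method steps (illustrative only).
import random
import statistics

def run_experiment(use_fertilizer, n=30):
    """Simulate measuring n plant heights (cm); fertilizer adds a small effect."""
    base, effect = 20.0, 2.0
    return [random.gauss(base + (effect if use_fertilizer else 0.0), 3.0)
            for _ in range(n)]

# 1-3. Define a question, gather background (omitted), and form a hypothesis:
#      "Fertilized plants grow taller on average than unfertilized ones."
random.seed(1)

# 4. Perform the experiment: hold every factor fixed except the one manipulated.
treated = run_experiment(use_fertilizer=True)
control = run_experiment(use_fertilizer=False)

# 5. Analyze the data: compare group means.
diff = statistics.mean(treated) - statistics.mean(control)

# 6. Report the results.
print(f"mean height difference: {diff:.2f} cm")

# 7. Retest: rerunning with fresh random samples is the replication step;
#    the hypothesis should hold up across repetitions.
```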

Where did the scientific method come from?

Oftentimes, ancient peoples attempted to answer questions about the unknown by:

  • Making simple observations
  • Discussing the possibilities with others deemed worthy of a debate
  • Drawing conclusions based on dominant opinions and preexisting beliefs

For example, take Greek and Roman mythology. Myths were used to explain everything from the seasons and stars to the sun and death itself.

However, as societies began to grow through advancements in agriculture and language, ancient civilizations like Egypt and Babylonia shifted to a more rational analysis for understanding the natural world. They increasingly employed empirical methods of observation and experimentation that would one day evolve into the scientific method.

In the 4th century BCE, Aristotle, considered the Father of Science by many, suggested these elements, which closely resemble the contemporary scientific method, as part of his approach for conducting science:

  • Study what others have written about the subject.
  • Look for the general consensus about the subject.
  • Perform a systematic study of everything even partially related to the topic.

[Image: a pyramid of the scientific method]

By continuing to emphasize systematic observation and controlled experiments, scholars such as Al-Kindi and Ibn al-Haytham helped expand this concept throughout the Islamic Golden Age.

In his 1620 treatise, Novum Organum, Sir Francis Bacon codified the scientific method, arguing not only that hypotheses must be tested through experiments but also that the results must be replicated to establish a truth. Coming at the height of the Scientific Revolution, this text made the scientific method accessible to European thinkers like Galileo and Isaac Newton, who then put the method into practice.

As science modernized in the 19th century, the scientific method became more formalized, leading to significant breakthroughs in fields such as evolution and germ theory. Today, it continues to evolve, underpinning scientific progress in diverse areas like quantum mechanics, genetics, and artificial intelligence.

Why is the scientific method important?

The history of the scientific method illustrates how the concept developed out of a need to find objective answers to scientific questions by overcoming biases based on fear, religion, power, and cultural norms. This still holds true today.

By implementing this standardized approach to conducting experiments, the impacts of researchers' personal opinions and preconceived notions are minimized. The organized manner of the scientific method guards against these and other sources of bias while promoting the replicability and transparency necessary for solid scientific research.

The importance of the scientific method is best observed through its successes, for example: 

  • "Albert Einstein stands out among modern physicists as the scientist who not only formulated a theory of revolutionary significance but also had the genius to reflect in a conscious and technical way on the scientific method he was using." Devising a hypothesis based on the prevailing understanding of Newtonian physics eventually led Einstein to the theory of general relativity.
  • Howard Florey: "Perhaps the most useful lesson which has come out of the work on penicillin has been the demonstration that success in this field depends on the development and coordinated use of technical methods." After discovering a mold that prevented the growth of Staphylococcus bacteria, Dr. Alexander Fleming designed experiments to identify and reproduce it in the lab, thus leading to the development of penicillin.
  • James D. Watson: "Every time you understand something, religion becomes less likely. Only with the discovery of the double helix and the ensuing genetic revolution have we had grounds for thinking that the powers held traditionally to be the exclusive property of the gods might one day be ours. . . ." By using wire models to conceive a structure for DNA, Watson and Crick crafted hypotheses that could be tested against combinations of bases, X-ray diffraction images, and the current research in atomic physics, resulting in the discovery of DNA's double helix structure.

Final thoughts

As these cases exemplify, the scientific method is never truly completed, but rather started and restarted. It gave these researchers a structured process that could easily be replicated, modified, and built upon.

While the scientific method may “end” in one context, it never literally ends. When a hypothesis, design, methods, and experiments are revisited, the scientific method simply picks up where it left off. Each time a researcher builds upon previous knowledge, the scientific method is restored with the pieces of past efforts.

By guiding researchers towards objective results based on transparency and reproducibility, the scientific method acts as a defense against bias, superstition, and preconceived notions. As we embrace the scientific method's enduring principles, we ensure that our quest for knowledge remains firmly rooted in reason, evidence, and the pursuit of truth.



Taking Science to School: Learning and Teaching Science in Grades K-8 (2007)

Chapter 5: Generating and Evaluating Scientific Evidence and Explanations

Major Findings in the Chapter:

  • Children are far more competent in their scientific reasoning than first suspected, and adults are less so. Furthermore, there is great variation in the sophistication of reasoning strategies across individuals of the same age.
  • In general, children are less sophisticated than adults in their scientific reasoning. However, experience plays a critical role in facilitating the development of many aspects of reasoning, often trumping age.
  • Scientific reasoning is intimately intertwined with conceptual knowledge of the natural phenomena under investigation. This conceptual knowledge sometimes acts as an obstacle to reasoning, but often facilitates it.
  • Many aspects of scientific reasoning require experience and instruction to develop. For example, distinguishing between theory and evidence and many aspects of modeling do not emerge without explicit instruction and opportunities for practice.

In this chapter, we discuss the various lines of research related to Strand 2—generate and evaluate evidence and explanations. The ways in which scientists generate and evaluate scientific evidence and explanations have long been the focus of study in philosophy, history, anthropology, and sociology. More recently, psychologists and learning scientists have begun to study the cognitive and social processes involved in building scientific knowledge. For our discussion, we draw primarily from the past 20 years of research in developmental and cognitive psychology that investigates how children's scientific thinking develops across the K-8 years.

We begin by developing a broad sketch of how key aspects of scientific thinking develop across the K-8 years, contrasting children’s abilities with those of adults. This contrast allows us to illustrate both how children’s knowledge and skill can develop over time and situations in which adults’ and children’s scientific thinking are similar. Where age differences exist, we comment on what underlying mechanisms might be responsible for them. In this research literature, two broad themes emerge, which we take up in detail in subsequent sections of the chapter. The first is the role of prior knowledge in scientific thinking at all ages. The second is the importance of experience and instruction.

Scientific investigation, broadly defined, includes numerous procedural and conceptual activities, such as asking questions, hypothesizing, designing experiments, making predictions, using apparatus, observing, measuring, being concerned with accuracy, precision, and error, recording and interpreting data, consulting data records, evaluating evidence, verification, reacting to contradictions or anomalous data, presenting and assessing arguments, constructing explanations (to oneself and others), constructing various representations of the data (graphs, maps, three-dimensional models), coordinating theory and evidence, performing statistical calculations, making inferences, and formulating and revising theories or models (e.g., Carey et al., 1989; Chi et al., 1994; Chinn and Malhotra, 2001; Keys, 1994; McNay and Melville, 1993; Schauble et al., 1995; Slowiaczek et al., 1992; Zachos et al., 2000). As noted in Chapter 2, over the past 20 to 30 years, the image of "doing science" emerging from across multiple lines of research has shifted from depictions of lone scientists conducting experiments in isolated laboratories to the image of science as both an individual and a deeply social enterprise that involves problem solving and the building and testing of models and theories.

Across this same period, the psychological study of science has evolved from a focus on scientific reasoning as a highly developed form of logical thinking that cuts across scientific domains to the study of scientific thinking as the interplay of general reasoning strategies, knowledge of the natural phenomena being studied, and a sense of how scientific evidence and explanations are generated. Much early research on scientific thinking and inquiry tended to focus primarily either on conceptual development or on the development of reasoning strategies and processes, often using very simplified reasoning tasks. In contrast, many recent studies have attempted to describe a larger number of the complex processes that are deployed in the context of scientific inquiry and to describe their coordination. These studies often engage children in firsthand investigations in which they actively explore multivariable systems. In such tasks, participants initiate all phases of scientific discovery with varying amounts of guidance provided by the researcher. These studies have revealed that, in the context of inquiry, reasoning processes and conceptual knowledge are interdependent and in fact facilitate each other (Schauble, 1996; Lehrer et al., 2001).

It is important to note that, across the studies reviewed in this chapter, researchers have made different assumptions about what scientific reasoning entails and which aspects of scientific practice are most important to study. For example, some emphasize the design of well-controlled experiments, while others emphasize building and critiquing models of natural phenomena. In addition, some researchers study scientific reasoning in stripped-down, laboratory-based tasks, while others examine how children approach complex inquiry tasks in the context of the classroom. As a result, the research base is difficult to integrate and does not offer a complete picture of students' skills and knowledge related to generating and evaluating evidence and explanations. Nor does the underlying view of scientific practice guiding much of the research fully reflect the image of science and scientific understanding we developed in Chapter 2.

TRENDS ACROSS THE K-8 YEARS

Generating Evidence

The evidence-gathering phase of inquiry includes designing the investigation as well as carrying out the steps required to collect the data. Generating evidence entails asking questions, deciding what to measure, developing measures, collecting data from the measures, structuring the data, systematically documenting outcomes of the investigations, interpreting and evaluating the data, and using the empirical results to develop and refine arguments, models, and theories.

Asking Questions and Formulating Hypotheses

Asking questions and formulating hypotheses is often seen as the first step in the scientific method; however, it can better be viewed as one of several phases in an iterative cycle of investigation. In an exploratory study, for example, work might start with structured observation of the natural world, which would lead to formulation of specific questions and hypotheses. Further data might then be collected, which lead to new questions, revised hypotheses, and yet another round of data collection. The phase of asking questions also includes formulating the goals of the activity and generating hypotheses and predictions (Kuhn, 2002).

Children differ from adults in their strategies for formulating hypotheses and in the appropriateness of the hypotheses they generate. Children often propose different hypotheses from adults (Klahr, 2000), and younger children (age 10) often conduct experiments without explicit hypotheses, unlike 12- to 14-year-olds (Penner and Klahr, 1996a). In self-directed experimental tasks, children tend to focus on plausible hypotheses and often get stuck focusing on a single hypothesis (e.g., Klahr, Fay, and Dunbar, 1993). Adults are more likely to consider multiple hypotheses (e.g., Dunbar and Klahr, 1989; Klahr, Fay, and Dunbar, 1993). For both children and adults, the ability to consider many alternative hypotheses is a factor contributing to success.

At all ages, prior knowledge of the domain under investigation plays an important role in the formulation of questions and hypotheses (Echevarria, 2003; Klahr, Fay, and Dunbar, 1993; Penner and Klahr, 1996b; Schauble, 1990, 1996; Zimmerman, Raghavan, and Sartoris, 2003). For example, both children and adults are more likely to focus initially on variables they believe to be causal (Kanari and Millar, 2004; Schauble, 1990, 1996). Hypotheses that predict expected results are proposed more frequently than hypotheses that predict unexpected results (Echevarria, 2003). The role of prior knowledge in hypothesis formulation is discussed in greater detail later in the chapter.

Designing Experiments

The design of experiments has received extensive attention in the research literature, with an emphasis on developmental changes in children's ability to build experiments that allow them to identify causal variables. Experimentation can serve to generate observations in order to induce a hypothesis to account for the pattern of data produced (discovery context) or to test the tenability of an existing hypothesis under consideration (confirmation/verification context) (Klahr and Dunbar, 1988). At a minimum, one must recognize that the process of experimentation involves generating observations that will serve as evidence that will be related to hypotheses.

Ideally, experimentation should produce evidence or observations that are interpretable in order to make the process of evidence evaluation uncomplicated. One aspect of experimentation skill is to isolate variables in such a way as to rule out competing hypotheses. The control of variables is a basic strategy that allows valid inferences and narrows the number of possible experiments to consider (Klahr, 2000). Confounded experiments, those in which variables have not been isolated correctly, yield indeterminate evidence, thereby making valid inferences and subsequent knowledge gain difficult, if not impossible.
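
To make the control-of-variables idea concrete, here is a minimal sketch (the factor names and ramp setups are invented for illustration, loosely echoing the ramp tasks cited later in this chapter): a comparison between two setups licenses a causal inference only when exactly one factor differs.

```python
# Illustrative sketch of the control-of-variables strategy (not study code).
def is_controlled_comparison(setup_a, setup_b):
    """A comparison is unconfounded only if exactly one factor differs."""
    differing = [f for f in setup_a if setup_a[f] != setup_b[f]]
    return len(differing) == 1

ramp_a = {"slope": "steep",   "surface": "smooth", "ball": "heavy"}
ramp_b = {"slope": "shallow", "surface": "smooth", "ball": "heavy"}
ramp_c = {"slope": "shallow", "surface": "rough",  "ball": "light"}

print(is_controlled_comparison(ramp_a, ramp_b))  # True: only slope varies
print(is_controlled_comparison(ramp_a, ramp_c))  # False: confounded, so any
                                                 # outcome difference is indeterminate
```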

Early approaches to examining experimentation skills involved minimizing the role of prior knowledge in order to focus on the strategies that participants used. That is, the goal was to examine the domain-general strategies that apply regardless of the content to which they are applied. For example, building on the research tradition of Piaget (e.g., Inhelder and Piaget, 1958), Siegler and Liebert (1975) examined the acquisition of experimental design skills by fifth and eighth graders. The problem involved determining how to make an electric train run. The train was connected to a set of four switches, and the children needed to determine the particular on/off configuration required. The train was in reality controlled by a secret switch, so that the discovery of the correct solution was postponed until all 16 combinations were generated. In this task, there was no principled reason why any one of the combinations would be more or less likely, and success was achieved by systematically testing all combinations of the four switches. Thus the task involved no domain-specific knowledge that would constrain the hypotheses about which configuration was most likely. Kuhn and Phelps (1982) used a similarly knowledge-lean task, based on one originally employed by Inhelder and Piaget (1958), which involved identifying the reaction properties of a set of colorless fluids. Success on the task was dependent on the ability to isolate and control variables in the set of all possible fluid combinations in order to determine which was causally related to the outcome. The study extended over several weeks with variations in the fluids used and the difficulty of the problem.
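
The switch task rewards purely systematic search. As a sketch of what "testing all combinations" amounts to (illustrative code, not from the study), four binary switches yield 2**4 = 16 configurations, each tested exactly once:

```python
# Exhaustively enumerate all 16 on/off configurations of four switches.
from itertools import product

for trial, combo in enumerate(product(("off", "on"), repeat=4), start=1):
    settings = ", ".join(f"switch {i} {state}" for i, state in enumerate(combo, 1))
    print(f"trial {trial:2d}: {settings}")  # in the study, each trial tests the train
```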

In both studies, the importance of practice and instructional support was apparent. Siegler and Liebert’s study included two experimental groups of children who received different kinds of instructional support. Both groups were taught about factors, levels, and tree diagrams. One group received additional, more elaborate support that included practice and help representing all possible solutions with a tree diagram. For fifth graders, the more elaborate instructional support improved their performance compared with a control group that did not receive any support. For eighth graders, both kinds of instructional support led to improved performance. In the Kuhn and Phelps task, some students improved over the course of the study, although an abrupt change from invalid to valid strategies was not common. Instead, the more typical pattern was one in which valid and invalid strategies coexisted both within and across sessions, with a pattern of gradual attainment of stable valid strategies by some students (the stabilization point varied but was typically around weeks 5-7).

Since this early work, researchers have tended to investigate children's and adults' performance on experimental design tasks that are more knowledge rich and less constrained. Results from these studies indicate that, in general, adults are more proficient than children at designing informative experiments. In a study comparing adults with third and sixth graders, adults were more likely to focus on experiments that would be informative (Klahr, Fay, and Dunbar, 1993). Similarly, Schauble (1996) found that during the initial 3 weeks of exploring a domain, children and adults considered about the same number of possible experiments. However, when they began experimenting in a second domain during the next 3 weeks of the study, adults considered a greater range of possible experiments. Over the full 6 weeks, children and adults conducted approximately the same number of experiments. Thus, children were more likely to conduct unintended duplicate or triplicate experiments, making their experimentation efforts less informative relative to the adults, who were selecting a broader range of experiments. Similarly, children were more likely to devote multiple experimental trials to variables that were already well understood, whereas adults moved on to exploring variables they did not understand as well (Klahr, Fay, and Dunbar, 1993; Schauble, 1996). Evidence also indicates, however, that dimensions of the task often have a greater influence on performance than age (Linn, 1978, 1980; Linn, Chen, and Thier, 1977; Linn and Levine, 1978).

Children are also less likely than adults to control one variable at a time. For example, Schauble (1996) found that across two task domains, children used controlled comparisons about a third of the time. In contrast, adults improved from 50 percent usage on the first task to 63 percent on the second task. Children usually begin by designing confounded experiments (often as a means to produce a desired outcome), but with repeated practice begin to use a strategy of changing one variable at a time (e.g., Kuhn, Schauble, and Garcia-Mila, 1992; Kuhn et al., 1995; Schauble, 1990).

Reminiscent of the results of the earlier study by Kuhn and Phelps, both children and adults display intraindividual variability in strategy usage. That is, multiple strategy usage is not unique to childhood or periods of developmental transition (Kuhn et al., 1995). A robust finding is the coexistence of valid and invalid strategies (e.g., Kuhn, Schauble, and Garcia-Mila, 1992; Garcia-Mila and Andersen, 2005; Gleason and Schauble, 2000; Schauble, 1990; Siegler and Crowley, 1991; Siegler and Shipley, 1995). That is, participants may progress to the use of a valid strategy, but then return to an inefficient or invalid strategy. Similar use of multiple strategies has been found in research on the development of other academic skills, such as mathematics (e.g., Bisanz and LeFevre, 1990; Siegler and Crowley, 1991), reading (e.g., Perfetti, 1992), and spelling (e.g., Varnhagen, 1995). With respect to experimentation strategies, an individual may begin with an invalid strategy, but once the usefulness of changing one variable at a time is discovered, it is not immediately used exclusively. The newly discovered, effective strategy is only slowly incorporated into an individual's set of strategies.

An individual's perception of the goals of an investigation also has an important effect on the hypotheses they generate and their approach to experimentation. Individuals tend to differ in whether they see the overarching goal of an inquiry task as seeking to identify which factors make a difference (scientific) or seeking to produce a desired effect (engineering). It is a question for further research whether these different approaches characterize an individual, or whether they are invoked by task demands or implicit assumptions.

In a direct exploration of the effect of adopting scientific versus engineering goals, Schauble, Klopfer, and Raghavan (1991) provided fifth and sixth graders with an “engineering context” and a “science context.” When the children were working as scientists, their goal was to determine which factors made a difference and which ones did not. When the children were working as engineers, their goal was optimization, that is, to produce a desired effect (i.e., the fastest boat in the canal task). When working in the science context, the children worked more systematically, by establishing the effect of each variable, alone and in combination. There was an effort to make inclusion inferences (i.e., an inference that a factor is causal) and exclusion inferences (i.e., an inference that a factor is not causal). In the engineering context, children selected highly contrastive combinations and focused on factors believed to be causal while overlooking factors believed or demonstrated to be noncausal. Typically, children took a “try-and-see” approach to experimentation while acting as engineers, but they took a theory-driven approach to experimentation when acting as scientists. Schauble et al. (1991) found that children who received the engineering instructions first, followed by the scientist instructions, made the greatest improvements. Similarly, Sneider et al. (1984) found that students’ ability to plan and critique experiments improved when they first engaged in an engineering task of designing rockets.

Another pair of contrasting approaches to scientific investigation is the theorist versus the experimentalist (Klahr and Dunbar, 1998; Schauble, 1990). Similar variation in strategies for problem solving has been observed for chess, puzzles, physics problems, science reasoning, and even elementary arithmetic (Chase and Simon, 1973; Klahr and Robinson, 1981; Klayman and Ha, 1989; Kuhn et al., 1995; Larkin et al., 1980; Lovett and Anderson, 1995, 1996; Simon, 1975; Siegler, 1987; Siegler and Jenkins, 1989). Individuals who take a theory-driven approach tend to generate hypotheses and then test the predictions of the hypotheses. Experimenters tend to make data-driven discoveries, by generating data and finding the hypothesis that best summarizes or explains that data. For example, Penner and Klahr (1996a) asked 10- to 14-year-olds to conduct experiments to determine how the shape, size, material, and weight of an object influence sinking times. Students' approaches to the task could be classified as either "prediction oriented" (i.e., a theorist: "I believe that weight makes a difference") or "hypothesis oriented" (i.e., an experimenter: "I wonder if …"). The 10-year-olds were more likely to take a prediction (or demonstration) approach, whereas the 14-year-olds were more likely to explicitly test a hypothesis about an attribute without a strong belief or need to demonstrate that belief. Although these patterns may characterize approaches to any given task, it has yet to be determined if such styles are idiosyncratic to the individual and likely to remain stable across varying tasks, or if different styles might emerge for the same person depending on task demands or the domain under investigation.

Observing and Recording

Record keeping is an important component of scientific investigation in general, and of self-directed experimental tasks especially, because access to and consulting of cumulative records are often important in interpreting evidence. Early studies of experimentation demonstrated that children are often not aware of their own memory limitations, and this plays a role in whether they document their work during an investigation (e.g., Siegler and Liebert, 1975). Recent studies corroborate the importance of an awareness of one’s own memory limitations while engaged in scientific inquiry tasks, regardless of age. Spontaneous note-taking or other documentation of experimental designs and results may be a factor contributing to the observed developmental differences in performance on both experimental design tasks and in evaluation of evidence. Carey et al. (1989) reported that, prior to instruction, seventh graders did not spontaneously keep records when trying to determine and keep track of which substance was responsible for producing a bubbling reaction in a mixture of yeast, flour, sugar, salt, and warm water. Nevertheless, even though preschoolers are likely to produce inadequate and uninformative notations, they can distinguish between the two when asked to choose between them (Triona and Klahr, in press). Dunbar and Klahr (1988) also noted that children (grades 3-6) were unlikely to check if a current hypothesis was or was not consistent with previous experimental results. In a study by Trafton and Trickett (2001), undergraduates solving scientific reasoning problems in a computer environment were more likely to achieve correct performance when using the notebook function (78 percent) than were nonusers (49 percent), showing that this issue is not unique to childhood.

In a study of fourth graders' and adults' spontaneous use of notebooks during a 10-week investigation of multivariable systems, all but one of the adults took notes, whereas only half of the children took notes. Moreover, despite variability in the amount of notebook usage in both groups, on average adults made three times more notebook entries than children did. Adults' note-taking remained stable across the 10 weeks, but children's frequency of use decreased over time, dropping to about half of their initial usage. Children rarely reviewed their notes, which typically consisted of conclusions, but not the variables used or the outcomes of the experimental tests (i.e., the evidence for the conclusion was not recorded) (Garcia-Mila and Andersen, 2005).

Children may differentially record the results of experiments, depending on familiarity or strength of prior theories. For example, 10- to 14-year-olds recorded more data points when experimenting with factors affecting force produced by the weight and surface area of boxes than when they were experimenting with pendulums (Kanari and Millar, 2004). Overall, it is a fairly robust finding that children are less likely than adults to record experimental designs and outcomes or to review what notes they do keep, despite task demands that clearly necessitate a reliance on external memory aids.

Given the increasing attention to the importance of metacognition for proficient performance on such tasks (e.g., Kuhn and Pearsall, 1998, 2000), it is important to determine at what point children and early adolescents recognize their own memory limitations as they navigate through a complex task. Some studies show that children’s understanding of how their own memories work continues to develop across the elementary and middle school grades (Siegler and Alibali, 2005). The implication is that there is no particular age or grade level when memory and limited understanding of one’s own memory are no longer a consideration. As such, knowledge of how one’s own memory works may represent an important moderating variable in understanding the development of scientific reasoning (Kuhn, 2001). For example, if a student is aware that it will be difficult for her to remember the results of multiple trials, she may be more likely to carefully record each outcome. However, it may also be the case that children, like adult scientists, need to be inducted into the practice of record keeping and the use of records. They are likely to need support to understand the important role of records in generating scientific evidence and supporting scientific arguments.

Evaluating Evidence

The important role of evidence evaluation in the process of scientific activity has long been recognized. Kuhn (1989), for example, has argued that the defining feature of scientific thinking is the set of skills involved in differentiating and coordinating theory and evidence. Various strands of research provide insight on how children learn to engage in this phase of scientific inquiry. There is an extensive literature on the evaluation of evidence, beginning with early research on identifying patterns of covariation and cause that used highly structured experimental tasks. More recently researchers have studied how children evaluate evidence in the context of self-directed experimental tasks. In real-world contexts (in contrast to highly controlled laboratory tasks) the process of evidence evaluation is very messy and requires an understanding of error and variation. As was the case for hypothesis generation and the design of experiments, the role of prior knowledge and beliefs has emerged as an important influence on how individuals evaluate evidence.

Covariation Evidence

A number of early studies on the development of evidence evaluation skills used knowledge-lean tasks that asked participants to evaluate existing data. These data were typically in the form of covariation evidence—that is, the frequency with which two events do or do not occur together. Evaluation of covariation evidence is potentially important in regard to scientific thinking because covariation is one potential cue that two events are causally related. Deanna Kuhn and her colleagues carried out pioneering work on children's and adults' evaluation of covariation evidence, with a focus on how participants coordinate their prior beliefs about the phenomenon with the data presented to them (see Box 5-1).
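
As a concrete illustration of what covariation evidence looks like, here is a minimal sketch with invented counts (loosely echoing the breakfast-roll example discussed below); the question is whether an outcome is more frequent when a candidate cause is present:

```python
# A 2x2 covariation table with illustrative (invented) counts.
counts = {
    ("chocolate roll", "cold"): 9,
    ("chocolate roll", "no cold"): 1,
    ("plain roll", "cold"): 2,
    ("plain roll", "no cold"): 8,
}

def outcome_rate(cause):
    """Proportion of cases showing the outcome, given the candidate cause."""
    present = counts[(cause, "cold")]
    absent = counts[(cause, "no cold")]
    return present / (present + absent)

# Positive covariation: the outcome is more frequent when the cause is present.
delta = outcome_rate("chocolate roll") - outcome_rate("plain roll")
print(f"covariation strength: {delta:+.2f}")  # +0.70 with these counts
```

Covariation alone does not establish causation, of course; how reasoners coordinate such patterns with their prior theories is exactly what the studies below examine.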

Results across a series of studies revealed continuous improvement of the skills involved in differentiating and coordinating theory and evidence, as well as bracketing prior belief while evaluating evidence, from middle childhood (grades 3 and 6) to adolescence (grade 9) to adulthood (Kuhn, Amsel, and O’Loughlin, 1988). These skills, however, did not appear to develop to an optimal level even among adults. Even adults had a tendency to meld theory and evidence into a single mental representation of “the way things are.”

Participants had a variety of strategies for keeping theory and evidence in alignment with one another when they were in fact discrepant. One tendency was to ignore, distort, or selectively attend to evidence that was inconsistent with a favored theory. For example, the protocol from one ninth grader demonstrated that upon repeated instances of covariation between type of breakfast roll and catching colds, he would not acknowledge this relationship: "They just taste different … the breakfast roll to me don't cause so much colds because they have pretty much the same thing inside" (Kuhn, Amsel, and O'Loughlin, 1988, p. 73).

Another tendency was to adjust a theory to fit the evidence, a process that was most often outside an individual's conscious awareness and control. For example, when asked to recall their original beliefs, participants would often report a theory consistent with the evidence that was presented, and not the theory as originally stated. Take the case of one ninth grader who did not believe that type of condiment (mustard versus ketchup) was causally related to catching colds. With each presentation of an instance of covariation evidence, he acknowledged the evidence and elaborated a theory based on the amount of ingredients or vitamins and the temperature of the food the condiment was served with to make sense of the data (Kuhn, Amsel, and O'Loughlin, 1988, p. 83). Kuhn argued that this tendency suggests that the student's theory does not exist as an object of cognition. That is, a theory and the evidence for that theory are undifferentiated—they do not exist as separate cognitive entities. If they do not exist as separate entities, it is not possible to flexibly and consciously reflect on the relation of one to the other.

A number of researchers have criticized Kuhn's findings on both methodological and theoretical grounds. Sodian, Zaitchik, and Carey (1991), for example, questioned the finding that third and sixth grade children cannot distinguish between their beliefs and the evidence, pointing to the complexity of the tasks Kuhn used as problematic. They chose to employ simpler tasks that involved story problems about phenomena for which children did not hold strong beliefs. Children's performance on these tasks demonstrated that even first and second graders could differentiate a hypothesis from the evidence. Likewise, Ruffman et al. (1993) used a simplified task and showed that 6-year-olds were able to form a causal hypothesis based on a pattern of covariation evidence. A study of children and adults (Amsel and Brock, 1996) indicated an important role of prior beliefs, especially for children. When presented with evidence that disconfirmed prior beliefs, children from both grade levels tended to make causal judgments consistent with their prior beliefs. When confronted with confirming evidence, however, both groups of children and adults made similar judgments. Looking across these studies provides insight into the conditions under which children are more or less proficient at coordinating theory and evidence. In some situations, children are better at distinguishing prior beliefs from evidence than the results of Kuhn et al. suggest.

Koslowski (1996) criticized Kuhn et al.'s work on more theoretical grounds. She argued that reliance on knowledge-lean tasks in which participants are asked to suppress their prior knowledge may lead to an incomplete or distorted picture of the reasoning abilities of children and adults. Instead, Koslowski suggested that using prior knowledge when gathering and evaluating evidence is a valid strategy. She developed a series of experiments to support her thesis and to explore the ways in which prior knowledge might play a role in evaluating evidence. The results of these investigations are described in detail in the later section of this chapter on the role of prior knowledge.

Evidence in the Context of Investigations

Researchers have also looked at reasoning about cause in the context of full investigations of causal systems. Two main types of multivariable systems are used in these studies. In the first type of system, participants are involved in a hands-on manipulation of a physical system, such as a ramp (e.g., Chen and Klahr, 1999; Masnick and Klahr, 2003) or a canal (e.g., Gleason and Schauble, 2000; Kuhn, Schauble, and Garcia-Mila, 1992). The second type of system is a computer simulation, such as the Daytona microworld in which participants discover the factors affecting the speed of race cars (Schauble, 1990). A variety of virtual environments have been created in domains such as electric circuits (Schauble et al., 1992), genetics (Echevarria, 2003), earthquake risk, and flooding risk (e.g., Keselman, 2003).

The inferences that are made based on self-generated experimental evidence are typically classified as either causal (or inclusion), noncausal (or exclusion), indeterminate, or false inclusion. All inference types can be further classified as valid or invalid. Invalid inclusion, by definition, is of particular interest because in self-directed experimental contexts, both children and adults often infer based on prior beliefs that a variable is causal, when in reality it is not.

Children tend to focus on making causal inferences during their initial explorations of a causal system. In a study in which children worked to discover the causal structure of a computerized microworld, fifth and sixth graders began by producing confounded experiments and relied on prior knowledge or expectations (Schauble, 1990). As a result, in their early explorations of the causal system, they were more likely to make incorrect causal inferences. In a direct comparison of adults and children (Schauble, 1996), adults also focused on making causal inferences, but they made more valid inferences because their experimentation was more often done using a control-of-variables strategy. Overall, children’s inferences were valid 44 percent of the time, compared with 72 percent for adults. The fifth and sixth graders improved over the course of six sessions, starting at 25 percent but improving to almost 60 percent valid inferences (Schauble, 1996). Adults were more likely than children to make inferences about which variables were noncausal or inferences of indeterminacy (80 and 30 percent, respectively) (Schauble, 1996).

Children's difficulty with inferences of noncausality also emerged in a study of 10- to 14-year-olds who explored factors influencing the swing of a pendulum or the force needed to pull a box along a level surface (Kanari and Millar, 2004). Only half of the students were able to draw correct conclusions about factors that did not covary with outcome. Students were likely to either selectively record data, selectively attend to data, distort or reinterpret the data, or state that noncovariation experimental trials were "inconclusive." Such tendencies are reminiscent of other findings that some individuals selectively attend to or distort data in order to preserve a prior theory or belief (Kuhn, Amsel, and O'Loughlin, 1988; Zimmerman, Raghavan, and Sartoris, 2003).

Some researchers suggest children's difficulty with noncausal or indeterminate inferences may be due both to experience and to the inherent complexity of the problem. In terms of experience, in the science classroom it is typical to focus on variables that "make a difference," and therefore students struggle when testing variables that do not covary with the outcome (e.g., the weight of a pendulum does not affect the time of swing, or the vertical height of a weight does not affect balance) (Kanari and Millar, 2004). Also, valid exclusion and indeterminacy inferences may be conceptually more complex, because they require one to consider a pattern of evidence produced from several experimental trials (Kuhn et al., 1995; Schauble, 1996). Looking across several trials may require one to review cumulative records of previous outcomes. As has been suggested previously, children do not often have the memory skills to either record information, record sufficient information, or consult such information when it has been recorded.

The importance of experience is highlighted by the results of studies conducted over several weeks with fifth and sixth graders. After several weeks with a task, children started making more exclusion inferences (that factors are not causal) and indeterminacy inferences (that one cannot make a conclusive judgment about a confounded comparison) and did not focus solely on causal inferences (e.g., Keselman, 2003; Schauble, 1996). They also began to distinguish between an informative and an uninformative experiment by attending to or controlling other factors leading to an improved ability to make valid inferences. Through repeated exposure, invalid inferences, such as invalid inclusions, dropped in frequency. The tendency to begin to make inferences of indeterminacy suggests that students developed more awareness of the adequacy or inadequacy of their experimentation strategies for generating sufficient and interpretable evidence.

Children and adults also differ in generating sufficient evidence to support inferences. In contexts in which it is possible, children often terminate their search early, believing that they have determined a solution to the problem (e.g., Dunbar and Klahr, 1989). In studies over several weeks in which children must continue their investigation (e.g., Schauble et al., 1991), this is less likely because of the task requirements. Children are also more likely to refer to the most recently generated evidence. They may jump to a conclusion after a single experiment, whereas adults typically need to see the results of several experiments (e.g., Gleason and Schauble, 2000).

As was found with experimentation, children and adults display intraindividual variability in strategy usage with respect to inference types. Likewise, the existence of multiple inference strategies is not unique to childhood (Kuhn et al., 1995). In general, early in an investigation, individuals focus primarily on identifying factors that are causal and are less likely to consider definitely ruling out factors that are not causal. However, a mix of valid and invalid inference strategies co-occur during the course of exploring a causal system. As with experimentation, the addition of a valid inference strategy to an individual’s repertoire does not mean that they immediately give up the others. Early in investigations, there is a focus on causal hypotheses and inferences, whether they are warranted or not. Only with additional exposure do children start to make inferences of noncausality and indeterminacy. Knowledge change and experience—gaining a better understanding of the causal system via experimentation—was associated with the use of valid experimentation and inference strategies.

THE ROLE OF PRIOR KNOWLEDGE

In the previous section we reviewed evidence on developmental differences in using scientific strategies. Across multiple studies, prior knowledge emerged as an important influence on several parts of the process of generating and evaluating evidence. In this section we look more closely at the specific ways that prior knowledge may shape part of the process. Prior knowledge includes conceptual knowledge, that is, knowledge of the natural world and specifically of the domain under investigation, as well as prior knowledge and beliefs about the purpose of an investigation and the goals of science more generally. This latter kind of prior knowledge is touched on here and discussed in greater detail in the next chapter.

Beliefs About Causal Mechanism and Plausibility

In response to research on the evaluation of covariation evidence that used knowledge-lean tasks or even required participants to suppress prior knowledge, Koslowski (1996) argued that it is legitimate and even helpful to consider prior knowledge when gathering and evaluating evidence. The world is full of correlations, and consideration of plausibility, causal mechanism, and alternative causes can help to determine which correlations between events should be taken seriously and which should be viewed as spurious. For example, the identification of the E. coli bacterium supplies a mechanism that makes a causal relationship between hamburger consumption and certain types of illness or mortality plausible. Because of the absence of a causal mechanism, one does not seriously consider the correlation between ice cream consumption and violent crime rate as causal, but instead looks for other covarying quantities (such as high temperatures) that may be causal for both behaviors and thus explain the correlation.
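The ice cream example can be made concrete with a small simulation. In the sketch below (our illustration, with invented parameters), both variables are generated from temperature plus independent noise, so they correlate substantially even though neither causes the other; the correlation is spurious in precisely this sense.

```python
# A minimal simulation (invented parameters) of a spurious correlation:
# ice cream sales and crime are both driven by temperature, so they
# correlate even though neither causes the other.
import random

random.seed(1)
n = 365
temperature = [random.gauss(15, 10) for _ in range(n)]           # daily temperature
ice_cream = [2.0 * t + random.gauss(0, 8) for t in temperature]  # sales
crime = [0.5 * t + random.gauss(0, 4) for t in temperature]      # incidents

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# A strong correlation (roughly 0.7 here) despite no causal link between the
# two; conditioning on temperature, the confounder, would largely remove it.
print(round(corr(ice_cream, crime), 2))
```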

Koslowski (1996) presented a series of experiments that demonstrate the interdependence of theory and evidence in legitimate scientific reasoning (see Box 5-2 for an example). In most of these studies, all participants (sixth graders, ninth graders, and adults) did take mechanism into consideration when evaluating evidence in relation to a hypothesis about a causal relationship. Even sixth graders considered more than patterns of covariation when making causal judgments (Koslowski and Okagaki, 1986; Koslowski et al., 1989). In fact, as discussed in the previous chapter, results of studies by Koslowski (1996) and others (Ahn et al., 1995) indicate that children and adults have naïve theories about the world that incorporate information about both covariation and causal mechanism.

The plausibility of a mechanism also plays a role in reasoning about cause. In some situations, scientific progress occurs by taking seemingly implausible correlations seriously (Wolpert, 1993). Similarly, Koslowski argued that if people rely on covariation and mechanism information in an interdependent and judicious manner, then they should pay attention to implausible correlations (i.e., those with no apparent mechanism) when the implausible correlation occurs repeatedly. For example, discovering the cause of Kawasaki’s syndrome depended on taking seriously the implausible correlation between the illness and having recently cleaned carpets. Similarly, Thagard (1998a, 1998b) describes the case of researchers Warren and Marshall, who proposed that peptic ulcers could be caused by a bacterium, and their efforts to have their theory accepted by the medical community. The bacterial theory of ulcers was initially rejected as implausible, given the assumption that the stomach is too acidic to allow bacteria to survive.

Studies with both children and adults reveal links between reasoning about mechanism and the plausibility of that mechanism (Koslowski, 1996). When presented with an implausible covariation (e.g., improved gas mileage and color of car), participants rated the causal status of the implausible cause (color) before and after learning about a possible way that the cause could bring about the effect (improved gas mileage). In this example, participants learned that the color of the car affects the driver’s alertness (which affects driving quality, which in turn affects gas mileage). At all ages, participants increased their causal ratings after learning about a possible mediating mechanism. The presence of a possible mechanism, in addition to a large number of covariations (four or more), was taken to indicate the possibility of a causal relationship for both plausible and implausible covariations. When either generating or assessing mechanisms for plausible covariations, all age groups (sixth and ninth graders and adults) were comparable. When the covariation was implausible, sixth graders were more likely to generate dubious mechanisms to account for the correlation.

The role of prior knowledge, especially beliefs about causal mechanism and plausibility, is also evident in hypothesis formation and the design of investigations. Individuals’ prior beliefs influence the choice of hypotheses to test, including which hypotheses are tested first, which are tested repeatedly, and which receive the most time and attention (e.g., Echevarria, 2003; Klahr, Fay, and Dunbar, 1993; Penner and Klahr, 1996b; Schauble, 1990, 1996; Zimmerman, Raghavan, and Sartoris, 2003). For example, children’s favored theories sometimes result in the selection of invalid experimentation and evidence evaluation heuristics (e.g., Dunbar and Klahr, 1989; Schauble, 1990). The plausibility of a hypothesis may also serve as a guide for which experiments to pursue. Klahr, Fay, and Dunbar (1993) provided third and sixth grade children and adults with hypotheses to test that were incorrect but either plausible or implausible. For plausible hypotheses, children and adults tended to set about demonstrating the correctness of the hypothesis rather than setting up experiments to decide between rival hypotheses. For implausible hypotheses, adults and some sixth graders proposed a plausible rival hypothesis and set up an experiment that would discriminate between the two. Third graders tended to propose a plausible hypothesis but then ignore or forget the initial implausible hypothesis, getting sidetracked in an attempt to demonstrate that the plausible hypothesis was correct.

Recognizing the interdependence of theory and data in the evaluation of evidence and explanations, Chinn and Brewer (2001) proposed that people evaluate evidence by building a mental model of the interrelationships between theories and data. These models integrate patterns of data, procedural details, and the theoretical explanation of the observed findings (which may include unobservable mechanisms, such as molecules, electrons, enzymes, or intentions and desires). The information and events can be linked by different kinds of connections, including causal, contrastive, analogical, and inductive links. The mental model may then be evaluated by considering the plausibility of these links. In addition to considering the links between, for example, data and theory, the model might also be evaluated by appealing to alternate causal mechanisms or alternate explanations. Essentially, an individual seeks to “undermine one or more of the links in the model” (p. 337). If no reasons to be critical can be identified, the individual may accept the new evidence or theoretical interpretation.

Some studies suggest that the strength of prior beliefs, as well as the personal relevance of those beliefs, may influence the evaluation of the mental model (Chinn and Malhotra, 2002; Klaczynski, 2000; Klaczynski and Narasimham, 1998). For example, when individuals have reason to disbelieve evidence (e.g., because it is inconsistent with prior belief), they will search harder for flaws in the data (Kunda, 1990). As a result, they may not find the evidence compelling enough to reassess their mental model. In contrast, beliefs about simple empirical regularities (e.g., the falling speed of heavy versus light objects) may not be held with such conviction, making it easier to change such a belief in response to evidence.

Evaluating Evidence That Contradicts Prior Beliefs

Anomalous data or evidence refers to results that do not fit with one’s current beliefs. Anomalous data are considered very important by scientists because of their role in theory change, and they have been used by science educators to promote conceptual change. The idea that anomalous evidence promotes conceptual change (in the scientist or the student) rests on a number of assumptions, including that individuals have beliefs or theories about natural or social phenomena, that they are capable of noticing that some evidence is inconsistent with those theories, that such evidence calls into question those theories, and, in some cases, that a belief or theory will be altered or changed in response to the new (anomalous) evidence (Chinn and Brewer, 1998). Chinn and Brewer propose that there are eight possible responses to anomalous data. Individuals can (1) ignore the data; (2) reject the data (e.g., because of methodological error, measurement error, bias); (3) acknowledge uncertainty about the validity of the data; (4) exclude the data as being irrelevant to the current theory; (5) hold the data in abeyance (i.e., withhold a judgment about the relation of the data to the initial theory); (6) reinterpret the data as consistent with the initial theory; (7) accept the data and make peripheral change or minor modification to the theory; or (8) accept the data and change the theory. Examples of all of these responses were found in undergraduates’ responses to data that contradicted theories to explain the mass extinction of dinosaurs and theories about whether dinosaurs were warm-blooded or cold-blooded.

In a series of studies, Chinn and Malhotra (2002) examined how fourth, fifth, and sixth graders responded to experimental data that were inconsistent with their existing beliefs. Experiments from physical science domains were selected in which the outcomes produced either ambiguous or unambiguous data, and for which the findings were counterintuitive for most children. For example, most children assume that a heavy object falls faster than a light object. When the two objects are dropped simultaneously, there is some ambiguity because it is difficult to observe both objects. An example of a topic that is counterintuitive but yields unambiguous evidence is the reaction temperature of baking soda added to vinegar. Children believe either that no change in temperature will occur or that the fizzing causes an increase in temperature. Thermometers unambiguously show a temperature drop of about 4 degrees centigrade.

When examining the anomalous evidence produced by these experiments, children’s difficulties seemed to occur in one of four cognitive processes: observation, interpretation, generalization, or retention (Chinn and Malhotra, 2002). For example, prior belief may influence what is “observed,” especially in the case of data that are ambiguous, and children may not perceive the two objects as landing simultaneously. Inferences based on this faulty observation will then be incorrect. At the level of interpretation, even if individuals accurately observed the outcome, they might not shift their theory to align with the evidence. They can fail to do so in many ways, such as ignoring or distorting the data or discounting the data because they are considered flawed. At the level of generalization, an individual may accept, for example, that these particular heavy and light objects fell at the same rate but insist that the same rule may not hold for other situations or objects. Finally, even when children appeared to change their beliefs about an observed phenomenon in the immediate context of the experiment, their prior beliefs reemerged later, indicating a lack of long-term retention of the change.

Penner and Klahr (1996a) investigated the extent to which children’s prior beliefs affect their ability to design and interpret experiments. They used a domain in which most children hold a strong belief that heavier objects sink in fluid faster than light objects, and they examined children’s ability to design unconfounded experiments to test that belief. In this study, for objects of a given composition and shape, sink times for heavy and light objects are nearly indistinguishable to an observer. For example, the sink times for stainless steel spheres weighing 65 g and 19 g were 0.58 and 0.62 seconds, respectively. Eight of the 30 children chose to contrast these two objects directly, and only one of them continued to explore the reason for the unexpected finding that the large and small spheres had equivalent sink times. The process of knowledge change was not straightforward. For example, some children suggested that the size of the smaller steel ball offset the fact that it weighed less, allowing it to move through the water as fast as the larger, heavier steel ball. Others concluded that both weight and shape make a difference. That is, there was an attempt to reconcile the evidence with prior knowledge and expectations by appealing to causal mechanisms, alternate causes, or enabling conditions.
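A back-of-envelope calculation helps explain why the sink times were so close. The sketch below is our own estimate, not part of the study; it assumes a standard quadratic-drag model with a drag coefficient of 0.47 for a sphere and a steel density of 7,800 kg/m³. Because terminal speed for spheres of one material scales only as the sixth root of mass, a 3.4-fold difference in mass yields roughly a 20 percent difference in speed, and the observed gap (0.58 versus 0.62 seconds) is smaller still because much of a short drop is spent accelerating.

```python
# Back-of-envelope estimate (ours, not from Penner and Klahr): terminal
# speed of a sinking sphere under a quadratic-drag model. For spheres of
# one material, speed scales as mass**(1/6), so mass matters surprisingly
# little. Assumed values: steel density 7800 kg/m^3, drag coefficient 0.47.
import math

def terminal_speed(mass_kg, rho_sphere=7800.0, rho_fluid=1000.0, c_d=0.47, g=9.8):
    """Terminal sinking speed (m/s) of a sphere in a fluid."""
    volume = mass_kg / rho_sphere
    radius = (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)
    area = math.pi * radius ** 2
    buoyant_weight = (rho_sphere - rho_fluid) * volume * g
    return math.sqrt(2.0 * buoyant_weight / (rho_fluid * c_d * area))

v_heavy = terminal_speed(0.065)  # the 65 g sphere
v_light = terminal_speed(0.019)  # the 19 g sphere
# About 1.23: a 3.4-fold mass ratio gives only a ~20 percent speed difference.
print(round(v_heavy / v_light, 2))
```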

What is also important to note about the children in the Penner and Klahr study is that they did in fact notice the surprising finding, rather than ignore or misrepresent the data. They tried to make sense of the outcome by acting as theorists, conjecturing about causal mechanisms, boundary conditions, or other ad hoc explanations (e.g., shape) to account for the results of an experiment. In Chinn and Malhotra’s (2002) study of students’ evaluation of observed evidence (e.g., watching two objects fall simultaneously), the process of noticing was found to be an important mediator of conceptual change.

Echevarria (2003) examined seventh graders’ reactions to anomalous data in the domain of genetics and asked whether such data served as a catalyst for knowledge construction during the course of self-directed experimentation. Students in the study completed a 3-week unit on genetics that involved genetics simulation software and observing plant growth. In both the software and the plants, students investigated or observed the transmission of one trait. Anomalies in the data were defined as outcomes that were not readily explainable on the basis of the appearance of the parents.

In general, the number of hypotheses generated, the number of tests conducted, and the number of explanations generated were a function of students’ ability to encounter, notice, and take seriously an anomalous finding. The majority of students (80 percent) developed some explanation for the pattern of anomalous data. For those who were unable to generate an explanation, it was suggested that their initial knowledge was insufficient and therefore could not undergo change as a result of the encounter with “anomalous” evidence. Analogous to case studies in the history of science (e.g., Simon, 2001), these students’ ability to notice and explore anomalies was related to their level of domain-specific knowledge (as suggested by Pasteur’s oft-quoted maxim that chance favors the prepared mind). Surprising findings were associated with an increase in hypotheses and in experiments to test these potential explanations, but without the domain knowledge to “notice,” anomalies could not be exploited.

There is some evidence that, with instruction, students’ ability to evaluate anomalous data improves (Chinn and Malhotra, 2002). In a study of fourth, fifth, and sixth graders, one group of students was instructed to predict the outcomes of three experiments that produce counterintuitive but unambiguous data (e.g., reaction temperature). A second group answered questions that were designed to promote unbiased observations and interpretations by reflecting on the data. A third group was provided with an explanation of what scientists expected to find and why. All students reported their prediction of the outcome, what they observed, and their interpretation of the experiment. They were then tested for generalizations, and a retention test followed 9-10 days later. Fifth and sixth graders performed better than did fourth graders. Students who heard an explanation of what scientists expected to find and why did best. Further analyses suggest that the explanation-based intervention worked by influencing students’ initial predictions. A correct prediction then influenced what was observed, and a correct observation in turn led to correct interpretations and generalizations, which resulted in conceptual change that was retained. A similar pattern of results was found using interventions employing either full or reduced explanations prior to the evaluation of evidence.

Thus, it appears that children were able to change their beliefs on the basis of anomalous or unexpected evidence, but only when they were capable of making the correct observations. Difficulty in making observations, rather than in interpretation, generalization, or retention, was found to be the main cognitive process impeding conceptual change. Certain interventions, in particular those involving an explanation of what scientists expected to happen and why, were very effective in mediating conceptual change when students encountered counterintuitive evidence. With particular scaffolds, children made observations independent of theory, and they changed their beliefs based on observed evidence.

THE IMPORTANCE OF EXPERIENCE AND INSTRUCTION

There is increasing evidence that, as in the case of intellectual skills in general, the development of the component skills of scientific reasoning “cannot be counted on to routinely develop” (Kuhn and Franklin, 2006, p. 47). That is, young children have many of the requisite skills needed to engage in scientific thinking, but there are also ways in which even adults do not show full proficiency in investigative and inference tasks. Recent research efforts have therefore focused on how such skills can be promoted by determining which types of educational interventions (e.g., amount of structure, amount of support, emphasis on strategic or metastrategic skills) contribute most to learning, retention, and transfer, and which types of interventions are best suited to different students. There is a developing picture of what children are capable of with minimal support, and research is moving in the direction of ascertaining what children are capable of, and when, under conditions of practice, instruction, and scaffolding. It may one day be possible to tailor educational opportunities that neither underestimate nor overestimate children’s ability to extract meaningful experiences from inquiry-based science classes.

Very few of the early studies focusing on the development of experimentation and evidence evaluation skills explicitly addressed issues of instruction and experience. Those that did, however, indicated an important role for experience and instruction in supporting scientific thinking. For example, Siegler and Liebert (1975) incorporated instructional manipulations aimed at teaching children about variables and variable levels, with or without practice on analogous tasks. In the absence of both instruction and extended practice, no fifth graders and only a small minority of eighth graders were successful. Kuhn and Phelps (1982) reported that, in the absence of explicit instruction, extended practice over several weeks was sufficient for the development and modification of experimentation and inference strategies. Later studies of self-directed experimentation also indicate that frequent engagement with the inquiry environment alone can lead to the development and modification of cognitive strategies (e.g., Kuhn, Schauble, and Garcia-Mila, 1992; Schauble et al., 1991).

Some researchers have suggested that even the simple prompts often used in studies of students’ investigation skills may provide a subtle form of instructional intervention (Klahr and Carver, 1995). Such prompts may cue the strategic requirements of the task, or they may promote explanation or the type of reflection that induces a metacognitive or metastrategic awareness of task demands. Because prompts play a role in so many studies as a means of revealing students’ thinking, it may be very difficult to tease apart the relative contributions of practice and of the scaffolding provided by researcher prompts.

In the absence of instruction or prompts, students may not routinely ask questions of themselves, such as “What are you going to do next?” “What outcome do you predict?” “What did you learn?” and “How do you know?” Questions such as these may promote self-explanation, which has been shown to enhance understanding in part because it facilitates the integration of newly learned material with existing knowledge (Chi et al., 1994). Questions such as the prompts used by researchers may serve to promote such integration. Chinn and Malhotra (2002) incorporated different kinds of interventions aimed at promoting conceptual change in response to anomalous experimental evidence. Interventions included practice at making predictions, reflecting on data, and explanation. The explanation-based interventions were most successful at promoting conceptual change, retention, and generalization. The prompts used in some studies of self-directed experimentation are very likely to serve the same function as the prompts used by Chi et al. (1994). Incorporating such prompts in classroom-based inquiry activities could serve as a powerful teaching tool, given that the use of self-explanation in tutoring systems (both human and computer-based) has been shown to be quite effective (e.g., Chi, 1996; Hausmann and Chi, 2002).

Studies that compare the effects of different kinds of instruction and practice opportunities have been conducted in the laboratory, with some translation to the classroom. For example, Chen and Klahr (1999) examined the effects of direct and indirect instruction in the control-of-variables strategy on the experimentation and knowledge acquisition of students in grades 2-4. The instructional intervention involved didactic teaching of the control-of-variables strategy, along with examples and probes. Indirect (or implicit) training involved the use of systematic probes during the course of children’s experimentation. A control group received neither instruction nor probes. No group received instruction on domain knowledge for any task used (springs, ramps, sinking objects). For the students who received direct instruction, use of the control-of-variables strategy increased from 34 percent before instruction to 65 percent after, with 61-64 percent usage maintained on transfer tasks administered 1 day and 7 months later, respectively. No such gains were evident for the implicit training or control groups.

Instruction about control of variables improved children’s ability to design informative experiments, which in turn facilitated conceptual change in a number of domains. They were able to design unconfounded experiments, which facilitated valid causal and noncausal inferences, resulting in a change in knowledge about how various multivariable causal systems worked. Significant gains in domain knowledge were evident only for the instruction group. Fourth graders showed better skill retention at long-term assessment than second or third graders.
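The logic of the control-of-variables strategy is simple to state precisely. The sketch below is our encoding of that logic, not the instructional materials used in these studies: a comparison of two setups is informative about a target variable only when the setups differ on that variable and on nothing else.

```python
# Our encoding of the control-of-variables logic (not the studies' materials):
# a contrast between two setups is informative about `target` only if the
# setups differ on that variable and on nothing else.

def is_unconfounded(setup_a, setup_b, target):
    """True if the two setups differ on `target` alone."""
    differing = [f for f in setup_a if setup_a[f] != setup_b[f]]
    return differing == [target]

# Hypothetical ramp setups in the spirit of the ramps task.
ramp_a = {"slope": "steep", "surface": "smooth", "ball": "golf"}
ramp_b = {"slope": "shallow", "surface": "smooth", "ball": "golf"}
ramp_c = {"slope": "shallow", "surface": "rough", "ball": "squash"}

print(is_unconfounded(ramp_a, ramp_b, "slope"))  # True: a valid test of slope
print(is_unconfounded(ramp_a, ramp_c, "slope"))  # False: the contrast is confounded
```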

The positive impact of instruction on control of variables also appears to translate to the classroom (Toth, Klahr, and Chen, 2000; Klahr, Chen, and Toth, 2001). Fourth graders who received instruction in the control-of-variables strategy in their classroom increased their use of the strategy, and their domain knowledge improved. The percentage of students who were able to correctly evaluate others’ research increased from 28 to 76 percent.

Instruction also appears to promote longer term use of the control-of-variables strategy and transfer of the strategy to a new task (Klahr and Nigam, 2004). Third and fourth graders who received instruction were more likely to master the control-of-variables strategy than students who explored a multivariable system on their own. Interestingly, although the group that received instruction performed better overall, a quarter of the students who explored the system on their own also mastered the strategy. These results raise questions about the kinds of individual differences that may allow some students, but not others, to benefit from a discovery context. That is, which learner traits are associated with the success of different learning experiences?

Similar effects of experience and instruction have been demonstrated for improving students’ ability to use evidence from multiple records and to make correct inferences about noncausal variables (Keselman, 2003). In many cases, students show some improvement when they are given the opportunity for practice, but greater improvement when they receive instruction (Kuhn and Dean, 2005).

Long-term studies of students’ learning in the classroom, with instructional support and structured experiences over months and years, reveal children’s potential to engage in sophisticated investigations given the appropriate experiences (Metz, 2004; Lehrer and Schauble, 2005). For example, in one classroom-based study, second, fourth, and fifth graders took part in a curriculum unit on animal behavior that emphasized domain knowledge, whole-class collaboration, scaffolded instruction, and discussions about the kinds of questions that can and cannot be answered by observational records (Metz, 2004). Pairs or triads of students then developed a research question, designed an experiment, collected and analyzed data, and presented their findings on a research poster. Such studies have demonstrated that, with appropriate support, students in grades K-8 from a variety of socioeconomic, cultural, and linguistic backgrounds can be successful in generating and evaluating scientific evidence and explanations (Kuhn and Dean, 2005; Lehrer and Schauble, 2005; Metz, 2004; Warren, Rosebery, and Conant, 1994).

KNOWLEDGE AND SKILL IN MODELING

The picture that emerges from developmental and cognitive research on scientific thinking is one of a complex intertwining of knowledge of the natural world, general reasoning processes, and an understanding of how scientific knowledge is generated and evaluated. Science and scientific thinking are not only about logical thinking or conducting carefully controlled experiments. Instead, building knowledge in science is a complex process of building and testing models and theories, in which knowledge of the natural world and strategies for generating and evaluating evidence are closely intertwined. Working from this image of science, a few researchers have begun to investigate the development of children’s knowledge and skills in modeling.

The kinds of models that scientists construct vary widely, both within and across disciplines. Nevertheless, the rhetoric and practice of science are governed by efforts to invent, revise, and contest models. By modeling, we refer to the construction and testing of representations that serve as analogues to systems in the real world (Lehrer and Schauble, 2006). These representations can take many forms, including physical models, computer programs, mathematical equations, and propositions. Objects and relations in the model are interpreted as representing theoretically important objects and relations in the represented world. Models are useful in summarizing known features and predicting outcomes—that is, they can become elements of or representations of theories. A key hurdle for students is to understand that models are not copies; they are deliberate simplifications. Error is a component of all models, and the precision required of a model depends on the purpose of its current use.

The forms of thinking required for modeling do not progress very far without explicit instruction and fostering (Lehrer and Schauble, 2000). For this reason, studies of modeling have most often taken place in classrooms over sustained periods of time, often years. These studies provide a provocative picture of the sophisticated scientific thinking that can be supported in classrooms if students are provided with the right kinds of experiences over extended periods of time. The instructional approaches used in studies of students’ modeling, as well as the approach to curriculum that may be required to support the development of modeling skills over multiple years of schooling, are discussed in the chapters in Part III.

Lehrer and Schauble (2000, 2003, 2006) reported observing characteristic shifts in the understanding of modeling over the span of the elementary school grades, from an early emphasis on literal depictional forms, to representations that are progressively more symbolic and mathematically powerful. Diversity in representational and mathematical resources both accompanied and produced conceptual change. As children developed and used new mathematical means for characterizing growth, they understood biological change in increasingly dynamic ways. For example, once students understood the mathematics of ratio and changing ratios, they began to conceive of growth not as simple linear increase, but as a patterned rate of change. These transitions in conception and representation appeared to support each other, and they opened up new lines of inquiry. Children wondered whether plant growth was like animal growth, and whether the growth of yeast and bacteria on a Petri dish would show a pattern like the growth of a single plant. These forms of conceptual development required a context in which teachers systematically supported a restricted set of central ideas, building successively on earlier concepts over the grades of schooling.

Representational Systems That Support Modeling

The development of specific representational forms and notations, such as graphs, tables, computer programs, and mathematical expressions, is a critical part of engaging in mature forms of modeling. Mathematics, data and scale models, diagrams, and maps are particularly important for supporting science learning in grades K-8.

Mathematics

Mathematics and science are, of course, separate disciplines. Nevertheless, for the past 200 years, the steady press in science has been toward increasing quantification, visualization, and precision (Kline, 1980). Mathematics in all its forms is a symbol system that is fundamental to both expressing and understanding science. Often, expressing an idea mathematically results in noticing new patterns or relationships that would otherwise not be grasped. For example, elementary students studying the growth of organisms (plants, tobacco hornworms, populations of bacteria) noted that when they graphed changes in heights over the life span, all the organisms studied produced an emergent S-shaped curve. However, such seeing depended on developing a “disciplined perception” (Stevens and Hall, 1998), a firm grounding in a Cartesian system. Moreover, the shape of the curve was determined in light of variation, accounted for by selecting and connecting midpoints of intervals that defined piecewise linear segments. This way of representing typical growth was contentious, because some midpoints did not correspond to any particular case value. The debate was therefore a pathway toward the idealization and imagined qualities of the world necessary for adopting a modeling stance. The form of the growth curve was eventually tested in other systems, and its replication inspired new questions. For example, why would bacteria populations and plants be describable by the same growth curve? In this case and in others, explanatory models and data models mutually bootstrapped conceptual development (Lehrer and Schauble, 2002).
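The students’ observation can be illustrated with the standard logistic curve (our example, with invented parameters). Heights sampled from such a curve trace an S shape, and the ratio of successive heights falls steadily toward 1, which is exactly the shift from simple linear increase to a patterned rate of change described above.

```python
# Logistic growth (invented parameters): heights trace an S-shaped curve,
# and the ratio of successive heights falls toward 1 as growth levels off,
# so growth is a patterned rate of change rather than a linear increase.
import math

def logistic(t, carrying_capacity=100.0, rate=0.5, midpoint=10.0):
    """Height (or population size) at time t under logistic growth."""
    return carrying_capacity / (1.0 + math.exp(-rate * (t - midpoint)))

heights = [logistic(t) for t in range(0, 21, 2)]
for prev, cur in zip(heights, heights[1:]):
    print(f"height {cur:6.2f}, ratio to previous {cur / prev:4.2f}")
# Early ratios are near e**(rate * 2), about 2.7; late ratios approach 1.0.
```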

It is not feasible in this report to summarize the extensive body of research in mathematics education, but one point is especially critical for science education: the need to expand elementary school mathematics beyond arithmetic to include space and geometry, measurement, and data and uncertainty. The National Council of Teachers of Mathematics (2000) standards strongly support this extension of early mathematics, based on the judgment that arithmetic alone does not constitute a sufficient mathematics education. Moreover, if mathematics is to be used as a resource for science, the resource base widens considerably with a broader mathematical foundation, affording students a greater repertoire for making sense of the natural world.

For example, consider the role of geometry and visualization in comparing crystalline structures or in evaluating the relationship between the body weights and body structures of different animals. Measurement is a ubiquitous part of the scientific enterprise, although its subtleties are almost always overlooked. Students are usually taught procedures for measuring but are rarely taught a theory of measure. Educators often overestimate children’s understanding of measurement because measuring tools, such as rulers or scales, resolve many of the conceptual challenges of measurement for children, so that children may fail to grasp the idea that measurement entails the iteration of constant units and that these units can be partitioned. It is reasonably common, for example, for even upper elementary students who seem proficient at measuring lengths with rulers to tacitly hold the theory that measuring merely entails counting the units between boundaries. If these students are given unconnected units (say, tiles of a constant length) and asked to demonstrate how to measure a length, some of them almost always place the units against the object being measured in such a way that the first and last tiles are lined up flush with the ends of the object. This arrangement often requires leaving spaces between units. Diagnostically, these spaces do not trouble a student who holds this “boundary-filling” conception of measurement (Lehrer, 2003; McClain et al., 1999).

Researchers agree that scientific thinking entails the coordination of theory with evidence (Klahr and Dunbar, 1988; Kuhn, Amsel, and O’Loughlin, 1988), but evidence may vary widely in both form and complexity. Achieving this coordination therefore requires tools for structuring and interpreting data and error. Otherwise, students’ interpretation of evidence cannot be held accountable to the data. There have been many studies of students’ reasoning about data, variation, and uncertainty, conducted both by psychologists (Kahneman, Slovic, and Tversky, 1982; Konold, 1989; Nisbett et al., 1983) and by educators (Mokros and Russell, 1995; Pollatsek, Lima, and Well, 1981; Strauss and Bichler, 1988). Particularly pertinent here are studies that focus on data modeling (Lehrer and Romberg, 1996), that is, on how reasoning with data is recruited as a way of investigating genuine questions about the world.

Data modeling is, in fact, what professionals do when they reason with data and statistics. It is central to a variety of enterprises, including engineering, medicine, and natural science. Scientific models are generated with acute awareness of their entailments for data, and data are recorded and structured as a way of making progress in articulating a scientific model or adjudicating among rival models. The tight relationship between model and data holds generally in domains in which inquiry is conducted by inscribing, representing, and mathematizing key aspects of the world (Goodwin, 2000; Kline, 1980; Latour, 1990).

Understanding the qualities and meaning of data may be enhanced if students devote as much attention to their generation as to their analysis. First and foremost, students need to grasp the notion that data are constructed to answer questions (Lehrer, Giles, and Schauble, 2002). The National Council of Teachers of Mathematics (2000) emphasizes that the study of data should be firmly anchored in students’ inquiry, so that they “address what is involved in gathering and using the data wisely” (p. 48). Questions motivate the collection of certain types of information and not others, and many aspects of data coding and structuring also depend on the question that motivated their collection. Defining the variables involved in addressing a research question, considering the methods and timing for collecting data, and finding efficient ways to record them are all involved in the initial phases of data modeling. Debates about the meaning of an attribute often provoke more precise questions.

For example, a group of first graders who wanted to learn which student’s pumpkin was the largest eventually understood that they needed to agree whether they were interested in the heights of the pumpkins, their circumferences, or their weights (Lehrer et al., 2001). Deciding what to measure is bound up with deciding how to measure. As the students went on to count the seeds in their pumpkins (they were pursuing a question about whether there might be a relationship between pumpkin size and number of seeds), they had to decide whether they would include seeds that were not fully grown and what criteria would be used to judge whether any particular seed should be considered mature.

Data are inherently a form of abstraction: an event is replaced by a video recording, a sensation of heat is replaced by a pointer reading on a thermometer, and so on. Here again, the tacit complexity of tools may need to be explained. Students often have a fragile grasp of the relationship between the event of interest and the operation (hence, the output) of a tool, whether that tool is a microscope, a pan balance, or a “simple” ruler. Some students, for example, do not initially consider measurement to be a form of comparison and may find a balance a very confusing tool. In their minds, the number displayed on a scale is the weight of the object; if no number is displayed, the weight cannot be found.

Once the data are recorded, making sense of them requires that they be structured. At this point, students sometimes discover that their data require further abstraction. For example, as they categorized features of self-portraits drawn by other students, a group of fourth graders realized that it would not be wise to follow their original plan of creating 23 categories of “eye type” for the 25 portraits that they wished to categorize (DiPerna, 2002). Data do not come with an inherent structure; rather, structure must be imposed (Lehrer, Giles, and Schauble, 2002). The only structure for a set of data comes from the inquirers’ prior and developing understanding of the phenomenon under investigation. Inquirers impose structure by selecting categories around which to describe and organize the data.

Students also need to mentally back away from the objects or events under study to attend to the data as objects in their own right, by counting them, manipulating them to discover relationships, and asking new questions of already collected data. Students often believe that new questions can be addressed only with new data; they rarely think of querying existing data sets to explore questions that were not initially conceived when the data were collected (Lehrer and Romberg, 1996).

Finally, data are represented in various ways in order to see or understand general trends. Different kinds of displays highlight certain aspects of the data and hide others. An important educational agenda for students, one that extends over several years, is to come to understand the conventions and properties of different kinds of data displays. We do not review here the extensive literature on students’ understanding of different kinds of representational displays (tables, graphs of various kinds, distributions), but, for purposes of science, students should not only understand the procedures for generating and reading displays, but they should also be able to critique them and to grasp the communicative advantages and disadvantages of alternative forms for a given purpose (diSessa, 2004; Greeno and Hall, 1997). The structure of the data will affect the interpretation. Data interpretation often entails seeking and confirming relationships in the data, which may be at varying levels of complexity. For example, simple linear relationships are easier to spot than inverse relationships or interactions (Schauble, 1990), and students often fail to entertain the possibility that more than one relationship may be operating.

The desire to interpret data may further inspire the creation of statistics, such as measures of center and spread. These measures are a further step of abstraction beyond the objects and events originally observed. Even primary grade students can learn to consider the overall shape of data displays to make interpretations based on the “clumps” and “holes” in the data. Students often employ multiple criteria when trying to identify a “typical value” for a set of data. Many young students tend to favor the mode and justify their choice on the basis of repetition—if more than one student obtained this value, perhaps it is to be trusted. However, students tend to be less satisfied with modes if they do not appear near the center of the data, and they also shy away from measures of center that do not have several other values clustered near them (“part of a clump”). Understanding the mean requires an understanding of ratio, and if students are merely taught to “average” data in a procedural way without having a well-developed sense of ratio, their performance notoriously tends to degrade into “average stew”—eccentric procedures for adding and dividing things that make no sense (Strauss and Bichler, 1988). With good instruction, middle and upper elementary students can simultaneously consider the center and the spread of the data. Students can also generate various forms of mathematical descriptions of error, especially in contexts of measurement, where they can readily grasp the relationships between their own participation in the act of measuring and the resulting variation in measures (Petrosino, Lehrer, and Schauble, 2003).
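These competing notions of a typical value are easy to exhibit on a small, invented data set: the mode sits in the “clump,” the median marks its middle, and the mean is pulled toward outlying values, which is one reason a procedurally taught “average” can mislead.

```python
# Competing notions of "typical value" on a small invented data set.
from statistics import mean, median, mode

measurements = [22, 23, 23, 24, 24, 24, 25, 25, 26, 31]  # e.g., plant heights in cm

print(mode(measurements))    # 24: the repeated value students tend to trust
print(median(measurements))  # 24.0: the middle of the "clump"
print(mean(measurements))    # 24.7: pulled upward by the outlying 31
```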

Scale Models, Diagrams, and Maps

Although data representations are central to science, they are not, of course, the only representations students need to use and understand. Perhaps the most easily interpretable form of representation widely used in science is the scale model. Physical models of this kind are used in science education to help students visualize objects or processes that occur at a scale that makes their direct perception impossible or, alternatively, to permit students to manipulate directly something that they otherwise could not handle. The ease or difficulty with which students understand these models depends on the complexity of the relationships being communicated. Even preschoolers can understand scale models used to depict location in a room (DeLoache, 2004). Primary grade students can readily overcome the influence of the appearance of the model to focus on and investigate the way it functions (Penner et al., 1997), but middle school students (and some adults) struggle to work out the positional relationships of the earth, the sun, and the moon. Doing so involves not only reconciling different viewpoints with respect to perspective and frame (what one sees standing on the earth, what one would see from a hypothetical point in space), but also visualizing how these viewpoints would change over days and months (see, for example, the detailed curricular suggestions at the web site http://www.wcer.wisc.edu/ncisla/muse/ ).

Frequently, students are expected to read or produce diagrams, often integrating the information from the diagram with information from accompanying text (Hegarty and Just, 1993; Mayer, 1993). The comprehensibility of diagrams appears to be governed less by domain-general principles than by the specifics of the diagram and its viewer: it varies with the complexity of what is portrayed, the particular diagrammatic details and features, and the prior knowledge of the user.

Diagrams can be difficult to understand for a host of reasons. Sometimes the desired information is missing in the first place; sometimes, features of the diagram unwittingly play into an incorrect preconception. For example, it has been suggested that the common student misconception that the earth is closer to the sun in the summer than in the winter may be due in part to the fact that two-dimensional representations of the three-dimensional orbit make it appear as if the foreshortened orbit is indeed closer to the sun at some points than at others.

Mayer (1993) proposes three common reasons why diagrams miscommunicate: some do not include explanatory information (they are illustrative or decorative rather than explanatory), some lack a causal chain, and some fail to map the explanation to a familiar or recognizable context. It is not clear that school students misperceive diagrams in ways that are fundamentally different from the perceptions of adults. There may be some diagrammatic conventions that are less familiar to children, and children may well have less knowledge about the phenomena being portrayed, but there is no reason to expect that adult novices would respond in fundamentally different ways. Although they have been studied for a much briefer period, the same is probably true of complex computer displays.

Finally, there is a growing developmental literature on students’ understanding of maps. Maps can be particularly confusing because they preserve some analog qualities of the space being represented (e.g., relative position and distance) but also omit or alter features of the landscape in ways that require understanding of mapping conventions. Young children often initially confuse maps of the landscape with pictures of objects in the landscape. It is much easier for youngsters to represent objects than to represent large-scale space (which is the absence of, or frame for, objects). Students also may struggle with orientation, perspective (the traditional bird’s-eye view), and mathematical descriptions of space, such as polar coordinate representations (Lehrer and Pritchard, 2002; Liben and Downs, 1993).

CONCLUSIONS

There is a common thread throughout the observations of this chapter that has deep implications for what one expects from children in grades K-8 and for how their science learning should be structured. In almost all cases, the studies converge on the position that the skills under study develop with age, but also that this development is significantly enhanced by prior knowledge, experience, and instruction.

One of the continuing themes evident from studies on the development of scientific thinking is that children are far more competent than first suspected, and likewise that adults are less so. Young children experiment, but their experimentation is generally not systematic, and their observations as well as their inferences may be flawed. The progression of ability is seen with age, but it is not uniform, either across individuals or for a given individual. There is variation across individuals at the same age, as well as variation within single individuals in the strategies they use. Any given individual uses a collection of strategies, some more valid than others. Discovering a valid strategy does not mean that an individual, whether a child or an adult, will use the strategy consistently across all contexts. As Schauble (1996, p. 118) noted:

The complex and multifaceted nature of the skills involved in solving these problems, and the variability in performance, even among the adults, suggest that the developmental trajectory of the strategies and processes associated with scientific reasoning is likely to be a very long one, perhaps even lifelong. Previous research has established the existence of both early precursors and competencies … and errors and biases that persist regardless of maturation, training, and expertise.

One aspect of cognition that appears to be particularly important for supporting scientific thinking is awareness of one’s own thinking. Children may be less aware of their own memory limitations and therefore may be unsystematic in recording plans, designs, and outcomes, and they may fail to consult such records. Self-awareness of the cognitive strategies available is also important in order to determine when and why to employ various strategies. Finally, awareness of the status of one’s own knowledge, such as recognizing the distinctions between theory and evidence, is important for reasoning in the context of scientific investigations. This last aspect of cognition is discussed in detail in the next chapter.

Prior knowledge, particularly beliefs about causality and plausibility, shapes the approach to investigations in multiple ways. These beliefs influence which hypotheses are tested, how experiments are designed, and how evidence is evaluated. Characteristics of prior knowledge, such as its type, strength, and relevance, are potential determinants of how new evidence is evaluated, whether anomalies are noticed, and whether knowledge change occurs as a result of the encounter.

Finally, we conclude that experience and instruction are crucial mediators of the development of a broad range of scientific skills and of the degree of sophistication that children exhibit in applying these skills in new contexts. This means that time spent doing science in appropriately structured instructional frames is a crucial part of science education. It affects not only the level of skills that children develop, but also their ability to think about the quality of evidence and to interpret evidence presented to them. Students need instructional support and practice in order to become better at coordinating their prior theories and the evidence generated in investigations. Instructional support is also critical for developing skills for experimental design, record keeping during investigations, dealing with anomalous data, and modeling.

Ahn, W., Kalish, C.W., Medin, D.L., and Gelman, S.A. (1995). The role of covariation versus mechanism information in causal attribution. Cognition, 54, 299-352.

Amsel, E., and Brock, S. (1996). The development of evidence evaluation skills. Cognitive Development, 11, 523-550.

Bisanz, J., and LeFevre, J. (1990). Strategic and nonstrategic processing in the development of mathematical cognition. In D. Bjorklund (Ed.), Children’s strategies: Contemporary views of cognitive development (pp. 213-243). Hillsdale, NJ: Lawrence Erlbaum Associates.

Carey, S., Evans, R., Honda, M., Jay, E., and Unger, C. (1989). An experiment is when you try it and see if it works: A study of grade 7 students’ understanding of the construction of scientific knowledge. International Journal of Science Education, 11, 514-529.

Chase, W.G., and Simon, H.A. (1973). The mind’s eye in chess. In W.G. Chase (Ed.), Visual information processing. New York: Academic Press.

Chen, Z., and Klahr, D. (1999). All other things being equal: Children’s acquisition of the control of variables strategy. Child Development, 70, 1098-1120.

Chi, M.T.H. (1996). Constructing self-explanations and scaffolded explanations in tutoring. Applied Cognitive Psychology, 10, 33-49.

Chi, M.T.H., de Leeuw, N., Chiu, M., and Lavancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18, 439-477.

Chinn, C.A., and Brewer, W.F. (1998). An empirical test of a taxonomy of responses to anomalous data in science. Journal of Research in Science Teaching, 35, 623-654.

Chinn, C.A., and Brewer, W.F. (2001). Models of data: A theory of how people evaluate data. Cognition and Instruction, 19 (3), 323-343.

Chinn, C.A., and Malhotra, B.A. (2001). Epistemologically authentic scientific reasoning. In K. Crowley, C.D. Schunn, and T. Okada (Eds.), Designing for science: Implications from everyday, classroom, and professional settings (pp. 351-392). Mahwah, NJ: Lawrence Erlbaum Associates.

Chinn, C.A., and Malhotra, B.A. (2002). Children’s responses to anomalous scientific data: How is conceptual change impeded? Journal of Educational Psychology, 94, 327-343.

DeLoache, J.S. (2004). Becoming symbol-minded. Trends in Cognitive Sciences, 8, 66-70.

DiPerna, E. (2002). Data models of ourselves: Body self-portrait project. In R. Lehrer and L. Schauble (Eds.), Investigating real data in the classroom: Expanding children’s understanding of math and science. Ways of knowing in science and mathematics series. New York: Teachers College Press.

diSessa, A.A. (2004). Metarepresentation: Native competence and targets for instruction. Cognition and Instruction, 22 (3), 293-331.

Dunbar, K., and Klahr, D. (1989). Developmental differences in scientific discovery strategies. In D. Klahr and K. Kotovsky (Eds.), Complex information processing: The impact of Herbert A. Simon (pp. 109-143). Hillsdale, NJ: Lawrence Erlbaum Associates.

Echevarria, M. (2003). Anomalies as a catalyst for middle school students’ knowledge construction and scientific reasoning during science inquiry. Journal of Educational Psychology, 95, 357-374.

Garcia-Mila, M., and Andersen, C. (2005). Developmental change in notetaking during scientific inquiry. Manuscript submitted for publication.

Gleason, M.E., and Schauble, L. (2000). Parents’ assistance of their children’s scientific reasoning. Cognition and Instruction, 17 (4), 343-378.

Goodwin, C. (2000). Introduction: Vision and inscription in practice. Mind, Culture, and Activity, 7, 1-3.

Greeno, J., and Hall, R. (1997). Practicing representation: Learning with and about representational forms. Phi Delta Kappan, January, 361-367.

Hausmann, R., and Chi, M. (2002). Can a computer interface support self-explaining? The International Journal of Cognitive Technology, 7 (1).

Hegarty, M., and Just, M.A. (1993). Constructing mental models of machines from text and diagrams. Journal of Memory and Language, 32, 717-742.

Inhelder, B., and Piaget, J. (1958). The growth of logical thinking from childhood to adolescence . New York: Basic Books.

Kahneman, D., Slovic, P., and Tversky, A. (1982). Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press.

Kanari, Z., and Millar, R. (2004). Reasoning from data: How students collect and interpret data in science investigations. Journal of Research in Science Teaching, 41 (7), 748-769.

Keselman, A. (2003). Supporting inquiry learning by promoting normative understanding of multivariable causality. Journal of Research in Science Teaching, 40, 898-921.

Keys, C.W. (1994). The development of scientific reasoning skills in conjunction with collaborative writing assignments: An interpretive study of six ninth-grade students. Journal of Research in Science Teaching, 31, 1003-1022.

Klaczynski, P.A. (2000). Motivated scientific reasoning biases, epistemological beliefs, and theory polarization: A two-process approach to adolescent cognition. Child Development, 71 (5), 1347-1366.

Klaczynski, P.A., and Narasimham, G. (1998). Development of scientific reasoning biases: Cognitive versus ego-protective explanations. Developmental Psychology, 34 (1), 175-187.

Klahr, D. (2000). Exploring science: The cognition and development of discovery processes. Cambridge, MA: MIT Press.

Klahr, D., and Carver, S.M. (1995). Scientific thinking about scientific thinking. Monographs of the Society for Research in Child Development, 60, 137-151.

Klahr, D., Chen, Z., and Toth, E.E. (2001). From cognition to instruction to cognition: A case study in elementary school science instruction. In K. Crowley, C.D. Schunn, and T. Okada (Eds.), Designing for science: Implications from everyday, classroom, and professional settings (pp. 209-250). Mahwah, NJ: Lawrence Erlbaum Associates.

Klahr, D., and Dunbar, K. (1988). Dual search space during scientific reasoning. Cognitive Science, 12, 1-48.

Klahr, D., Fay, A., and Dunbar, K. (1993). Heuristics for scientific experimentation: A developmental study. Cognitive Psychology, 25, 111-146.

Klahr, D., and Nigam, M. (2004). The equivalence of learning paths in early science instruction: Effects of direct instruction and discovery learning. Psychological Science, 15 (10), 661-667.

Klahr, D., and Robinson, M. (1981). Formal assessment of problem solving and planning processes in preschool children. Cognitive Psychology, 13, 113-148.

Klayman, J., and Ha, Y. (1989). Hypothesis testing in rule discovery: Strategy, structure, and content. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15 (4), 596-604.

Kline, M. (1980). Mathematics: The loss of certainty . New York: Oxford University Press.

Konold, C. (1989). Informal conceptions of probability. Cognition and Instruction, 6, 59-98.

Koslowski, B. (1996). Theory and evidence: The development of scientific reasoning. Cambridge, MA: MIT Press.

Koslowski, B., and Okagaki, L. (1986). Non-human indices of causation in problem-solving situations: Causal mechanisms, analogous effects, and the status of rival alternative accounts. Child Development, 57, 1100-1108.

Koslowski, B., Okagaki, L., Lorenz, C., and Umbach, D. (1989). When covariation is not enough: The role of causal mechanism, sampling method, and sample size in causal reasoning. Child Development, 60, 1316-1327.

Kuhn, D. (1989). Children and adults as intuitive scientists . Psychological Review, 96 , 674-689.

Kuhn, D. (2001). How do people know? Psychological Science, 12, 1-8.

Kuhn, D. (2002). What is scientific thinking and how does it develop? In U. Goswami (Ed.), Blackwell handbook of childhood cognitive development (pp. 371-393). Oxford, England: Blackwell.

Kuhn, D., Amsel, E., and O’Loughlin, M. (1988). The development of scientific thinking skills. Orlando, FL: Academic Press.

Kuhn, D., and Dean, D. (2005). Is developing scientific thinking all about learning to control variables? Psychological Science, 16 (11), 886-870.

Kuhn, D., and Franklin, S. (2006). The second decade: What develops (and how)? In W. Damon, R.M. Lerner, D. Kuhn, and R.S. Siegler (Eds.), Handbook of child psychology, volume 2, cognition, peception, and language, 6th edition (pp. 954-994). Hoboken, NJ: Wiley.

Kuhn, D., Garcia-Mila, M., Zohar, A., and Andersen, C. (1995). Strategies of knowledge acquisition. Monographs of the Society for Research in Child Development, Serial No. 245 (60), 4.

Kuhn, D., and Pearsall, S. (1998). Relations between metastrategic knowledge and strategic performance. Cognitive Development, 13, 227-247.

Kuhn, D., and Pearsall, S. (2000). Developmental origins of scientific thinking. Journal of Cognition and Development, 1, 113-129.

Kuhn, D., and Phelps, E. (1982). The development of problem-solving strategies. In H. Reese (Ed.), Advances in child development and behavior ( vol. 17, pp. 1-44). New York: Academic Press.

Kuhn, D., Schauble, L., and Garcia-Mila, M. (1992). Cross-domain development of scientific reasoning. Cognition and Instruction, 9, 285-327.

Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108, 480-498.

Larkin, J.H., McDermott, J., Simon, D.P, and Simon, H.A. (1980). Expert and novice performance in solving physics problems. Science , 208 , 1335-1342.

Latour, B. (1990). Drawing things together. In M. Lynch and S. Woolgar (Eds.), Representation in scientific practice (pp. 19-68). Cambridge, MA: MIT Press.

Lehrer, R. (2003). Developing understanding of measurement. In J. Kilpatrick, W.G. Martin, and D.E. Schifter (Eds.), A research companion to principles and standards for school mathematics (pp. 179-192). Reston, VA: National Council of Teachers of Mathematics.

Lehrer, R., Giles, N., and Schauble, L. (2002). Data modeling. In R. Lehrer and L. Schauble (Eds.), Investigating real data in the classroom: Expanding children’s understanding of math and science (pp. 1-26). New York: Teachers College Press.

Lehrer, R., and Pritchard, C. (2002). Symbolizing space into being. In K. Gravemeijer, R. Lehrer, B. van Oers, and L. Verschaffel (Eds.), Symbolization, modeling and tool use in mathematics education (pp. 59-86). Dordrecht, The Netherlands: Kluwer Academic.

Lehrer, R., and Romberg, T. (1996). Exploring children’s data modeling. Cognition and Instruction , 14 , 69-108.

Lehrer, R., and Schauble, L. (2000). The development of model-based reasoning. Journal of Applied Developmental Psychology, 21 (1), 39-48.

Lehrer, R., and Schauble, L. (2002). Symbolic communication in mathematics and science: Co-constituting inscription and thought. In E.D. Amsel and J. Byrnes (Eds.), Language, literacy, and cognitive development: The development and consequences of symbolic communicat i on (pp. 167-192). Mahwah, NJ: Lawrence Erlbaum Associates.

Lehrer, R., and Schauble, L. (2003). Origins and evolution of model-based reasoning in mathematics and science. In R. Lesh and H.M. Doerr (Eds.), Beyond constructivism: A models and modeling perspective on mathematics problem-solving, learning, and teaching (pp. 59-70). Mahwah, NJ: Lawrence Erlbaum Associates.

Lehrer, R., and Schauble, L., (2005). Developing modeling and argument in the elementary grades. In T.A. Rombert, T.P. Carpenter, and F. Dremock (Eds.), Understanding mathematics and science matters (Part II: Learning with understanding). Mahwah, NJ: Lawrence Erlbaum Associates.

Lehrer, R., and Schauble, L. (2006). Scientific thinking and science literacy. In W. Damon, R. Lerner, K.A. Renninger, and I.E. Sigel (Eds.), Handbook of child psychology, 6th edition (vol. 4). Hoboken, NJ: Wiley.

Lehrer, R., Schauble, L., Strom, D., and Pligge, M. (2001). Similarity of form and substance: Modeling material kind. In D. Klahr and S. Carver (Eds.), Cognition and instruction: 25 years of progress (pp. 39-74). Mahwah, NJ: Lawrence Erlbaum Associates.

Liben, L.S., and Downs, R.M. (1993). Understanding per son-space-map relations: Cartographic and developmental perspectives. Developmental Psychology, 29 , 739-752.

Linn, M.C. (1978). Influence of cognitive style and training on tasks requiring the separation of variables schema. Child Development , 49 , 874-877.

Linn, M.C. (1980). Teaching students to control variables: Some investigations using free choice experiences. In S. Modgil and C. Modgil (Eds.), Toward a theory of psychological development within the Piagettian framework . Windsor Berkshire, England: National Foundation for Educational Research.

Linn, M.C., Chen, B., and Thier, H.S. (1977). Teaching children to control variables: Investigations of a free choice environment. Journal of Research in Science Teaching , 14 , 249-255.

Linn, M.C., and Levine, D.I. (1978). Adolescent reasoning: Influence of question format and type of variables on ability to control variables. Science Education , 62 (3), 377-388.

Lovett, M.C., and Anderson, J.R. (1995). Making heads or tails out of selecting problem-solving strategies. In J.D. Moore and J.F. Lehman (Eds.), Proceedings of the seventieth annual conference of the Cognitive Science Society (pp. 265-270). Hillsdale, NJ: Lawrence Erlbaum Associates.

Lovett, M.C., and Anderson, J.R. (1996). History of success and current context in problem solving. Cognitive Psychology , 31 (2), 168-217.

Masnick, A.M., and Klahr, D. (2003). Error matters: An initial exploration of elementary school children’s understanding of experimental error. Journal of Cognition and Development, 4 , 67-98.

Mayer, R. (1993). Illustrations that instruct. In R. Glaser (Ed.), Advances in instructional psychology (vol. 4, pp. 253-284). Hillsdale, NJ: Lawrence Erlbaum Associates.

McClain, K., Cobb, P., Gravemeijer, K., and Estes, B. (1999). Developing mathematical reasoning within the context of measurement. In L. Stiff (Ed.), Developing mathematical reasoning, K-12 (pp. 93-106). Reston, VA: National Council of Teachers of Mathematics.

McNay, M., and Melville, K.W. (1993). Children’s skill in making predictions and their understanding of what predicting means: A developmental study. Journal of Research in Science Teaching , 30, 561-577.

Metz, K.E. (2004). Children’s understanding of scientific inquiry: Their conceptualization of uncertainty in investigations of their own design. Cognition and Instruction, 22( 2), 219-290.

Mokros, J., and Russell, S. (1995). Children’s concepts of average and representativeness. Journal for Research in Mathematics Education, 26 (1), 20-39.

National Council of Teachers of Mathematics. (2000). Principles and standards for school mathematics. Reston, VA: Author.

Nisbett, R.E., Krantz, D.H., Jepson, C., and Kind, Z. (1983). The use of statistical heuristics in everyday inductive reasoning. Psychological Review, 90 , 339-363.

Penner, D., Giles, N.D., Lehrer, R., and Schauble, L. (1997). Building functional models: Designing an elbow. Journal of Research in Science Teaching, 34(2) , 125-143.

Penner, D.E., and Klahr, D. (1996a). The interaction of domain-specific knowledge and domain-general discovery strategies: A study with sinking objects. Child Development, 67, 2709-2727.

Penner, D.E., and Klahr, D. (1996b). When to trust the data: Further investigations of system error in a scientific reasoning task. Memory and Cognition, 24, 655-668 .

Perfetti, CA. (1992). The representation problem in reading acquisition. In P.B. Gough, L.C. Ehri, and R. Treiman (Eds.), Reading acquisition (pp. 145-174). Hillsdale, NJ: Lawrence Erlbaum Associates.

Petrosino, A., Lehrer, R., and Schauble, L. (2003). Structuring error and experimental variation as distribution in the fourth grade. Mathematical Thinking and Learning, 5 (2-3), 131-156.

Pollatsek, A., Lima, S., and Well, A.D. (1981). Concept or computation: Students’ misconceptions of the mean. Educational Studies in Mathematics , 12, 191-204.

Ruffman, T., Perner, I., Olson, D.R., and Doherty, M. (1993). Reflecting on scientific thinking: Children’s understanding of the hypothesis-evidence relation. Child Development, 64 (6), 1617-1636.

Schauble, L. (1990). Belief revision in children: The role of prior knowledge and strategies for generating evidence. Journal of Experimental Child Psychology , 49 (1), 31-57.

Schauble, L. (1996). The development of scientific reasoning in knowledge-rich contexts. Developmental Psychology , 32 (1), 102-119.

Schauble, L., Glaser, R., Duschl, R., Schulze, S., and John, J. (1995). Students’ understanding of the objectives and procedures of experimentation in the science classroom. Journal of the Learning Sciences , 4 (2), 131-166.

Schauble, L., Glaser, R., Raghavan, K., and Reiner, M. (1991). Causal models and experimentation strategies in scientific reasoning. Journal of the Learning Sciences , 1 (2), 201-238.

Schauble, L., Glaser, R., Raghavan, K., and Reiner, M. (1992). The integration of knowledge and experimentation strategies in understanding a physical system. Applied Cognitive Psychology , 6 , 321-343.

Schauble, L., Klopfer, L.E., and Raghavan, K. (1991). Students’ transition from an engineering model to a science model of experimentation. Journal of Research in Science Teaching , 28 (9), 859-882.

Siegler, R.S. (1987). The perils of averaging data over strategies: An example from children’s addition. Journal of Experimental Psychology: General, 116, 250-264 .

Siegler, R.S., and Alibali, M.W. (2005). Children’s thinking (4th ed.). Upper Saddle River, NJ: Prentice Hall.

Siegler, R.S., and Crowley, K. (1991). The microgenetic method: A direct means for studying cognitive development. American Psychologist , 46 , 606-620.

Siegler, R.S., and Jenkins, E. (1989). How children discover new strategies . Hillsdale, NJ: Lawrence Erlbaum Associates.

Siegler, R.S., and Liebert, R.M. (1975). Acquisition of formal experiment. Developmental Psychology , 11 , 401-412.

Siegler, R.S., and Shipley, C. (1995). Variation, selection, and cognitive change. In T. Simon and G. Halford (Eds.), Developing cognitive competence: New approaches to process modeling (pp. 31-76). Hillsdale, NJ: Lawrence Erlbaum Associates.

Simon, H.A. (1975). The functional equivalence of problem solving skills. Cognitive Psychology, 7 , 268-288.

Simon, H.A. (2001). Learning to research about learning. In S.M. Carver and D. Klahr (Eds.), Cognition and instruction: Twenty-five years of progress (pp. 205-226). Mahwah, NJ: Lawrence Erlbaum Associates.

Slowiaczek, L.M., Klayman, J., Sherman, S.J., and Skov, R.B. (1992). Information selection and use in hypothesis testing: What is a good question, and what is a good answer. Memory and Cognition, 20 (4), 392-405.

Sneider, C., Kurlich, K., Pulos, S., and Friedman, A. (1984). Learning to control variables with model rockets: A neo-Piagetian study of learning in field settings. Science Education , 68 (4), 463-484.

Sodian, B., Zaitchik, D., and Carey, S. (1991). Young children’s differentiation of hypothetical beliefs from evidence. Child Development, 62 (4), 753-766.

Stevens, R., and Hall, R. (1998). Disciplined perception: Learning to see in technoscience. In M. Lampert and M.L. Blunk (Eds.), Talking mathematics in school: Studies of teaching and learning (pp. 107-149). Cambridge, MA: Cambridge University Press.

Strauss, S., and Bichler, E. (1988). The development of children’s concepts of the arithmetic average. Journal for Research in Mathematics Education, 19 (1), 64-80.

Thagard, P. (1998a). Ulcers and bacteria I: Discovery and acceptance. Studies in History and Philosophy of Science. Part C: Studies in History and Philosophy of Biology and Biomedical Sciences, 29, 107-136.

Thagard, P. (1998b). Ulcers and bacteria II: Instruments, experiments, and social interactions. Studies in History and Philosophy of Science. Part C: Studies in History and Philosophy of Biology and Biomedical Sciences, 29 (2), 317-342.

Toth, E.E., Klahr, D., and Chen, Z. (2000). Bridging research and practice: A cognitively-based classroom intervention for teaching experimentation skills to elementary school children. Cognition and Instruction , 18 (4), 423-459.

Trafton, J.G., and Trickett, S.B. (2001). Note-taking for self-explanation and problem solving. Human-Computer Interaction, 16, 1-38.

Triona, L., and Klahr, D. (in press). The development of children’s abilities to produce external representations. In E. Teubal, J. Dockrell, and L. Tolchinsky (Eds.), Notational knowledge: Developmental and historical perspectives . Rotterdam, The Netherlands: Sense.

Varnhagen, C. (1995). Children’s spelling strategies. In V. Berninger (Ed.), The varieties of orthographic knowledge: Relationships to phonology, reading and writing (vol. 2, pp. 251-290). Dordrecht, The Netherlands: Kluwer Academic.

Warren, B., Rosebery, A., and Conant, F. (1994). Discourse and social practice: Learning science in language minority classrooms. In D. Spencer (Ed.), Adult biliteracy in the United States (pp. 191-210). McHenry, IL: Delta Systems.

Wolpert, L. (1993). The unnatural nature of science . London, England: Faber and Faber.

Zachos, P., Hick, T.L., Doane, W.E.I., and Sargent, C. (2000). Setting theoretical and empirical foundations for assessing scientific inquiry and discovery in educational programs. Journal of Research in Science Teaching, 37 (9), 938-962.

Zimmerman, C., Raghavan, K., and Sartoris, M.L. (2003). The impact of the MARS curriculum on students’ ability to coordinate theory and evidence. International Journal of Science Education, 25, 1247-1271.

What is science for a child? How do children learn about science and how to do science? Drawing on a vast array of work from neuroscience to classroom observation, Taking Science to School provides a comprehensive picture of what we know about teaching and learning science from kindergarten through eighth grade. By looking at a broad range of questions, this book provides a basic foundation for guiding science teaching and supporting students in their learning. Taking Science to School answers such questions as:

  • When do children begin to learn about science? Are there critical stages in a child's development of such scientific concepts as mass or animate objects?
  • What role does nonschool learning play in children's knowledge of science?
  • How can science education capitalize on children's natural curiosity?
  • What are the best tasks for books, lectures, and hands-on learning?
  • How can teachers be taught to teach science?

The book also provides a detailed examination of how we know what we know about children's learning of science—about the role of research and evidence. This book will be an essential resource for everyone involved in K-8 science education—teachers, principals, boards of education, teacher education providers and accreditors, education researchers, federal education agencies, and state and federal policy makers. It will also be a useful guide for parents and others interested in how children learn.


Problem solving skills
A problem is any unpleasant situation that prevents people from achieving what they want to achieve. Any activity undertaken to eliminate a problem is termed problem solving.

Problem-solving skills refer to our ability to solve problems effectively and in a timely manner, without impediments.

This involves identifying and defining the problem, generating alternative solutions, evaluating and selecting the best alternative, and implementing the selected solution. Obtaining feedback and responding to it appropriately is also an essential aspect of problem solving.

We face problems all the time. Some problems are more complex than others, but whether you face big problems or small ones, this skill helps you solve them effectively.

Importance of problem solving skills

Obviously, every organization has problems and every individual has problems too. For this reason, the ability to solve problems is of great importance to individuals and organizations. Some of the benefits include:

  • Makes the impossible possible.  Knowledge alone is not the key to solving problems; complementing it with systematic problem-solving approaches makes the difference. This helps individuals and organizations overcome perilous challenges.
  • Makes you stand out.  People are trained to do the usual. They have acquired skills and knowledge in what they do. However, people can rarely solve problems that are unexpected or unprecedented. If you become a regular problem solver at your workplace, you are easily noticed, recognized, and appreciated.
  • Increases your confidence.  No matter where you work or what your profession is, the ability to solve problems will boost your confidence level. Because you are sure of your ability to solve problems, you don’t spend time worrying about what you will do if a problem arises.

How to improve upon problem solving skills

Just like other skills, the art of problem solving can be learnt and improved upon. Below are a few tips to help you improve this skill.

  • Detach yourself from the problem.  Don’t regard yourself as the problem itself, and don’t presume you are incapable of solving it. See the problem as an enemy that can be defeated by you.
  • Analyze it in parts and not as a whole.  Don’t see the problem as one big unit that needs to be fixed – that may deter you from attempting to solve it. Rather, break it into parts and tackle them step by step, portion by portion. The little pieces you solve will add up to become the solution for the whole. For instance, if there is turmoil in your organization, analyze the various aspects or departments of the organization. Choose one problematic area, such as communication, to start from. When that is fixed, you may move on to the other problematic areas.
  • Be inquisitive and investigative.  Being inquisitive and conducting thorough investigation and research helps you identify the core of the problem. In other words, it grants you access to the cause of the problem. Once the real cause is known, it becomes easier to solve.
  • Be open to suggestions.  Other people’s contributions can be very helpful. They save you the time of having to search for every piece of information that is needed.




Review Article | Open access | Published: 11 January 2023

The effectiveness of collaborative problem solving in promoting students’ critical thinking: A meta-analysis based on empirical literature

Enwei Xu (ORCID: orcid.org/0000-0001-6424-8169), Wei Wang & Qingxia Wang

Humanities and Social Sciences Communications, volume 10, Article number: 16 (2023)


Collaborative problem-solving has been widely embraced in the classroom instruction of critical thinking, which is regarded as the core of curriculum reform based on key competencies in the field of education as well as a key competence for learners in the 21st century. However, the effectiveness of collaborative problem-solving in promoting students’ critical thinking remains uncertain. This current research presents the major findings of a meta-analysis of 36 pieces of the literature revealed in worldwide educational periodicals during the 21st century to identify the effectiveness of collaborative problem-solving in promoting students’ critical thinking and to determine, based on evidence, whether and to what extent collaborative problem solving can result in a rise or decrease in critical thinking. The findings show that (1) collaborative problem solving is an effective teaching approach to foster students’ critical thinking, with a significant overall effect size (ES = 0.82, z = 12.78, P < 0.01, 95% CI [0.69, 0.95]); (2) in respect to the dimensions of critical thinking, collaborative problem solving can significantly and successfully enhance students’ attitudinal tendencies (ES = 1.17, z = 7.62, P < 0.01, 95% CI [0.87, 1.47]); nevertheless, it falls short in terms of improving students’ cognitive skills, having only an upper-middle impact (ES = 0.70, z = 11.55, P < 0.01, 95% CI [0.58, 0.82]); and (3) the teaching type (χ² = 7.20, P < 0.05), intervention duration (χ² = 12.18, P < 0.01), subject area (χ² = 13.36, P < 0.05), group size (χ² = 8.77, P < 0.05), and learning scaffold (χ² = 9.03, P < 0.01) all have an impact on critical thinking, and they can be viewed as important moderating factors that affect how critical thinking develops. On the basis of these results, recommendations are made for further study and instruction to better support students’ critical thinking in the context of collaborative problem-solving.


Introduction

Although critical thinking has a long history in research, the concept of critical thinking, which is regarded as an essential competence for learners in the 21st century, has recently attracted more attention from researchers and teaching practitioners (National Research Council, 2012 ). Critical thinking should be the core of curriculum reform based on key competencies in the field of education (Peng and Deng, 2017 ) because students with critical thinking can not only understand the meaning of knowledge but also effectively solve practical problems in real life even after knowledge is forgotten (Kek and Huijser, 2011 ). The definition of critical thinking is not universal (Ennis, 1989 ; Castle, 2009 ; Niu et al., 2013 ). In general, the definition of critical thinking is a self-aware and self-regulated thought process (Facione, 1990 ; Niu et al., 2013 ). It refers to the cognitive skills needed to interpret, analyze, synthesize, reason, and evaluate information as well as the attitudinal tendency to apply these abilities (Halpern, 2001 ). The view that critical thinking can be taught and learned through curriculum teaching has been widely supported by many researchers (e.g., Kuncel, 2011 ; Leng and Lu, 2020 ), leading to educators’ efforts to foster it among students. In the field of teaching practice, there are three types of courses for teaching critical thinking (Ennis, 1989 ). The first is an independent curriculum in which critical thinking is taught and cultivated without involving the knowledge of specific disciplines; the second is an integrated curriculum in which critical thinking is integrated into the teaching of other disciplines as a clear teaching goal; and the third is a mixed curriculum in which critical thinking is taught in parallel to the teaching of other disciplines for mixed teaching training. Furthermore, numerous measuring tools have been developed by researchers and educators to measure critical thinking in the context of teaching practice. These include standardized measurement tools, such as WGCTA, CCTST, CCTT, and CCTDI, which have been verified by repeated experiments and are considered effective and reliable by international scholars (Facione and Facione, 1992 ). In short, descriptions of critical thinking, including its two dimensions of attitudinal tendency and cognitive skills, different types of teaching courses, and standardized measurement tools provide a complex normative framework for understanding, teaching, and evaluating critical thinking.

Cultivating critical thinking in curriculum teaching can start with a problem, and one of the most popular critical thinking instructional approaches is problem-based learning (Liu et al., 2020 ). Duch et al. ( 2001 ) noted that problem-based learning in group collaboration is progressive active learning, which can improve students’ critical thinking and problem-solving skills. Collaborative problem-solving is the organic integration of collaborative learning and problem-based learning, which takes learners as the center of the learning process and uses problems with poor structure in real-world situations as the starting point for the learning process (Liang et al., 2017 ). Students learn the knowledge needed to solve problems in a collaborative group, reach a consensus on problems in the field, and form solutions through social cooperation methods, such as dialogue, interpretation, questioning, debate, negotiation, and reflection, thus promoting the development of learners’ domain knowledge and critical thinking (Cindy, 2004 ; Liang et al., 2017 ).

Collaborative problem-solving has been widely used in the teaching practice of critical thinking, and several studies have attempted to conduct systematic reviews and meta-analyses of the empirical literature on critical thinking from various perspectives. However, little attention has been paid to the impact of collaborative problem-solving on critical thinking. Examining how to implement critical thinking instruction is therefore the best approach to developing and enhancing critical thinking through collaborative problem-solving; however, this issue is still underexplored, which means that many teachers remain ill-equipped to teach critical thinking well (Leng and Lu, 2020; Niu et al., 2013). For example, Huber (2016) reported meta-analytic findings from 71 publications on critical thinking gains over various time frames in college, with the aim of determining whether critical thinking is truly teachable. These authors found that learners significantly improve their critical thinking while in college and that critical thinking differs with factors such as teaching strategies, intervention duration, subject area, and teaching type. However, that study did not determine the usefulness of collaborative problem-solving in fostering students’ critical thinking, nor did it reveal whether there were significant variations among the different elements. A meta-analysis of 31 pieces of educational literature was conducted by Liu et al. (2020) to assess the impact of problem-solving on college students’ critical thinking. These authors found that problem-solving could promote the development of critical thinking among college students and proposed establishing a reasonable group structure for problem-solving in a follow-up study to improve students’ critical thinking. Additionally, previous empirical studies have reached inconclusive and even contradictory conclusions about whether and to what extent collaborative problem-solving increases or decreases critical thinking levels. As an illustration, Yang et al. (2008) carried out an experiment on the integrated curriculum teaching of college students based on a web bulletin board with the goal of fostering participants’ critical thinking in the context of collaborative problem-solving. Their research revealed that, through sharing, debating, examining, and reflecting on various experiences and ideas, collaborative problem-solving can considerably enhance students’ critical thinking in real-life problem situations. In contrast, according to research by Naber and Wyatt (2014) and Sendag and Odabasi (2009) on undergraduate and high school students, respectively, collaborative problem-solving had a positive impact on learners’ interaction and could improve learning interest and motivation but could not significantly improve students’ critical thinking compared with traditional classroom teaching.

The above studies show that there is inconsistency regarding the effectiveness of collaborative problem-solving in promoting students’ critical thinking. Therefore, it is essential to conduct a thorough and trustworthy review to determine whether and to what degree collaborative problem-solving results in a rise or decrease in critical thinking. Meta-analysis is a quantitative analysis approach that is utilized to examine quantitative data from various separate studies that are all focused on the same research topic. This approach characterizes an intervention’s effectiveness by averaging the effect sizes of numerous individual studies, reducing the uncertainty inherent in any single study and producing more conclusive findings (Lipsey and Wilson, 2001).
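To make the pooling idea concrete, here is a minimal sketch in Python of inverse-variance weighting, the simplest (fixed-effect) way of averaging effect sizes. The numbers are invented for illustration; the present study actually used Rev-Man 5.4 with a random-effects model, discussed further below.

```python
import math

# Hypothetical effect sizes (standardized mean differences) and their
# variances from three primary studies -- illustrative numbers only.
effects = [0.9, 0.6, 1.1]
variances = [0.04, 0.09, 0.06]

# Inverse-variance weighting: more precise studies count for more.
weights = [1.0 / v for v in variances]
pooled = sum(w * es for w, es in zip(weights, effects)) / sum(weights)

# Standard error and 95% confidence interval of the pooled estimate.
se = math.sqrt(1.0 / sum(weights))
low, high = pooled - 1.96 * se, pooled + 1.96 * se

print(f"pooled ES = {pooled:.2f}, 95% CI [{low:.2f}, {high:.2f}]")
```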

This paper carried out a meta-analysis to examine the effectiveness of collaborative problem-solving in promoting students’ critical thinking, in order to contribute to both research and practice. The following research questions were addressed:

What is the overall effect size of collaborative problem-solving in promoting students’ critical thinking and its impact on the two dimensions of critical thinking (i.e., attitudinal tendency and cognitive skills)?

If the impacts of the various experimental designs in the included studies are heterogeneous, how do the moderating variables account for the disparities between the study conclusions?

This research followed the strict procedures (e.g., database searching, identification, screening, eligibility, merging, duplicate removal, and analysis of included studies) of Cooper’s ( 2010 ) proposed meta-analysis approach for examining quantitative data from various separate studies that are all focused on the same research topic. The relevant empirical research that appeared in worldwide educational periodicals within the 21st century was subjected to this meta-analysis using Rev-Man 5.4. The consistency of the data extracted separately by two researchers was tested using Cohen’s kappa coefficient, and a publication bias test and a heterogeneity test were run on the sample data to ascertain the quality of this meta-analysis.
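As a sketch of how the inter-rater consistency check can be computed, Cohen’s kappa is available in scikit-learn; the ten coding decisions below are hypothetical stand-ins for the two researchers’ include/exclude judgments.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical coding decisions by the two researchers for ten articles
# (1 = include, 0 = exclude).
coder_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
coder_b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]

# Cohen's kappa corrects raw percent agreement for the agreement
# expected by chance alone.
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")
```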

Data sources and search strategies

There were three stages to the data collection process for this meta-analysis, as shown in Fig. 1, which presents the number of articles included and eliminated during the selection process based on the stated study eligibility criteria.

Figure 1. Flowchart showing the number of records identified, included, and excluded in the selection process.

First, the databases used to systematically search for relevant articles were the journal papers of the Web of Science Core Collection and the Chinese Core source journal, as well as the Chinese Social Science Citation Index (CSSCI) source journal papers included in CNKI. These databases were selected because they are credible platforms that are sources of scholarly and peer-reviewed information with advanced search tools and contain literature relevant to the subject of our topic from reliable researchers and experts. The search string with the Boolean operator used in the Web of Science was “TS = (((“critical thinking” or “ct” and “pretest” or “posttest”) or (“critical thinking” or “ct” and “control group” or “quasi experiment” or “experiment”)) and (“collaboration” or “collaborative learning” or “CSCL”) and (“problem solving” or “problem-based learning” or “PBL”))”. The research area was “Education Educational Research”, and the search period was “January 1, 2000, to December 30, 2021”. A total of 412 papers were obtained. The search string with the Boolean operator used in the CNKI was “SU = (‘critical thinking’*‘collaboration’ + ‘critical thinking’*‘collaborative learning’ + ‘critical thinking’*‘CSCL’ + ‘critical thinking’*‘problem solving’ + ‘critical thinking’*‘problem-based learning’ + ‘critical thinking’*‘PBL’ + ‘critical thinking’*‘problem oriented’) AND FT = (‘experiment’ + ‘quasi experiment’ + ‘pretest’ + ‘posttest’ + ‘empirical study’)” (translated into Chinese when searching). A total of 56 studies were found throughout the search period of “January 2000 to December 2021”. From the databases, all duplicates and retractions were eliminated before exporting the references into Endnote, a program for managing bibliographic references. In all, 466 studies were found.

Second, the studies that matched the inclusion and exclusion criteria for the meta-analysis were chosen by two researchers after they had reviewed the abstracts and titles of the gathered articles, yielding a total of 126 studies.

Third, two researchers thoroughly reviewed each included article’s whole text in accordance with the inclusion and exclusion criteria. Meanwhile, a snowball search was performed using the references and citations of the included articles to ensure complete coverage of the articles. Ultimately, 36 articles were kept.

Two researchers worked together to carry out this entire process, and after discussion and negotiation to resolve any emerging differences, a consensus rate of 94.7% was reached.

Eligibility criteria

Since not all the retrieved studies matched the criteria for this meta-analysis, eligibility criteria for both inclusion and exclusion were developed as follows:

The publication language of the included studies was limited to English and Chinese, and the full text had to be obtainable. Articles not meeting the language requirement and articles not published between 2000 and 2021 were excluded.

The research design of the included studies must be empirical and quantitative studies that can assess the effect of collaborative problem-solving on the development of critical thinking. Articles that could not identify the causal mechanisms by which collaborative problem-solving affects critical thinking, such as review articles and theoretical articles, were excluded.

The research method of the included studies had to be a randomized controlled experiment, a quasi-experiment, or a natural experiment; these designs have a higher degree of internal validity and can all plausibly provide evidence that critical thinking and collaborative problem-solving are causally related. Articles with non-experimental research methods, such as purely correlational or observational studies, were excluded.

The participants of the included studies were only students in school, including K-12 students and college students. Articles in which the participants were non-school students, such as social workers or adult learners, were excluded.

The research results of the included studies had to report the statistics needed to gauge the impact on critical thinking (e.g., sample size, mean value, or standard deviation). Articles that lacked specific measurement indicators for critical thinking, or for which the effect size could not be calculated, were excluded.

Data coding design

In order to perform a meta-analysis, it is necessary to collect the most important information from the articles, codify that information’s properties, and convert descriptive data into quantitative data. Therefore, this study designed a data coding template (see Table 1 ). Ultimately, 16 coding fields were retained.

The designed data-coding template consisted of three pieces of information. Basic information about the papers was included in the descriptive information: the publishing year, author, serial number, and title of the paper.

The variable information for the experimental design had three variables: the independent variable (instruction method), the dependent variable (critical thinking), and the moderating variable (learning stage, teaching type, intervention duration, learning scaffold, group size, measuring tool, and subject area). Depending on the topic of this study, the intervention strategy, as the independent variable, was coded into collaborative and non-collaborative problem-solving. The dependent variable, critical thinking, was coded as a cognitive skill and an attitudinal tendency. And seven moderating variables were created by grouping and combining the experimental design variables discovered within the 36 studies (see Table 1 ), where learning stages were encoded as higher education, high school, middle school, and primary school or lower; teaching types were encoded as mixed courses, integrated courses, and independent courses; intervention durations were encoded as 0–1 weeks, 1–4 weeks, 4–12 weeks, and more than 12 weeks; group sizes were encoded as 2–3 persons, 4–6 persons, 7–10 persons, and more than 10 persons; learning scaffolds were encoded as teacher-supported learning scaffold, technique-supported learning scaffold, and resource-supported learning scaffold; measuring tools were encoded as standardized measurement tools (e.g., WGCTA, CCTT, CCTST, and CCTDI) and self-adapting measurement tools (e.g., modified or made by researchers); and subject areas were encoded according to the specific subjects used in the 36 included studies.
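To show what one row of such a coding sheet might look like in practice, here is a minimal sketch of a coded record; every field value below is hypothetical and only illustrates the structure of the template.

```python
# One coded record under the template described above; all values are
# invented for illustration.
record = {
    # descriptive information
    "serial_number": 7,
    "author": "Example et al.",
    "year": 2015,
    "title": "A hypothetical collaborative problem-solving study",
    # experimental-design variables
    "independent_variable": "collaborative problem-solving",
    "dependent_variable": "cognitive skill",  # or "attitudinal tendency"
    "learning_stage": "higher education",
    "teaching_type": "integrated course",
    "intervention_duration": "4-12 weeks",
    "group_size": "4-6 persons",
    "learning_scaffold": "teacher-supported",
    "measuring_tool": "standardized (e.g., CCTST)",
    "subject_area": "science",
    # data information, per condition
    "experimental": {"n": 45, "mean": 3.9, "sd": 0.6},
    "control": {"n": 43, "mean": 3.4, "sd": 0.7},
}
```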

The data information contained three metrics for measuring critical thinking: sample size, average value, and standard deviation. It is vital to remember that studies with different experimental designs frequently adopt different formulas to determine the effect size. This paper used the standardized mean difference (SMD) formula proposed by Morris (2008, p. 369; see Supplementary Table S3).
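For orientation, the pretest-posttest-control-group SMD from Morris (2008) is commonly written as below; this is a sketch of the general form, and the exact variant the authors applied is given in their Supplementary Table S3.

```latex
d_{ppc} \;=\; c_p \,
\frac{\left(M_{\mathrm{post},T}-M_{\mathrm{pre},T}\right)
      -\left(M_{\mathrm{post},C}-M_{\mathrm{pre},C}\right)}
     {SD_{\mathrm{pre}}},
\qquad
c_p \;=\; 1-\frac{3}{4\,(n_T+n_C-2)-1}
```

Here SD_pre is the pooled pretest standard deviation of the treatment (T) and control (C) groups, and c_p is a small-sample bias correction.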

Procedure for extracting and coding data

According to the data coding template (see Table 1), the information from the 36 papers was retrieved by two researchers, who then entered it into Excel (see Supplementary Table S1). In the data extraction procedure, the results of each study were extracted separately if an article contained numerous studies on critical thinking or if a study assessed different critical thinking dimensions. For instance, Tiwari et al. (2010) used four time points, which were viewed as separate studies, to examine the outcomes of critical thinking, and Chen (2013) included the two outcome variables of attitudinal tendency and cognitive skills, which were regarded as two studies. After discussion and negotiation during data extraction, the consistency coefficient between the two researchers was 93.27%. Supplementary Table S2 details the key characteristics of the 36 included articles with 79 effect quantities, including descriptive information (e.g., the publishing year, author, serial number, and title of the paper), variable information (e.g., independent variables, dependent variables, and moderating variables), and data information (e.g., mean values, standard deviations, and sample sizes). Following that, testing for publication bias and heterogeneity was done on the sample data using the Rev-Man 5.4 software, and the test results were then used to conduct the meta-analysis.

Publication bias test

Publication bias is exhibited when the sample of studies included in a meta-analysis does not accurately reflect the general state of research on the relevant subject; such bias can impact the reliability and accuracy of the meta-analysis. The meta-analysis therefore needs to check the sample data for publication bias (Stewart et al., 2006). A popular check is the funnel plot: publication bias is unlikely when the data points are dispersed evenly on either side of the average effect size and concentrated within the upper region of the plot. The data in this analysis are evenly dispersed within the upper portion of the funnel (see Fig. 2), indicating that publication bias is unlikely in this situation.

Figure 2. Funnel plot of the publication bias test for the 79 effect quantities across the 36 studies.
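A funnel plot of this kind is straightforward to reproduce: plot each effect size against its standard error, with the y-axis inverted so that the most precise studies sit at the top. The sketch below uses invented values in place of the 79 extracted effect quantities.

```python
import matplotlib.pyplot as plt

# Hypothetical effect sizes and standard errors standing in for the
# extracted effect quantities.
effects = [0.3, 0.5, 0.7, 0.8, 0.9, 1.0, 1.2, 0.6, 0.85, 0.75]
ses = [0.30, 0.22, 0.15, 0.10, 0.12, 0.18, 0.25, 0.08, 0.09, 0.20]

mean_es = sum(effects) / len(effects)

plt.scatter(effects, ses)
plt.axvline(mean_es, linestyle="--")  # vertical line at the average effect
plt.gca().invert_yaxis()              # precise (low-SE) studies at the top
plt.xlabel("Effect size (SMD)")
plt.ylabel("Standard error")
plt.title("Funnel plot: symmetric scatter suggests little publication bias")
plt.show()
```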

Heterogeneity test

To select the appropriate effect model for the meta-analysis, the results of a heterogeneity test on the effect sizes are used. In a meta-analysis, it is common practice to gauge the degree of data heterogeneity using the I² statistic, and I² ≥ 50% is typically understood to denote medium-to-high heterogeneity, which calls for the adoption of a random effect model; otherwise, a fixed effect model ought to be applied (Lipsey and Wilson, 2001). The findings of the heterogeneity test in this paper (see Table 2) revealed that I² was 86%, displaying significant heterogeneity (P < 0.01). To ensure accuracy and reliability, the overall effect size ought to be calculated utilizing the random effect model.
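The decision rule can be sketched end to end: compute Cochran’s Q, derive I², and, when I² is high, re-pool with DerSimonian-Laird random-effects weights. All input numbers below are invented for illustration.

```python
# Hypothetical effect sizes and variances for five studies.
effects = [0.9, 0.6, 1.1, 0.4, 1.3]
variances = [0.04, 0.09, 0.06, 0.05, 0.08]

w = [1.0 / v for v in variances]
fixed = sum(wi * es for wi, es in zip(w, effects)) / sum(w)

# Cochran's Q and the I^2 statistic.
Q = sum(wi * (es - fixed) ** 2 for wi, es in zip(w, effects))
df = len(effects) - 1
I2 = max(0.0, (Q - df) / Q) * 100

# DerSimonian-Laird estimate of the between-study variance (tau^2),
# then random-effects weights and the re-pooled estimate.
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / c)
w_re = [1.0 / (v + tau2) for v in variances]
pooled_re = sum(wi * es for wi, es in zip(w_re, effects)) / sum(w_re)

print(f"Q = {Q:.2f}, I^2 = {I2:.0f}%, random-effects ES = {pooled_re:.2f}")
```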

The analysis of the overall effect size

This meta-analysis utilized a random effect model to examine 79 effect quantities from 36 studies after accounting for heterogeneity. In accordance with Cohen’s criterion (Cohen, 1992), the analysis results, shown in the forest plot of the overall effect (see Fig. 3), make clear that the cumulative effect size of collaborative problem-solving is 0.82, which is statistically significant (z = 12.78, P < 0.01, 95% CI [0.69, 0.95]) and indicates that it can encourage learners to practice critical thinking.

Figure 3. Forest plot of the overall effect size across the 36 studies.

In addition, this study examined two distinct dimensions of critical thinking to better understand the precise contributions that collaborative problem-solving makes to the growth of critical thinking. The findings (see Table 3) indicate that collaborative problem-solving improves cognitive skills (ES = 0.70) and attitudinal tendency (ES = 1.17), with significant intergroup differences (χ² = 7.95, P < 0.01). Although collaborative problem-solving improves both dimensions of critical thinking, it is essential to point out that the improvements in students’ attitudinal tendency are much more pronounced and have a significant comprehensive effect (ES = 1.17, z = 7.62, P < 0.01, 95% CI [0.87, 1.47]), whereas gains in learners’ cognitive skills are more modest, sitting just above the average level (ES = 0.70, z = 11.55, P < 0.01, 95% CI [0.58, 0.82]).

The analysis of moderator effect size

The whole forest plot’s 79 effect quantities underwent a two-tailed test, which revealed significant heterogeneity (I² = 86%, z = 12.78, P < 0.01), indicating differences between effect sizes that may have been influenced by moderating factors other than sampling error. Therefore, subgroup analysis was used to explore the moderating factors that might produce this considerable heterogeneity, namely the learning stage, learning scaffold, teaching type, group size, intervention duration, measuring tool, and subject area included in the 36 experimental designs, in order to further explore the key factors that influence critical thinking. The findings (see Table 4) indicate that various moderating factors have advantageous effects on critical thinking. In this situation, the subject area (χ² = 13.36, P < 0.05), group size (χ² = 8.77, P < 0.05), intervention duration (χ² = 12.18, P < 0.01), learning scaffold (χ² = 9.03, P < 0.01), and teaching type (χ² = 7.20, P < 0.05) are all significant moderators that can be applied to support the cultivation of critical thinking. However, since the learning stage and the measuring tool did not show significant intergroup differences (χ² = 3.15, P = 0.21 > 0.05, and χ² = 0.08, P = 0.78 > 0.05), we are unable to explain why these two factors are crucial in supporting the cultivation of critical thinking in the context of collaborative problem-solving. The precise outcomes are as follows:

Various learning stages influenced critical thinking positively, without significant intergroup differences (χ² = 3.15, P = 0.21 > 0.05). High school had the largest effect size (ES = 1.36, P < 0.01), followed by higher education (ES = 0.78, P < 0.01) and middle school (ES = 0.73, P < 0.01). These results show that, despite the learning stage’s beneficial influence on cultivating learners’ critical thinking, we are unable to explain why it is essential for cultivating critical thinking in the context of collaborative problem-solving.

Different teaching types had varying degrees of positive impact on critical thinking, with significant intergroup differences (χ² = 7.20, P < 0.05). The effect sizes were ranked as follows: mixed courses (ES = 1.34, P < 0.01), integrated courses (ES = 0.81, P < 0.01), and independent courses (ES = 0.27, P < 0.01). These results indicate that the most effective approach to cultivating critical thinking through collaborative problem-solving is the mixed-course teaching type.

Various intervention durations significantly improved critical thinking, with significant intergroup differences (χ² = 12.18, P < 0.01). The effect sizes for this variable tended to increase with longer intervention durations, and the improvement in critical thinking reached a significant level (ES = 0.85, P < 0.01) after more than 12 weeks of training. These findings indicate that intervention duration and the impact on critical thinking are positively correlated, with longer interventions having a greater effect.

Different learning scaffolds influenced critical thinking positively, with significant intergroup differences (χ² = 9.03, P < 0.01). The resource-supported learning scaffold (ES = 0.69, P < 0.01) and the technique-supported learning scaffold (ES = 0.63, P < 0.01) each attained a medium-to-high level of impact, while the teacher-supported learning scaffold (ES = 0.92, P < 0.01) displayed a high and significant level of impact. These results show that the teacher-supported learning scaffold has the greatest impact on cultivating critical thinking.

Various group sizes influenced critical thinking positively, and the intergroup differences were statistically significant (χ² = 8.77, P < 0.05). Critical thinking showed a general declining trend with increasing group size. The overall effect size for groups of 2–3 people was the largest (ES = 0.99, P < 0.01), and when the group size was greater than 7 people, the improvement in critical thinking was at the lower-middle level (ES < 0.5, P < 0.01). These results show that the impact on critical thinking is negatively related to group size: as the group grows, the overall impact declines.

Various measuring tools influenced critical thinking positively, without significant intergroup differences (χ² = 0.08, P = 0.78 > 0.05). The self-adapting measurement tools obtained an upper-medium level of effect (ES = 0.78), whereas the overall effect size of the standardized measurement tools was the largest, achieving a significant level of effect (ES = 0.84, P < 0.01). These results show that, despite the beneficial influence of the measuring tool on cultivating critical thinking, we are unable to explain why it is crucial in fostering the growth of critical thinking utilizing the approach of collaborative problem-solving.

Different subject areas had varying degrees of positive impact on critical thinking, and the intergroup differences were statistically significant (χ² = 13.36, P < 0.05). Mathematics had the greatest overall impact, achieving a significant level of effect (ES = 1.68, P < 0.01), followed by science (ES = 1.25, P < 0.01) and medical science (ES = 0.87, P < 0.01), both of which also achieved significant effects. Programming technology was the least effective (ES = 0.39, P < 0.01), with only a medium-low degree of effect compared to education (ES = 0.72, P < 0.01) and other fields (such as language, art, and social sciences) (ES = 0.58, P < 0.01). These results suggest that scientific fields (e.g., mathematics, science) may be the most effective subject areas for cultivating critical thinking utilizing the approach of collaborative problem-solving.
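As a sketch of how a subgroup (moderator) comparison of this kind can be computed, the snippet below pools a fixed-effect estimate per subgroup and then forms a between-group Q statistic; the group labels mirror the teaching types above, but every number is invented.

```python
# Hypothetical per-study effect sizes and variances, grouped by moderator.
subgroups = {
    "mixed courses":       ([1.2, 1.4, 1.5], [0.05, 0.07, 0.06]),
    "integrated courses":  ([0.7, 0.9, 0.8], [0.04, 0.06, 0.05]),
    "independent courses": ([0.2, 0.3],      [0.05, 0.08]),
}

def pool(effects, variances):
    """Fixed-effect pooled estimate and its total weight."""
    w = [1.0 / v for v in variances]
    est = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    return est, sum(w)

pooled = {g: pool(e, v) for g, (e, v) in subgroups.items()}

# Grand mean across subgroups, weighted by each subgroup's total weight.
total_w = sum(wg for _, wg in pooled.values())
grand = sum(est * wg for est, wg in pooled.values()) / total_w

# Between-group heterogeneity: deviation of each subgroup mean from the
# grand mean; compared against a chi-square with (groups - 1) df.
Q_between = sum(wg * (est - grand) ** 2 for est, wg in pooled.values())
print(f"Q_between = {Q_between:.2f} on {len(subgroups) - 1} df")
```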

The effectiveness of collaborative problem solving with regard to teaching critical thinking

According to this meta-analysis, using collaborative problem-solving as an intervention strategy in critical thinking teaching has a considerable impact on cultivating learners’ critical thinking as a whole and a favorable promotional effect on both dimensions of critical thinking. According to certain studies, collaborative problem-solving, the most frequently used critical thinking teaching strategy in curriculum instruction, can considerably enhance students’ critical thinking (e.g., Liang et al., 2017; Liu et al., 2020; Cindy, 2004). This meta-analysis provides convergent data support for the above research views. Thus, the findings of this meta-analysis not only effectively address the first research question regarding the overall effect of cultivating critical thinking and its impact on the two dimensions of critical thinking (i.e., attitudinal tendency and cognitive skills) utilizing the approach of collaborative problem-solving, but also enhance our confidence in cultivating critical thinking by using the collaborative problem-solving intervention approach in the context of classroom teaching.

Furthermore, the associated improvements in attitudinal tendency are much stronger, while the corresponding improvements in cognitive skill are only marginally better. According to certain studies, cognitive skill differs from attitudinal tendency in classroom instruction: the cultivation and development of the former, as a key ability, is a process of gradual accumulation, while the latter, as an attitude, is affected by the context of the teaching situation (e.g., a novel and exciting teaching approach, challenging and rewarding tasks) (Halpern, 2001; Wei and Hong, 2022). Collaborative problem-solving as a teaching approach is exciting and interesting, as well as rewarding and challenging, because it takes the learners as the focus and examines ill-structured problems in real situations; it can thus inspire students to fully realize their potential for problem-solving, which significantly improves their attitudinal tendency toward solving problems (Liu et al., 2020). Just as collaborative problem-solving influences attitudinal tendency, attitudinal tendency impacts cognitive skill when attempting to solve a problem (Liu et al., 2020; Zhang et al., 2022), and stronger attitudinal tendencies are associated with improved learning achievement and cognitive ability in students (Sison, 2008; Zhang et al., 2022). It can be seen that the two specific dimensions of critical thinking, as well as critical thinking as a whole, are affected by collaborative problem-solving, and this study illuminates the nuanced links between the cognitive skills and attitudinal tendencies that make up these two dimensions. To fully develop students’ capacity for critical thinking, future empirical research should pay closer attention to cognitive skills.

The moderating effects of collaborative problem solving with regard to teaching critical thinking

In order to further explore the key factors that influence critical thinking, subgroup analysis was used to explore the possible moderating effects that might produce the considerable heterogeneity. The findings show that the moderating factors, such as the teaching type, learning stage, group size, learning scaffold, intervention duration, measuring tool, and subject area included in the 36 experimental designs, could all support the cultivation of critical thinking in the context of collaborative problem-solving. Among them, the effect size differences for the learning stage and the measuring tool are not significant, so we cannot explain why these two factors are crucial in supporting the cultivation of critical thinking utilizing the approach of collaborative problem-solving.

In terms of the learning stage, various learning stages influenced critical thinking positively without significant intergroup differences, indicating that we are unable to explain why it is crucial in fostering the growth of critical thinking.

Although higher education accounts for 70.89% of all the empirical studies performed by researchers, high school may be the most appropriate learning stage at which to foster students’ critical thinking utilizing the approach of collaborative problem-solving, since it has the largest overall effect size. This phenomenon may be related to students’ cognitive development, which needs to be studied further in follow-up research.

With regard to teaching type, mixed-course teaching may be the best teaching method for cultivating students’ critical thinking. Relevant studies have shown that, in the actual teaching process, if students are trained in thinking methods alone, the methods they learn are isolated and divorced from subject knowledge, which is not conducive to transfer; conversely, if students’ thinking is trained only within subject teaching, without systematic method training, it is difficult for them to apply it to real-world circumstances (Ruggiero, 2012; Hu and Liu, 2015). Teaching critical thinking as a mixed course, in parallel to other subject teaching, can achieve the best effect on learners’ critical thinking, and explicit critical thinking instruction is more effective than less explicit critical thinking instruction (Bensley and Spero, 2014).

In terms of the intervention duration, the overall effect size shows an upward tendency with longer intervention times; thus, intervention duration and the impact on critical thinking are positively correlated. Critical thinking, as a key competency for students in the 21st century, is difficult to improve meaningfully within a brief intervention; instead, it develops over a lengthy period of time through consistent teaching and the progressive accumulation of knowledge (Halpern, 2001; Hu and Liu, 2015). Therefore, future empirical studies ought to take these restrictions into account by adopting longer periods of critical thinking instruction.

With regard to group size, a group size of 2–3 persons has the highest effect size, and the comprehensive effect size decreases with increasing group size in general. This outcome is in line with some research findings; as an example, a group composed of two to four members is most appropriate for collaborative learning (Schellens and Valcke, 2006). However, the meta-analysis results also indicate that once the group size exceeds 7 people, small groups no longer produce better interaction and performance than large groups. This may be because the learning scaffolds of technique support, resource support, and teacher support improve the frequency and effectiveness of interaction among group members, and a collaborative group with more members may increase the diversity of views, which is helpful for cultivating critical thinking utilizing the approach of collaborative problem-solving.

With regard to the learning scaffold, all three kinds of learning scaffolds can enhance critical thinking. Among them, the teacher-supported learning scaffold has the largest overall effect size, demonstrating the interdependence of effective learning scaffolds and collaborative problem-solving. This outcome is in line with earlier findings. Encouraging learners to collaborate, generate solutions, and develop critical thinking skills by using learning scaffolds is a successful strategy (Reiser, 2004; Xu et al., 2022). Learning scaffolds can lower task complexity and unpleasant feelings while enticing students to engage in learning activities (Wood et al., 2006). Learning scaffolds are designed to help students use learning approaches more successfully within the collaborative problem-solving process, and teacher-supported scaffolds have the greatest influence on critical thinking in this process because they are more targeted, informative, and timely (Xu et al., 2022).

With respect to the measuring tool, although standardized measurement tools (such as the WGCTA, CCTT, and CCTST) have been acknowledged as reliable and valid by experts worldwide, only 54.43% of the studies included in this meta-analysis adopted them for assessment, and the results indicated no intergroup differences. These results suggest that not all teaching circumstances are appropriate for measuring critical thinking with standardized tools. According to Simpson and Courtney (2002, p. 91), "the measuring tools for measuring thinking ability have limits in assessing learners in educational situations and should be adapted appropriately to accurately assess the changes in learners' critical thinking." As a result, to gauge more fully and precisely how learners' critical thinking has evolved, standardized measuring tools must be properly adapted to collaborative problem-solving learning contexts.

With regard to the subject area, the comprehensive effect size for science subjects (e.g., mathematics, science, medical science) is larger than that for language arts and social sciences. Recent international education reforms have noted that critical thinking is a basic part of scientific literacy. Students with scientific literacy can justify their judgments with accurate evidence and reasonable standards when facing challenges or poorly structured problems (Kyndt et al., 2013), which makes critical thinking crucial for developing scientific understanding and applying it to practical problem-solving related to science, technology, and society (Yore et al., 2007).

Suggestions for critical thinking teaching

Beyond the points made in the discussion above, the following suggestions are offered for critical thinking instruction using the approach of collaborative problem-solving.

First, teachers should place special emphasis on the two core elements, collaboration and problem-solving, and design real problems based on collaborative situations. This meta-analysis provides evidence that collaborative problem-solving has a strong synergistic effect on promoting students' critical thinking. Asking questions about real situations and letting learners take part in critical discussions of real problems during class instruction are key ways to teach critical thinking, rather than simply reading speculative articles without practice (Mulnix, 2012). Furthermore, the improvement of students' critical thinking is realized through cognitive conflict with other learners in the problem situation (Yang et al., 2008). Teachers should therefore design real problems and encourage students to discuss, negotiate, and argue in collaborative problem-solving situations.

Second, teachers should design and implement mixed courses to cultivate learners' critical thinking through collaborative problem-solving. Critical thinking can be taught through curriculum instruction (Kuncel, 2011; Leng and Lu, 2020), with the goal of cultivating critical thinking that transfers flexibly to real problem-solving situations. This meta-analysis shows that mixed course teaching has a highly substantial impact on the cultivation and promotion of learners' critical thinking. Teachers should therefore combine real collaborative problem-solving situations with the knowledge content of specific disciplines in conventional teaching, teach methods and strategies of critical thinking based on poorly structured problems, and provide practical activities in which students interact with each other to develop knowledge construction and critical thinking.

Third, teachers, particularly preservice teachers, should receive more training in critical thinking, and they should be conscious of the ways in which teacher-supported learning scaffolds can promote it. The learning scaffold supported by teachers had the greatest impact on learners' critical thinking, being more directive, targeted, and timely (Wood et al., 2006). Critical thinking can only be taught effectively when teachers recognize its significance for students' growth and use appropriate approaches when designing instructional activities (Forawi, 2016). To enable teachers to create learning scaffolds that cultivate learners' critical thinking through collaborative problem-solving, it is therefore essential to concentrate on teacher-supported scaffolds and to strengthen critical thinking instruction for teachers, especially preservice teachers.

Implications and limitations

There are certain limitations in this meta-analysis that future research can correct. First, the search languages were restricted to English and Chinese, so pertinent studies written in other languages may have been overlooked, leaving fewer articles for review. Second, some data were missing from the included studies, such as whether teachers were trained in the theory and practice of critical thinking, the average age and gender of learners, and differences in critical thinking among learners of various ages and genders. Third, as is typical for review articles, further studies were published while this meta-analysis was underway, so the review has a time cutoff. As the relevant research develops, future studies focusing on these issues are highly relevant and needed.

Conclusions

This study addressed the magnitude of collaborative problem-solving's impact on fostering students' critical thinking, a question that had received scant attention in earlier research. The following conclusions can be drawn:

Regarding the results obtained, collaborative problem-solving is an effective teaching approach for fostering learners' critical thinking, with a significant overall effect size (ES = 0.82, z = 12.78, P < 0.01, 95% CI [0.69, 0.95]). With respect to the dimensions of critical thinking, collaborative problem-solving significantly and effectively improves students' attitudinal tendency, with a significant comprehensive effect (ES = 1.17, z = 7.62, P < 0.01, 95% CI [0.87, 1.47]); its effect on students' cognitive skills is smaller, though still upper-middle in magnitude (ES = 0.70, z = 11.55, P < 0.01, 95% CI [0.58, 0.82]).
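As a rough consistency check, the standard error behind such a summary can be back-derived from the reported 95% CI and used to reproduce an approximate z statistic. This is a sketch of the standard Wald-type relationship only; small discrepancies from the reported z are expected because the CI endpoints are rounded, and the authors' exact computation may differ.

```python
# A rough sketch relating the reported summary numbers, assuming a standard
# Wald-type 95% CI (ES +/- 1.96 * SE); the SE is back-derived from the rounded
# CI endpoints, so the z value only approximates the one reported above.
es = 0.82
ci_low, ci_high = 0.69, 0.95
se = (ci_high - ci_low) / (2 * 1.96)    # CI half-width divided by 1.96
z = es / se                             # Wald z statistic
print(f"SE ~ {se:.3f}, z ~ {z:.2f}")    # SE ~ 0.066, z ~ 12.36 (reported: 12.78)
```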

As demonstrated by both the results and the discussion, all seven moderating factors identified across the 36 studies have varying degrees of beneficial effect on students' critical thinking. In this context, teaching type (χ² = 7.20, P < 0.05), intervention duration (χ² = 12.18, P < 0.01), subject area (χ² = 13.36, P < 0.05), group size (χ² = 8.77, P < 0.05), and learning scaffold (χ² = 9.03, P < 0.01) all have a positive impact on critical thinking, and they can be viewed as important moderating factors that affect how critical thinking develops. Since the learning stage (χ² = 3.15, P = 0.21 > 0.05) and measuring tool (χ² = 0.08, P = 0.78 > 0.05) did not demonstrate any significant intergroup differences, we are unable to explain why these two factors are crucial in supporting the cultivation of critical thinking in the context of collaborative problem-solving.

Data availability

All data generated or analyzed during this study are included within the article and its supplementary information files, and the supplementary information files are available in the Dataverse repository: https://doi.org/10.7910/DVN/IPFJO6 .

References

Bensley DA, Spero RA (2014) Improving critical thinking skills and meta-cognitive monitoring through direct infusion. Think Skills Creat 12:55–68. https://doi.org/10.1016/j.tsc.2014.02.001


Castle A (2009) Defining and assessing critical thinking skills for student radiographers. Radiography 15(1):70–76. https://doi.org/10.1016/j.radi.2007.10.007

Chen XD (2013) An empirical study on the influence of PBL teaching model on critical thinking ability of non-English majors. J PLA Foreign Lang College 36 (04):68–72


Cohen A (1992) Antecedents of organizational commitment across occupational groups: a meta-analysis. J Organ Behav. https://doi.org/10.1002/job.4030130602

Cooper H (2010) Research synthesis and meta-analysis: a step-by-step approach, 4th edn. Sage, London, England

Cindy HS (2004) Problem-based learning: what and how do students learn? Educ Psychol Rev 51(1):31–39

Duch BJ, Gron SD, Allen DE (2001) The power of problem-based learning: a practical “how to” for teaching undergraduate courses in any discipline. Stylus Educ Sci 2:190–198

Ennis RH (1989) Critical thinking and subject specificity: clarification and needed research. Educ Res 18(3):4–10. https://doi.org/10.3102/0013189x018003004

Facione PA (1990) Critical thinking: a statement of expert consensus for purposes of educational assessment and instruction. Research findings and recommendations. Eric document reproduction service. https://eric.ed.gov/?id=ed315423

Facione PA, Facione NC (1992) The California Critical Thinking Dispositions Inventory (CCTDI) and the CCTDI test manual. California Academic Press, Millbrae, CA

Forawi SA (2016) Standard-based science education and critical thinking. Think Skills Creat 20:52–62. https://doi.org/10.1016/j.tsc.2016.02.005

Halpern DF (2001) Assessing the effectiveness of critical thinking instruction. J Gen Educ 50(4):270–286. https://doi.org/10.2307/27797889

Hu WP, Liu J (2015) Cultivation of pupils’ thinking ability: a five-year follow-up study. Psychol Behav Res 13(05):648–654. https://doi.org/10.3969/j.issn.1672-0628.2015.05.010

Huber K (2016) Does college teach critical thinking? A meta-analysis. Rev Educ Res 86(2):431–468. https://doi.org/10.3102/0034654315605917

Kek MYCA, Huijser H (2011) The power of problem-based learning in developing critical thinking skills: preparing students for tomorrow’s digital futures in today’s classrooms. High Educ Res Dev 30(3):329–341. https://doi.org/10.1080/07294360.2010.501074

Kuncel NR (2011) Measurement and meaning of critical thinking (Research report for the NRC 21st Century Skills Workshop). National Research Council, Washington, DC

Kyndt E, Raes E, Lismont B, Timmers F, Cascallar E, Dochy F (2013) A meta-analysis of the effects of face-to-face cooperative learning. Do recent studies falsify or verify earlier findings? Educ Res Rev 10(2):133–149. https://doi.org/10.1016/j.edurev.2013.02.002

Leng J, Lu XX (2020) Is critical thinking really teachable?—A meta-analysis based on 79 experimental or quasi experimental studies. Open Educ Res 26(06):110–118. https://doi.org/10.13966/j.cnki.kfjyyj.2020.06.011

Liang YZ, Zhu K, Zhao CL (2017) An empirical study on the depth of interaction promoted by collaborative problem solving learning activities. J E-educ Res 38(10):87–92. https://doi.org/10.13811/j.cnki.eer.2017.10.014

Lipsey M, Wilson D (2001) Practical meta-analysis. International Educational and Professional, London, pp. 92–160

Liu Z, Wu W, Jiang Q (2020) A study on the influence of problem based learning on college students’ critical thinking-based on a meta-analysis of 31 studies. Explor High Educ 03:43–49

Morris SB (2008) Estimating effect sizes from pretest-posttest-control group designs. Organ Res Methods 11(2):364–386. https://doi.org/10.1177/1094428106291059


Mulnix JW (2012) Thinking critically about critical thinking. Educ Philos Theory 44(5):464–479. https://doi.org/10.1111/j.1469-5812.2010.00673.x

Naber J, Wyatt TH (2014) The effect of reflective writing interventions on the critical thinking skills and dispositions of baccalaureate nursing students. Nurse Educ Today 34(1):67–72. https://doi.org/10.1016/j.nedt.2013.04.002

National Research Council (2012) Education for life and work: developing transferable knowledge and skills in the 21st century. The National Academies Press, Washington, DC

Niu L, Behar HLS, Garvan CW (2013) Do instructional interventions influence college students’ critical thinking skills? A meta-analysis. Educ Res Rev 9(12):114–128. https://doi.org/10.1016/j.edurev.2012.12.002

Peng ZM, Deng L (2017) Towards the core of education reform: cultivating critical thinking skills as the core of skills in the 21st century. Res Educ Dev 24:57–63. https://doi.org/10.14121/j.cnki.1008-3855.2017.24.011

Reiser BJ (2004) Scaffolding complex learning: the mechanisms of structuring and problematizing student work. J Learn Sci 13(3):273–304. https://doi.org/10.1207/s15327809jls1303_2

Ruggiero VR (2012) The art of thinking: a guide to critical and creative thought, 4th edn. Harper Collins College Publishers, New York

Schellens T, Valcke M (2006) Fostering knowledge construction in university students through asynchronous discussion groups. Comput Educ 46(4):349–370. https://doi.org/10.1016/j.compedu.2004.07.010

Sendag S, Odabasi HF (2009) Effects of an online problem based learning course on content knowledge acquisition and critical thinking skills. Comput Educ 53(1):132–141. https://doi.org/10.1016/j.compedu.2009.01.008

Sison R (2008) Investigating Pair Programming in a Software Engineering Course in an Asian Setting. 2008 15th Asia-Pacific Software Engineering Conference, pp. 325–331. https://doi.org/10.1109/APSEC.2008.61

Simpson E, Courtney M (2002) Critical thinking in nursing education: literature review. Int J Nurs Pract 8(2):89–98

Stewart L, Tierney J, Burdett S (2006) Do systematic reviews based on individual patient data offer a means of circumventing biases associated with trial publications? Publication bias in meta-analysis. John Wiley and Sons Inc, New York, pp. 261–286

Tiwari A, Lai P, So M, Yuen K (2010) A comparison of the effects of problem-based learning and lecturing on the development of students’ critical thinking. Med Educ 40(6):547–554. https://doi.org/10.1111/j.1365-2929.2006.02481.x

Wood D, Bruner JS, Ross G (2006) The role of tutoring in problem solving. J Child Psychol Psychiatry 17(2):89–100. https://doi.org/10.1111/j.1469-7610.1976.tb00381.x

Wei T, Hong S (2022) The meaning and realization of teachable critical thinking. Educ Theory Practice 10:51–57

Xu EW, Wang W, Wang QX (2022) A meta-analysis of the effectiveness of programming teaching in promoting K-12 students’ computational thinking. Educ Inf Technol. https://doi.org/10.1007/s10639-022-11445-2

Yang YC, Newby T, Bill R (2008) Facilitating interactions through structured web-based bulletin boards: a quasi-experimental study on promoting learners’ critical thinking skills. Comput Educ 50(4):1572–1585. https://doi.org/10.1016/j.compedu.2007.04.006

Yore LD, Pimm D, Tuan HL (2007) The literacy component of mathematical and scientific literacy. Int J Sci Math Educ 5(4):559–589. https://doi.org/10.1007/s10763-007-9089-4

Zhang T, Zhang S, Gao QQ, Wang JH (2022) Research on the development of learners’ critical thinking in online peer review. Audio Visual Educ Res 6:53–60. https://doi.org/10.13811/j.cnki.eer.2022.06.08


Acknowledgements

This research was supported by the graduate scientific research and innovation project of Xinjiang Uygur Autonomous Region named “Research on in-depth learning of high school information technology courses for the cultivation of computing thinking” (No. XJ2022G190) and the independent innovation fund project for doctoral students of the College of Educational Science of Xinjiang Normal University named “Research on project-based teaching of high school information technology courses from the perspective of discipline core literacy” (No. XJNUJKYA2003).

Author information

Authors and affiliations

College of Educational Science, Xinjiang Normal University, 830017, Urumqi, Xinjiang, China

Enwei Xu, Wei Wang & Qingxia Wang


Corresponding authors

Correspondence to Enwei Xu or Wei Wang .

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval

This article does not contain any studies with human participants performed by any of the authors.


Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary tables

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Xu, E., Wang, W. & Wang, Q. The effectiveness of collaborative problem solving in promoting students’ critical thinking: A meta-analysis based on empirical literature. Humanit Soc Sci Commun 10 , 16 (2023). https://doi.org/10.1057/s41599-023-01508-1


Received : 07 August 2022

Accepted : 04 January 2023

Published : 11 January 2023

DOI : https://doi.org/10.1057/s41599-023-01508-1




Creativity as a function of problem-solving expertise: posing new problems through investigations

  • Original Paper
  • Open access
  • Published: 22 March 2021
  • Volume 53, pages 891–904 (2021)


  • Haim Elgrably
  • Roza Leikin (ORCID: orcid.org/0000-0002-8036-6736)


Abstract

This study was inspired by the following question: how is mathematical creativity connected to different kinds of expertise in mathematics? Basing our work on arguments about the domain-specific nature of expertise and creativity, we looked at how participants from two groups with two different types of expertise performed in problem-posing-through-investigations (PPI) in a dynamic geometry environment (DGE). The first type of expertise (MO) involved being a candidate or a member of the Israeli International Mathematical Olympiad team. The second type (MM) comprised mathematics majors who excelled in university mathematics. We conducted individual interviews with eight MO participants who were asked to perform PPI in geometry, without previous experience in performing a task of this kind. Eleven MMs tackled the same PPI task during a mathematics test at the end of a 52-h course that integrated PPI. To characterize connections between creativity and expertise, we analyzed participants' performance on the PPI tasks according to proof skills (i.e., auxiliary constructions, the complexity of posed tasks, and correctness of their proofs) and creativity components (i.e., fluency, flexibility and originality of the discovered properties). Our findings demonstrate significant differences between PPI by MO participants and by MM participants as reflected in the more creative performance and more successful proving processes demonstrated by MO participants. We argue that problem posing and problem solving are inseparable when MO experts are engaged in PPI.


1 Introduction

The research presented in this paper was motivated by several observations concerning research associated with mathematical creativity, expertise, problem solving and problem posing and the relationships between them.

The first observation concerns research on expertise in mathematics (as elaborated in the background section). While expertise is commonly addressed as superior performance in a particular domain (e.g., mathematics), in the research literature the notion of mathematical expertise acquires a broad range of meanings, as expressed in different groups of target populations varying from school students who excel to professional mathematicians. Taking this variance into account, we examine differences in creativity and proving skills among participants with rich mathematical backgrounds of two types: (1) MO: problem-solving experts who were candidates or members of the Israel National IMO team; and (2) MM: mathematics majors who excelled in university mathematics courses and also completed a High School Mathematics Teaching Certificate. The participants in these two groups are considered experts with different types of mathematical expertise.

Second, over the past two decades mathematics education researchers have—fortunately—increasingly paid attention to mathematical creativity and creativity-directed activities as major twenty-first century skills. At the same time, there are inconsistent arguments about the connections between mathematical expertise and creativity, and, moreover, empirical studies on such connections are scarce.

Third, in contrast to unconscious (to a large extent) mathematical creation by professional mathematicians, problem posing usually involves producing new problems in response to a requirement to do so. We found that empirical studies that examine connections between problem solving expertise and problem posing performance are rare, and we explore this connection here by employing problem posing through investigation (PPI) tasks.

Furthermore, we base our work on arguments about the domain-dependency of expertise and creativity (Baer 2015 ). We focus on participants from the two groups with two different types of mathematical expertise in order to gain a better understanding of the connection between mathematical creativity and these two types of mathematical expertise.

The PPI—problem posing through investigation—employed in this study is a mathematical activity that combines problem posing and problem solving. PPI provides multiple opportunities for investigations in a dynamic geometry environment (DGE), allowing participants to create auxiliary constructions, measure, search for geometric properties, and conjecture regarding the examination and formulation of new problems, which the participants are then required to solve (Leikin 2014 ; Leikin and Elgrably 2020 ). As such, PPI tasks are open from the start and from the end (Leikin 2019 ), since solvers are encouraged to choose what they examine and how, and the outcomes usually constitute an individual space of posed problems, which are based on the discovered properties. These collections differ among different individuals in terms of the number, types and complexity of the posed problems. A PPI task is completed only when all the posed problems are solved by the participants; they are free to choose how to prove any discovered property. The openness determines the complexity of PPI, since an investigation can lead in unpredicted directions, conjectures can appear to be incorrect, or solving some posed problems can require knowledge and skills at a level that surpasses the level of problem solving expertise of those who posed the problems. At the same time, the openness of the PPI tasks and their complexity determines the power of these tasks as tools for the investigation of creativity and problem-solving expertise.

2 Background

2.1 Expertise in mathematics and beyond

Ericsson and Lehmann ( 1996 ) defined expert performance as “consistently superior performance on a specified set of representative tasks for a domain” (p. 277) and stressed that “it is generally assumed that outstanding human achievements (i.e., expertise) reflect some varying balance between training and experience (nurture), on one hand, and innate differences in capacities and talents (nature) on the other” (p. 274). There is a consensus that expert knowledge differs from novice knowledge in its organization, as well as its extent (Glaser 1987 ; Lesgold 1984 ). Experts also rely on more 'abstract' or general structures (Voss et al. 1983 ). Hoffman ( 1998 ) argued that experts differ from non-experts in the reasoning operations or strategies they apply, and their ability to apply these operations and strategies in different orders and with different emphases.

Experts in mathematics have the ability to focus attention on appropriate features of problems, and have more cognizance of their own thought processes and of how others may think (Carlson and Bloom 2005 ; Lester 1994 ). Researchers characterized experts’ performance as processing flexibility linked to the ability to form multiple alternative interpretations or representations of problems (Hoffman 1998 ; Greer 2009 ; Star and Newton 2009 ). In contrast to an expert, a novice's system of representations of a mathematical concept may be deficient in number and in connections that form an adequate network of knowledge (Lester 1994 ). Mathematical knowledge and skills in experts are developed through deliberate practice and are characterized by robust concept images, procedural fluency and strategic competence in problem-solving, high levels of abstraction, and mathematical flexibility, expressed in the number of ways in which experts can tackle a problem (Schoenfeld 1985 ). Experts differ from novices in the problem-solving strategies they employ (Schoenfeld 1992 ) and in their ability to categorize problems according to solution principles and choose the most efficient ways of solving a particular type of problem (Sweller, Mawer and Ward 1983 ). Moreover, according to Duncker ( 1945 ) proposing an hypothesis is an intrinsic part of the problem-solving process for mathematical experts.

Beginning with Poincare’s ( 1908/1952 ) work, researchers’ studies of mathematical expertise have often been based on retrospective analyses of their own mathematical activities, or analysis of the mathematical performance of highly performing students or colleagues (Berman 2009 ; Schoenfeld 1985 ; Wilkerson-Jerde and Wilensky 2011 ). Studies on mathematical expertise are often linked to studies on mathematical giftedness, which analyze exceptional mathematical performance and connect mathematical giftedness to the work of mathematicians (Leikin 2019 ; Sriraman 2005 ; Usiskin 2000 ). As such, studies on mathematical expertise and mathematical giftedness are greatly intertwined (Leikin 2019 ), as reflected in the research populations of these studies, which include mathematical professors and graduate students (Wilkerson-Jerde and Wilensky 2011 ), participants in mathematical Olympiads (Koichu 2010 ; Koichu and Berman 2005 ; Reznik 1994 ), students who passed SAT-M tests with high scores, or participants in summer mathematics camps, or simply students with extremely high mathematical scores in school, or mathematical majors (Lubinski and Benbow 2006 ).

In contrast to studies that describe and analyze mathematical performance of mathematically advanced individuals alone, in this study we employed a differentiated view of mathematical expertise. We focused our study on two groups of participants with different types of mathematical expertise (MO and MM participants). To the best of our knowledge, no previous study has performed a comparison of mathematical creativity in groups of participants with different types and levels of mathematical expertise.

2.2 Creativity in mathematical problem solving and problem posing

In the vastly changing world of the twenty-first century, the importance of creativity is difficult to overestimate. Developing creativity in general, and mathematical creativity in particular, is extremely important nowadays, both from a personal point of view, to strengthen people's ability to adapt to new and challenging situations, which is essential for the well-being of each individual, and as a basic mechanism of societal, technological, and scientific development (Amado, Carreira and Jones 2018; Leikin and Pitta-Pantazi 2013; Leikin and Sriraman 2016; Sriraman and Hwa 2010).

Torrance (1974) considered creativity to be an effective combination of divergent and convergent thinking. Operationally, this view led to a definition of creativity based on four related components, namely fluency, flexibility, novelty, and elaboration (Torrance 1974). Divergent thinking includes finding different solutions and interpretations, applying different techniques, and thinking in original and unusual ways, with creativity as one of its learning outcomes. For convergent thinking, by contrast, knowledge is of particular importance as a source of ideas, pathways to solutions, and criteria of effectiveness and novelty.

Providing a precise and broadly accepted definition of mathematical creativity is extremely difficult, probably impossible (Mann 2006 ; Sriraman 2005 ). Sternberg and Lubart ( 2000 ) drew a connection between creative performance and the ability to produce original and useful products, and, moreover, there is consensus among researchers that originality is the major component of creativity.

Mathematical creativity in school mathematics is usually associated with problem solving or problem posing. Problem posing and problem solving can be employed for the development of mathematical creativity (Matsko and Thomas 2015; Levav-Waynberg and Leikin 2012 ). Creative problem solving in mathematics is associated with mental flexibility (Silver 1997 ; Star and Newton 2009 ) and with mathematical insight (Ervynck 1991 ; Krutetskii 1976 ; Leikin 2009 ). Following Torrance ( 1974 ), Silver ( 1997 ) suggested developing creativity through problem solving as follows: Fluency is developed by generating multiple mathematical ideas, generating multiple answers to a mathematical problem (when such exist), and exploring mathematical situations. Flexibility is advanced by generating new mathematical solutions when at least one has already been produced. Originality is advanced by exploring many solutions to a mathematical problem and generating a new one. Leikin ( 2009 ) suggested a model for the evaluation of creativity using multiple solution tasks (MSTs). This model suggests evaluation of creativity with the three abovementioned categories—fluency, flexibility and originality—through analysis of similarities and differences between the multiple problem-solving strategies used. The PPI tasks, as described in the introduction section, are an instance of MSTs, thus in the current study we utilized Leikin’s ( 2009 ) model with regard to the variability of problems posed by the study participants.

2.3 Relationship between creativity and expertise

The relationship between creativity and expertise is an intriguing research topic, and one can find inconsistencies between researchers' arguments about this relationship. For example, the publications reviewed above in this paper do not connect expertise and creativity. This is visible, for instance, in the word cloud of the 60 most frequent words in Hoffman's (1998) chapter "How can expertise be defined? Implications of research from cognitive psychology" (Fig. 1).

Figure 1: Studies on experts do not mention creativity.

Baer ( 2015 ) demonstrated that creativity and expertise are related, but are very different things. He argues that whereas expertise does not usually require creativity, creativity may require a certain level of expertise. In contrast, the bulk of the research literature on mathematical expertise at high level considers creativity to be an integral component of mathematical expertise in mathematically gifted individuals. Poincare ( 1908 /1952) and Hadamard ( 1945 ) characterized the work of professional mathematicians as a creative activity, based on introspective analysis of their and their colleagues’ activity. Sriraman ( 2005 ) suggested a theoretical model of connections between creativity and expertise that included 8 levels of expertise according to the creativity component (introduced by Usiskin 2000 ), arguing that “in the professional realm, mathematical creativity implies mathematical giftedness, but the reverse is not necessarily true” (Sriraman 2005 , p. 21). Findings about domain dependency of expertise and creativity (Baer 2015 ) are an additional factor that motivated our study. “People may be expert, and people may be creative, in many domains, or they may be expert, or creative, in few domains or none at all, and one cannot simply transfer expertise, or creativity, from one domain to another, unrelated domain” (Baer 2015 , p. 165). In our study we considered whether and how MO and MM types of mathematical expertise are expressed in PPI.

2.4 Problem posing and problem solving

In the past two decades mathematical investigations have been acknowledged as powerful tasks for the teaching and learning of mathematics (Leikin 2016; Ponte 2007; Ponte and Henriques 2013; Silver 1994; Yerushalmy et al. 1990). Problem posing is a broad concept, usually related to the creation of a new problem in response to a requirement to create a problem or a set of problems. Mathematics educators categorize problem posing and investigation problems as 'open problems' (Pehkonen 1995; Silver 1994, 1997). Researchers have also explored problem posing through problem transformation, focusing on systematic transformations of a given problem that vary its goals and givens (Brown and Walter 1993). Silver et al. (1996) and Hoehn (1993) drew attention to the "symmetry" transformation of a problem, which creates a problem in which the givens and the goals have been swapped. Silver et al. (1996) also described the "goal manipulation" strategy, in which the givens remain and only the goal is changed. Leikin and Grossman (2013) demonstrated that "What if yes?" problem posing strategies are more effective for investigations and problem posing in DGE when conditions are added to the givens instead of removed. The PPI tasks employed in this study allow manipulation of both givens and goals, and this activity is supported by the use of DGE, which is naturally associated with investigations in geometry (Yerushalmy et al. 1990).

Complex problem solving by experts, including Olympiad participants, includes problem posing; “problem formulation and problem solution go hand in hand, each eliciting the other as the investigation progresses” (Davis 1985 , p 23). Duncker ( 1945 ) observed that problem solving by mathematical experts consists of successive re-formulations of an initial problem (which is a type of problem posing). Koichu ( 2010 ) analyzed problem posing in the context of teaching for advanced problem solving. However, the way in which experts with different types of mathematical expertise perform problem-posing tasks has not been explored systematically.

Reznik ( 1994 ) described the Putnam contest as designed to test originality as well as technical competence in problem solving. He believed that success in Olympiads and in studying mathematics at the university level are related, but not necessarily equivalent, thus not all mathematics majors can solve Olympiad problems. Sriraman ( 2005 ) maintained that in the hierarchy of mathematical giftedness, majoring in mathematics stands at a lower level than does participation in mathematical Olympiads. Thus, in our study, the two groups MO and MM were chosen in order to shed light on the relationships between problem-solving expertise of different types and levels (MO and MM), and creativity linked to PPI.

3 The study

3.1 Problem posing through investigations

PPI is a complex mathematical activity that includes the following (Leikin and Elgrably 2020):

  • Investigating a geometrical figure (from a proof problem) in a DGE (experimenting, conjecturing and testing), in order to find several [at least 2] non-trivial properties of the given figure and of related figures constructed using auxiliary constructions. A non-trivial property is defined as one whose proof includes at least 3 stages (Fig. 2).

  • Formulating several [at least 2] new proof problems based on the investigations performed, and solving (proving) them.

In what follows we use the terms ‘posed problem’ and ‘discovered properties’ interchangeably since the posed problems require proving the discovered properties. Figure  2 depicts the PPI task used in the study presented in this paper.

Figure 2: PPI task used in this research.

Task 1 was formulated using a proof problem from a 10th grade geometry textbook. The problem required students to prove that \(BE/EA = 2\) (Fig. 2). The proof problem is simple for both groups of participants, allowing a focus on their problem-posing performance. To control for the level of participants' expertise, we examined their success in proving the posed problems.

3.2 The study goals

The major goal of the study presented here was to examine mathematical creativity as a function of mathematical expertise. The examination was performed with regard to proof skills (auxiliary constructions performed in the course of PPI, correctness of proof of the posed problem and complexity of the posed problem) and creativity components (fluency, flexibility, originality and creativity). To achieve the goal, we asked the following research questions:

QA. What are the differences between PPI by MO and MM participants from the point of view of proof skills and creativity components?

QB. What are the mutual relationships between proof skills and creativity components of PPI by MO and MM participants and how do these relationships differ between the MO and MM students?

3.3 Participants and data collection

Two groups of participants took part in this study, namely the MO group and the MM group. The following characteristics led us to consider the groups as having different types of mathematics expertise.

The MO group included 8 participants who were candidates for, or members of, the Israel National Olympiad team—problem solving experts in this study. All these participants passed the problem-solving training for the IMO (International Mathematical Olympiads). IMO is the most prestigious mathematics competition nowadays, and includes problems from classical content areas and those that are not usually studied in school or university (Koichu and Andžāns 2009 ). The training is directed at the development of the highest level of problem-solving skills and strategies. The 8 participants volunteered to participate in our study upon our request.

The MM group in this study included 11 excelling mathematics majors who had studied more than 1000 h of mathematics at university. These 11 participants were chosen from a group of 68 participants in a wider study because, in contrast to the other 57 participants, they received scores above 90 in courses such as calculus, advanced calculus, linear algebra and analytical geometry. In addition to holding a BSc degree in mathematics, these participants completed a 52-h geometry course directed at the development of problem-solving (proving) skills in geometry through the systematic employment of PPI. This course included PPI linked to Menelaus' theorem, Ceva's theorem, the nine-point circle and the Euler line, so that the participants discovered and proved the theorems and were also asked to use them when solving other problems during the course (Leikin and Elgrably 2020).

Participants from the MM group were asked to solve Task 1 during the written test conducted as the final examination of the course. MMs were given 90 min to solve this task. They performed PPI in dynamic geometry and submitted their investigation outcomes accompanied by GeoGebra files that demonstrated the entire sequence of constructions and discoveries performed in the course of their investigations. Additionally, MM participants submitted written documents that included problems posed by the participants and their proofs.

Since MO participants did not have training in solving PPI tasks, they first received a preliminary, very short introduction to PPI tasks and the ways of working with DGE, and then were asked to solve Task 1 during individual interviews. The interviews were recorded using Camtasia software that allowed analysis of each action during the investigation process and formulation of the posed problems. Participants from both study groups were engaged in solving the PPI task for about 90 min. This form of data collection allowed us to perform identical analyses of the PPI outcomes produced by the participants from the two groups, as explained in the next section.

3.4 Data analysis

We utilized the decimal-based scoring scheme introduced in Leikin ( 2009 ) for all of the criteria examined in this study. To examine the relationship between creativity and expertise, we evaluated each individual space of posed problems with respect to creativity components and proof skills. Proof skills included the following: (a) auxiliary constructions performed by the participants to discover a property, (b) correctness of proofs of the discovered properties, and (c) complexity of the posed problem. Creativity components included the following: (d) fluency, defined as the number of discovered properties, (e) flexibility, defined as the number of discovered properties of different kinds, (f) originality, defined as the newness and rareness of the discovered properties. An individual space of posed problems is made up of all of the problems the person posed based on the discovered properties. We evaluated each of the individual spaces of posed problems as explained in Table 1 .

We open the findings section with a description of the interview with Dave—the most creative MO participant in our study—and explain the ways in which his performance on PPI tasks was scored. Then, in order to answer the research questions, we report our comparison of the individual spaces of problems posed by the participants from the MO group and those of the participants from the MM group with respect to the creativity components and proof skills. We also report the analysis we performed of the collective spaces of the problems posed by the participants of the two research groups.

4 Findings

4.1 Example: interview with MO expert

Dave (pseudonym) was 17 at the time of the interview and had been studying at the Technion (Israel Institute of Technology) since the 9th grade (Spring 2014). Dave took part in the International Mathematical Olympiad (IMO) during 2014–2017, winning four medals: three bronze and one silver. Dave exhibited the highest performance on the PPI task across both the MO and MM groups. Figure 3 presents excerpts from the interview with Dave.

Figure 3: Excerpts from the interview with Dave (the highest performer in the study).

Before analyzing the problems posed by Dave, recall that to keep the interviews flowing, MO participants were not asked to present a complete formulation of proof problems (given X, prove Y), but only to find properties that could be proven (Y).

Dave not only discovered multiple properties and proved them, he also refuted a number of conjectured properties, either by construction and dragging in DGE (e.g., the points \(N\), \(L\), \(H\) are not on one line) or by formal proof (\(CI=DI\) was shown to be mistaken by calculating the power of the point I). In the course of examining conjectures or proving the discovered properties, Dave formulated additional properties that he sometimes did not find interesting enough to present explicitly as posed problems, or did not recognize as discoveries at the time. He also used DGE to test the conjectures he raised while proving, or trying to prove, properties that seemed correct based on observation of the figure in DGE. After performing a number of auxiliary constructions he grasped the power of DGE for discovering properties and asked whether he was allowed 'to build whatever he wants to'. From this moment of the interview onward he performed a variety of constructions while tackling the PPI task.

Auxiliary constructions: Problems D1, D2 and D3 received a score of 0 because discovering the properties required at most one auxiliary construction. For example, to make discovery D1, that ADBF is a rhombus, Dave had to perform one auxiliary construction, namely drawing a line parallel to BD through point A, that is, the segment AF. Similarly, property D8 was discovered without any auxiliary construction and so also received a score of 0. On the other hand, property D4 was discovered using the construction of two auxiliary lines in the figure, namely DF and AF, and therefore received a score of 1. To receive a score of 10, more than 3 auxiliary constructions are required within the figure; a good example is the discovery of property D5, which involved creating the points G and H and drawing a circle inscribing the figure.

Complexity of the posed problem: Proving D1 was relatively simple. AF ∥ BD is an auxiliary construction; (1) AD ∥ BC, since if alternate interior angles are equal (∢BAD = ∢ABC = 60°) the lines are parallel; (2) AFBD is then a parallelogram (by definition); (3) AD = BD, as adjacent sides in the parallelogram, so AFBD is a rhombus. The proof includes 3 stages, the use of 2 definitions, and the equality of alternate interior angles as a sufficient condition for parallel lines; thus the complexity of D1 was scored with 1.

The proof for property D4 was based on the proof of D1 plus additional stages: (1) FH = HD, since the diagonals bisect each other, and therefore AH is a median line; (2) the diagonals in a rhombus are perpendicular, therefore ∢BHF = 90°; (3) FD ∥ AC, since if corresponding angles are equal (∢BHF = ∢BAC = 90°) the lines are parallel; (4) ACFD is a parallelogram by definition; (5) the diagonals in a parallelogram bisect each other, therefore AG = FG, so DG is a median line and E is the intersection of the medians of the triangle. The proof included 4 additional stages beyond the proof of D1.

Proof correctness: Dave proved all the discovered properties except the last, which he did not prove because the interview ended. Each of his proofs was scored with 10.

Fluency: Overall, Dave discovered and explicitly formulated 12 properties (D1–D12, see Fig. 3), so his fluency score was 12.

Flexibility: The properties (see Fig. 3 and Table 2) were of 6 different types: D1, D3 and D10, special quadrilaterals; D2, midpoint of a segment; D4, a point is a triangle's center of mass; D5 and D7, four points on a circle; D8, the ratio of two segments; D6, D9 and D12, three points on a straight line; and D11, a line tangent to a circle. Properties D1, D2, D4, D5, D6 and D11 were scored with 10 points for flexibility, as these properties were of different types. D8 was scored with 0.1 points for flexibility since it expressed the same property as D2. D3 was scored with 1 for flexibility since D1 was also a parallelogram (rhombus); however, D10 was scored with 10 since it was a special quadrilateral of a different type than D1 and D3, and its discovery required many complex auxiliary constructions. D7, D9 and D12 repeated properties discovered in different locations of the figure following a series of auxiliary constructions, and each received a score of 1 for flexibility.

Originality: The originality of the problems was evaluated based on the frequency of the property, determined by the number of participants who discovered it. The frequency was calculated over the problems posed by the participants from the MO group and the big MM group. Each of the properties 'the quadrilateral is a rhombus', 'the quadrilateral is an isosceles trapezoid', and '4 points are on a circle' appeared in the spaces of posed problems of 1 to 5 performers. Thus D1, D4, D5, D7, D10 and D11 were scored with 10 points for originality. On the other hand, more than 97% of participants from the big study group posed problems that included a ratio of segments and areas, so D2 and D8 each received 0.1 for originality.

Creativity: The creativity of each posed problem within an individual space of posed problems was evaluated as the product of the flexibility and originality of the associated discovered property (Leikin 2009).
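A minimal sketch of this scoring rule, in Python, is given below. The 15%/40% frequency thresholds used to map rarity onto the 10/1/0.1 originality scores follow the general scheme of Leikin (2009) and are an assumption here; the paper reports the resulting scores directly rather than the thresholds.

```python
# A minimal sketch of the creativity scoring described above: per posed
# problem, creativity = flexibility * originality, summed over the individual
# space of posed problems. The 15%/40% frequency thresholds are assumed from
# Leikin's (2009) general scheme, not stated explicitly in this paper.
def originality(frequency):
    """Map the share of participants who discovered a property to 10/1/0.1."""
    if frequency < 0.15:
        return 10.0
    if frequency < 0.40:
        return 1.0
    return 0.1

def creativity(problems):
    """problems: (flexibility_score, discovery_frequency) per posed problem."""
    return sum(flx * originality(freq) for flx, freq in problems)

# Dave's D1 (new type, rare: 10 * 10) plus D8 (repeat of D2, very common):
print(creativity([(10.0, 0.05), (0.1, 0.97)]))   # 100.01
```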

4.2 Comparing problems posed by MO participants and those by MM participants

4.2.1 Individual spaces of posed problems of Dave and Jerry

Table 2 below depicts, in condensed form, the individual space of problems posed by Dave (see Sect. 4.1), including a summary of the auxiliary constructions, all the discovered properties, and their evaluation. Table 3 presents the space of problems posed by Jerry, an MM participant with the highest creativity score among the MM participants.

Jerry's space of posed problems received the highest creativity score among the 11 excelling MM participants. He posed 7 problems, fewer than Dave's 12. Dave's space of posed problems included 7 problems with complex properties (scored with 10), whereas Jerry's included 4. Dave proved 11 of the 12 problems he posed, whereas Jerry proved 5 of 7. Dave's flexibility score was 74.1, whereas Jerry's was 42.1. Dave made 6 original discoveries, and his originality score was 64.2, while Jerry's was 51.1. Jerry's original discoveries included the following properties: a quadrilateral is a parallelogram, the ratio of the areas of two quadrilaterals equals 4.5, two triangles are similar, and a circle is tangent to a line. Note that ratios of areas and ratios of segments were commonly examined by the participants in the MM group. As a result, all the characteristics of the spaces of posed problems were higher for Dave than for Jerry.

4.2.2 Overall differences between the spaces of the problems posed by MM and MO participants

Table 4 displays the number of problems in the collective spaces of problems posed by the MO and MM groups that received the highest scores for the different examined criteria. Figure 4 depicts boxplots representing the range, mean and median for all the examined criteria.

Figure 4: Boxplots of scores assigned to the posed problems in the two groups.

Table 4 and Fig. 4 demonstrate that the 8 MO participants produced more than twice as many problems through investigations as did the 11 MM participants. The mean number of problems posed by MO participants was 3 times that of participants from the MM group. We compared the spaces of problems posed by the participants from the two groups according to the highest scores for all the examined criteria. Overall, problems posed by MO participants were based on a larger number of complex auxiliary constructions, included more complex discovered properties, and were proved in 97% of cases as compared with 59%. The properties discovered by MO participants demonstrated more flexibility and were more original. On average, MO participants posed 3 times more problems that received 100 for creativity than did MM participants.

Figure 4 illustrates these differences for most of the examined criteria. The highest score assigned to the problems posed by MM participants was lower than the lowest score attained in the MO group for all the participants except Jerry. This result held for auxiliary constructions (44 vs. 57), proof correctness (70 vs. 110), fluency (10 vs. 12), flexibility (51 vs. 58), originality (22.2 vs. 33.83) and creativity (201 vs. 275). Jerry's scores on originality and creativity (Table 3) were within the range of scores of MO students. Comparing median scores for all the examined proof skills and creativity components showed significant differences in the quality of discoveries: median scores were more than 4.9 times higher in the MO group than in the MM group for auxiliary constructions, 5.4 times higher for proofs, and 2.9 times higher for complexity of discoveries. The ratio of median scores for the creativity components was 2.8 for fluency, 4 for flexibility, 3.6 for originality and 3.2 for creativity. A Mann–Whitney test demonstrated that the differences in the posed problems were significant for all the proof skills and creativity components.
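To illustrate the kind of test reported here, the sketch below runs a two-sided Mann–Whitney U test on hypothetical per-participant creativity scores; the study's raw per-participant scores are not reproduced in the paper, so the numbers are placeholders only.

```python
# An illustration of the reported group comparison: a two-sided Mann-Whitney
# U test on hypothetical per-participant creativity scores.
from scipy.stats import mannwhitneyu

mo = [275, 252, 240, 231, 222, 215, 208, 204]          # hypothetical, n = 8
mm = [201, 95, 88, 74, 66, 60, 52, 47, 40, 35, 30]     # hypothetical, n = 11

u, p = mannwhitneyu(mo, mm, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}")
```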

4.2.3 Relationships between different creativity components and proof skills linked to PPI within the groups of MO and MM participants

An additional comparison between the PPI performed by MO and MM participants focused on correlations between the associated proof skills and the creativity components, separately for the MO and MM groups. A Spearman correlation test was applied to all the proof-related and creativity scores within each study group. Consistent with the findings of our previous studies (Leikin and Elgrably 2020; Levav-Waynberg and Leikin 2012), in both groups of participants the correlation between creativity and originality was significant (rs = 0.881, p < 0.01 in the MO group; rs = 0.991, p < 0.01 in the MM group). This correlation confirms the validity of the model suggested for the evaluation of creativity linked to PPI.
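The corresponding Spearman computation can be sketched as follows, again with hypothetical paired scores standing in for the study data.

```python
# A sketch of the within-group Spearman correlation, assuming hypothetical
# paired scores (creativity vs. originality) for one group of participants.
from scipy.stats import spearmanr

creativity_scores  = [275, 240, 231, 210, 205, 198, 180, 150]   # hypothetical
originality_scores = [64, 58, 50, 47, 41, 40, 33, 25]           # hypothetical

rs, p = spearmanr(creativity_scores, originality_scores)
print(f"rs = {rs:.3f}, p = {p:.4f}")   # perfectly monotone toy data give rs = 1.0
```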

Based on the initial analysis of the individual and collective spaces of the problems posed by the two groups of participants, and based on our previous study with a big MM group (Leikin and Elgrably 2020), we hypothesised that the auxiliary constructions performed by the participants in the course of PPI led to more complex properties and a more flexible discovery process. To our surprise, the complexity and number of auxiliary constructions performed (as reflected in the auxiliary constructions score) did not correlate significantly either with the complexity of the posed problems or with the creativity-related criteria linked to PPI.

For the problems posed by the MO participants we found a significant correlation between fluency and flexibility (rs = 0.862, p < 0.01). This correlation demonstrates that a larger number of posed problems led to a larger number of problems of different types posed by MO participants, and it supports our observations regarding the MO students' inclination to find 'interesting' discoveries, as was evident in the interview with Dave. This connection between fluency and flexibility of the PPI process was specific to the MO participants; the correlation did not appear to be significant in the MM group. Interestingly, both fluency and flexibility of PPI correlated significantly with proof correctness in MO participants only (rs = 0.970, p < 0.01 for fluency and proof; rs = 0.905, p < 0.01 for flexibility and proof). This supports our observation that, as in the case of Dave's PPI, many of the properties discovered by MO participants were discovered in the course of searching for proofs of earlier discovered properties, and that PPI by MO participants constituted chains of proofs and discoveries supported by DGE.

Flexibility of PPI in the MM group correlated significantly with both originality and creativity of the PPI (rs = 0.900, p < 0.01 for flexibility and originality; rs = 0.945, p < 0.01 for flexibility and creativity). MM participants' ability to pose a greater variety of problems was related to their success in posing original problems. Surprisingly, these correlations did not appear to be significant in the MO group. We suggest that the proof skills that characterised MO mathematical expertise led to their flexibility, while the posing of original problems was rooted in their geometrical curiosity, expressed in an inclination to find interesting properties.

5 Conclusions, discussion and some additional facts that explain our findings

The goal of the study presented in this paper was to examine relationships between creativity and expertise in mathematics in two groups of participants with different types of mathematical expertise. The first group (MO) included 8 candidates or members of the Israel National Olympiad team. MOs were experts in mathematical problem solving at a high level, including solving complex geometry problems; seven of them did not study university mathematics before or during the study. The second group (MM) included mathematics majors who excelled in their mathematics courses during their studies for a BSc degree in mathematics. They succeeded in solving different kinds of problems at an advanced level but were not experts in solving complex mathematical problems at the Olympiad level.

The study demonstrates significant differences between the two kinds of expertise in mathematics. We found that problem-solving expertise at the Olympiad (MO) level significantly influences the quality of PPI as reflected in proof skills and creativity components. Unfortunately, we found again (Leikin and Elgrably 2020) that university mathematics courses do not develop creative mathematical abilities and skills. The MO participants performed PPI significantly better than MM participants: they were more fluent, flexible and original, and produced more complex problems with more complex auxiliary constructions. The lowest scores on almost all the examined criteria in the MO group were higher than the highest scores achieved by MM participants on PPI tasks. This result held in spite of the fact that MMs completed university degrees in mathematics, excelled in their mathematics courses, and took a geometry course with a specific focus on PPI.

One possible explanation, that expert knowledge is the reward of roughly ten years of concentrated effort, does not apply well to our findings, since both groups invested time and effort in studying mathematics. We assume that the difference is related less to majoring in mathematics (Sriraman 2005) than to the type of training for Olympiads (Koichu and Andžāns 2009) and to participation in international competitions as an established indicator of expertise and talent (Bloom 1985).

We found that the high level of mathematical expertise of MO participants was reflected in the significant correlation between proof skills and creativity skills. We demonstrated clearly, both through the analysis of the interview example and by means of the correlation analysis, that problem posing and proving by MOs were inseparable. These findings are in accord with Duncker's (1945) position that raising a hypothesis is an intrinsic part of the problem-solving process in mathematical experts. According to Duncker, problem solving by experts involves deep understanding of available data, seeking information to test alternatives, and producing a judgment. The MOs in our study tended to approach PPI as a problem-solving task, seeking alternative properties that were more interesting to them. They used DGE mostly to test their hypotheses about additional properties, alongside searching for properties using dynamic geometry. The auxiliary constructions that they performed were conscious and goal-directed; we suggest that this behavior is reflected in the absence of correlations between the auxiliary constructions performed by MOs and the other examined criteria. In addition, since they approached PPI similarly to proof problems, and based their hypotheses about new properties on their previous experience in solving mainly proof problems, high correlations between proof correctness, fluency and flexibility were found. An additional explanation for our findings can be found in Hoffman's (1998) argument that expert performance is characterized by flexible reasoning linked to the ability to form multiple alternative interpretations or representations of problems, and an increased ability to revise old strategies and create new ones as problem-solving proceeds (Shanteau and Phelps 1977). Most of the MO participants searched for more original properties based on their inner curiosity.

Note here that a major study limitation is the different formats (a test versus individual interviews) in which the task was employed with the two groups of participants. Nonetheless, both formats included solving a PPI task in the same dynamic geometry environment and tracking the auxiliary constructions performed and the problems posed by the participants. These data allowed us to conduct identical analyses of the PPI outcomes produced by the participants from the two groups. In contrast to the individual interviews performed with MO participants, the test conducted with MM participants did not record PPI strategies; thus a comparative analysis of the PPI strategies used by the participants from these two groups is a subject for further investigation.

References

Amado, N., Carreira, S., & Jones, K. (Eds.). (2018). Broadening the scope of research on mathematical problem solving: A focus on technology, creativity and affect. Cham: Springer.

Baer, J. (2015). The importance of domain-specific expertise in creativity. Roeper Review, 37, 165–178.

Berman, A. (2009). The pleasure of teaching the gifted and the honour of learning from them. In R. Leikin, A. Berman, & B. Koichu (Eds.), Creativity in mathematics and the education of gifted students (pp. 3–10). Rotterdam: Sense Publishers.

Bloom, B. (1985). Developing talent in young people. New York: Ballantine.

Brown, S., & Walter, M. (1993). Problem posing: Reflections and applications. Hillsdale: Lawrence Erlbaum.

Carlson, M., & Bloom, I. (2005). The cyclic nature of problem solving: An emergent multidimensional problem-solving framework. Educational Studies in Mathematics, 58, 45–75.

Davis, P. J. (1985). What do I know? A study of mathematical self-awareness. College Mathematics Journal, 16, 22–41.

Duncker, K. (1945). On problem-solving (L. S. Lees, Trans.). Psychological Monographs, 58(5), 1–113.

Ericsson, K. A., & Lehmann, A. C. (1996). Expert and exceptional performance: Evidence of maximal adaptation to task constraints. Annual Review of Psychology, 47(1), 273–305.

Ervynck, G. (1991). Mathematical creativity. In D. Tall (Ed.), Advanced mathematical thinking (pp. 42–53). Dordrecht: Kluwer.

Glaser, R. (1987). Thoughts on expertise. In C. Schooler & W. Schaie (Eds.), Cognitive functioning and social structure over the lifecourse (pp. 81–94). Norwood, NJ: Ablex.

Greer, B. (2009). Representational flexibility and mathematical expertise. ZDM-The International Journal on Mathematics Education, 41(5), 697–702.

Hadamard, J. (1945). The psychology of invention in the mathematical field. Princeton, NJ: Princeton University Press.

Hoehn, L. (1993). Problem posing in geometry. In S. Brown & M. Walter (Eds.), Problem posing: Reflections and applications (pp. 281–288). Hillsdale: Lawrence Erlbaum.

Hoffman, R. R. (1998). How can expertise be defined? Implications of research from cognitive psychology. In R. Williams, W. Faulkner, & J. Fleck (Eds.), Exploring expertise. London: Macmillan.

Koichu, B. (2010). On the relationships between (relatively) advanced mathematical knowledge and (relatively) advanced problem-solving behaviours. International Journal of Mathematical Education in Science and Technology, 41(2), 257–275.

Koichu, B., & Andžāns, A. (2009). Mathematical creativity and giftedness in out-of-school activities. In R. Leikin, A. Berman, & B. Koichu (Eds.), Creativity in mathematics and education of gifted students (pp. 285–308). Rotterdam: Sense Publishers.

Koichu, B., & Berman, A. (2005). When do gifted high school students use geometry to solve geometry problems? Journal of Secondary Gifted Education, 16(4), 168–179.

Krutetskii, V. A. (1976). The psychology of mathematical abilities in school children. Chicago: The University of Chicago Press.

Leikin, R. (2009). Exploring mathematical creativity using multiple solution tasks. In R. Leikin, A. Berman, & B. Koichu (Eds.), Creativity in mathematics and the education of gifted students (Ch. 9, pp. 129–145). Rotterdam: Sense Publishers.

Leikin, R. (2014). Challenging mathematics with multiple solution tasks and mathematical investigations in geometry. In Y. Li, E. A. Silver, & S. Li (Eds.), Transforming mathematics instruction: Multiple approaches and practices (pp. 59–80). Dordrecht: Springer.

Leikin, R. (2015). Problem posing for and through investigations in a dynamic geometry environment. In F. M. Singer, N. Ellerton, & J. Cai (Eds.), Problem posing: From research to effective practice (pp. 373–391). Dordrecht: Springer.

Leikin, R. (2016). Interplay between creativity and expertise in teaching and learning of mathematics. In C. Csíkos, A. Rausch, & J. Szitányi (Eds.), Proceedings of the 40th conference of the international group for the psychology of mathematics education (Vol. 1, pp. 19–34). Szeged: PME.

Leikin, R. (2019). Giftedness and high ability in mathematics. In S. Lerman (Ed.), Encyclopedia of mathematics education. Springer. Electronic version.

Leikin, R., & Elgrably, H. (2020). Problem posing through investigations for the development and evaluation of proof skills and creativity skills of prospective high school mathematics teachers. International Journal of Educational Research. https://doi.org/10.1016/j.ijer.2019.04.002

Leikin, R., & Grossman, D. (2013). Teachers modify geometry problems: From proof to investigation. Educational Studies in Mathematics, 82(3), 515–531.

Leikin, R., & Pitta-Pantazi, D. (Eds.). (2013). Creativity and mathematics education. ZDM-The International Journal on Mathematics Education, Special issue, 45(2).

Leikin, R., & Sriraman, B. (Eds.). (2016). Creativity and giftedness: Interdisciplinary perspectives from mathematics and beyond (pp. 1–3). Cham: Springer.

Lesgold, A. M. (1984). Acquiring expertise. In J. R. Anderson & S. M. Kosslyn (Eds.), Tutorials in learning and memory: Essays in honor of Gordon Bower (pp. 31–60). San Francisco, CA: W. H. Freeman.

Lester, F. K., Jr. (1994). Musings about mathematical problem-solving research: 1970–1994. Journal for Research in Mathematics Education, 25(6), 660–675.

Levav-Waynberg, A., & Leikin, R. (2012). The role of multiple solution tasks in developing knowledge and creativity in geometry. Journal of Mathematical Behavior, 31, 73–90.

Lubinski, D., & Benbow, C. P. (2006). Study of mathematically precocious youth after 35 years: Uncovering antecedents for the development of math-science expertise. Perspectives on Psychological Science, 1(4), 316–345.

Mann, E. L. (2006). Creativity: The essence of mathematics. Journal for the Education of the Gifted, 30, 236–262.

Pehkonen, E. (1995). Introduction: Use of open-ended problems. ZDM Mathematics Education, 27(2), 55–57.

Poincaré, H. (1908/1952). Science and method. New York: Dover Publications.

Ponte, J. P. (2007). Investigations and explorations in the mathematics classroom. ZDM-The International Journal on Mathematics Education, 39(5–6), 419–430.

Ponte, J. P., & Henriques, A. (2013). Problem posing based on investigation activities by university students. Educational Studies in Mathematics, 83, 145–156.

Reznik, B. (1994). Some thoughts on writing for the Putnam. In A. Schoenfeld (Ed.), Mathematical thinking and problem solving (pp. 19–29). Hillsdale, NJ: Lawrence Erlbaum.

Schoenfeld, A. H. (1985). Mathematical problem solving. Orlando, FL: Academic Press.

Schoenfeld, A. H. (1992). Learning to think mathematically: Problem solving, metacognition, and sense-making in mathematics. In D. Grouws (Ed.), Handbook for research on mathematics teaching and learning (pp. 334–370). New York: MacMillan.

Shanteau, J., & Phelps, R. H. (1977). Judgment and swine: Approaches in applied judgment analysis. In M. F. Kaplan & S. Schwartz (Eds.), Human judgment and decision processes in applied settings (pp. 255–272). New York, NY: Academic Press.

Silver, E. A. (1994). On mathematical problem posing. For the Learning of Mathematics, 14(1), 19–28.

Silver, E. A. (1997). Fostering creativity through instruction rich in mathematical problem solving and problem posing. ZDM-The International Journal on Mathematics Education, 3, 75–80.

Silver, E. A., Mamona-Downs, J., Leung, S. S., & Kenney, P. A. (1996). Posing mathematical problems: An exploratory study. Journal for Research in Mathematics Education, 27, 293–309.

Sriraman, B. (2005). Are giftedness & creativity synonyms in mathematics? An analysis of constructs within the professional and school realms. The Journal of Secondary Gifted Education, 17, 20–36.

Sriraman, B., & Hwa, L. K. (Eds.). (2010). The elements of creativity and giftedness in mathematics. Rotterdam: Sense Publishers.

Star, J. R., & Newton, K. J. (2009). The nature and development of experts' strategy flexibility for solving equations. ZDM-The International Journal on Mathematics Education, 41(5), 557–567.

Sternberg, R. J., & Lubart, T. I. (2000). The concept of creativity: Prospects and paradigms. In R. J. Sternberg (Ed.), Handbook of creativity (pp. 93–115). New York: Cambridge University Press.

Sweller, J., Mawer, R. F., & Ward, M. R. (1983). Development of expertise in mathematical problem solving. Journal of Experimental Psychology: General, 112(4), 639–661.

Torrance, E. P. (1974). Torrance tests of creative thinking. Bensenville, IL: Scholastic Testing Service.

Usiskin, Z. (2000). The development into the mathematically talented. Journal of Secondary Gifted Education, 11, 152–162.

Voss, J. M., Tyler, S., & Yengo, L. (1983). Individual differences in social science problem solving. In R. F. Dillon & R. R. Schmeck (Eds.), Individual differences in cognitive processes (Vol. 1, pp. 205–232). New York, NY: Academic Press.

Wilkerson-Jerde, M. H., & Wilensky, U. J. (2011). How do mathematicians learn math?: Resources and acts for constructing and understanding mathematics. Educational Studies in Mathematics, 78, 21–43.

Yerushalmy, M., Chazan, D., & Gordon, M. (1990). Mathematical problem posing: Implications for facilitating student inquiry in classrooms. Instructional Science, 19, 219–245.

Author information

Authors and Affiliations

RANGE Center, University of Haifa, Haifa, Israel

Haim Elgrably & Roza Leikin

Corresponding author

Correspondence to Roza Leikin.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Elgrably, H., Leikin, R. Creativity as a function of problem-solving expertise: posing new problems through investigations. ZDM Mathematics Education 53, 891–904 (2021). https://doi.org/10.1007/s11858-021-01228-3

Accepted: 22 January 2021

Published: 22 March 2021

Issue Date: August 2021

DOI: https://doi.org/10.1007/s11858-021-01228-3


Keywords: Problem posing through investigations · Geometry proof problems · Mathematics expertise · Mathematical creativity

Root Cause Analysis: What It Is & How to Perform One

07 Mar 2023

The problems that affect a company’s success don’t always result from not understanding how to solve them. In many cases, their root causes aren’t easily identified. That’s why root cause analysis is vital to organizational leadership.

According to research described in the Harvard Business Review, 85 percent of executives believe their organizations are bad at diagnosing problems, and 87 percent think that flaw carries significant costs. As a result, more businesses seek organizational leaders who avoid costly mistakes.

If you’re a leader who wants to problem-solve effectively, here’s an overview of root cause analysis and why it’s important in organizational leadership.

What Is Root Cause Analysis?

According to the online course Organizational Leadership—taught by Harvard Business School professors Joshua Margolis and Anthony Mayo—root cause analysis is the process of articulating problems’ causes to suggest specific solutions.

“Leaders must perform as beacons,” Margolis says in the course. “Namely, scanning and analyzing the landscape around the organization and identifying current and emerging trends, pressures, threats, and opportunities.”

By working with others to understand a problem’s root cause, you can generate a solution. If you’re interested in performing a root cause analysis for your organization, here are eight steps you must take.

8 Essential Steps of an Organizational Root Cause Analysis

1. Identify Performance or Opportunity Gaps

The first step in a root cause analysis is identifying the most important performance or opportunity gaps facing your team, department, or organization. Performance gaps are the ways in which your organization falls short or fails to deliver on its capabilities; opportunity gaps reflect something new or innovative it can do to create value.

Finding those gaps requires leveraging the “leader as beacon” form of leadership.

“Leaders are called upon to illuminate what's going on outside and around the organization,” Margolis says in Organizational Leadership , “identifying both challenges and opportunities and how they inform the organization's future direction.”

Without those insights, you can’t reap the benefits an effective root cause analysis can produce because external forces—including industry trends, competitors, and the economy—can affect your company’s long-term success.

2. Create an Organizational Challenge Statement

The next step is writing an organizational challenge statement explaining what the gap is and why it’s important. The statement should be three to four sentences and encapsulate the challenge’s essence.

It’s crucial to explain where your organization falls short, what problems that poses, and why it matters. Describe the gap and why you must urgently address it.

A critical responsibility is deciding which gap requires the most attention, then focusing your analysis on it. Concentrating on too many problems at once can dilute positive results.

To prioritize issues, consider which are the most time-sensitive and mission-critical, followed by which can make stakeholders happy.
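
To make that triage concrete, here is a small hypothetical sketch (not part of the HBS course material) that ranks candidate gaps by time-sensitivity and mission-criticality, with stakeholder impact as the tie-breaker; all names and scores are invented.

```python
# Hypothetical gap-prioritization sketch: higher scores mean more urgent,
# more mission-critical, or more stakeholder impact (1-5 scales, invented).
gaps = [
    {"name": "slow product development",   "time_sensitive": 3, "mission_critical": 5, "stakeholder_impact": 2},
    {"name": "growing support backlog",    "time_sensitive": 5, "mission_critical": 4, "stakeholder_impact": 4},
    {"name": "untapped enterprise market", "time_sensitive": 2, "mission_critical": 3, "stakeholder_impact": 5},
]

# Sort so the most time-sensitive, mission-critical gaps come first.
ranked = sorted(
    gaps,
    key=lambda g: (g["time_sensitive"], g["mission_critical"], g["stakeholder_impact"]),
    reverse=True,
)

for rank, gap in enumerate(ranked, start=1):
    print(f"{rank}. {gap['name']}")
```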

3. Analyze Findings with Colleagues

It's essential to work with colleagues to gain different perspectives on a problem and its root causes. This involves understanding the problem, gathering information, and developing a comprehensive analysis.

While this can be challenging when you’re a new organizational leader, using the double helix of leadership—the coevolutionary process of executing organizational leadership's responsibilities while developing the capabilities to perform them—can help foster collaboration.

Research shows diverse ideas improve high-level decision-making, which is why you should connect with colleagues with different opinions and expertise to enhance your root cause analysis’s outcome.

4. Formulate Value-Creating Activities

Next, determine what your company must do to address your organizational challenge statement. Establish three to five value-creating activities for your team, department, or organization to close the performance or opportunity gap you’ve identified.

This requires communicating organizational direction—a clear and compelling path forward that ensures stakeholders know and work toward the same goal.

“Setting direction is typically a reciprocal process,” Margolis says in Organizational Leadership. “You don't sit down and decide your direction, nor do you input your analysis of the external context into a formula and solve for a direction. Rather, setting direction is a back-and-forth process; you move between the value you'd like to create for customers, employees, investors, and your grasp of the context.”

5. Identify Necessary Behavior Changes

Once you’ve outlined activities that can provide value to your company, identify the behavior changes needed to address your organizational challenge statement.

“Your detective work throughout your root cause analysis exposes uncomfortable realities about employee competencies, organizational inefficiencies, departmental infighting, and unclear direction from leadership at multiple levels of the company,” Mayo says in Organizational Leadership.

Factors that can affect your company’s long-term success include:

  • Ineffective communication skills
  • Resistance to change
  • Problematic workplace stereotypes

Not all root cause analyses reveal behaviors that must be eliminated. Sometimes you can identify behaviors to enhance or foster internally, such as:

  • Collaboration
  • Innovative thinking
  • Creative problem-solving

6. Implement Behavior Changes

Although behaviors might be easy to pinpoint, putting them into practice can be challenging.

To ensure you implement the right changes, gauge whether they’ll have a positive or negative impact. According to Organizational Leadership, you should consider the following factors:

  • Motivation: Do the people at your organization have a personal desire for and commitment to change?
  • Competence: Do they have the skills and know-how to implement change effectively?
  • Coordination: Are they willing to work collaboratively to enact change?

Based on your answers, decide what behavior changes are plausible for your root cause analysis.

7. Map Root Causes

The next step in your analysis is mapping the root causes you’ve identified to the components of organizational alignment. Doing so helps you determine which components to adjust or change to implement employee behavior changes successfully.

Three root cause categories unrelated to behavior changes are:

  • Systems and structures: The formal organization component, including talent management, product development, and budget and accountability systems
  • People: Individuals’ profiles and the workforce’s overall composition, including employees’ skills, experience, values, and attitudes
  • Culture: The informal, intangible part of your organization, including the norms, values, attitudes, beliefs, preferences, common practices, and habits of its employees

8. Create an Action Plan

Using your findings from the previous steps, create an action plan for addressing your organizational problem’s root cause and consider your role in it.

To make the action plan achievable, ensure you:

  • Identify the problem’s root cause
  • Create measurable results
  • Ensure clear communication among your team

“One useful way to assess your potential impact on the challenge is to understand your locus of control,” Mayo says in Organizational Leadership, “or the extent to which you can personally drive the needed change or improvement.”

The best way to illustrate your control is by using three concentric circles: the innermost circle being full control of resources, the middle circle representing your ability to influence but not control, and the outermost circle alluding to shifts outside both your influence and control.

Consider these circles when implementing your action plan to ensure your goals don’t overreach.

The Importance of Root Cause Analysis in Organizational Leadership

Root cause analysis is a critical organizational leadership skill for effectively addressing problems and driving change. It helps you understand shifting conditions around your company and confirm that your efforts are relevant and sustainable.

As a leader, you must not only effect change but understand why it’s needed. Taking an online course, such as Organizational Leadership , can enable you to gain that knowledge.

Using root cause analysis, you can identify the issues behind your organization’s problems, develop a plan to address them, and make impactful changes.

Are you preparing to transition to a new leadership role? Enroll in our online certificate course Organizational Leadership—one of our leadership and management courses—and learn how to perform an effective root cause analysis to ensure your company’s long-term success. To learn more about what it takes to be an effective leader, download our free leadership e-book.

What is problem solving and why is it important?

By Wayne Stottler, Kepner-Tregoe

For over 60 years, Kepner-Tregoe has been helping companies across industries and geographies to develop and mature their problem-solving capabilities through KT’s industry-leading approach to training and the implementation of best-practice processes. Considering that problem solving is a part of almost every person’s daily life (both at home and in the workplace), it is surprising how often we are asked to explain what problem solving is and why it is important.

Problem solving is at the core of human evolution. It is the method we use to understand what is happening in our environment, identify things we want to change, and figure out what needs to be done to create the desired outcome. Problem solving is the source of all new inventions, of social and cultural evolution, and of market-based economies. It is the basis for continuous improvement, communication and learning.

If this problem-solving thing is so important to daily life, what is it?

Problem-solving is the process of observing what is going on in your environment; identifying things that could be changed or improved; diagnosing why the current state is the way it is, along with the factors and forces that influence it; developing approaches and alternatives to influence change; making decisions about which alternative to select; taking action to implement the changes; and observing the impact of those actions in the environment.

Each step in the problem-solving process employs skills and methods that contribute to the overall effectiveness of influencing change and determine the level of problem complexity that can be addressed. Humans learn how to solve simple problems from a very early age (learning to eat, make coordinated movements and communicate), and as a person goes through life, their problem-solving skills are refined and matured, becoming more sophisticated and enabling them to solve more difficult problems.
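
As a rough sketch of the generic loop described above (this is an illustration, not Kepner-Tregoe's proprietary process), the steps can be modeled as a simple, repeatable sequence:

```python
# Illustrative model of the generic problem-solving loop described above;
# step names paraphrase the article, and the handler is supplied by callers.
from typing import Callable, List

STEPS: List[str] = [
    "observe what is going on in the environment",
    "identify things that could be changed or improved",
    "diagnose why the current state is the way it is",
    "develop approaches and alternatives to influence change",
    "decide which alternative to select",
    "take action to implement the changes",
    "observe the impact of those actions",
]

def run_cycle(handle_step: Callable[[str], None]) -> None:
    """Walk through one full iteration of the loop."""
    for step in STEPS:
        handle_step(step)

# Usage: just print each step; real work would happen inside the handler.
run_cycle(lambda step: print(f"-> {step}"))
```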

Problem-solving is important both to individuals and organizations because it enables us to exert control over our environment.

Fixing things that are broken

Some things wear out and break over time; others are flawed from day one. Personal and business environments are full of things, activities, interactions and processes that are broken or not operating in the way they are desired to work. Problem-solving gives us a mechanism for identifying these things, figuring out why they are broken and determining a course of action to fix them.

Addressing risk

Humans have learned to identify trends and have developed an awareness of cause-and-effect relationships in their environment. These skills not only enable us to fix things when they break but also to anticipate what may happen in the future (based on past experience and current events). Problem-solving can be applied to anticipated future events, enabling action in the present to influence the likelihood of the event occurring and/or alter the impact if the event does occur.

Improving performance

Individuals and organizations do not exist in isolation. There is a complex and ever-changing web of relationships, and as a result, the actions of one person will often have either a direct impact on others or an indirect impact by changing the dynamics of the environment. These interdependencies enable humans to work together to solve more complex problems, but they also create a force that requires everyone to continuously improve performance to adapt to improvements by others. Problem-solving helps us understand relationships and implement the changes and improvements needed to compete and survive in a continually changing environment.

Seizing opportunity

Problem solving isn’t just about responding to (and fixing) the environment that exists today. It is also about innovating, creating new things and changing the environment to be more desirable. Problem-solving enables us to identify and exploit opportunities in the environment and exert (some level of) control over the future.

Problem-solving skills and the problem-solving process are a critical part of daily life, both for individuals and for organizations. Developing and refining these skills through training, practice and learning can provide the ability to solve problems more effectively and, over time, to address problems with a greater degree of complexity and difficulty. View KT’s Problem Solving workshop, known to be the gold standard for over 60 years.

The Mind-Expanding Value of Arts Education

As funding for arts education declines worldwide, experts ponder what students — and the world at large — are losing in the process.

By Ginanne Brownell

This article is part of our special report on the Art for Tomorrow conference that was held in Florence, Italy.

Awuor Onguru says that if it were not for her continued exposure to arts education as a child, she never would have gotten into Yale University.

Growing up in a lower-middle-class family in Nairobi, Kenya, Ms. Onguru, now a 20-year-old junior majoring in English and French, started taking music lessons at the age of four. By 12, she was playing violin in the string quartet at her primary school, where every student was required to play an instrument. As a high school student on scholarship at the International School of Kenya, she was not only being taught Bach concertos, she also became part of Nairobi’s music scene, playing first violin in a number of local orchestras.

During her high school summer breaks, Ms. Onguru — who also has a strong interest in creative writing and poetry — went to the United States, attending the Interlochen Center for the Arts’ creative writing camp in Michigan, and the Iowa Young Writers’ Studio. Ms. Onguru, who recently returned to campus after helping organize Yale Glee Club’s spring tour in Kenya, hopes to become a journalist after graduation. She has already made progress toward that goal, serving as the opinion editor for the Yale Daily News, and getting her work published in Teen Vogue and the literary journal Menacing Hedge.

“Whether you’re in sports, whether you end up in STEM, whether you end up in government, seeing my peers — who had different interests in arts — not everyone wanted to be an artist,” she said in a video interview. “But they found places to express themselves, found places to be creative, found places to say things that they didn’t know how else to say them.”

Ms. Onguru’s path shows what a pivotal role arts education can play in a young person’s development. Yet, while the arts and culture space accounts for a significant amount of gross domestic product across the globe — in the United Kingdom in 2021, the arts contributed £109 billion to the economy, while in the U.S., it brought in over $1 trillion that year — arts education budgets in schools continue to get slashed. (In 2021, for instance, the spending on arts education in the U.K. came to an average of just £9.40 per pupil for the year.)

While experts have long espoused the idea that exposure to the arts plays a critical role in primary and secondary schooling, education systems globally have continually failed to hold it in high regard. As Eric Booth, a U.S.-based arts educator and a co-author of “Playing for Their Lives: The Global El Sistema Movement for Social Change Through Music,” said: “There are a whole lot of countries in the world that don’t have the arts in the school, it just isn’t a thing, and it never has been.”

That has led to the arts education trajectory heading in a “dark downward spiral,” said Jelena Trkulja, senior adviser for academic and cultural affairs at Qatar Museums, who moderated a panel entitled “When Arts Education is a Luxury: New Ecosystems” at the Art for Tomorrow conference in Florence, Italy, organized by the Democracy & Culture Foundation, with panels moderated by New York Times journalists.

Part of why that is happening, she said, is that societies still don’t have a sufficient and nuanced understanding of the benefits arts education can bring, in terms of young people’s development. “Arts education is still perceived as an add-on, rather than an essential field creating essential 21st-century skills that are defined as the four C’s of collaboration, creativity, communication and critical thinking,” Dr. Trkulja said in a video interview, “and those skills are being developed in arts education.”

Dennie Palmer Wolf, principal researcher at the U.S.-based arts research consultancy WolfBrown, agreed. “We have to learn to make a much broader argument about arts education,” she said. “It isn’t only playing the cello.”

It is largely through the arts that we as humans understand our own history, from a cave painting in Indonesia thought to be 45,000 years old to “The Tale of Genji,” a book that’s often called the world’s first novel, written by an 11th-century Japanese woman, Murasaki Shikibu; from the art of Michelangelo and Picasso to the music of Mozart and Miriam Makeba and Taylor Swift.

“The arts are one of the fundamental ways that we try to make sense of the world,” said Brian Kisida, an assistant professor at the University of Missouri’s Truman School of Public Affairs and a co-director of the National Endowment for the Arts-sponsored Arts, Humanities & Civic Engagement Lab. “People use the arts to offer a critical perspective of their exploration of the human condition, and that’s what the root of education is in some ways.”

And yet, the arts don’t lend themselves well to hard data, something educators and policymakers need to justify classes in those disciplines in their budgets. “Arts is this visceral thing, this thing inside you, the collective moment of a crescendo,” said Heddy Lahmann, an assistant professor of international education at New York University, who is conducting a global study examining arts education in public schools for the Community Arts Network. “But it’s really hard to qualify what that is.”

Dr. Lahmann’s early research into the decrease in public school spending on arts education points to everything from the lack of trained teachers in the arts — partly because those educators are worried about their own job security — to the challenges of teaching arts remotely in the early days of the Covid pandemic. And, of course, standardized tests like the Program for International Student Assessment, which covers reading, math and science, where countries compete on outcomes. “There’s a race to get those indicators,” Dr. Lahmann said, “and arts don’t readily fit into that.” In part, that is because standardized tests don’t cover arts education.

“It’s that unattractive truth that what gets measured gets attended to,” said Mr. Booth, the arts educator who co-authored “Playing for Their Lives.”

While studies over the years have underscored the ways that arts education can lead to better student achievement — in the way that musical skills support literacy, say, and arts activities lead to improved vocabulary — what has traditionally been lacking are large-scale randomized controlled studies. But a recent research project done in 42 elementary and middle schools in Houston, which was co-directed by Dr. Kisida and Daniel H. Bowen, a professor who teaches education policy at Texas A&M, is the first of its kind to do just that. Their research found that students who had increased arts education experiences saw improvements in writing achievement, emotional and cognitive empathy, school engagement and higher education aspirations, while they had a lower incidence of disciplinary infractions.

As young people are now, more than ever, inundated with images on social media and businesses are increasingly using A.I., it has become even more relevant for students these days to learn how to think more critically and creatively. “Because what is required of us in this coming century is an imaginative capacity that goes far beyond what we have deliberately cultivated in the schooling environment over the last 25 years,” said Mariko Silver, the chief executive of the Henry Luce Foundation, “and that requires truly deep arts education for everyone.”

Facility for Rare Isotope Beams

At Michigan State University, international research team uses wavefunction matching to solve quantum many-body problems

New approach makes calculations with realistic interactions possible

FRIB researchers are part of an international research team solving challenging computational problems in quantum physics using a new method called wavefunction matching. The new approach has applications to fields such as nuclear physics, where it is enabling theoretical calculations of atomic nuclei that were previously not possible. The details are published in Nature (“Wavefunction matching for solving quantum many-body problems”).

Ab initio methods and their computational challenges

An ab initio method describes a complex system by starting from a description of its elementary components and their interactions. For the case of nuclear physics, the elementary components are protons and neutrons. Some key questions that ab initio calculations can help address are the binding energies and properties of atomic nuclei not yet observed and linking nuclear structure to the underlying interactions among protons and neutrons.

Yet, some ab initio methods struggle to produce reliable calculations for systems with complex interactions. One such method is quantum Monte Carlo simulations. In quantum Monte Carlo simulations, quantities are computed using random or stochastic processes. While quantum Monte Carlo simulations can be efficient and powerful, they have a significant weakness: the sign problem. The sign problem develops when positive and negative weight contributions cancel each other out. This cancellation results in inaccurate final predictions. It is often the case that quantum Monte Carlo simulations can be performed for an approximate or simplified interaction, but the corresponding simulations for realistic interactions produce severe sign problems and are therefore not possible.
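
A toy numerical experiment (our illustration, not the paper's lattice simulation) makes the sign problem tangible: when sampled weights all share one sign the Monte Carlo mean is stable, but when large positive and negative weights nearly cancel, a small true mean drowns in statistical noise of order 1/sqrt(n).

```python
# Toy illustration of the sign problem (not the paper's simulation).
import random

random.seed(1)
n = 100_000

# Benign case: all weights positive; the estimate of the mean is stable.
positive = [0.5 + 0.5 * random.random() for _ in range(n)]   # true mean 0.75

# Sign-problem case: weights +1 or -1 with almost equal probability, so the
# true mean (0.001) is smaller than the statistical noise (~1/sqrt(n) ~ 0.003):
# the large cancelling contributions bury the signal.
signed = [1.0 if random.random() < 0.5005 else -1.0 for _ in range(n)]

print(f"positive weights: estimate = {sum(positive) / n:.4f} (true mean 0.7500)")
print(f"signed weights:   estimate = {sum(signed) / n:+.4f} (true mean +0.0010)")
```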

Using ‘plastic surgery’ to make calculations possible

The new wavefunction-matching approach is designed to solve such computational problems. The research team—from Gaziantep Islam Science and Technology University in Turkey; University of Bonn, Ruhr University Bochum, and Forschungszentrum Jülich in Germany; Institute for Basic Science in South Korea; South China Normal University, Sun Yat-Sen University, and Graduate School of China Academy of Engineering Physics in China; Tbilisi State University in Georgia; CEA Paris-Saclay and Université Paris-Saclay in France; and Mississippi State University and the Facility for Rare Isotope Beams (FRIB) at Michigan State University (MSU)—includes Dean Lee, professor of physics at FRIB and in MSU’s Department of Physics and Astronomy and head of the Theoretical Nuclear Science department at FRIB, and Yuan-Zhuo Ma, postdoctoral research associate at FRIB.

“We are often faced with the situation that we can perform calculations using a simple approximate interaction, but realistic high-fidelity interactions cause severe computational problems,” said Lee. “Wavefunction matching solves this problem by doing plastic surgery. It removes the short-distance part of the high-fidelity interaction, and replaces it with the short-distance part of an easily computable interaction.”

This transformation is done in a way that preserves all of the important properties of the original realistic interaction. Since the new wavefunctions look similar to those of the easily computable interaction, researchers can now perform calculations using the easily computable interaction and apply a standard procedure for handling small corrections, called perturbation theory.
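
In spirit, the last step is ordinary first-order perturbation theory. The numpy sketch below is a schematic stand-in (random 4x4 matrices, not a nuclear Hamiltonian): solve a simple Hamiltonian H0 exactly, then correct its ground-state energy with the expectation value of the small residual term dH.

```python
# Schematic sketch of first-order perturbation theory on toy matrices
# (these matrices are stand-ins, not the paper's nuclear interactions).
import numpy as np

rng = np.random.default_rng(0)

# "Easily computable" Hamiltonian: diagonal, so it is solved trivially.
H0 = np.diag([1.0, 2.0, 3.0, 4.0])

# Small Hermitian residual standing in for the difference between the
# realistic interaction and the simplified one.
A = 0.05 * rng.standard_normal((4, 4))
dH = (A + A.T) / 2

E0, V = np.linalg.eigh(H0)     # exact spectrum of the simple problem
psi0 = V[:, 0]                 # ground state of H0

E1 = psi0 @ dH @ psi0          # first-order energy correction <psi0|dH|psi0>
E_exact = np.linalg.eigh(H0 + dH)[0][0]

print(f"E0 + E1 = {E0[0] + E1:.6f}")
print(f"exact   = {E_exact:.6f}")
```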

A team effort

The research team applied this new method to lattice quantum Monte Carlo simulations for light nuclei, medium-mass nuclei, neutron matter, and nuclear matter. Using precise ab initio calculations, the results closely matched real-world data on nuclear properties such as size, structure, and binding energies. Calculations that were once impossible due to the sign problem can now be performed using wavefunction matching.

“It is a fantastic project and an excellent opportunity to work with the brightest nuclear scientists at FRIB and around the globe,” said Ma. “As a theorist, I'm also very excited about programming and conducting research on the world's most powerful exascale supercomputers, such as Frontier, which allows us to implement wavefunction matching to explore the mysteries of nuclear physics.”

While the research team focused solely on quantum Monte Carlo simulations, wavefunction matching should be useful for many different ab initio approaches, including both classical and quantum computing calculations. The researchers at FRIB worked with collaborators at institutions in China, France, Germany, South Korea, Turkey, and the United States.

“The work is the culmination of effort over many years to handle the computational problems associated with realistic high-fidelity nuclear interactions,” said Lee. “It is very satisfying to see that the computational problems are cleanly resolved with this new approach. We are grateful to all of the collaboration members who contributed to this project, in particular, the lead author, Serdar Elhatisari.”

This material is based upon work supported by the U.S. Department of Energy, the U.S. National Science Foundation, the German Research Foundation, the National Natural Science Foundation of China, the Chinese Academy of Sciences President’s International Fellowship Initiative, Volkswagen Stiftung, the European Research Council, the Scientific and Technological Research Council of Turkey, the National Security Academic Fund, the Rare Isotope Science Project of the Institute for Basic Science, the National Research Foundation of Korea, the Institute for Basic Science, and the Espace de Structure et de réactions Nucléaires Théorique.

Michigan State University operates the Facility for Rare Isotope Beams (FRIB) as a user facility for the U.S. Department of Energy Office of Science (DOE-SC), supporting the mission of the DOE-SC Office of Nuclear Physics. Hosting what is designed to be the most powerful heavy-ion accelerator, FRIB enables scientists to make discoveries about the properties of rare isotopes in order to better understand the physics of nuclei, nuclear astrophysics, fundamental interactions, and applications for society, including in medicine, homeland security, and industry.

The U.S. Department of Energy Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of today’s most pressing challenges. For more information, visit energy.gov/science.

ScienceDaily

Wavefunction matching for solving quantum many-body problems

Strongly interacting systems play an important role in quantum physics and quantum chemistry. Stochastic methods such as Monte Carlo simulations are a proven method for investigating such systems. However, these methods reach their limits when so-called sign oscillations occur. This problem has now been solved by an international team of researchers from Germany, Turkey, the USA, China, South Korea and France using the new method of wavefunction matching. As an example, the masses and radii of all nuclei up to mass number 50 were calculated using this method. The results agree with the measurements, the researchers now report in the journal Nature.

All matter on Earth consists of tiny particles known as atoms. Each atom contains even smaller particles: protons, neutrons and electrons. Each of these particles follows the rules of quantum mechanics. Quantum mechanics forms the basis of quantum many-body theory, which describes systems with many particles, such as atomic nuclei.

One class of methods used by nuclear physicists to study atomic nuclei is the ab initio approach. It describes complex systems by starting from a description of their elementary components and their interactions. In the case of nuclear physics, the elementary components are protons and neutrons. Some key questions that ab initio calculations can help answer are the binding energies and properties of atomic nuclei and the link between nuclear structure and the underlying interactions between protons and neutrons.

However, these ab initio methods have difficulties in performing reliable calculations for systems with complex interactions. One of these methods is quantum Monte Carlo simulations. Here, quantities are calculated using random or stochastic processes. Although quantum Monte Carlo simulations can be efficient and powerful, they have a significant weakness: the sign problem. It arises in processes with positive and negative weights, which cancel each other. This cancellation leads to inaccurate final predictions.

A new approach, known as wavefunction matching, is intended to help solve such calculation problems for ab initio methods. "This problem is solved by the new method of wavefunction matching by mapping the complicated problem in a first approximation to a simple model system that does not have such sign oscillations and then treating the differences in perturbation theory," says Prof. Ulf-G. Meißner from the Helmholtz Institute for Radiation and Nuclear Physics at the University of Bonn and from the Institute of Nuclear Physics and the Center for Advanced Simulation and Analytics at Forschungszentrum Jülich. "As an example, the masses and radii of all nuclei up to mass number 50 were calculated -- and the results agree with the measurements," reports Meißner, who is also a member of the Transdisciplinary Research Areas "Modeling" and "Matter" at the University of Bonn.

"In quantum many-body theory, we are often faced with the situation that we can perform calculations using a simple approximate interaction, but realistic high-fidelity interactions cause severe computational problems," says Dean Lee, professor of physics at the Facility for Rare Isotope Beams (FRIB) and the Department of Physics and Astronomy at Michigan State University and head of the Theoretical Nuclear Science department at FRIB.

Wavefunction matching solves this problem by removing the short-distance part of the high-fidelity interaction and replacing it with the short-distance part of an easily calculable interaction. This transformation is done in a way that preserves all the important properties of the original realistic interaction. Since the new wavefunctions are similar to those of the easily computable interaction, the researchers can now perform calculations with the easily computable interaction and apply a standard procedure for handling small corrections -- called perturbation theory.

The research team applied this new method to lattice quantum Monte Carlo simulations for light nuclei, medium-mass nuclei, neutron matter and nuclear matter. Using precise ab initio calculations, the results closely matched real-world data on nuclear properties such as size, structure and binding energy. Calculations that were once impossible due to the sign problem can now be performed with wavefunction matching.

While the research team focused exclusively on quantum Monte Carlo simulations, wavefunction matching should be useful for many different ab initio approaches. "This method can be used in both classical computing and quantum computing, for example to better predict the properties of so-called topological materials, which are important for quantum computing," says Meißner.

The first author is Prof. Dr. Serdar Elhatisari, who worked for two years as a Fellow in Prof. Meißner's ERC Advanced Grant EXOTIC. According to Meißner, a large part of the work was carried out during this time. Part of the computing time on supercomputers at Forschungszentrum Jülich was provided by the IAS-4 institute, which Meißner heads.

Story Source:

Materials provided by University of Bonn. Note: Content may be edited for style and length.

Journal Reference:

  • Serdar Elhatisari, Lukas Bovermann, Yuan-Zhuo Ma, Evgeny Epelbaum, Dillon Frame, Fabian Hildenbrand, Myungkuk Kim, Youngman Kim, Hermann Krebs, Timo A. Lähde, Dean Lee, Ning Li, Bing-Nan Lu, Ulf-G. Meißner, Gautam Rupak, Shihang Shen, Young-Ho Song, Gianluca Stellin. Wavefunction matching for solving quantum many-body problems. Nature, 2024; DOI: 10.1038/s41586-024-07422-z

Cite This Page :

Explore More

  • Stopping Flu Before It Takes Hold
  • Cosmic Rays Illuminate the Past
  • Star Suddenly Vanish from the Night Sky
  • Dinosaur Feather Evolution
  • Warming Climate: Flash Droughts Worldwide
  • Record Low Antarctic Sea Ice: Climate Change
  • Brain 'Assembloids' Mimic Blood-Brain Barrier
  • 'Doomsday' Glacier: Catastrophic Melting
  • Blueprints of Self-Assembly
  • Meerkat Chit-Chat

Trending Topics

Strange & offbeat.

IMAGES

  1. Problem-Solving Strategies: Definition and 5 Techniques to Try

    the importance of investigation and problem solving

  2. 15 Importance of Problem Solving Skills in the Workplace

    the importance of investigation and problem solving

  3. Why problem solving is important?

    the importance of investigation and problem solving

  4. Top 10 Skills Of Problem Solving With Examples

    the importance of investigation and problem solving

  5. 5 step problem solving method

    the importance of investigation and problem solving

  6. 5 why problem solving tool

    the importance of investigation and problem solving

VIDEO

  1. Importance of Problem solving and Decision Making Skill

  2. VW Scirocco ignition difficult to turn... Fault finding and repair

  3. CXC|CSEC|MATHS 2024 Paper 2 Exam Part 7 (CSEC CXC INVESTIGATION AND PROBLEM SOLVING SOLUTION)

  4. Meaningful Incident Investigation

  5. AI/ML in Criminal Investigation

  6. Grade 11 investigation (Video 6)

COMMENTS

  1. Problem Solving, Investigating Ideas and Solutions

    The elements necessary for divergent thinking include: Releasing the mind from old patterns of thought and other inhibiting influences. Bringing the elements of a problem into new combinations. Not rejecting any ideas during the creative, problem solving period. Actively practicing, encouraging and rewarding the creation of new ideas.

  2. What is the Scientific Method: How does it work and why is it important

    While the scientific method is versatile in form and function, it encompasses a collection of principles that create a logical progression to the process of problem solving: Define a question: Constructing a clear and precise problem statement that identifies the main question or goal of the investigation is the first step. The wording must ...

  3. [PDF] Why are MATHEMATICAL INVESTIGATIONS important?

    Mathematical investigations would generally focus on one or more of the three content strands as well as developing elements of one or more of the four proficiency strands (understanding, fluency, problem solving and reasoning), which are necessary for 'working mathematically'. Figure 1 of the source shows a diagram of the investigative approach.

  4. Generating and Evaluating Scientific Evidence and Explanations

    The evidence-gathering phase of inquiry includes designing the investigation as well as carrying out the steps required to collect the data. Generating evidence entails asking questions, deciding what to measure, developing measures, collecting data from the measures, structuring the data, systematically documenting outcomes of the investigations, interpreting and evaluating the data, and ...

  5. Perspective: Problem Finding and the Multidisciplinary Mind

    The multidisciplinary mind. Broadly, scientific investigation may start in two ways, either of which may be fruitful. A "problem focused" approach begins with a question that stimulates studies to look for answers. Sometimes, however, new solutions to a specific problem appear that suggest potential application to other problems.

  6. Problem Solving

    Importance of problem solving skills. Obviously, every organization has problems and every individual has problems too. For this reason, the ability to solve problems is of great importance to individuals and organizations. ... Being inquisitive and conducting thorough investigation and research helps you identify what the core of the problem ...

  7. What is Problem Solving? Steps, Process & Techniques

    Finding a suitable solution for issues can be accomplished by following the basic four-step problem-solving process and methodology outlined below. The first step is to define the problem: differentiate fact from opinion, specify underlying causes, consult each faction involved for information, and state the problem specifically.

  8. Investigation and problem-solving in mathematical education

    Summary. Investigation can play a vital part in the learning of mathematical concepts and in problem-solving. At all stages the teacher has an essential part to play. He sets the scene, providing real materials or a challenging problem when necessary. He observes what his pupils do with these and asks questions which will help their learning.

  9. [PDF] Section One: Investigation and Problem-solving

    ... the investigation. For instance, in any one investigation lesson a student might learn any or all of the following: i) a new mathematical fact or technique. This will be something which the students need in order to pursue the investigation. The students will seek the new information themselves. They could find it out from you or from a peer ...

  10. Problem Solving in Science Learning

    The traditional teaching of science problem solving involves a considerable amount of drill and practice. Research suggests that these practices do not lead to the development of expert-like problem-solving strategies and that there is little correlation between the number of problems solved (exceeding 1,000 problems in one specific study) and the development of a conceptual understanding.

  11. The effectiveness of collaborative problem solving in promoting

    Collaborative problem-solving has been widely embraced in the classroom instruction of critical thinking, which is regarded as the core of curriculum reform based on key competencies in the field ...

  12. [PDF] Characterising the Cognitive Processes in Mathematical Investigation

    In Section 2, we have decoupled problem posing from the process of investigation. In other words, an open investigative activity involves both problem posing and problem solving, i.e., problem posing is a subset of an open investigative activity and not a subset of the process of investigation.

  13. Creativity as a function of problem-solving expertise: posing new

    3.1 Problem posing through investigations. PPI is a complex mathematical activity that includes the following (Leikin and Elgrably 2020): investigating a geometrical figure (from a proof problem) in a DGE (experimenting, conjecturing and testing), in order to find several [at least 2] non-trivial properties of the given figure and related figures that are constructed using auxiliary ...

  14. Using Research for Investigative Decision-Making

    An evidence-based policing (EBP) approach offers law enforcement a foundation for real-world problem-solving through the compilation and promotion of research efforts focused on policing. However, ... it can also be misapplied. For this reason, understanding the process of research and its application to investigations is important.

  15. [PDF] An Investigation of Problem Solving Approaches, Strategies, and ...

    problem solving outside of the classroom. From this perspective, a realistic answer for the first problem would be 13 rather than 12.5 (see the worked sketch after this list). A realistic answer to the second problem is 8, because in reality one can saw only 2 planks of 1 meter from a plank 2.5 meters long. The third problem does not have a single cor-

  16. Root Cause Analysis: What It Is & How to Perform One

    8 Essential Steps of an Organizational Root Cause Analysis. 1. Identify Performance or Opportunity Gaps. The first step in a root cause analysis is identifying the most important performance or opportunity gaps facing your team, department, or organization. Performance gaps are the ways in which your organization falls short or fails to deliver ...

  17. [PDF] Solving Mathematical Problems by Investigation

    Solving Mathematical Problems by Investigation. Joseph B. W. Yeo, Yeap Ban Har. Published 1 May 2009. This chapter discusses the relationship between problem solving and investigation by differentiating investigation as a task, as a process and as an activity, and shows how the process of investigation can occur in ...

  18. What is problem solving and why is it important

    Problem-solving enables us to identify and exploit opportunities in the environment and exert (some level of) control over the future. Problem solving skills and the problem-solving process are a critical part of daily life both as individuals and organizations. Developing and refining these skills through training, practice and learning can ...

  19. Intelligence and creativity in problem solving: The importance of test

    This paper discusses the importance of three features of psychometric tests for cognition research: construct definition, problem space, and knowledge domain. Definition of constructs, e.g., intelligence or creativity, forms the theoretical basis for test construction. Problem space, being well or ill-defined, is determined by the cognitive abilities considered to belong to the constructs, e.g ...

  20. [PDF] Observational investigation of student problem solving: The role and

    Observational investigation of student problem solving: The role and importance of habits. Ozcan Gulacar, Charles R. Bowman, Debra A. Feakes. Abstract: The problem-solving strategies of students enrolled in general chemistry courses have been the subject of numerous research investigations. ...

  21. [PDF] ENHANCING THE PROBLEM-SOLVING AND CRITICAL ...

    Enhancing the problem-solving and critical thinking skills of students using the mathematical investigation approach. Pentang, J. (2019). Determining elementary pre-service teachers' problem solving ...

  22. [PDF] Problem Solving In Science Learning

    This is why domain-specific study of problem solving activity and creativity are gradually becoming more and more important. Among the various disciplines taught in an institution, science particularly seems to have enough scope to encourage problem-solving skills. In fact, problem solving is the essence of scientific investigation (Meador, ...

  23. The Mind-Expanding Value of Arts Education

    While experts have long espoused the idea that exposure to the arts plays a critical role in primary and secondary schooling, education systems globally have ...

  24. International research team uses wavefunction matching to solve quantum

    New approach makes calculations with realistic interactions possible. FRIB researchers are part of an international research team solving challenging computational problems in quantum physics using a new method called wavefunction matching. The new approach has applications to fields such as nuclear physics, where it is enabling theoretical calculations of atomic nuclei that were previously not ...

  25. [PDF] Problem Posing in Mathematical Investigation

    Based on these models but with some modifications, an investigation model (see Fig. 1) was developed for this study to describe the interaction of these processes. An important difference between a mathematical investigation model and a problem-solving model is the additional phase of problem posing after understanding the task in investigation.

  26. Hydrogen releasing law and in situ computed tomography investigation of

    This research has important theoretical and practical value for solving the wading-safety problem in lithium-ion battery and electric vehicle design. Wading events involving new-energy electric vehicles occur frequently, and the wading safety of lithium-ion batteries has attracted increasing attention.

  27. Wavefunction matching for solving quantum many-body problems

    Wavefunction matching for solving quantum many-body problems. Date: May 15, 2024. Source: University of Bonn. Summary: Strongly interacting systems play an important role in quantum physics and ...
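As a worked footnote to source 15 above: the 'realistic answer' point is ordinary division followed by context-aware rounding. The excerpt does not state the original figures for the first problem, so the numbers below (450 people to transport, 36 seats per vehicle) are hypothetical stand-ins chosen only to reproduce the 12.5-versus-13 effect; the plank figures follow from the excerpt itself (a realistic answer of 8 at 2 usable planks per board implies 4 boards). A minimal Python sketch:

```python
import math

# First problem (hypothetical figures -- the source does not give them):
# dividing 450 people among vehicles seating 36 gives 12.5 on paper,
# but half a vehicle cannot be used, so the realistic answer rounds up.
people, seats = 450, 36
print(people / seats)             # 12.5  (the purely numerical answer)
print(math.ceil(people / seats))  # 13    (the realistic answer)

# Second problem (figures implied by the excerpt): each 2.5 m board
# yields only 2 whole 1 m planks; the 0.5 m offcut is waste.
boards = 4
per_board = math.floor(2.5 / 1.0)  # 2, not 2.5
print(boards * per_board)          # 8 whole 1 m planks
```

The point the source is making is that the rounding direction comes from the situation (rounding up for capacity, down for cutting), not from the arithmetic itself.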