Figure: US → UR (before conditioning, the unconditioned stimulus automatically produces the unconditioned response).
Let’s think about how classical conditioning is used on us. Another example you are probably very familiar with involves your alarm clock. If you are like most people, waking up early usually makes you unhappy: waking up early (US) produces a natural sensation of grumpiness (UR). Rather than waking up on your own, though, you likely use an alarm clock that plays a tone to wake you. Before you set your alarm to that particular tone, let’s imagine you had neutral feelings about it (i.e., the tone had no prior meaning for you). Now that you use it to wake up every morning, however, that tone (CS) is repeatedly paired with waking up early (US). After enough pairings, the tone alone (CS) will automatically produce your natural response of grumpiness, now a conditioned response (CR). The link between the conditioned stimulus (CS; the tone) and the unconditioned stimulus (US; waking up early) becomes so strong that hearing the tone at any point in the day, whether waking up or walking down the street, will make you grumpy. Modern studies of classical conditioning use a vast range of CSs and USs and measure a wide range of conditioned responses.
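The gradual strengthening described above can be illustrated with a toy simulation. This is only a sketch using a simplified Rescorla-Wagner-style update; the function name, learning rate, and trial counts are illustrative assumptions, not empirical values.

```python
# Toy model of classical conditioning: the associative strength between
# the tone (CS) and waking up early (US) grows with each pairing, until
# the tone alone is enough to trigger the grumpy response (CR).

def condition(pairings, learning_rate=0.3, max_strength=1.0):
    """Return the CS-US associative strength after a number of pairings."""
    strength = 0.0  # the tone starts out neutral (no prior meaning)
    for _ in range(pairings):
        # each pairing closes a fraction of the gap to full association
        strength += learning_rate * (max_strength - strength)
    return strength

print(round(condition(1), 2))   # 0.3  -> one pairing: weak association
print(round(condition(10), 2))  # 0.97 -> many pairings: near-maximal
```

Because each pairing closes only a fraction of the remaining gap, the association grows quickly at first and then levels off, which matches the intuition that “enough pairings” are needed before the tone reliably produces the response.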
Watson believed that most of our fears and other emotional responses are classically conditioned. He had gained a good deal of popularity in the 1920s with the expert parenting advice he offered to the public. He believed that parents could be taught to help shape their children’s behavior, and he tried to demonstrate the power of classical conditioning with his famous experiment with an infant boy, about 9 months old, known as “Little Albert.” Watson sat Albert down and introduced a variety of seemingly scary objects to him: a burning piece of newspaper, a white rat, and so on. However, Albert remained curious and reached for all of these things. Watson knew that one of our only inborn fears is the fear of loud noises, so he proceeded to make a loud noise each time he introduced one of Albert’s favorites, a white rat. After the loud noise was paired with the rat several times, Albert came to fear the rat and began to cry when it was introduced. Watson filmed this experiment for posterity and used it to demonstrate that he could help parents achieve any outcomes they desired if they would only follow his advice. Watson wrote columns in newspapers and magazines and gained much popularity among parents eager to apply science to household order. Parenting advice was not the legacy Watson left us, however. Where he made his impact was in advertising. After Watson left academia, he went into the world of business and showed companies how to tie something that brings about a natural positive feeling to their products in order to enhance sales. Thus, the union of sex and advertising!
Operant conditioning is another learning theory, one that emphasizes a more conscious type of learning than classical conditioning. A person (or animal) does something (operates on something) to see what effect it might bring. Simply put, operant conditioning describes how we repeat behaviors because they pay off for us. It is based on a principle authored by the psychologist Edward Thorndike (1874-1949) called the law of effect. The law of effect suggests that we will repeat an action if a good effect follows it; when a behavior has a negative (painful or annoying) consequence, it is less likely to be repeated in the future. Effects that increase behaviors are referred to as reinforcers, and effects that decrease them are referred to as punishers. Operant conditioning occurs when a behavior (as opposed to a stimulus) is associated with the occurrence of a significant event. This voluntary behavior is called an operant behavior, because it “operates” on the environment (i.e., it is an action that the animal itself makes).
Figure caption: When a dog does a trick, the dog receives a treat. This is operant conditioning: the action (the trick) produces a consequence (the treat) that makes the behavior more likely. Photo courtesy of Pixabay.
Let’s think about how operant conditioning is used on us. Have you ever done something to get a reward or not done something to avoid punishment? This is operant conditioning!
B.F. Skinner (1904-1990) expanded on Thorndike’s principle and outlined the principles of operant conditioning. Skinner believed that we learn best when our actions are reinforced. For example, a child who cleans his room and is reinforced (rewarded) with a big hug and words of praise is more likely to clean it again than a child whose deed goes unnoticed. Skinner believed that almost anything could be reinforcing. A reinforcer is anything following a behavior that makes it more likely to occur again. It can be something intrinsically rewarding (called an intrinsic or primary reinforcer), such as food, or it can be something rewarding because it can be exchanged for what one wants or has come to signal approval, such as money or praise. Such reinforcers are referred to as secondary reinforcers or extrinsic reinforcers.
Sometimes, adding something to the situation is reinforcing, as in the cases we described above with cookies, praise, and money. Positive reinforcement involves adding something to the situation in order to encourage a behavior. Other times, taking something away from a situation can be reinforcing. For example, the loud, annoying buzzer on your alarm clock encourages you to get up so that you can turn it off and get rid of the noise. Children whine to get their parents to do something, and often parents give in to stop the whining. In these instances, negative reinforcement has been used.
Operant conditioning tends to work best if you focus on encouraging a behavior, moving a person in the direction you want them to go, rather than telling them what not to do. Reinforcers are used to encourage a behavior; punishers are used to stop a behavior. A punisher is anything that follows an act and decreases the chance it will reoccur. Often, however, a punished behavior does not go away; it is merely suppressed and may reoccur whenever the threat of punishment is removed. For example, a child may not cuss around you because you have washed his mouth out with soap, but he may cuss around his friends. Alternatively, a motorist may only slow down when the trooper is on the side of the freeway. Another problem with punishment is that a person who focuses on punishment may find it hard to see what the other person does right or well. Moreover, punishment can be stigmatizing: when punished, some people start to see themselves as bad and give up trying to change.
Table. Positive and Negative Reinforcement and Punishment
| | Positive (stimulus added) | Negative (stimulus removed) |
| Reinforcement (increases the behavior it follows) | Add a pleasant stimulus to reward. Example: a dog gets a treat for doing a trick. | Remove an aversive stimulus to reward. Example: buckling up to stop the seat-belt alarm. |
| Punishment (decreases the behavior it follows) | Add an aversive stimulus to punish. Example: paying a fine for a late library book. | Remove a pleasant stimulus to punish. Example: being taken out of the game for rough behavior. |
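The four cells of the table reduce to two questions: is a stimulus added or removed, and does the consequence increase or decrease the behavior? A minimal sketch (the function and its labels are hypothetical, written only to mirror the table):

```python
# Classify a consequence using the two distinctions from the table:
# add/remove a stimulus, and increase/decrease the behavior.

def classify(stimulus_change, behavior_effect):
    kind = "Reinforcement" if behavior_effect == "increase" else "Punishment"
    sign = "Positive" if stimulus_change == "add" else "Negative"
    return f"{sign} {kind}"

print(classify("add", "increase"))     # Positive Reinforcement (treat for a trick)
print(classify("remove", "increase"))  # Negative Reinforcement (buckle up, alarm stops)
print(classify("add", "decrease"))     # Positive Punishment (late-book fine)
print(classify("remove", "decrease"))  # Negative Punishment (taken out of the game)
```

Note that “positive” and “negative” here describe only whether a stimulus is added or removed, not whether the experience is pleasant.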
Reinforcement can occur in a predictable way, such as after every desired action, or intermittently: after the behavior has been performed a certain number of times (a ratio schedule), or the first time it is performed after a certain amount of time has passed (an interval schedule). The schedule of reinforcement affects how long a behavior continues once reinforcement is discontinued. A parent who has rewarded a child’s actions every time may find that the child gives up very quickly if a reward is not immediately forthcoming, whereas a lover who is warmly regarded only now and then may continue to seek out his or her partner’s attention long after the partner has tried to break up. Think about the kinds of behaviors you may have learned through classical and operant conditioning; you may have learned many things in this way. Sometimes, however, we learn very complex behaviors quickly and without direct reinforcement. Bandura explains how.
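The schedules described above can be sketched as simple rules deciding which responses earn a reward. This is an illustration only; the function names and the specific numbers are hypothetical examples, not standard terminology from the text:

```python
# Two common reinforcement schedules, reduced to the rule that decides
# whether each response is rewarded.

def fixed_ratio(n, responses):
    """Reinforce every n-th response (n=1 is continuous reinforcement)."""
    return [i % n == 0 for i in range(1, responses + 1)]

def fixed_interval(seconds, response_times):
    """Reinforce the first response made after each interval elapses."""
    rewards, next_due = [], seconds
    for t in response_times:
        if t >= next_due:
            rewards.append(True)       # first response after the interval
            next_due = t + seconds     # the clock restarts from this reward
        else:
            rewards.append(False)
    return rewards

# Continuous reinforcement: every single response pays off.
print(fixed_ratio(1, 4))    # [True, True, True, True]
# Fixed-ratio 3: only every third response pays off.
print(fixed_ratio(3, 6))    # [False, False, True, False, False, True]
# Fixed interval of 60 s: the first response after each minute is rewarded.
print(fixed_interval(60, [10, 65, 70, 130]))   # [False, True, False, True]
```

Behavior maintained on the intermittent schedules, where most responses go unrewarded, is the kind the text describes as most persistent once reinforcement stops.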
Albert Bandura is a leading contributor to social learning theory. He calls our attention to the fact that many of our actions are not learned through conditioning; instead, they are learned by watching others (1977). Young children frequently learn behaviors through imitation. Sometimes, particularly when we do not know what else to do, we learn by modeling or copying the behavior of others. An employee on his or her first day at a new job might eagerly watch how others act and try to act the same way in order to fit in more quickly. Adolescents struggling with their identity rely heavily on their peers to act as role models. Newly married couples often rely on roles they may have learned from their parents and begin to act in ways they did not while dating, and then wonder why their relationship has changed. Sometimes we do things because we have seen them pay off for someone else: that person was operantly conditioned, and we engage in the behavior because we hope it will pay off for us as well. This is referred to as vicarious reinforcement (Bandura, Ross, & Ross, 1963).
Bandura (1986) suggests that there is an interplay between the environment and the individual. We are not just the product of our surroundings; rather we influence our surroundings. There is interplay between our personality and the way we interpret events and how they influence us. This concept is called reciprocal determinism.
Figure caption: In reciprocal determinism, there are bidirectional influences among how a person thinks and feels, the environment, and his or her actions. Image courtesy of Wikimedia.
An example of this might be the interplay between parents and children. Parents not only influence their child’s environment, perhaps intentionally through the use of reinforcement, but children influence parents as well. Parents may respond differently to their first child than to their fourth. Perhaps they try to be the perfect parents with their firstborn, but by the time their last child comes along they have very different expectations, both of themselves and of their child. Our environment creates us, and we create our environment.

Other social influences: TV or not TV?

Bandura and his colleagues (Bandura et al., 1963) began a series of studies to examine the impact that television, particularly commercials, has on the behavior of children. Are children more likely to act out aggressively when they see this behavior modeled? What if they see it being reinforced? Bandura began by conducting an experiment in which he showed children a film of a woman hitting an inflatable clown, or “Bobo” doll. The children were then allowed into the room, where they found the doll and immediately began to hit it, without any reinforcement whatsoever. Later, children viewed a woman hitting a real clown, and sure enough, when allowed into the room, they too began to hit the clown. Not only that, but they found new ways to behave aggressively, as if they had learned an aggressive role.
Children view far more television today than in the 1960s; so much, in fact, that they have been referred to as Generation M (media). Based on a study of a nationally representative sample of over 7,000 8- to 18-year-olds, the Kaiser Foundation (2010) reports that children spend just over 7 hours a day involved with media outside of schoolwork. This includes almost 4 hours of television viewing and over an hour on the computer. Two-thirds have a television in their room, and those children watch an average of 1.27 hours more television per day than those who do not have a television in their bedroom (Kaiser Family Foundation, 2005). The prevalence of violence, sexual content, and messages promoting foods high in fat and sugar in the media is certainly cause for concern and the subject of ongoing research and policy review. Many children spend even more time on the computer viewing content from the internet, and the amount of time spent online continues to increase with the use of smartphones, which essentially serve as mini-computers. What are the implications of this?
Cognitive theories focus on how our mental processes or cognitions change over time. We will examine the ideas of two cognitive theorists: Jean Piaget and Lev Vygotsky.
Jean Piaget (1896-1980) is one of the most influential cognitive theorists in development. He was inspired to explore children’s ability to think and reason by watching his own children’s development, and he was one of the first to recognize and map out how children’s intelligence differs from that of adults. He became interested in this area when he was asked to test the IQ of children and began to notice a pattern in their wrong answers. He believed that children’s intellectual skills change over time and that maturation, rather than training, brings about that change. Children of differing ages interpret the world differently.
Piaget believed that we continuously try to maintain cognitive equilibrium, a balance or cohesiveness between what we see and what we know. Children have much more of a challenge in maintaining this balance because they are continually confronted with new situations, new words, new objects, and so on. When faced with something new, a child may either fit it into an existing framework (schema) by matching it with something known (assimilation), such as calling all four-legged animals “doggies” because he or she knows the word doggie, or expand the framework of knowledge to fit the new situation (accommodation), such as learning a new word to more accurately name the animal. This is the underlying dynamic of our cognition. Even as adults we continue to try to “make sense” of new situations by determining whether they fit into our old way of thinking or whether we need to modify our thoughts.
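The “doggie” example can be sketched as a toy program. The dictionary of schemas and the exact feature matching are deliberate oversimplifications of a child’s framework of knowledge, used only to make the two processes concrete:

```python
# Toy model of assimilation vs. accommodation: new experiences either
# fit an existing schema or force the framework itself to expand.

def encounter(schemas, animal, features):
    for name, known in schemas.items():
        if known == features:
            # Assimilation: the new animal fits something already known.
            return f"assimilation: '{animal}' fits the '{name}' schema"
    # Accommodation: nothing fits, so the framework gains a new schema.
    schemas[animal] = features
    return f"accommodation: new schema created for '{animal}'"

child = {"doggie": {"legs": 4, "fur": True, "barks": True}}

# A furry, four-legged, barking animal fits the existing "doggie" schema.
print(encounter(child, "puppy", {"legs": 4, "fur": True, "barks": True}))
# A horse does not fit, so the child's framework must accommodate it.
print(encounter(child, "horse", {"legs": 4, "fur": True, "barks": False}))
```

After accommodation the framework itself has grown, so the next horse-like animal can simply be assimilated.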
Figure caption: An individual can be in a state of disequilibrium when new information does not match the knowledge base. To restore equilibrium, the new information is either assimilated (it fits within the existing knowledge base) or accommodated (the knowledge base is changed). Image courtesy of Pixabay.
Piaget outlined four major stages of cognitive development. Let me briefly mention them here; we will discuss them in detail throughout the course. For about the first two years of life, the child experiences the world primarily through the senses and motor skills. Piaget referred to this type of intelligence as sensorimotor intelligence. During the preschool years, the child begins to master the use of symbols or words and can think of the world symbolically but not yet logically. This is the preoperational stage of development. The concrete operational stage, in middle childhood, is marked by an ability to use logic in understanding the physical world. In the final stage, the formal operational stage, the adolescent learns to think abstractly and to use logic in both concrete and abstract ways.
Table. Piaget’s Four Stages of Cognitive Development
- Sensorimotor (infancy) – learning through the senses and actions/motor skills (touch, look, put in mouth, grasp, etc.)
- Preoperational (preschool years) – using symbols (language, imaginative play); lacks logical reasoning
- Concrete operational (middle childhood) – logical thought about concrete events; understanding categories, hierarchies, and arithmetic operations
- Formal operational (adolescence) – abstract reasoning
Piaget has been criticized for overemphasizing the role that physical maturation plays in cognitive development and for underestimating the role that culture and interaction (or experience) play. Looking across cultures reveals considerable variation in what children can do at various ages, and Piaget may have underestimated what children are capable of given the right circumstances. For example, we will learn more about current research examining infant cognition and babies’ understanding of the world in Chapter 3.
Lev Vygotsky (1896-1934) was a Russian psychologist who wrote in the early 1900s; his work was discovered in the United States in the 1960s and became more widely known in the 1980s. Vygotsky differed from Piaget in that he believed a person has not only a set of developed abilities but also a set of potential abilities that can be realized with the proper guidance from others. His sociocultural theory emphasizes the importance of culture and interaction in the development of cognitive abilities. He believed that through guided participation, also known as scaffolding, with a teacher or capable peer, a child can learn cognitive skills within a certain range known as the zone of proximal development. Have you ever taught a child to perform a task? Maybe it was brushing their teeth or preparing food. Chances are you spoke to them and described what you were doing while you demonstrated the skill and let them work along with you through the process. You assisted them when they seemed to need it, but once they knew what to do, you stood back and let them go. This is scaffolding, and it can be seen demonstrated throughout the world. A learner who needs more guidance or scaffolding has a larger zone of proximal development (more room for growth before performing the task or skill independently), while someone who can already do the task with little help has a smaller zone of proximal development (needs less scaffolding). This approach has also been adopted by educators: rather than assessing students only on what they can do on their own, students should be understood in terms of what they are capable of doing with the proper guidance. You can see why Vygotsky is very popular with modern-day educators. We will discuss Vygotsky in greater depth in upcoming lessons.
Figure caption: Vygotsky’s Zone of Proximal Development. The individual learning that needs more guidance or scaffolding has a larger zone of proximal development (more room for growth in learning the task or skill independently). Someone who already can do the task or skill with little help is said to have a smaller zone of proximal development (needs less scaffolding).
Information processing is not the work of a single theorist but is based on the ideas and research of several cognitive scientists who study how individuals perceive, analyze, manipulate, use, and remember information. The information processing model theorizes that information made available by the environment is handled by a series of processing systems (e.g., attention, perception, and aspects of memory). This approach assumes that humans gradually improve in their processing skills; that is, development is continuous rather than stage-like. The information processing model is analogous to computer functioning: we combine incoming information with stored information, much as you edit and resave files on a computer. However, humans have limits on how well we process information, and we may not store or retrieve information as efficiently as a computer. Additionally, humans are not purely serial processors and are more complex than computers (consider, for example, our emotional and motivational factors).
The image below shows a version of the information processing model. The processors shown (sensory memory, working (short-term) memory, and long-term memory) are used as we attend to, store, and retrieve information. We first notice stimuli through our senses and then begin to process the information in working (short-term) memory, which has a limited capacity. Once a memory has been stored, it resides in long-term memory.
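The three stores can be sketched as a small class. The class, its method names, and the capacity of three items used in the demo are illustrative assumptions; the classic rough estimate for working-memory capacity is about seven items:

```python
# Minimal sketch of the three-store model (Atkinson & Shiffrin, 1968):
# attended stimuli enter a capacity-limited working memory, and only
# rehearsed items make it into long-term memory.

class MemoryModel:
    def __init__(self, working_capacity=7):
        self.working = []        # limited-capacity short-term store
        self.long_term = set()   # effectively unlimited store
        self.capacity = working_capacity

    def attend(self, stimulus):
        """Noticed stimuli enter working memory; once capacity is
        exceeded, the oldest unrehearsed item is displaced and lost."""
        self.working.append(stimulus)
        if len(self.working) > self.capacity:
            self.working.pop(0)

    def rehearse(self, stimulus):
        """Rehearsal encodes an item from working into long-term memory."""
        if stimulus in self.working:
            self.long_term.add(stimulus)

    def retrieve(self, stimulus):
        return stimulus in self.long_term

m = MemoryModel(working_capacity=3)
for item in ["phone number", "name", "address", "birthday"]:
    m.attend(item)
print(m.working)                   # ['name', 'address', 'birthday']
m.rehearse("birthday")
print(m.retrieve("birthday"))      # True: encoded into long-term memory
print(m.retrieve("phone number"))  # False: displaced before rehearsal
```

The sketch also illustrates the text’s caveat that humans are not computers: here an unrehearsed item is simply lost, whereas a computer would retain every file it saved.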
Figure: This is an example of the information processing model, based on Atkinson and Shiffrin (1968). Image courtesy of Wikipedia.
Information processing theories see the more complex mental skills of adults as being built from the primitive abilities of children in a continuous developmental process. We are born with the ability to notice stimuli and to store and retrieve information. Brain maturation enables advances in our information processing system, and, at the same time, interactions with the environment aid our development of more effective strategies for processing information.
Urie Bronfenbrenner (1917-2005) provides a model of human development that addresses its many influences. Bronfenbrenner recognized that larger social forces influence human interaction and that an understanding of those forces is essential for understanding an individual.
In sum, a child’s experiences are shaped by larger forces such as family, school, religion, and culture, all occurring in a historical context, or chronosystem. Bronfenbrenner’s model helps us combine each of the other theories described above into a perspective that brings it all together. Despite its comprehensiveness, Bronfenbrenner’s ecological systems theory is not easy to use. Taking all the different influences into consideration makes it difficult to research and determine the impact of each variable (Dixon, 2003). Consequently, psychologists have not fully adopted this approach, although they recognize the importance of the ecology of the individual. The figure below is an expanded version of Bronfenbrenner’s model, including examples for each system.
Each psychological theory presented in this chapter expands our understanding of human development. Some of the theories focus on particular periods of development, while others describe how change occurs across the lifespan. The theories presented cover core aspects of psychology, including cognitive, behavioral, psychoanalytic, and social development. As we cover human development chronologically, you will notice that sections of each chapter are divided by different aspects of development, including biological and physical changes. Although these areas are discussed separately, they are interactive processes: human development is an interaction between biological and environmental factors.
Table. Key Terms
- Theory
- Discontinuous vs. continuous development
- Active vs. passive
- Nature and nurture
- Freud’s theory
- Id, ego, superego
- Erikson’s 8 stages
- Piaget’s 4 stages
- Assimilation vs. accommodation
- Vygotsky’s theory
- Scaffolded/guided participation
- Zone of proximal development
- Classical conditioning
- Unconditioned stimulus, conditioned stimulus, conditioned response
- Operant conditioning
- Positive and negative reinforcement
- Punishment
- Ecological Systems model
Psychology Through the Lifespan Copyright © 2020 by Alisa Beyer and Julie Lazzara is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.
Part of the book series: Springer International Handbooks of Education ((SIHE))
Developmental Psychology is the scientific study of mind and behavior from the perspective of change across the entire lifespan. In the present chapter, we provide a comprehensive and modern view on current topics particularly relevant when teaching Developmental Psychology. We start with the attempt to derive a contemporary definition of development and Developmental Psychology. Over historical time, perspectives on development changed. These different perspectives were regularly challenged, and we discuss some of the questions of scientific dispute such as the influence of nature and nurture on the development of an individual from a contemporary perspective. The perspectives often resulted in larger theoretical constructs. We will not describe individual theories comprehensively but rather focus on general issues of theoretical approaches and highlight one recent approach, the dynamic systems theories. Theories need to be supported by empirical evidence. Accordingly, we will briefly describe the major research designs used to measure developmental change. We will conclude the chapter with a focus on one topic particularly relevant when teaching Developmental Psychology, the development of communication, and discuss further topics that can potentially be included in a Developmental Psychology curriculum and describe some ideas on how to teach them. In all, we intend to provide a contemporary overview of the scientific study of developmental change.
Chapter Google Scholar
Plomin, R., & Spinath, F. M. (2004). Intelligence: Genetics, genes, and genomics. Journal of Personality and Social Psychology, 86 (1), 112–129. https://doi.org/10.1037/0022-3514.86.1.112
Przyborski, A., & Wohlrab-Sahr, M. (2013). Qualitative Sozialforschung: Ein Arbeitsbuch . Walter de Gruyter.
Reinert, G. (1976). Grundzüge einer Geschichte der Human-Entwicklungspsychologie . Univ., Fachbereich I, Psychologie.
Reynolds, C. W. (1987). Flocks, herds and schools: A distributed behavioral model. ACM SIGGRAPH Computer Graphics, 21 (4), 25–34. https://doi.org/10.1145/37402.37406
Rheingold, H. L., & Adams, J. L. (1980). The significance of speech to newborns. Developmental Psychology, 16 (5), 397–403. https://doi.org/10.1037/0012-1649.16.5.397
Rosenthal, M. (1982). Vocal dialogues in the neonatal period. Developmental Psychology, 18 (1), 17–21. https://doi.org/10.1037/0012-1649.18.1.17
Ross, H. S., & Lollis, S. P. (1987). Communication within infant social games. Developmental Psychology, 23 (2), 241–248. https://doi.org/10.1037/0012-1649.23.2.241
Schacter, D., Gilbert, D., Wegner, D., & Hood, B. M. (2011). Psychology: European edition . Macmillan International Higher Education.
Schaie, K. W. (2015). Cohort sequential designs (convergence analysis). In R. L. Cautin & S. O. Lilienfeld (Eds.), The encyclopedia of clinical psychology (pp. 1–6). American Cancer Society. https://doi.org/10.1002/9781118625392.wbecp098
Schneider, M., & Mustafić, M. (2015). Gute Hochschullehre: Eine evidenzbasierte Orientierungshilfe: Wie man Vorlesungen, Seminare und Projekte effektiv gestaltet . Springer-Verlag.
Schwarzer, G., & Walper, S. (2016). Entwicklungspsychologie. In Dorsch Lexikon der Psychologie . Verlag Hans Huber. https://m.portal.hogrefe.com/dorsch/gebiet/entwicklungspsychologie/ .
Shaffer, D. R., & Kipp, K. (2010). Developmental psychology: Childhood and adolescence (8th ed.). Wadsworth/Cengage Learning. http://thuvienso.vanlanguni.edu.vn/handle/Vanlang_TV/11689
Siegler, R. S. (2016). Continuity and change in the field of cognitive development and in the perspectives of one cognitive developmentalist. Child Development Perspectives, 10 (2), 128–133. https://srcd.onlinelibrary.wiley.com/doi/abs/10.1111/cdep.12173 . https://doi.org/10.1111/cdep.12173
Siegler, R. S., & Jenkins, E. A. (2014). How children discover new strategies . Psychology Press.
Siegler, R. S., & Svetina, M. (2002). A microgenetic/cross-sectional study of matrix completion: Comparing short-term and long-term change. Child Development, 73 (3), 793–809. https://doi.org/10.1111/1467-8624.00439
Smith, L. B., & Thelen, E. (2003). Development as a dynamic system. Trends in Cognitive Sciences, 7 (8), 343–348. https://doi.org/10.1016/S1364-6613(03)00156-6
Soderstrom, M. (2007). Beyond babytalk: Re-evaluating the nature and content of speech input to preverbal infants. Developmental Review, 27 (4), 501–532. https://doi.org/10.1016/j.dr.2007.06.002
Spencer, J. P., Thomas, S. C., & McClelland, J. L. (2009). Toward a unified theory of development: Connectionism and dynamic systems theory re-considered . Oxford University Press.
Striano, T., Chen, X., Cleveland, A., & Bradshaw, S. (2006). Joint attention social cues influence infant learning. European Journal of Developmental Psychology, 3 (3), 289–299. https://doi.org/10.1080/17405620600879779
Stroop, J. R. (1935). Studies of interference in serial verbal reactions. Journal of Experimental Psychology, 18 , 643–662.
Tarantino, N., Tully, E. C., Garcia, S. E., South, S., Iacono, W. G., & McGue, M. (2014). Genetic and environmental influences on affiliation with deviant peers during adolescence and early adulthood. Developmental Psychology, 50 (3), 663–673. https://doi.org/10.1037/a0034345
Thelen, E., & Smith, L. B. (1996). A dynamic systems approach to the development of cognition and action . MIT Press.
Thelen, E., & Smith, L. B. (2007). Dynamic systems theories. In Handbook of child psychology . American Cancer Society. https://doi.org/10.1002/9780470147658.chpsy0106
Tomasello, M. (1995). Joint attention as social cognition. In C. Moore & P. J. Dunham (Eds.), Joint attention: Its origins and role in development (pp. 103–130). Lawrence Erlbaum Associates.
Trautner, H. M. (2003). Allgemeine Entwicklungspsychologie . Kohlhammer Verlag.
Valenza, E., Simion, F., Cassia, V. M., & Umilta, C. (1996). Face preference at birth. Journal of Experimental Psychology-Human Perception and Performance, 22 (4), 892–903.
van Geert, P. L. C. (1994). Dynamic systems of development: Change between complexity and chaos (p. xii, 300). Harvester Wheatsheaf.
van Geert, P. L. C. (1998). A dynamic systems model of basic developmental mechanisms: Piaget, Vygotsky, and beyond. Psychological Review, 105 (4), 634–677. https://doi.org/10.1037/0033-295X.105.4.634-677
van Geert, P. L. C. (2017). Constructivist theories. In B. Hopkins, E. Geangu, & S. Linkenauger (Eds.), The Cambridge encyclopedia of child development (2nd ed., pp. 19–34). Cambridge University Press. https://doi.org/10.1017/9781316216491.005
Vygotsky, L. S. (1978). Mind and society: The development of higher mental processes . Harvard University Press.
Walton, G. E., Bower, N. J. A., & Bower, T. G. R. (1992). Recognition of familiar faces by newborns. Infant Behavior & Development, 15 (2), 269–265. https://doi.org/10.1016/0163-6383(92)80027-R
Werker, J. F., & Hensch, T. K. (2015). Critical periods in speech perception: New directions. Annual Review of Psychology, 66 (1), 173–196. https://doi.org/10.1146/annurev-psych-010814-015104
Authors and affiliations.
Department of Psychology, Developmental Psychology: Infancy and Childhood, University of Zurich, Zurich, Switzerland
Moritz M. Daum & Mirella Manfredi
Jacobs Center for Productive Youth Development, University of Zurich, Zurich, Switzerland
Moritz M. Daum
Correspondence to Moritz M. Daum.
Editors and affiliations.
Department of Educational Research, University of Salzburg, Salzburg, Austria
Joerg Zumbach
Department of Psychology, University of South Florida, Bonita Springs, FL, USA
Douglas A. Bernstein
School of Science - Faculty of Psychology, Psychology of Learning and Instruction, Technische Universitaet Dresden, Dresden, Sachsen, Germany
Susanne Narciss
Department of Human, Philosophical and Educational Sciences (DISUFF), University of Salerno, Fisciano, Italy
Giuseppina Marsico
© 2023 Springer Nature Switzerland AG
Cite this entry

Daum, M. M., & Manfredi, M. (2023). Developmental psychology. In J. Zumbach, D. A. Bernstein, S. Narciss, & G. Marsico (Eds.), International Handbook of Psychology Learning and Teaching. Springer International Handbooks of Education. Springer, Cham. https://doi.org/10.1007/978-3-030-28745-0_13

DOI: https://doi.org/10.1007/978-3-030-28745-0_13
Published: 17 December 2022
Publisher: Springer, Cham
Print ISBN: 978-3-030-28744-3
Online ISBN: 978-3-030-28745-0
Developmental Psychology ® publishes articles that significantly advance knowledge and theory about development across the life span. The journal focuses on seminal empirical contributions. The journal occasionally publishes exceptionally strong scholarly reviews and theoretical or methodological articles. Studies of any aspect of psychological development are appropriate, as are studies of the biological, social, and cultural factors that affect development.
The journal welcomes not only laboratory-based experimental studies but studies employing other rigorous methodologies, such as ethnographies, field research, and secondary analyses of large data sets. We especially seek submissions in new areas of inquiry and submissions that will address contradictory findings or controversies in the field as well as the generalizability of extant findings in new populations.
Although most articles in this journal address human development, studies of other species are appropriate if they have important implications for human development.
Submissions can consist of single manuscripts, proposed sections, or short reports.
Disclaimer: APA and the editors of Developmental Psychology ® assume no responsibility for statements and opinions advanced by the authors of its articles.
Developmental Psychology supports equity, diversity, and inclusion (EDI) in its practices. More information on these initiatives is available under EDI Efforts.
The APA Journals Program is committed to publishing transparent, rigorous research; improving reproducibility in science; and aiding research discovery. Open science practices vary per editor discretion. View the initiatives implemented by this journal.
Each issue of Developmental Psychology will highlight one manuscript with the designation as an “ Editor’s Choice ” paper. Selection is based on the recommendations of the associate editors, based on the paper’s potential impact to the field, the distinction of expanding the contributors to, or the focus of, our science, or its discussion of an important future direction for science.
Prior to submission, please carefully read and follow the submission guidelines detailed below. Manuscripts that do not conform to the submission guidelines may be returned without review.
Please submit manuscripts electronically through the Manuscript Submission Portal in Microsoft Word (.docx) or LaTeX (.tex) format as a zip file, accompanied by a Portable Document Format (.pdf) version of the manuscript.
Prepare manuscripts according to the Publication Manual of the American Psychological Association, 7th edition. Manuscripts may be copyedited for bias-free language (see Chapter 5 of the Publication Manual). APA Style and Grammar Guidelines for the 7th edition are available.
Editor: Koraly Pérez-Edgar, The Pennsylvania State University
General correspondence may be directed to the editor's office .
Manuscripts should be the appropriate length for the material being presented. Manuscripts can vary from a maximum of 4,500 words for a brief report to 10,500 words for a larger research report to 15,000 words for a report containing multiple studies or comprehensive longitudinal studies. Please note that the total length includes the cover page, abstract, main manuscript text, references section, tables, and figures. Editors will decide on the appropriate length and may return a manuscript for revision before reviews if they think the paper is too long. Please make manuscripts as brief as possible. We have a strong preference for shorter papers.
The APA Publication Manual (7th ed.) stipulates that “authorship encompasses…not only persons who do the writing but also those who have made substantial scientific contributions to a study.” In the spirit of transparency and openness, Developmental Psychology has adopted the Contributor Roles Taxonomy (CRediT) to describe each author's individual contributions to the work. CRediT offers authors the opportunity to share an accurate and detailed description of their diverse contributions to a manuscript.
Submitting authors will be asked to identify the contributions of all authors at initial submission according to this taxonomy. If the manuscript is accepted for publication, the CRediT designations will be published as an author contributions statement in the author note of the final article. All authors should have reviewed and agreed to their individual contribution(s) before submission.
CRediT includes 14 contributor roles.
Authors can claim credit for more than one contributor role, and the same role can be attributed to more than one author.
Authors submitting manuscripts to the journal Developmental Psychology are now required to provide 2–3 brief sentences regarding the relevance or public health significance of their study or review described in their manuscript. This description should be included within the manuscript on the abstract/keywords page.
The public significance statement (similar to the Relevance section of NIH grant submissions) summarizes the significance of the study's findings for a public audience in one to three sentences (approximately 30–70 words long). It should be written in language that is easily understood by both professionals and members of the lay public. Please refer to the Guidance for Translational Abstracts and Public Significance Statements page to help you write these statements. This statement supports efforts to increase dissemination and usage of research findings by larger and more diverse audiences.
When an accepted paper is published, these sentences will be boxed beneath the abstract for easy accessibility. All such descriptions will also be published as part of the table of contents, as well as on the journal's web page.
In addition to email addresses, please supply mailing addresses, phone numbers, and fax numbers. Most correspondence will be handled by email. Keep a copy of the manuscript to guard against loss.
This journal uses masked review for all submissions. Make every effort to see that the manuscript itself contains no clues to the authors' identity, including grant numbers, names of institutions providing IRB approval, self-citations, and links to online repositories for data, materials, code, or preregistrations (e.g., Create a View-only Link for a Project ). The submission letter should indicate the title of the manuscript, the authors' names and institutional affiliations, and the date the manuscript is submitted.
The first page of the manuscript should omit the authors' names and affiliations but should include the title of the manuscript and the date it is submitted. Author notes, acknowledgments, and footnotes containing information pertaining to the authors' identity or affiliations may be added on acceptance.
Description of sample.
Authors should be sure to report the procedures for sample selection and recruitment. Major demographic characteristics should be reported, such as sex, age, socioeconomic status, race/ethnicity, and, when possible and appropriate, disability status and sexual orientation. Even when such demographic characteristics are not analytic variables, they provide a more complete understanding of the sample and of the generalizability of the findings and are useful in future meta-analytic studies.
Authors should provide a justification that their sample size is appropriate beyond just citing convention in the literature. Justification could include a power analysis, a stopping rule, and/or some other type of valid justification.
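One way to provide such a justification is an a priori power analysis. The following is a minimal illustrative sketch (not a journal requirement) that estimates the required sample size per group for a two-sided, two-sample t-test using the normal approximation; the effect size, alpha, and power targets shown are assumptions chosen for the example:

```python
# Hypothetical power-analysis sketch: required n per group for a
# two-sided two-sample t-test, via the normal approximation.
# Effect size / alpha / power values below are illustrative only.
from statistics import NormalDist
from math import ceil

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group for a two-sided two-sample t-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(n_per_group(0.5))  # medium effect (Cohen's d = 0.5) → 63
```

The normal approximation slightly understates the exact t-based answer for small samples; dedicated tools (e.g., G*Power or statsmodels) refine this, but the quantities involved are the same.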
For all study results, measures of both practical and statistical significance should be reported. The latter can involve either a standard error or an appropriate confidence interval. Practical significance can be reported using an effect size, a standardized regression coefficient, a factor loading, or an odds ratio.
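As an illustration of reporting both kinds of significance, the sketch below computes Cohen's d (practical significance) and a normal-approximation 95% confidence interval for a mean difference (statistical significance). The two samples are invented example data, not from any study:

```python
# Hypothetical example: pairing an effect size (Cohen's d) with a
# 95% confidence interval for the mean difference. Data are illustrative.
from statistics import mean, stdev, NormalDist
from math import sqrt

group_a = [12, 15, 14, 10, 13, 16, 12, 14]
group_b = [9, 11, 10, 12, 8, 10, 11, 9]

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled

def diff_ci(a, b, level=0.95):
    """Normal-approximation confidence interval for the difference in means."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = NormalDist().inv_cdf(0.5 + level / 2)
    diff = mean(a) - mean(b)
    return diff - z * se, diff + z * se

print(f"d = {cohens_d(group_a, group_b):.2f}")
lo, hi = diff_ci(group_a, group_b)
print(f"95% CI for the mean difference: [{lo:.2f}, {hi:.2f}]")
```

Reporting the pair together, as the guideline asks, lets readers judge both the magnitude of an effect and the precision with which it was estimated.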
Manuscripts should include information regarding the establishment of interrater reliability when relevant, including the mechanisms used to establish reliability and the statistical verification of rater agreement and excluding the names of the trainers and the amount of personal contact with such individuals.
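A common statistic for verifying rater agreement is Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. The sketch below is a minimal illustration with invented ratings; it assumes two raters coding the same items and is not the only acceptable reliability index:

```python
# Hypothetical interrater-reliability sketch: Cohen's kappa for two
# raters coding the same items. Ratings below are illustrative only.
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # chance agreement from each rater's marginal category frequencies
    expected = sum(c1[k] * c2[k] for k in c1) / n ** 2
    return (observed - expected) / (1 - expected)  # assumes expected < 1

r1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
r2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
print(f"kappa = {cohens_kappa(r1, r2):.2f}")
```

Kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance; manuscripts would report the obtained value alongside the coding procedure, as the guideline specifies.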
Authors must adhere to the APA Style Journal Article Reporting Standards (JARS) for quantitative, qualitative, and mixed methods. The standards offer ways to improve transparency in reporting to ensure that readers have the information necessary to evaluate the quality of the research and to facilitate collaboration and replication.
APA endorses the Transparency and Openness Promotion (TOP) Guidelines developed by a community working group in conjunction with the Center for Open Science (Nosek et al., 2015). Empirical research, including meta-analyses, submitted to Developmental Psychology must at least meet the “requirement” level for all aspects of research planning and reporting. Authors should include a subsection in the method section titled “Transparency and Openness.” This subsection should detail the efforts the authors have made to comply with the TOP Guidelines.
For example:
Authors must state whether data, code, and study materials are posted to a trusted repository and, if so, where to access them, including their location and any limitations on use. If they cannot be made available, authors must state the legal or ethical reasons why they are not available. Trusted repositories adhere to policies that make data discoverable, accessible, usable, and preserved for the long term. Trusted repositories also assign unique and persistent identifiers. Recommended repositories include APA’s repository on the Open Science Framework (OSF), or authors can access a full list of other recommended repositories .
In a subsection titled “Transparency and Openness” at the end of the method section, specify whether and where the data and material will be available or note the legal or ethical reasons for not doing so. For submissions with quantitative or simulation analytic methods, state whether the study analysis code is posted to a trusted repository, and, if so, how to access it (or the legal or ethical reason why it is not available).
Preregistration of studies and specific hypotheses can be a useful tool for making strong theoretical claims. Likewise, preregistration of analysis plans can be useful for distinguishing confirmatory and exploratory analyses. Investigators are encouraged to preregister their studies and analysis plans prior to conducting the research via a publicly accessible registry system (e.g., OSF , ClinicalTrials.gov, or other trial registries in the WHO Registry Network). There are many available templates; for example, APA, the British Psychological Society, and the German Psychological Society partnered with the Leibniz Institute for Psychology and Center for Open Science to create Preregistration Standards for Quantitative Research in Psychology (Bosnjak et al., 2022).
Articles must state whether or not any work was preregistered and, if so, where to access the preregistration. If any aspect of the study is preregistered, include the registry link in the method section. Preregistrations must be available to reviewers; authors may submit a masked copy via stable link or supplemental material. Links in the method section should be replaced with an identifiable copy on acceptance.
Developmental Psychology publishes direct replications. Submissions should include “A Replication of XX Study” in the subtitle of the manuscript as well as in the abstract.
Developmental Psychology also publishes Registered Reports. Registered Reports require a two-step review process. The first step is the submission of the registration manuscript. This is a partial manuscript that includes hypotheses, rationale for the study, experimental design, and methods. The partial manuscript will be reviewed for rigor and methodological approach.
If the partial manuscript is accepted, this amounts to provisional acceptance of the full report regardless of the outcome of the study. The full manuscript will be reviewed for adherence to the preregistered design (deviations should be reported in the manuscript).
Prepare manuscripts according to the Publication Manual of the American Psychological Association using the 7th edition. Manuscripts may be copyedited for bias-free language (see Chapter 5 of the Publication Manual ).
Review APA's Journal Manuscript Preparation Guidelines before submitting your article.
Double-space all copy. Other formatting instructions, as well as instructions on preparing tables, figures, references, metrics, and abstracts, appear in the Manual . Additional guidance on APA Style is available on the APA Style website .
Below are additional instructions regarding the preparation of display equations, computer code, and tables.
We strongly encourage you to use MathType (third-party software) or Equation Editor 3.0 (built into pre-2007 versions of Word) to construct your equations, rather than the equation support that is built into Word 2007 and Word 2010. Equations composed with the built-in Word 2007/Word 2010 equation support are converted to low-resolution graphics when they enter the production process and must be rekeyed by the typesetter, which may introduce errors.
To construct your equations with MathType or Equation Editor 3.0:
If you have an equation that has already been produced using Microsoft Word 2007 or 2010 and you have access to the full version of MathType 6.5 or later, you can convert this equation to MathType by clicking on MathType Insert Equation. Copy the equation from Microsoft Word and paste it into the MathType box. Verify that your equation is correct, click File, and then click Update. Your equation has now been inserted into your Word file as a MathType Equation.
Use Equation Editor 3.0 or MathType only for equations or for formulas that cannot be produced as Word text using the Times or Symbol font.
Because altering computer code in any way (e.g., indents, line spacing, line breaks, page breaks) during the typesetting process could alter its meaning, we treat computer code differently from the rest of your article in our production process. To that end, we request separate files for computer code.
We request that runnable source code be included as supplemental material to the article. For more information, visit Supplementing Your Article With Online Material .
If you would like to include code in the text of your published manuscript, please submit a separate file with your code exactly as you want it to appear, using Courier New font with a type size of 8 points. We will make an image of each segment of code in your article that exceeds 40 characters in length. (Shorter snippets of code that appear in text will be typeset in Courier New and run in with the rest of the text.) If an appendix contains a mix of code and explanatory text, please submit a file that contains the entire appendix, with the code keyed in 8-point Courier New.
Use Word's insert table function when you create tables. Using spaces or tabs in your table will create problems when the table is typeset and may result in errors.
Authors who feel that their manuscript may benefit from additional academic writing or language editing support prior to submission are encouraged to seek out such services at their host institutions, engage with colleagues and subject matter experts, and/or consider several vendors that offer discounts to APA authors .
Please note that APA does not endorse or take responsibility for the service providers listed. It is strictly a referral service.
Use of such service is not mandatory for publication in an APA journal. Use of one or more of these services does not guarantee selection for peer review, manuscript acceptance, or preference for publication in any APA journal.
APA can place supplemental materials online, available via the published article in the PsycArticles ® database. Please see Supplementing Your Article With Online Material for more details.
The abstract must include major demographic characteristics of the sample (e.g., age, gender, race/ethnicity, socioeconomic status) so the reader can judge the degree to which the sample reflects the diversity, equity, and inclusion of participants. The abstract should not exceed 250 words and should be typed on a separate page. After the abstract, please supply up to six keywords or brief phrases.
List references in alphabetical order. Each listed reference should be cited in text, and each text citation should be listed in the references section.
Examples of basic reference formats:
McCauley, S. M., & Christiansen, M. H. (2019). Language learning as language use: A cross-linguistic model of child language development. Psychological Review, 126(1), 1–51. https://doi.org/10.1037/rev0000126
Brown, L. S. (2018). Feminist therapy (2nd ed.). American Psychological Association. https://doi.org/10.1037/0000092-000
Balsam, K. F., Martell, C. R., Jones, K. P., & Safren, S. A. (2019). Affirmative cognitive behavior therapy with sexual and gender minority people. In G. Y. Iwamasa & P. A. Hays (Eds.), Culturally responsive cognitive behavior therapy: Practice and supervision (2nd ed., pp. 287–314). American Psychological Association. https://doi.org/10.1037/0000119-012
Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1–48. https://www.jstatsoft.org/v36/i03/
Wickham, H., et al. (2019). Welcome to the tidyverse. Journal of Open Source Software, 4(43), 1686. https://doi.org/10.21105/joss.01686
All data, program code, and other methods must be cited in the text and listed in the references section.
Alegria, M., Jackson, J. S., Kessler, R. C., & Takeuchi, D. (2016). Collaborative Psychiatric Epidemiology Surveys (CPES), 2001–2003 [Data set]. Inter-university Consortium for Political and Social Research. https://doi.org/10.3886/ICPSR20240.v8
Preferred formats for graphics files are TIFF and JPG, and preferred format for vector-based files is EPS. Graphics downloaded or saved from web pages are not acceptable for publication. Multipanel figures (i.e., figures with parts labeled a, b, c, d, etc.) should be assembled into one file. When possible, please place symbol legends below the figure instead of to the side.
APA offers authors the option to publish their figures online in color without the costs associated with print publication of color figures.
The same caption will appear on both the online (color) and print (black and white) versions. To ensure that the figure can be understood in both formats, authors should add alternative wording (e.g., “the red (dark gray) bars represent”) as needed.
For authors who prefer their figures to be published in color both in print and online, original color figures can be printed in color at the editor's and publisher's discretion, provided the author agrees to pay the associated fees.
Authors of accepted papers must obtain and provide to the editor on final acceptance all necessary permissions to reproduce in print and electronic form any copyrighted work, including test materials (or portions thereof), photographs, and other graphic images (including those used as stimuli in experiments).
On advice of counsel, APA may decline to publish any image whose copyright status is unknown.
For full details on publication policies, including use of Artificial Intelligence tools, please see APA Publishing Policies .
APA policy prohibits an author from submitting the same manuscript for concurrent consideration by two or more publications.
See also APA Journals ® Internet Posting Guidelines .
APA requires authors to reveal any possible conflict of interest in the conduct and reporting of research (e.g., financial interests in a test or procedure, funding by pharmaceutical companies for drug research).
In light of changing patterns of scientific knowledge dissemination, APA requires authors to provide information on prior dissemination of the data and narrative interpretations of the data/research appearing in the manuscript (e.g., if some or all were presented at a conference or meeting, posted on a listserv, shared on a website, including academic social networks like ResearchGate, etc.). This information (2–4 sentences) must be provided as part of the author note.
It is a violation of APA Ethical Principles to publish "as original data, data that have been previously published" (Standard 8.13).
In addition, APA Ethical Principles specify that "after research results are published, psychologists do not withhold the data on which their conclusions are based from other competent professionals who seek to verify the substantive claims through reanalysis and who intend to use such data only for that purpose, provided that the confidentiality of the participants can be protected and unless legal rights concerning proprietary data preclude their release" (Standard 8.14).
APA expects authors to adhere to these standards. Specifically, APA expects authors to have their data available throughout the editorial review process and for at least 5 years after the date of publication.
Authors are required to state in writing that they have complied with APA ethical standards in the treatment of their sample, human or animal, or to describe the details of treatment.
The APA Ethics Office provides the full Ethical Principles of Psychologists and Code of Conduct electronically on its website in HTML, PDF, and Word format. You may also request a copy by emailing or calling the APA Ethics Office (202-336-5930). You may also read "Ethical Principles," December 1992, American Psychologist , Vol. 47, pp. 1597–1611.
See APA’s Publishing Policies page for more information on publication policies, including information on author contributorship and responsibilities of authors, author name changes after publication, the use of generative artificial intelligence, funder information and conflict-of-interest disclosures, duplicate publication, data publication and reuse, and preprints.
Visit the Journals Publishing Resource Center for more resources for writing, reviewing, and editing articles for publishing in APA journals.
Koraly Pérez-Edgar, PhD The Pennsylvania State University, United States
Irma Arteaga, PhD University of Missouri, United States
Sheretta T. Butler-Barnes, PhD Washington University in St. Louis, United States
Christopher Beam, PhD University of Southern California, United States
Peter Bos, PhD University of Leiden, The Netherlands
Natalie Brito, PhD New York University, United States
Lucas Butler, PhD University of Maryland, United States
Gustavo Carlo, PhD University of California, Irvine, United States
Elisabeth Conradt, PhD University of Utah, United States
Timothy Curby, PhD George Mason University, United States
Judith Danovitch, PhD University of Louisville, United States
John Franchak, PhD University of California, Riverside, United States
Emily Fyfe, PhD Indiana University, United States
Melinda Gonzales Backen, PhD Florida State University, United States
Wendy Gordon, PhD Auburn University, United States
Noa Gueron-Sela, PhD Ben-Gurion University, Israel
Elizabeth Gunderson, PhD Indiana University, United States
Amanda Guyer, PhD University of California, Davis, United States
Larisa Solomon, PhD Columbia University, United States
Lana Karasik, PhD City University of New York, United States
Melissa Kibbe, PhD Boston University, United States
Elizabeth Kiel, PhD Miami University of Ohio, United States
Su Yeong Kim, PhD University of Texas, Austin, United States
Diana Leyva, PhD University of Pittsburgh, United States
Jennifer McDermott, PhD University of Massachusetts, Amherst, United States
Kristine Marceau, PhD Purdue University, United States
Julie Markant, PhD Tulane University, United States
Kalina Michalska, PhD University of California, Riverside, United States
Francisco Palermo, PhD University of Missouri, United States
Carlomagno Panlilio, PhD The Pennsylvania State University, United States
Mikko Peltola, PhD Tampere University, Finland
Gavin Price, PhD Exeter University, United Kingdom
Joanna Williams, PhD Rutgers University, United States
Qing Zhou, PhD University of California, Berkeley, United States
Melissa Barnett, PhD University of Arizona, United States
Martha Ann Bell, PhD Virginia Tech, United States
Deon Benton, PhD Vanderbilt University, United States
Tashauna Blankenship, PhD University of Massachusetts, Boston, United States
David Bridgett, PhD Northern Illinois University, United States
Rebecca Brooker, PhD Texas A&M University, United States
Samantha Brown, PhD Colorado State University, United States
Claire Cameron, PhD University at Buffalo, United States
Carlos Cardenas-Iniguez, PhD University of Southern California, United States
Rona Carter, PhD, LLP, RYT University of Michigan, United States
Stephen Chen, PhD Wellesley College, United States
Elizabeth Davis, PhD University of California, Riverside, United States
Leah Doane, PhD Arizona State University, United States
Jessica Dollar, PhD University of North Carolina, Greensboro, United States
Robert Duncan, PhD Purdue University, United States
Ari Eason, PhD University of California, Berkeley, United States
Katie Ehrlich, PhD University of Georgia, United States
Paola Escudero, PhD Western Sydney University, Australia
Caitlin Fausey, PhD University of Oregon, United States
Gregory M. Fosco, PhD The Pennsylvania State University, United States
Nicole Gardner-Neblett, PhD University of Michigan, United States
Erica Glasper, PhD Ohio State University, United States
Selin Gulgoz, PhD Fordham University, United States
Ernest Hodges, PhD St. John’s University, United States
Adam Hoffman, PhD Cornell University, United States
Stefanie Höhl, PhD University of Vienna, Austria
Caroline Hornburg, PhD Virginia Tech, United States
Yang Hou, PhD University of Kentucky, United States
Marina Kalashnikova, PhD Basque Center on Cognition, Brain, and Language, Spain
Heather Kirkorian, PhD University of Wisconsin, Madison, United States
Olga Kornienko, PhD George Mason University, United States
Deborah Laible, PhD Lehigh University, United States
Jonathan Lane, PhD Vanderbilt University, United States
Tessa Lansu, PhD Radboud University, Netherlands
Kathryn Leech, PhD University of North Carolina, United States
Ryan Lei, PhD Haverford College, United States
Jeffrey Liew, PhD Texas A&M University, United States
Betty Lin, PhD University at Albany, United States
Eric Lindsey, PhD Penn State Berks, United States
Jessica Lougheed, PhD University of British Columbia, Okanagan, Canada
Alexandra Main, PhD University of California, Merced, United States
Henrike Moll, PhD University of Southern California, United States
Santiago Morales, PhD University of Southern California, United States
Dianna Murray-Close, PhD University of Vermont, United States
Shaylene Nancekivell, PhD University of North Carolina, Greensboro, United States
Justin Parent, PhD Brown University, United States
Livio Provenzi, PhD IRCCS Mondino Foundation, Italy
Laura Quiñones-Camacho, PhD University of Texas, Austin, United States
Rachel Romeo, PhD, CCC-SLP University of Maryland, United States
Samuel Ronfard, EdD University of Toronto, Canada
Kathleen Rudasill, PhD Virginia Commonwealth University, United States
Adena Schachner, PhD University of California, San Diego, United States
Yishan Shen, PhD Texas State University, United States
Cara Streit, PhD University of New Mexico, United States
Cin Cin Tan, PhD University of Toledo, United States
Rachel Thibodeau-Nielson, PhD University of Missouri, United States
Sho Tsuji, PhD University of Tokyo, Japan
Yuuko Uchikoshi, EdD University of California, Davis, United States
Carlos Valiente, PhD Arizona State University, United States
Nicholas Wagner, PhD Boston University, United States
Jinjing Wang, PhD Rutgers University, United States
Jun Wang, PhD Texas A&M University, United States
Christina Weiland, EdD University of Michigan, United States
Eric Wilkey, PhD Louisiana State University, United States
Emily Densmore American Psychological Association
Special issue of the APA journal Developmental Psychology, Vol. 56, No. 3, March 2020. Articles discuss the impact of emotion-related socialization behaviors on children’s emotion, self-regulation, and developmental outcomes.
Special issue of the APA journal Developmental Psychology, Vol. 55, No. 9, September 2019. The issue is intended to present and highlight examples of innovative recent approaches and thinking to a range of questions about emotional development and to inspire new directions for future research.
Special issue of the APA journal Developmental Psychology, Vol. 53, No. 11, November 2017. The articles examine identity in developmental stages ranging from early childhood to young adulthood, and represent samples from 5 different countries.
Special issue of the APA journal Developmental Psychology, Vol. 49, No. 3, March 2013. The articles pose important questions concerning how children learn from others, what the characteristic signatures of social learning might be, and how this learning changes over time.
APA endorses the Transparency and Openness Promotion (TOP) Guidelines, developed by a community working group in conjunction with the Center for Open Science (Nosek et al., 2015). The TOP Guidelines cover eight fundamental aspects of research planning and reporting that can be followed by journals and authors at three levels of compliance.
Empirical research, including meta-analyses, submitted to Developmental Psychology must, at a minimum, meet Level 2 (Requirement) for all aspects of research planning and reporting. Authors should include a subsection in their methods description titled “Transparency and Openness.” This subsection should detail the efforts the authors have made to comply with the TOP Guidelines.
The list below summarizes the minimal TOP requirements of the journal. Please refer to the TOP guidelines for details, and contact the editor (Koraly Pérez-Edgar, PhD) with any further questions. Authors must share data, materials, and code via trusted repositories (e.g., APA’s repository on the Open Science Framework (OSF)). Trusted repositories adhere to policies that make data discoverable, accessible, usable, and preserved for the long term. Trusted repositories also assign unique and persistent identifiers.
We encourage investigators to preregister their studies and to share protocols and analysis plans prior to conducting their research. Clinical trials are studies that prospectively evaluate the effects of interventions on health outcomes, including psychological health. Clinical trials must be registered before enrolling participants on ClinicalTrials.gov or another primary register of the WHO International Clinical Trials Registry Platform (ICTRP). There are many available preregistration forms (e.g., the APA Preregistration for Quantitative Research in Psychology template, ClinicalTrials.gov, or other preregistration templates available via OSF). Completed preregistration forms should be posted on a publicly accessible registry system (e.g., OSF, ClinicalTrials.gov, or other trial registries in the WHO Registry Network).
The following list presents the eight fundamental aspects of research planning and reporting, the TOP level required by Developmental Psychology , and a brief description of the journal's policy.
Explore open science at APA .
Definitions and further details on inclusive study designs are available on the Journals EDI homepage .
More information on this journal’s reporting standards is listed under the submission guidelines tab .
Editorial fellowships.
Editorial fellowships for this journal will begin in 2023.
ORCID reviewer recognition.
Open Research and Contributor ID (ORCID) Reviewer Recognition provides a visible and verifiable way for journals to publicly credit reviewers without compromising the confidentiality of the peer-review process. This journal has implemented the ORCID Reviewer Recognition feature in Editorial Manager, meaning that reviewers can be recognized for their contributions to the peer-review process.
This journal offers masked peer review (where neither the authors’ nor the reviewers’ identities are known to the other party). Research has shown that masked peer review can help reduce implicit bias against traditionally female names or early-career scientists with smaller publication records (Budden et al., 2008; Darling, 2015).
Mirembe Mandy
1 Medical Research Council/Uganda Virus Research Institute Uganda Research Unit on AIDS, P.O. Box 49, Entebbe, Uganda
2 London School of Hygiene & Tropical Medicine, Keppel Street, London WC1E 7HT, UK
Low- and middle-income countries (LMICs), particularly those in sub-Saharan Africa, are experiencing rapid increases in the prevalence of non-communicable diseases (NCDs), which may not be fully explained by urbanization and associated traditional risk factors such as tobacco smoking, excessive alcohol consumption, poor diet or physical inactivity. In this commentary, we draw attention to the concept of Developmental Origins of Health and Disease (DOHaD), where environmental insults in early life can contribute to long-term risk of NCDs, the impact of which would be particularly important in LMICs where poverty, malnutrition, poor sanitation and infections are still prevalent.
The ‘Developmental Origins of Health and Disease (DOHaD)’ hypothesis, a rather more recent term for the concept initially proposed and called ‘Fetal Origins of Adult Disease’ in the 1990s, 1 postulates that exposure to certain environmental influences during critical periods of development and growth may have significant consequences on an individual’s short- and long-term health. 2 In this concept, the developing fetus, if exposed to a hostile uterine environment (caused by insults such as poor nutrition, infections, chemicals, metabolite or hormonal perturbations), 3 responds by developing adaptations (predictive adaptive responses—PARs), that not only foster its immediate viability, but also its survival if a similar environment is encountered later in life. 4 , 5 Some examples of short-term adaptations the fetus may make in these scenarios include down-regulation of endocrine or metabolic function, and/or specific organ function to slow down its growth rate to match the nutrient supply in the deprived uterine environment. 6 Long-term, subtle, irreversible changes in the development, structure and function of some tissues and vital organs (thymus, skeletal muscle, lungs, pancreas, kidney) may occur 7 as a result of disruptions in gene expression, cell differentiation and proliferation. However, if the individual then grows up in an extra-uterine environment the reverse of that experienced in utero, the ‘mismatch’ and poorer fit, therefore, would predispose them to a higher risk of certain non-communicable diseases (NCDs). 3 This risk is further exacerbated by excessive weight gain in postnatal/adult life, and by the aging process itself. 5 , 8
Much of the evidence underpinning DOHaD science has been obtained from animal models and observational human studies. It shows that the period from conception to early childhood, i.e. prenatal development to child growth—when organogenesis and rapid growth are occurring 9 —is critical to the immediate and future health of the infant. Studies that looked at undernutrition acting in this early life period (as a result of either maternal undernutrition or protein/calorie restriction) 4 showed that it not only retarded growth, 10 but also induced lifelong changes in hormonal concentrations and in the sensitivity of various tissues to these fetal and placental hormones—alterations that lead to abnormal organ development 11 , 12 and to diseases such as type-2 diabetes mellitus (T2DM), cardiovascular disease (CVD), kidney disease, obesity, hypertension, osteoporosis and metabolic syndrome in later life. 13 , 14 These irreversible changes to tissue structure and physiology made to survive the harsh environment encountered in utero have also been called ‘programming’, and they are dependent on the nature of the insult and the point at which exposure occurs, since tissues mature at different rates and time points. 11 , 15 This differential effect is well illustrated with undernutrition: exposure too soon after conception, for example, slows down fetal growth and leads to low birthweight of the infant. In contrast, if undernutrition occurs during mid-pregnancy, it may alter placental development and lead to fetal wasting during the remainder of the pregnancy—disturbances that can result in distinct metabolic phenotypes in adulthood. 13 Exposure to various other environmental factors, including maternal stress, infections, hypertension, obesity, teratogens, alcohol, drugs, cigarette smoke, overnutrition and paternal malnutrition, within these critical windows of growth and development, has also been associated with an increased risk of adult disease. 16 , 17
The mechanisms that mediate the programming effects of diverse environmental insults, or how this memory is stored are unclear, but a few have been postulated. These include the following.
One proposes glucocorticoids as a potential common mechanism through which various environmental factors exert their programming effects.
Other proposed mechanisms include genetics, 28 cellular aging 29 and intergenerational effects (exposures experienced by one generation that influence the health of the next, because they persist across generations or are genetic, or occur in utero and are self-perpetuating such as those that affect the HPA axis). 30
It is now known that NCDs can be caused by a number of factors, including genetic, environmental, physiological and behavioral patterns, and that they can occur in all age groups even though they are primarily diseases of old age. 31 NCDs are the current leading cause of death globally, accounting for about 40 million (70%) deaths annually. 31 They are generally attributed to four main risk factors: tobacco smoke, harmful alcohol consumption, poor diet and physical inactivity. 31 However, these factors do not seem to fully explain the pattern of NCDs emerging in developing countries with the fast pace of urbanization and the consequent epidemiological and nutrition transitions. 32 , 33 The epidemics in these regions seem to differ in some characteristics—with presentation occurring at a seemingly earlier age and disease progression at a faster rate than has been reported in developed countries; 34 , 35 more than four-fifths of the estimated 15 million premature worldwide deaths from NCDs occur in these low- and middle-income settings. 31 These differences raise the question of whether there are other drivers of chronic disease in these regions, and suggest that there may well be. 36 A proportion of NCDs in less developed, resource-constrained countries could probably be explained by other factors, 4 particularly exposure to adverse experiences during critical periods of growth (prenatal, childhood and adolescence). 32
Evidence from the numerous studies cited above would strongly support this notion, highlighting the need to further understand the role DOHaD may have in driving the NCD epidemic, and how it could contribute to the design of appropriate interventions to address this growing public health problem, 32 especially given that it is in these same regions that poverty, malnutrition, infections, low birth weight and poor sanitation are still prevalent. DOHaD science would be particularly useful for informing ways to improve nutrition, not just in early life, where fetal malnutrition, largely a consequence of poor maternal nutrition, has been shown to alter normal patterns of growth and development, 37 but throughout the life course. 38 Presently, about one-third of the world’s population, mostly in developing countries, suffers from some form of malnutrition; 815 million from calorie deficiency and nearly 2 billion from being overweight or obese. 39 This year alone, maternal and child undernutrition accounted for 10% of the global burden of disease 40 and obesity for about 2.8 million deaths. 41 Together with other, often related NCDs, these represent a significant burden of ill health and put enormous strain not just on individuals and their families, but also on the health systems, societies and economies of these nations. 42 Knowing this, it is imperative to step up the momentum to tackle these problems where effort is already being made and, even more importantly, to garner attention and begin to utilize what knowledge of DOHaD we have in regions where little is understood or being done. 34
In conclusion, we now know that it is possible to reduce the burden of NCDs and to have an impact on long-term health outcomes by using approaches that address the influence of environmental factors on growth and development. Because these, unlike genetics or aging (also important causes of NCDs), can be modified, they provide an opportunity, using the knowledge there is of developmental plasticity, to design interventions that could prevent many of these chronic diseases. 16 However, as with previous public health successes (HIV/AIDS being the classic example), progress in addressing NCDs will require political will and a place for NCDs on national agendas, especially those of countries in the developing regions.
As for the next steps and interventions, it is critical that these take a life-course as well as a multi-disciplinary approach to be able to affect multiple generations. 31 , 34 , 43 , 44 A few to consider include:
Author’s contributions: MN conceived the idea for the paper; MM drafted it; MN critically revised the paper for intellectual content and presentation.
Acknowledgements: The authors would like to thank Professor Robert Newton for his insightful input reviewing the paper.
Funding: None.
Competing interests: None declared.
Ethical approval: Not required.
Zhao, Xinzhi ∗
a International Peace Maternity & Child Health Hospital of China Affiliated to Shanghai Jiao Tong University
b Shanghai Key Laboratory of Embryo Original Diseases, Shanghai, China.
∗Corresponding author: Xinzhi Zhao, International Peace Maternity & Child Health Hospital of China Affiliated to Shanghai Jiao Tong University, Shanghai 200030, China. E-mail: [email protected] .
This is an open access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal. http://creativecommons.org/licenses/by-nc-nd/4.0
The developmental origins of health and disease theory states that environmental stresses during the early stages of life influence health and risk of developing non-communicable diseases throughout the lifespan of an individual. Developmental plasticity is thought to be a possible underlying mechanism. Here, I discuss a contrasting but complementary genetic hypothesis regarding the developmental origins of health and disease theory: crosstalk between the genomes of the parents and offspring is responsible for shaping and adapting responses to environmental stresses, regulating early growth and predisposition to non-communicable diseases. Genetic variants that are beneficial in terms of responses to early life stresses may have pleiotropic detrimental effects on health later in life, which may change the allele frequencies driven by selection on a population level. Genetic studies on the cohort of children born after assisted reproduction could provide insight regarding the genetic mechanisms of the developmental origins of health and disease theory.
Non-communicable diseases (NCDs), also known as chronic diseases, are characterized by slow progression and long duration. Prominent NCDs include cancer, cardiovascular diseases, diabetes mellitus, and major psychiatric diseases. Over the past two centuries, dramatic improvements in healthcare have increased the global life expectancy and changed the patterns of disease. NCDs account for a substantial proportion of the global disease burden and are currently responsible for over 63% of the world's deaths. [1]
The etiologies of NCDs are generally complex and consist of multiple genetic and environmental factors. The discovery that disorders of intrauterine development were associated with risk of NCDs in adult life was an important finding of the late 20th century. David Barker and colleagues reported an increased rate of death from ischemic heart disease in individuals with low birth weight in an English cohort, [2] and later this association was replicated in several different populations. [3,4] Low birth weight was also found to be associated with disorders characterized by insulin resistance, such as type 2 diabetes (T2D), hypertension, and dyslipidaemia. [5] Based on these observations, Barker formulated the hypothesis of the developmental origins of health and disease (DOHaD), which proposes that exposure to environmental factors during specific sensitive periods of development might predispose an organism to diseases in adult life. [6] An increasing number of studies have connected exposure to stresses during early life after conception, or even during preconceptional gametogenesis, with increased risks for chronic conditions.
Although the association has been well established by many epidemiological studies, the underlying mechanisms of the DOHaD theory are not fully understood. Some important hypotheses are based on developmental plasticity. [7] Many organisms have been found to express adaptive responses to their environments and to form specific characteristics. However, if the environment changes, these characteristics could result in maladaptation, and could increase the risk of contracting disease. Hales and Barker [8] put forward the thrifty phenotype hypothesis to explain the associations between poor fetal and infant growth and an increased risk of developing impaired glucose tolerance or metabolic syndrome in adult life. The hypothesis states that poor nutrition in fetal and early infant life induces specific programming of the development and function of pancreatic β-cells, as well as kidney, muscle, liver, vascular, hypothalamic-pituitary-adrenal axis, and sympathetic systems. The programming leads to a thrifty phenotype with reduced fetal growth and poor organ function. These individuals would be well adapted to living in a continuously poorly nourished state, but would be maladapted to overnutrition in adult life, putting them at risk of developing metabolic syndrome.
An alternative explanation to the DOHaD theory is genetic pleiotropy, the phenomenon in genetics whereby a DNA variant influences multiple traits. Given the limited number of genes in the human genome, and the enormous dimensionality of the phenome, it is reasonable to expect that many functional variants have pleiotropic effects. [9] Those genetic variants that affect development and influence adaptive responses to environmental stresses at an early age may also contribute to the genetic risk of NCD later on in life. The association between changes early in development and the onset of NCDs later in life may be a coupled phenomenon caused by a common mechanism.
In this review, I focus on the following potential genetic factors in the DOHaD theory: 1) the genetic variants linking early life development, mainly during intrauterine growth, and the risk of NCDs; 2) possible antagonistic pleiotropy and positive/negative selection for these genetic variants; and 3) the interaction between genetic and environmental factors. I also discuss how the growing population of individuals born as a result of assisted reproduction provides a unique opportunity to investigate the genetic mechanisms of the DOHaD theory.
For this review, the author searched the PubMed database for original and review articles published in English between January 1989 and June 2019 focusing on the genetic hypothesis of the DOHaD theory. The following search terms were used alone or in combination: “genetics in developmental origins of health and disease”, “genetic pleiotropy”, “genetics of intrauterine growth”, “genetics of birth weight”, “positive selection”, “assisted reproduction”. The bibliographies of pertinent articles were also examined to identify further relevant papers.
As early as 1962, the American geneticist James Neel [10] put forth the “thrifty genotype” hypothesis. He proposed that genetic variants that reduce glucose uptake and limit body growth facilitated survival during periods of famine, and therefore these thrifty alleles would be positively selected in preindustrialized societies, in which famine is frequent. When individuals who have the thrifty genotype encounter a modern industrialized environment with plentiful food/low energy expenditure, they are at risk of developing metabolic syndromes. This hypothesis was initially used to explain the increasing prevalence of T2D among many indigenous populations, and was modified in the following years in conjunction with new information regarding the complexity of metabolic disease. [11]
The thrifty genotype hypothesis was applied to the mechanism of DOHaD by Hattersley and Tooke. [12] They proposed the fetal insulin hypothesis, which states that genetic factors are important both as determinants of birth weight and in evaluating the risk of adult metabolic syndromes. In other words, they proposed that the “thrifty phenotype” is the result of a “thrifty genotype”. Insulin pathways are considered to be excellent candidates for a common link because insulin plays a key role in both metabolic syndrome and fetal growth.
The thrifty genotype hypothesis employed an evolutionary framework to explain the growing burden of NCDs in contemporary populations. Nevertheless, many critiques of the hypothesis [13] arose due to the general lack of genetic evidence until recently. However, in the past 15 years, high throughput genomic platforms have generated huge amounts of genetic data enabling numerous genome-wide association studies (GWAS). A number of genetic variants that increase NCDs risk in old age have been found to be associated with increases in survival, fertility or lifetime reproductive success (LRS).
Low birth weight has been identified as an important risk factor for many NCDs, and the association is primarily the result of growth retardation, rather than preterm birth. [14] Birth weight is a complex multifactorial trait, and genetic factors play an important role that is independent of the intra-uterine environment. [15] This trait is extremely polygenic, controlled by large numbers of loci of small effect. As a result, it is quite difficult to identify the genetic loci that influence birth weight. Fortunately, birth weight is an easily accessible demographic trait. Meta-analyses of GWAS data from multiple large-scale studies have yielded sufficient power to detect the genetic variants associated with birth weight, and many loci have been identified. A complex correlation between birth weight and adult disease risk has been demonstrated.
Using the GWAS data from a cohort of 5465 Caucasian children with recorded birth weights, Zhao et al [16] investigated the association between birth weight and previously reported T2D-associated variations at 20 loci. They found that the minor allele of a single nucleotide polymorphism (SNP), rs7756992, at the CDKAL1 locus was associated with lower birth weight, although the association did not reach genome-wide significance. [16] Later, this finding was replicated in a Danish cohort consisting of 4744 individuals. [17] The researchers also reported that birth weight was inversely associated with the T2D risk alleles of rs11708067 in ADCY5. Freathy et al [18] searched for common genetic variants associated with birth weight in 38,214 individuals of European descent. They identified that rs900400, located near CCNL1, and rs9883204, located in ADCY5, were robustly associated with birth weight. The SNP rs9883204 showed relatively strong linkage disequilibrium with rs11708067, which implied that it is involved in the regulation of glucose levels and thus T2D susceptibility. [19] The inverse genetic correlations between birth weight and T2D at the ADCY5 and CDKAL1 loci were confirmed in an expanded GWAS (with 69,308 individuals of European descent). [20] These results provided direct support for the fetal insulin hypothesis. [12]
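Association tests like those cited above boil down to regressing the phenotype on allele dosage, one SNP at a time. The sketch below illustrates the idea on synthetic data; the sample size, allele frequency, and 20 g-per-allele effect are illustrative assumptions for this sketch, not values from the studies discussed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: 10,000 individuals, risk-allele frequency 0.3,
# each allele copy lowering birth weight by ~20 g on a ~450 g SD background.
n, freq, beta_true = 10_000, 0.3, -20.0
dosage = rng.binomial(2, freq, size=n)           # 0/1/2 allele copies per person
birth_weight = 3400 + beta_true * dosage + rng.normal(0, 450, size=n)

# Single-SNP GWAS test: ordinary least squares of trait on allele dosage.
x = dosage - dosage.mean()
y = birth_weight - birth_weight.mean()
beta_hat = (x @ y) / (x @ x)                     # effect estimate, g per allele
resid = y - beta_hat * x
se = np.sqrt((resid @ resid) / (n - 2) / (x @ x))
z = beta_hat / se                                # z-statistic for association

print(f"estimated effect: {beta_hat:.1f} g per allele (z = {z:.1f})")
```

In a real GWAS this test is repeated across millions of variants, which is why genome-wide significance uses the stringent conventional threshold of p < 5 × 10⁻⁸.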
The birth weight loci are not all associated with NCDs. For example, rs900400, which is located near the CCNL1 locus, is not known to be associated with any other traits. [20] Moreover, some alleles associated with higher birth weight may confer risk of NCDs. rs1801253 in ADRB1 has been strongly associated with birth weight, as well as systolic and diastolic blood pressure. [19,21] However, the birth weight-lowering allele at rs1801253 is associated with lower blood pressure in later life. [20] Importantly, birth weight may be influenced by both fetal and maternal genotype, and through the cellular mechanisms of gametogenesis and fertilization, fetal genotype is correlated with maternal genotype ( r ≈ 0.5). In some cases, functional alleles may exert different effects on birth weight depending on whether they are carried by the mother or fetus. For instance, rare heterozygous mutations in the glucokinase gene of a fetus may result in a reduction of approximately 530 g in birth weight, while mothers who carry a glucokinase mutation may have offspring that weigh 600 g more than average due to maternal hyperglycemia. [22]
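The mother-child genotype correlation of r ≈ 0.5 noted above follows directly from Mendelian transmission: the child receives one of the mother's two alleles at random, plus an independent paternal allele. A quick simulation (the allele frequency and sample size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200_000, 0.4           # illustrative sample size and allele frequency

# Parental genotypes as allele counts (0, 1, or 2) under Hardy-Weinberg.
mother = rng.binomial(2, p, size=n)
father = rng.binomial(2, p, size=n)

# Mendelian transmission: each parent passes one allele at random, so a
# heterozygote (count == 1) transmits the variant allele with probability 1/2.
from_mother = rng.binomial(1, mother / 2)
from_father = rng.binomial(1, father / 2)
child = from_mother + from_father

r = np.corrcoef(mother, child)[0, 1]
print(f"mother-child genotype correlation: {r:.2f}")  # ≈ 0.50
```

The algebra matches: cov(mother, child) is half the genotype variance, so the correlation is 0.5 regardless of allele frequency under random mating.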
In a recent multi-ancestry GWAS meta-analysis of birth weight in 153,781 individuals, [23] the authors identified 7 previously reported and 53 novel loci that were associated with birth weight at a genome-wide significance level in the European-ancestry or trans-ancestry analyses. Further, three of the loci harbored multiple distinct association signals attaining genome-wide significance. The lead SNPs in these loci were almost all common variants, and each individually had only a modest effect on birth weight (10–26 g per allele). Using linkage disequilibrium score regression on the European-ancestry samples, the researchers found that birth weight was inversely correlated with genetic indicators of adverse metabolic and cardiovascular health, reflecting the impact of shared genetic variants that influence both sets of phenotypes. However, they also observed locus-specific heterogeneity in the genetic relationships between birth weight and health-related traits, including replication of the finding at rs1801253 in ADRB1 that the birth weight-lowering allele is associated with lower blood pressure. [20] Meanwhile, by analyzing the loci associated with both traits in mother-child pairs, the researchers found that both maternal and fetal genetic effects connect birth weight to later T2D risk, albeit in opposing directions.
These studies provide compelling evidence that fetal genotype plays an important role in early growth, as measured by birth weight. Further, the loci that affect birth weight may contribute to the adult risk of metabolic diseases, providing some support for the idea of the “thrifty genotype”. However, these findings can only explain a small part of the variance in birth weight, and the effect of any individual locus is very small. For example, the 62 distinct genome-wide significant signals identified in more than 150,000 individuals account for only approximately 2% of the variance in birth weight. [23] Moreover, the genetic correlation between birth weight and other adult health-related traits is complex, and is likely to be indirectly influenced by the maternal genome.
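A back-of-the-envelope check (my own illustration, not the authors' calculation) shows why ~60 loci with 10–26 g per-allele effects explain only ~2% of the variance. Under Hardy-Weinberg equilibrium, an additive SNP with allele frequency p and per-allele effect β contributes 2p(1−p)β² to trait variance; the frequency (0.3) and birth weight SD (~500 g) below are assumed round numbers:

```python
# Variance explained by many small-effect loci: each additive SNP with
# allele frequency p and per-allele effect beta (in grams) contributes
# 2*p*(1-p)*beta**2 to the trait variance under Hardy-Weinberg equilibrium.
def variance_explained(n_loci=62, beta=15.0, p=0.3, trait_sd=500.0):
    per_locus = 2 * p * (1 - p) * beta ** 2    # variance from one locus (g^2)
    return n_loci * per_locus / trait_sd ** 2  # fraction of total trait variance

print(f"{variance_explained():.1%}")  # 2.3% -- close to the reported ~2%
```

With these assumptions, even dozens of genome-wide significant loci leave almost all of the phenotypic variance unexplained, which is the "missing heritability" pattern typical of highly polygenic traits.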
Although NCDs have very clear negative impacts on health, they also have a large genetic component. The fitness cost to individuals carrying risk alleles for NCD susceptibility may be relatively small, because carriers will usually have transmitted the alleles to the next generation before the onset of age-related NCDs. A key point of the “thrifty genotype” hypothesis is that the risk alleles for NCD susceptibility may confer advantages during early life in a severe environment, and therefore contribute to a net fitness benefit. These risk alleles may even be positively selected. Natural selection is the principal underlying force molding life via traits associated with survival and reproduction. [24] Therefore, it is possible that thrifty genes contribute to sustaining fecundity, in addition to survival, during famine, as suggested by Prentice. [25]
Considering that the quantitative traits associated with early development and NCDs are extremely polygenic, positive selection of the loci may occur via polygenic adaptation, and the selection signals may go largely undetected by conventional methods. [26] An allele that is positively selected may rise in frequency so rapidly that recombination does not substantially break down its association with alleles at nearby loci on the ancestral chromosome, forming a long haplotype. Based on this theory, the integrated Haplotype Score method has been developed to estimate positive selection for genetic polymorphisms. [27] This method is well suited to detecting recent selection signals and is typically used to find candidate adaptive SNPs where the selected alleles have not yet reached fixation. Another haplotype-based method, which computes nSL scores (the number of segregating sites by length), is more robust than the integrated Haplotype Score and does not depend on a genetic map. [28] Most selected regions have been found to be limited to a specific population, [27] suggesting adaptation to local environments. Moreover, lifetime reproductive success (LRS), the number of children per parent per lifetime, is a prerequisite for responses to selection, and provides a measure of fitness that combines survival and reproduction.
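The long-haplotype signal described above can be quantified via extended haplotype homozygosity (EHH), the quantity that the integrated Haplotype Score integrates over distance. The sketch below (with invented toy haplotypes; real analyses use phased genome data) computes EHH as the probability that two random carriers of the core allele are identical at every marker between the core SNP and a test SNP:

```python
# Simplified EHH: among haplotypes carrying core_allele at core_idx, the
# fraction of pairs identical at every marker between the core and snp_idx.
# Rows = haplotypes (0/1 alleles), columns = ordered SNPs. Toy data only.
from itertools import combinations

def ehh(haplotypes, core_idx, core_allele, snp_idx):
    carriers = [h for h in haplotypes if h[core_idx] == core_allele]
    lo, hi = sorted((core_idx, snp_idx))
    pairs = list(combinations(carriers, 2))
    if not pairs:
        return 0.0
    same = sum(1 for a, b in pairs if a[lo:hi + 1] == b[lo:hi + 1])
    return same / len(pairs)

# Invented haplotypes: the derived allele (1) at the core (column 2) mostly
# sits on one long shared haplotype, so EHH decays slowly with distance.
haps = [
    (0, 1, 1, 0, 1),
    (0, 1, 1, 0, 1),
    (0, 1, 1, 0, 1),
    (1, 0, 1, 1, 0),
    (0, 0, 0, 1, 0),
    (1, 1, 0, 0, 1),
]
print(ehh(haps, core_idx=2, core_allele=1, snp_idx=2))  # 1.0 at the core itself
print(ehh(haps, core_idx=2, core_allele=1, snp_idx=4))  # 0.5: partial decay
```

A recently selected allele keeps EHH high far from the core (slow decay), whereas a neutral allele of the same frequency shows rapid decay; iHS contrasts the integrated decay curves of the two alleles.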
Although several early studies failed to detect selection signatures in metabolic genes, [29,30] compelling evidence was found in a Samoan population. Samoan people represent a unique founder population with a high prevalence of obesity. A recent GWAS identified a functional variant, rs373863828, with a large effect size (p.Arg457Gln, meta P = 1.4 × 10 −20 , 1.36–1.45 kg/m 2 per copy of the risk allele) in CREBRF that strongly influences body mass index in the Samoan population. [31] Interestingly, while the variant is common in Samoan people, it is extremely rare in other populations. The body mass index-raising allele of rs373863828 showed decreased energy use and increased fat storage in an adipocyte cell model, and was therefore considered to be a thrifty variant. Positive selection of this thrifty variant was observed according to both the integrated Haplotype Score and the nSL score.
The positive selection of risk alleles associated with NCDs is not limited to obesity or to specific populations. Evidence of positive selection was found in 40 of the 76 genes known to be associated with the risk of coronary artery disease in a GWAS of 12 human populations. [32] The coronary artery disease genes under positive selection were enriched for traits associated with reproduction and pregnancy outcomes, including twinning, reproductive timing, lactation capacity, pregnancy loss, intrauterine growth restriction (IUGR), and preeclampsia. Meanwhile, the authors found antagonistic relationships between coronary artery disease and reproductive success in women in the Framingham Heart Study.
Human reproductive behavior, including age at first birth and the number of children ever born (equivalent to LRS), is strongly related to fitness. A GWAS that combined genome data from more than 300,000 individuals identified 12 independent loci that were significantly associated with age at first birth and/or number of children ever born. [33] Genetic correlation estimates indicated that alleles linked to a lower age at first birth in both men and women were associated with a higher genetic risk of smoking and T2D, while the number of children ever born was negatively genetically correlated with years of education and age at first sexual intercourse. No gene sets, tissues, or biological functions were found to be significantly enriched among the loci associated with reproductive behavior. These results indicate that human reproductive behavior is influenced by a mixture of biological, psychological, and socio-environmental factors in contemporary populations.
Although limited in number, some reliable studies have identified specific genes that are associated with the risk of NCDs, are under positive selection, and may contribute to LRS. The signals of positive selection or of association with LRS are generally very weak for any single locus. Large databases that contain information on genomics, fertility, mortality, and health status from hundreds of thousands of individuals are required to identify these genes.
Recent studies have suggested a substantial role of genetics in the association between early development and adult NCDs. However, the effect sizes of the candidate loci are so small that they can only be identified in very large cohort studies, and some locus-specific heterogeneity has also been observed. As mentioned above, the quantitative traits related to early development and NCDs are extremely polygenic and involve complex gene-environment interactions. Moreover, evidence of substantial crosstalk between the genomes of parents and offspring has been observed from pre-implantation onward through development. Here, as an example of how crosstalk between the mother and fetus shapes the intrauterine growth environment and impacts early development, I describe the antagonistic responses between the mother and fetus during adaptation to defective deep placentation.
Human pregnancy is characterized by deep placentation, in which placental trophoblasts invade up to a depth of one-third of the thickness of the myometrium. This process is accompanied by transformation of the uterine spiral arteries from narrow, high-resistance vessels into dilated chambers that deliver blood at low velocity but high flow. Transformation of the spiral arteries increases uterine blood flow to perfuse the intervillous space of the placenta and support fetal growth. Defective deep placentation, which is defined as a significantly increased number of spiral arteries with absent or partial transformation, results in placental ischemia, and is associated with a spectrum of obstetric disorders including preeclampsia, IUGR, preterm labor, preterm premature rupture of membranes, late spontaneous abortion, and abruptio placentae. [34]
Clinical outcomes of defective deep placentation are determined by two major factors. The first is the degree to which physiological transformation of the spiral arteries is restricted. In normal pregnancy, approximately 90% of the myometrial spiral arteries are fully transformed. However, in severe early-onset preeclampsia, a pregnancy-induced hypertension syndrome associated with IUGR, only a few spiral arteries may be fully transformed. In late-onset preeclampsia (LOPE) without IUGR, IUGR without hypertension, and preterm labor, defective deep placentation may only partially affect the spiral arteries. [34] Trophoblast invasion plays a key role in the transformation of the spiral arteries. Importantly, the placenta is genetically identical to the fetus, and is therefore semi-allogeneic to the mother. Thus, trophoblast invasion challenges the maternal immune system. [35] Complete failure of immunoregulatory mechanisms could lead to spontaneous abortion, whereas partial failure could lead to a continued pregnancy with a small, insufficient placenta. Thus, there exists a continuum between abortion, preeclampsia, and other disorders associated with deep placentation. HLA-C expressed by trophoblasts can be recognized by decidual killer immunoglobulin-like receptors in a process involved in maternal-fetal tolerance. Combinations in which the mother carries the killer immunoglobulin-like receptor AA genotype and the fetus inherits a paternal HLA-C2 haplotype are the most susceptible to preeclampsia and recurrent miscarriage. [36]
A second factor determining the clinical outcome in cases of defective deep placentation is the maternal and fetal adaptive responses to the compromised placental blood supply. Unlike diseases in the non-pregnant state, obstetric disorders develop in a unique biological situation in which a mother and fetus with different genomes coexist. [37] A compromised placental blood supply restricts the transport of oxygen and nutrients, which can lead to ischemia–reperfusion injury and retarded fetal growth. Meanwhile, a series of adaptive responses may occur in placental trophoblast cells: hypoxia-inducible factors may increase as a response to cellular oxygen deprivation; antiangiogenic factors including soluble vascular endothelial growth factor receptor 1 (sVEGFR-1) and soluble endoglin may be up-regulated; endoplasmic reticulum stress induced by ischemia–reperfusion injury may suspend protein folding, leading to trophoblast apoptosis; and the production of reactive oxygen species may increase, inducing the release of proinflammatory cytokines and chemokines, as well as trophoblast debris. These placental responses release soluble components into the maternal circulation and may trigger a non-specific, systemic (vascular) inflammatory response leading to clinical symptoms in the mother. [38]
Among clinical measures of maternal health status, hypertension may be the most important because it can reflect an important fetal adaptive response – increased placental perfusion. In a meta-analysis examining the relationship between fetoplacental growth and the use of oral antihypertensive medication to treat mild-to-moderate hypertension during pregnancy in 45 randomized controlled trials, treatment-induced decreases in maternal blood pressure appeared to adversely affect fetal growth. Specifically, greater decreases in mean arterial pressure from study enrolment to delivery were associated with a higher risk of IUGR ( P = 0.006, 14 trials) and lower mean birthweight ( P = 0.049, 27 trials). [39] This suggests that fetal growth benefits from maternal hypertension. In a GWAS on preeclampsia, genetic variants in the fetal genome near the FLT1 gene were significantly associated with the syndrome in 4380 cases and 310,238 controls. Interestingly, the association was strongest in offspring of LOPE patients without IUGR. FLT1 encodes antiangiogenic sVEGFR-1 in trophoblasts. The risk C allele of the lead SNP rs4769613 ( P = 5.4 × 10 −11 ) was associated with high concentrations of sVEGFR-1 in maternal blood in control pregnancies ( P = 0.04). [40] Maternal plasma levels of sVEGFR-1 are higher in cases of severe preeclampsia compared with cases without severe features, and are also higher in cases of early-onset preeclampsia compared with cases of LOPE. [41] There are several possible explanations for the stronger genetic association in cases of LOPE without IUGR. First, while placental sVEGFR-1 is shed in normal pregnancy, levels are significantly increased in cases of preeclampsia. Thus, sVEGFR-1 may be a physiological regulator of maternal blood pressure that is released by the fetus (placenta) during normal pregnancy. In cases of partially defective deep placentation, placental ischemia tends to be mild. 
Fetuses that carry preeclampsia-risk alleles at the FLT1 locus tend to secrete more sVEGFR-1, which elevates maternal blood pressure and thereby increases placental perfusion and fetal growth. Therefore, these alleles may simultaneously increase the risk of preeclampsia and decrease the risk of IUGR.
Although maternal hypertension may benefit fetal growth, it is likely to be detrimental to the mother. In severe cases, a hypertensive crisis could lead to cerebrovascular accident and maternal death. Thus, there may be a maternal response to counter the fetal adaptation to placental ischemia. It has been suggested that a maternal inflammatory response may counteract these fetal adaptations. A recent study using unsupervised clustering of placental transcriptome profiles identified 3 clinically probable etiologies of preeclampsia. A subgroup of preeclampsia patients showed severe fetal growth restriction and evidence of maternal antifetal rejection, while the maternal parameters of disease severity, such as blood pressure and proteinuria levels, were less severe in these cases than in canonical patients. [42] One possible explanation is that the maternal condition is relieved by restricting fetal growth. This view is supported by the observation that maternal hypertension resolves after fetal death in some patients with preeclampsia. [43] Our recent study identified a number of consistently hypomethylated probes that were associated with early-onset preeclampsia in different populations. The methylation levels of the validated probes were associated with clinical severity, and the samples with intermediate changes in methylation showed antagonistic fetal/maternal outcomes. [44]
Successful reproduction is the common interest of the mother and fetus. However, conflict between the mother and her fetus arises when their interests diverge, resulting in antagonistic responses. Variations in these adaptive responses impact pregnancy outcomes. Meanwhile, a common phenotype of pregnancy outcome may represent the manifestation of different adaptive responses. For example, IUGR may be caused by severely defective deep placentation, insensitivity of the fetus to ischemia, or tolerance of the mother to factors released by the placenta. Thus, individuals born with IUGR may have different genotypes that influence fetal growth and future predisposition to NCDs. This complexity represents a major challenge in identifying the genetic correlation between early development and NCD risk.
Epigenetics refers to “the study of molecules and mechanisms that can perpetuate alternative gene activity states in the context of the same DNA sequence.” [45] Epigenetic modifications, including DNA methylation, histone modifications, and noncoding RNAs, modify DNA bases and chromatin. This enables the establishment and maintenance of chromatin states that regulate gene expression and that are transmitted across cell divisions.
For most differentiated somatic cells, epigenetic modifications buffer environmental variation and act as barriers that prevent changes in gene expression and cell identity. However, adaptive responses to extreme environmental conditions during specific sensitive periods in early development can induce epigenetic programming, which can lead to long-lasting changes in the epigenome and predispose an organism to disease in adult life. [46]
Epigenetic modifications are thought to be reversible and are governed by a series of writers (that deposit them), readers (that interpret them), and erasers (that remove them). Both genetic and environmental factors are involved in epigenetic variation. Variations in the DNA sequence can influence epigenetic modifications in two ways: 1) by changing the target sequences of modifications or 2) by altering the genes encoding the epigenetic writers, readers, or erasers. Rare genetic mutations in the implicated regions can also cause epigenetic disorders. For example, DNA methylation is one of the best-characterized epigenetic modifications. DNA methylation in the human genome predominantly occurs at the C5 position of cytosine in CpG dinucleotides. [47] Fragile X syndrome is a neurodevelopmental condition caused by the excessive expansion of a CGG repeat in the 5’ untranslated region of the FMR1 gene. [48] More than 200 repeats of the trinucleotide induce mRNA-mediated DNA methylation of the CGG repeat, resulting in gene silencing. [49] DNA methylation is catalyzed by the DNA methyltransferase family, which includes DNMT1, DNMT3A, and DNMT3B (the writers). Rare de novo heterozygous mutations in the DNMT3A gene cause Tatton-Brown-Rahman syndrome, an overgrowth disorder characterized by a distinctive facial phenotype and intellectual disability. [50] Further, somatic mutations of the same gene are frequent in cases of acute myeloid leukemia. [51] Interestingly, two other genes that encode epigenetic writers, EZH2 and NSD1 , are both associated with developmental growth disorders and hematological malignancies. [52] MECP2, which binds methylated CpGs, is a reader of DNA methylation that can both activate and repress transcription. [53] Mutations in the MECP2 gene are the main genetic cause of Rett syndrome, a progressive neurologic developmental disorder with X-linked dominant inheritance. [54]
The ten-eleven translocation (TET) enzymes, which are encoded by TET1 , TET2 , and TET3 , oxidize the C5 position of cytosines and are important erasers of DNA methylation. Somatic mutations in TET genes are frequent in several kinds of hematological cancer. [55] Conversely, abnormal epigenetic modifications can also induce genetic changes. Transposable elements are frequent targets of epigenetic silencing, and abnormal epigenetic modifications may activate them, thereby compromising genome integrity and inducing de novo mutations. For instance, human neuronal progenitor cells carrying MeCP2 mutations have been found to have heightened susceptibility to LINE-1 retrotransposition. [56]
Environmental factors can directly influence the cellular epigenome. Most epigenetic writers and erasers are enzymes whose activity is mediated by the availability of substrates, cofactors, and allosteric regulators. A number of metabolites derived from diverse metabolic pathways, including one-carbon metabolism, the tricarboxylic acid cycle, β-oxidation, glycolysis, and hexosamine biosynthesis, are used as substrates and cofactors in epigenetic modifications. [57] Therefore, environmental factors such as diet, the microbiome, temperature, malnutrition, and chemical exposure can affect cellular metabolism and change the activity of chromatin-modifying enzymes, leading to changes in the epigenome. For example, epigenetic changes have been found to persist for more than 60 years in individuals prenatally exposed to the Dutch Hunger Winter. [58]
Among epigenetic modifications, DNA methylation is more stable and more suitable for high-resolution assay platforms than histone modifications or non-coding RNA expression. High-throughput platforms based on bisulfite conversion are highly quantitative and reproducible, offering high sensitivity to detect small changes from limited amounts of DNA. [59] Epigenome-wide association studies have found many loci with small (<10%) changes in intermediate methylation levels that are associated with complex NCD phenotypes. [60] This paradigm is in stark contrast to the situation in imprinted genes or malignant cells, which show large differences in methylation level in genomic regulatory regions and clear changes in gene expression. Although some changes in DNA methylation may reflect a fetal genetic predisposition to disease, [61] others have been found to be reproducibly associated with specific environmental factors. [62] The presence of differentially methylated cytosines in NCDs reflects the combined effect of genetic and environmental factors. Therefore, the DNA methylome of peripheral, easy-to-access tissues available in early life, such as the placenta, umbilical cord, and fetal membranes, may be ideal for identifying biomarkers of both genetic risk and early developmental stress associated with NCDs.
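The EWAS paradigm above hinges on detecting shifts of only a few percentage points in intermediate methylation. A sketch with invented methylation beta values (no real dataset or platform is implied) shows how such a small group difference can nonetheless be statistically clear, using a plain Welch t statistic:

```python
# Invented beta values (0-1 methylation fractions) for one CpG site whose
# group means differ by ~5 percentage points -- the small, intermediate
# shifts typical of EWAS hits, in contrast to imprinting or malignancy.
import statistics

def t_statistic(group1, group2):
    """Welch two-sample t statistic."""
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    return (m1 - m2) / (v1 / len(group1) + v2 / len(group2)) ** 0.5

cases    = [0.52, 0.55, 0.58, 0.54, 0.56, 0.53, 0.57, 0.55]  # invented
controls = [0.48, 0.50, 0.51, 0.49, 0.52, 0.47, 0.50, 0.49]  # invented

diff = statistics.mean(cases) - statistics.mean(controls)
print(round(diff, 3))                          # ~0.055: a <10% shift
print(round(t_statistic(cases, controls), 1))  # large t despite the small shift
```

Real EWAS analyses additionally adjust for covariates and cell-type composition and correct for hundreds of thousands of tests, but the core contrast, small absolute differences resolved by quantitative assays, is the same.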
The use of assisted reproductive technologies (ART) has grown dramatically, contributing to millions of successful births worldwide and accounting for more than 4% of all newborns in some European countries. [63] Assisted reproduction is redefining human society and biology, as many infertile couples can now transmit their genes to subsequent generations. The long-term health of children born via ART is still unclear, because these children are too young for their incidence of age-related NCDs to be evaluated. However, ART children have a specific genetic background and tend to be exposed to particular environmental stressors. ART children may therefore be a promising cohort for studying the genetic basis of DOHaD theory.
ART procedures are generally used by couples who are infertile, which is defined as “failure to achieve clinical pregnancy after 12 months or more of regular unprotected sexual intercourse”. [64] The infertile population is known to have an increased prevalence of NCDs. For example, polycystic ovary syndrome (PCOS), which affects 5% to 20% of women of reproductive age worldwide, is the most common cause of anovulatory infertility. [65] Patients with PCOS are at risk of many other NCDs such as coronary heart disease, stroke, and obesity. [66] Genetic studies have indicated causal roles in PCOS etiology for elevated body mass index and insulin resistance. However, PCOS susceptibility is also associated with alleles that raise the menopausal age and with genes involved in DNA repair, suggesting a mechanism that retards ovarian ageing. [67] ART children are likely to inherit genes associated with susceptibility to infertility and other NCDs, or with quantitative traits associated with infertility. Meanwhile, the population of parents using ART tends to be older, which is associated with an increased number of de novo mutations in offspring. [68] Moreover, socioeconomic factors affect the accessibility of ART treatments: ART clinics remain absent, inaccessible, or unaffordable for many infertile couples around the world. [69] Healthy subfertile couples with higher socioeconomic status therefore have more opportunities to transmit their genes. Although the genetic background of ART children is a confounding factor in many epidemiological studies, it can be overcome in genetic studies using family-based genetic data. For example, live birth is a key phenotype for ART children. The classic transmission/disequilibrium test can be used to identify the genetic loci associated with live birth in nuclear families and to investigate changes in the gene pool between parents and offspring.
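The transmission/disequilibrium test mentioned above has a simple core: among heterozygous parents, count b transmissions of the candidate allele to affected offspring versus c transmissions of the alternative allele; under the null of no association, (b − c)²/(b + c) follows a chi-square distribution with 1 degree of freedom. A minimal sketch (the counts are invented for illustration):

```python
# Classic transmission/disequilibrium test (TDT): a McNemar-style statistic
# on allele transmissions from heterozygous parents to affected offspring
# (here, the "affected" phenotype would be live birth in an ART cohort).
def tdt_chi2(b, c):
    """b = transmissions of the candidate allele, c = of the alternative."""
    return (b - c) ** 2 / (b + c)

chi2 = tdt_chi2(b=140, c=100)
print(round(chi2, 2), chi2 > 3.84)  # 3.84 = chi-square critical value, alpha = 0.05
```

Because each comparison is within a family, the TDT is immune to the population-stratification confounding that complicates case-control comparisons of ART and naturally conceived children.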
Compared with naturally-conceived children, ART children are exposed to many environmental stressors associated with the assisted reproductive procedure. For example, a pivotal step in most in vitro fertilization protocols is controlled ovarian stimulation (COS), in which exogenous gonadotrophins are used to retrieve multiple oocytes. COS may be implicated in adverse perinatal outcomes because it can alter the embryonic genome and epigenome. [70] The oocyte yield after COS is highly variable, and in some cases more than twenty mature oocytes are acquired in one cycle; in a natural menstrual cycle, by contrast, only one or two oocytes from dominant follicles undergo ovulation. The number of oocytes retrieved after COS, which is influenced by genetic factors, [71] is positively associated with the live birth rate up to 20 oocytes. [72] However, the incidence of severe ovarian hyperstimulation syndrome, which is associated with adverse developmental outcomes in offspring, increases significantly with the number of oocytes, particularly if more than 18 oocytes are retrieved. [72] Children born to ovarian-hyperstimulated women display cardiovascular dysfunction and reduced intellectual ability. [73,74] Therefore, the genetic factors associated with oocyte number in COS have pleiotropic effects: although women who carry the genes associated with a higher number of retrieved oocytes have a greater chance of giving birth to a child after assisted reproduction, their children may have a greater risk of NCDs later in life. Besides COS, other steps in ART, including in vitro maturation of oocytes, intracytoplasmic sperm injection, in vitro culture of the embryo, cryopreservation and thawing of gametes and embryos, and preimplantation genetic testing, may also negatively impact the early development of ART children. [75]
Population-level changes in the genetic responses to these environmental factors would be very informative with respect to the mechanisms of development.
Although it is difficult to conduct detailed investigations of implantation failure events before the initiation of pregnancy in naturally-conceiving women, such medical records often exist for individuals in ART cohorts. Thus, it is possible to investigate the genetic factors influencing the success of embryo implantation in ART-treated women. A genetic association study found that a common functional p53 Pro allele was associated with increased rates of blastocyst implantation failure in in vitro fertilization patients. Certain alleles of SNPs in the LIF , Mdm2 , Mdm4 , and Hausp genes, each of which regulates p53 levels in cells, were also enriched in in vitro fertilization patients. [76] Despite these reproductive costs, the p53 Pro allele is associated with an increased lifespan, and previous positive selection for alleles in the p53 pathway has been identified in contemporary white and Asian populations. ART itself may also influence human evolution, as has been previously discussed. [77] There are several steps of natural and artificial selection during ART treatment, and the favored traits may differ between natural reproduction and ART. As a result, only approximately 5% of fresh oocytes produce a baby in ART cycles. [78] Identifying the genetic factors that confer benefits to reproductive success during ART, and their potential pleiotropic effects on NCDs, will be helpful in predicting the long-term health of ART children. Additionally, such information may provide insight into the genetic mechanisms underlying DOHaD theory.
The DOHaD theory was formulated based on the well-established association between adverse early environmental events and the increased risk of adult NCDs. In this article, I have reviewed the genetic hypothesis of the DOHaD theory, namely, that genetic variants associated with early development have pleiotropic effects that contribute to the risk of NCDs. This hypothesis is not incompatible with the emphasis on environmental factors in the DOHaD theory, as evidence suggests that the environment of early human development is shaped by crosstalk between the fetal and the maternal genome. Moreover, the interplay between DNA sequence variations and environmental stresses programs the epigenome, resulting in long-term effects on phenotypes. Interestingly, some risk alleles of NCDs are under positive selection, suggesting that they play beneficial roles in reproduction and early development.
To date, few genetic loci linking early development and adult NCDs have been identified. Both kinds of phenotypes are highly polygenic, and any associated genetic loci have very small effect sizes. Importantly, the genetic correlation between early development and adult NCDs may be driven directly by the genome of the child or indirectly by the genomes of the parents. New methods are needed to improve the power of screening for these pleiotropic genetic effects. Meanwhile, certain intermediate phenotypes during reproduction may help to identify subclasses within cohorts.
Author contributions.
XZ drafted and revised this manuscript and approved the final submission.
This work was supported by the National Key Research and Development Program of China (No. 2018YFC1005001), the National Natural Science Foundation of China (No. 81871180) and the Innovative Research Team of High-Level Local Universities in Shanghai, China; all to XZ.
The author declares no conflicts of interest.
assisted reproduction; DOHaD; genetic pleiotropy; non-communicable diseases; positive selection
Mark was a source for Matthew and Luke, both of whom also independently used a now lost sayings source called Q.
The Two-Source Hypothesis (2SH) has been the predominant source theory for the synoptic problem for almost a century and a half. Originally conceived in Germany by Ch. H. Weisse in 1838, the 2SH came to dominate German Protestant scholarship after the fall of the Tübingen school with H. J. Holtzmann's endorsement of a related variant in 1863. In the latter part of the 19th century, the Oxford School brought the 2SH to English scholarship, culminating in B. H. Streeter's 1924 treatment of the synoptic problem. Today, the 2SH commands the support of most biblical critics from all continents and denominations.
Any viable solution to the synoptic problem has to account, at a minimum, for the two main textual features of the synoptic gospels, called the triple tradition and the double tradition. The triple tradition refers to the subject matter jointly related by Matthew, Mark, and Luke. Generally, the triple tradition is characterized by substantial agreement in arrangement and wording among all three gospels, with frequent agreements between Mark and Matthew against Luke and between Mark and Luke against Matthew, but a near absence of agreements of Matthew and Luke against Mark. The double tradition, on the other hand, consists of the material that Matthew and Luke share outside of Mark, and it exhibits some of the most striking verbatim agreements in some passages and quite divergent versions in others.
The 2SH derives its name (and most of its plausibility) from its postulation of two distinct sources for the synoptic gospels: a narrative source (Mark) for the triple tradition and a sayings source (Q) for the double tradition. Sometimes the 2SH is more precisely called the two-document hypothesis, to emphasize that the two sources are distinct documents, or the Mark-Q hypothesis, to identify those two documentary sources.
After a few false starts, the modern argument for the 2SH has settled into a two-step analysis. First, an explanation for the triple tradition, Markan priority, is established; then arguments for the relative independence of Matthew and Luke are made, resulting in the hypothesis of a common source, Q. Among the best-argued cases for the 2SH in contemporary scholarship are Stein 1987 and Tuckett 1992.
H. J. Holtzmann, Die synoptischen Evangelien (Leipzig, 1863); R. H. Stein, The Synoptic Problem: An Introduction (Grand Rapids, Mich.: Baker Books, 1987); B. H. Streeter, The Four Gospels (London: Macmillan, 1924) [web]; C. M. Tuckett, "Synoptic Problem" in D. N. Freedman, ed., The Anchor Bible Dictionary (New York: Doubleday, 1992): 6:263-70; Ch. H. Weisse, Die evangelische Geschichte kritisch und philosophisch bearbeitet (Leipzig, 1838).
Under Markan priority, the triple tradition derives from a narrative source that resembles Mark and that both Matthew and Luke used. In the present form of the 2SH, that source is Mark 1:1-16:8. Variations of this source include the supposition of an early form of Mark called Ur-Markus or proto-Mark, a revised form of Mark called deutero-Mark, or both, but these possibilities are supported by only a handful of active scholars.
It is argued that it is easier to understand certain material (the infancy accounts, the Sermon on the Mount) being added to Mark by Matthew and Luke than to understand Mark's omitting it from Matthew and Luke. It is also argued that it is easier to view Matthew's and Luke's relative brevity in the accounts all three share as their compressing the text of Mark to make room for their own material than as Mark's abridging the content while expanding the wording of one or both of the others. Furthermore, critics have put forth reasons why the specific divergences of Matthew's and Luke's order of material from Mark's are more plausible than the reverse. (Stein 1987: 48-51; Tuckett 1992: 264-265)
At a general level, proponents of Markan priority find Mark's less literary diction and grammar, redundancy, difficulty of expression, Christology, and use of Aramaic more likely to reflect intentional improvements by Matthew and Luke than Mark's "dumbing down" of one or both of the others (Stein 1987: 52-67; Tuckett 1992: 265-267). On a more specific level, Markan priority also finds support in instances where Matthew and Luke seem to refer to explanatory material they omitted from Mark, in Matthew's adding his own theological emphases rather than Mark's removing them, and in the uneven distribution of Mark's stylistic features in Matthew (Stein 1987: 70-83).
The distinctive aspect of the 2SH is not Markan priority, which is shared by other theories (most notably, Farrer), but the supposition of a lost collection of Jesus's sayings that served as the source of the double tradition in Matthew and Luke. This source used to be called the Logia, based on a once-popular interpretation of Papias, but is now called Q, from the German word for "source," Quelle. Differences of opinion about the exact nature of Q are more common than about Markan priority. Though some view Q as a collection of oral and written sources employed by Matthew and Luke, most scholars see Q as a discrete document in its own right.
The existence of Q follows from the conclusion that Luke and Matthew are independent in the double tradition. On that conclusion, the literary connection in the double tradition must be explained by an indirect relationship, namely, through the use of a common source or sources.
Proponents of Q argue that neither Matthew nor Luke used the other, given how little each makes of the other's non-Markan material in the triple tradition and of the other's non-Markan, non-sayings material. (The shared non-Markan sayings material comes from Q, of course.) It is also argued that the different contexts and the presence of doublets for the double-tradition material in Matthew and Luke, in view of the great similarity in contexts for the triple tradition, indicate that neither knew the other's arrangement of the non-Markan material with respect to the Markan outline.
The Q hypothesis is used to explain why the form of the material sometimes appears more primitive in Matthew but other times more primitive in Luke in the face of very impressive exactness in wording in other parts of the double tradition (e.g. Matt 6:24 = Luke 16:13 for 27 of 28 Greek words and Matt 7:7-8 = Luke 11:9-10 for 24 of 24 Greek words).
Much recent work has gone into studying the theology, community, and compositional history of Q. For example, some scholars have concluded that Q was composed in stages (e.g., Kloppenborg 1987; but see also the partial critique in Tuckett 1996: 69-74). A thorough exposition of the state of the art on Q is Excavating Q (Kloppenborg 2000).
The so-called "minor agreements".
The minor agreements are probably the weakest point of the 2SH, although each of its central theses, pure Markan priority and the existence of Q, can be attacked separately. The minor agreements are those agreements between Matthew and Luke against Mark (or anti-Markan agreements, a term I prefer in order to avoid the value judgment inherent in "minor") that occur in the triple tradition. Some of these agreements are quite striking; for example, both Matt 26:68 and Luke 22:64, but not Mark 14:65, include the question "Who is it that struck you?" in the beating of Jesus.
The minor agreements pose a special dilemma for the 2SH, because they are suggestive of a literary connection between Matthew and Luke outside of either Mark or Q, calling into question the relative independence of Matthew and Luke.
For example, a few scholars explain the minor agreements by Luke's use of Matthew in addition to Q and Mark (the 3SH). The problem is that the modern argument for Q requires Matthew and Luke to be independent, so the 3SH raises more questions than it solves, namely, how to establish Q if Luke is dependent on Matthew. Other scholars keep Q while acknowledging the force of the minor agreements by attributing them to a proto-Mark, such as the Ur-Markus of the Markan Hypothesis (MkH), which Mark adapted independently of its use by Matthew and Luke. Still other scholars feel that the character of the minor agreements suggests that they are due to a revision of our Mark, called deutero-Mark. In this case, both Matthew and Luke are dependent on deutero-Mark, which did not survive the ages.
Therefore, the minor agreements, if taken seriously, force a choice between accepting pure Markan priority on one hand or the existence of Q on the other hand, but not both simultaneously as the 2SH requires.
The 2SH's response to the issue of the minor agreements is to weaken their significance by attributing various causes to them. B. H. Streeter devoted a chapter to this issue in his magnum opus on the synoptic problem, with an analysis that is largely maintained today (Streeter 1924: 293-331; see also Neirynck 1974 for a modern, exhaustive treatment). The minor agreements are handled by one of several explanations of how Matthew and Luke could have independently arrived at their anti-Markan agreements:
Most of the minor agreements are attributed to the independent, coincidental redaction of Mark by Matthew and Luke. Streeter stated that "the majority of these agreements do not require any explanation at all" because they are the natural result of Matthew's and Luke's production of their own gospels. In this category, Streeter noted Matthew's and Luke's compression of Mark's diffuse style and their improvements of Mark's rough Greek, which is the most colloquial in the New Testament and shows an Aramaic coloring.
Streeter attributed some agreements of Matthew and Luke against Mark to their common use of Q in those passages where Mark and Q overlapped. Stein suggested that, in minor agreements that appear "primitive," Matthew and Luke preferred their form of the oral tradition over Mark's (Stein 1987: 126-27).
Streeter argued that, in a few cases, the best manuscript copies of the gospels do not reflect the original text, in a manner that produces apparent agreements of Matthew and Luke against Mark.
To the limited extent that the early Christians discussed the origins of the gospels, none of them clearly indicated the existence of Q or the priority of Mark. Rather, the priority of Matthew is the most consistent testimony in the first few centuries of Christianity.
While many of those who do not subscribe to Q also subscribe to Matthean priority, there is also a growing group of scholars who would dispense with Q within the framework of Markan priority under the Farrer theory. Their argument, mainly involving the minor agreements, may be found at Mark Goodacre's Case Against Q.
Serious study of the synoptic problem began in the late eighteenth century during the Enlightenment. The most prominent source critic in the early period was J. J. Griesbach, who argued for the theory that now bears his name, in which Mark is a conflation of Matthew and Luke (Griesbach 1789). Around this time, early critics began to formulate the separate theses that would later join to form the 2SH.
For example, G. Ch. Storr, one of Griesbach's contemporary challengers, argued that Mark, not Matthew, was the earliest gospel, and that both Matthew and Luke used Mark (Storr 1786; see generally Farmer 1964: 7, Reicke 1978a: 51). For the double tradition, however, Storr was undecided about whether Luke used Matthew, virtually anticipating the Farrer Hypothesis, or whether Matthew's translator used Luke.
The synoptic theory propounded by Herbert Marsh (1802) was very mechanical, proposing three main hypothetical Hebrew documents, denominated with Hebrew letters. These documents comprised the Matthew-Mark agreements against Luke (Aleph1, labeled "pMt"), the Mark-Luke agreements against Matthew (Aleph2, labeled "pLk"), and the Matthew-Luke agreements against Mark (Beth, labeled "Q"), respectively. The first two of these documents were themselves derived from a fourth hypothetical document, Aleph (labeled "G"), which contained the Matthew-Mark-Luke triple agreements. The other source, Beth, was a sayings collection similar to Q.
Marsh's theory has an important point of contact with the 2SH, namely, the postulation of a Q-like hypothetical sayings source, Beth, that is largely responsible for the double tradition. Since Marsh offered a Griesbach-like explanation for the origin of Mark as a conflation of Aleph1 (proto-Matthew) and Aleph2 (proto-Luke), however, Marsh does not deserve credit as the originator of the 2SH. A modern variant of Marsh's theory is P. Rolland's hypothesis.
Storr's and Marsh's views fell out of favor and had no direct effect on the course of the synoptic problem. By the 1830s, when the consensus was coalescing around the Griesbach hypothesis, especially in the work of the Tübingen school, two scholars, F. E. D. Schleiermacher and Karl Lachmann, laid the groundwork for what would become the two fundamental tenets of the 2SH.
Schleiermacher operated within the parameters of the Fragmentary Hypothesis, which held that the synoptic gospels were composed from a multiplicity of shorter documents. Interpreting the term logia as referring to a sayings collection in the testimony of Papias (c. 125), Schleiermacher argued that the logia was one of the documents available to the evangelists (Schleiermacher 1832; see generally Farmer 1964: 15). In fact, "logia" was what scholars originally called Q until the turn of the 20th century, when doubts about the identification with Papias's reference began to surface.
Lachmann, also a proponent of the Fragmentary Hypothesis, subdivided the narrative portions of the synoptics into roughly nine separate sections and investigated how these sections were eventually arranged in the individual gospels. Lachmann concluded that Mark's order best reflected a relatively fixed oral sequence for these sections and found reasons for Matthew and Luke to depart from this sequence (Lachmann 1835, ET Palmer 1966: 376-378 = Palmer 1985: 119-131).
J. J. Griesbach, Commentatio qua Marci Evangelium totum e Matthaei et Lucae commentariis decerptum esse monstratur, I-II (Jena, 1789-90), enl. ed. in J. C. Velthusen et al., eds., Commentationes theologicae (Leipzig, 1794): 1:360-434, repr. in Griesbach, Opuscula academica, J. Ph. Gabler, ed. (Jena, 1825) 2:358-425, repr. in B. Orchard & T. R. W. Longstaff, eds., J. J. Griesbach (SNTSMS 34; Cambridge: UP, 1978), 74-102, English trans. B. Orchard, "A Demonstration that Mark was Written after Matthew and Luke" at 103-135; Karl Lachmann, "De ordine narrationum in evangeliis synopticis" in Theologische Studien und Kritiken (1835): 570-90, esp. 573-84, English trans. by N. H. Palmer, "Lachmann's Argument," NTS 13 (1966-67): 368-378 = idem in The Two-Source Hypothesis: A Critical Appraisal (ed. Arthur J. Bellinzoni, Jr.; Macon, Ga.: Mercer UP, 1985), 119-131; H. Marsh, "Dissertation on the Origin of our Three First Canonical Gospels" in Introduction to the New Testament by John David Michaelis, vol. 3, pt. 2 (2d ed., London: F. & C. Rivington, 1802) 167-409; F. E. D. Schleiermacher, "Über die Zeugnisse des Papias von unsern beiden ersten Evangelien" in Theologische Studien und Kritiken (1832); G. Ch. Storr, Über den Zweck der evangelischen Geschichte und der Briefe Johannis (Tübingen, 1786).
Shortly thereafter, Ch. H. Weisse put these two pieces of the puzzle together, arguing that Mark and the logia were the sources of both Matthew and Luke. Although some of his contemporaries expounded similar ideas (Wilke, Credner), Weisse was the first to identify these and only these sources as a sufficient solution to the synoptic problem, and he therefore deserves to be called the father of the 2SH. Weisse did have a problem, though: he was not sure where to place such material as the preaching of John the Baptist and the Temptation of Jesus within this framework. At first (1838) he placed it in the logia (Q), but he later changed his mind and put it in an Ur-Markus (1854) to preserve the integrity of Q's genre.
Weisse's views did not immediately attract a following. He was a lone voice during a period dominated by the Tübingen school, who found the Griesbach hypothesis amenable to their rigid conception of the development of history in accordance with the Hegelian dialectic. Specifically, they saw Matthew as the Jewish thesis, Luke as the Gentile antithesis, and Mark as the mediating synthesis. However, the excesses of the school led to a questioning of all their positions and created a favorable climate for other approaches to the synoptic problem. Holtzmann (1863) investigated his predecessors and organized his theory around a narrative source he called Alpha (A). Noticing that Matthew and Luke rarely agreed against Mark, Holtzmann concluded that Alpha so closely resembled Mark that he called it an Ur-Markus. Alongside this Mark-like source there had to be a sayings source, which Holtzmann termed Lambda (L) for the logia. Holtzmann's work came out as members of the Tübingen school were retiring, and the new generation of scholars quickly and enthusiastically adopted his Markan hypothesis.
Meanwhile, scholars in England generally agreed with Westcott on an oral (i.e., non-literary) origin of the gospels. William Sanday, however, brought Holtzmann's ideas on the synoptic problem to Oxford, where they were studied in great detail, leading to a modification of Holtzmann's theory that recalled Weisse's solution of 1838. Specifically, the Oxford School produced a series of proofs that led to the abandonment of Holtzmann's Ur-Markus in favor of pure Markan priority.
The key step was made by J. C. Hawkins (1899), who decided to analyze the nature of Ur-Markus by looking at the agreements between Matthew and Luke against Mark, which should belong to Ur-Markus. However, Hawkins discovered that many of these agreements (which we now call the "minor agreements") were smoother than the corresponding text of Mark, which made it very difficult to envision Mark as a debasing revision of Ur-Markus. Nevertheless, Hawkins listed about 20 anti-Markan agreements that he felt made it unlikely for Mark to be the direct source of Matthew and Luke.
The next step in this direction was taken by F. C. Burkitt (1907), who found explanations for most of Hawkins's troublesome passages, and the process was completed by Streeter (1924), who appealed to textual corruption as the answer to the most difficult of these minor agreements.
The hypothesis of a deutero-Mark within the framework of a four-source hypothesis (Mark, Q, L, and M) is extremely sensible. It is the most economical theory that explains all we know. Why has it not been accepted more widely?
Conditions treated, training and certifications.
Developmental psychology is the study of how humans grow, change, and adapt across the course of their lives. Developmental psychologists research the stages of physical, emotional, social, and intellectual development from the prenatal stage through infancy, childhood, adolescence, and adulthood.
This article covers developmental psychology, including the definition, types, life stages, and how to seek treatment when necessary.
According to the American Psychological Association (APA), developmental psychology is a branch of psychology that focuses on how human beings grow, change, adapt, and mature across various life stages. Developmental psychology is also known as human development or lifespan psychology.
In each of the life stages of developmental psychology, people generally meet certain physical, emotional, and social milestones. These are the major life stages, according to developmental psychologists:
During its early development as a branch of psychology in the late 19th and early 20th centuries, developmental psychology focused on infant and child development. As the field grew, so did its focus. Today, developmental psychologists focus on all stages of the human lifespan.
As developmental psychology grew over time, various researchers proposed theories about how to understand the process of human development. Depending on their training, a developmental psychologist might focus on a specific theory or approach within the field.
These are a few of the major branches of developmental psychology.
Building on Austrian neurologist and the founder of psychoanalysis Sigmund Freud’s theory of psychosexual development , psychologist Erik Erikson proposed a lifespan theory that included eight stages of psychosocial development .
Each of the stages corresponds to both an age range and a core “crisis” (such as trust vs. mistrust in infancy) that must be resolved before someone can move on to the next.
Swiss psychologist Jean Piaget’s theory of cognitive development focuses on how children and youth gradually become able to think logically and scientifically. Piaget proposed that cognition develops through four distinct stages of intellectual development, beginning at birth and culminating in a final stage that emerges around age 12.
Attachment theory , originally developed by psychoanalyst John Bowlby, establishes the importance of a supportive, steady, and loving caregiver in infant and child development. If a child doesn’t establish such a connection, or if they experience parental separation or loss, they might continue to struggle with healthy attachments as they get older.
While Bowlby considered the importance of the immediate family in child development, psychologist Lev Vygotsky’s sociocultural developmental theory looks at the role of society. Cultural influences and beliefs can have a profound impact on how a person views their own identity and relates to others.
Developmental psychologists can help people address developmental issues in order to reach their full potential.
Some of the conditions a developmental psychologist might treat include:
The training required to become a developmental psychologist is similar to that in other subfields of psychology. Most developmental psychologists start with an undergraduate degree in psychology or a related field, followed by a master’s degree and a doctoral degree (PhD).
There are many master’s, graduate certificate, and PhD programs in developmental psychology in the United States. Some focus on a certain part of a person's lifespan, such as child and adolescent development. In addition to research and teaching, graduates may participate in a practicum or internship to pursue licensing as a therapist.
If you're concerned that your child is facing a developmental delay, a developmental psychologist can assess them to ensure that they are meeting their milestones. It's best to seek an assessment, diagnosis, and treatment early, so intervention can begin as soon as possible.
Examples of when to see a developmental psychologist may include:
A developmental psychologist might perform physical and/or cognitive testing to diagnose your child or refer them to another specialist, including the following:
A developmental psychologist will also likely ask you and your child questions about issues in areas of their life such as friends, behavior, or school performance.
In addition to working with infants and children, developmental psychologists can also help people at any stage of life. In particular, many older adults benefit from working with a developmental psychologist if they're experiencing symptoms of dementia, ill health, or cognitive decline.
Developmental psychology is the study of how human beings grow and change throughout their lives. Many developmental psychologists focus on the intellectual, social, emotional, and physical development of infants, children, and adolescents. Others treat and assess people of all ages.
Developmental psychologists can treat issues such as developmental delays, intellectual disabilities, learning disabilities, speech and language delays, motor skill delays, dementia, anxiety, depression, auditory processing disorder, autism spectrum disorder, and more. They also make referrals to other specialists, such as physical therapists, psychiatrists, and speech-language pathologists.
American Psychological Association. Developmental psychology .
Maryville University. What is human development and why is it important ?
American Psychological Association. Developmental psychology studies human development across the lifespan .
Liberty University. Theories of psychosocial development .
Oklahoma State University Library. Cognitive development: the theory of Jean Piaget .
University of Illinois Psychology Department Labs. Adult attachment theory and research .
Massey University. Vygotsky .
Centers for Disease Control and Prevention. Child development - developmental monitoring and screening .
Centers for Disease Control and Prevention. CDC's developmental milestones .
By Laura Dorwart Dr. Dorwart has a Ph.D. from UC San Diego and is a health journalist interested in mental health, pregnancy, and disability rights.
The MoSCoW method is a four-step approach to prioritizing which project requirements provide the best return on investment (ROI). MoSCoW stands for must have, should have, could have and will not have -- the o's make the acronym more pronounceable.
A variety of business disciplines use the MoSCoW method. It enables everyone involved in a project to know what work to complete first and how that work helps increase revenue, decrease operational costs, improve productivity or boost customer satisfaction. On the business side, it can help stakeholders frame discussions about the importance of specific product features when choosing a software vendor. On the IT side, the MoSCoW method plays an important role in Agile project management by helping project teams prioritize story points.
Furthermore, prioritizing requirements enables project teams to understand the amount of effort and resources each project element requires. This knowledge improves the team's time management, makes the project more manageable, increases the likelihood of completion by deadline and optimizes ROI .
The MoSCoW method is also known as MoSCoW analysis, MoSCoW prioritization, the MoSCoW technique and MoSCoW rules.
Before implementing the MoSCoW method, businesses must ensure the teams involved in the project and other stakeholders agree on the project objectives and the factors they use for prioritization. They should also establish plans for settling disagreements.
Next, teams should decide what percentage of resources they assign to each category. For example, they could allocate 20% of the resources to the could-have requirements, while giving 40% to must-haves and 30% to should-haves.
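As a rough illustration, this allocation step is simple arithmetic over the agreed percentages. The sketch below uses the example split from the text (40/30/20, leaving 10% in reserve); the category names and the `allocate` helper are hypothetical, not part of any standard MoSCoW tooling:

```python
# Sketch: dividing a project's capacity (here, hours) across MoSCoW categories.
# The 40/30/20 split mirrors the example in the text; the remaining 10% is
# deliberately left unassigned as a buffer.

def allocate(total_hours, shares):
    """Return the slice of the budget each category receives."""
    if sum(shares.values()) > 1.0:
        raise ValueError("shares cannot exceed the whole budget")
    return {category: total_hours * share for category, share in shares.items()}

shares = {"must have": 0.40, "should have": 0.30, "could have": 0.20}
print(allocate(1000, shares))
# {'must have': 400.0, 'should have': 300.0, 'could have': 200.0}
```

Making the shares explicit up front, rather than negotiating them per requirement, is what keeps later category disputes from turning into resource disputes.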
Once the teams and stakeholders gather requirements and reach agreements, then the teams can start assigning requirements to each of the following four categories.
This first category includes all the requirements that are necessary for the successful completion of the project. These are non-negotiable elements that provide the minimum usable subset of requirements.
Statements that are true for must-haves include the following:
If there is any way to work around a particular requirement, teams should consider it a should-have or could-have element. Assigning requirements to the should-have and could-have categories does not mean the team won't deliver the element; it just reveals that it is not necessary for completion and, therefore, is not guaranteed.
This second category of requirements is one step below must have. It can prep requirements for future release without impacting the current project. Should-have elements are important to project completion, but they are not necessary. In other words, if the final product doesn't include should-have requirements, then the product still functions. However, if it does include should-have elements, they greatly increase the value of the product. Minor bug fixes, performance improvements and new functionality are all examples of requirements that could fall into this category.
Teams can distinguish a should-have element from a could-have element by assessing the amount of pain caused by leaving the requirement out. This is often measured in terms of the business value or the number of people affected by its absence.
This category includes requirements that have a much smaller impact when left out of the project. As a result, could-have requirements are often the first ones teams deprioritize -- must-have and should-have requirements always take precedence as they impact the product more. An example of a could-have is a desirable but unimportant element.
This final category includes all the requirements the team recognizes as not a priority for the project's time frame. Assigning elements to the will-not-have category helps strengthen the focus on requirements in the other three categories, while also setting realistic expectations for what the final product does not include. Furthermore, this category is beneficial in preventing scope creep -- or the tendency for product or project requirements to increase during development beyond what the team anticipated.
The team can eventually reprioritize some requirements in the will-not-have group and work them into future projects; others are never used. To differentiate between these types of elements, teams can create subcategories within the will-not-have group to identify which requirements they should still implement and which they can ignore.
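The category assignments described above lend themselves to a simple data representation. In this hypothetical sketch, each requirement carries a MoSCoW tag, with a sub-tag distinguishing the two kinds of will-not-haves as suggested; the requirement names and helper are invented for illustration:

```python
# Sketch: grouping tagged requirements into MoSCoW buckets.
from collections import defaultdict

requirements = [
    ("user login", "must"),
    ("bulk export", "should"),
    ("dark mode", "could"),
    ("offline sync", "wont/later"),   # reprioritize into a future project
    ("fax support", "wont/never"),    # recognized as safe to ignore
]

def group_by_category(reqs):
    """Map each MoSCoW tag to the list of requirements carrying it."""
    buckets = defaultdict(list)
    for name, tag in reqs:
        buckets[tag].append(name)
    return dict(buckets)

print(group_by_category(requirements)["wont/later"])
# ['offline sync']
```

Keeping the "wont/later" and "wont/never" sub-tags separate means the backlog for the next project falls out of the grouping for free.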
The Agile project management methodology breaks projects into small sections called iterations. Each iteration focuses on completing specific project elements in work sessions called sprints -- typically lasting two to four weeks. The MoSCoW method is frequently used within Agile project management to determine which elements -- including tasks, requirements, products and user stories -- the team should prioritize and which can be put on hold. These decisions make an Agile project schedule that enables teams to rapidly deploy solutions, more efficiently use resources, increase their flexibility and adaptability to changes, and more quickly detect issues.
The MoSCoW method is easy to use and understand. It can help individuals with prioritization, but it more greatly benefits project teams. Other advantages include the following:
In addition, the MoSCoW method enables users to assign specific percentages of resources to each of the four categories. This ensures resources are effectively managed and optimizes productivity analysis.
However, there are some drawbacks to the MoSCoW method, including the following:
The MoSCoW method has its roots in the dynamic systems development method -- an Agile project delivery framework that aimed to improve rapid application development processes.
Software development expert Dai Clegg created the MoSCoW method while working at Oracle , the multinational computer technology corporation. Clegg initially designed the prioritization technique for timeboxed projects and initiatives within releases.
Editor's note: This article was reformatted in 2023 to improve the reader experience.
What is MoSCoW prioritization?
MoSCoW prioritization, also known as the MoSCoW method or MoSCoW analysis, is a popular prioritization technique for managing requirements.
The acronym MoSCoW represents four categories of initiatives: must-have, should-have, could-have, and won’t-have, or will not have right now. Some companies also use the “W” in MoSCoW to mean “wish.”
Software development expert Dai Clegg created the MoSCoW method while working at Oracle. He designed the framework to help his team prioritize tasks during development work on product releases.
You can find a detailed account of using MoSCoW prioritization in the Dynamic System Development Method (DSDM) handbook. But because MoSCoW can prioritize tasks within any time-boxed project, teams have adapted the method for a broad range of uses.
Before running a MoSCoW analysis, a few things need to happen. First, key stakeholders and the product team need to get aligned on objectives and prioritization factors. Then, all participants must agree on which initiatives to prioritize.
At this point, your team should also discuss how they will settle any disagreements in prioritization. If you can establish how to resolve disputes before they come up, you can help prevent those disagreements from holding up progress.
Finally, you’ll also want to reach a consensus on what percentage of resources you’d like to allocate to each category.
With the groundwork complete, you may begin determining which category is most appropriate for each initiative. But, first, let’s further break down each category in the MoSCoW method.
MoSCoW prioritization categories.
As the name suggests, this category consists of initiatives that are “musts” for your team. They represent non-negotiable needs for the project, product, or release in question. For example, if you’re releasing a healthcare application, a must-have initiative may be security functionalities that help maintain compliance.
The “must-have” category requires the team to complete a mandatory task. If you’re unsure about whether something belongs in this category, ask yourself the following.
If the product won’t work without an initiative, or the release becomes useless without it, the initiative is most likely a “must-have.”
Should-have initiatives are just a step below must-haves. They are important to the product, project, or release, but they are not vital. If left out, the product or project still functions; however, these initiatives may add significant value.
“Should-have” initiatives are different from “must-have” initiatives in that they can get scheduled for a future release without impacting the current one. For example, performance improvements, minor bug fixes, or new functionality may be “should-have” initiatives. Without them, the product still works.
Another way of describing “could-have” initiatives is nice-to-haves. “Could-have” initiatives are not necessary to the core function of the product. However, compared with “should-have” initiatives, they have a much smaller impact on the outcome if left out.
So, initiatives placed in the “could-have” category are often the first to be deprioritized if a project in the “should-have” or “must-have” category ends up larger than expected.
One benefit of the MoSCoW method is the “will-not-have” category itself. It manages expectations about what the team will not include in a specific release (or whatever other timeframe you’re prioritizing).
Placing initiatives in the “will-not-have” category is one way to help prevent scope creep. If initiatives are in this category, the team knows they are not a priority for this specific time frame.
Some initiatives in the “will-not-have” group will be prioritized in the future, while others are not likely to happen. Some teams decide to differentiate between those by creating a subcategory within this group.
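At its core, a MoSCoW analysis just sorts backlog items into the four buckets. As a rough illustration, that grouping step can be sketched in a few lines of Python; the item names and category labels below are invented for the example:

```python
from collections import defaultdict

# Hypothetical backlog items tagged with their MoSCoW category.
BACKLOG = [
    ("Security compliance checks", "must"),
    ("Performance improvements", "should"),
    ("Theme selection", "could"),
    ("Mobile version", "wont"),
]

def group_by_category(items):
    """Group (name, category) pairs into MoSCoW buckets."""
    buckets = defaultdict(list)
    for name, category in items:
        buckets[category].append(name)
    return dict(buckets)

groups = group_by_category(BACKLOG)
```

The grouping itself is trivial; the hard part, as the article stresses, is deciding which category each item deserves.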
Although Dai Clegg developed the approach to help prioritize tasks around his team’s limited time, the MoSCoW method also works when a development team faces limitations other than time. For example:
What if a development team’s limiting factor is not a deadline but a tight budget imposed by the company? Working with the product managers, the team can use MoSCoW first to decide on the initiatives that represent must-haves and the should-haves. Then, using the development department’s budget as the guide, the team can figure out which items they can complete.
A cross-functional product team might also find itself constrained by the experience and expertise of its developers. If the product roadmap calls for functionality the team does not have the skills to build, this limiting factor will play into scoring those items in their MoSCoW analysis.
Cross-functional teams can also find themselves constrained by other company priorities. The team wants to make progress on a new product release, but the executive staff has created tight deadlines for further releases in the same timeframe. In this case, the team can use MoSCoW to determine which aspects of their desired release represent must-haves and temporarily backlog everything else.
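For the budget-constrained case above, the selection step can be sketched as a greedy walk through the backlog in MoSCoW order. This is a sketch under assumptions, not a prescribed algorithm; the item names, costs, and category labels are invented for illustration:

```python
def select_within_budget(items, budget):
    """Walk the backlog in MoSCoW order (must -> should -> could) and
    take each item that still fits in the remaining budget.
    Items are (name, category, cost) tuples; "wont" items are skipped."""
    order = {"must": 0, "should": 1, "could": 2}
    selected, remaining = [], budget
    for name, category, cost in sorted(items, key=lambda it: order.get(it[1], 99)):
        if category in order and cost <= remaining:
            selected.append(name)
            remaining -= cost
    return selected, remaining

backlog = [
    ("User authentication", "must", 5),
    ("Performance tuning", "should", 3),
    ("Theme selection", "could", 4),
    ("Mobile version", "wont", 10),
]
chosen, left = select_within_budget(backlog, 9)
```

With a budget of 9, the must-have and should-have fit, the could-have no longer does, and the won't-have is skipped regardless of cost.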
Although many product and development teams have adopted MoSCoW, the approach has potential pitfalls. Here are a few examples.
One common criticism of MoSCoW is that it does not include an objective methodology for ranking initiatives against each other. Your team will need to bring its own methodology to the analysis. MoSCoW works only when your team applies a consistent scoring system to all initiatives.
Pro tip: One proven method is weighted scoring, where your team measures each initiative on your backlog against a standard set of cost and benefit criteria. You can use the weighted scoring approach in ProductPlan’s roadmap app.
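A weighted score is just a weighted sum of per-criterion ratings. The criteria and weights below are invented for illustration (they are not ProductPlan's own set); the point is only the mechanics:

```python
# Example criteria and weights -- invented for illustration, not a
# standard set. Effort is weighted negatively so costly items rank lower.
WEIGHTS = {"revenue": 0.4, "customer_value": 0.3, "effort": -0.3}

def weighted_score(ratings):
    """Combine per-criterion ratings (say, on a 1-10 scale) into one score."""
    return sum(WEIGHTS[criterion] * rating for criterion, rating in ratings.items())

score = weighted_score({"revenue": 8, "customer_value": 6, "effort": 4})
# 0.4*8 + 0.3*6 - 0.3*4 = 3.8
```

Scoring every initiative this way before the MoSCoW discussion gives the team a shared, consistent basis for the bucketing debate.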
To know which of your team’s initiatives represent must-haves for your product and which are merely should-haves, you will need as much context as possible.
For example, you might need someone from your sales team to let you know how important (or unimportant) prospective buyers view a proposed new feature.
One pitfall of the MoSCoW method is that you could make poor decisions about where to slot each initiative unless your team receives input from all relevant stakeholders.
Because MoSCoW does not include an objective scoring method, your team members can fall victim to their own opinions about certain initiatives.
One risk of using MoSCoW prioritization is that a team can mistakenly think MoSCoW itself represents an objective way of measuring the items on their list. They discuss an initiative, agree that it is a “should have,” and move on to the next.
But your team will also need an objective and consistent framework for ranking all initiatives. That is the only way to minimize your team’s biases in favor of items or against them.
MoSCoW prioritization is effective for teams that want to include representatives from the whole organization in their process. You can capture a broader perspective by involving participants from various functional departments.
Another reason you may want to use MoSCoW prioritization is it allows your team to determine how much effort goes into each category. Therefore, you can ensure you’re delivering a good variety of initiatives in each release.
If you’re considering giving MoSCoW prioritization a try, here are a few steps to keep in mind. Incorporating these into your process will help your team gain more value from the MoSCoW method.
Remember, MoSCoW helps your team group items into the appropriate buckets—from must-have items down to your longer-term wish list. But MoSCoW itself doesn’t help you determine which item belongs in which category.
You will need a separate ranking methodology. You can choose from many, such as:
For help finding the best scoring methodology for your team, check out ProductPlan’s article: 7 strategies to choose the best features for your product.
To make sure you’re placing each initiative into the right bucket—must-have, should-have, could-have, or won’t-have—your team needs context.
At the beginning of your MoSCoW method, your team should consider which stakeholders can provide valuable context and insights. Sales? Customer success? The executive staff? Product managers in another area of your business? Include them in your initiative scoring process if you think they can help you see opportunities or threats your team might miss.
MoSCoW gives your team a tangible way to show the organization how you prioritize initiatives for your products or projects.
The method can help you build company-wide consensus for your work, or at least help you show stakeholders why you made the decisions you did.
Communicating your team’s prioritization strategy also helps you set expectations across the business. When they see your methodology for choosing one initiative over another, stakeholders in other departments will understand that your team has thought through and weighed all decisions you’ve made.
If any stakeholders have an issue with one of your decisions, they will understand that they can’t simply complain—they’ll need to present you with evidence to alter your course of action.
Prioritization played a significant role in the success of most feature-rich apps, such as Slack and GitLab. Initially, they offered a limited set of functionalities that were essential for their users. Over time, this set was supplemented with other features. Railsware is going to share its own style of prioritizing and show you how we use the MoSCoW method to get long lists of tasks done.
As a rule, the daily routine includes a bunch of tasks. Ideally, you’ll have enough time and energy to cover all of them – but it just might happen that the number of tasks is immense and the resources available are not in abundance. That’s where prioritization comes in.
This term denotes a process to filter what you have to do in order of importance or relevance. For example, if you’re building a house, you are not likely to begin with the roof or walls until your foundation is done. Of course, things are much more complicated in the web development industry, and this example cannot reveal the full-scope value of setting priorities.
Complex projects and numerous startups make use of advanced prioritization techniques. These usually consist of frameworks known for specific requirements or rules that improve decision-making. Success in prioritization often determines the success of the company itself. Getting caught up in pending and undone tasks is a straight road to failure. That’s why businesses pay particular attention to which prioritization methods to use. There are quite a few of them, but they all have some common characteristics, such as orientation towards input (internal or external) and quantitative or qualitative tools.
External orientation means that you need to involve stakeholders outside the development team to set priorities, while internally-oriented methods can be executed purely in-house. Quantitative methods entail a deeper focus on numeric metrics, while qualitative ones rest more on expert opinions, voting, and classification. In view of this, they are traditionally divided into the following categories:
You can read about different Agile prioritization techniques in detail here. If you'd like, we’ve also gone more in depth on what Agile product development is in a separate article.
Railsware prefers a technique developed by Dai Clegg way back in 1994. Initially, it was named MSCW, but two o’s were added to improve pronounceability. This also made it sound like the capital city of Russia. Let’s see how it works.
To understand the gist of the MoSCoW method, we need to look at its origin – the dynamic systems development method (DSDM). It is a framework for Agile project management tailored by practitioners with the aim of improving quality in rapid app development (RAD) processes. A hallmark of DSDM projects is strictly determined quality, costs, and time at an early stage. In view of this, all the project tasks have to be allocated by importance. The need for managing priorities triggered the invention of a specialized prioritization mechanism.
This mechanism was implemented via MoSCoW – a simple yet powerful solution to set priorities both with and without timeboxes. However, it shows better efficiency if you have a certain deadline for a task, feature, subfeature, functionality, etc. The framework is applicable to all levels of project prioritization from top to bottom, as well as to all functions and focus areas.
The MoSCoW abbreviation (except for the o’s) is formed from the first letters of the priority categories it works with: Must-haves, Should-haves, Could-haves, and Won’t-haves. Here’s how to define which task falls into which category.
These rules or requirements estimate the importance of any task, process, feature, etc. Each company or work team uses its own approach to setting requirements, but, in general, they do not differ much and look as follows.
These are top-priority requirements, which shape the foundation of the major pipeline. Skipping them blocks the entire project or further activities. As a rule, product ideation depends entirely on defining must-haves using such pointers as ‘required for launch’, ‘required for safety’, ‘required for validation’, ‘required to deliver a viable solution’, etc.
This type of requirement is of secondary priority. Should-haves do not affect the launch and, traditionally, are considered important but not crucial. They differ from must-haves by the availability of a workaround. Therefore, the failure of a should-have task is unlikely to cause the failure of the entire project. If you’re building a product, it will still be usable even if these requirements aren’t met.
The next requirement is less important than the two previous ones but still wanted. If we compare could-haves with should-haves, the former is defined by a lower degree of adverse effect if omitted. Traditionally, the third-level priority requirements in the Agile framework MoSCoW are realized if a project is not highly constrained in time. Within the product development, we can call them low-cost tweaks.
You can also encounter this type of requirement under the names would-have or wish-to-have, though these variants are not recognized on the Wiki. Regardless of the chosen name, these requirements define the lowest priority, for tasks that are unviable to implement within a particular budget and deadline. Won’t-have does not mean a complete rejection of something; it envisions reintroduction under favorable conditions in the future.
In search of the perfect tools and techniques, our team often modifies some well-known approaches and tailors them to our needs. This constant search and improvement led us to a brand-new product ideation and decision-making framework: BRIDGeS. BRIDGeS is a flexible approach for multi-context analysis suitable for building effective product strategies, solving operational and strategic problems, making day-to-day decisions, and more. Find out how to use BRIDGeS and what advantages it can bring to your team.
MoSCoW is another tool that we modified to make it even more flexible and versatile. Below, we share our findings to help your team nail prioritization in a more efficient way.
The main difference between the classical MoSCoW and our version of this technique is that we added another level of prioritization within the Must, Should, and Could groups. Each of these groups of requirements got another four complexity categories:
This way, when a requirement gets, say, the priority Must, we can also append a number to the letter M. For instance, a sprint can include several M2 tasks, one M1 task, and three S1 tasks.
When the task is marked with the priority “3” (M3/S3/C3), it most likely means that its scope is too large and complex to be fulfilled fast. You need to decompose it into smaller, manageable chunks and prioritize them as well. This way, from one M3 requirement, you can get a bunch of M2, S1, and C1 tasks, for example.
Sometimes, the M, S, C, and W letters are not enough, and we may also need an Urgent Must (UM) mark. UMs are the most critical items, such as hotfixes, bug fixes, and patches that block the work of the whole team. From our experience, we recommend fixing these tasks ASAP, as they hinder the team’s productive work. If you mark any task as UM, ignore all other tasks until the UM task is fixed. In normal circumstances, your bug tracking system shouldn’t contain any UMs.
Why do Urgent Must tasks appear? Often, UMs are the Must-haves that your team ignored before the deployment phase or missed during the QA phase. Pay attention to these tricky cases, and try to solve them before they become an obstacle.
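Railsware's tags combine a category letter with a complexity digit (M2, S1, C3, plus the special UM mark). A small parser for such tags might look like the sketch below; the exact tag grammar is our reading of the scheme described here, so treat the details as assumptions:

```python
import re

def parse_priority(tag):
    """Split a tag like 'M2' or 'UM' into (category, complexity).
    Tags without a digit default to complexity 1."""
    match = re.fullmatch(r"(UM|[MSCW])([123])?", tag)
    if not match:
        raise ValueError(f"unrecognized priority tag: {tag}")
    return match.group(1), int(match.group(2) or 1)

def needs_decomposition(tag):
    """Level-3 tasks are too large for one sprint and should be split
    into smaller M/S/C tasks, as the article recommends."""
    return parse_priority(tag)[1] == 3
```

Parsing tags this way makes it easy to sort a sprint board by category and complexity, and to flag oversized level-3 cards automatically.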
After we added this extra level of priorities to the MoSCoW system, we noticed the following improvements:
Everything looks simple in theory, but is it in practice? Let’s check out how a traditional MoSCoW analysis works for prioritizing functionality, using a regular web application as an example. As a sample, we’re going to use basic functions taken from one of the Railsware products.
Based on particular requirements for budget and time, we can single out the most fundamental features to be implemented in the minimum viable product. After the priority analysis, we’ve got the following:
The top-priority tasks are followed by important, though not vital, functionalities for the app. These are:
The evolution of the app does foresee its availability on mobile devices. However, this task is only nice-to-have at this point.
And now the least-priority feature. It aims at enhancing the user experience once the app is on track. Theme selectability is definitely not what we’re going to make now, so this feature is saved for later.
This step allows you to see the quantitative ratio of high and low priority tasks.
The most difficult thing about prioritization is to stay coolly rational and focus on the essential tasks to be done. Otherwise, you can fall into the EVERYTHING-IS-MUST trap, in which every feature, like the billing system option or mobile app availability, turns into a must-have.
And that’s why the MoSCoW Agile method is cool. It allows you to define a basic feature set, which has top priority and emphasizes that you do not need to abandon anything. The healthy balance of must-haves + should-haves is 50% of the entire scope. All (or almost all) of the tasks will be implemented later but in the order of their importance to your goal. The goal of this example is to build an MVP , and the categorization above shows the expected progress of the app’s functionality.
We took the same example with all the tasks listed above to showcase how we apply our version of this prioritization technique.
The main priorities (Must, Should, Could, and Won’t) are still the same; however, we dived deeper to make a more precise priority estimation of each task. Here’s what we came up with:
Our modified approach provides a better understanding of the task’s priority and complexity and shows the parts that need to be reconsidered. This way, it’s easier to plan a balanced sprint, taking only tasks that can be implemented (all cards with the priority “3” should be split into smaller tasks) and some small tasks that allow your team to reduce the workload.
The framework is quite popular among Agile projects with fixed timeboxes since it allows for managing the requirements for a specific release of a product. This prioritization method has proved its efficiency and reliability within our company as well, and we do recommend it to our clients. However, it is not perfect, of course, and an unbiased look can reveal some flaws associated with the MoSCoW technique. Let’s take a look at its strengths and weaknesses.
Let’s take a look at how we set priorities within the company.
Product development: we rest upon a roadmap where the product features and the order of their implementation are specified. As a rule, we leverage MoSCoW to define which feature goes first, which comes second, and so on, taking into account their importance and the interdependence of features. Must-haves and Should-haves are meant for the product release. Could-haves and Won’t-haves are postponed for the future.
HR and recruitment: prioritization rests upon such requirements as the demand for particular expertise, budget availability, timebox (how urgently we need this expertise), and so on. We leverage similar patterns for setting priorities in other focus areas, including onboarding, branding, marketing, etc.
The biggest challenge of the methodology is that all stakeholders must be familiar with enough context to estimate features correctly. Besides, stakeholders representing different functions, like sales, development, and marketing, have their own visions of setting priorities, which do not always work towards correct prioritization. Investors usually treat all features as Must-haves from their broad-based perspective and want them done without regard to implementation order.
Railsware has a Holacratic organizational structure . We take advantage of collective leadership based on the RASCI model and make decisions on different things including prioritization through voting. Team members can choose from several options like really want, want and don’t want. Each option implies a particular point. The option with the biggest point total has the highest priority. For small contexts, a responsible role (team leader, project manager, etc.) can be in charge of setting priorities on his/her own.
Railsware uses the Agile MoSCoW framework heavily and is pleased with it. However, that does not mean we are closed to other solutions. Besides, a good product manager must consider the key product metrics and build prioritization around them. So here are some other worthwhile techniques you may benefit from.
With this framework, you can define how happy the users are with product features. The Kano Model rests on a questionnaire, which is used to learn users’ attitude to a particular feature (like, expect, dislike, neutral, etc.). Visually, the model can be expressed via a two-dimensional diagram where the vertical axis is responsible for the level of user satisfaction (from totally frustrated to incredibly happy) and the horizontal one shows either how much was invested in the feature (Investment), how well was it implemented (Implementation), or how much users benefit from it (Functionality).
Categorization of requirements includes four types that are prioritized in the following order: must-be, performance, attractive, and indifferent. Must-bes are some basic things that users generally expect. Performance (also known as One-Dimensional) requirements are the golden mean and allow you to increase the satisfaction level. Attractive requirements are those that improve user experience. These are nice-to-haves or could-haves according to MoSCoW. Indifferent ones are less prioritized and sometimes even entirely omitted.
This prioritization technique is one of the simplest. You can encounter it under the names Value vs. Cost or Value vs. Effort as well. The method feels intuitive and is aimed at maximizing value delivery. Estimation of features’ importance rests upon how much effort is invested to implement them and how much value they will bring.
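The ranking behind Value vs. Effort boils down to sorting by the value-to-effort ratio. A minimal sketch, with feature names and numbers invented for the example:

```python
def rank_by_value_effort(features):
    """Order (name, value, effort) triples by value-to-effort ratio,
    highest ratio first; effort must be a positive number."""
    return sorted(features, key=lambda f: f[1] / f[2], reverse=True)

ranked = rank_by_value_effort([
    ("Export to CSV", 8, 2),   # ratio 4.0
    ("SSO login", 9, 9),       # ratio 1.0
    ("Inline search", 6, 3),   # ratio 2.0
])
```

Cheap, high-value items float to the top, which is exactly the "quick wins first" intuition the method encodes.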
The art of setting priorities shows the efficiency of your workflow. Railsware’s choice is the MoSCoW project management framework, which has made a good showing in versatile functionalities and products. However, it might be less useful for immense projects with multiple teams involved in the pipeline. We advise you to find an effective prioritization solution that fits your unique needs, and to always avoid getting caught up in countless pending tasks.
Updated on: June 29, 2024 / 12:40 AM EDT / CBS/AP
Martin Mull, whose droll, esoteric comedy and acting made him a hip sensation in the 1970s and later a beloved guest star on sitcoms including "Roseanne" and "Arrested Development," has died, his daughter said Friday. He was 80.
Mull's daughter, TV writer and comic artist Maggie Mull, said her father died at home on Thursday after "a valiant fight against a long illness."
Mull, who was also a guitarist and painter, came to national fame with a recurring role on the Norman Lear-created satirical soap opera "Mary Hartman, Mary Hartman," and the starring role in its spinoff, "Fernwood 2 Night," on which he played the host of a satirical talk show.
"He was known for excelling at every creative discipline imaginable and also for doing Red Roof Inn commercials," Maggie Mull said in an Instagram post . "He would find that joke funny. He was never not funny. My dad will be deeply missed by his wife and daughter, by his friends and coworkers, by fellow artists and comedians and musicians, and —the sign of a truly exceptional person— by many, many dogs."
Melissa Joan Hart, who acted alongside Mull in the series "Sabrina the Teenage Witch," paid tribute to him on Instagram on Friday, calling him "a wonderful man who I am better for knowing."
"I have such fond memories of working with him and being in awe of his huge body of work," she wrote.
Fellow "Sabrina" actress Caroline Rhea described Mull as "brilliantly funny and kind" in her own social media post.
"Your impact on the world will never be forgotten," Rhea wrote. "What a gift it was to know you Martin."
Known for his blond hair and well-trimmed mustache, Mull was born in Chicago and raised in Ohio and Connecticut. He studied art in Rhode Island and Rome. He combined his music and comedy in hip Hollywood clubs in the 1970s.
"In 1976 I was a guitar player and sit-down comic appearing at the Roxy on the Sunset Strip when Norman Lear walked in and heard me," Mull told The Associated Press in 1980. "He cast me as the wife beater on 'Mary Hartman, Mary Hartman.' Four months later I was spun off on my own show."
In the 1980s he appeared in films including "Mr. Mom" and "Clue," and in the 1990s had a recurring role on "Roseanne."
He would later play private eye Gene Parmesan on "Arrested Development," and would be nominated for an Emmy in 2016 for a guest turn on "Veep."
"What I did on 'Veep' I'm very proud of, but I'd like to think it's probably more collective, at my age it's more collective," Mull told the AP after his nomination. "It might go all the way back to 'Fernwood.'"
Other comedians and actors were often his biggest fans.
"Martin was the greatest," "Bridesmaids" director Paul Feig said in an X post . "So funny, so talented, such a nice guy. Was lucky enough to act with him on The Jackie Thomas Show and treasured every moment being with a legend. Fernwood Tonight was so influential in my life."
Strategies for navigating a new kind of communication landscape: the “echoverse.”
The Internet and AI tools are transforming marketing communications within a complex, interactive landscape called the echoverse. While marketing has evolved since the proliferation of the Internet, in the echoverse, a diverse network of human and nonhuman actors — consumers, brands, AI agents, and more — continuously interact, influence, and reshape messages across digital platforms. Traditional one-way and two-way communication models give way to omnidirectional communication. The authors integrated communication theory and theories of marketing communications to create a typology of marketing communication strategies consisting of three established strategies — 1) promotion marketing, 2) relationship marketing, and 3) customer engagement marketing — and their proposed strategy, 4) echoverse marketing. The authors also recommend three strategies for marketers to make the shift from leading messaging to guiding messaging: 1) Enable co-creation and co-ownership, 2) Create directed learning opportunities, and 3) Develop a mindset of continuous learning.
Today, companies must navigate a new kind of communication landscape: the “echoverse.” This new terrain is defined by a complex web of feedback loops and reverberations created by consumers, brands, news media, investors, communities, society, and artificial intelligence (AI) agents. These actors continuously interact with, influence, and respond to each other across a myriad of digital channels, platforms, and devices, creating a dynamic where messages circulate and echo, being amplified, modified, or dampened by ongoing interactions.
Apple confirms iPhone upgrade with 2 key new features is here in days.
The next developer beta of iPhone software, iOS 18, is coming soon: Monday, June 24, to be exact. And, unusually, we have this release schedule from Apple itself, made in a statement that also talked about Apple Intelligence and how it won’t be released in the EU this year. Now, new information has emerged about exactly why the biggest beast of iOS 18, Apple Intelligence, is only available on the very latest iPhones, the iPhone 15 Pro and iPhone 15 Pro Max, plus devices with Apple M1 or more recent chips on board.
iPhone Mirroring coming to iOS 18.
Updated June 14 with new details of the problems running Apple Intelligence on older iPhones.
The usual route for Apple, as pointed out by Bloomberg’s Mark Gurman in his latest Power On newsletter, is that Apple’s latest features work on “a wide range of existing hardware.” But only two current iPhones will handle Apple Intelligence. In an interview with the excellent John Gruber from Daring Fireball, John Giannandrea, Apple’s head of AI, said, “You could, in theory, run these models on a very old device, but it would be so slow that it would not be useful.”
Hmm. An Apple iPhone 15 isn’t really very old, but it seems the processor and memory on board the iPhone 15 Pro and iPhone 15 Pro Max are enough to run Apple Intelligence, even if older models can’t do justice to it.
According to Mark Gurman again, it may come down to how Apple chose which processors to put in its latest models. “When the iPhone 14 line rolled out in 2022, Apple shifted its processor strategy. Instead of giving the Pro and non-Pro models the same chip, the company only put the new processor into the iPhone 14 Pro. That continued last year, with the iPhone 15 Pro getting the A17 Pro and the iPhone 15 standard models keeping the A16 and less memory (six gigabytes versus 8 gigabytes). The tech giant determined internally months ago that 8 gigabytes is the minimum needed to run Apple Intelligence.”
And maybe the neural engine, the part of the chip that looks after AI concerns, is less of an issue than was expected. As Gurman says, “The size of the neural engine is actually less of a factor. That component is practically the same on M1 Macs and iPads, which do support Apple Intelligence, as it is on the iPhone 13, 14 and 15, which do not. It’s almost entirely about memory. This is why Apple is boosting the memory on the regular iPhone 16 this fall: to support Intelligence and frame it as a key selling point. Every version of the 16 lineup will support the AI features.”
While Apple Intelligence will be available to test in beta this summer in the U.S., for instance, it’s coming much later in some places, such as the EU, according to the same Bloomberg newsletter. Gurman points out that Apple will withhold Apple Intelligence in the EU, “along with iPhone Mirroring and SharePlay Screen Sharing — because it’s afraid of running afoul of new regulations. At issue is the Digital Markets Act, a law aimed at reining in Big Tech that took effect this year.”
Apple cited interoperability requirements, and for now EU users will miss out on iPhone Mirroring and SharePlay Screen Sharing, discussed below. Gurman asks how consumers will respond. “Will European customers put pressure on lawmakers if they resent not having the new features? With Apple already making other changes to accommodate the new law, most notably to its App Store, this year could be a testy one for its relationship with European regulators.” Back to the timetable of software releases.
Apple almost never gives this kind of detail, such as a release date, though the timing is not surprising: two weeks after the first beta release fits the usual cadence of beta releases at this stage. Here’s what’s coming.
The statement was made by Apple to The Verge on Friday, June 21, as part of the explanation of why Apple Intelligence will be delayed in the EU because of the recent Digital Markets Act.
Apple said again that Apple Intelligence will be available for beta testing in the summer, for users with U.S. English as their language, and with the EU caveat. But this Monday’s update will have two key features.
This is plenty cool. If your iPhone is in your bag, you can still use it on your Mac, for instance, and interact with it as easily as if it were in your hand. Notifications from the iPhone appear on the Mac and when you click on one, the iPhone appears on your Mac display.
You can then control it with your keyboard or the trackpad on the Mac, swiping between home screens or interacting with apps. And you’ll be able to drag and drop files between devices, though that is coming later.
This is also very interesting. It already exists in iPadOS 17, but it’s being upgraded. In the new iPadOS 18 version, you’ll be able to draw on your iPad screen and what you draw will appear on your friend’s device. That’s relevant here because you can draw on a friend’s iPhone screen as well as their iPad. And, even cooler, you can request permission to remote-control their device. This is something I’ve been wanting for a long time, especially to help friends or family members who aren’t quite sure about how to do something on their Apple device.
Apple Intelligence is not coming on Monday, but will be with us in the summer, and the public beta for the new OS versions will likely land in July.
The next wave of obesity drugs is coming soon.
Drug companies are racing to develop GLP-1 drugs following the blockbuster success of Novo Nordisk’s Ozempic and Wegovy and Eli Lilly’s Mounjaro and Zepbound.
Some of the experimental drugs may go beyond diabetes and weight loss, improving liver and heart function while reducing side effects such as muscle loss common to the existing medications. At the 2024 American Diabetes Association conference in Orlando, Florida, researchers are expected to present data on 27 GLP-1 drugs in development.
“We’ve heard about Ozempic and Mounjaro and so on, but now we’re seeing lots and lots of different drug candidates in the pipeline, from very early-stage preclinical all the way through late-stage clinical,” said Dr. Marlon Pragnell, ADA’s vice president of research and science. “It’s very exciting to see so much right now.”
A large portion of the data presented comes from animal studies or early-stage human trials. However, some presentations include mid- to late-stage trials, according to a list shared by the organization.
Approval by the Food and Drug Administration is likely years away for most, though some of the drugs showcased could be available for prescription in the U.S. within the next few years.
“We’ve witnessed an unprecedented acceleration in the development of GLP drugs,” said Dr. Christopher McGowan, a gastroenterologist who runs a weight loss clinic in Cary, North Carolina. “We are now firmly entrenched in the era of the GLP.”
While the existing drugs are highly effective, new drugs that are more affordable and have fewer side effects are needed, McGowan added.
There aren’t just GLP-1 drugs in the pipeline. On Thursday, ahead of the diabetes conference, Denmark-based biotech firm Zealand Pharma released data that showed a high dose of its experimental weight loss drug petrelintide helped reduce body weight by an average of 8.6% at 16 weeks.
The weekly injectable medication is unique because it mimics the hormone amylin, which helps control blood sugar. The hope is patients will experience fewer side effects like nausea commonly associated with GLP-1 drugs such as Wegovy and Zepbound.
GLP-1 medications work, in part, by slowing down how quickly food passes through the stomach, leading people to feel fuller longer. In several of the upcoming weight loss drugs, a different hormone called glucagon is in the spotlight. Glucagon is a key blood-sugar-regulating hormone that can mimic the effects of exercise.
One of the drugs featured at the conference on Sunday is called pemvidutide, from Maryland-based biotech firm Altimmune.
The drug contains the GLP-1 hormone, a key ingredient in Ozempic and Wegovy, in addition to glucagon.
Altimmune released data from a phase 2 trial of 391 adults with obesity or who are overweight with at least one weight-related comorbidity such as high blood pressure. Patients were randomized to either get one of three doses of pemvidutide or a placebo for 48 weeks.
Researchers found that patients who got the highest dose of the drug lost on average 15.6% of their body weight after 48 weeks, compared to the 2.2% body weight loss seen in patients who got a placebo. In similar trials, semaglutide was shown to reduce body weight by around 15% after 68 weeks.
These are not direct comparisons because the drugs weren’t compared in a head-to-head clinical trial.
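For readers wondering how these headline numbers are computed, percent body-weight reduction is simply the change relative to the starting weight. A minimal sketch (the example weights are hypothetical, not trial data):

```python
def percent_weight_loss(start_kg: float, end_kg: float) -> float:
    """Percent of starting body weight lost (positive means weight was lost)."""
    return (start_kg - end_kg) / start_kg * 100.0

# A hypothetical participant starting at 100 kg and ending at 84.4 kg
# has lost 15.6% of body weight, the kind of figure reported in these trials.
```

Note that trials report averages across arms, and the durations differ (48 versus 68 weeks here), which is another reason the percentages cannot be compared directly.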
Dr. Scott Harris, Altimmune’s chief medical officer, said the drug has been shown to help people lose weight, as well as provide health benefits to the liver and heart. What’s more, the drug has shown benefits in preserving lean body mass. Some studies have suggested that semaglutide, the active ingredient in Ozempic and Wegovy, can cause muscle loss.
“If people take the drugs long term, what’s going to be their long-term health? What’s going to be the long-term effects on their body composition, their muscle, their ability to function?” he said.
Harris said that people who got pemvidutide lost on average 21% of their lean body mass, which is lower than the roughly 25% of lean body mass people typically lose with diet and exercise.
“We’re the next wave of obesity drugs,” Altimmune President and CEO Vipin Garg said. “The first wave of mechanisms was all driven by appetite suppression. We are adding another component.”
Altimmune expects to begin a phase 3 trial soon. The company hopes the drug will be available in the U.S. sometime in 2028.
Expanding the number of weight loss drugs available is important for several reasons, experts say.
More options could help alleviate the shortages seen in the U.S. with Novo Nordisk’s and Lilly’s weight loss drugs.
Increased competition could drive down the high cost of the drugs over time. A month’s supply of Wegovy or Zepbound can cost more than $1,000, often financially untenable for many patients, experts say.
Patients can also respond differently to treatments, said Dr. Fatima Cody Stanford, an associate professor of medicine and pediatrics at Harvard Medical School. In fact, some have found the existing GLP-1 options ineffective.
“Different GLP-1 drugs may have varying levels of efficacy and potency,” she said. “Some patients may respond better to one drug over another, depending on how their body metabolizes and responds to the medication.”
Since starting Ozempic in June 2022, Danielle Griffin has not seen the results her doctor predicted. “She really expected to see a huge difference in my weight, and I just never saw it,” said the 38-year-old from Elida, New Mexico. Griffin weighed about 300 pounds and has lost only about 10 pounds in two years. She said her “expectations were pretty much shattered from that.”
Amid insurance battles and shortages, she has also tried Wegovy and Mounjaro, but didn’t see a difference in her weight.
“I don’t feel like there are options, especially for myself, for someone who the medications aren’t working for.”
The prospect of new medications on the horizon excites Griffin. “I would be willing to try it,” she said, adding that “it could be life changing, honestly, and you know that alone gives me something to look forward to.”
Eli Lilly, which makes Zepbound and the diabetes version Mounjaro, has two more GLP-1 drugs in development.
On Sunday, Lilly released new data about retatrutide, an injectable drug that combines GLP-1 and glucagon, plus another hormone called GIP. GIP is thought to improve how the body breaks down sugar.
In an earlier trial, retatrutide helped people lose, on average, about 24% of their body weight, the equivalent of about 58 pounds — greater weight loss than any other drug on the market.
New findings showed the weekly medication also significantly reduced blood sugar levels in people with Type 2 diabetes.
On Saturday, there were also new findings on the experimental mazdutide, which Lilly has licensed to biotech firm Innovent Biologics to develop in China. The drug combines GLP-1 and glucagon.
In a phase 3 study of adults in China who were overweight or had obesity, researchers found that after 48 weeks, a 6-milligram dose of the drug led to an average body weight reduction of 14.4%.
The drug also led to a reduction in serum uric acid — a chemical that can build up in the bloodstream, causing health problems, and has been associated with obesity, according to Dr. Linong Ji, director of the Peking University Diabetes Center, who presented the findings.
That was “quite unique and never reported for other GLP-1-based therapies,” he said in an interview.
The drug could be approved in China in 2025, Ji said.
An estimated 75% of people with obesity have nonalcoholic fatty liver disease and 34% have MASH, or metabolic dysfunction-associated steatohepatitis, according to researchers with the German drugmaker Boehringer Ingelheim. Fatty liver disease occurs when the body begins to store fat in the liver. It can progress to MASH, when fat buildup causes inflammation and scarring.
In a phase 2 trial of people who were overweight or had obesity, Boehringer Ingelheim’s survodutide, which uses both GLP-1 and glucagon, led to weight loss of 19% at 46 weeks. Another phase 2 study in people with MASH and fibrosis found that 83% of participants also showed improvement in MASH.
Survodutide “has significant potential to make a meaningful difference to people living with cardiovascular, renal and metabolic conditions,” said Dr. Waheed Jamal, Boehringer Ingelheim’s corporate vice president and head of cardiometabolic medicine.
On Friday, the company released two studies on the drug. One, in hamsters, found that weight loss was associated with improvements in insulin and cholesterol. The second, in people with Type 2 diabetes or people with obesity, found the drug helped improve blood sugar levels.
The company is looking to begin a phase 3 trial.
CLARIFICATION (June 24, 2024, 2:31 p.m. ET): Innovent Biologics has entered into an exclusive licensed agreement with Eli Lilly for the development of mazdutide in China, not a partnership.
Berkeley Lovelace Jr. is a health and medical reporter for NBC News. He covers the Food and Drug Administration, with a special focus on Covid vaccines, prescription drug pricing and health care. He previously covered the biotech and pharmaceutical industry with CNBC.
Who Are the Best Teams in EA SPORTS™ College Football 25?
Hey College Football Fans,
Welcome back to the Campus Huddle! This week, we have a special “living” edition of the Campus Huddle, centered around Rankings Week.
So what is Rankings Week?
It’s a time to celebrate various EA SPORTS™ College Football 25 rankings, from the Toughest Places to Play, to the Top Offenses and Defenses, to our final Team Power Rankings before the worldwide launch on July 19. Plus, we’ll have our Sights and Sounds Deep Dive coming Wednesday, showcasing the incredible and unique presentation features coming to EA SPORTS™ College Football 25.
The full Rankings Week schedule can be seen here:
We laid out the significant impact that Homefield Advantage can have on the outcome of games in EA SPORTS™ College Football 25 during our Gameplay Deep Dive Campus Huddle. Audio and in-game modifiers such as blurred routes, incorrect play art, confidence and composure effects, and screen shaking are some of the immersive impacts away teams and players will be forced to contend with.
But not all Homefield Advantages are created equal. The Development Team worked to compile a list of the Top 25 Toughest Places to Play, factoring in historical stats such as home winning %, home game attendance, active home winning streaks, team prestige, and more.
Rankings are subject to change in future updates.
In case you missed it, Kirk Herbstreit is back with our next Deep Dive, taking a look at the sights and sounds featured in EA SPORTS™ College Football 25. The Development Team spent years bringing countless traditions, mascots, fight songs, and more to the game, ensuring all 134 schools and fan bases were represented with pride. These elements make College Football special and unique, bringing the unmatched feeling of game day to your fingertips.
For even more on the presentation elements and how they come to life, check out the latest Campus Huddle hosted by Senior Game Designer Christian Brandt.
The Development Team meticulously examined hundreds of thousands of data points to arrive at our team power rankings. With help from our friends at Pro Football Focus (PFF), the team analyzed all 134 rosters, thousands of players, years worth of game film, and mountains of stats, ultimately arriving at our Team Power Rankings.
Here are the Top 25 offenses in EA SPORTS™ College Football 25:
As the old saying goes, “Defense wins championships.” Here are the Top 25 defenses in EA SPORTS™ College Football 25:
And the moment you’ve all been waiting for! Here are the Top Teams in EA SPORTS™ College Football 25.
Let us know what you think! Join the conversation today by following EA SPORTS™ College Football 25 on social media and rep your school. Next week, we’ll have even more information to share including our Dynasty Deep Dive where we explore the ins and outs of the mode, recruiting, and more!
College Football 25 launches worldwide on July 19th, 2024. Pre-order the Deluxe Edition* or the EA SPORTS™ MVP Bundle** and play 3 days early. Conditions and restrictions apply. See disclaimers for details. Stay in the conversation by following us on Facebook, Twitter, Instagram, YouTube, and Answers HQ.
Pre-order the MVP Bundle*** to make game day every day, and get both Madden NFL 25 and College Football 25 with exclusive content.
Formation Bio, a startup focused on applying AI to drug development with backing from OpenAI CEO Sam Altman, has raised over a quarter-billion dollars to support its ambitious product roadmap.
Formation announced Wednesday that it raised $372 million in a Series D funding round led by Andreessen Horowitz with participation from drug maker Sanofi, Sequoia, Thrive, Emerson Collective, Lachy Groom, SV Angel Growth and FPV Ventures. The new tranche brings Formation’s total raised to more than $600 million (according to PitchBook), which the company says is being put mainly toward partnership acquisition efforts and R&D.
Formation declined to reveal its new valuation. But a spokesperson told TechCrunch that it’s a “material step up” from $1 billion, Formation’s Series C valuation.
The company, which previously went by the brand TrialSpark, was co-founded by Benjamine Liu and Linhao Zhang in 2016. Liu has a background in computational biology, having conducted neuroscience research at Oxford and UPenn. Zhang is a software developer by trade and worked at Salesforce before joining Oscar Insurance as a product engineer.
Formation builds tech-forward solutions for clinical trials and drug development. The company licenses drug IP from and co-develops drugs with biotech and pharma companies, and develops these drugs past clinical proof-of-concept.
Drug development is a notoriously expensive and challenging endeavor. It takes 10 to 15 years on average to take a drug from initial discovery through regulatory approval, with the cost per drug reaching up to $5.5 billion. And an estimated 90% of drugs fail to reach the market.
Formation claims it’s able to run clinical trials more efficiently by streamlining processes such as study startup, participant recruitment and data management. For example, the company is currently deploying AI to generate patient recruitment materials and reports for “adverse events.” It’s also fine-tuning AI models to provide drug development teams recommendations for R&D decisions and better predict drug toxicity, tolerability and efficacy.
Last month, Formation announced a collaboration with OpenAI and Sanofi to jointly design and develop customized AI solutions for drug development. OpenAI said it would contribute access to AI capabilities and expertise, and Sanofi said it would bring proprietary data for developing AI models.
OpenAI’s involvement gives the appearance of a conflict of interest, given that Altman was involved in Formation’s Series C fundraising. OpenAI PR told TechCrunch the deal was led by OpenAI COO Brad Lightcap and OpenAI’s board of directors but didn’t indicate whether Altman, who’s on the board, recused himself.
“At Sanofi, we’re all in on AI,” Sanofi CEO Paul Hudson said in a press release. “And we are proud to partner with and invest in Formation Bio, whose AI-driven drug development vision and capabilities will help lead our industry forward in the shared ambition to accelerate and improve how we bring more new medicines to patients.”
Formation has three drug candidates in its clinical pipeline, including treatments for chronic hand eczema, sensory neuropathy and knee osteoarthritis. The furthest along is the eczema treatment, which recently reached phase 3 — the last stage of testing before a drug is submitted to regulatory authorities.
A number of startups are attempting to pioneer AI-powered tech for drug development, including EvolutionaryScale, which emerged from stealth this week with investments from Amazon and Nvidia. Market research firm Markets and Markets anticipates that the market for AI in drug discovery will be worth $4.9 billion by 2028. Major players in the space include Xaira (which launched with $1 billion), DeepMind spin-off Isomorphic, Insilico, Jeff Dean-backed Profluent, Enveda and Causaly.
7 Main Developmental Theories. Child development theories focus on explaining how children change and grow over the course of childhood. These developmental theories center on various aspects of growth, including social, emotional, and cognitive development. The study of human development is a rich and varied subject.
Researchers have turned to other mechanisms (e.g., Legare, 2019), but the source of new acquisitions remains a central topic. Another developmental theory that was influential in the 70's and 80's and has waned in influence over the years is Gibson (1969) theory of perceptual learning. This theory was attractive because it showed how perceptual ...
Jean Piaget (1896-1980) was a renowned psychologist of the 20th century and a pioneer in developmental child psychology. Piaget did not accept the prevailing theory that knowledge was innate or a priori. Instead, he believed a child's knowledge and understanding of the world developed over time, through the child's interaction with the world, empirically. His cogitations on cognitive ...
Theories can help explain these and other occurrences. Developmental theories explain how we develop, why we change over time, and the influences that impact development. A theory guides and helps us interpret research findings as well. It gives the researcher a blueprint or model to help piece together various studies.
Chapter 2: Developmental Theories. At the end of this lesson, you will be able to…. Define theory. Describe Freud's theory of psychosexual development. Describe the parts of the self in Freud's model (id, ego, superego). Appraise the strengths and weaknesses of Freud's theory. List and apply Erikson's eight stages of psychosocial ...
A theory is broad in nature and explains larger bodies of data. A hypothesis is more specific and makes a prediction about the outcome of a particular study. Working with theories is not "icing on the cake." It is a basic ingredient of psychological research. Like other scientists, psychologists use the hypothetico-deductive method.
Psychodynamic Theory. We begin with the often controversial figure, Sigmund Freud. Freud has been a very influential figure in the area of development; his view of development and psychopathology dominated the field of psychiatry until the growth of behaviorism in the 1950s. Freud's assumption that personality forms during the first few years ...
The resulting research supports a new view of development that is much more comprehensive (and far more complex) than was possible even in the late 1990s, when developmental psychologists still generally focused on one domain of behavior (e.g., theory of mind) at a time, considered it at one level of analysis (e.g., cognitive), in one age range ...
Theories can help explain these and other occurrences. Developmental theories offer explanations about how we develop, why we change over time and the kinds of influences that impact development. A theory guides and helps us interpret research findings as well. It provides the researcher with a blueprint or model to be used to help piece ...
Developmental Psychology: A Definition. Psychology (from Greek psyche = breath, spirit, soul and logos = science, study, research) is a relatively young scientific discipline.Among the first to define Psychology was James who defined it as "the science of mental life, both of its phenomena and their conditions."Today, Psychology is usually defined as the science of mind and behavior ...
This is a book about lifespan human development —the ways in which people grow, change, and stay the same throughout their lives, from conception to death. When people use the term development , they often mean the transformation from infant to adult. However, development does not end with adulthood.
FROM FETAL ORIGINS OF ADULT DISEASE TO DEVELOPMENTAL ORIGINS OF HEALTH AND DISEASE. Barker's hypothesis stimulated a great deal of worldwide interest and activity in the area of developmental plasticity, Gillman et al 5 summarized in a report of the meetings of the World Congress on Fetal Origins of Adult Disease that were convened in 2001 (Mumbai, India) and 2003 (Brighton, United Kingdom ...
Developmental Psychology® publishes articles that significantly advance knowledge and theory about development across the life span. The journal focuses on seminal empirical contributions. The journal occasionally publishes exceptionally strong scholarly reviews and theoretical or methodological articles. Studies of any aspect of psychological ...
The concept. The 'Developmental Origins of Health and Disease (DOHaD)' hypothesis, a rather more recent term for the concept initially proposed and called 'Fetal Origins of Adult Disease' in the 1990s, 1 postulates that exposure to certain environmental influences during critical periods of development and growth may have significant consequences on an individual's short- and long ...
hanism. Here, I discuss a contrasting but complementary genetic hypothesis regarding the developmental origins of health and disease theory: crosstalk between the genomes of the parents and offspring is responsible for shaping and adapting responses to environmental stresses, regulating early growth and predisposition to non-communicable diseases. Genetic variants that are beneficial in terms ...
The Two-Source Hypothesis (2SH) has been the predominant source theory for the synoptic problem for almost a century and half. Originally conceived in Germany by Ch. H. Weisse in 1838, the 2SH came to dominate German protestant scholarship after the fall of the Tübingen school with H. J. Holtzmann's endorsement of a related variant in 1863.
Developmental psychology is the study of how humans grow, change, and adapt across the course of their lives. Developmental psychologists research the stages of physical, emotional, social, and intellectual development from the prenatal stage through infancy, childhood, adolescence, and adulthood. This article covers developmental psychology ...
Developmental origins of health and disease (DOHaD) is an approach to medical research factors that can lead to the development of human diseases during early life development. These factors include the role of prenatal and perinatal exposure to environmental factors, such as undernutrition, stress, environmental chemical, etc. This approach includes an emphasis on epigenetic causes of adult ...
Developmental Origins Hypothesis Source: A Dictionary of Epidemiology Author(s): Miquel Porta. Access to the complete content on Oxford Reference requires a subscription or purchase. Public users are able to search the site and view the abstracts and keywords for each book and chapter without a subscription.
The MoSCoW method (also known as MoSCoW prioritization or MoSCoW analysis) is a prioritization technique used in management, business analysis, project management, and software development to reach a common understanding with stakeholders on the importance they place on the delivery of each requirement. The acronym is derived from its four categories: must have, should have, could have, and won't have (or will not have right now); the o's simply make it pronounceable, and some companies also use the "W" to mean "wish." As a four-step approach to identifying which project requirements provide the best return on investment (ROI), it is used across a variety of business disciplines.
The main difference between classical MoSCoW and our version of the technique is an added level of prioritization within the Must, Should, and Could groups. Each of these groups gets four further complexity categories, with higher numbers marking harder work: 3 for the heaviest and least clear requirements, 2 for heavy complexity, and so on down the scale.
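A two-level scheme like this (a MoSCoW category plus an in-category complexity level) is straightforward to sketch in code. The following Python is a minimal illustration, not part of any standard MoSCoW tooling; the class names, fields, and example requirements are all assumptions made for the sketch:

```python
from dataclasses import dataclass
from enum import IntEnum

class Priority(IntEnum):
    """MoSCoW categories, ordered from most to least important."""
    MUST = 0
    SHOULD = 1
    COULD = 2
    WONT = 3   # "won't have (right now)"

@dataclass
class Requirement:
    name: str
    priority: Priority
    complexity: int = 1  # optional in-category level; higher = heavier/less clear

def moscow_order(requirements):
    """Sort by MoSCoW category first, then by descending complexity within
    each category, so the heaviest must-haves surface at the top."""
    return sorted(requirements, key=lambda r: (r.priority, -r.complexity))

reqs = [
    Requirement("export to CSV", Priority.COULD),
    Requirement("user login", Priority.MUST, complexity=3),
    Requirement("password reset", Priority.MUST, complexity=1),
    Requirement("dark mode", Priority.WONT),
    Requirement("audit log", Priority.SHOULD, complexity=2),
]

for r in moscow_order(reqs):
    print(f"{r.priority.name:>6}  complexity={r.complexity}  {r.name}")
```

Using an `IntEnum` keeps the category ordering explicit and comparable, and the tuple sort key means the complexity sub-level only breaks ties within a category, never across categories.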