When submitting an article to Young Children, please adhere to the following formatting and submission guidelines.
If a manuscript is not formatted correctly, it will be returned to the author until the appropriate changes are made.
Style guides
Authors should provide accurate and complete information for references and resources. Young Children expects authors to focus on references published within the last 10 years (unless they are seminal sources) in order to reflect the most recent research and data. Use primary references when available and avoid online resources such as Wikipedia. Authors should also use the number of resources appropriate for the length of their manuscript.
Young Children follows Merriam-Webster's Collegiate Dictionary, 11th edition, for spelling and The Chicago Manual of Style, 17th edition, for style and reference formatting.
We encourage authors to include informative, interesting visuals (e.g., high-resolution photographs, children's work samples, charts, and graphs) that enhance the content of the article and promote understanding. This is not a requirement.
The author must confirm completed model release forms for any recognizable person appearing in the author's photos (signed by any adult who appears in the photo and by the legal guardian of any child who appears in the photo). If the author did not take the photos but submits them with the article, the author must confirm that she or he has the right to publish the photos and that the photographer possesses the necessary model releases. The editorial team will reach out to the author upon acceptance to process the forms. NAEYC-approved model release forms are available from the editorial team if needed. Forms must be confirmed and provided to the editorial team before publication. Failure to do so will result in the visuals being excluded from publication.
The visuals themselves can be uploaded as separate files in Editorial Manager as part of the manuscript submission. Do not include them in the body of the article. Young Children does not pay authors for their own photos when they are integral to the content of the article.
Authors are responsible for seeking and maintaining written permission from parents or legal guardians to include photos of children or children's work samples, and for seeking and maintaining written permission to include photos of adults. These permissions must be provided to NAEYC for review prior to publication.
For quoted material longer than 100 words, as well as figures and tables (or the content therein), authors must seek and submit to Young Children written permission from the copyright holder prior to publication.
Young Children receives all submissions electronically through Editorial Manager. After creating an account, authors will find instructions for manuscript submission. Be sure to submit the cover letter, article, and photographs as separate files. Authors can view tutorials on the Editorial Manager website for assistance or e-mail the Young Children editorial staff at [email protected].
With the exception of cluster-topic articles, submissions are generally published 16 to 24 months after acceptance. Authors may check the status of their submissions by logging into their Editorial Manager account.
Please note: Individuals may submit only one article within a six-month period. Young Children's preferred practice is to publish a particular author only once per 12-month period. On rare occasions we make exceptions to best meet the needs of our readers.
Authors may submit only one article at a time. This holds true whether they are the only author, or one of several. If authors have written several articles for submission, they must decide which one to submit first.
After the article has been reviewed, the authors will be notified of its status. After receipt of this notification, the author may submit another article. Thus, only one article per author can be under initial consideration and review at a time.
The Young Children review process generally takes 6–8 months from receipt of manuscript. The process is compressed for cluster articles. The schedule may vary according to the schedule of our reviewers, many of whom are on the academic calendar.
Given the volume of articles we receive, not all articles can be sent out for review, nor can we provide individual feedback on articles that are not reviewed. The editor in chief determines whether articles will go out for review. There are a number of reasons why articles are not sent out for review: sometimes articles do not meet basic guidelines for content, writing style, length, or format; at times, the journal has a backlog of articles or has recently published an article on the same topic; and in some cases, we receive a number of articles for a cluster that address the same topic and age group. The editor in chief might recommend revising an article before it is reviewed by consulting editors. (Timeline: 1 to 16 weeks after receipt)
Articles that meet basic guidelines undergo peer review by NAEYC's consulting editors. The reviewers provide comments and suggestions. NAEYC senior staff may also review articles. (Timeline: 16 to 26 weeks after receipt)
Using all reviews as a guide, the editorial team determines the next step, and the editor in chief notifies the author of the decision via e-mail. When necessary, this correspondence includes the reviewers' feedback and suggestions for enhancing the manuscript. (Timeline: 26 to 32 weeks after receipt)
When authors submit revised articles through Editorial Manager, they must include a summary of how they addressed the reviewers' feedback. (Timeline: within 6 months of the author's receipt of the decision e-mail)
From acceptance to print
It is not possible to determine in advance the exact publication dates of accepted articles (unless for a particular cluster). When planning issues, the editorial team considers the content, style, intended audience, and length of articles, as well as articles’ submission dates.
Authors are notified when their articles are scheduled for publication. They are asked to make updates—sometimes significant—and to complete biography, copyright transfer, and photograph submission and credit forms.
Editing involves substantive editing and copyediting by members of the editorial team. The lead editor returns the edited article to the author via email for final approval before the manuscript enters production. On occasion, last-minute changes in an issue’s content may cause publication of an article to be postponed.
Authors receive a protected PDF copy of their article and have the option to receive two print copies of the issue in which their article appears.
Annie Moses , PhD, Editor in Chief, Young Children
Susan Donsky , Managing Editor, Young Children
Email: [email protected]
Classroom Q&A With Larry Ferlazzo
In this EdWeek blog, an experiment in knowledge-gathering, Ferlazzo will address readers’ questions on classroom management, ELL instruction, lesson planning, and other issues facing teachers. Send your questions to [email protected]. Read more from this blog.
(This is the first post in a two-part series.)
The new question-of-the-week is:
What is the single most effective instructional strategy you have used to teach writing?
Teaching and learning good writing can be a challenge to educators and students alike.
The topic is no stranger to this column—you can see many previous related posts at Writing Instruction.
But I don’t think any of us can get too much good instructional advice in this area.
Today, Jenny Vo, Michele Morgan, and Joy Hamm share wisdom gained from their teaching experience.
Before I turn over the column to them, though, I’d like to share my favorite tool(s).
Graphic organizers, including writing frames (which are basically more expansive sentence starters) and writing structures (which function more as guides and less as “fill-in-the-blanks”) are critical elements of my writing instruction.
You can see an example of how I incorporate them in my seven-week story-writing unit and in the adaptations I made in it for concurrent teaching.
You might also be interested in The Best Scaffolded Writing Frames For Students.
Now, to today’s guests:
Jenny Vo earned her B.A. in English from Rice University and her M.Ed. in educational leadership from Lamar University. She has worked with English-learners during all of her 24 years in education and is currently an ESL ISST in Katy ISD in Katy, Texas. Jenny is the president-elect of TexTESOL IV and works to advocate for all ELs:
The single most effective instructional strategy that I have used to teach writing is shared writing. Shared writing is when the teacher and students write collaboratively. In shared writing, the teacher is the primary holder of the pen, even though the process is a collaborative one. The teacher serves as the scribe, while also questioning and prompting the students.
The students engage in discussions with the teacher and their peers on what should be included in the text. Shared writing can be done with the whole class or as a small-group activity.
There are two reasons why I love using shared writing. One, it is a great opportunity for the teacher to model the structures and functions of different types of writing while also weaving in lessons on spelling, punctuation, and grammar.
It is a perfect activity to do at the beginning of the unit for a new genre. Use shared writing to introduce the students to the purpose of the genre. Model the writing process from beginning to end, taking the students from idea generation to planning to drafting to revising to publishing. As you are writing, make sure you refrain from making errors, as you want your finished product to serve as a high-quality model for the students to refer back to as they write independently.
Another reason why I love using shared writing is that it connects the writing process with oral language. As the students co-construct the writing piece with the teacher, they are orally expressing their ideas and listening to the ideas of their classmates. It gives them the opportunity to practice rehearsing what they are going to say before it is written down on paper. Shared writing gives the teacher many opportunities to encourage their quieter or more reluctant students to engage in the discussion with the types of questions the teacher asks.
Writing well is a skill that is developed over time with much practice. Shared writing allows students to engage in the writing process while observing the construction of a high-quality sample. It is a very effective instructional strategy used to teach writing.
Michele Morgan has been writing IEPs and behavior plans to help students be more successful for 17 years. She is a national-board-certified teacher, Utah Teacher Fellow with Hope Street Group, and a special education elementary new-teacher specialist with the Granite school district. Follow her @MicheleTMorgan1:
For many students, writing is the most dreaded part of the school day. Writing involves many complex processes that students have to engage in before they produce a product—they must determine what they will write about, they must organize their thoughts into a logical sequence, and they must do the actual writing, whether on a computer or by hand. Still, they are not done—they must edit their writing and revise mistakes. With all of that, it's no wonder that students struggle with writing assignments.
In my years working with elementary special education students, I have found that writing is the most difficult subject to teach. Not only do my students struggle with the writing process, but they often have the added difficulties of not knowing how to spell words and not understanding how to use punctuation correctly. That is why the single most effective strategy I use when teaching writing is the Four Square graphic organizer.
The Four Square instructional strategy was developed in 1999 by Judith S. Gould and Evan Jay Gould. When I first started teaching, a colleague allowed me to borrow the Goulds’ book about using the Four Square method, and I have used it ever since. The Four Square is a graphic organizer that students can make themselves when given a blank sheet of paper. They fold it into four squares and draw a box in the middle of the page. The genius of this instructional strategy is that it can be used by any student, in any grade level, for any writing assignment. These are some of the ways I have used this strategy successfully with my students:
* Writing sentences: Students can write the topic for the sentence in the middle box, and in each square, they can draw pictures of details they want to add to their writing.
* Writing paragraphs: Students write the topic sentence in the middle box. They write a sentence containing a supporting detail in three of the squares and they write a concluding sentence in the last square.
* Writing short essays: Students write what information goes in the topic paragraph in the middle box, then list details to include in supporting paragraphs in the squares.
When I gave students writing assignments, the first thing I had them do was create a Four Square. We did this so often that it became automatic. After filling in the Four Square, they wrote rough drafts by copying their work off of the graphic organizer and into the correct format, either on lined paper or in a Word document. This worked for all of my special education students!
I was able to modify tasks using the Four Square so that all of my students could participate, regardless of their disabilities. Even if they did not know what to write about, they knew how to start the assignment (which is often the hardest part of getting it done!) and they grew to be more confident in their writing abilities.
In addition, when it was time to take the high-stakes state writing tests at the end of the year, this was a strategy my students could use to help them do well on the tests. I was able to give them a sheet of blank paper, and they knew what to do with it. I have used many different curriculum materials and programs to teach writing in the last 16 years, but the Four Square is the one strategy that I have used with every writing assignment, no matter the grade level, because it is so effective.
Joy Hamm has taught 11 years in a variety of English-language settings, ranging from kindergarten to adult learners. The last few years working with middle and high school Newcomers and completing her M.Ed in TESOL have fostered stronger advocacy in her district and beyond:
A majority of secondary content assessments include open-ended essay questions. Many students falter (not just ELs) because they are unaware of how to quickly organize their thoughts into a cohesive argument. In fact, the WIDA CAN DO Descriptors list level 5 writing proficiency as “organizing details logically and cohesively.” Thus, the most effective cross-curricular secondary writing strategy I use with my intermediate LTELs (long-term English-learners) is what I call “Swift Structures.” This term simply means reading a prompt across any content area and quickly jotting down an outline to organize a strong response.
To implement Swift Structures, begin by displaying a prompt and modeling how to swiftly create a bubble map or outline beginning with a thesis/opinion, then connecting the three main topics, which are each supported by at least three details. Emphasize this is NOT the time for complete sentences, just bulleted words or phrases.
Once the outline is completed, show your ELs how easy it is to plug in transitions, expand the bullets into detailed sentences, and add a brief introduction and conclusion. After modeling and guided practice, set a 5-10 minute timer and have students practice independently. Swift Structures is one of my weekly bell ringers, so students build confidence and skill over time. It is best to start with easy prompts where students have preformed opinions and knowledge in order to focus their attention on the thesis-topics-supporting-details outline, not struggling with the rigor of a content prompt.
Here is one easy prompt example: “Should students be allowed to use their cellphones in class?”
Swift Structure outline:
Thesis - Students should be allowed to use cellphones because (1) higher engagement (2) learning tools/apps (3) gain 21st-century skills
Topic 1. Cellphones create higher engagement in students...
Details: A. interactive (Flipgrid, Kahoot)
B. less tempted by distractions
C. teaches responsibility
Topic 2. Furthermore,...access to learning tools...
Details: A. Google Translate description
B. language practice (Duolingo)
C. content tutorials (Khan Academy)
Topic 3. In addition,...practice 21st-century skills...
Details: A. prep for workforce
B. access to information
C. time-management support
This bare-bones outline is like the frame of a house. Get the structure right, and it’s easier to fill in the interior decorating (style, grammar), roof (introduction) and driveway (conclusion). Without the frame, the roof and walls will fall apart, and the reader is left confused by circuitous rubble.
Once LTELs have mastered creating simple Swift Structures in less than 10 minutes, it is time to introduce complex questions similar to prompts found on content assessments or essays. Students need to gain assurance that they can quickly and logically explain and justify their opinions on multiple content essays without freezing under pressure.
Thanks to Jenny, Michele, and Joy for their contributions!
Please feel free to leave a comment with your reactions to the topic or directly to anything that has been said in this post.
Consider contributing a question to be answered in a future post. You can send one to me at [email protected] . When you send it in, let me know if I can use your real name if it’s selected or if you’d prefer remaining anonymous and have a pseudonym in mind.
You can also contact me on Twitter at @Larryferlazzo .
Education Week has published a collection of posts from this blog, along with new material, in an e-book form. It’s titled Classroom Management Q&As: Expert Strategies for Teaching .
Just a reminder: you can subscribe and receive updates from this blog via email. (The RSS feed for this blog, and for all Ed Week articles, has been changed by the new redesign—new ones are not yet available.) And if you missed any of the highlights from the first nine years of this blog, you can see a categorized list below.
I am also creating a Twitter list including all contributors to this column .
The opinions expressed in Classroom Q&A With Larry Ferlazzo are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.
Writing for Education
Writing Lesson Plans
If you plan to certify to teach, you will be asked to write lesson plans in many different classes. Lesson planning lies at the heart of good teaching, and written plans represent the most structured writing assignments you will do in education classes. A good lesson plan describes all the critical elements of your teaching plan, including what you intend for your students to learn, how the lesson will proceed, and how you will know that your lesson reached your goals. In good lesson plans these three elements (objectives, instructional activities, and assessments) are very clearly connected, and they inform each other.
Education is a field that bridges anthropology, sociology, psychology, science, and philosophy. When writing about education, you will utilize a myriad of writing styles and formats to address your essay topics. Your writing should always:
1) Be tailored for the audience of the educational community
2) Be tailored for the type or purpose of writing in education
3) Use formal, specific, and precise language
4) Be credibly sourced and free of plagiarism
5) Convey clear, complete, and organized communication
6) Use correct English language conventions
7) Be correctly formatted and styled
The types of writing in education include reflective writing, persuasive writing, analytic writing, and procedural writing.
Types of Papers
As an education student, you may be asked to write many different types of papers.
Burdened by expanding curriculum and multiplying high-stakes assessment requirements, some of my respected colleagues might be forgiven for not integrating student journals into their courses. The most common objection: "Who has time?"
"What instructor doesn't have time for student journaling?" is my typical reply, a non-answer that halts further conversation by employing a rhetorical cul-de-sac familiar to high-school debaters. To atone, I'll summarize research on journaling, identify my favorite reflective writing formats, and describe a labor-saving method of teacher response.
The benefits of students integrating journal writing across the curriculum are amply documented. From a teacher's perspective, there are few activities that can trump journal writing for understanding and supporting the development of student thinking. Journaling turbo-charges curiosity. The legendary Toby Fulwiler, author of The Journal Book, writes, "Without an understanding of who we are, we are not likely to understand fully why we study biology rather than forestry, literature rather than philosophy. In the end, all knowledge is related; the journal helps clarify the relationship."
Annette Lamb and Larry Johnson's 42explore presents implementation advice and describes different journal formats. Introducing a range of reflective genres can encourage students to generalize about their content attitudes. Every subject area "pot" has its own reflective "lid," allowing teachers a peek into the metacognitive soup of students' misconceptions and insights. For example, here is a format that supports scientific reflection: "Today I observed... I predict that... I also measured... I concluded that..."
One of my favorites, the microtheme, supports comprehension, extends thinking, improves confidence, and bolsters writing across the content areas. I've run into different versions. In one, students write a summary of a reading, lecture, demonstration, or experiment on the back of an index card. Teachers collect the note cards and write responses to the students on the other side. Microthemes quickly activate thinking before whole-class discussions.
But, while essentially all reflective writing formats yield benefits, there is a problem...
For years, I've taken home crates of journals on the weekend and responded with a Theseusian intensity that has crushed classroom preparation time and personal leisure, and has exasperated friends and family. To lessen the time costs, I tried skimming journals. My token analysis, however, signaled students to submit journals that were equivalently weak ("If he doesn't care, why should we?").
So, how do you implement journals, make them a priority, and reduce responding time?
Premised on the notion that students should assess their own writing, Terri Van Sickle, a virtuoso instructor and writer for Crystal Coast Parent Magazine, teaches her classes to use a rich and organic process of open-ended reflection that works well as a culminating journal activity.
Whether your students write in daybooks, double-entry notebooks, or academic journals, you can use the following instruction sheet to help students self-reflect.
Assignment Introduction: The following questions will help you to deeply examine the thinking, interactions, exercises, and writing you have experienced over the course of the semester.
1. Reading and Marking: Read through your entire journal. Identify and star (*) 10 passages that seem most significant to you as a learner of the subject matter in this course. You might choose an entry that was written when you were thinking on all cylinders, discovering something revelatory, engaging in higher order thinking, struggling with an idea that was only partially formed, or experiencing confusion. Maybe you were able to transcend the classroom conversations and texts to come up with an original idea. These 10 passages should be as varied as possible and, taken together, provide a full portrait of you as a learner of this course's content. Next, double star (**) five of the passages most significant to you. Why did you choose these five sections? What generalizations can you make about yourself as a writer and learner?
2. Letter to Reader: Write a letter to your reader, describing the items you starred and explaining how and why you chose them. Also, reflect on the following:
3. Final Check: Is your name, class, and date written on the cover? Make sure your journal has a complete table of contents, page numbers on every page, and that each entry is dated. If you were absent on a day when we used journals in class, enter "absent" next to the date.
I allow a full class period or more for students to follow these instructions. Many adolescents wrestle with critical reflection and therefore may need more individual help or modeling.
By primarily focusing my commentary on students' starred passages and reflective letters, I acquire a snapshot of the students' understanding of course content and save 3-4 hours on every set of 30 semester-length journals. Even though I only collect journals one time per semester, I can meet students' eyes, knowing that I haven't neglected journal segments that they wanted me to read.
Coda: The three best albums to write reflections to:
1. "Kind of Blue" by Miles Davis 2. "The Last Temptation of Christ" (Soundtrack) by Peter Gabriel 3. "Unleft" by Helios
-- Todd Finley's Twitter address is @finleyt .
26 Education & Teaching Magazines and Websites That Pay Freelance Writers
Dear Writers,
Here’s a roundup of magazines and websites that publish writing about education.
All of these publishers pay freelance writers. We’ve noted payment information, when available. Also, keep in mind that payment rates are not set in stone.
Also, if you are a teacher, there are many opportunities out there to use your expertise to get paid as a freelance writer. This article gives helpful advice.
PTO Today is the magazine for leaders of parent-teacher organizations. They’re published 6 times a year. They publish articles about parental involvement, leadership, fundraising, working with school staff, etc. They pay $125 to $500 (down from $200 to $700!) for features. To learn more, read their submission guidelines.
The Change Agent publishes articles written by adult educators and students. Published biannually, the magazine's pieces promote advocacy skills and social action. They pay a $50 stipend for accepted articles. To learn more, read their submission guidelines.
TakeLessons is an educational site that connects teachers with students. They invite writers to join their team of teachers and submit articles to their blog. Teachers can choose from a list of topics and write a 500-800 word post for consideration. They pay $50 per post by a site-registered teacher; non-teachers do not receive payment. To learn more, read their submission guidelines.
American Educator is published quarterly by the American Federation of Teachers. It addresses the state of education across the country and covers new trends in education, politics, labor issues, and more. They pay at least $300 for articles, which typically run 1,000 to 5,000 words. To learn more, read their submission guidelines.
Learning for Justice (formerly Teaching Tolerance) publishes articles for a national audience of pre-K through 12 educators with a focus on diversity and social justice. They accept freelance submissions for articles, blog posts, and lessons that reflect their perspective. They pay up to $1 a word for features and their Story Corner section. To learn more, read their submission guidelines.
Education Forum is the official magazine of the Ontario Secondary School Teachers’ Federation. They are “a progressive voice on public education and on all issues affecting those that work in public education. ” They reach 60,000 public education workers in Ontario. They pay $500 for features. To learn more, read their submission guidelines .
SchoolArts Magazine publishes information on teaching art in schools. They’re looking for conversational articles that share “successful lessons, areas of concern, and approaches to teaching art.” They pay up to $100 per article. To learn more, read their submission guidelines.
The Old Schoolhouse Magazine is a magazine for Christian homeschoolers. Articles can be from parents of homeschooled children or those with an interest in the topic. They have set out themes and deadlines for 2018. Query first. Length: 800 words. Pay: $50. Details here .
Back to College publishes information for adult re-entry students who are pursuing an advanced degree. They accept unsolicited articles that discuss all aspects of the re-entry experience, from finding financial aid to mastering online education. They appear to only accept submissions via mail. They pay $65 and up for features. To learn more, read their submission guidelines.
Practical Homeschooling Magazine is a print and digital magazine that features the latest educational trends, useful how-tos, and practical answers to the toughest homeschooling questions. They are looking for “practical articles (with resource lists and, ideally, photos) that explain how to meet some homeschool challenge or how to venture forth into some new area.” They pay $50 per article. To learn more, read their writer’s guidelines.
WeAreTeachers is an online media brand for educators. They welcome submissions on a wide range of topics related to teacher life and education. Before submitting, they recommend reviewing their blog to understand their style, format, and tone. Most of their blog posts are 500 to 700 words long. If they publish the submission, they pay an honorarium of $100. To learn more, visit this page .
Texas Adult Education & Literacy Quarterly is a publication of the Texas Center for the Advancement of Literacy & Learning (TCALL) at Texas A&M University. They address “topics of concern to adult education and literacy practitioners, policymakers, and scholars.” They are looking for articles that are no longer than 900 words. They pay a stipend of $50 to $250 per article. Further details can be found here .
Living Education is an online journal that celebrates and explores issues that are of relevance to homeschooling families. They are “especially interested in articles that highlight unique and innovative paths that the educational journey can take.” They want the articles to be up to about 1,000 words long. They pay $50 per piece. For details, visit this page .
Faramira publishes quizzes on English vocabulary, general knowledge, basic mathematics, and general science to help people prepare for aptitude tests. They are seeking articles (500 to 800 words) from experienced freelance writers. The articles can be on “fashion & beauty, health & fitness, family & wellbeing, education, finance, personal growth, leadership, productivity, food & recipes, technology, social media, and entertainment.” They pay $8 to $30 per article to their paid writers. Details here . But with such low rates, why bother?
The Hechinger Report is an independent, nonprofit news organization that focuses on inequality and innovation in education. They provide in-depth, fact-based, and unbiased reporting on education. Payment reports indicate that they pay up to $1.50 per word. To contact them, refer to this page .
The URMIA Journal is an annual scholarly publication by the University Risk Management and Insurance Association (URMIA), an international non-profit educational association that serves colleges and universities. The journal features peer-reviewed articles that contain “in-depth analysis on a broad range of risk management topics of concern in higher education.” They offer an honorarium of $300 per article (2,500 to 7,500 words). Details here .
The James G. Martin Center for Academic Renewal is a “nonprofit institute dedicated to improving higher education in North Carolina and the nation.” They are accepting unsolicited article submissions on topics including “higher education administration, finances, governance, academic standards, efficiency, enrollment, employment, pedagogy, and the curriculum, as well as exposure of bias, politicization, corruption, and poor practices.” They pay an honorarium that begins at $200 and increases with the amount of web traffic. Details here .
The Advocate is a newspaper for the students, faculty, and staff of the Graduate Center (GC), City University of New York (CUNY). They accept articles, reviews, photos, and illustrations from the students, faculty, and staff of CUNY as well as those who are not affiliated with CUNY. They accept articles on a wide range of topics including GC/CUNY issues; teaching and graduate life; New York City’s politics, culture, and art; local, national, and international issues; science and technology; and book, theater, film, music, and art. They pay $100 to $150 per article (1,000 to 3,000 words). To contribute, refer to this page .
E-Book Web is a magazine about eBooks, reading, education, and more. They are looking for book reviews, tips for increasing reading productivity, interviews, and education-related content. They pay $50 for an article of 600 to 1,500 words, and $75 for an article of more than 1,500 words. They pay $100 for interviews of authors and other education professionals. Details here.
The Medic Blog is a self-study resource to prepare for UCAT, BMAT, and GAMSAT. They welcome submissions from candidates who have taken UCAT, BMAT, or GAMSAT. They pay up to £100 per article. To learn more, refer to this page .
Generation Mindful creates educational products that build emotional intelligence and help connect the generations. They are interested in articles on the following topics: “social emotional learning, positive discipline, supportive classrooms, the power of play, mindfulness, foster families, co-regulation, Calming Corners, home schooling, early childhood education, special education, ADHD, autism, and childhood trauma.” They previously indicated payment of $75 per published post, but now ask that you specify your rate with the submission. Details here .
College & University (C&U) is a quarterly journal by American Association of Collegiate Registrars and Admissions Officers (AACRAO). They pay an honorarium of $300 for a feature article (refereed article) and $150 for a forum article (commentary, analysis, book review, and international resource). To learn more, refer to this page .
STEMTaught is an organization that is dedicated to improving the accessibility of STEM (Science, Technology, Engineering, and Math) education to elementary school students. They are looking for article submissions. They pay $100 for the author’s efforts and creativity. To learn more, visit this page .
Teachers & Writers Magazine is “published by Teachers & Writers Collaborative to provide resources and inspiration in support of our stated mission: teaching creative writing and educating the imagination.” They are looking for the following type of articles: Favorite Classroom Writing Prompts ($75 for 500-750 words), Narrative Lesson Plans ($100 for 750-2,000 words), The Art of Teaching Writing ($150 for 1,000+ words), Interviews ($150-$350 for 1,000-2,500 words), Profiles ($150 for 1,000-2,500 words), Redefining the Canon ($150 for 1,000-2,500 words), and Essays and Editorial Responses ($150 for 1,000-2,000 words). For details, read their submission guidelines .
EdTech Magazine explores “technology and education issues that IT leaders and educators face when they’re evaluating and implementing a solution for K-12 and Higher Ed.” They are always seeking new writing talent. According to their associate editor , they pay $0.50 to $1.00 per word for articles of 800 to 1,200 words. To learn more, refer to this page .
Chalkbeat is a nonprofit news organization that reports on education in poor communities across America. They elevate the “voices of educators, students, parents, advocates, and others on the front lines of trying to improve public education.” They are looking for personal essays (around 800 words) centered around a personal experience or observation. They publish these essays in a series called First Person. According to their story editor , they pay $100 per personal essay. If interested, send your pitches or drafts to [email protected] . For more information, read their first person guidelines .
This page contains reference examples for journal articles.
Grady, J. S., Her, M., Moreno, G., Perez, C., & Yelinek, J. (2019). Emotions in storybooks: A comparison of storybooks that represent ethnic and racial groups in the United States. Psychology of Popular Media Culture , 8 (3), 207–217. https://doi.org/10.1037/ppm0000185
Jerrentrup, A., Mueller, T., Glowalla, U., Herder, M., Henrichs, N., Neubauer, A., & Schaefer, J. R. (2018). Teaching medicine with the help of “Dr. House.” PLoS ONE , 13 (3), Article e0193972. https://doi.org/10.1371/journal.pone.0193972
Journal article without a volume number:
Lipscomb, A. Y. (2021, Winter). Addressing trauma in the college essay writing process. The Journal of College Admission , (249), 30–33. https://www.catholiccollegesonline.org/pdf/national_ccaa_in_the_news_-_nacac_journal_of_college_admission_winter_2021.pdf
Sanchiz, M., Chevalier, A., & Amadieu, F. (2017). How do older and young adults start searching for information? Impact of age, domain knowledge and problem complexity on the different steps of information searching. Computers in Human Behavior , 72 , 67–78. https://doi.org/10.1016/j.chb.2017.02.038
Butler, J. (2017). Where access meets multimodality: The case of ASL music videos. Kairos: A Journal of Rhetoric, Technology, and Pedagogy , 21 (1). http://technorhetoric.net/21.1/topoi/butler/index.html
Joly, J. F., Stapel, D. A., & Lindenberg, S. M. (2008). Silence and table manners: When environments activate norms. Personality and Social Psychology Bulletin , 34 (8), 1047–1056. https://doi.org/10.1177/0146167208318401 (Retraction published 2012, Personality and Social Psychology Bulletin, 38 [10], 1378)
de la Fuente, R., Bernad, A., Garcia-Castro, J., Martin, M. C., & Cigudosa, J. C. (2010). Retraction: Spontaneous human adult stem cell transformation. Cancer Research , 70 (16), 6682. https://doi.org/10.1158/0008-5472.CAN-10-2451
The Editors of the Lancet. (2010). Retraction—Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. The Lancet , 375 (9713), 445. https://doi.org/10.1016/S0140-6736(10)60175-4
Hare, L. R., & O'Neill, K. (2000). Effectiveness and efficiency in small academic peer groups: A case study (Accession No. 200010185) [Abstract from Sociological Abstracts]. Small Group Research , 31 (1), 24–53. https://doi.org/10.1177/104649640003100102
Ganster, D. C., Schaubroeck, J., Sime, W. E., & Mayes, B. T. (1991). The nomological validity of the Type A personality among employed adults [Monograph]. Journal of Applied Psychology , 76 (1), 143–168. http://doi.org/10.1037/0021-9010.76.1.143
Freeberg, T. M. (2019). From simple rules of individual proximity, complex and coordinated collective movement [Supplemental material]. Journal of Comparative Psychology , 133 (2), 141–142. https://doi.org/10.1037/com0000181
Journal article references are covered in the seventh edition APA Style manuals: Publication Manual Section 10.1 and Concise Guide Section 10.1.
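The patterns in the examples above are regular enough to assemble programmatically. The sketch below is an illustrative helper, not an official APA tool; the class and function names are invented for the example, and it covers only the basic author–date–title–source skeleton (not retractions, seasons, or article numbers).

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class JournalArticle:
    """Minimal fields for an APA-style journal article reference."""
    authors: List[str]          # pre-formatted, e.g. "Grady, J. S."
    year: int
    title: str                  # sentence case, no trailing period
    journal: str                # title case (italicized in print)
    volume: Optional[int] = None
    issue: Optional[str] = None
    pages: Optional[str] = None
    doi: Optional[str] = None

def format_apa(a: JournalArticle) -> str:
    # Join authors with APA's ampersand convention: "A., B., & C."
    if len(a.authors) == 1:
        names = a.authors[0]
    else:
        names = ", ".join(a.authors[:-1]) + ", & " + a.authors[-1]
    ref = f"{names} ({a.year}). {a.title}. {a.journal}"
    if a.volume is not None:
        ref += f", {a.volume}"
        if a.issue:
            ref += f"({a.issue})"
    elif a.issue:
        # No volume: APA sets the issue alone in parentheses,
        # as in the Lipscomb example above.
        ref += f", ({a.issue})"
    if a.pages:
        ref += f", {a.pages}"
    ref += "."
    if a.doi:
        ref += f" https://doi.org/{a.doi}"
    return ref
```

Run against the Joly et al. entry above, `format_apa` reproduces the reference verbatim, DOI included.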
Humanities and Social Sciences Communications, volume 11, Article number: 1086 (2024)
Automated written corrective feedback (AWCF) has been widely applied in second language (L2) writing classrooms in the past few decades. Recently, the introduction of tools based on generative artificial intelligence (GAI) such as ChatGPT heralds groundbreaking changes in the conceptualization and practice of AWCF in L2 pedagogy. However, students’ engagement in such an interactive and intelligent learning environment remains unstudied. The present study aims to investigate L2 writers’ behavioral, cognitive, and affective engagement with ChatGPT as an AWCF provider for writing products. This mixed-method multiple case study explored four L2 writers’ behavioral, cognitive, and affective engagement with AWCF provided by ChatGPT. Bearing the conversational and generative mechanisms of ChatGPT in mind, data on students’ engagement were collected from various sources: prompt writing techniques, revision operations, utilization of metacognitive and cognitive strategies, and attitudinal responses to the feedback. The results indicated that: 1) behavioral engagement was related to individual differences in language proficiency and technological competence; 2) the participants failed to metacognitively regulate their learning processes effectively; and 3) ChatGPT ushered in an affectively engaging, albeit competence-demanding and time-consuming, learning environment for L2 writers. The study delivers conceptual and pedagogical implications for educators and researchers poised to incorporate GAI-based technologies in language education.
Introduction
“Engagement defines all learning” (Hiver et al. 2021 , 2).
In second language (L2) writing, feedback, especially written feedback, is one of the most widely applied and researched topics (Hyland and Hyland 2019 ). In the past decades, the focus of relevant research has shifted from the effects of feedback on writing quality (e.g., Nelson and Schunn 2009 ; Dizon and Gayed 2021 ) towards students’ involvement in processing and utilizing feedback (Zhang 2017 ; Ranalli 2021 ). However, due to the multifaceted and dynamic nature of student engagement with written feedback (Han and Gao 2021 ), the body of existing literature suffers from the lack of multidimensional insights into all the aspects of engagement with feedback (Shi 2021 ).
Meanwhile, with the advancement of technologies, automated written corrective feedback (AWCF) has been widely implemented in L2 classrooms as a pedagogical innovation. Researchers have made continuous contributions to expand our knowledge of 1) the effects of AWCF on the quality of writing products (Barrot 2021); 2) the interplay of AWCF and classroom instruction (Tan et al. 2022); and 3) learners’ perceptions of the utilization of AWCF providers in L2 classrooms (ONeill and Russell 2019). By contrast, thorough investigations of students’ engagement with AWCF have been scant (Koltovskaia 2020). Furthermore, despite the eagerness to incorporate state-of-the-art technologies into L2 classrooms, there remains a lacuna of research on students’ engagement with cutting-edge AWCF providers. Since its advent in late 2022, ChatGPT, a conversational generative artificial intelligence (GAI) chatbot powered by large language models (LLM), has evoked heated hype about its impact on language education (e.g., Jiao et al. 2023; Mizumoto and Eguchi 2023). Specifically, a few pioneering studies have unveiled its strength in outperforming its predecessors in correcting grammatical errors (Fang et al. 2023; Wu et al. 2023). Nevertheless, we confront a dearth of empirical evidence on students’ engagement with AWCF generated by ChatGPT in authentic L2 pedagogical settings.
Against the above backdrops, the study has explored L2 writers’ engagement with AWCF provided by ChatGPT. Theoretically, the research has drawn upon existing studies to reconceptualize student engagement with feedback provided by GAI-based systems. Methodologically, the study adopted a mixed-method multiple case study approach to collect and triangulate data. The paper is significant as it brings new insights into the changes in learning patterns that resulted from students’ exposure to GAI-based feedback providers and the extent to which learners engage with the new environment.
AWCF and the potential of ChatGPT
In recent years, the impact of AWCF, the written corrective feedback (WCF) provided by computerized automated writing evaluation (AWE) tools, on L2 writing pedagogy has grown continuously (Zheng et al. 2021 ). Compared to the traditional teacher-fronted WCF, AWCF has been praised by researchers and educators for its: 1) power to alleviate teachers’ and peers’ burden in L2 classrooms (Ranalli 2018 ); 2) empowering effects in augmenting students’ involvement in revision and proofreading (Li et al. 2015 ); and 3) promptness in providing effective feedback (Barrot 2021 ). However, researchers have conflicting perspectives regarding the efficacy of AWCF compared to WCF. On the one hand, technology-enhanced feedback providers or interventions serve as a significant assistant in facilitating teachers or peers in making an accurate evaluative judgment on writing artifacts, particularly in overcoming evaluation biases or inaccuracies (Wood 2022 ; Gong and Yan 2023 ; Yan 2024a ), for example, the choice between lenient or severe judgment (e.g., Jansen et al. 2021 ) or the tendency to use simple heuristics while forming feedback (e.g., Fleckenstein et al. 2018 ). On the other hand, AWCF has constantly been criticized as inferior to human-generated feedback with the relatively restricted abilities of AWE systems to form accurate and comprehensive evaluations of writing artifacts, particularly the more traditional corpus-based systems such as Pigai.com (Fu et al. 2022 ). Hence, there has been a long-standing pursuit to improve AWE systems in providing individualized and effective AWCF for language learners (Fleckenstein et al. 2023 ).
Recently, with the emergence of AI-based technologies such as Grammarly and QuillBot, researchers’ interest has gradually shifted. According to existing empirical studies, AI-based AWCF providers outperform corpus-based systems by a substantial margin in both feedback uptake and the revision quality of L2 writers (cf. the successful revision rate of merely 60% in Bai and Hu 2017, and approximately 70% in Koltovskaia 2020). Such performance gains suggest that continued technological advancement will further spur the research and implementation of AWCF providers in L2 writing classrooms.
Since the appearance of ChatGPT, researchers have attempted to adopt it as an AWCF provider for L2 writing with promising results. As evidenced by the comparison between ChatGPT and Grammarly by Wu et al. ( 2023 ), the former offers a further improvement over existing AI-based solutions for correcting grammatical errors. Accordingly, researchers have optimistically prophesied the potential of ChatGPT as a significant assistant for language learners in the future (Jiao et al. 2023; Mizumoto and Eguchi, 2023). ChatGPT’s promise as an AWCF provider rests on: 1) its outstanding performance in providing grammatical and syntactical corrections in an accurate and instant fashion (Steiss et al. 2024); 2) the tremendous amount of pre-trained language data that ensures its excellent performance compared to its predecessors (Wu et al. 2023); 3) its ability to iteratively respond to users’ inquiries for feedback, owing to the interactional and conversational mechanism of the human-computer interface (White et al. 2023; Yan 2024b); and 4) the verified enhancement from conversational AI-based chatbots as learning assistants in previous studies (Wu and Yu 2023).
However, we cannot neglect that ChatGPT has its disadvantages; for example, it can hallucinate, producing plausible-sounding but unverified information (Tonmoy et al. 2024). Additionally, since ChatGPT is a conversational chatbot, the quality of ChatGPT-generated feedback is dynamic and subject to the extent to which learners agentically seek and process the feedback (Yan 2024b). Moreover, from a student perspective, the effective and ethical use of ChatGPT calls for a higher level of AI literacy and corresponding support and scaffolding from teachers or peers, both of which are currently inadequate (Yan 2023). Taken together, the effective utilization of ChatGPT in educational settings requires fulfilling its potential while controlling the risks it might bring.
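The iterative, conversational mechanism described above can be made concrete with a short sketch. The code below is illustrative only, not the study’s actual materials: the prompt wording, function names, and the idea of passing in a pluggable `send` backend (so any chat-capable model or API wrapper can be substituted) are all assumptions introduced for this example.

```python
from typing import Callable, Dict, List

Message = Dict[str, str]

def build_feedback_prompt(draft: str, focus: str = "grammar") -> str:
    """Compose an initial AWCF request for a conversational model."""
    return (
        "Act as a writing tutor. Give written corrective feedback on the "
        f"{focus} of the essay below. Number each error, quote it, give a "
        "correction, and briefly explain the rule.\n\nEssay:\n" + draft
    )

def awcf_session(draft: str, follow_ups: List[str],
                 send: Callable[[List[Message]], str],
                 focus: str = "grammar") -> List[Message]:
    """Run one iterative feedback-seeking session.

    `send` is any chat backend (e.g., a wrapper around an LLM API) that
    maps the conversation so far to the assistant's next reply. Each
    learner follow-up (e.g., "explain error 2 more simply") extends the
    same conversation, mirroring ChatGPT's conversational mechanism.
    """
    history: List[Message] = [
        {"role": "user", "content": build_feedback_prompt(draft, focus)}
    ]
    history.append({"role": "assistant", "content": send(history)})
    for turn in follow_ups:
        history.append({"role": "user", "content": turn})
        history.append({"role": "assistant", "content": send(history)})
    return history
```

The design point is that feedback quality is not fixed at the first request: the returned `history` grows with each follow-up, which is exactly the learner behavior (iterative prompt writing) the authors treat as part of behavioral engagement.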
In the pre-ChatGPT era, Ranalli ( 2018 ) has called for an accurate and robust AWCF provider that could interactively answer individual learners’ specific needs and demands. Given the history of the AWCF application and the strength of ChatGPT, the GAI-based system is in the spotlight as a potential problem solver and game changer for the field.
In an era of change, the effects of ChatGPT and similar GAI-based tools on L2 writing remain to be studied. Amid all the overheated hype and unfounded fears about adopting ChatGPT in education since its debut, we expect more empirical studies investigating the actual effects of the tool on language learners. As Zhang ( 2017 ) has suggested, students’ engagement with feedback providers is an indispensable prerequisite to benefiting from technology-mediated language learning facilities. Consequently, a study focusing on learners’ involvement in processing and utilizing the corrective feedback provided by ChatGPT would enrich our limited knowledge of AI-mediated language learning (e.g., Tseng and Warschauer 2023).
In L2 research, engagement has been understood as one of the defining features of students’ active involvement in learning (Mercer 2019 ). For L2 writing, engagement is commonly conceptualized as a tripartite meta-construct composing three key components: behavioral, affective, and cognitive engagement (Ellis 2010 ; Zhang and Hyland 2018 ; Fan and Xu 2020 ). Specifically, behavioral engagement refers to the learning behaviors (Zheng and Yu 2018 ) and strategical choices in translating the received feedback into a revision (Han and Hyland 2015 ); affective engagement represents students’ emotional and attitudinal responses to the feedback (Ellis 2010 ); and cognitive engagement denotes the extent to which the student cognitively perceives the feedback and the subsequent cognitive and metacognitive operations to process and utilize the feedback (Han and Hyland 2015 ).
In recent years, many studies have investigated the three dimensions of student engagement in L2 writing settings equipped with automated feedback providers. On the one hand, researchers have attributed students’ engagement with AWCF to various factors. In a single case study examining engagement with Pigai.com in an EFL context, Zhang ( 2017 ) finds that more teacher scaffolding and pedagogical assistance are needed to facilitate the cognitive engagement of L2 writers learning with AWE systems. In a subsequent multiple case study on engagement with teacher-scaffolded feedback provided by Pigai.com, Zhang and Hyland ( 2018 ) attribute the diversity in learners’ engagement to students’ language proficiency, learning styles, and utilization of learning strategies. As the interest of researchers shifts from traditional AWE systems to AI-based AWCF providers, new perspectives on student engagement emerge. Ranalli ( 2021 ), observing six L1-Mandarin learners, concludes that trust in the quality and credibility of AWCF decisively determines engagement. Furthermore, a recent eye-tracking study reveals that feedback explicitness determines student engagement with AWCF provided by Write & Improve (Liu and Yu 2022 ). On the other hand, contradictory voices are often heard from research on students’ engagement with AWCF. For example, the study by Rad et al. ( 2023 ) indicates the promoting effects of Wordtune, an AI-based writing assistant, on L2 students’ overall engagement. On the contrary, Koltovskaia ( 2020 ) shows that students’ cognitive engagement with the feedback provided by Grammarly is insufficient, although positive affective engagement was reported after using the tool to support writing.
Despite the prolific insights into students’ engagement with AWCF in L2 writing classrooms, scholars have criticized existing research for neglecting key elements, e.g., overlooking students’ involvement in the revision process (Stevenson and Phakiti 2019 ), and the predominance of an outcome-based approach to studying the quality of writing products (Liu and Yu 2022 ). The present study not only embarks on a comprehensive investigation into students’ engagement but also strives to seek a new conceptual departure in L2 pedagogy in the age of AI. Considering the characteristics of ChatGPT as a potential AWCF provider, there exists a lacuna in our understanding of how and to what extent students engage with the new GAI-based feedback provider.
The rationale for revisiting the conceptualization of student engagement with corrective feedback in the context of GAI rests on the paradox between the alleged positive effects of AWCF providers on writing pedagogy (Fang et al. 2023; Wu and Yu 2023) and the reported challenges students encounter in effectively tapping the strength of AI when seeking feedback (Yan 2024b). To frame the decisive factors affecting engagement, the study draws on Ellis’s ( 2010 ) componential framework for investigating corrective feedback. According to the framework, student engagement with corrective feedback is influenced by individual differences and contextual factors. Previous studies have generally attributed learners’ individual differences to language proficiency (Zhang and Hyland 2018; Ranalli 2021). However, with ChatGPT as an AWCF provider, technological competence should also be included as a major aspect of individual competence, since interacting with ChatGPT, via iterative prompt writing and amendment, calls for a higher level of digital literacy (Lee 2023; Naamati-Schneider and Alt 2024).
The tripartite dimensions within the meta-construct of engagement are developed on top of the body of literature. First, the concept of behavioral engagement is expanded. In the study of Zhang and Hyland ( 2018 ), behavioral engagement is deemed to be students’ behaviors to process feedback, i.e., operation and strategies of revision. However, for the present study, an additional aspect of students’ behaviors is considered, i.e., the actions of writing prompts to seek feedback from ChatGPT. Unlike conventional AWE systems and AWCF providers such as Grammarly, the quality, content, and quantity of feedback provided by ChatGPT rely on the user’s interaction with the GAI-based system through iterative and incremental prompt writing (Yan 2023 ). Second, in line with the work by Koltovskaia ( 2020 ), the present study conceptualizes cognitive engagement as students’ utilization of cognitive and metacognitive strategies in processing AWCF; and affective engagement as students’ emotional and attitudinal responses to the AWCF. The conceptual model of student engagement with ChatGPT-generated feedback is graphically shown in Fig. 1 .
Conceptual model of student engagement with ChatGPT-generated AWCF.
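The tripartite construct can also be rendered as a simple coding scheme. The sketch below is invented for illustration and is not the authors’ instrument; the sub-code labels merely echo the data sources named above (prompt writing, revision operations, strategies, and attitudinal responses).

```python
from collections import Counter
from typing import Dict, Iterable

# Three engagement dimensions with illustrative sub-codes drawn from
# the conceptualization above (labels are assumptions, not the study's).
DIMENSIONS = {
    "behavioral": {"prompt_writing", "revision_operation"},
    "cognitive": {"cognitive_strategy", "metacognitive_strategy"},
    "affective": {"emotional_response", "attitudinal_response"},
}

def tally_engagement(episodes: Iterable[str]) -> Dict[str, int]:
    """Count coded episodes per engagement dimension.

    `episodes` is an iterable of sub-code labels assigned during
    qualitative coding (e.g., of journals, screen recordings, or
    interviews). An unknown code raises KeyError, flagging a coding slip.
    """
    code_to_dim = {code: dim for dim, codes in DIMENSIONS.items()
                   for code in codes}
    counts: Counter = Counter()
    for code in episodes:
        counts[code_to_dim[code]] += 1
    return dict(counts)
```

Such a tally is one lightweight way to compare cases: per-dimension counts for each of the four writers would make imbalances (e.g., high behavioral but low metacognitive engagement) immediately visible.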
The study explores L2 writer engagement with AWCF generated by ChatGPT. The following logic guides the research: (1) compared to more traditional approaches to corrective feedback, we face a paucity of comprehensive understanding of student engagement with AWCF; (2) compared to AWE systems such as Pigai.com, we have barely any knowledge of how ChatGPT’s unique features, such as its outstanding text generation abilities, interactive and conversational interfaces, and iterative feedback generation capabilities, would affect L2 writer engagement with AWCF; and (3) given that effective use of ChatGPT calls for a higher level of domain knowledge and AI competence, we need to examine how these individual characteristics influence L2 writer engagement with ChatGPT-generated AWCF. Therefore, the following research question is addressed:
How do L2 writers with varied language proficiency and technological competence behaviorally, cognitively, and affectively engage with AWCF provided by ChatGPT?
The study’s research site was an undergraduate EFL program at a Chinese university. Students enrolled in this program had to take three writing courses in which formative assessment and technology-enhanced feedback were practiced. Therefore, the students were relatively experienced in learning-oriented assessment practices.
The participants were recruited from a pool of students previously involved in a pilot project investigating the impact of ChatGPT on L2 learners (Yan 2023). A purposeful sampling method was applied to select four participants with distinct characteristics in language proficiency and technological competence (Palinkas et al. 2015). The sampling criteria included: 1) average performance in four preceding L2 writing assessments, which were adopted from the official writing prompts of the Test for English Majors band 4, a national-level and widely applied test of English proficiency for English majors in China (Jin and Fan 2011); 2) average performance in the assessments of two preceding digital humanities courses; 3) interest in the project and self-rated trust in AWCF; and 4) recommendations from co-researchers (from the teaching faculty of the program) based on classroom observation and the analysis of learning artifacts. Originally, a group of 14 students voluntarily participated in the project. However, only 4 students qualified as participants for the present study, since the others failed to provide complete learning data. See Table 1 for the background information of the 4 participants. To maintain the ethicality of the study, written informed consent was obtained from all participants, who were informed of the purpose, design, procedures, and anonymity policies of the study prior to data collection.
In second language acquisition (Duff 2010) and educational feedback research (e.g., Zhang and Hyland 2023), the case study has been widely applied as an established means of collecting rich data on students' actual learning experiences. Adopting a mixed-method multiple case study approach (Yin 2013), a case in this study was defined as the extent to which an individual learner was behaviorally, cognitively, and affectively engaged with ChatGPT-generated feedback. For each case, the study followed a convergent design in which quantitative and qualitative data were triangulated to manifest the student's engagement with the AWCF (Creswell and Plano Clark 2018). Furthermore, the study was a collective multiple-case study, as cross-case comparison of the individual cases allowed the researcher to generalize the findings to a broader context (Stake 1995). Although the limited number of participants may constrain the study's implications for a broader context, small sample sizes and/or high drop-out rates are common among case studies on learning behaviors: in Koltovskaia and Mahapatra (2022), only 2 participants' data were selected from a pool of 17, and in Yan (2024b), only 3 students were finalized as participants in the inquiry into L2 writers' feedback-seeking behaviors. As argued by Adams (2019), the limited number of research subjects in case studies has its merits in unfolding learner experiences of using feedback rather than the feedback design itself.
During the five-week project, 68 students (including all the participants of the study) joined an L2 writing practicum focused on exploring the affordances of ChatGPT as a feedback provider. Each week, two sessions of teacher-fronted instruction and live demonstration were prescribed, in addition to four sessions of self-directed learning and practice. Each week, students had to complete a draft, seek feedback from ChatGPT, revise based on the feedback, and submit the revision to the instructor. Multiple data collection strategies were employed: students' weekly reflective learning journals (Bowen 2009), observation of students' classroom behaviors (Jamshed 2014), and interviews (Braun and Clarke 2012). The practicum structure and data collection procedures of the study are shown in Fig. 2.
Procedures of the study.
First, at the end of each week, the participants were required to complete a reflective learning journal. Specifically, they were asked to reflect on their weekly learning progress, their experiences using ChatGPT for feedback, the episodes of interaction with ChatGPT for eliciting and refining corrective feedback, and the acceptance and rejection of the feedback in preparing their revisions. Participants were encouraged to complete the journal multimodally, with multiple types of files (e.g., screenshots, audio recordings, and video clips) as supplementary material. See Supplementary Appendix A for the reflective journal template. Moreover, a task worksheet was provided for learners to record the draft, the formative revisions, and the final product for each writing task. See Supplementary Appendix B for a sample task worksheet.
Second, during each instructional and practice session, the instructors were requested to record the students' learning behaviors and processes. To facilitate the recording, students attended all sessions in language laboratories equipped with keylogging and screen recording facilities. All loggings and recordings were gathered, processed, and noted down by two co-researchers recruited from the teaching faculty. Furthermore, the note-takers coded the notes against a coding scheme for metacognitive and cognitive learning strategies. See Supplementary Appendix C for the coding scheme, adopted from the work of Sonnenberg and Bannert (2015). Inter-coder disagreements were resolved by reaching a consensus between the two coders and the researcher through recording playbacks and collective discussion. Cohen's Kappa (κ = 0.72, 95% CI [0.65, 0.84]) indicated good inter-rater reliability.
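Agreement statistics of this kind can be computed directly from the two coders' paired labels. The following is a minimal pure-Python sketch; the strategy labels and the eight coded notes are invented for illustration, not taken from the study's data:

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Cohen's kappa for two raters coding the same items."""
    assert len(coder1) == len(coder2)
    n = len(coder1)
    # Observed agreement: proportion of items given identical codes.
    p_o = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Chance agreement under independence, from marginal label frequencies.
    f1, f2 = Counter(coder1), Counter(coder2)
    p_e = sum(f1[c] * f2[c] for c in f1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical strategy codes assigned to eight observation notes.
c1 = ["MON", "MON", "EVA", "PLA", "MON", "EVA", "PLA", "MON"]
c2 = ["MON", "EVA", "EVA", "PLA", "MON", "EVA", "PLA", "PLA"]
print(round(cohens_kappa(c1, c2), 3))  # → 0.636
```

Kappa discounts the agreement expected by chance, which is why a raw 75% agreement here yields κ of only about 0.64.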
Finally, an immediate post-session interview was conducted with each participant after the final session of the week. Participants answered questions from a pre-determined interview protocol, with items such as "When the project ends, are you willing to continue using ChatGPT for feedback in L2 writing?". Each interview session lasted about 10–15 min. The moderator was required to write down all major viewpoints and interview details in an interview note. The interviews were audio recorded and transcribed verbatim.
Throughout the data collection, the researchers took measures to ensure the trustworthiness, reliability, and validity of the data. For example, the reliability and validity of the observational data were established by reaching interobserver agreement during the initial two weeks (Watkins and Pacheco 2000), and the trustworthiness of the qualitative data was verified through member checking with participants (Doyle 2007) and investigator triangulation (Carter et al. 2014).
First, quantified document analysis was applied to the learning journals and worksheets. For each case, the individual learner's learning details, i.e., time spent on feedback processing, number of written ChatGPT prompts, time spent interacting with ChatGPT, and retention of feedback in the revision, were quantified and analyzed through descriptive statistics. For the coding of prompt writing patterns, a coding scheme developed by the first author in a previous work was used (Yan 2024b). The coding was performed by three coders recruited from the teaching faculty. Fleiss' Kappa (κ = 0.86, 95% CI [0.78, 0.91]) indicated good inter-rater reliability.
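Fleiss' kappa generalizes chance-corrected agreement to three or more coders and reduces to a few lines once the data are tabulated as per-item category counts. A sketch, assuming an invented table of three coders classifying four prompts into two categories:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa from an items-by-categories matrix of rater counts.

    counts[i][j] = number of raters assigning item i to category j;
    every row must sum to the same number of raters n.
    """
    N = len(counts)        # number of coded items
    n = sum(counts[0])     # raters per item
    # Mean observed per-item agreement P-bar.
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts
    ) / N
    # Chance agreement from pooled category proportions.
    k = len(counts[0])
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)

# Illustrative counts: three coders, four prompts, two pattern categories.
counts = [[3, 0], [0, 3], [2, 1], [3, 0]]
print(round(fleiss_kappa(counts), 3))  # → 0.625
```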
Second, a lag sequential analysis (LSA) using GSEQ 5.1 software was performed to analyze students’ transition and interaction patterns using metacognitive and cognitive strategies extracted from the coded classroom observations. LSA is a statistical technique used to identify patterns and sequences of behaviors or events over time by examining the conditional probabilities of one event occurring after another within a specified time delay or lag period (Bakeman and Quera 2011 ). Correspondingly, GSEQ calculated adjusted residuals from a transitional probability matrix based on the coded behavior sequences (Pohl et al. 2016 ). The significance of behavioral transitions was determined by the Z-score of the adjusted residuals (significant if Z > 1.96). Behavioral transitions were visualized to present the behavioral patterns in terms of metacognition and cognition within the feedback processing and revision processes.
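The adjusted-residual computation that GSEQ performs can be reproduced from a coded behavior sequence using Haberman's formula, z = (observed − expected) / sqrt(expected · (1 − row/N) · (1 − col/N)). A minimal pure-Python sketch; the short coded session below is invented for illustration, reusing the study's strategy codes:

```python
import math
from collections import Counter

def adjusted_residuals(sequence):
    """Lag-1 adjusted residuals (Haberman) from a list of behavior codes."""
    pairs = list(zip(sequence, sequence[1:]))  # lag-1 transitions
    n = Counter(pairs)
    N = len(pairs)
    codes = sorted(set(sequence))
    row = {a: sum(n[(a, b)] for b in codes) for a in codes}
    col = {b: sum(n[(a, b)] for a in codes) for b in codes}
    z = {}
    for a in codes:
        for b in codes:
            e = row[a] * col[b] / N  # expected transition count
            denom = math.sqrt(e * (1 - row[a] / N) * (1 - col[b] / N))
            z[(a, b)] = (n[(a, b)] - e) / denom if denom else 0.0
    return z

# Invented coded session: planning, then elicitation/monitoring loops.
seq = ["P", "F", "M", "F", "M", "E", "D", "R", "F", "M", "E", "D", "R"]
z = adjusted_residuals(seq)
# A transition is significant when |Z| > 1.96, as in the study's tables.
print(round(z[("F", "M")], 2))  # → 3.46
```

In this toy sequence, the F → M transition exceeds 1.96 and would therefore be counted as a significant behavioral sequence.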
Third, a thematic analysis following the six-step procedure recommended by Braun and Clarke (2012) was applied to the interview transcripts. Two additional co-researchers were recruited to assist in coding and theme extraction. Disagreements among the co-researchers were resolved through ad hoc discussions convened and joined by the researcher.
Finally, when the data analyses were completed, all findings were converged and triangulated to answer the research question.
The quantified document analysis yielded data on the participants' feedback-seeking and revision operations. Specifically, feedback seeking and revision were manifested, respectively, as detailed patterns in composing ChatGPT prompts and in processing ChatGPT-generated feedback categorized by error type.
First, the feedback-seeking actions of the four participants are shown in Figs. 3–6, respectively. According to the bar charts, Emma and Sophia each created more than 2000 ChatGPT prompts in 5 weeks, followed by Robert's 1670 and Mia's 1238. Regarding the weekly developmental trends in using specific prompt writing techniques, Emma and Sophia displayed similar patterns, as did Robert and Mia. For example, in using the [+QUA] technique (providing the user's quality evaluation of the feedback to re-elicit feedback from ChatGPT), Emma and Sophia displayed a parabolic curve in the weekly frequencies, while Robert and Mia increased their use of the technique steadily throughout the project.
Based on Yan (2024b).
BP: minimal prompt; [+BG]: providing background information; [+TSK]: providing task requirements; [+PER]: providing a virtual persona; [+TON]: asking for feedback in a certain style and tone; [+SPE]: adding specific demands; [-NAR]: asking to narrow down the feedback foci; [+CRE]: asking to check credibility; [+Aff]: providing an affective evaluation to regenerate feedback; [+QUA]: providing a quality evaluation to regenerate feedback; [!REG]: asking to completely regenerate feedback.
Second, the revision operations of the four participants were gathered, coded, and categorized by error type (see Tables 2–5, respectively, for each participant). The taxonomy of errors was based on the coding instruments developed and used by Ferris (2006) and Han and Hyland (2015). According to the results, ChatGPT provided an average of 11 pieces of corrective feedback per writing task for Emma. Emma performed outstandingly, with 74.55% correct revisions, active use of substitutions to correct her errors (14.55%), a relatively limited amount of incorrectly executed revisions (1.82%), and a low rate of rejection of correction suggestions (3.64%). Sophia's revision execution was basically on par with Emma's (12.4 pieces of corrective feedback per task), with a high rate of correct revision (74.19%), a good percentage of substitutions (19.35%), and a low rate of rejection of correction suggestions (4.84%). In contrast, Robert and Mia, who received more than 22 pieces of corrective feedback per task, attained lower rates of correct revision (about 60%) and substitution (≤6.25%), and higher rates of incorrect revision (16.5% and 12.5%, respectively), rejection of correction suggestions (≥6.9%), and deletion (>10%).
The results of the LSA for the participants are displayed in Tables 6–9, respectively. In the tables, the leftmost column refers to the starting behavior, while the top row stands for the following behavior in the sequence. A behavior sequence is statistically significant when the corresponding Z value of the adjusted residual is greater than 1.96 (p < 0.05). For example, the behavior sequence from planning to feedback seeking is statistically significant for Emma, as the adjusted residual is significant (Z = 7.483).
The four tables were visualized diagrammatically (see Fig. 7 for the behavioral transition diagrams). Each node in a diagram stands for a category of (meta)cognitive strategies, while a line linking two nodes indicates a significant behavioral transition in the sequence.
P: planning, referring to allocating time and resources for the subsequent feedback and writing processes; M: monitoring, referring to an ongoing process in which the quality of feedback is observed and compared; E: evaluation, referring to an appraisal of the value and cost of a potential revision or correction based on the feedback selected in the monitoring process; F: feedback elicitation, referring to using interactive communication with ChatGPT to elicit AWCF; N: feedback refinement, referring to comparing and finalizing potential feedback and asking ChatGPT to regenerate it if the quality is unsatisfactory; D: making a decision, referring to a final appraisal of the feedback quality and translating the feedback into a potential revision; R: executing the revision, referring to applying the finalized revision to the writing products.
Emma displayed a relatively high level of metacognitive regulatory skill. Her use of cognitive strategies to seek feedback, that is, feedback elicitation and feedback refinement, was integrated with metacognitive regulation, i.e., monitoring and evaluation. Such integration was characterized by the bidirectional interaction between feedback seeking and metacognitive monitoring (Z F→M = 16.527; Z M→F = 12.137) and a similar bidirectional behavioral sequence between monitoring and feedback refinement (Z N→M = 9.009; Z M→N = 12.679).
Sophia demonstrated a similar pattern of cognitive and metacognitive strategy use, but in a relatively weaker fashion. While her diagram shared a similar structure, the role of metacognitive monitoring was reduced, typically in the feedback refinement processes (as indicated by the unidirectional sequence M → N, Z M→N = 15.209). However, the role played by metacognitive monitoring during feedback elicitation remained strong (as indicated by the bidirectional behavioral sequence F ⇌ M, Z F→M = 18.15; Z M→F = 3.834).
In contrast, the diagrams of Robert and Mia were simple and lacked the interweaving of cognitive and metacognitive strategies. In Robert's case, although metacognitive strategies, i.e., monitoring and evaluation, were involved in the learning processes, he was unable to regulate his learning behaviors metacognitively in an effective way, with the result that most of his feedback elicitation and refinement were one-off activities (as indicated by the unidirectional sequences Z N→M = 15.633; Z M→E = 15.126; and Z E→D = 12.911). Similarly, Mia failed to integrate cognitive and metacognitive strategies. Her case was even more pronounced than Robert's, as metacognitive monitoring and evaluation were eventually severed from her feedback-seeking and revision behaviors (as indicated by Z N→M = 8.698; Z N→D = 9.755; Z M→E = 10.419; and the disconnection between E and D).
In the interviews, all four participants were invited to express their affective engagement with AWCF provided by ChatGPT. Four representative quotes illustrate the four major themes that emerged from the qualitative data: (1) a beneficial journey; (2) challenges and mental stress; (3) greater ease in dealing with GAI-generated negative feedback; and (4) continued use in the future.
First, students described the overall journey of using ChatGPT for AWCF as a beneficial and interesting experience. Students showed remarkable trust in the quality of ChatGPT-generated AWCF, especially as their prompt writing skills increased. Emma described her experiences as a "fun journey." She was rather satisfied with ChatGPT-generated feedback, as it was of "remarkable quality and great versatility." Sophia, who shared many of Emma's viewpoints, summarized her experiences during the project as a "thrilling journey in a bizarre yet magnificent site." She reported that the quality of ChatGPT-generated feedback was not always stable, yet mostly trustworthy and clear to follow. Robert, who saw his experiences as a "ride on the highway," was satisfied with ChatGPT as a feedback provider for its promptness and automated workflow. Mia summed up her journey with the project as a "shocking and slow-paced exploration." She was satisfied with the tool and the learning environment, but not so much with her own progress.
Second, students identified the cognitive challenges they faced and the resulting mental stress. The participants agreed on the logistical issues, particularly the time spent seeking and refining ChatGPT's responses when using ChatGPT for AWCF. For example, Emma reflected that the process took her a relatively long time and was a little mentally taxing, as she had to "try very hard to seek better prompts that will bring feedback of higher quality and value." Sophia expressed her desire for more training and scaffolding from teachers, since one-on-one conversations with ChatGPT could not be "sustained with fruitful outcomes." The feedback-seeking and revision processes were "interesting, rewarding, but challenging" to her, and she was somewhat mentally stressed after using ChatGPT continuously for feedback. Mia explained that the feedback-seeking process was rewarding but hard and took her too much time, as she regarded herself as a slow-paced learner. The only exception was Robert, who found the feedback-seeking processes "a little bit boring" but not mentally taxing at all, since he was confident in his digital competence.
Third, students favored ChatGPT when the tone of the AWCF was negative and harsh. Compared to traditional scenarios, the students were relieved of the shame and "losing face" experienced in front of teachers and peers. Emma asserted that it was much easier for her to accept negative feedback from AI systems than from teachers in the classroom. Mia shared a similar feeling: handling ChatGPT-generated negative feedback felt like receiving it from an anonymous agent.
Finally, students expressed their interest in continuing to use ChatGPT in the future. More broadly, the students acknowledged the value and applicability of ChatGPT as an AWCF provider. As Emma remarked, "using AI for corrective feedback will be normal in the future, and the tips and tricks we have explored will be of valuable significance". Sophia was sure that she would continue to explore the more advanced features of ChatGPT in writing classrooms, but Mia was worried that she would be outperformed by her classmates, as she was slow to pick up the more sophisticated tricks and usages. Robert claimed rather straightforwardly that he would keep using ChatGPT after the project to "avoid face-to-face feedback from teachers".
The study explored the behavioral, cognitive, and affective engagement of L2 writers with corrective feedback provided by ChatGPT in feedback-seeking and revision processes. The findings are categorically presented and discussed in the following sections against existing research and theoretical insights.
The four participants' behavioral engagement revealed that students were actively involved in the feedback-seeking and revision execution processes. At first glance, all four participants made progress in seeking feedback from ChatGPT throughout the weeks. Internally, high language-proficiency learners (represented by Emma and Sophia) showed a more sophisticated approach to refining ChatGPT prompts. Instead of repeatedly asking ChatGPT to regenerate feedback, the two learners focused on the quality and content richness of their prompts. The observed variation could be explained by the process of inner feedback, a term advocated by Nicol (2021) to represent the natural processing and comparison that follows learners' exposure to feedback. Based on the findings, we could infer that the ability to internally process the received feedback while seeking feedback from ChatGPT depended on the students' language proficiency. From another perspective, students' feedback-seeking behaviors revealed that students with a higher level of technological competence were likely to make more attempts at feedback elicitation and refinement. This result is in line with the widely accepted view that a higher level of ICT competence or digital literacy leads to more advanced learning outcomes in a technology-enhanced learning environment (Park and Weng 2020; Yan and Wang 2022).
Similarly, participants with different language proficiencies manifested varied patterns in translating the received feedback into revision execution. Apart from the differences in total errors detected by the AI system per writing task, the most drastic discrepancies among the four participants in revision operations were the rate of correct revision and the adoption of revision strategies. On the one hand, the rate of correct revision was higher than that in previous research with Grammarly as a feedback provider (i.e., Koltovskaia 2020). This could be explained by the alleged strength of ChatGPT in correcting grammatical errors (H. Wu et al. 2023). On the other hand, the observation that high-proficiency language learners made significantly more substitutions than low-proficiency learners echoed the findings of Barkaoui (2016). However, in contrast with Barkaoui's (2016) study, low-proficiency language learners made significantly more revision deletions than their peers. Overall, the students, especially the low-proficiency ones, utilized the corrective feedback provided by ChatGPT ineffectively. This phenomenon was in line with previous literature (Warschauer and Grimes 2008; Chapelle et al. 2015).
Cognitively, the extent to which the participants engaged with the ChatGPT-generated corrective feedback varied distinctly. Generally, the students struggled to metacognitively regulate their learning, especially during the feedback-seeking processes. This phenomenon echoed Koltovskaia's (2020) study, in which the participants failed to process AWCF effectively. Furthermore, the relatively poor metacognitive strategy use also corroborated the finding of Zhang and Zhang (2022) that AWCF hindered students' active utilization of monitoring and evaluation strategies. Specifically, the higher proficiency learners (represented by Emma and Sophia) effectively utilized metacognitive monitoring and evaluation of the quality of the received feedback to make full use of ChatGPT's strengths; conversely, the lower proficiency learners (i.e., Robert and Mia) could not effectively integrate metacognitive strategies with the cognitive processes. The variations in the participants' metacognitive regulatory skills could be attributed to the view of Zheng and Yu (2018) that insufficient language proficiency hinders learners' ability to process feedback and revision.
Unexpected findings emerged from the comparison of the LSA results of Robert and Mia. Based on the data and the visualization, we could posit that students possessing better technological competence could compensate for their limited ability to monitor and evaluate the quality of received feedback through intensive communication with AI systems. This inference underlines the revolutionary affordance of ChatGPT's conversational AI system in providing a highly customizable and learner-aware environment that satisfies learners' needs through repeated and creative prompt writing (Ranalli 2018; Oppenlaender et al. 2023; Rudolph et al. 2023). Additionally, the finding aligned with the meta-analysis results of Wu and Yu (2023) that AI chatbots have a substantial impact on learning outcomes. These insights contribute to a new understanding of students' feedback processing in a learning environment equipped with GAI-based or conversational tools.
The attitudinal and emotional responses towards ChatGPT-generated AWCF and the new GAI-powered learning environment were mostly positive. The overall satisfaction with and acceptance of ChatGPT as a corrective feedback provider were in line with relevant studies in the field of AWCF (Dikli and Bleyle 2014; Koltovskaia 2020). Furthermore, participants agreed that the quality of ChatGPT-generated corrective feedback was reliable and accurate. Compared to previous research on the acceptance and evaluation of AWE systems and tools such as Grammarly, the performance of ChatGPT was convincing and well acclaimed by its users (Zhang 2017; Koltovskaia 2020; Ranalli 2021). This phenomenon could be attributed to the interplay of the computational might of the AI system (Fang et al. 2023; Wu et al. 2023) and its interactive human-machine interface (Oppenlaender et al. 2023).
However, participants stressed the mental effort expended in using ChatGPT in L2 writing classrooms. This was not unexpected, as AWCF providers and AWE systems have long been linked with cognitive overload in previous studies (Ranalli 2018; Barrot 2021). Nevertheless, the cognitive burden experienced by users of ChatGPT was the aggregate of the mental effort expended on both feedback seeking and feedback processing. This finding ushers in new insights that expand our understanding of students' cognitive load in utilizing feedback for L2 writing. Moreover, it aligns with a recent research trend, beyond the scope of AWCF studies, exploring how to effectively compose high-quality ChatGPT prompts (Oppenlaender et al. 2023; White et al. 2023) and how to develop students' abilities to communicate with GAI systems (Yan 2023; Yan 2024b).
This mixed-method multiple-case study, involving four students with different language proficiencies and technological competences from an EFL program, explored L2 writers' engagement with ChatGPT-provided corrective feedback from behavioral, cognitive, and affective perspectives. The findings revealed that: 1) students were behaviorally engaged with ChatGPT-generated feedback, but their feedback-seeking behaviors and revision operations were closely related to language proficiency and technological competence; 2) only high language proficiency learners could cognitively engage with ChatGPT-generated AWCF by effectively utilizing metacognitive regulatory strategies; and 3) ChatGPT was well received by participants as a powerful and affectively engaging AWCF provider.
Adding to the body of literature on students' engagement with AWCF, the study also focuses on the changes in learning brought about by the advent of ChatGPT. Noticeably, the research underlines the importance of technological competence for L2 learners in technology-enhanced learning environments. Furthermore, as an initial effort to investigate the learning behaviors and (meta)cognitive strategy use of L2 writers in a GAI-powered environment, the study offers insights into how students seek, rather than merely receive, feedback from AWCF providers and how the feedback processing and revision processes are regulated metacognitively.
The diversity of student engagement with ChatGPT-generated corrective feedback, as manifested by the study, has significant pedagogical implications. First, ChatGPT was not only a powerful rival to its predecessors but also an affectively engaging solution with which a new learning environment could be constructed. As a result, the inclusion of GAI-based applications as learning assistants in L2 classrooms should be popularized. Second, teacher scaffolding or instruction on the use of ChatGPT for L2 writing pedagogy or assessment should be developed and provided. As reflected in the study, learners' individual ability to metacognitively regulate feedback seeking and revision execution is far from perfect; hence, support from instructors and peer learners is highly desirable. Third, a more rational attitude towards the position of GAI-based products in education should be upheld. ChatGPT is neither a "silver bullet" nor a terminator of education; its integration in classrooms requires the enhancement of students' and instructors' multicompetence and a corresponding restructuring of instructional patterns. Finally, from an L2 learner perspective, the relatively high drop-out rate during participant recruitment showed that, at least at the current stage, students do not possess sufficient AI competence and domain knowledge to effectively utilize GAI for long-term learning improvement. Thus, sustained efforts should be made to train today's students to become better users of state-of-the-art technologies.
The study was not without limitations. First, it adopted a multiple-case study approach; hence, researchers should be cautious when translating or generalizing the findings of the present study to different research settings with larger populations. In follow-up research, alternative research methods could be considered to comprehensively investigate the impact of ChatGPT on a larger number of language learners. Second, the duration of the research was limited: in the five-week project, students completed merely five writing tasks with limited exposure to ChatGPT. In subsequent studies, researchers could conduct longitudinal investigations to uncover the long-term effects of ChatGPT on the learning behaviors and outcomes of L2 learners. Third, the sources of feedback were limited. The study partially adopted a self-regulated learning style for the participants; hence, the role of peer learners and instructors in processing the feedback was not examined. In successive inquiries, researchers could introduce collaborative learning or peer scaffolding into the learning environment. Fourth, the impact of ChatGPT-generated feedback on writing of different genres was not studied. In future studies, researchers could delve into the effects of the AWCF provided by ChatGPT on multiple types and genres of writing. In general, given the exhibited potential of ChatGPT as a game changer for language education, the researcher hopes the study will kindle more in-depth insights into the pedagogical practice of utilizing GAI-based applications in L2 classrooms.
The pseudonymized data that support the findings of this study are available on request from the corresponding author. The raw data are not publicly available due to the concern that they might disclose the privacy of the participants.
Adams G (2019) A narrative study of the experience of feedback on a professional doctorate: ‘a kind of flowing conversation. Stud Contin Educ 41(2):191–206. https://doi.org/10.1080/0158037X.2018.1526782
Bai L, Hu G (2017) In the face of fallible AWE feedback: how do students respond? Educ Psychol 37(1):67–81. https://doi.org/10.1080/01443410.2016.1223275
Bakeman R, Quera V (2011) Sequential analysis and observational methods for the behavioral sciences. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9781139017343
Barkaoui K (2016) What and when second-language learners revise when responding to timed writing tasks on the computer: the roles of task type, second language proficiency, and keyboarding skills. Mod Lang J 100(1):320–340. https://doi.org/10.1111/modl.12316
Barrot JS (2021) Using automated written corrective feedback in the writing classrooms: effects on L2 writing accuracy. Comput Assist Lang Learn. https://doi.org/10.1080/09588221.2021.1936071
Bowen GA (2009) Document analysis as a qualitative research method. Qual Res J 9(2):27–40. https://doi.org/10.3316/QRJ0902027
Braun V, Clarke V (2012) Thematic analysis. In: APA handbook of research methods in psychology, vol 2: research designs: quantitative, qualitative, neuropsychological, and biological. APA handbooks in psychology®. American Psychological Association, Washington, DC, pp 57–71. https://doi.org/10.1037/13620-004
Carter N, Bryant-Lukosius D, DiCenso A et al. (2014) The use of triangulation in qualitative research. Oncol Nurs Forum 41(5):545–547. https://doi.org/10.1188/14.ONF.545-547
Chapelle CA, Cotos E, Lee J (2015) Validity arguments for diagnostic assessment using automated writing evaluation. Lang Test 32(3):385–405. https://doi.org/10.1177/0265532214565386
Creswell JW, Plano Clark VL (2018) Designing and conducting mixed methods research, 3rd edn. SAGE, LA
Dikli S, Bleyle S (2014) Automated essay scoring feedback for second language writers: How does it compare to instructor feedback? Assess Writ 22:1–17. https://doi.org/10.1016/j.asw.2014.03.006
Dizon G, Gayed J (2021) Examining the impact of Grammarly on the quality of mobile L2 writing. JALT CALL J 17(2):74–92. https://doi.org/10.29140/jaltcall.v17n2.336
Doyle S (2007) Member checking with older women: a framework for negotiating meaning. Health Care Women Int 28(10):888–908. https://doi.org/10.1080/07399330701615325
Duff P (2010) Case study research in applied linguistics. Second language acqusition research. Routledge, New York
Google Scholar
Ellis R (2010) A framework for investigating oral and written corrective feedback. Stud Second Lang Acq 32(2):335–349. https://doi.org/10.1017/S0272263109990544
Fan Y, Xu J (2020) Exploring student engagement with peer feedback on L2 writing. J Second Lang Writ 50:100775. https://doi.org/10.1016/j.jslw.2020.100775
Fang T, Yang S, Lan K et al. (2023) Is ChatGPT a highly fluent grammatical error correction system? A comprehensive evaluation. arXiv. https://doi.org/10.48550/ARXIV.2304.01746
Ferris D (2006) Does error feedback help student writers? New evidence on the short- and long-term effects of written error correction. In: Hyland, F, Hyland, K (eds) Feedback in second language writing: contexts and issues. Cambridge applied linguistics. Cambridge University Press, Cambridge, pp 81–104. https://doi.org/10.1017/CBO9781139524742.007
Fleckenstein J, Leucht M, Köller O (2018) Teachers’ judgement accuracy concerning CEFR levels of prospective university students. Lang Assess Q 15(1):90–101. https://doi.org/10.1080/15434303.2017.1421956
Fleckenstein J, Liebenow LW, Meyer J (2023) Automated feedback and writing: a multi-level meta-analysis of effects on students’ performance. Front Artif Intell. https://doi.org/10.3389/frai.2023.1162454
Fu Q-K, Zou D, Xie H et al. (2022) A review of AWE feedback: types, learning outcomes, and implications. Comput Assist Lang Learn. https://doi.org/10.1080/09588221.2022.2033787
Gong H, Yan D (2023) The impact of danmaku-based and synchronous peer feedback on L2 oral performance: a mixed-method investigation. PLoS ONE 18(4):e0284843. https://doi.org/10.1371/journal.pone.0284843
Article PubMed PubMed Central Google Scholar
Han Y, Gao X (2021) Research on learner engagement with written (corrective) feedback: insights and issues. In: Mercer, S, Hiver, P, Al-Hoorie, AH (eds) Student engagement in the language classroom. Multilingual Matters, pp 56–74. https://doi.org/10.21832/9781788923613-007
Han Y, Hyland F (2015) Exploring learner engagement with written corrective feedback in a Chinese tertiary EFL classroom. J Second Lang Writ 30:31–44. https://doi.org/10.1016/j.jslw.2015.08.002
Hiver P, Al-Hoorie AH, Vitta JP et al. (2021) Engagement in language learning: a systematic review of 20 years of research methods and definitions. Lang Teach Res. https://doi.org/10.1177/13621688211001289
Hyland K, Hyland F (2019) Contexts and issues in feedback on L2 writing. In: Hyland, F (ed) Feedback in second language writing: contexts and issues, 2nd edn. Cambridge applied linguistics. Cambridge University Press, Cambridge, pp 1–22. https://doi.org/10.1017/9781108635547.003
Jamshed S (2014) Qualitative research method-interviewing and observation. J Basic Clin Pharm 5(4):87–88. https://doi.org/10.4103/0976-0105.141942
Jansen T, Vögelin C, Machts N et al. (2021) Judgment accuracy in experienced versus student teachers: assessing essays in English as a foreign language. Teach Teach Educ 97:103216. https://doi.org/10.1016/j.tate.2020.103216
Jiao W, Wang W, Huang J et al. (2023) Is ChatGPT a good translator? Yes with GPT-4 As the engine. arXiv. https://doi.org/10.48550/arXiv.2301.08745
Jin Y, Fan J (2011) Test for English majors (TEM) in China. Lang Test 28(4):589–596. https://doi.org/10.1177/0265532211414852
Koltovskaia S (2020) Student engagement with automated written corrective feedback (AWCF) provided by Grammarly: a multiple case study. Assess Writ 44:100450. https://doi.org/10.1016/j.asw.2020.100450
Koltovskaia S, Mahapatra S (2022) Student engagement with computer-mediated teacher written corrective feedback: a case study. JALT CALL J 18(2):286–315. https://doi.org/10.29140/jaltcall.v18n2.519
Lee H (2023) The rise of ChatGPT: exploring its potential in medical education. Anat Sci Educ. https://doi.org/10.1002/ase.2270
Li J, Link S, Hegelheimer V (2015) Rethinking the role of automated writing evaluation (AWE) feedback in ESL writing instruction. J Second Lang Writ 27:1–18. https://doi.org/10.1016/j.jslw.2014.10.004
Liu S, Yu G (2022) L2 learners’ engagement with automated feedback: an eye-tracking study. Lang Learn Technol 26(2):78–105. 10125/73480
Mercer S (2019) Language learner engagement: setting the scene. In: Gao, X (ed) Second handbook of English language teaching. Springer international handbooks of education, Springer International Publishing, Cham, pp 1–19. https://doi.org/10.1007/978-3-319-58542-0_40-1
Mizumoto A, Eguchi M (2023) Exploring the potential of using an AI language model for automated essay scoring. Res Methods Appl Linguist 2(2):100050. https://doi.org/10.1016/j.rmal.2023.100050
Naamati-Schneider L, Alt D (2024) Beyond digital literacy: the era of AI-powered assistants and evolving user skills. Educ Inf Technol. https://doi.org/10.1007/s10639-024-12694-z
Nelson MM, Schunn CD (2009) The nature of feedback: how different types of peer feedback affect writing performance. Instr Sci 37(4):375–401. https://doi.org/10.1007/s11251-008-9053-x
Nicol D (2021) The power of internal feedback: exploiting natural comparison processes. Assess Eval High Educ 46(5):756–778. https://doi.org/10.1080/02602938.2020.1823314
ONeill R, Russell A (2019) Stop! Grammar time: university students’ perceptions of the automated feedback program Grammarly. Australas J Educ Technol. https://doi.org/10.14742/ajet.3795
Oppenlaender J, Linder R, Silvennoinen J (2023) Prompting AI art: an investigation into the creative skill of prompt engineering. https://doi.org/10.48550/arXiv.2303.13534
Palinkas LA, Horwitz SM, Green CA et al. (2015) Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Adm Policy Ment Health 42(5):533–544. https://doi.org/10.1007/s10488-013-0528-y
Park S, Weng W (2020) The relationship between ICT-related factors and student academic achievement and the moderating effect of country economic index across 39 countries: using multilevel structural equation modelling. Educ Technol Soc 23(3):1–15
Pohl M, Wallner G, Kriglstein S (2016) Using lag-sequential analysis for understanding interaction sequences in visualizations. Int J Hum Comput Stud 96:54–66. https://doi.org/10.1016/j.ijhcs.2016.07.006
Rad HS, Alipour R, Jafarpour A (2023) Using artificial intelligence to foster students’ writing feedback literacy, engagement, and outcome: a case of Wordtune application. Interact Learn Environ. https://doi.org/10.1080/10494820.2023.2208170
Ranalli J (2018) Automated written corrective feedback: how well can students make use of it? Comput Assist Lang Learn 31(7):653–674. https://doi.org/10.1080/09588221.2018.1428994
Ranalli J (2021) L2 student engagement with automated feedback on writing: potential for learning and issues of trust. J Second Lang Writ 52:100816. https://doi.org/10.1016/j.jslw.2021.100816
Rudolph J, Tan S, Tan S (2023) ChatGPT: bullshit spewer or the end of traditional assessments in higher education? J Appl Learn Teach 6(1):1–22. https://doi.org/10.37074/jalt.2023.6.1.9
Shi Y (2021) Exploring learner engagement with multiple sources of feedback on L2 writing across genres. Front. Psychol. https://doi.org/10.3389/fpsyg.2021.758867
Sonnenberg C, Bannert M (2015) Discovering the effects of metacognitive prompts on the sequential structure of SRL-processes using process mining techniques. J Learn Anal 2(1):72–100. https://doi.org/10.18608/jla.2015.21.5
Stake RE (1995) The art of case study research. Sage Publications, Thousand Oaks
Steiss J, Tate T, Graham S et al. (2024) Comparing the quality of human and ChatGPT feedback of students’ writing. Eur Res Int 91:101894. https://doi.org/10.1016/j.learninstruc.2024.101894
Stevenson M, Phakiti A (2019) Automated feedback and second language writing. In: Hyland, F, Hyland, K (eds) Feedback in second language writing: contexts and issues, 2nd edn. Cambridge applied linguistics. Cambridge University Press, Cambridge, pp 125–142. https://doi.org/10.1017/9781108635547.009
Tan S, Cho YW, Xu W (2022) Exploring the effects of automated written corrective feedback, computer-mediated peer feedback and their combination mode on EFL learner’s writing performance. Interact Learn Environ. https://doi.org/10.1080/10494820.2022.2066137
Tonmoy SMTI, Zaman SMM, Jain V et al. (2024) A comprehensive survey of hallucination mitigation techniques in large language models. arXiv. https://doi.org/10.48550/arXiv.2401.01313
Tseng W, Warschauer M (2023) AI-writing tools in education: if you can’t beat them, join them. J China Comput Assist Lang Learn. https://doi.org/10.1515/jccall-2023-0008
Warschauer M, Grimes D (2008) Automated writing assessment in the classroom. Pedagogies 3(1):22–36. https://doi.org/10.1080/15544800701771580
Watkins MW, Pacheco M (2000) Interobserver agreement in behavioral research: Importance and calculation. J Behav Educ 10(4):205–212. https://doi.org/10.1023/A:1012295615144
White J, Fu Q, Hays S et al. (2023) A prompt pattern catalog to enhance prompt engineering with ChatGPT. arXiv. https://doi.org/10.48550/ARXIV.2302.11382
Wood J (2022) Supporting the uptake process with dialogic peer screencast feedback: a sociomaterial perspective. Teach Higher Educ. https://doi.org/10.1080/13562517.2022.2042243
Wu H, Wang W, Wan Y et al. (2023) ChatGPT or Grammarly? Evaluating ChatGPT on grammatical error correction benchmark. arXiv. https://doi.org/10.48550/ARXIV.2303.13648
Wu R, Yu Z (2023) Do AI chatbots improve students learning outcomes? Evidence from a meta-analysis. Brit J Educ Technol. https://doi.org/10.1111/bjet.13334
Yan D (2023) Impact of ChatGPT on learners in a L2 writing practicum: an exploratory investigation. Educ Inf Technol 28(11):13943–13967. https://doi.org/10.1007/s10639-023-11742-4
Yan D (2024a) Rubric co-creation to promote quality, interactivity and uptake of peer feedback. Assess Eval Higher Educ. https://doi.org/10.1080/02602938.2024.2333005
Yan D (2024b) Feedback seeking abilities of L2 writers using ChatGPT: a mixed method multiple case study. Kybernetes. https://doi.org/10.1108/K-09-2023-1933
Yan D, Wang J (2022) Teaching data science to undergraduate translation trainees: pilot evaluation of a task-based course. Front Psychol 13:939689. https://doi.org/10.3389/fpsyg.2022.939689
Yin RK (2013) Case study research: design and methods. 5th edn. SAGE Publications, Los Angeles
Zhang J, Zhang LJ (2022) The effect of feedback on metacognitive strategy use in EFL writing. Comput Assist Lang Learn. https://doi.org/10.1080/09588221.2022.2069822
Zhang Z (2017) Student engagement with computer-generated feedback: a case study. ELT J 71(3):317–328. https://doi.org/10.1093/elt/ccw089
Zhang Z, Hyland K (2018) Student engagement with teacher and automated feedback on L2 writing. Assess Writ 36:90–102. https://doi.org/10.1016/j.asw.2018.02.004
Zhang Z, Hyland K (2023) Student engagement with peer feedback in L2 writing: Insights from reflective journaling and revising practices. Assess Writ 58:100784. https://doi.org/10.1016/j.asw.2023.100784
Zheng L, Niu J, Zhong L et al. (2021) The effectiveness of artificial intelligence on learning achievement and learning perception: a meta-analysis. Interact Learn Environ. https://doi.org/10.1080/10494820.2021.2015693
Zheng Y, Yu S (2018) Student engagement with teacher written corrective feedback in EFL writing: a case study of Chinese lower-proficiency students. Assess Writ 37:13–24. https://doi.org/10.1016/j.asw.2018.03.001
Download references
This research project is supported by funding from the Young Researcher Program of Xinyang Agriculture and Forestry University [Grant QN2022049, QN2021033]. We would also like to thank all the anonymous reviewers for their constructive feedback.
Authors and affiliations.
School of Foreign Languages, Xinyang Agricultural and Forestry University, Xinyang, China
Da Yan & Shuxian Zhang
Da Yan: conceptualization, data curation, writing—original draft, formal analysis, project administration, writing—review, and editing. Shuxian Zhang: data curation, coding, and writing—review.
Correspondence to Shuxian Zhang.
Competing interests.
The authors declare no competing interests.
At the time of the study, Xinyang Agriculture and Forestry University had no policy for ethical clearance, nor did it have an ethical committee. Thus, ethical approval was obtained from the School of Foreign Languages, Xinyang Agriculture and Forestry University in December 2022. The study was conducted in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards, including existing laws and regulations on personal data, privacy, and research data management.
Written informed consent was obtained via email from all participants before the study in December 2022.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplemental material, rights and permissions.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .
Cite this article.
Yan, D., Zhang, S. L2 writer engagement with automated written corrective feedback provided by ChatGPT: A mixed-method multiple case study. Humanit Soc Sci Commun 11, 1086 (2024). https://doi.org/10.1057/s41599-024-03543-y
Received: 27 February 2024
Accepted: 31 July 2024
Published: 26 August 2024
DOI: https://doi.org/10.1057/s41599-024-03543-y
Eimear Duff, Jason Hynes, Kashif Ahmad, Comment on ‘ChatGPT and medical writing in dermatology: why should we keep writing?’, Clinical and Experimental Dermatology, Volume 49, Issue 9, September 2024, Pages 1082–1083, https://doi.org/10.1093/ced/llae130
Dear Editor, Artificial intelligence (AI) technologies such as ChatGPT hold immense potential to generate efficiencies for medical researchers. In this context, we read with great interest the recent letter in Clinical and Experimental Dermatology by Potestio et al. [1]. While we concur that AI can be a helpful collaborator in academic writing, the authors claim that this tool, trained on internet data written by humans, may ‘help to extract information from electronic medical records’. It should be noted that there are significant legal risks associated with privately owned AIs accessing patient data, most notably issues of data governance [2] and patient confidentiality.
Furthermore, the authors highlight that AI may assist during the screening process for literature reviews. A caveat when using AI as a tool for paper retrieval is the possibility of it generating nonexistent literature references when the inquired content surpasses its capacity [3]. OpenAI’s ChatGPT (GPT-3.5) was trained on a fixed knowledge base, with the latest training data from January 2022. Therefore, dermatologists must remain up to date with advances in the literature and beware of fabrications arising from a tool operating from a stagnant source of data, known in AI as the ‘hallucination’ phenomenon [4]. This is a continuously changing landscape, and GPT-4 (https://openai.com/index/gpt-4/), which can browse real-time internet data, or alternative AIs may be better suited to scholarly research.
BMC Medical Education volume 24, Article number: 907 (2024)
This paper is devoted to a narrative review of the literature on emotions and academic performance in medicine. The review aims to examine the role emotions play in the academic performance of undergraduate medical students.
Eight electronic databases were used to search the literature from 2013 to 2023, including Academic Search Ultimate, British Education Index, CINAHL, Education Abstract, ERIC, Medline, APA Psych Articles and APA Psych Info. Using specific keywords and terms in the databases, 3,285,208 articles were found. After applying the predefined exclusion and inclusion criteria to include only medical students and academic performance as an outcome, 45 articles remained, and two reviewers assessed the quality of the retrieved literature; 17 articles were selected for the narrative synthesis.
The findings indicate that depression and anxiety are the most frequently reported variables in the reviewed literature, and they can affect the academic performance of medical students both negatively and positively. The included literature also reported that a high number of medical students experienced test anxiety during their studies, which affected their academic performance. Positive emotions lead to positive academic outcomes and vice versa. However, feelings of shame did not have any effect on the academic performance of medical students.
The review suggests a significant relationship between emotions and academic performance among undergraduate medical students. While the evidence may not establish causation, it underscores the importance of considering emotional factors in understanding student performance. However, reliance on cross-sectional studies and self-reported data may introduce recall bias. Future research should concentrate on developing anxiety reduction strategies and enhancing mental well-being to improve academic performance.
Studying medicine is a multi-dimensional process involving acquiring medical knowledge, clinical skills, and professional attitudes. Previous research has found that emotions play a significant role in this process [1, 2]. Different types of emotions are important in an academic context, influencing performance on assessments and evaluations, reception of feedback, exam scores, and overall satisfaction with the learning experience [3]. In particular, medical students experience a wide range of emotions due to many emotionally challenging situations, such as a heavy academic workload, the highly competitive field of medicine, retaining a large amount of information, keeping track of a busy schedule, taking difficult exams, and dealing with a fear of failure [4, 5, 6]. Especially during their clinical years, medical students may experience anxiety when interacting with patients who are suffering, ill, or dying, while also having to work with other healthcare professionals. Therefore, it is necessary to understand the impact of emotions on medical students to improve their academic outcomes [7].
To distinguish the emotions frequently experienced by medical students, it is essential to define them. Depression is defined by enduring emotions of sadness, despair, and a diminished capacity for enjoyment or engagement in almost all activities [4]. Negative emotions encompass unpleasant feelings such as anger, fear, sadness, and anxiety, and they frequently cause distress [8]. Anxiety is a general term that refers to a state of heightened nervousness or worry, which can be triggered by various factors. Test anxiety, on the other hand, is a specific type of anxiety that arises in the context of taking exams or assessments. Test anxiety is characterised by physiological arousal, negative self-perception, and a fear of failure, which can significantly impair a student’s ability to perform well academically [9, 10]. Shame is a self-conscious emotion that arises from the perception of having failed to meet personal or societal standards. It can lead to feelings of worthlessness and inadequacy, severely impacting a student’s motivation and academic performance [11, 12]. In contrast, positive emotions indicate a state of enjoyable involvement with the surroundings, encompassing feelings of happiness, appreciation, satisfaction, and love [8].
Academic performance generally refers to the outcomes of a student’s learning activities, often measured through grades, scores, and other formal assessments. Academic achievement encompasses a broader range of accomplishments, including mastery of skills, attainment of knowledge, and the application of learning in practical contexts. While academic performance is often quantifiable, academic achievement includes qualitative aspects of a student’s educational journey [13].
According to the literature, 11–40% of medical students suffer from stress, depression, and anxiety due to the intensity of medical school, and these negative emotions impact their academic achievement [14, 15]. Severe anxiety may impair memory function, decrease concentration, lead to a state of hypervigilance, and interfere with judgment and cognitive function, further affecting academic performance [16]. However, some studies have suggested that experiencing some level of anxiety has a positive effect and serves as motivation that can improve academic performance [16, 17].
Despite the importance of medical students’ emotions and their relation to academic performance, few studies have been conducted in this area. Most of these studies have focused on the prevalence of specific emotions without correlating them with medical students’ academic performance. Few systematic reviews have addressed the emotional challenges medical students face, and there is a lack of comprehensive reviews that discuss the role of emotions in academic outcomes. Therefore, this review aims to fill this gap by exploring the relationship between emotions and the academic performance of medical students.
This review aims to examine the role emotions play in the academic performance of undergraduate medical students.
A systematic literature search examined the role of emotions in medical students’ academic performance. The search adhered to the principles of a systematic review, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [18]. A narrative synthesis was then conducted to analyse the retrieved literature and synthesise the results. Combining a systematic literature search with a narrative review provides both complete coverage and the flexibility to explore and interpret findings: the systematic search ensures rigour and reduces bias, while the narrative synthesis allows for flexible integration and interpretation. This balance improves the quality and utility of the review.
Inclusion criteria.
The study’s scope was confined to January 2013 to December 2023, focusing exclusively on undergraduate medical students. The research encompassed articles originating in medical schools worldwide, with no restriction on country. The criteria included only full-text articles in English published in peer-reviewed journals. Primary research was considered, embracing quantitative and mixed-methods research. The selected studies had to explicitly reference academic performance, test results, or GPA as key outcomes to address the research question.
The study excluded individuals beyond the undergraduate medical student demographic, such as students in other health fields and junior doctors. There was no imposed age limit for the student participants. The research specifically focused on articles within medical schools, excluding those from alternative settings. It solely considered full-text articles in English-language peer-reviewed journals. Letters and commentary articles were excluded, and the study did not limit itself to a particular type of research. Qualitative studies were excluded from the review because they did not provide the quantitative measures required to address the review’s aim. This review also excluded articles on factors impacting academic performance, those analysing nursing students, and gender differences. The reasons and numbers for excluding articles are shown in Table 1.
Eight electronic databases were used to search the literature: Academic Search Ultimate, British Education Index, CINAHL, Education Abstract, ERIC, Medline, APA Psych Articles, and APA Psych Info. The databases were chosen from several fields based on relevant topics, including education, academic evaluation and assessment, medical education, psychology, mental health, and medical research. Initially, with the help of a subject librarian, the researcher used all the above databases; the databases were searched with specific keywords and terms, and the terms were divided into the following concepts: emotions, academic performance, and medical students. Google Scholar, EBSCOhost, and the reference lists of the retrieved articles were also used to identify other relevant articles.
This review started with a search of the databases. Eight electronic databases were used to search the literature from 2013 to 2023. Specific keywords and terms were used to search the databases, resulting in 3,285,208 articles. After removing duplicates, letters and commentary, this number was reduced to 1,637 articles. Exclusion and inclusion criteria were then applied, resulting in 45 articles. After two assessors assessed the literature, 17 articles were selected for the review. The search terms are as follows:
Keywords: Emotion, anxiety, stress, empathy, test anxiety, exam anxiety, test stress, exam stress, depression, emotional regulation, test scores, academic performance, grades, GPA, academic achievement, academic success, test result, assessment, undergraduate medical students and undergraduate medical education.
Emotions: TI (Emotion* OR Anxiety OR Stress OR empathy) OR emotion* OR (test anxiety or exam anxiety or test stress or exam stress) OR (depression) OR AB ((Emotion* OR Anxiety OR Stress OR empathy) OR emotion* OR (test anxiety or exam anxiety or test stress or exam stress)) (MH “Emotions”) OR (MH “Emotional Regulation”) DE “EMOTIONS”.
Academic performance: TI (test scores or academic performance or grades or GPA) OR (academic achievement or academic performance or academic success) OR (test result* OR assessment*) OR AB (test scores or academic performance or grades or GPA) OR (academic achievement or academic performance or academic success) OR test result* OR assessment*.
Medical Students: TI (undergraduate medical students OR undergraduate medical education) OR AB (undergraduate medical students OR undergraduate medical education), TI “medical students” OR AB “medical students” DE “Medical Students”.
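For illustration, the three-concept strategy above can be assembled programmatically. This is a hypothetical sketch only: the concept groupings mirror the keywords listed above, but the field tags (TI/AB), MeSH headings, and database-specific syntax are deliberately simplified and are not the review's actual search strings.

```python
# Illustrative reconstruction of the review's three-concept Boolean search.
# Synonyms within a concept are OR-ed; the three concepts are AND-ed together.

CONCEPTS = {
    "emotions": ["emotion*", "anxiety", "stress", "empathy",
                 "test anxiety", "exam anxiety", "depression"],
    "academic_performance": ["academic performance", "test scores",
                             "grades", "GPA", "academic achievement"],
    "medical_students": ["undergraduate medical students",
                         "undergraduate medical education", "medical students"],
}

def or_group(terms):
    """Join synonyms with OR, quoting multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

def build_query(concepts):
    """AND together one OR-group per concept."""
    return " AND ".join(or_group(terms) for terms in concepts.values())

query = build_query(CONCEPTS)
```

A query built this way can be pasted into a database's advanced-search box, with field tags and subject headings then added per the database's own syntax.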
This literature review gathered only peer-reviewed journal articles published in English on undergraduate medical students’ negative and positive emotions and academic performance from January 2013 to December 2023. Emotions including depression, anxiety, physiological distress, shame, happiness, joy, and all other emotions related to academic performance were examined in quantitative and mixed-methods research.
Moreover, to focus the search, the author specified and defined each keyword using advanced search tools, such as subject headings in the case of the Medline database. The author used ‘MeSH 2023’ as the subject heading, then entered the term ‘Emotion’ and chose all the relevant meanings. This method was applied to most of the keywords.
Studies were included based on predefined criteria related to study design, participants, exposure, outcomes, and study types. Two independent reviewers screened each record and retrieved the reports. In the screening process, reviewers independently assessed each article against the inclusion criteria, and discrepancies were resolved through consensus during regular team meetings. In cases of persistent disagreement, a third reviewer was consulted. The EndNote library program was used for the initial screening phase: it was used to identify duplicates, facilitate the independent screening of titles and abstracts, and retrieve the full-text articles. The reasons for excluding the articles are presented in Table 1.
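The duplicate-removal step can be sketched in code. This is a minimal illustration only, assuming simple record dictionaries with hypothetical `title` and `doi` fields; the review itself performed deduplication with the EndNote library program rather than a script.

```python
# Minimal sketch of duplicate removal across database exports.
# Records and field names are assumptions for illustration.

def normalise(title: str) -> str:
    """Lowercase and strip non-alphanumerics so near-identical titles compare equal."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def deduplicate(records):
    """Keep the first record seen for each DOI, falling back to the normalised title."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or normalise(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Two of these three hypothetical records share a DOI; only one copy is kept.
records = [
    {"title": "Emotions and Academic Performance", "doi": "10.1000/x1"},
    {"title": "Test Anxiety in Medical School", "doi": None},
    {"title": "Emotions and Academic Performance", "doi": "10.1000/x1"},
]
unique = deduplicate(records)
```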
Two independent reviewers extracted data from the eligible studies, with any discrepancies resolved through discussion and consensus. If the two primary reviewers could not agree, a third reviewer served as an arbitrator. For each included study, the following information was extracted and recorded in a standardised database: first author name, publication year, study design, sample characteristics, details of the emotions exposed, outcome measures, and results.
Academic performance as an outcome for medical students was defined to include the following: exam scores (e.g., midterm and final exams), clinical assessments (e.g., practical exams, clinical rotations), and overall grade point average (GPA) or any other relevant indicator of academic achievement.
Data were sought for all outcomes, including all measures, time points, and analyses within each outcome domain. In cases where studies reported multiple measures or time points, all relevant data were extracted to provide a comprehensive overview of academic performance. If a study reported outcomes beyond the predefined domains, inclusion criteria were established to determine whether these additional outcomes would be included in the review. This involved assessing relevance to the primary research question and alignment with the predefined outcome domains.
The quality and risk of bias in included studies were assessed using the National Institutes of Health (NIH) critical appraisal tool. The tool evaluates studies across the following domains: selection bias, performance bias, detection bias, attrition bias, reporting bias, and other biases. Two independent reviewers assessed the risk of bias in each included study and worked collaboratively to reach a consensus. Discrepancies were resolved through discussion; in cases of persistent disagreement, a third reviewer was consulted.
To determine the validity of eligible articles, all the included articles were critically appraised, and all reviewers assessed bias. The validity and reliability of the results were assessed using objective measurement. Each article was scored out of 14, with 14 indicating high-quality research and 1 indicating low-quality research. High-quality research, according to the NIH (2013), has a clear and focused research question, defines the study population, features a high participation rate, states inclusion and exclusion criteria, uses clear and specific measurements, reports results in detail, lists the confounding factors, and states the implications for the local community. An article was therefore scored 14 if it met all criteria of the critical appraisal tool. Based on its score, each study was classified into one of three quality categories: good, fair, or poor. Articles rated poor were judged to have unreliable findings and were not considered; two articles fell into this category [ 16 , 19 ]. Seventeen articles were chosen after critical appraisal using the NIH appraisal tool, as shown in Table 2 .
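As a concrete illustration of the scoring step, an appraisal score out of 14 can be mapped to a quality category. The cut-offs used below (good at 11 or above, fair from 6 to 10, poor at 5 or below) are hypothetical assumptions for this sketch, since the review does not report the exact thresholds applied.

```python
# Hypothetical mapping from an NIH appraisal score (1-14) to a quality
# category. The cut-offs are illustrative assumptions; the review does
# not state the exact thresholds used.

def quality_category(score):
    if not 1 <= score <= 14:
        raise ValueError("NIH appraisal scores range from 1 to 14")
    if score >= 11:
        return "good"
    if score >= 6:
        return "fair"
    return "poor"

# Studies rated poor are dropped before synthesis, mirroring the
# exclusion of unreliable articles described above.
scores = {"study_A": 13, "study_B": 8, "study_C": 4}
included = {s: v for s, v in scores.items() if quality_category(v) != "poor"}
print(included)  # study_C is excluded as poor quality
```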
For each outcome examined in the included studies, various effect measures were utilised to quantify the relationship between emotions and academic performance among undergraduate medical students. The effect measures commonly reported across the studies included prevalence rates, correlation coefficients, and mean differences. For studies that did not report an effect, the reviewer calculated the effect size. The choice of effect measure depended on the nature of the outcome variable and the statistical analysis conducted in each study. These measures were used to assess the strength and direction of the association between emotional factors and academic performance.
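Where a study reported only group means and standard deviations, an effect size can be derived by hand. The sketch below computes Cohen's d from summary statistics and converts it to a correlation coefficient r; this is one common conversion, offered as an assumption, since the review does not specify which formulas the reviewer used. The example numbers are illustrative only.

```python
import math

# Cohen's d from two groups' summary statistics, using the pooled
# standard deviation: one common way to derive an effect size when a
# study reports means and SDs but no effect measure.
def cohens_d(m1, sd1, n1, m2, sd2, n2):
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (m1 - m2) / pooled_sd

# Approximate conversion from d to a correlation coefficient r
# (assumes roughly equal group sizes).
def d_to_r(d):
    return d / math.sqrt(d**2 + 4)

# Illustrative numbers only: exam means for low- vs high-anxiety groups.
d = cohens_d(m1=72.0, sd1=8.0, n1=30, m2=66.0, sd2=8.0, n2=30)
print(round(d, 2), round(d_to_r(d), 2))  # 0.75 0.35
```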
The findings of individual studies were summarised to highlight crucial characteristics. Because of the anticipated heterogeneity, effect estimates were not pooled; instead, a narrative method was used. A narrative synthesis approach was employed to assess and interpret the findings from the included studies qualitatively. The narrative synthesis involved a qualitative examination of the content of each study, focusing on identifying common themes. This approach was used to categorise and interpret data, allowing for a nuanced understanding of the evidence. Themes related to emotions were identified and extracted for synthesis. Control-value theory [ 20 ] was used as an overarching theory, providing a qualitative synthesis of the evidence and contributing to a deeper understanding of the research question. Where retrieved articles included populations other than medical students, such as dental or non-medical students, the synthesis distinguished between them and summarised the findings for medical students only, highlighting any differences or similarities.
The control-value theory, formulated by Pekrun (2006), is a conceptual framework that illustrates the relationship between emotions and academic achievement through two fundamental appraisals: control and value. Control pertains to a learner’s perceived ability to influence their learning activities and the results they achieve. Value refers to the significance a student attaches to those activities and results. The theory suggests that students are prone to experiencing positive emotions, such as satisfaction and pride, when they possess a strong sense of control over and value for their academic assignments. Conversely, individuals are prone to experiencing adverse emotions (such as fear and embarrassment) when they perceive a lack of control or value in these tasks. These emotions subsequently affect students’ motivation, learning strategies, and, eventually, their academic achievement. The relevance of control-value theory to reviewing medical students’ emotions and their influence on academic performance is evident for several reasons. The theory offers a complete framework for comprehending the intricate connection between emotions and academic achievement. It considers both positive and negative emotions, providing a comprehensive viewpoint on how emotions might influence learning and performance. The notions of control and value are particularly significant for medical students because of their frequent exposure to high-stakes tests and difficult courses. Gaining insight into students’ perception of their control over academic assignments and the importance they attach to their medical education might aid in identifying emotional stimuli and devising remedies. Multiple studies have confirmed the theory’s assertions, showing the critical influence of control and value appraisals on students’ emotional experiences and academic achievements [ 21 , 22 ].
For this step, a data extraction sheet was developed using the data extraction template provided by the Cochrane Handbook. To ensure the review is evidence-based and bias-free, the Cochrane Handbook strongly suggests that more than one reviewer review the data. Therefore, the main researcher extracted the data from the included studies, and another reviewer checked the included, excluded, and extracted data. Any disagreements were resolved via discussion with a third reviewer. The data extraction table (Table 2 ) identified all study features, including the author’s name, the year of publication, the method used, the aim of the study, the number and description of participants, the data collection tools, and the study findings.
PRISMA flow diagram and summary of the final studies used for the review.
When the keywords and search terms related to emotions, as listed above, were entered in the eight databases, 3,285,208 articles were retrieved. After advanced search tools and subject headings were used, the number of articles increased to 3,352,371. Similarly, searching for the second keyword, ‘academic performance,’ using all the advanced search tools yielded 8,119,908 articles. Searching for the third keyword, ‘medical students’, yielded 145,757 articles. All terms were searched in article titles and abstracts. The author then combined all search terms using ‘AND’ and applied the time limit from 2013 to 2023, narrowing the search to 2,570 articles. After duplicates, letters, and commentary were excluded, the number was reduced to 1,637 articles. After the titles and abstracts were read to determine relevance to the topic and the inclusion and exclusion criteria mentioned above were applied, 45 articles remained; after the quality of the retrieved literature was assessed by two reviewers, 17 articles were selected for the review. One further article, by Ansari et al. (2018), met most inclusion and exclusion criteria, but its outcome measure was cognitive function rather than academic performance; it was therefore excluded from the review. Figure 1 presents the PRISMA flow diagram (2020) of studies identified from the databases.
PRISMA flow diagram (2020)
Table 2 , summarising the characteristics of the included studies, is presented below.
Country of the study.
The studies were conducted in a range of countries, many of them developing countries. The largest group was conducted in Europe ( n = 4), followed by Pakistan ( n = 2), Saudi Arabia ( n = 2), and the United States ( n = 2). The rest of the studies were conducted in South America ( n = 1), Morocco ( n = 1), Brazil ( n = 1), Australia ( n = 1), Iran ( n = 1), South Korea ( n = 1), and Bosnia and Herzegovina ( n = 1). No included studies were conducted in the United Kingdom.
Regarding study design, most of the included articles used a quantitative methodology, including 12 cross-sectional studies. There were two randomised controlled trials, one descriptive correlation study, one cohort study, and only one mixed-method study.
Regarding population and setting, most of the studies focused on all medical students studying in a medical school setting, from first-year medical students to those in their final year. One study compared medical students with non-medical students; another combined medical students with dental students.
The study aims varied across the included studies. Seven studies examined the prevalence of depression and anxiety among medical students and their relation to academic performance. Four studies examined the relationship between test anxiety and academic performance in medical education. Four studies examined the relationship between medical students’ emotions and academic achievements. One study explored the influence of shame on medical students’ learning.
The studies were assessed for quality using the tool created by the NIH (2013) and then rated good, fair, or poor based on the results. Nine studies had a high-quality methodology, seven achieved fair ratings, and only three achieved poor ratings. The studies assigned a poor rating were mainly cross-sectional, and their weaknesses stemmed from the study design, low response rates, inadequate reporting of methodology and statistics, invalid tools, and unclear research goals.
Most of the outcome measures were heterogeneous, self-administered questionnaires; one study used focus groups and an observed ward assessment [ 23 ]. All the studies used the medical students’ academic grades.
The prevalence rate of psychological distress in the retrieved articles.
Depression and anxiety are the most common forms of psychological distress examined in relation to academic outcomes among medical students. Studies consistently show concerningly high rates, with prevalence estimates ranging from 7.3% to 66.4% for anxiety and from 3.7% to 69% for depression. These findings indicate that psychological distress levels characterised as moderate to high, based on common cut-off thresholds, have a clear detrimental impact on academic achievement [ 16 , 24 , 25 , 26 ].
The studies collectively examine the impact of psychological factors on academic performance in medical education contexts, using a range of effect sizes to quantify their findings. Aboalshamat et al. (2015) identified a small effect size ( η 2 = 0.018) for depression’s impact on academic performance, suggesting a modest influence. Mihailescu (2016) found significant negative correlations between levels of depression and anxiety (rho = −0.14 and rho = −0.19, respectively) and medical students’ academic performance and GPA. Burr and Beck Dallaghan (2019) reported that professional efficacy explained 31.3% of the variance in academic performance, indicating a significant effect size. However, Del-Ben et al. (2013) did not find a significant impact of affective changes on academic achievement, suggesting trivial effect sizes for these factors.
In conclusion, anxiety and depression, both indicators of psychological distress, are common among medical students. There is a link between distress and poor academic performance, implying that this relationship merits consideration. Table 3 below shows the specific values for depression and anxiety in the retrieved articles.
In this review, four studies examined the relationship between test anxiety and academic performance in medical education [ 27 , 28 , 29 , 30 ]. The studies found high rates of test anxiety among medical students, ranging from 52% [ 27 ] to as high as 81.1% [ 29 ]. Final-year students tend to experience the highest test anxiety [ 29 ].
Test anxiety has a significant negative correlation with academic performance measures and grade point average (GPA) [ 27 , 28 , 29 ]. Green et al. (2016) found that test anxiety was moderately negatively correlated with USMLE score ( r = −0.24, p = 0.00); high test anxiety was associated with low USMLE scores in the control group, further suggesting that anxiety can adversely affect performance. The finding that a test-taking strategy course reduced anxiety without improving test scores highlights the complex nature of anxiety’s impact on performance.
Nazir et al. (2021) found that female medical students with excellent grades reported significantly lower test anxiety than those with low academic grades, with an odds ratio of 1.47, indicating that students with higher test anxiety are more likely to have lower academic grades. Kim’s (2016) research shows moderate correlations between test anxiety and negative achievement emotions such as anxiety and boredom, but, interestingly, this anxiety does not significantly affect practical exam (OSCE) scores or GPAs. However, one study found that examination stress enhanced academic performance with a large effect size (W = 0.78), with stress levels at 47.4% in the sample, suggesting that a certain level of stress before exams may be beneficial [ 30 ].
Three papers explored shame’s effect on medical students’ academic achievement [ 24 , 31 , 32 ]. Hayat et al. (2018) reported that academic emotions, like shame, depend significantly on the academic year. Shame was found to have a slight, significant negative correlation with learners’ academic achievement ( r = −0.15). One study found that some medical students felt shame during simulation-based education examinations because they had made incorrect decisions, which decreased their self-esteem and motivation to learn. However, others who felt shame were motivated to study harder to avoid repeating the same mistakes [ 23 ].
Hautz et al.’s (2017) study examined how shame affects medical students’ learning using a randomised controlled trial in which the students were divided into two groups: one group performed a breast examination on mannequins and the other on actual patients. The results showed that students who performed the clinical examination on actual patients experienced significantly higher levels of shame but performed better in examinations than the mannequin group. In the final assessments on standardised patients, both groups performed equally well. Shame therefore decreased with more clinical practice, but it was not statistically significantly related to learning or performance. Similarly, Burr and Dallaghan (2019) reported that the level of shame among medical students was 40% but had no association with academic performance.
Three articles discussed medical students’ emotions and academic performance [ 23 , 24 , 32 ]. Burr and Dallaghan (2019) examined the relationship between academic success and emotions in medical students, emphasising the links between academic accomplishment and professional efficacy, as well as hope, pride, worry, and shame. Professional efficacy was the factor most significantly linked to academic performance, explaining 31.3% of the variance. The importance of emotions for understanding, processing data, recalling memories, and cognitive load is emphasised throughout the research. To improve academic achievement, efforts should be made to increase student self-efficacy.
Hayat et al. (2018) found that positive emotions and intrinsic motivation are strongly connected with academic achievement, although emotions fluctuate between educational levels but not between genders. The correlations between negative emotions and academic achievement, ranging from −0.15 to −0.24 for different emotions, suggest small but statistically significant adverse effects.
Behren et al.’s (2019) mixed-methods study found that students felt various emotions during the simulation, chiefly positive emotions and moderate anxiety. However, no significant relationships were found between positive emotions and the students’ performance during the simulation [ 23 ].
This review aims to investigate the role of emotions in the academic performance of undergraduate medical students. Meta-analysis could not be used because of the heterogeneity of the data collection tools and the different research designs [ 33 ]. Therefore, narrative synthesis was adopted in this paper. The studies are grouped into four categories as follows: (1) the effect of depression and anxiety on academic performance, (2) test anxiety and academic achievement, (3) shame and academic performance, and (4) academic performance, emotions, and medical students. The control-value theory [ 20 ] is used to interpret the findings.
According to the retrieved research, depression and anxiety can have both a negative and a positive impact on the academic performance of medical students. Severe anxiety may impair memory function, decrease concentration, lead to a state of hypervigilance, interfere with judgment and cognitive function, and further affect academic performance [ 4 ]. Most of the good-quality retrieved articles found that anxiety and depression were associated with low academic performance [ 16 , 24 , 25 , 26 ]. Moreira (2018) and Mihailescu (2016) found that higher depression levels were associated with more failed courses and a lower GPA. However, they did not find any association between anxiety level and academic performance.
By contrast, some studies have suggested that experiencing some level of anxiety reinforces students’ motivation to improve their academic performance [ 16 , 34 ]. Zalihic et al. (2017) investigated anxiety sensitivity in relation to academic success and observed a positive relationship between anxiety level and high academic scores; they explained this by noting that when medical students feel anxious, they tend to prepare and study more, desiring to achieve better scores and fulfil social expectations. Similarly, another study found that anxiety has a negative impact on academic performance when excessive and a positive effect when manageable, in which case it encourages medical students and motivates them to achieve higher scores [ 35 ].
In the broader literature, research findings on the impact of anxiety on academic performance are contradictory. While some studies suggest that a certain level of anxiety can boost students’ motivation to improve their academic performance, other research has shown that anxiety has a negative impact on academic success [ 36 , 37 ]. Attitudes to education and anxiety also differ widely across cultures. In many East Asian societies, high academic pressure and societal expectations can worsen anxiety. Education is highly valued in these societies, frequently leading to significant academic stress, which encompasses attaining high academic marks and outperforming peers in competitive examinations. The academic demands placed on students can result in heightened levels of anxiety: the apprehension of not meeting expectations can cause considerable psychological distress, which can manifest in students’ physical and mental health and academic achievement [ 38 , 39 ].
The majority of the studies reviewed confirm that test anxiety negatively affects academic performance [ 27 , 28 , 29 ]. Several studies have found a significant correlation between test anxiety and academic achievement, indicating that higher levels of test anxiety are associated with lower exam scores and lower academic performance [ 40 , 41 ]. For example, Green et al.’s (2016) RCT found that test anxiety has a moderate, significant negative correlation with the USMLE score. Medical students who took the test-taking strategy course had lower levels of test anxiety than the control group, and their post-exam test anxiety scores improved from baseline. Although their test anxiety improved after taking the course, there was no significant difference in exam scores between students who had and had not taken it; the intervention was therefore not effective at improving performance. According to the control-value theory, such an intervention could be improved by designing an emotionally effective learning environment, using a straightforward instructional design, fostering self-regulation of negative emotions, and teaching students emotion-oriented regulation [ 22 ].
Additionally, according to this theory, students who perceive exams as difficult are more likely to experience test anxiety because test anxiety results from a student’s negative appraisal of the task and outcome values, leading to a reduction in their performance. This aligns with Kim’s (2016) study, which found that students who believed that the OSCE was a problematic exam experienced test anxiety more than other students [ 9 , 22 , 42 ].
In the wider literature, a meta-analytic review by von der Embse (2018) found a medium, significant negative correlation ( r = −0.24) between test anxiety and test performance in undergraduate educational settings [ 43 ]. They also found a small, significant negative correlation ( r = −0.17) between test anxiety and GPA. This indicates that higher levels of test anxiety are associated with lower test performance. Moreover, Song et al.’s (2021) experimental study examined the effects of test anxiety on working memory capacity and found that test anxiety negatively correlated with academic performance [ 44 ]; the evidence from Song’s study suggests a small but significant effect of anxiety on working memory capacity. However, another cross-sectional study revealed that test anxiety in medical students had no significant effect on exam performance [ 45 ]. The complexities of this relationship necessitate additional investigation. Since the retrieved articles are from different countries, it is critical to recognise the possible influence of cultural differences on the impact of test anxiety. Cultural factors such as different educational systems, assessment tools, and societal expectations may lead to variation in how test anxiety is experienced and expressed across diverse communities [ 46 , 47 ]. Culture has a substantial impact on how test anxiety is expressed and evaluated. Research suggests that the degree and manifestations of test anxiety differ across cultural settings, emphasising the importance of using culturally validated methods to evaluate test anxiety accurately. A study conducted by Lowe (2019) with Canadian and U.S. college students demonstrated cultural variations in the factors contributing to test anxiety: Canadian students exhibited elevated levels of physiological hyperarousal, while U.S. students showed more pronounced cognitive interference.
These variations indicate that the cultural environment influences how students perceive and respond to test anxiety, resulting in differing effects on academic performance across cultures. Furthermore, scholars highlight the importance of using carefully constructed instruments to assess test anxiety that are comparable across diverse cultural cohorts. This approach ensures that interpretations of test scores are reliable and can be compared across different populations. Hence, understanding and addressing cultural disparities is imperative in order to create effective interventions and support for students who encounter test anxiety in diverse cultural environments. Further studies are therefore needed to examine the level of test anxiety in its cultural context.
The review examined three studies that discuss the impact of feelings of shame on academic performance [ 23 , 24 , 48 ]. Generally, shame is considered a negative emotion that involves self-reflection and self-evaluation, and it leads to rumination and self-condemnation [ 49 ]. Intimate examinations conducted by medical students can induce feelings of shame, affecting their ability to communicate with patients and their clinical decisions. Shame can increase the avoidance of intimate physical examinations, but it can also encourage further clinical practice [ 23 , 24 , 48 ].
As noted above, one study found that some medical students felt shame during simulation-based education examinations because they had made incorrect decisions, which decreased their self-esteem and motivation to learn, whereas others who felt shame were motivated to study harder to avoid repeating the same mistakes [ 23 ]. Shame decreased with more clinical practice, but it did not affect learning or performance [ 48 ]. The literature on how shame affects medical students’ learning is inconclusive [ 31 ].
In the broader literature, shame is considered maladaptive, leading to dysfunctional behaviour, encouraging withdrawal and avoidance of events, and inhibiting social interaction. However, few studies have been conducted on shame in the medical field, so more research is needed to investigate its role in medical students’ academic performance [ 49 ]. The literature suggests several solutions to the problem of shame in medical education. It is necessary to establish nurturing learning settings that encourage students to discuss their problems and mistakes openly without fear of severe criticism. This can be accomplished by encouraging medical students to engage in reflective practice, facilitating the processing of their emotions and enabling them to derive valuable insights from their experiences while avoiding excessive self-blame [ 50 ]. Robust mentorship and support mechanisms can assist students in managing the difficulties associated with intimate examinations; teaching staff can model proper behaviours and provide valuable feedback and effective mentoring [ 51 ]. Training and workshops that specifically target communication skills and the handling of sensitive situations can equip students to conduct intimate examinations, thereby decreasing the chance that they avoid such examinations out of shame [ 52 ].
The literature review focused on three studies that examined the relationship between emotions and the academic achievements of medical students [ 23 , 24 , 32 ].
Behren et al.’s (2019) mixed-methods study of medical students’ achievement emotions during simulations found that placing students in challenging clinical cases that they can handle raises positive emotions. Students perceived these challenges as a positive drive for learning, and mild anxiety was considered beneficial. However, the study also found non-significant correlations between emotions and performance during the simulation, indicating a complex relationship between emotions and academic performance. The results revealed that feelings of frustration were perceived to reduce students’ interest in and motivation for studying, hamper their decision-making, and negatively affect their self-esteem, which is consistent with the academic achievement emotions literature, where negative emotions are associated with poor intrinsic motivation and a reduced ability to learn [ 3 ].
The study also emphasises that mild anxiety can have positive effects, a finding corroborated by Gregor (2005), who posits that moderate degrees of anxiety can improve performance. The author suggests that an ideal state of arousal (which may be experienced as anxiety) enhances performance. Mild anxiety is commonly seen as a type of psychological stimulation that readies the body for upcoming challenges, frequently referred to as a “fight or flight” response. In the realm of academic performance, this state of heightened arousal can enhance concentration and optimise cognitive functions such as memory and problem-solving, and thus overall performance. However, once the ideal point is surpassed, any additional increase in arousal results in a decline in performance [ 53 ]. This is further supported by Cassady and Johnson (2002), who found that a certain level of anxiety can motivate students to engage in more thorough preparation, hence enhancing their performance.
The reviewed research reveals a positive correlation between positive emotions and academic performance and a negative correlation between negative emotions and academic performance. These findings align with the control–value theory [ 8 , 22 ], which suggests that positive emotions facilitate learning through mediating factors, including cognitive learning strategies such as strategic thinking, critical thinking and problem-solving and metacognitive learning strategies such as monitoring, regulating, and planning students’ intrinsic and extrinsic motivation. Additionally, several studies found that extrinsic motivation from the educational environment and the application of cognitive and emotional strategies improve students’ ability to learn and, consequently, their academic performance [ 23 , 24 , 32 ]. By contrast, negative emotions negatively affect academic performance. This is because negative emotions reduce students’ motivation, concentration, and ability to process information [ 23 , 24 , 32 ].
This review aims to thoroughly investigate the relationship between emotions and academic performance in undergraduate medical students, but it has inherent limitations. Overall, the methodological quality of the retrieved studies was primarily good or fair, and poor-quality research was excluded from the synthesis. The good-quality papers demonstrated strengths in sampling techniques, data analysis, collection, and reporting. However, most of the retrieved articles used cross-sectional designs, which cannot establish causal relationships; this is an inherent limitation of cross-sectional studies. Furthermore, given the reliance on self-reported data, there were concerns about potential recall bias. These methodological difficulties were noted in most of the examined research. When contemplating the implications for practice and future study, the impact of these limitations on the validity of the data should be acknowledged.
The limitation of the review process and the inclusion criteria restricted the study to articles published from January 2013 to December 2023, potentially overlooking relevant research conducted beyond this timeframe. Additionally, the exclusive focus on undergraduate medical students may constrain the applicability of findings to other health fields or educational levels.
Moreover, excluding articles in languages other than English and those not published in peer-reviewed journals introduces potential language and publication biases. Reliance on electronic databases and specific keywords may inadvertently omit studies using different terms or indexing. While the search strategy was meticulous, it might not cover every relevant study because of variations in indexing and database coverage. However, the involvement of two assessors in study screening, selection, data extraction, and quality assessment improved the robustness of the review and reduced the risk of missing relevant research.
In conclusion, these limitations highlight the need for careful interpretation of the study’s findings and stress the importance of future research addressing these constraints to offer a more comprehensive understanding of the nuanced relationship between emotions and academic performance in undergraduate medical education.
The review exposes the widespread prevalence of depression, anxiety, and test anxiety within the medical student population. The impact on academic performance is intricate, with evidence of both adverse and favourable relationships. Addressing the mental health challenges of medical students necessitates tailored interventions for enhancing mental well-being in medical education. It is also crucial to develop practical strategies that address the complex factors involved in overcoming test anxiety. Future research should prioritise the advancement of anxiety reduction strategies to enhance academic performance, drawing on the control–value theory’s emphasis on creating an emotionally supportive learning environment. Although test anxiety is very common among medical students, the literature has not conclusively determined its actual effect on academic performance; there is therefore a clear need for a study that examines this relationship. Moreover, the retrieved literature did not provide effective solutions for managing test anxiety, a gap that highlights the need for practical solutions informed by Pekrun’s control–value theory. Ideally, a longitudinal study measuring test anxiety and exam scores over time would be the most appropriate approach. It is also necessary to explore cultural differences in order to develop more effective solutions and support systems tailored to specific cultural contexts.
The impact of shame on academic performance in medical students was inconclusive. Shame is a negative emotion that has an intricate influence on learning outcomes. The inadequacy of current literature emphasises the imperative for additional research to unravel the nuanced role of shame in the academic journeys of medical students.
Overall, emotions play a crucial role in shaping students’ academic performance, and research has attempted to find ways to improve medical students’ learning experiences. It is therefore recommended that medical schools revise their curricula and consider using simulation-based learning in their instructional designs to enhance learning and improve students’ emotions. Studies have also suggested using academic coaching to help students achieve their goals, change their learning styles, and apply self-testing and simple rehearsal of the material. Moreover, the literature recommends improving medical students’ critical thinking and autonomy and changing teaching styles to better support students.
All included articles are cited in the manuscript. The quality assessment of the included articles is located in supplementary materials file no. 1.
Weurlander M, Lonn A, Seeberger A, Hult H, Thornberg R, Wernerson A. Emotional challenges of medical students generate feelings of uncertainty. Med Educ. 2019;53(10):1037–48.
Boekaerts M, Pekrun R. Emotions and emotion regulation in academic settings. Handbook of educational psychology: Routledge; 2015. pp. 90–104.
Camacho-Morles J, Slemp GR, Pekrun R, Loderer K, Hou H, Oades LG. Activity achievement emotions and academic performance: a meta-analysis. Educational Psychol Rev. 2021;33(3):1051–95.
Aboalshamat K, Hou X-Y, Strodl E. Psychological well-being status among medical and dental students in Makkah, Saudi Arabia: a cross-sectional study. Med Teach. 2015;37(Suppl 1):S75–81.
Mirghni HO, Ahmed Elnour MA. The perceived stress and approach to learning effects on academic performance among Sudanese medical students. Electron Physician. 2017;9(4):4072–6.
Baessler F, Zafar A, Schweizer S, Ciprianidis A, Sander A, Preussler S, et al. Are we preparing future doctors to deal with emotionally challenging situations? Analysis of a medical curriculum. Patient Educ Couns. 2019;102(7):1304–12.
Rowe AD, Fitness J. Understanding the role of negative emotions in adult learning and achievement: a social functional perspective. Behav Sci (Basel). 2018;8(2).
Pekrun R, Frenzel AC, Goetz T, Perry RP. The control-value theory of achievement emotions: An integrative approach to emotions in education. Emotion in education: Elsevier; 2007. pp. 13–36.
Zeidner M. Test anxiety: The state of the art. 1998.
Cassady JC, Johnson RE. Cognitive test anxiety and academic performance. Contemp Educ Psychol. 2002;27(2):270–95.
Tangney JP, Dearing RL. Shame and guilt: Guilford Press; 2003.
Fang J, Brown GT, Hamilton R. Changes in Chinese students’ academic emotions after examinations: pride in success, shame in failure, and self-loathing in comparison. Br J Educ Psychol. 2023;93(1):245–61.
York TT, Gibson C, Rankin S. Defining and measuring academic success. Practical Assess Res Evaluation. 2019;20(1):5.
Abdulghani HM, Irshad M, Al Zunitan MA, Al Sulihem AA, Al Dehaim MA, Al Esefir WA, et al. Prevalence of stress in junior doctors during their internship training: a cross-sectional study of three Saudi medical colleges’ hospitals. Neuropsychiatr Dis Treat. 2014;10:1879–86.
Moreira de Sousa J, Moreira CA, Telles-Correia D. Anxiety, depression and academic performance: a study amongst Portuguese medical students versus non-medical students. Acta Med Port. 2018;31(9):454–62.
Junaid MA, Auf AI, Shaikh K, Khan N, Abdelrahim SA. Correlation between academic performance and anxiety in medical students of Majmaah University - KSA. J Pak Med Assoc. 2020;70(5):865–8.
Mihăilescu AI, Diaconescu LV, Donisan T, Ciobanu AM. The influence of emotional distress on the academic performance in undergraduate medical students. Romanian J Child Adolesc Psychiatry. 2016;4(1/2):27–40.
Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372.
Hahn H, Kropp P, Kirschstein T, Rücker G, Müller-Hilke B. Test anxiety in medical school is unrelated to academic performance but correlates with an effort/reward imbalance. PLoS ONE. 2017;12(2):1–13.
Pekrun R. The control-value theory of achievement emotions: assumptions, corollaries, and Implications for Educational Research and Practice. Educational Psychol Rev. 2006;18(4):315–41.
Graesser AC. Emotions are the experiential glue of learning environments in the 21st century. Learn Instruction. 2019.
Pekrun R, Perry RP. Control-value theory of achievement emotions. International handbook of emotions in education: Routledge; 2014. pp. 120–41.
Behrens CC, Dolmans DH, Gormley GJ, Driessen EW. Exploring undergraduate students achievement emotions during ward round simulation: a mixed-method study. BMC Med Educ. 2019;19(1):316.
Burr J, Beck-Dallaghan GL. The relationship of emotions and burnout to medical students’ academic performance. Teach Learn Med. 2019;31(5):479–86.
Zalihić A, Mešukić S, Sušac B, Knezović K, Martinac M. Anxiety sensitivity as a predictor of academic success of medical students at the University of Mostar. Psychiatria Danubina. 2017;29(Suppl 4):851–4.
Del-Ben CM, Machado VF, Madisson MM, Resende TL, Valério FP, Troncon LEDA. Relationship between academic performance and affective changes during the first year at medical school. Med Teach. 2013;35(5):404–10.
Nazir MA, Izhar F, Talal A, Sohail ZB, Majeed A, Almas K. A quantitative study of test anxiety and its influencing factors among medical and dental students. J Taibah Univ Med Sci. 2021;16(2):253–9.
Green M, Angoff N, Encandela J. Test anxiety and United States Medical Licensing Examination scores. Clin Teacher. 2016;13(2):142–6.
Ben Loubir D, Serhier Z, Diouny S, Battas O, Agoub M, Bennani Othmani M. Prevalence of stress in Casablanca medical students: a cross-sectional study. Pan Afr Med J. 2014;19:149.
Kausar U, Haider SI, Mughal IA, Noor MSA. Stress levels of final year MBBS students and its effect on their academic performance. Prof Med J. 2018;25(6):932–6.
Hautz WE, Schröder T, Dannenberg KA, März M, Hölzer H, Ahlers O, et al. Shame in medical education: a randomized study of the acquisition of intimate examination skills and its effect on subsequent performance. Teach Learn Med. 2017;29(2):196–206.
Hayat AA, Salehi A, Kojuri J. Medical student’s academic performance: the role of academic emotions and motivation. J Adv Med Educ Professionalism. 2018;6(4):168–75.
Deeks JJ, Riley RD, Higgins JP. Combining results using meta-analysis. In: Systematic reviews in health research: meta-analysis in context. 2022. pp. 159–84.
Aboalshamat K, Hou X-Y, Strodl E. The impact of a self-development coaching programme on medical and dental students’ psychological health and academic performance: a randomised controlled trial. BMC Med Educ. 2015;15:134.
Jamil H, Alakkari M, Al-Mahini MS, Alsayid M, Al Jandali O. The impact of anxiety and depression on academic performance: a cross-sectional study among medical students in Syria. Avicenna J Med. 2022;12(03):111–9.
Mirawdali S, Morrissey H, Ball P. Academic anxiety and its effects on academic performance. 2018.
Al-Qaisy LM. The relation of depression and anxiety in academic achievement among group of university students. Int J Psychol Couns. 2011;3(5):96–100.
Cheng DR, Poon F, Nguyen TT, Woodman RJ, Parker JD. Stigma and perception of psychological distress and depression in Australian-trained medical students: results from an inter-state medical school survey. Psychiatry Res. 2013;209(3):684–90.
Lee M, Larson R. The Korean ‘examination hell’: long hours of studying, distress, and depression. J Youth Adolesc. 2000;29(2):249–71.
Ali SK. 861 – Social phobia among medical students. Eur Psychiatry. 2013;28:1.
Bonna AS, Sarwar M, Md Nasrullah A, Bin Razzak S, Chowdhury KS, Rahman SR. Exam anxiety among medical students in Dhaka city and its associated factors: a cross-sectional study. Asian J Med Health. 2022;20(11):20–30.
Kim K-J. Factors associated with medical student test anxiety in objective structured clinical examinations: a preliminary study. Int J Med Educ. 2016;7:424–7.
Von der Embse N, Jester D, Roy D, Post J. Test anxiety effects, predictors, and correlates: a 30-year meta-analytic review. J Affect Disord. 2018;227:483–93.
Song J, Chang L, Zhou R. Test anxiety impairs filtering ability in visual working memory: evidence from event-related potentials. J Affect Disord. 2021;292:700–7.
Theobald M, Breitwieser J, Brod G. Test anxiety does not predict exam performance when knowledge is controlled for: strong evidence against the interference hypothesis of test anxiety. Psychol Sci. 2022;33(12):2073–83.
Lowe PA. Examination of test anxiety in samples of Australian and US higher education students. High Educ Stud. 2019;9(4):33–43.
Kavanagh BE, Ziino SA, Mesagno C. A comparative investigation of test anxiety, coping strategies and perfectionism between Australian and United States students. North Am J Psychol. 2016;18(3).
Mihăilescu AI, Diaconescu LV, Ciobanu AM, Donisan T, Mihailescu C. The impact of anxiety and depression on academic performance in undergraduate medical students. Eur Psychiatry. 2016;33:S341–2.
Terrizzi JA Jr, Shook NJ. On the origin of shame: does shame emerge from an evolved disease-avoidance architecture? Front Behav Neurosci. 2020;14:19.
Epstein RM. Mindful practice. JAMA. 1999;282(9):833–9.
Hauer KE, Teherani A, Dechet A, Aagaard EM. Medical students’ perceptions of mentoring: a focus-group analysis. Med Teach. 2005;27(8):732–4.
Kalet A, Pugnaire MP, Cole-Kelly K, Janicik R, Ferrara E, Schwartz MD, et al. Teaching communication in clinical clerkships: models from the macy initiative in health communications. Acad Med. 2004;79(6):511–20.
Gregor A. Examination anxiety: live with it, control it or make it work for you? School Psychol Int. 2005;26(5):617–35.
I would like to thank the Lancaster University library for helping me search the literature and identify the appropriate databases, and I thank Lancaster University for providing access to several software packages.
No funding.
Authors and affiliations.
King Abdulaziz University, Jeddah, Saudi Arabia
Nora Alshareef
Lancaster University, Lancaster, UK
Nora Alshareef, Ian Fletcher & Sabir Giga
NA made substantial contributions throughout the systematic review process and was actively involved in writing and revising the manuscript. NA was responsible for the design of the study, the acquisition, analysis, and interpretation of data, and the drafting and substantive revision of the manuscript. NA has approved the submitted version and is personally accountable for her contributions, ensuring the accuracy and integrity of the work. IF was instrumental in screening the literature, extracting data, and conducting the quality assessment of the included studies. Additionally, IF played a crucial role in revising the results and discussion sections of the manuscript, ensuring that the interpretation of data was both accurate and insightful. IF has approved the submitted version and has agreed to be personally accountable for his contributions, particularly in terms of the accuracy and integrity of the parts of the work he was directly involved in. SG contributed significantly to the selection of papers and data extraction, demonstrating critical expertise in resolving disagreements among authors. SG’s involvement was crucial in revising the entire content of the manuscript, enhancing its coherence and alignment with the study’s objectives. SG has also approved the submitted version and is personally accountable for his contributions, committed to upholding the integrity of the entire work.
Correspondence to Nora Alshareef .
Ethics approval and consent to participate.
Not applicable.
Consent for publication was obtained from the other authors.
The authors declare no competing interests.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .
Cite this article.
Alshareef, N., Fletcher, I. & Giga, S. The role of emotions in academic performance of undergraduate medical students: a narrative review. BMC Med Educ 24 , 907 (2024). https://doi.org/10.1186/s12909-024-05894-1
Received : 08 March 2024
Accepted : 12 August 2024
Published : 23 August 2024
DOI : https://doi.org/10.1186/s12909-024-05894-1
ISSN: 1472-6920
Brian A. Jacob (Walter H. Annenberg Professor of Education Policy, Professor of Economics, and Professor of Education, University of Michigan; former Brookings expert) and Cristina Stanojevich (doctoral student, Michigan State University)
August 26, 2024
In March 2020, virtually all public school districts in the U.S. shut their doors. For the next 18 months, schooling looked like it never had before. Homes became makeshift classrooms; parents became de facto teachers. But by fall 2022, many aspects of K-12 education had returned to “normal.” Schools resumed in-person classes, extracurricular activities flourished, and mask mandates faded.
But did schools really return to what they were before the COVID-19 pandemic? Our research suggests not. We interviewed teachers, school leaders, and district administrators across 12 districts in two states, and then we surveyed a nationally representative set of veteran educators in May 2023. We found that the COVID-19 pandemic transformed K-12 education in fundamental ways.
Below, we describe how the pandemic reshaped the educational landscape in these ways and we consider the opportunities and challenges these changes present for students, educators, and policymakers.
One of the most immediate and visible changes brought about by the pandemic was the rapid integration of technology into the classroom. Before COVID-19, many schools were easing into the digital age. The switch to remote learning in March 2020 forced schools to fully embrace Learning Management Systems (LMS), Zoom, and educational software almost overnight.
When students returned to in-person classrooms, the reliance on these digital tools persisted. Over 70% of teachers in our survey report that students are now assigned their own personal device (over 80% for secondary schools). LMS platforms like Google Classroom and Schoology remain essential in many schools. An assistant superintendent of a middle-income district remarked, “Google Classroom has become a mainstay for many teachers, especially middle school [and] high school.”
The platforms serve as hubs for posting assignments, accessing educational content, and enabling communication between teachers, students, and parents. They have become popular among parents as well. One teacher, who has school-age children herself, noted:
“Whereas pre-COVID…you’re hoping and praying your kids bring home information…[now] I can go on Google classroom and be like, ‘Oh, it says you worked on Mesopotamia today. What was that lesson about?’”
The pandemic’s impact on student learning was profound. Reading and math scores dropped precipitously, and the gap widened between more and less advantaged students. Many schools responded by adjusting their schedules or adopting new programs. Several mentioned adopting “What I need” (WIN) or “Power” blocks to accommodate diverse learning needs. During these blocks, teachers provide individualized support to students while others work on independent practice or extension activities.
Teachers report placing greater emphasis on small-group instruction and personalized learning. They spend less time on whole-class lecture and rely more on educational software (e.g., Lexia for reading and Zearn for math) to tailor instruction to individual student needs. A third-grade teacher in a low-income district explained:
“The kids are in so many different places, Lexia is very prescriptive and diagnostic, so it will give the kids specifically what level and what skills they need. [I] have a student who’s working on Greek and Latin roots, and then I have another kid who’s working on short vowel sounds. [It’s] much easier for them to get it through Lexia than me trying to get, you know, 18 different reading lessons.”
Teachers aren’t just using technology to personalize instruction. Having spent months gaining expertise with educational software, more teachers find it natural to integrate those programs into their classrooms today. Those teachers who used ed tech before report doing so even more now. They describe using software like Flocabulary and Prodigy to make learning more engaging, and games such as Kahoot to give students practice with various skills. Products like Nearpod let them create presentations that integrate instruction with formative assessment. Other products, like Edpuzzle, help teachers monitor student progress.
Some teachers discovered how to use digital tools to save time and improve their communication with students. One elementary teacher, for example, explains that even when her students complete an assignment by hand, she has them take a picture of it and upload it to her LMS:
“I can sort them, and I can comment on them really fast. So it’s made feedback better. [I have] essentially a portfolio of all their math, rather than like a hard copy that they could lose…We can give verbal feedback. I could just hit the mic and say, ‘Hey, double check number 6, your fraction is in fifths, it needs to be in tenths.’”
The pandemic also revealed and exacerbated the social-emotional challenges that students face. In our survey, nearly 40% of teachers report many more students struggling with depression and anxiety than before the COVID-19 pandemic; over 80% report having at least a few more students struggling.
These student challenges have changed teachers’ work. When comparing how they spend class time now versus before the pandemic, most teachers report spending more time on activities relating to students’ social-emotional well-being (73%), more time addressing behavioral issues (70%), and more time getting students caught up and reviewing routines and procedures (60%).
In response, schools have invested in social-emotional learning (SEL) programs and hired additional counselors and social workers. Some districts turned to online platforms such as Class Catalyst and CloseGap that allow students to anonymously report their emotional state on a daily basis, which helps school staff track students’ mental health.
Teachers also have been adapting their expectations of students. Many report assigning less homework and providing students more flexibility to turn in assignments late and retake exams.
The pandemic also radically reshaped parent-teacher communications. Mirroring trends across society, videoconferencing has become a go-to option. Schools use videoconferencing for regular parent-teacher conferences, along with meetings to discuss special education placements and disciplinary incidents. In our national survey, roughly one-half of teachers indicate that they conduct a substantial fraction of parent-teacher conferences online; nearly a quarter of teachers report that most of their interactions with parents are virtual.
In our interviews, teachers and parents gushed about the convenience afforded by videoconferencing, and some administrators believe it has increased overall parent participation. (One administrator observed, “Our attendance rates [at parent-teacher conferences] and interaction with parents went through the roof.”)
An administrator from a low-income district shared the benefits of virtual Individualized Education Plan (IEP) meetings:
“It’s rare that we have a face-to-face meeting…everything is Docusigned now. Parents love it because I can have a parent that’s working—a single mom that’s working full time—that can step out during her lunch break…[and] still interact with everybody.”
During the pandemic, many districts purchased a technology called Remind that allows teachers to use their personal smartphones to text with parents while blocking their actual phone number. We heard that teachers continue to text with parents, citing the benefits for quick check-ins or questions. Remind and many LMS platforms also have translation capabilities that make it easier for teachers and parents to overcome language barriers.
The changes described above have the potential to improve student learning and increase educational equity. They also carry risks. On the one hand, the growing use of digital tools to differentiate instruction may close achievement gaps, and the ubiquity of video conferencing could allow working parents to better engage with school staff. On the other hand, the overreliance on digital tools could harm students’ fine motor skills (one teacher remarked, “[T]heir handwriting sucks compared to how it used to be”) and undermine student engagement. Some new research suggests that relying on digital platforms might impede learning relative to the old-fashioned “paper and pencil” approach. And regarding virtual conferences, the superintendent of a small, rural district told us, “There’s a disconnect when we do that…No, I want the parents back in our buildings, I want people back. We’re [the school] a community center.”
Of course, some of the changes we observed may not persist. For example, fewer teachers may rely on digital tools to tailor instruction once the “COVID cohorts” have aged out of the system. As the emotional scars of the pandemic fade, schools may choose to devote fewer resources to SEL programming. It’s important to note, too, that many of the changes we found come from the adoption of new technology, and the technology available to educators will continue to evolve (e.g., with the integration of new AI technologies into personalized tutoring systems). That being said, now that educators have access to more instructional technology and—perhaps more importantly—greater familiarity with using such tools, they might continue to rely on them.
The changes brought about by the COVID-19 pandemic provide a unique opportunity to rethink and improve the structure of K-12 education. While the integration of technology and the focus on social-emotional learning offer promising avenues for enhancing student outcomes, they also require continuous evaluation. Indeed, these changes raise some questions beyond simple cost-benefit calculations. For example, the heightened role of ed tech raises questions about the proper role of the private sector in public education. As teachers increasingly “outsource” the job of instruction to software products, what might be lost?
Educational leaders and policymakers must ensure that these pandemic-inspired changes positively impact learning and address the evolving needs of students and teachers. As we navigate this new educational landscape, the lessons learned from this unprecedented time can serve as a guide for building a more resilient, equitable, and effective educational system for the future.
Beyond technological changes, COVID-19 shifted perspectives about K-12 schooling. A middle-school principal described a new mentality among teachers in her district, “I think we have all become more readily able to adapt…we’ve all learned to assess what we have in front of us and make the adjustments we need to ensure that students are successful.” And a district administrator emphasized how the pandemic highlighted the vital role played by schools:
“…we saw that when students were not in school. From a micro and macro level, the environment that a school creates to support you growing up…we realized how needed this network is…both academically and socially, in growing our citizens up to be productive in the world. And we are happy to have everyone back.”
At the end of the day, this realization may be one of the pandemic’s most enduring legacies.
IMAGES
COMMENTS
The relationship of discourse and topic knowledge to fifth graders' writing performance. Journal of Educational Psychology, 107, 391-406. Google Scholar. Parr J., Jesson R. (2016). Mapping the landscape of writing instruction in New Zealand. Reading & Writing: An Interdisciplinary Journal, 29, 981-1011.
Teaching/Writing: The Journal of Writing Teacher Education is a peer reviewed journal focusing on issues of writing teacher education - the development, education, and mentoring of prospective, new, and experienced teachers of writing at all levels. The journal draws from composition studies - writing program administrators, writing across-the-curriculum specialists, and other teaching ...
This article provides an overview of writing for publication in peer-reviewed journals. While the main focus is on writing a research article, it also provides guidance on factors influencing journal selection, including journal scope, intended audience for the findings, open access requirements, and journal citation metrics.
Reading and Writing publishes high-quality scientific articles pertaining to the processes, acquisition, and loss of reading and writing skills. The journal fully represents the necessarily interdisciplinary nature of research in the field, focusing on the interaction among various disciplines, such as linguistics, information processing, neuropsychology, cognitive psychology, speech and ...
The oldest educational publication in the country, the Journal of Education's mission is to disseminate knowledge that informs practice in PK-12, higher, and professional education. A refereed publication, the Journal offers … | View full journal description. This journal is a member of the Committee on Publication Ethics (COPE).
Writing is an essential but complex skill that students must master if they are to take full advantage of educational, occupational, and civic responsibilities. Schools, and the teachers who work in them, are tasked with teaching students how to write. Knowledge about how to teach writing can be obtained from many different sources, including one's experience teaching or being taught to ...
the necessary steps to writing for publication in educational journals. The first section discusses the steps in the writing process, from first thoughts on •a topic to the final draft. The next section deals with identifying and selecting•a publisher, whether to write a query letter, and what criteria one journal uses when
Language arts. "Language Arts is a professional journal for elementary and middle school teachers and teacher educators. It provides a forum for discussions on all aspects of language arts learning and teaching, primarily as they relate to children in pre-kindergarten through the eighth grade." Reading & writing quarterly: overcoming learning ...
Writing is one of the earliest academic skills studied by researchers interested in education and literacy. Edward Thorndike, whose theory of connectionism laid the groundwork for educational psychology, published a research report in Citation 1910 examining whether one aspect of writing (handwriting) can be measured reliably and validly. During that and the next decade, research in writing ...
"A Review of Educational Dialogue Strategies to Improve Academic Writing Skills," by Marlies Schillings, Herma Roebertsen, Hans Savelberg, and Diana Dolmans in Active Learning in Higher Education, November 2018 . This journal article provides an overview of written feedback, sharing research on its effectiveness in improving academic writing.
Journal of the Scholarship of Teaching and Learning, Vol. 10, No. 2, June 2010, pp. 34 - 47. Academic literacy: The importance and impact of writing across the curriculum - a case study Joseph Defazio1, Josette Jones2, Felisa Tennant3 and Sara ... communicate in the real world of work is a challenge for educators in higher education. Faculty
Writing for Young Children. Young Children is a peer-reviewed journal from the National Association for the Education of Young Children (NAEYC). Published four times a year, each issue offers practical, research-based articles on timely topics of interest. Our readers work with or on behalf of young children from birth through age 8.
'Four Square': Michele Morgan has been writing IEPs and behavior plans for 17 years to help students be more successful. She is a National Board-certified teacher and a Utah Teacher Fellow with Hope ...
... journals, writers must plan their writing with the journals' themes in mind. Those who wish to publish in professional journals would be well-advised to acquaint ... to 60 weeks; the average is 11 weeks. Once an article is accepted, another - and much longer - period usually passes before the article is published.
Writing in journals can be a powerful strategy for students to respond to literature, gain writing fluency, dialogue in writing with another student or the teacher, or write in the content areas. While journaling is a form of writing in its own right, students can also freely generate ideas for other types of writing as they journal. Teachers can use literature that takes the form of a journal ...
Your writing should always:
1) Be tailored for the audience of the educational community.
2) Be tailored for the type or purpose of writing in education.
3) Use formal, specific, and precise language.
4) Be credibly sourced and free of plagiarism.
5) Convey clear, complete, and organized communication.
6) Use correct English language conventions.
Show the students a reading response journal; read a few of the student's entries and the teacher's responses aloud. Explain that the student then responds to the teacher's response, if appropriate, and adds more to his or her journal. Sample prompts: What you liked and disliked about the selection and why. What you wish had happened.
Classroom Journaling Is Essential. The benefits of students integrating journal writing across the curriculum are amply documented. From a teacher's perspective, there are few activities that can trump journal writing for understanding and supporting the development of student thinking. Journaling turbo-charges curiosity.
Objective: To introduce the process of journal writing to promote reflection and discuss the techniques and strategies to implement journal writing in an athletic training education curriculum. Background: Journal writing can facilitate reflection and allow students to express feelings regarding their educational experiences. The format of this writing can vary depending on the students' needs ...
Living Education is an online journal that celebrates and explores issues that are of relevance to homeschooling families. They are "especially interested in articles that highlight unique and innovative paths that the educational journey can take." ... They are looking for the following types of articles: Favorite Classroom Writing Prompts ...
If a journal article has a DOI, include the DOI in the reference. Always include the issue number for a journal article. If the journal article does not have a DOI and is from an academic research database, end the reference after the page range (for an explanation of why, see the database information page). The reference in this case is the same as for a print journal article.
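The rules above can be illustrated with a hypothetical APA-style entry; the author, title, journal name, volume, issue, pages, and DOI below are invented purely to show the format:

```
Lastname, A. B. (2020). Title of the article in sentence case. Journal Name, 12(3), 45-67. https://doi.org/10.0000/example
```

Note that the issue number "(3)" is always included, and the DOI is presented as a full https://doi.org/ URL; if this article lacked a DOI and came from a research database, the reference would simply end after "45-67."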
Automated written corrective feedback (AWCF) has been widely applied in second language (L2) writing classrooms in the past few decades. Recently, the introduction of tools based on generative ...
The Army University Press - the US Army's premier multimedia organization - focuses on advancing the ideas and insights military professionals need to lead and succeed. The Army University Press is the Army's entry point for cutting edge thought and discussion on topics important to the Army and national defense. Through its suite of publication platforms and educational services, the ...
In this context, we read with great interest the recent letter in Clinical and Experimental Dermatology by Potestio et al. 1 While we concur that AI can be a helpful collaborator in academic writing, the authors claim that this tool, trained on data from the internet written by humans, may 'help to extract information from electronic medical ...
This paper is devoted to a narrative review of the literature on emotions and academic performance in medicine. The review aims to examine the role emotions play in the academic performance of undergraduate medical students. Eight electronic databases were used to search the literature from 2013 to 2023, including Academic Search Ultimate, British Education Index, CINAHL, Education Abstract ...
Student behavior problems, cellphones in class, anemic pay and AI-powered cheating are taking their toll on America's teachers. Many are demoralized or leaving the profession.
Educational leaders and policymakers must ensure that these pandemic-inspired changes positively impact learning and address the evolving needs of students and teachers. As we navigate this new ...