Artificial intelligence and the Futures of Learning


The Artificial Intelligence and the Futures of Learning project builds on the Recommendation on the Ethics of Artificial Intelligence adopted at the 41st session of the UNESCO General Conference in 2021 and follows up on the recommendations of the UNESCO global report Reimagining our futures together: a new social contract for education, launched in November 2021. It is implemented within the framework of the Beijing Consensus on Artificial Intelligence and Education and against the backdrop of the UNESCO Strategy on technological innovation in education (2021-2025).

The project will address both the human and technological dimensions related to AI and the futures of learning. 

Strands of work

The project consists of three independent but complementary strands: 

  • AI and the Future of Learning
  • Guidance for Generative AI in education and research
  • AI Competency Frameworks for Students and Teachers 

Policy dialogue and consultations

  • International Forums on AI and Education: International Forum on AI and Education: Steering AI to Empower Teachers and Transform Teaching (December 2022); International Forum on AI and Education: Ensuring AI as a Common Good to Transform Education (December 2021); International Forum on AI and the Futures of Education: Developing competencies for the AI Era (December 2020); and International Conference on Artificial Intelligence and Education: Planning Education in the AI Era: Lead the leap (May 2019).
  • Ministerial Roundtable on Generative AI in Education: A virtual ministerial meeting on generative AI in education took place on 25 May 2023, gathering 25 ministers to debate the urgent need for regulations on generative AI and the competencies needed to reap its benefits. More information: https://www.unesco.org/en/articles/ministerial-roundtable-generative-ai-education
  • Consultations on AI competency framework for teachers (October 2022): The consultation was attended by 15 international experts in AI and education and more than 70 participants. More information: https://www.unesco.org/en/articles/unesco-supports-definition-and-development-ai-competencies-teachers

Knowledge production

  • Guidance for Generative AI in Education and Research: The Guidance has been drafted and will be launched during Digital Learning Week (4-7 September 2023).
  • Drafting the AI Competency Framework for school students: The first draft of the Framework will be presented for consultation during Digital Learning Week (4-7 September 2023).
  • Drafting the AI Competency Framework for teachers: The first draft of the Framework will be presented for consultation during Digital Learning Week (4-7 September 2023).
  • K-12 AI curricula: a mapping of government-endorsed AI curricula: The report builds on the results of a survey on “AI curricula for school students” which was circulated to all UNESCO Member States in 2022. The report will inform the development of the AI Competency Framework for Students.
  • A survey on the governmental use of AI in education: Completed in early 2023, the survey covers assessments of Member States’ regulations on the ethics of AI and its use in education, strategies for AI in education, and national programmes on developing AI competencies for teachers. The results of the survey informed the AI Competency Framework for Teachers.
  • Definition of Algorithm Literacy and Data Literacy: A call for contributions to the definition of Algorithm Literacy and Data Literacy was launched in June 2023. The selected think-pieces will feed into the development of the AI Competency Frameworks for students and teachers.
  • An in-depth case study on K-12 AI curricula of the United Arab Emirates: In collaboration with the Regional Center for Educational Planning (RCEP) and the Ministry of Education of the United Arab Emirates (UAE), UNESCO completed a case study on the UAE’s K-12 AI curriculum and its implementation. The case study is planned for release in 2023.

Capacity building

  • Workshop on AI curriculum development for schools in Oman (May 2022): UNESCO, the Ministry of Education of Oman, RCEP, UNESCO Doha and Ericsson conducted an online workshop to empower 25+ national curriculum developers in integrating AI competencies into K-12 education. More information: https://www.unesco.org/en/articles/oman-embarks-development-k-12-ai-curricula-support-unesco-and-rcep
  • Workshop on Coding and AI for teachers in Lebanon (May 2023): As part of the “Teaching Coding and AI for Teachers and K-12 Students” programme in Lebanon, UNESCO, UNESCO Beirut and the Ministry of Education of Lebanon conducted a three-day training on coding and AI, empowering teachers and staff with AI skills and computational thinking. More information: https://www.unesco.org/en/articles/unesco-and-lebanon-join-forces-develop-coding-skills-teachers-and-disadvantaged-students

Publications

  • International forum on AI and education: steering AI to empower teachers and transform teaching, 5-6 December 2022; analytical report, UNESCO, 2023
  • K-12 AI curricula: a mapping of government-endorsed AI curricula, UNESCO, 2022
  • AI and education: Guidance for policy-makers, UNESCO, 2021
  • International Forum on AI and Education: Ensuring AI as a Common Good to Transform Education, synthesis report, UNESCO, 2021
  • International Forum on AI and the Futures of Education, developing competencies for the AI Era, 7-8 December 2020: synthesis report, UNESCO, 2020
  • Artificial intelligence in education: Compendium of promising initiatives, UNESCO, 2020
  • Beijing Consensus on Artificial Intelligence and Education, UNESCO, 2019
  • Artificial intelligence in education: Compendium of promising initiatives, UNESCO, 2019
  • International conference on Artificial intelligence and Education, Planning education in the AI Era: Lead the leap: final report, UNESCO, 2019

Related events

  • Digital Learning Week on “Steering technology for education”, 4-7 September 2023
  • Ministerial Roundtable on Generative AI in Education, 25 May 2023
  • International forum on AI and education: steering AI to empower teachers and transform teaching, 5-6 December 2022
  • International Forum on AI and Education: Ensuring AI as a Common Good to Transform Education, 7-8 December 2021
  • Launch of the AI and the Futures of Learning Project, 30 September 2021
  • International Forum on Artificial Intelligence and the Futures of Education, 7-9 December 2020
  • Online Edition of Mobile Learning Week 2020 - Beyond Disruption: Technology Enabled Learning Futures, 12-14 October 2020
  • International Conference on Artificial Intelligence and Education, 16-18 May 2019
  • Mobile Learning Week 2019 – Artificial intelligence for Sustainable Development, 4-8 March 2019

The project is supported by the Tomorrow Advancing Life Education Group (TAL) of China, a long-term partner of UNESCO and one of the sponsors of the International Conference on Artificial Intelligence and Education.


Educating in a World of Artificial Intelligence

  • Posted February 9, 2023
  • By Jill Anderson


Senior Researcher Chris Dede isn't overly worried about growing concerns over generative artificial intelligence, like ChatGPT, in education. As a longtime researcher on emerging technologies, he's seen many decades in which new technologies promised to upend the field. Instead, Dede says artificial intelligence requires educators to get smarter about how they teach in order to truly take advantage of what AI has to offer. “The trick about AI is that to get it, we need to change what we're educating people for because if you educate people for what AI does well, you're just preparing them to lose to AI. But if you educate them for what AI can't do, then you've got IA [Intelligence Augmentation],” he says. Dede, the associate director of research for the National AI Institute for Adult Learning and Online Education, says AI raises the bar and has the power to impact learning in powerful ways.

In this episode of the Harvard EdCast, Dede talks about how the field of education needs to evolve and get smarter, in order to work with — not against — artificial intelligence. 

ADDITIONAL RESOURCES

  • Dede's keynote on Intelligence Augmentation, delivered at an AI and Education conference
  • Brief on Intelligence Augmentation, co-authored by Dede for HGSE’s Next Level Lab

Jill Anderson:  I'm Jill Anderson. This is the Harvard EdCast. 

Chris Dede thinks we need to get smarter about using artificial intelligence and education. He has spent decades exploring emerging learning technologies as a Harvard researcher. The recent explosion of generative AI, like ChatGPT, has been met with mixed reactions in education. Some public school districts have banned it. Some colleges and universities have tweaked their teaching and learning already. 


Chris Dede: I've actually been working with AI for more than half a century. Way back when I was a graduate student, I read the first article on AI in education, which was published in 1970. And the author confidently predicted that we wouldn't need teachers within five or six years because AI was going to do everything. And of course, we still see predictions like that today.

But having lived through nine hype cycles for AI, I'm both impressed by how much it's advanced and wary about elaborate claims for it. And there is a lot of excitement now about what people are calling generative AI, which includes programs like ChatGPT. It includes things like DALL-E that are capable of creating images. It includes, really, AI on its own doing performances that we previously would have thought were something that people would have to do.

But it's interesting to compare ChatGPT to a search engine. People don't remember this, but there was a time before search engines when people really struggled to find resources, and there was enormous excitement when search engines came out. And search engines are, in fact, AI. They are based on AI at the back end, coming up with lists of things that hopefully match what you typed in. In fact, the problem with a search engine becomes not trying to find anything, but trying to filter everything to decide what's really useful.

So you can think of ChatGPT as the next step beyond a search engine where instead of getting a list of things and then you decide which might be useful and you examine them, you get an answer that says, this is what I think you want. And that is really more the AI taking charge than it is the AI saying, I can help you. Here's some things that you might look at and decide about. That makes me wary because AI is not at a stage where it really understands what it's saying. 

And so it will make up things when it doesn't know them, much like a not-very-good student seeing if they can fake out the teacher. And it will provide answers that are not customized to somebody's culture or to somebody's reading level or to somebody's other characteristics. So it's really quite limited.

I know that Harvard has sent some wording out that I've now put into my syllabi about students being welcome to use whatever tools they want. But when they present something as their work, it has to be something that they wrote themselves. It can't be something that somebody else wrote, which is classic plagiarism. It can't be something that Chat AI wrote that they're presenting as their work and so on. I think that what Chat AI does is it raises the bar for human performance. 

I know a lot about what people are going through now in terms of job interviews because my older daughter is an HR manager, and my younger daughter just graduated. And she's having a lot of job interviews. And in contrast to earlier times, now, job interviews typically involve a performance. 

If you're going to be hired for a marketing position, they'll say bring in a marketing plan when we do our face-to-face interview on this, and we'll evaluate it. Or in her case, in mechanical engineering, they say when you come in, there's this system that you're going to have a chance to debug, and we'll see how well you do it. Those employers are going to type the same thing into Chat AI. And if someone comes in with something that isn't any better than Chat AI, they're not going to get hired because why hire somebody that can't outcompete a free resource? 

Jill Anderson:  Oh interesting. 

Chris Dede: So it raises the bar for human performance in an interesting way. 

Jill Anderson:  Your research looks at something called intelligence augmentation. I want to know what that means and how that's different from artificial intelligence. 

Chris Dede: Intelligence augmentation is really about the opposite of this sort of negative example I was describing where now you've got to outthink Chat AI if you want to get a job. It says, when is the whole more than the sum of the parts? When do a person and AI working together do things that neither one could do as well on their own? 

And often, people think, well, yeah, I can see a computer programmer, there might be intelligence augmentation because I know that machines can start to do programming. What they don't realize is that it applies to a wide range of jobs, including mine, as a college professor. So I am the associate director for research in a national AI institute funded by the National Science Foundation on adult learning and online education. And one of the things the Institute is building is AI assistants for college faculty. 

So there's question answering assistants to help with student questions, and there's tutoring assistants and library assistants and laboratory assistants. There's even a social assistant that can help students in a large class meet other students who might be good learning partners. So now, as a professor, I'm potentially surrounded by all these assistants who are doing parts of my job, and I can be deskilled by that, which is a bad future. You sort of end up working for the assistant where they say, well, here's a question I can't answer. 

So you have to do it. Or you can upskill because the assistant is taking over routine parts of the job. And in turn, you can focus much more deeply on personalization to individual students, on bringing in cultural dimensions and equity dimensions that AI does not understand and cannot possibly help with. The trick about AI is that to get it, we need to change what we're educating people for because if you educate people for what AI does well, you're just preparing them to lose to AI. But if you educate them for what AI can't do, then you've got IA. 

Jill Anderson:  So that's the goal here. We have to change the way that we're educating young people, even older people at this point. I mean, everybody needs to change the way that they're learning about these things and interacting with them. 

Chris Dede: They do. And we're hampered by our system of assessment because the assessments that we use, including Harvard with the GRE and the SAT and so on, those are what AI does well. AI can score really well on psychometric tests. So we're using the wrong measure, if you will. We need to use performance assessments to measure what people can do to get into places like Harvard or higher education in general because that's emphasizing the skills that are going to be really useful for them. 

Jill Anderson:  You mentioned at the start artificial intelligence isn't really something brand new. This has been around for decades, but we're so slow to adapt and prepare and alter the way that we do things that once it reaches kind of the masses, we're already behind. 

Chris Dede:  Well, we are. And the other part of it is that we keep putting old wine in new bottles. I mean, this is — if I had to write a headline for the entire history of educational technology, it would be old wine in new bottles. But we don't understand what the new bottle really means. 

So let me give you an example of something that I think generative AI could make a big difference, be very powerful, but I'm not seeing it discussed in all the hype about generative AI. And that is evidence-based modeling for local decisions. So let's take climate change. 

One of the problems with climate change is that let's say that you're in Des Moines, Iowa, and you read about all this flooding in California. And you say to yourself, well, I'm not next to an ocean. I don't live in California. And I don't see why I should be that worried about this stuff. 

Now, no one has done a study, I assume, of flooding in Des Moines, Iowa, in 2050 based on mid-level projections about climate change. But with generative AI, we can estimate that now. 

Generative AI can reach out across topographic databases, meteorological databases, and other related databases to come up with: here are the parts of Des Moines that are going to go underwater in 2050, and here's how often this is going to happen if these models are correct. That really changes the dialogue about climate change, because now you're talking about, wait a minute, you mean that park I take my kids to is going to have a foot of water in it? So I think that kind of evidence-based modeling is not something that people are doing with generative AI right now, but it's perfectly feasible. And that's the new wine that we can put in the new bottle.

Jill Anderson:  That's really a great way to use that. I mean, and you could even use that in your classroom. Something that you said a long, long time ago was that — and this is paraphrasing — the idea that we often implement new technology, and we make this mistake of focusing on students first rather than teachers.

Chris Dede:  In December, I gave a keynote at a conference called Empowering Learners for the Age of AI that has been held the last few years. And one of the things I talked about was the shift from teaching to learning. Both are important, but teaching is ultimately sort of pouring knowledge into the minds of learners. And learning is much more open ended, and it's essential for the future because every time you need to learn something new, you can't afford to go back and have another master's degree. You need to be able to do self-directed learning.

And where AI can be helpful with this is that AI can be like an intellectual partner, even when you don't have a teacher that can help you learn in different ways. One of the things that I've been working on with a professor at the Harvard Business School is AI systems that can help you learn negotiation. 

Now, the AI can't be the person you're negotiating with. AI is not good at playing human beings — not yet and not for quite a long time, I think. But what AI can do is to create a situation where a human being can play three people at once. So here you are. You're learning how to negotiate a raise. 

You go into a virtual conference room. There's three virtual people who are three bosses. There's one simulation specialist behind all three, and you negotiate with them. And then at the end, the system gives you some advice on what you did well and not so well. 

And if you have a human mentor, that person gives you advice as well. Rhonda Bondie, who was a professor at HGSE until she moved to Hunter College, and I have published five articles on the work we did for HGSE's Reach Every Reader project on using this kind of digital puppeteering to help teachers practice equitable discussion leading. So again, here's something that people aren't talking about, where AI on the front end can create rich, evocative situations, and AI and machine learning on the back end can find really interesting patterns for improvement.

Jill Anderson:  You know, Chris, how hard is it to get there for educators? 

Chris Dede: I think, in part, that's what these national AI institutes are about. Our institute, which is really adult learning with a workplace focus, is looking at that part of the spectrum. There's another institute whose focus is middle school and high school and developing AI partners for students where the student and the partner are learning together in a different kind of IA. There's a third Institute that's looking at narrative and storytelling as a powerful form of education and how can AI help with narrative and storytelling. 

You can imagine sitting down. Mom and dad aren't around. You've got a storybook like Goldilocks and the Three Bears, and you've got something like Alexa that can listen to what you're reading and respond. 

And so you begin, and you say, Goldilocks went out of her house one day and went into the woods and got lost. And Alexa says, why do you think Goldilocks went into the woods? Was she a naughty girl? No. Or was she an adventurous girl, or was she deeply concerned about climate change and wanting to study ecosystems? 

I mean, I'm being playful about this, but I think the point is that AI doesn't understand any of the questions that it's asking, but it can ask the questions, and then the child can start to think deeper than just regurgitating the story. So there's all sorts of possibilities here that we just have to think of as new wine, instead of asking how AI can automate our older thinking about teaching and learning.

Jill Anderson:  I've been hearing a lot of concern about writing in particular — writing papers where young people are actually expressing their own ideas, and concerns about plagiarism and cheating, though the latter have long existed as challenges in education and aren't really new. Does AI really change this? And how might higher ed or any educator look at this differently?

Chris Dede:  So I think where AI changes this is it helps us understand the kind of writing that we should be teaching versus the kind of writing that we are teaching. So I remember preparing my children for the SAT, and it used to have something called the essay section. And you had to write this very formal essay that was a certain number of paragraphs, and the topic sentences each had to do this and so on. 

Nobody in the world writes those kinds of essays in the real world. They're just like an academic exercise. And of course, AI now can do that beautifully. 

But any reporter will tell you that they could never use Chat AI to write their stories, because stories are what they write. They write narratives. If you just put in a description, you'll be fired from your reportorial job because no one is interested in descriptions. They want a story.

So giving students a description and teaching them to turn it into a story, or teaching them to turn it into something else that has a human and creative dimension to it: how would you write this for a seventh-grader who doesn't have much experience with the world? How would you write this for somebody in Russia, building on the foundation of what AI gives you and taking it in ways that only people can? That's where writing should be going.

And of course, good writing teachers will tell you, well, that's nothing new. I've been teaching my students how to write descriptive essays. The people who are most qualified to talk about the limits of AI are the ones who teach what the AI is supposedly doing. 

Jill Anderson:  So do you have any helpful tips for educators regardless of what level they're working at on where to kind of begin embracing this technology? 

Chris Dede: What AI can do well is what's called reckoning, which is calculative prediction. And I've given some examples of that with flooding in Des Moines and other kinds of things. And what people do is practical wisdom, if you will, and it involves culture and ethics and what it's like to be embodied and to have the biological things that are part of human nature and so on. 

So when I look at what I'm teaching, I have to ask myself, how much of what I'm teaching is reckoning? Because then I'm preparing people to lose to AI. And how much of what I'm teaching is practical wisdom?

So for example, we spend a lot of time in vocational technical education and standard academic education teaching people to factor. How do you factor these complex polynomials? 

There is no workplace anywhere in the world, even in the most primitive possible conditions, where anybody makes a living by factoring. It's an app. It's an app on a phone. Should you know a little bit about factoring so it's not magic? Sure. 

Should you become fluent in factoring? Absolutely not. It's on the wrong side of the equation.

So I think teachers and curriculum developers and assessors and stakeholders in the outcomes of education need to ask themselves, what is being taught now, and which parts of it are shifting over? And how do we include enough about those parts that AI isn't magic? But how do we change the balance of our focus to be more on the practical wisdom side?

Jill Anderson:  So final thoughts here — don't be scared but figure out how to use this to your advantage? 

Chris Dede: Yeah, don't be scared. AI is not smart. It really isn't. People would be appalled if they knew how little AI understands what it's telling you, especially given how much people seem to be relying on it. But it is capable of taking over parts of what you do that are routine and predictable and, in turn, freeing up the creative and the innovative and the human parts that are really the rewarding parts of both work and life.

EdCast: Chris Dede is a senior research fellow at the Harvard Graduate School of Education. He is also a co-principal investigator of the National AI Institute for Adult Learning and Online Education. I'm Jill Anderson. This is the Harvard EdCast, produced by the Harvard Graduate School of Education. Thanks for listening.

[MUSIC PLAYING]


AI Will Transform Teaching and Learning. Let’s Get it Right.

At the recent AI+Education Summit, Stanford researchers, students, and industry leaders discussed both the potential of AI to transform education for the better and the risks at play.


When the Stanford Accelerator for Learning and the Stanford Institute for Human-Centered AI began planning the inaugural AI+Education Summit last year, the public furor around AI had not reached its current level. This was the time before ChatGPT. Even so, intensive research was already underway across Stanford University to understand the vast potential of AI, including generative AI, to transform education as we know it. 

By the time the summit was held on Feb. 15, ChatGPT had reached more than 100 million unique users, and 30% of all college students had used it for assignments, making it one of the fastest-adopted applications ever – and certainly in education settings. Within the education world, teachers and school districts have been wrestling with how to respond to this emerging technology.

The AI+Education Summit explored a central question: How can AI like this and other applications be best used to advance human learning? 

“Technology offers the prospect of universal access to increase fundamentally new ways of teaching,” said Graduate School of Education Dean Daniel Schwartz in his opening remarks. “I want to emphasize that a lot of AI is also going to automate really bad ways of teaching. So [we need to] think about it as a way of creating new types of teaching.” 

Researchers across Stanford – from education, technology, psychology, business, law, and political science – joined industry leaders like Sal Khan, founder and CEO of Khan Academy, in sharing cutting-edge research and brainstorming ways to unlock the potential of AI in education in an ethical, equitable, and safe manner. 

Participants also spent a major portion of the day engaged in small discussion groups in which faculty, students, researchers, staff, and other guests shared their ideas about AI in education. Discussion topics included natural language processing applied to education; developing students’ AI literacy; assisting students with learning differences; informal learning outside of school; fostering creativity; equity and closing achievement gaps; workforce development; and avoiding potential misuses of AI with students and teachers. 

Several themes emerged over the course of the day on AI’s potential, as well as its significant risks.

First, a look at AI’s potential:

1. Enhancing personalized support for teachers at scale

Great teachers remain the cornerstone of effective learning. Yet teachers receive limited actionable feedback to improve their practice. AI presents an opportunity to support teachers as they refine their craft at scale through applications such as: 

  • Simulating students: AI language models can serve as practice students for new teachers. Percy Liang, director of the Stanford HAI Center for Research on Foundation Models, said that they are increasingly effective and are now capable of demonstrating confusion and asking adaptive follow-up questions.
  • Real-time feedback and suggestions: Dora Demszky, assistant professor of education data science, highlighted the ability of AI to provide real-time feedback and suggestions to teachers (e.g., questions to ask the class), creating a bank of live advice based on expert pedagogy.
  • Post-teaching feedback: Demszky added that AI can produce post-lesson reports that summarize classroom dynamics. Potential metrics include student speaking time or identification of the questions that triggered the most engagement. Research finds that when students talk more, learning improves.
  • Refreshing expertise: Sal Khan, founder of the online learning environment Khan Academy, suggested that AI could help teachers stay up to date with the latest advancements in their field. For example, a biology teacher could have AI update them on the latest breakthroughs in cancer research, or leverage AI to update their curriculum.

2. Changing what is important for learners

Stanford political science Professor Rob Reich posed a compelling question: Is generative AI comparable to the calculator in the classroom, or will it be a more detrimental tool? Today, the calculator is ubiquitous in middle and high schools, enabling students to quickly solve complex computations, graph equations, and solve problems. However, it has not resulted in the removal of basic mathematical computation from the curriculum: Students still know how to do long division and calculate exponents without technological assistance. On the other hand, Reich noted, writing is a way of learning how to think. Could outsourcing much of that work to AI harm students’ critical thinking development?

Liang suggested that students must learn about how the world works from first principles – this could be basic addition or sentence structure. However, they no longer need to be fully proficient – in other words, doing all computation by hand or writing all essays without AI support.

In fact, by no longer requiring full proficiency, Demszky argued, AI may actually raise the bar. The models won’t be doing the thinking for the students; rather, students will now have to edit and curate, forcing them to engage more deeply than they have previously. In Khan’s view, this allows learners to become architects who are able to pursue something more creative and ambitious.


And Noah Goodman, associate professor of psychology and of computer science, questioned the analogy, saying this tool may be more like the printing press, which democratized knowledge but did not eliminate the need for human writing skills.

3. Enabling learning without fear of judgment

Ran Liu, chief AI scientist at Amira Learning, said that AI has the potential to support learners’ self-confidence. Teachers commonly encourage class participation by insisting that there is no such thing as a stupid question. However, for most students, fear of judgment from their peers holds them back from fully engaging in many contexts. As Liu explained, children who believe themselves to be behind are the least likely to engage in these settings.

Interfaces that leverage AI can offer constructive feedback that does not carry the same stakes or cause the same self-consciousness as a human’s response. Learners are therefore more willing to engage, take risks, and be vulnerable. 

One area in which this can be extremely valuable is soft skills. Emma Brunskill, associate professor of computer science, noted that many soft skills, such as communication, critical thinking, and problem-solving, are very hard to teach effectively. With AI, a real-time agent can provide support and feedback, letting learners try different tactics as they seek to improve.

4. Improving learning and assessment quality

Bryan Brown, professor of education, said that “what we know about learning is not reflected in how we teach.” For example, teachers know that learning happens through powerful classroom discussions. However, only one student can speak up at a time. AI has the potential to support a single teacher trying to generate 35 unique conversations, one with each student. 


This also applies to the workforce. During a roundtable discussion facilitated by Stanford Digital Economy Lab Director Erik Brynjolfsson and Candace Thille, associate professor of education and faculty lead on adult learning at the Stanford Accelerator for Learning, attendees noted that the inability to judge a learner’s skill profile is a leading industry challenge. AI has the potential to quickly determine a learner’s skills, recommend solutions to fill the gaps, and match them with roles that actually require those skills. 

Of course, AI is no panacea. The speakers also flagged its significant risks:

1. Model output does not reflect true cultural diversity

At present, ChatGPT and AI tools more broadly generate text that fails to reflect the diversity of students served by the education system or to capture the authentic voice of diverse populations. When the bot was asked to write in the cadence of the author of The Hate U Give, which features an African American protagonist, ChatGPT simply added “yo” in front of random sentences. As Sarah Levine, assistant professor of education, explained, this overwhelming gap fails to foster an equitable environment of connection and safety for some of America’s most underserved learners.

2. Models do not optimize for student learning

While ChatGPT spits out answers to queries, these responses are not designed to optimize student learning. As Liang noted, the models are trained to deliver answers as fast as possible, which often conflicts with what is pedagogically sound, whether that’s a more in-depth explanation of key concepts or a framing more likely to spark curiosity to learn more.

3. Incorrect responses come in pretty packages

Goodman demonstrated that AI can produce coherent text that is completely erroneous. His lab trained a virtual tutor tasked with solving and explaining algebra equations in a chatbot format. The chatbot would produce perfect sentences that exhibited top-quality teaching techniques, such as positive reinforcement, yet fail to arrive at the right mathematical answer. 
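The failure mode Goodman describes, fluent prose wrapped around a wrong number, is why a model's mathematical claims should be checked independently of its explanation. The sketch below is purely illustrative (not Goodman's system): a hypothetical `verify_root` helper that numerically tests whether a tutor's proposed answer actually satisfies a polynomial equation, regardless of how polished the accompanying text is.

```python
# Illustrative sketch only: verifying a tutor's numeric claim
# independently of its (possibly very fluent) explanation.

def verify_root(coeffs, x, tol=1e-9):
    """Return True if x is a root of the polynomial with the given
    coefficients (highest degree first), evaluated via Horner's method."""
    value = 0.0
    for c in coeffs:
        value = value * x + c
    return abs(value) < tol

# Equation: x^2 - 5x + 6 = 0, whose roots are 2 and 3.
coeffs = [1, -5, 6]

# A well-phrased but wrong tutor response might claim x = 4.
print(verify_root(coeffs, 4))  # False: polished wording, wrong math
print(verify_root(coeffs, 2))  # True
```

The point is the separation of concerns: the explanation's quality and the answer's correctness are independent, so the latter needs its own check.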

4. Advances exacerbate a motivation crisis

Chris Piech, assistant professor of computer science, told a story about a student who recently came into his office crying. The student was worried that ChatGPT’s rapid progress would diminish their future job prospects after many years spent learning how to code. Piech connected the incident to a broader existential motivation crisis: many students may no longer know what they should be focusing on, or don’t see the value of their hard-earned skills. 

The full impact of AI in education remains unclear at this juncture, but as all speakers agreed, things are changing, and now is the time to get it right. 

