AI Should Augment Human Intelligence, Not Replace It

  • David De Cremer
  • Garry Kasparov

Artificial intelligence isn’t coming for your job, but it will be your new coworker. Here’s how to get along.

Will smart machines really replace human workers? Probably not. People and AI bring different abilities and strengths to the table. The real question is: how can human intelligence work with artificial intelligence to produce augmented intelligence? Chess Grandmaster Garry Kasparov offers some unique insight here. After losing to IBM’s Deep Blue, he began to experiment with how a computer helper changed players’ competitive advantage in high-level chess games. What he discovered was that having the best players and the best program was less a predictor of success than having a really good process. Put simply, “Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.” As leaders look at how to incorporate AI into their organizations, they’ll have to manage expectations as AI is introduced, invest in bringing teams together and perfecting processes, and refine their own leadership abilities.

In an economy where data is changing how companies create value — and compete — experts predict that using artificial intelligence (AI) at a larger scale will add as much as $15.7 trillion to the global economy by 2030. As AI changes how companies work, many believe that who does this work will change, too — and that organizations will begin to replace human employees with intelligent machines. This is already happening: intelligent systems are displacing humans in manufacturing, service delivery, recruitment, and the financial industry, pushing human workers into lower-paid jobs or out of work entirely. This trend has led some to conclude that by 2040 our workforce may be totally unrecognizable.

  • David De Cremer is a professor of management and technology at Northeastern University and the Dunton Family Dean of its D’Amore-McKim School of Business. His website is daviddecremer.com.
  • Garry Kasparov is the chairman of the Human Rights Foundation and founder of the Renew Democracy Initiative. He writes and speaks frequently on politics, decision-making, and human-machine collaboration. Kasparov became the youngest world chess champion in history at 22 in 1985 and retained the top rating in the world for 20 years. His famous matches against the IBM super-computer Deep Blue in 1996 and 1997 were key to bringing artificial intelligence, and chess, into the mainstream. His latest book on artificial intelligence and the future of human-plus-machine is Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins (2017).

This wide-ranging guide to artificial intelligence in the enterprise provides the building blocks for becoming successful business consumers of AI technologies. It starts with introductory explanations of AI's history, how AI works and the main types of AI. The importance and impact of AI is covered next, followed by information on AI's key benefits and risks, current and potential AI use cases, building a successful AI strategy, steps for implementing AI tools in the enterprise and technological breakthroughs that are driving the field forward. Throughout the guide, we include hyperlinks to TechTarget articles that provide more detail and insights on the topics discussed.

Artificial intelligence vs. human intelligence: Differences explained

Artificial intelligence is humanlike. There are differences, however, between natural and artificial intelligence. Here are three ways AI and human cognition diverge.

Michael Bennett, Northeastern University

Smartness. Understanding. Brainpower. Ability to reason. Sharpness. Wisdom.

These are terms typically used to indicate human intelligence. The broad range of connotations they encompass is indicative of the many debates that have attempted to capture the essence of what we mean when we say intelligence. For thousands of years, humans have obsessed over how best to describe and define the term. Hundreds of definitions have been created, but for much of that time, intelligence has meant a biopsychological capacity to acquire and apply knowledge and skills.

For more than a century now, the intelligence debates have been energized by a sense of competition and uncertainty about the suitability of the biopsychological element of the meaning.

Artificial intelligence (AI), or machines with the capacity to do things traditionally associated with and assumed to be within the exclusive domain of humans, has rattled human society. Since the second half of the 20th century, and at a vastly accelerated pace in the last two decades, machines have exhibited the ability to learn and to apply learning in ways that only humans had been able to previously.

Human and artificial intelligence differ in significant ways, however. They are not synonymous or fungible. Even given the still intense internal contestation over what defines human intelligence and what defines artificial intelligence, the differences between the two are clear.


Human intelligence explained: What can humans do better than AI?

Humans tend to be superior to AI in contexts and at tasks that require empathy. Human intelligence encompasses the ability to understand and relate to the feelings of fellow humans, a capacity that AI systems struggle to emulate. Having evolved over at least 300,000 years, the species Homo sapiens developed a broad set of interactive skills -- an intelligence grounded in its development as a social animal -- that makes it adept at many forms of social intelligence. Related activities such as judgment, intuition, subtle yet effective communication, and imagination are all domains in which human intelligence is much more useful and valuable -- and simply better -- than AI in any of its present forms.

Artificial intelligence explained: What can AI do better than humans?

Artificial intelligence systems outperform humans in a range of important categories. AI, particularly machine learning algorithms, is strikingly effective at processing and integrating new information and at sharing new knowledge among separate AI models. The endurance of AI is also superior to human intelligence; machines do not require rest and do not get distracted. And AI works at speeds well beyond those of human intelligence; a machine will outperform a human by many orders of magnitude at most tasks that both have been trained to complete.

3 specific ways AI and human intelligence differ

1. One-shot vs. multishot learning

Human intelligence. One of the most miraculous qualities of humans is the ability to learn new concepts and ideas from a small number of samples, sometimes from a single one. Most humans are even able to understand and identify a pattern and to use it to generalize and extrapolate. Having been shown one or two images of a leopard, for example, and then being shown images of various types of animals, a human would be able to determine with high accuracy whether those images depicted a leopard. This ability is referred to as one-shot learning.

AI. Much more often than not, artificial intelligence systems need copious examples to achieve comparable levels of learning. An AI system may require millions, even billions, of such samples to learn at a level beyond that of a human of average intelligence. This requirement for multishot learning distinguishes AI from human intelligence. Many researchers feel that this difference is a strong basis for describing humans as being, on average, much more efficient learners than AI systems.
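The gap can be made concrete in code. Below is a minimal sketch of how one-shot classification is commonly approximated in practice: store a single embedded example per class and label a query by its nearest neighbor. The `embed` function is a hypothetical stand-in for a pretrained feature extractor, and the toy data is invented for illustration.

```python
import numpy as np

def embed(image):
    """Stand-in for a pretrained feature extractor (hypothetical here);
    in practice this would be a CNN or vision transformer that maps an
    image to a fixed-length vector."""
    return np.asarray(image, dtype=float).ravel()

def one_shot_classify(query, support):
    """Label a query by cosine similarity to a single stored example
    per class -- the nearest-neighbor recipe often used to approximate
    one-shot learning."""
    q = embed(query)
    best_label, best_score = None, -np.inf
    for label, example in support.items():
        e = embed(example)
        score = q @ e / (np.linalg.norm(q) * np.linalg.norm(e) + 1e-9)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# One example per class ("one shot"): toy 2x2 "images".
support = {"leopard": [[0.9, 0.8], [0.7, 0.9]],
           "zebra":   [[0.1, 0.9], [0.9, 0.1]]}
print(one_shot_classify([[0.8, 0.7], [0.6, 0.8]], support))  # -> leopard
```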

Human intelligence vs. AI

2. Imagination and recitation

Human intelligence. Many psychologists, philosophers and cognitive researchers deem imagination a fundamental human ability. They even go so far as to enshrine imagination as an element of what it means to be human. The quickening tempo of climate catastrophes, growing threats of potentially devastating international conflict and other looming challenges have led to continuous calls for imaginative problem-solving. The notion that human survival in the 21st century deeply depends on novel ideas has led to a mini-renaissance in thinking about human imagination and how best to cultivate it.

Definitions abound, but most consider human imagination as the ability to form ideas, mental sensations and concepts of phenomena that are not present and/or do not exist. Things that could've been, might've been or could never be are classic forms of the imaginable and are routinely conjured in the minds of virtually every human.

AI. By comparison, many researchers agree that artificial intelligence systems recite rather than imagine. Recitation can be understood as recalling information as it was presented. Computer systems are exceptionally well designed to do this. Some AI systems can recite in synthesized forms. When these systems are trained to draw images of various types of automobiles, they are then able to create mashups of the examples from which they learned. For example, an AI system trained on iconic automobiles could go on to generate a mashup of a 1968 Ford Mustang, a 1950 Volkswagen Beetle and a 2023 Ferrari Portofino. Although a small subset of AI researchers has described this as imagination, a more accurate description would be to call it synthetic recitation.
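In code, such synthetic recitation typically amounts to prompting a generative model, which recombines features it learned in training. A minimal sketch, assuming the Hugging Face diffusers library and a publicly downloadable checkpoint; the model name and prompt are illustrative, not from the article:

```python
# Sketch only: generating a "mashup" automobile with a text-to-image
# model. Requires the diffusers library; a GPU helps with speed.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
prompt = ("a car combining a 1968 Ford Mustang body, 1950 Volkswagen "
          "Beetle curves, and 2023 Ferrari Portofino styling")
image = pipe(prompt).images[0]  # recombines learned features; it does not imagine
image.save("mashup_car.png")
```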


3. Multisensory input and output

Human intelligence. Another comparatively striking quality of human intelligence is the ability to receive and quickly integrate information from all our senses and use that integrated perception to then make decisions. Sight, hearing, touch, smell and taste meld seamlessly and rapidly into a coherent understanding of where we are and what is happening around us and within us. The typical human is also able to subsequently respond to these perceptions with complex reactions that are based on multiple modes of sensation. In this way, the average human is able to incorporate multimodal inputs and to create multimodal outputs.

AI. In 2023, most artificial intelligence systems are unable to learn in this multimodal way. Famous AI systems, like ChatGPT, can only receive inputs in one form -- say, text. Some autonomous vehicles, however, are able to receive inputs from multiple types of sources. Self-driving automobiles currently use a variety of sensor types, including radar, lidar, accelerometers and microphones, to absorb crucial information from the environment they are navigating. Self-driving automobiles use multiple AI systems to understand these various flows of information, aggregate them and then make navigational decisions.
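The final step, aggregating several sensor streams into one decision, can be illustrated with the classic precision-weighted fusion rule. This is a generic textbook sketch, not how any particular vehicle stack works; the sensor names and numbers are made up.

```python
def fuse_estimates(measurements):
    """Inverse-variance (precision-weighted) fusion of range estimates
    from several sensors -- one textbook way to merge radar, lidar, and
    camera readings into a single belief.
    `measurements` maps sensor name -> (estimate_m, variance_m2)."""
    weights = {name: 1.0 / var for name, (_, var) in measurements.items()}
    total = sum(weights.values())
    fused = sum(w * measurements[name][0] for name, w in weights.items()) / total
    return fused, 1.0 / total  # fused estimate and its (smaller) variance

# Illustrative numbers only: distance to an obstacle in meters.
readings = {"radar": (24.8, 0.50), "lidar": (25.1, 0.05), "camera": (26.0, 2.00)}
print(fuse_estimates(readings))  # lidar dominates: ~25.09 m
```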

AI and human intelligence working together

As AI research and implementation continue apace and the practical, existential need for more applied human imagination grows, we should expect to see the two forms of intelligence increasingly brought together in human-AI teaming.

Recent polling of citizens and signals from policymakers around the world indicate a strong disinclination to turn decision-making over to even the most intelligent AI systems. At the same time, the problems confronting human societies presently seem to outstrip our ability to find solutions in a timely manner. The central challenge will likely be to integrate the two intelligences such that the virtues of each are amplified, while their respective weaknesses are diminished or erased. Some will find this prospect unnerving. But the magnitude of the global problems we confront will probably make the melding inevitable. Human-AI teaming might be not only our best hope, but one we will find irresistible.

Michael Bennett is director of educational curriculum and business lead for responsible AI in The Institute for Experiential Artificial Intelligence at Northeastern University in Boston. Previously, he served as Discovery Partners Institute's director of student experiential immersion learning programs at the University of Illinois. He holds a J.D. from Harvard Law School.

Artificial Intelligence vs. Human Intelligence

From the realm of science fiction into the realm of everyday life, artificial intelligence has made significant strides. Because AI has become so pervasive in today's industries and people's daily lives, a new debate has emerged, pitting the two competing paradigms of AI and human intelligence. 

While the goal of artificial intelligence is to build and create intelligent systems that are capable of doing jobs that are analogous to those performed by humans, we can't help but question if AI is adequate on its own. This article covers a wide range of subjects, including the potential impact of AI on the future of work and the economy, how AI differs from human intelligence, and the ethical considerations that must be taken into account.

The term artificial intelligence may be applied to any computer that exhibits characteristics similar to the human brain, including the ability to think critically, make decisions, and increase productivity. The foundation of AI is human insight, encoded in such a manner that machines can carry out tasks from the simplest to the most complicated.

Such synthesized insight is the product of intellectual activity, including study, analysis, logic, and observation. Tasks including robotics, control mechanisms, computer vision, scheduling, and data mining fall under the umbrella of artificial intelligence.

The origins of human intelligence and conduct can be traced back to an individual's unique combination of genetics, upbringing, and exposure to various situations and environments. It hinges on one's freedom to shape his or her environment through the application of newly acquired knowledge.

The information human intelligence provides is varied. It can, for example, yield insight into people with a similar skill set or background, and, when all is said and done, it can make sense of interpersonal relationships and competing interests.

The following comparison sets human intelligence against artificial intelligence across several dimensions:

Evolution

Human intelligence: The cognitive abilities to think, reason, and evaluate are built into human beings by their very nature.

Artificial intelligence: Norbert Wiener, who theorized feedback mechanisms, is credited with a significant early contribution to the development of AI.

Essence

Human intelligence: The purpose of human intelligence is to combine a range of cognitive activities in order to adapt to new circumstances.

Artificial intelligence: The goal of AI is to create computers that can behave like humans and complete tasks that humans would normally do.

Functionality

Human intelligence: People make use of the memory, processing capabilities, and cognitive talents that their brains provide.

Artificial intelligence: AI-powered devices operate by processing data and commands.

Pace of operation

Human intelligence: When it comes to speed, humans are no match for AI or robots.

Artificial intelligence: Computers can process far more information at a higher pace than individuals can. Where the human mind might answer one mathematical problem in five minutes, AI can solve ten problems in one minute.

Learning ability

Human intelligence: Human intellect is grounded in learning from a wide variety of experiences and situations.

Artificial intelligence: Machines cannot think abstractly or draw conclusions from past experience. They can only acquire knowledge through exposure to data and repeated practice, and they never develop the cognitive process that is unique to humans.

Decision-making

Human intelligence: Human decisions can be influenced by subjective factors that are not based on numbers alone.

Artificial intelligence: Because it evaluates on the basis of all the facts it has gathered, AI is exceptionally objective in its decision-making.

Accuracy

Human intelligence: Human insight almost always carries the possibility of "human error," meaning that certain details may be overlooked at one time or another.

Artificial intelligence: Because AI's capabilities rest on a set of rules that can be updated, it can deliver accurate results consistently.

Adaptability

Human intelligence: The human mind can adjust its perspective in response to changing surroundings. This is why people can retain information and excel at a wide variety of activities.

Artificial intelligence: AI takes considerably more time to adapt to unanticipated changes.

Flexibility

Human intelligence: Sound judgment allows humans to multitask, juggling a variety of jobs at once.

Artificial intelligence: AI learns tasks one at a time and can therefore accomplish only a fraction of the tasks simultaneously.

Social skills

Human intelligence: As social creatures, humans are superior at assimilating abstract information, maintaining self-awareness, and sensing the emotions of others.

Artificial intelligence: AI has not yet mastered picking up on social and emotional cues.

Operation

Human intelligence: Human intelligence can be described as inventive or creative.

Artificial intelligence: AI improves the overall performance of a system, but it cannot be creative or inventive, since machines cannot think the way people do.

According to recent research, varying the electrical characteristics of certain cells in simulated neural networks caused the networks to acquire new information more quickly than simulations in which all cells were identical. The researchers also found that the networks needed fewer of the modified cells to achieve the same outcomes, and that the approach consumed fewer resources than models built from identical cells.

These results not only shed light on how human brains excel at learning but may also help us develop more advanced artificial intelligence systems, such as speech and facial recognition software for digital assistants and autonomous vehicle navigation systems.
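A rough flavor of this idea in code: give each simulated neuron its own trainable time constant instead of a shared, fixed one. The layer below is an illustrative toy in PyTorch, not the architecture used in the cited research.

```python
import torch
import torch.nn as nn

class HeterogeneousLeakyLayer(nn.Module):
    """Leaky-integrator units whose per-neuron decay (time constant) is
    trainable -- a rough analogue of 'tweaking the electrical
    characteristics of certain cells'. Parameterization is illustrative."""
    def __init__(self, n_in, n_units):
        super().__init__()
        self.w_in = nn.Linear(n_in, n_units)
        # One learnable decay parameter per neuron (heterogeneous),
        # squashed to (0, 1) so each unit integrates at its own speed.
        self.raw_tau = nn.Parameter(torch.randn(n_units))

    def forward(self, x_seq):
        decay = torch.sigmoid(self.raw_tau)          # per-neuron decay
        state = torch.zeros(x_seq.shape[1], decay.shape[0])
        for x_t in x_seq:                            # time-major input
            state = decay * state + (1 - decay) * torch.tanh(self.w_in(x_t))
        return state

layer = HeterogeneousLeakyLayer(n_in=3, n_units=8)
out = layer(torch.randn(20, 4, 3))                   # 20 steps, batch of 4
print(out.shape)                                     # torch.Size([4, 8])
```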

The capabilities of AI are constantly expanding. Developing AI systems takes a significant amount of time, and it cannot happen without human intervention. All forms of artificial intelligence, from self-driving vehicles and robotics to more complex technologies like computer vision and natural language processing, depend on human intellect.

Impact of AI on the Future of Jobs

1. Automation of Tasks

The most noticeable effect of AI has been the digitalization and automation of formerly manual processes across a wide range of industries. Tasks and occupations that involve some degree of repetition, or the use and interpretation of large amounts of data, are now delegated to and administered by computers, in some cases without any human intervention at all.

2. New Opportunities

Artificial intelligence is creating new opportunities for the workforce by automating formerly labor-intensive tasks. The rapid development of technology has also produced new fields of study and work, such as digital engineering. So although some traditional manual-labor jobs may disappear, new opportunities and careers will emerge.

3. Economic Growth Model

When it's put to good use, rather than just for the sake of progress, AI has the potential to increase productivity and collaboration inside a company by opening up vast new avenues for growth. As a result, it may spur an increase in demand for goods and services, and power an economic growth model that spreads prosperity and raises standards of living.

4. Role of Work

In the era of AI, it is all the more important to recognize that employment serves purposes beyond just maintaining a standard of living. Work answers essential human needs for involvement, co-creation, dedication, and a sense of being needed, and these should not be overlooked. Even mundane tasks at work can be meaningful and advantageous, and if a task is eliminated or automated, it should be replaced with something that offers a comparable opportunity for human expression and contribution.

5. Growth of Creativity and Innovation

Experts now have more time to focus on analyzing, delivering new and original solutions, and other operations that are firmly in the area of the human intellect, while robotics, AI, and industrial automation handle some of the mundane and physical duties formerly performed by humans.

Will AI Replace Humans?

While AI can automate specific tasks and jobs, it is unlikely to replace humans wholesale. AI is best suited to handling repetitive, data-driven tasks and making data-driven decisions. Human skills such as creativity, critical thinking, emotional intelligence, and complex problem-solving remain more valuable and are not easily replicated by AI.

The future of AI is more likely to involve collaboration between humans and machines, where AI augments human capabilities and enables humans to focus on higher-level tasks that require human ingenuity and expertise. It is essential to view AI as a tool that can enhance productivity and facilitate new possibilities rather than as a complete substitute for human involvement.

Artificial intelligence is revolutionizing every sector and pushing humanity to a new level. However, a precise replica of human intellect is not yet feasible; the human cognitive process remains a mystery to scientists. For this reason, the common assumption in the growing debate between AI and human intelligence is that AI will supplement human efforts rather than immediately replace them.


Artificial Intelligence Essay for Students and Children

500+ Words Essay on Artificial Intelligence

Artificial Intelligence refers to the intelligence of machines. This is in contrast to the natural intelligence of humans and animals. With Artificial Intelligence, machines perform functions such as learning, planning, reasoning and problem-solving. Most noteworthy, Artificial Intelligence is the simulation of human intelligence by machines. It is probably the fastest-growing development in the world of technology and innovation. Furthermore, many experts believe AI could solve major challenges and crisis situations.

Artificial Intelligence Essay

Types of Artificial Intelligence

First of all, Artificial Intelligence can be categorized into four types, a classification proposed by Arend Hintze. The categories are as follows:

Type 1: Reactive machines – These machines can react to situations. A famous example is Deep Blue, the IBM chess program. Most noteworthy, the chess program won against Garry Kasparov, the popular chess legend. Furthermore, such machines lack memory and certainly cannot use past experiences to inform future ones. A reactive machine simply analyses the possible alternatives and chooses the best one (see the minimax sketch after this list).

Type 2: Limited memory – These AI systems are capable of using past experiences to inform future decisions. A good example is self-driving cars. Such cars have decision-making systems: the car takes actions like changing lanes based on recent observations, but there is no permanent storage of these observations.

Type 3: Theory of mind – This refers to understanding others. Above all, it means understanding that others have their own beliefs, intentions, desires, and opinions. However, this type of AI does not exist yet.

Type 4: Self-awareness – This is the highest and most sophisticated level of Artificial Intelligence. Such systems have a sense of self. Furthermore, they have awareness, consciousness, and emotions. Such technology does not yet exist, and it would certainly be a revolution.
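Returning to Type 1: a bare-bones minimax search is the classic recipe behind reactive game-playing machines. Deep Blue's real search was vastly more sophisticated (alpha-beta pruning, handcrafted evaluation, custom hardware), so treat this as an illustration of the principle only; the helper callables are placeholders to be supplied for a concrete game.

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Bare-bones minimax: score every reachable position down to a
    fixed depth and pick the best move, with no memory of past games.
    `moves`, `apply_move`, and `evaluate` are placeholder callables to
    be supplied for a concrete game."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state), None
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for m in options:
        score, _ = minimax(apply_move(state, m), depth - 1,
                           not maximizing, moves, apply_move, evaluate)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, m
    return best_score, best_move
```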

Applications of Artificial Intelligence

First of all, AI has significant use in healthcare. Companies are trying to develop technologies for quick diagnosis. Artificial Intelligence would efficiently operate on patients without human supervision. Such technological surgeries are already taking place. Another excellent healthcare technology is IBM Watson.

Artificial Intelligence in business would significantly save time and effort. There is an application of robotic automation to human business tasks. Furthermore, Machine learning algorithms help in better serving customers. Chatbots provide immediate response and service to customers.
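As a flavor of the simplest case, the sketch below shows a keyword-matching chatbot in Python. Real customer-service bots rely on machine learning rather than fixed rules, and the intents and replies here are invented for the example.

```python
# A toy rule-based customer-service bot: keyword in, canned answer out.
RESPONSES = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 business days.",
    "shipping": "Standard shipping takes 3-7 days.",
}

def reply(message: str) -> str:
    """Return the first canned answer whose keyword appears in the
    message, falling back to a human handoff."""
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "Let me connect you with a human agent."

print(reply("What are your hours?"))  # -> immediate, automated answer
```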

AI can greatly increase the rate of work in manufacturing. Huge numbers of products can be manufactured with AI, and the entire production process can take place without human intervention. Hence, a lot of time and effort is saved.

Artificial Intelligence has applications in various other fields. These fields include the military, law, video games, government, finance, automotive, audit, art, and more. Hence, it’s clear that AI has a massive number of different applications.

To sum it up, Artificial Intelligence looks all set to be the future of the World. Experts believe AI would certainly become a part and parcel of human life soon. AI would completely change the way we view our World. With Artificial Intelligence, the future seems intriguing and exciting.

CONCEPTUAL ANALYSIS article

Human- Versus Artificial Intelligence

J. E. (Hans) Korteling

  • TNO Human Factors, Soesterberg, Netherlands

AI is one of the most debated subjects of today, and there seems to be little common understanding of the differences and similarities between human intelligence and artificial intelligence. Discussions of many relevant topics, such as trustworthiness, explainability, and ethics, are characterized by implicit anthropocentric and anthropomorphic conceptions and, for instance, the pursuit of human-like intelligence as the gold standard for Artificial Intelligence. In order to provide more agreement and to substantiate possible future research objectives, this paper presents three notions on the similarities and differences between human and artificial intelligence: 1) the fundamental constraints of human (and artificial) intelligence, 2) human intelligence as one of many possible forms of general intelligence, and 3) the high potential impact of multiple (integrated) forms of narrow-hybrid AI applications. For the time being, AI systems will have fundamentally different cognitive qualities and abilities than biological systems. For this reason, a most prominent issue is how we can use (and “collaborate” with) these systems as effectively as possible. For what tasks and under what conditions are decisions safe to leave to AI, and when is human judgment required? How can we capitalize on the specific strengths of human and artificial intelligence? How should we deploy AI systems effectively to complement and compensate for the inherent constraints of human cognition (and vice versa)? Should we pursue the development of AI “partners” with human(-level) intelligence, or should we focus more on supplementing human limitations? In order to answer these questions, humans working with AI systems in the workplace or in policy making have to develop an adequate mental model of the underlying ‘psychological’ mechanisms of AI. So, in order to obtain well-functioning human-AI systems, Intelligence Awareness in humans should be addressed more vigorously. For this purpose, a first framework for educational content is proposed.

Introduction: Artificial and Human Intelligence, Worlds of Difference

Artificial General Intelligence at the Human Level

Recent advances in information technology and in AI may allow for more coordination and integration between humans and technology. Therefore, quite some attention has been devoted to the development of Human-Aware AI, which aims at AI that adapts as a “team member” to the cognitive possibilities and limitations of the human team members. Metaphors like “mate,” “partner,” “alter ego,” “Intelligent Collaborator,” “buddy” and “mutual understanding” also emphasize a high degree of collaboration, similarity, and equality in “hybrid teams”. When human-aware AI partners operate like “human collaborators,” they must be able to sense, understand, and react to a wide range of complex human behavioral qualities, like attention, motivation, emotion, creativity, planning, or argumentation (e.g. Krämer et al., 2012 ; van den Bosch and Bronkhorst, 2018 ; van den Bosch et al., 2019 ). Therefore, these “AI partners,” or “teammates,” have to be endowed with human-like (or humanoid) cognitive abilities enabling mutual understanding and collaboration (i.e. “human awareness”).

However, no matter how intelligent and autonomous AI agents become in certain respects, at least for the foreseeable future, they probably will remain unconscious machines or special-purpose devices that support humans in specific, complex tasks. As digital machines they are equipped with a completely different operating system (digital vs biological) and with correspondingly different cognitive qualities and abilities than biological creatures, like humans and other animals ( Moravec, 1988 ; Klein et al., 2004 ; Korteling et al., 2018a ; Shneiderman, 2020a ). In general, digital reasoning- and problem-solving agents only compare very superficially to their biological counterparts, (e.g. Boden, 2017 ; Shneiderman, 2020b ). Keeping that in mind, it becomes more and more important that human professionals working with advanced AI systems, (e.g. in military‐ or policy making teams) develop a proper mental model about the different cognitive capacities of AI systems in relation to human cognition. This issue will become increasingly relevant when AI systems become more advanced and are deployed with higher degrees of autonomy. Therefore, the present paper tries to provide some more clarity and insight into the fundamental characteristics, differences and idiosyncrasies of human/biological and artificial/digital intelligences. In the final section, a global framework for constructing educational content on this “Intelligence Awareness” is introduced. This can be used for the development of education and training programs for humans who have to use or “collaborate with” advanced AI systems in the near and far future.

With the application of AI systems with increasing autonomy more and more researchers consider the necessity of vigorously addressing the real complex issues of “human-level intelligence” and more broadly artificial general intelligence , or AGI, (e.g. Goertzel et al., 2014 ). Many different definitions of A(G)I have already been proposed, (e.g. Russell and Norvig, 2014 for an overview). Many of them boil down to: technology containing or entailing (human-like) intelligence , (e.g. Kurzweil, 1990 ). This is problematic. Most definitions use the term “intelligence”, as an essential element of the definition itself, which makes the definition tautological. Second, the idea that A(G)I should be human-like seems unwarranted. At least in natural environments there are many other forms and manifestations of highly complex and intelligent behaviors that are very different from specific human cognitive abilities (see Grind, 1997 for an overview). Finally, like what is also frequently seen in the field of biology, these A(G)I definitions use human intelligence as a central basis or analogy for reasoning about the—less familiar—phenomenon of A(G)I ( Coley and Tanner, 2012 ). Because of the many differences between the underlying substrate and architecture of biological and artificial intelligence this anthropocentric way of reasoning is probably unwarranted. For these reasons we propose a (non-anthropocentric) definition of “intelligence” as: “ the capacity to realize complex goals ” ( Tegmark, 2017 ). These goals may pertain to narrow, restricted tasks (narrow AI) or to broad task domains (AGI). Building on this definition, and on a definition of AGI proposed by Bieger et al. (2014) and one of Grind (1997) , we define AGI here as: “ Non-biological capacities to autonomously and efficiently achieve complex goals in a wide range of environments”. AGI systems should be able to identify and extract the most important features for their operation and learning process automatically and efficiently over a broad range of tasks and contexts. Relevant AGI research differs from the ordinary AI research by addressing the versatility and wholeness of intelligence, and by carrying out the engineering practice according to a system comparable to the human mind in a certain sense ( Bieger et al., 2014 ).

It will be fascinating to create copies of ourselves that can learn iteratively by interaction with partners and thus become able to collaborate on the basis of common goals and mutual understanding and adaptation (e.g. Bradshaw et al., 2012 ; Johnson et al., 2014 ). This would be very useful, for example when a high degree of social intelligence in AI will contribute to more adequate interactions with humans, for example in health care or for entertainment purposes ( Wyrobek et al., 2008 ). True collaboration on the basis of common goals and mutual understanding necessarily implies some form of humanoid general intelligence. For the time being, this remains a goal on a far-off horizon. In the present paper we argue why, for most applications, it may also not be very practical or necessary (and probably a bit misleading) to vigorously aim for, or anticipate, systems possessing “human-like” AGI or “human-like” abilities or qualities. The fact that humans possess general intelligence does not imply that new inorganic forms of general intelligence should comply with the criteria of human intelligence. In this connection, the present paper addresses the way we think about (natural and artificial) intelligence in relation to the most probable potentials (and real upcoming issues) of AI in the short- and mid-term future. This will provide food for thought in anticipation of a future that is difficult to predict for a field as dynamic as AI.

What Is “Real Intelligence”?

Implicit in our aspiration to construct AGI systems possessing humanoid intelligence is the premise that human (general) intelligence is the “real” form of intelligence. This is already implicitly articulated in the term “Artificial Intelligence” itself, as if it were not entirely real, i.e., not real like non-artificial (biological) intelligence. Indeed, as humans we know ourselves as the entities with the highest intelligence ever observed in the Universe. And as an extension of this, we like to see ourselves as rational beings who are able to solve a wide range of complex problems under all kinds of circumstances using our experience and intuition, supplemented by the rules of logic, decision analysis and statistics. It is therefore not surprising that we have some difficulty accepting the idea that we might be a bit less smart than we keep telling ourselves, i.e., “the next insult for humanity” ( van Belkom, 2019 ). This goes so far that the rapid progress in the field of artificial intelligence is accompanied by a recurring redefinition of what should be considered “real (general) intelligence.” The conceptualization of intelligence, that is, the ability to autonomously and efficiently achieve complex goals, is continuously adjusted and further restricted to “those things that only humans can do.” In line with this, AI is then defined as “the study of how to make computers do things at which, at the moment, people are better” ( Rich and Knight, 1991 ; Rich et al., 2009 ). This includes thinking of creative solutions, flexibly using contextual and background information, the use of intuition and feeling, the ability to really “think and understand,” or the inclusion of emotion in an (ethical) consideration. These are then cited as the specific elements of real intelligence (e.g. Bergstein, 2017 ). For instance, Facebook’s director of AI and a spokesman in the field, Yann LeCun, mentioned at a conference at MIT on the Future of Work that machines are still far from having “the essence of intelligence.” That includes the ability to understand the physical world well enough to make predictions about basic aspects of it—to observe one thing and then use background knowledge to figure out what other things must also be true. Another way of saying this is that machines don’t have common sense ( Bergstein, 2017 ), like submarines that cannot swim ( van Belkom, 2019 ). When exclusive human capacities become our pivotal navigation points on the horizon, we may miss some significant problems that need our attention first.

To make this point clear, we first will provide some insight into the basic nature of both human and artificial intelligence. This is necessary for the substantiation of an adequate awareness of intelligence ( Intelligence Awareness ), and adequate research and education anticipating the development and application of A(G)I. For the time being, this is based on three essential notions that can (and should) be further elaborated in the near future.

• With regard to cognitive tasks, we are probably less smart than we think. So why should we vigorously focus on human-like AGI?

• Many different forms of intelligence are possible and general intelligence is therefore not necessarily the same as humanoid general intelligence (or “AGI on human level”).

• AGI is often not necessary; many complex problems can also be tackled effectively using multiple narrow AIs.

We Are Probably Not as Smart as We Think

How intelligent are we actually? The answer to that question is determined to a large extent by the perspective from which this issue is viewed, and thus by the measures and criteria for intelligence that are chosen. For example, we could compare the nature and capacities of human intelligence with other animal species. In that case we appear highly intelligent. Thanks to our enormous learning capacity, we have by far the most extensive arsenal of cognitive abilities to autonomously solve complex problems and achieve complex objectives. This way we can solve a huge variety of arithmetic, conceptual, spatial, economic, socio-organizational, political, etc. problems. The primates—which differ only slightly from us in genetic terms—are far behind us in that respect. We can therefore legitimately qualify humans, as compared to other animal species that we know, as highly intelligent.

Limited Cognitive Capacity

However, we can also look beyond this “relative interspecies perspective” and try to qualify our intelligence in more absolute terms, i.e., on a scale ranging from zero to what is physically possible. For example, we could view the computational capacity of a human brain as a physical system ( Bostrom, 2014 ; Tegmark, 2017 ). The prevailing notion in this respect among AI scientists is that intelligence is ultimately a matter of information and computation, and (thus) not of flesh and blood and carbon atoms. In principle, no physical law prevents physical systems (consisting of quarks and atoms, like our brain) from being built with much greater computing power and intelligence than the human brain. This would imply that there is no insurmountable physical reason why machines one day cannot become much more intelligent than ourselves in all possible respects ( Tegmark, 2017 ). Our intelligence is therefore relatively high compared to other animals, but in absolute terms its physical computing capacity may be very limited, if only because of the limited size of our brain and its maximal possible number of neurons and glial cells (e.g. Kahle, 1979 ).

To further define and assess our own (biological) intelligence, we can also consider the evolution and nature of our biological thinking abilities. As a biological neural network of flesh and blood, necessary for survival, our brain has undergone an evolutionary optimization process of more than a billion years. In this extended period, it developed into a highly effective and efficient system for regulating essential biological functions and performing perceptive-motor and pattern-recognition tasks, such as gathering food, fighting and fleeing, and mating. During almost our entire evolution, the neural networks of our brain have been further optimized for these basic biological and perceptual-motor processes that also lie at the basis of our daily practical skills, like cooking, gardening, or household jobs. Possibly because of the resulting proficiency at these kinds of tasks, we may forget that these processes are characterized by extremely high computational complexity (e.g. Moravec, 1988 ). For example, when we tie our shoelaces, many millions of signals flow in and out through a large number of different sensor systems, from tendon bodies and muscle spindles in our extremities to our retina, otolithic organs and semicircular canals in the head (e.g. Brodal, 1981 ). This enormous amount of information from many different perceptual-motor systems is processed continuously, in parallel, effortlessly, and even without conscious attention in the neural networks of our brain ( Minsky, 1986 ; Moravec, 1988 ; Grind, 1997 ). In order to achieve this, the brain has a number of universal (inherent) working mechanisms, such as association and associative learning ( Shatz, 1992 ; Bar, 2007 ), potentiation and facilitation ( Katz and Miledi, 1968 ; Bao et al., 1997 ), saturation and lateral inhibition ( Isaacson and Scanziani, 2011 ; Korteling et al., 2018a ).

These kinds of basic biological and perceptual-motor capacities were developed and laid down over many millions of years. Much later in our evolution—actually only very recently—our cognitive abilities and rational functions started to develop. These cognitive abilities, or capacities, are probably less than 100 thousand years old, which may be qualified as “embryonal” on the time scale of evolution (e.g. Petraglia and Korisettar, 1998 ; McBrearty and Brooks, 2000 ; Henshilwood and Marean, 2003 ). In addition, this very thin layer of human achievement has necessarily been built on this “ancient” neural intelligence for essential survival functions. So, our “higher” cognitive capacities developed from, and with, these (neuro)biological regulation mechanisms ( Damasio, 1994 ; Korteling and Toet, 2020 ). As a result, it should not be a surprise that the capacities of our brain for performing these recent cognitive functions are still rather limited. These limitations manifest in many different ways, for instance:

‐The amount of cognitive information that we can consciously process (our working memory span, or attention) is very limited ( Simon, 1955 ). The capacity of our working memory is approximately 10–50 bits per second ( Tegmark, 2017 ).

‐Most cognitive tasks, like reading text or calculation, require our full attention, and we usually need a lot of time to execute them. Mobile calculators can perform calculations millions of times more complex than we can ( Tegmark, 2017 ).

‐Although we can process lots of information in parallel, we cannot simultaneously execute cognitive tasks that require deliberation and attention, i.e., “multi-tasking” ( Korteling, 1994 ; Rogers and Monsell, 1995 ; Rubinstein, Meyer, and Evans, 2001 ).

‐Acquired cognitive knowledge and skills of people (memory) tend to decay over time, much more than perceptual-motor skills. Because of this limited “retention” of information we easily forget substantial portions of what we have learned ( Wingfield and Byrnes, 1981 ).

Ingrained Cognitive Biases

Our limited processing capacity for cognitive tasks is not the only factor determining our cognitive intelligence. Beyond an overall limited processing capacity, human cognitive information processing shows systematic distortions. These are manifested in many cognitive biases ( Tversky and Kahneman, 1973 , Tversky and Kahneman, 1974 ). Cognitive biases are systematic, universally occurring tendencies, inclinations, or dispositions that skew or distort information processes in ways that make their outcome inaccurate, suboptimal, or simply wrong (e.g. Lichtenstein and Slovic, 1971 ; Tversky and Kahneman, 1981 ). Many biases occur in virtually the same way in many different decision situations ( Shafir and LeBoeuf, 2002 ; Kahneman, 2011 ; Toet et al., 2016 ). The literature provides descriptions and demonstrations of over 200 biases. These tendencies are largely implicit and unconscious and feel quite natural and self-evident even when we are aware of them ( Pronin et al., 2002 ; Risen, 2015 ; Korteling et al., 2018b ). That is why they are often termed “intuitive” ( Kahneman and Klein, 2009 ) or “irrational” ( Shafir and LeBoeuf, 2002 ). Biased reasoning can result in quite acceptable outcomes in natural or everyday situations, especially when the time cost of reasoning is taken into account ( Simon, 1955 ; Gigerenzer and Gaissmaier, 2011 ). However, people often deviate from rationality and/or the tenets of logic, calculation, and probability in inadvisable ways ( Tversky and Kahneman, 1974 ; Shafir and LeBoeuf, 2002 ), leading to suboptimal decisions in terms of invested time and effort (costs) given the available information and expected benefits.

Biases are largely caused by inherent (or structural) characteristics and mechanisms of the brain as a neural network ( Korteling et al., 2018a ; Korteling and Toet, 2020 ). Basically, these mechanisms—such as association, facilitation, adaptation, or lateral inhibition—modify the original or available data and the way they are processed (e.g., the weighting of their importance). For instance, lateral inhibition is a universal neural process that magnifies differences in neural activity (contrast enhancement), which is very useful for perceptual-motor functions, maintaining physical integrity, and allostasis (i.e., biological survival functions). Our nervous system has been optimized for these functions over millions of years. “Higher” cognitive functions, however, such as conceptual thinking, probability reasoning, or calculation, developed only very recently in evolution. These functions are probably less than 100 thousand years old and may therefore be qualified as “embryonal” on the time scale of evolution (e.g., McBrearty and Brooks, 2000 ; Henshilwood and Marean, 2003 ; Petraglia and Korisettar, 2003 ). Moreover, evolution could not develop these new cognitive functions from scratch, but had to build this embryonal and thin layer of human achievement on top of its “ancient” neural heritage for the essential biological survival functions ( Moravec, 1988 ). Since cognitive functions typically require exact calculation and proper weighting of data, data transformations like lateral inhibition may easily lead to systematic distortions (i.e., biases) in cognitive information processing. Examples of the many biases caused by the inherent properties of biological neural networks are: the Anchoring bias (biasing decisions toward previously acquired information; Furnham and Boo, 2011 ; Tversky and Kahneman, 1973 ; Tversky and Kahneman, 1974 ), the Hindsight bias (the tendency to erroneously perceive events as inevitable or more likely once they have occurred; Hoffrage et al., 2000 ; Roese and Vohs, 2012 ), the Availability bias (judging the frequency, importance, or likelihood of an event by the ease with which relevant instances come to mind; Tversky and Kahneman, 1973 ; Tversky and Kahneman, 1974 ), and the Confirmation bias (the tendency to select, interpret, and remember information in a way that confirms one’s preconceptions, views, and expectations; Nickerson, 1998 ).

In addition to these inherent (structural) limitations of biological neural networks, biases may also originate from functional evolutionary principles promoting the survival of our ancestors who, as hunter-gatherers, lived in small, close-knit groups ( Haselton et al., 2005 ; Tooby and Cosmides, 2005 ). Such cognitive biases arise from a mismatch between evolutionarily rationalized “heuristics” (“evolutionary rationality”: Haselton et al., 2009 ) and the current context or environment ( Tooby and Cosmides, 2005 ). In this view, the same heuristics that optimized the chances of survival of our ancestors in their (natural) environment can lead to maladaptive (biased) behavior when they are used in our current (artificial) settings. Biases that have been considered examples of this kind of mismatch are the Action bias (preferring action even when there is no rational justification for it; Baron and Ritov, 2004 ; Patt and Zeckhauser, 2000 ), Social proof (the tendency to mirror or copy the actions and opinions of others; Cialdini, 1984 ), the Tragedy of the commons (prioritizing personal interests over the common good of the community; Hardin, 1968 ), and the Ingroup bias (favoring one’s own group above that of others; Taylor and Doria, 1981 ).
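To make the contrast-enhancing effect of lateral inhibition concrete, here is a minimal Python/NumPy sketch (our own illustrative construction, not taken from the cited literature; the kernel weights are arbitrary) that applies a simple center-surround weighting to a one-dimensional luminance edge:

```python
import numpy as np

# A 1D "luminance" profile containing an edge: low on the left, high on the right.
signal = np.array([10, 10, 10, 10, 40, 40, 40, 40], dtype=float)

# Simple lateral-inhibition kernel: each unit is excited by its own input
# and inhibited by its two neighbors (center-surround weighting).
kernel = np.array([-0.5, 1.0, -0.5])

# 'same' convolution keeps the output aligned with the input.
response = np.convolve(signal, kernel, mode="same")

print("input:   ", signal)
print("response:", response)
# The response shows over- and undershoots at the edge (Mach-band-like
# contrast enhancement): differences are magnified, while absolute levels
# are not faithfully preserved.
```

The flat parts of the signal are suppressed while the difference at the edge is magnified: exactly the kind of data transformation that is useful for perception but distorting for exact calculation.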

This hard-wired (neurally inherent and/or evolutionarily ingrained) character of biased thinking makes it unlikely that simple and straightforward methods, like training interventions or awareness courses, will be very effective at mitigating biases. This difficulty of bias mitigation is indeed supported by the literature ( Korteling et al., 2021 ).

General Intelligence Is Not the Same as Human-like Intelligence

Fundamental Differences Between Biological and Artificial Intelligence

We often think and deliberate about intelligence with an anthropocentric conception of our own intelligence in mind as an obvious and unambiguous reference, and we tend to use this conception as a basis for reasoning about other, less familiar phenomena of intelligence, such as other forms of biological and artificial intelligence ( Coley and Tanner, 2012 ). This may lead to fascinating questions and ideas. An example is the discussion about how and when the point of “intelligence at human level” will be achieved. For instance, Ackermann (2018) writes: “Before reaching superintelligence, general AI means that a machine will have the same cognitive capabilities as a human being.” Researchers thus deliberate extensively about the point in time when we will reach general AI (e.g., Goertzel, 2007 ; Müller and Bostrom, 2016 ). We suppose that these kinds of questions are not quite on target. There are, in principle, many different possible types of (general) intelligence conceivable, of which human-like intelligence is just one. The development of AI, for example, is determined by the constraints of physics and technology, not by those of biological evolution. So, just as the intelligence of a hypothetical extraterrestrial visitor of our planet is likely to have a different (in-)organic structure with different characteristics, strengths, and weaknesses than that of its human residents, this will also apply to artificial forms of (general) intelligence. Below we briefly summarize a few fundamental differences between human and artificial intelligence ( Bostrom, 2014 ):

‐Basic structure: Biological (carbon-based) intelligence runs on neural “wetware,” which is fundamentally different from artificial (silicon-based) intelligence. As opposed to biological wetware, in silicon, or digital, systems “hardware” and “software” are independent of each other ( Kosslyn and Koenig, 1992 ). When a biological system has learned a new skill, this skill remains bound to the system itself. In contrast, if an AI system has learned a certain skill, the constituting algorithms can be directly copied to all other similar digital systems.

‐Speed: Signals in AI systems propagate at almost the speed of light, whereas nerve conduction in humans proceeds at a speed of at most 120 m/s, which is extremely slow on the time scale of computers ( Siegel and Sapru, 2005 ); see the numerical sketch after this list.

‐Connectivity and communication: People cannot communicate with each other directly; they communicate via language and gestures, with limited bandwidth. This is slower and more difficult than the communication of AI systems, which can be connected to each other directly. Thanks to this direct connection, AI systems can also collaborate on the basis of integrated algorithms.

‐Updatability and scalability: AI systems face almost no constraints with regard to keeping them up to date, upscaling them, or re-configuring them, so that they have the right algorithms and the data processing and storage capacities necessary for the tasks they have to carry out. This capacity for rapid, structural expansion and immediate improvement hardly applies to people.

‐Energy consumption: In contrast, biology does a lot with a little: organic brains are millions of times more energy-efficient than computers. The human brain consumes less energy than a lightbulb, whereas a supercomputer with comparable computational performance uses enough electricity to power an entire village ( Fischetti, 2011 ).
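The magnitude of the speed difference is easy to put in perspective. The following back-of-the-envelope Python sketch is our own illustration; the assumed propagation speed in copper or fiber of about two-thirds the speed of light is a typical textbook value, not a figure from the cited sources:

```python
# Back-of-the-envelope comparison of biological vs. electronic signal speeds.
NERVE_SPEED = 120.0                 # m/s, fast myelinated nerve fiber (upper bound)
LIGHT_SPEED = 3.0e8                 # m/s, speed of light in vacuum
SIGNAL_SPEED = 0.66 * LIGHT_SPEED   # assumed propagation speed in copper/fiber

distance = 1.0                      # meter

t_nerve = distance / NERVE_SPEED    # roughly 8.3 milliseconds
t_wire = distance / SIGNAL_SPEED    # roughly 5 nanoseconds

print(f"nerve: {t_nerve*1e3:.2f} ms, wire: {t_wire*1e9:.2f} ns, "
      f"ratio: {t_nerve/t_wire:,.0f}x")
# Over one meter, the electronic signal arrives more than a million times sooner.
```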

These kinds of differences in basic structure, speed, connectivity, updatability, scalability, and energy consumption will necessarily also lead to different qualities and limitations of human and artificial intelligence. Our response speed to simple stimuli is, for example, many thousands of times slower than that of artificial systems. Computer systems can very easily be connected directly to each other and as such become part of one integrated system. This means that AI systems need not be seen as individual entities that merely work alongside each other and may misunderstand one another. If two AI systems are engaged in a task, they run minimal risk of making a mistake through miscommunication (think of autonomous vehicles approaching a crossroads): after all, they are intrinsically connected parts of the same system and the same algorithm ( Gerla et al., 2014 ).
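As a toy illustration of the claim that two AI systems running the same algorithm on the same data run minimal risk of miscommunication, consider the following deliberately simplified Python sketch (a hypothetical priority rule of our own making, not a real vehicle-coordination protocol):

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    vehicle_id: str
    distance_to_crossing: float  # meters
    speed: float                 # m/s

def shared_priority_rule(a: Vehicle, b: Vehicle) -> str:
    """One deterministic rule, evaluated identically by both agents:
    the vehicle that would reach the crossing first gets right of way;
    ties are broken by vehicle id, so the outcome is never ambiguous."""
    eta_a = a.distance_to_crossing / a.speed
    eta_b = b.distance_to_crossing / b.speed
    if eta_a != eta_b:
        return a.vehicle_id if eta_a < eta_b else b.vehicle_id
    return min(a.vehicle_id, b.vehicle_id)

car1 = Vehicle("car-1", distance_to_crossing=40.0, speed=10.0)  # ETA 4.0 s
car2 = Vehicle("car-2", distance_to_crossing=30.0, speed=10.0)  # ETA 3.0 s

# Both vehicles evaluate the same function on the same data, and therefore
# always reach the same conclusion: no negotiation, no misunderstanding.
decision_seen_by_car1 = shared_priority_rule(car1, car2)
decision_seen_by_car2 = shared_priority_rule(car1, car2)
assert decision_seen_by_car1 == decision_seen_by_car2
print(f"{decision_seen_by_car1} proceeds first")
```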

Complexity and Moravec’s Paradox

Because biological, carbon-based brains and digital, silicon-based computers are optimized for completely different kinds of tasks (e.g., Moravec, 1988 ; Korteling et al., 2018b ), human and artificial intelligence show fundamental and probably far-reaching differences. Because of these differences, it may be very misleading to use our own mind as a basis, model, or analogy for reasoning about AI. This may lead to erroneous conceptions, for example about the presumed abilities of humans and AI to perform complex tasks. Such flaws concerning information-processing capacities often emerge in the psychological literature, in which “complexity” and “difficulty” of tasks are used interchangeably (see for examples: Wood et al., 1987 ; McDowd and Craik, 1988 ). Task complexity is then assessed in an anthropocentric way, that is, by the degree to which we humans can perform or master the task. So, we use the difficulty of performing or mastering a task as a measure of its complexity , and task performance (speed, errors) as a measure of the skill and intelligence of the task performer. Although this may sometimes be acceptable in psychological research, it is misleading if we strive to understand the intelligence of AI systems. For us it is much more difficult to multiply two random six-digit numbers than to recognize a friend in a photograph. But when it comes to counting or arithmetic operations, computers are thousands of times faster and better, while the same systems have only recently taken steps in image recognition (which succeeded only when deep learning technology, based on some principles of biological neural networks, was developed). In general, cognitive tasks that are relatively difficult for the human brain (and which we therefore find subjectively difficult) need not be computationally complex (e.g., in terms of objective arithmetic, logical, and abstract operations). And vice versa: tasks that are relatively easy for the brain (recognizing patterns, perceptual-motor tasks, well-trained tasks) need not be computationally simple. This phenomenon, that what is easy for the ancient, neural “technology” of people is difficult for the modern, digital technology of computers (and vice versa), has been termed Moravec’s paradox. Hans Moravec (1988) wrote: “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”
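The arithmetic side of Moravec’s paradox is easy to demonstrate: multiplying two random six-digit numbers, which most people find hard, is a sub-microsecond operation for a computer. A small illustrative timing sketch of our own (exact timings will vary by machine):

```python
import time
import random

a = random.randrange(100_000, 1_000_000)  # a random six-digit number
b = random.randrange(100_000, 1_000_000)

n = 1_000_000
start = time.perf_counter()
for _ in range(n):
    c = a * b                 # the task humans find "difficult"
elapsed = time.perf_counter() - start

print(f"{a} x {b} = {a * b}")
print(f"average time per multiplication: {elapsed / n * 1e9:.1f} ns")
# Recognizing a friend in a photograph, subjectively "easy" for us, resisted
# decades of AI research and only became feasible with deep learning:
# difficulty (for humans) is not the same as computational complexity.
```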

Human Superior Perceptual-Motor Intelligence

Moravec’s paradox implies that biological neural networks are intelligent in different ways than artificial neural networks. Intelligence is not limited to the problems or goals that we as humans, equipped with biological intelligence, find difficult ( Grind, 1997 ). Intelligence, defined as the ability to realize complex goals or solve complex problems, is much more than that. According to Moravec (1988) , high-level reasoning requires very little computation, whereas low-level perceptual-motor skills require enormous computational resources. If we express the complexity of a problem in terms of the number of elementary calculations needed to solve it, then our biological perceptual-motor intelligence is highly superior to our cognitive intelligence. Our organic perceptual-motor intelligence is especially good at the associative processing of higher-order invariants (“patterns”) in the ambient information. These are computationally more complex and contain more information than the simple, individual elements ( Gibson, 1966 ; Gibson, 1979 ). An example of our superior perceptual-motor abilities is the Object superiority effect : we perceive and interpret whole objects faster and more effectively than the (simpler) individual elements that make up these objects ( Weisstein and Harris, 1974 ; McClelland, 1978 ; Williams and Weisstein, 1978 ; Pomerantz, 1981 ). Likewise, letters are perceived more accurately when presented as part of a word than when presented in isolation, i.e., the Word superiority effect (e.g., Reicher, 1969 ; Wheeler, 1970 ). So, the difficulty of a task does not necessarily indicate its inherent complexity . As Moravec (1988) puts it: “We are all prodigious Olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.”

The Supposition of Human-like AGI

So, if there were AI systems with general intelligence that could be used for a wide range of complex problems and objectives, those AGI machines would probably have a completely different intelligence profile, including other cognitive qualities, than humans have ( Goertzel, 2007 ). This will be so even if we manage to construct AI agents that display behavior similar to ours and that are enabled to adapt to our way of thinking and problem solving in order to promote human-AI teaming. Unless we decide to deliberately degrade the capabilities of AI systems (which would not be very smart), the underlying capacities and abilities of humans and machines with regard to the collection and processing of information, data analysis, probability reasoning, logic, memory capacity, and so on will remain dissimilar. Because of these differences we should focus on systems that effectively complement us and that make the combined human-AI system stronger and more effective. Instead of pursuing human-level AI, it would be more beneficial to focus on autonomous machines and (support) systems that fill in, or extend on, the manifold gaps of human cognitive intelligence. For instance, whereas people are forced, by the slowness and other limitations of biological brains, to think heuristically in terms of goals, virtues, rules, and norms expressed in (fuzzy) language, AI has already established excellent capacities to process and calculate directly on highly complex data. For the execution of specific (narrow) cognitive tasks (logical, analytical, computational), modern digital intelligence may therefore be more effective and efficient than biological intelligence. AI may thus help to produce better answers to complex problems using large amounts of data, consistent sets of ethical principles and goals, and probabilistic and logical reasoning (e.g., Korteling et al., 2018b ). We therefore conjecture that the development of AI systems for supporting human decision making may ultimately prove the most effective route to better choices and better solutions for complex issues. The cooperation and division of tasks between people and AI systems will thus have to be determined primarily by their mutually specific qualities. For example, tasks or task components that appeal to capacities in which AI systems excel will have to be less (or less fully) mastered by people, so that less training will probably be required. AI systems are already much better than people at logically and arithmetically correct gathering (selecting) and processing (weighing, prioritizing, analyzing, combining) of large amounts of data. They do this quickly, accurately, and reliably. They are also more stable (consistent) than humans, have no stress or emotions, and have great perseverance and a much better retention of knowledge and skills. As machines, they serve people completely and without any “self-interest” or “own hidden agenda.” Based on these qualities, AI systems may effectively take over tasks, or task components, from people. However, it remains important that people continue to master those tasks to a certain extent, so that they can take over or intervene adequately if the machine system fails.

In general, people are better suited than AI systems for a much broader spectrum of cognitive and social tasks under a wide variety of (unforeseen) circumstances and events ( Korteling et al., 2018b ). For the time being, people are also better at social and psychosocial interaction. For example, it is difficult for AI systems to interpret human language and symbolism; this requires a very extensive frame of reference, which, at least for now and for the near future, is difficult to achieve within AI. As a result of all these differences, people are still better at responding (as a flexible team) to unexpected and unpredictable situations and at creatively devising possibilities and solutions in open and ill-defined tasks, across a wide range of different, and possibly unexpected, circumstances. People will therefore have to make extra use of their specific human qualities (i.e., what people are relatively good at) and train to improve the relevant competencies. In addition, human team members will have to learn to deal well with the overall limitations of AIs. With such a proper division of tasks, capitalizing on the specific qualities and limitations of humans and AI systems, human decisional biases may be circumvented and better performance may be expected. This means that enhancing a team with intelligent machines that have fewer cognitive constraints and biases may have more surplus value than striving for collaboration between humans and AI systems that have developed the same (human) biases. Although cooperation in teams with AI systems may require extra training in order to deal effectively with this bias mismatch, this heterogeneity will probably be better and safer. It also opens up the possibility of combining high levels of meaningful human control AND high levels of automation, which is likely to produce the most effective and safe human-AI systems ( Elands et al., 2019 ; Shneiderman, 2020a ). In brief: human intelligence is not the golden standard for general intelligence; instead of aiming at human-like AGI, the pursuit of AGI should focus on effective digital/silicon AGI in conjunction with an optimal configuration and allocation of tasks.

Explainability and Trust

Developments in machine learning, and deep (reinforcement) learning in particular, have been revolutionary. Deep learning simulates a network resembling the layered neural networks of our brain. Based on large quantities of data, the network learns to recognize patterns and links to a high level of accuracy and to connect them to courses of action, without knowing the underlying causal links. This implies that it is difficult to provide deep-learning AI with some kind of transparency in how or why it has made a particular choice, for example by expressing a reasoning about its decision process that is intelligible to humans, like we do (e.g., Belkom, 2019 ). Besides, such reasoning about decisions is, at least in humans, a very malleable and ad hoc process. Humans are generally unaware of their implicit cognitions or attitudes, and therefore unable to report adequately on them. It is rather difficult for most humans to introspectively analyze their mental states, insofar as these are conscious, and to attach the results of this analysis to verbal labels and descriptions (e.g., Nosek et al., 2011 ). The human brain hardly reveals how it creates conscious thoughts (e.g., Feldman-Barret, 2017 ). What it actually does is give us the illusion that its products reveal its inner workings. In other words: our conscious thoughts tell us nothing about the way in which these thoughts came about. There is also no subjective marker that distinguishes correct reasoning processes from erroneous ones ( Kahneman and Klein, 2009 ). The decision maker therefore has no way to distinguish between correct thoughts, emanating from genuine knowledge and expertise, and incorrect ones following from inappropriate neuro-evolutionary processes, tendencies, and primal intuitions. So here we could ask the question: isn’t it more trustworthy to have a real black box than to listen to a confabulating one? In addition, according to Werkhoven et al. (2018) , demanding explainability, observability, or transparency ( Belkom, 2019 ; van den Bosch et al., 2019 ) may constrain the potential benefit of artificially intelligent systems for human society to what can be understood by humans.
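To see why a trained network offers no human-readable account of its choices, consider the minimal NumPy sketch below (our own toy construction, not from the cited sources): a small two-layer network learns the XOR function, and the complete “explanation” of any of its decisions is nothing more than the resulting weight matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic task a single linear unit cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two-layer network: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

lr = 1.0
for _ in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of the squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print("predictions:", out.round(2).ravel())  # typically close to [0, 1, 1, 0];
                                             # exact values depend on the init.
print("W1 =\n", W1.round(2))
print("W2 =\n", W2.round(2))
# The network now "knows" XOR, but its weights are just numbers: there is no
# intelligible reasoning to read off, only input-output performance.
```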

Of course we should not blindly trust the results generated by AI. Like other fields of complex technology (e.g., modeling and simulation), AI systems need to be verified (do they meet their specifications?) and validated (do they meet the goals for which they were designed?). In general, when a system has been properly verified and validated, it may be considered safe, secure, and fit for purpose. It then deserves our trust for (logically) comprehensible and objective reasons (although mistakes can still happen). Likewise, people trust the performance of airplanes and cell phones even though they are almost completely ignorant of their complex inner processes. Like our own brains, artificial neural networks are fundamentally intransparent ( Nosek et al., 2011 ; Feldman-Barret, 2017 ). Trust in AI should therefore be based primarily on its objective performance. This forms a more solid basis than trust built on subjective (and easily manipulated) impressions, stories, or images aimed at belief in, and appeal to, the user. Based on empirical validation research, developers and users can explicitly verify how well the system performs with respect to the set of values and goals for which the machine was designed. At some point, humans may come to trust that goals can be achieved at lower cost and with better outcomes if we accept solutions even when they are less transparent to humans ( Werkhoven et al., 2018 ).
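In code, this performance-based notion of trust reduces to a simple acceptance test: before deployment, the system is validated against held-out data and a pre-agreed requirement. A schematic Python sketch, where the model, the data, and the 95% threshold are all illustrative placeholders of our own:

```python
from typing import Callable, Sequence, Tuple

def validate(model: Callable[[float], int],
             holdout: Sequence[Tuple[float, int]],
             required_accuracy: float = 0.95) -> bool:
    """Empirical validation: trust is granted on measured performance against
    the goals the system was designed for, not on whether its inner workings
    are humanly interpretable."""
    correct = sum(1 for x, label in holdout if model(x) == label)
    accuracy = correct / len(holdout)
    print(f"holdout accuracy: {accuracy:.1%} (required: {required_accuracy:.0%})")
    return accuracy >= required_accuracy

# Placeholder "model" and held-out test set, for illustration only.
model = lambda x: int(x > 0.5)
holdout = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1), (0.7, 1), (0.2, 0)]

if validate(model, holdout):
    print("verified & validated: fit for purpose, deploy")
else:
    print("rejected: does not meet specification")
```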

The Impact of Multiple Narrow AI Technology

AGI as the Holy Grail

AGI, like human general intelligence, would have many obvious advantages compared to narrow (limited, weak, specialized) AI. An AGI system would be much more flexible and adaptive. On the basis of generic training and reasoning processes, it would autonomously understand how multiple problems in all kinds of different domains can be solved in relation to their context (e.g., Kurzweil, 2005 ). AGI systems would also require far fewer human interventions to accommodate the various loose ends among partial elements, facets, and perspectives in complex situations. AGI would really understand problems and would be capable of viewing them from different perspectives (as people, ideally, also can). Current (narrow) AI tools, by contrast, are characteristically skilled at one very specific task, at which they can often perform at superhuman levels (e.g., Goertzel, 2007 ; Silver et al., 2017 ). These specific tasks are well defined and structured. Narrow AI systems are less suitable, or totally unsuitable, for tasks or task environments that offer little structure, consistency, rules, or guidance, and in which all sorts of unexpected, rare, or uncommon events (e.g., emergencies) may occur. Knowing and following fixed procedures usually does not lead to proper solutions in such varying circumstances. In the context of (unforeseen) changes in goals or circumstances, the adequacy of current AI is considerably reduced, because it cannot reason from a general perspective and adapt accordingly ( Lake et al., 2017 ; Horowitz, 2018 ). With narrow AI systems, people are then needed to supervise these deviations in order to enable flexible and adaptive system performance. Therefore, the quest for AGI may be considered a search for a kind of holy grail.

Multiple Narrow AI is Most Relevant Now!

The high potential of AGI, however, does not imply that AGI will be the most crucial factor in AI R&D, at least for the short and medium term. When reflecting on the great potential benefits of general intelligence, we tend to consider narrow AI applications as separate entities that could easily be outperformed by a broader AGI that can presumably deal with everything. But just as our modern world has evolved rapidly through a diversity of specific (limited) technological innovations, at the system level the total and wide range of emerging AI applications will also have a groundbreaking technological and societal impact ( Peeters et al., 2020 ). This will be all the more relevant in the future world of big data, in which everything is connected to everything through the Internet of Things . So, it will be much more profitable and beneficial to develop and build (non-human-like) AI variants that excel in areas where people are inherently limited. It seems not too far-fetched to suppose that the multiple variants of narrow AI applications will also gradually become more broadly interconnected, so that a development toward an ever broader realm of integrated AI applications may be expected. In addition, it is already possible to train a language model (Generative Pre-trained Transformer 3, GPT-3) on a gigantic dataset and then have it learn various tasks from a handful of examples: one- or few-shot learning. GPT-3 (developed by OpenAI) can do this for language-related tasks, but there is no reason why this should not be possible with images and sound, or with combinations of the three ( Brown, 2020 ).
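The “handful of examples” in one- or few-shot learning is literally just part of the model’s input text. The sketch below constructs a few-shot prompt in the style used with GPT-3; the commented-out `generate` call stands in for whatever text-completion interface one has available (a hypothetical placeholder, not a specific OpenAI API signature):

```python
def build_few_shot_prompt(examples, query):
    """Concatenate a task description, a few worked examples, and the new
    query; a large language model then continues the pattern without any
    weight updates -- the "learning" happens entirely in-context."""
    lines = ["Translate English to French:"]
    for english, french in examples:
        lines.append(f"English: {english}\nFrench: {french}")
    lines.append(f"English: {query}\nFrench:")
    return "\n\n".join(lines)

examples = [
    ("cheese", "fromage"),
    ("good morning", "bonjour"),
    ("the small cat", "le petit chat"),
]

prompt = build_few_shot_prompt(examples, "the blue house")
print(prompt)

# response = generate(prompt)  # hypothetical call to a text-completion model
```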

Moreover, Moravec’s paradox implies that the development of AI “partners” with many kinds of human(-level) qualities will be very difficult to achieve, whereas their added value (i.e., beyond the boundaries of human capabilities) will be relatively low. The most fruitful AI applications will mainly involve supplementing human constraints and limitations. Given the present incentives for competitive technological progress, multiple forms of (connected) narrow AI systems will be the major driver of AI’s impact on our society for the short and medium term. For the near future, this implies that AI applications will remain very different from, and in many respects almost incomparable to, human agents. This is likely to be true even if the hypothetical match of artificial general intelligence (AGI) with human cognition were to be achieved in the longer term. Intelligence is a multi-dimensional (quantitative and qualitative) concept, and all dimensions of AI unfold and grow along their own paths with their own dynamics. Over time, an increasing number of specific (narrow) AI capacities may therefore gradually match, overtake, and transcend human cognitive capacities. Given the enormous advantages of AI, for example in the field of data availability and data processing capacities, the realization of AGI would probably at the same time outclass human intelligence in many ways. This implies that the hypothetical point in time at which human and artificial cognitive capacities match, i.e., human-level AGI, will probably be hard to define in a meaningful way ( Goertzel, 2007 ). 3

So by the time AI truly understands us as a “friend,” “partner,” “alter ego,” or “buddy,” the way we understand other humans when we collaborate with them, it will at the same time surpass us in many areas ( Moravec, 1998 ). It will have a completely different profile of capacities and abilities, and thus it will not be easy to really understand the way it “thinks” and comes to its decisions. In the meantime, however, as the capacities of robots expand and they move from simple tools to more integrated systems, it is important to calibrate our expectations and perceptions of robots appropriately. We will have to enhance our awareness of, and insight into, the continuous development and progression of multiple forms of (integrated) AI systems. This concerns, for example, the multi-faceted nature of intelligence: different kinds of agents may possess different combinations of intelligences at very different levels. An agent with general intelligence may, for example, be endowed with excellent abilities in the areas of image recognition, navigation, calculation, and logical reasoning, while at the same time being dull in the areas of social interaction and goal-oriented problem solving. This awareness of the multi-dimensional nature of intelligence also concerns the way we have to deal with (and capitalize on) anthropomorphism, that is, the human tendency in human-robot interaction to attribute human-like traits, emotions, and intentions to non-human artifacts that superficially look similar to us (e.g., Kiesler and Hinds, 2004 ; Fink, 2012 ; Haring et al., 2018 ). Insight into these human-factors issues is crucial to optimize the utility, performance, and safety of human-AI systems ( Peeters et al., 2020 ).

From this perspective, the question whether or not “AGI at the human level” will be realized is not the most relevant one for the time being. According to most AI scientists this will certainly happen, and the key question is not IF it will happen, but WHEN (e.g., Müller and Bostrom, 2016 ). At the system level, however, multiple narrow AI applications are likely to overtake human intelligence in an increasingly wide range of areas.

Conclusions and Framework

The present paper focused on providing more clarity and insight into the fundamental characteristics, differences, and idiosyncrasies of human and artificial intelligence. First, we presented ideas and arguments to scale up and differentiate our conception of intelligence, whether human or artificial. Central to this broader, multi-faceted conception of intelligence is the notion that intelligence in itself is a matter of information and computation, independent of its physical substrate. However, the nature of this physical substrate (biological/carbon or digital/silicon) substantially determines its potential envelope of cognitive abilities and limitations. The organic cognitive faculties of humans developed only very recently in the evolution of mankind. These “embryonal” faculties were built on top of a biological neural-network apparatus that has been optimized for allostasis and (complex) perceptual-motor functions. Human cognition is therefore characterized by various structural limitations and distortions in its capacity to process certain forms of non-biological information. Biological neural networks are, for example, not very capable of performing arithmetic calculations, for which a simple pocket calculator is millions of times better suited. These inherent and ingrained limitations, which are due to the biological and evolutionary origin of human intelligence, may be termed “hard-wired.”

In line with Moravec’s paradox , we argued that intelligent behavior is more than what we, as Homo sapiens, find difficult. We should not confuse task difficulty (subjective, anthropocentric) with task complexity (objective). Instead, we advocated a versatile conceptualization of intelligence and an acknowledgment of its many possible forms and compositions. This implies a high variety of types of biological or other forms of high (general) intelligence, with a broad range of possible intelligence profiles and cognitive qualities (which may or may not surpass ours in many ways). This awareness would give us a better view of the most probable potentials of AI applications for the short- and medium-term future. From this perspective, for example, our primary research focus should be on those components of the intelligence spectrum that are relatively difficult for the human brain and relatively easy for machines. This primarily involves the cognitive components requiring calculation, arithmetic analysis, statistics, probability calculation, data analysis, logical reasoning, and memorization.

In line with this, we have advocated a modest, more humble view of our human general intelligence, which also implies that human-level AGI should not be considered the “golden standard” of intelligence, to be pursued with foremost priority. Because of the many fundamental differences between natural and artificial intelligence, human-like AGI will be very difficult to accomplish in the first place (and will have relatively limited added value). If an AGI is accomplished in the (far) future, it will therefore probably have a completely different profile of cognitive capacities and abilities than we humans have. By the time such an AGI has come so far that it is able to “collaborate” like a human, it will likely already function at highly superior levels in many respects, relative to what we are able to do. For the time being, however, it will not be very realistic or useful to aim at AGI that includes the broad scope of human perceptual-motor and cognitive abilities. Instead, the most profitable AI applications for the short- and mid-term future will probably be based on multiple narrow AI systems. These multiple narrow AI applications may catch up with human intelligence in an increasingly broad range of areas.

From this point of view, we advocate not dwelling too intensively on the AGI question of whether or when AI will outsmart us or take our jobs, or on how to endow AI with all kinds of human abilities. Given the present state of the art, it may be wiser to focus on the whole system of multiple AI innovations, with humans as the crucial connecting and supervising factor. This also implies the establishment and formalization of legal boundaries and proper (effective, ethical, safe) goals for AI systems ( Elands et al., 2019 ; Aliman, 2020 ). This human factor (legislator, user, “collaborator”) therefore needs good insight into the characteristics and capacities of biological and artificial intelligence under all sorts of tasks and working conditions. Both in the workplace and in policy making, the most fruitful AI applications will complement and compensate for the inherent biological and cognitive constraints of humans. For this reason, prominent issues concern how to use AI intelligently: for what tasks, and under what conditions, are decisions safe to leave to AI, and when is human judgment required? How can we capitalize on the strengths of human intelligence, and how can we deploy AI systems effectively to complement and compensate for the inherent constraints of human cognition? See Hoffman and Johnson (2019) , Shneiderman (2020a) , and Shneiderman (2020b) for recent overviews.

In summary: no matter how intelligent autonomous AI agents become in certain respects, at least for the foreseeable future they will remain unconscious machines. These machines have a fundamentally different operating system (biological vs. digital) and correspondingly different cognitive abilities and qualities than people and other animals. Before a proper “team collaboration” can start, human team members will therefore have to understand these kinds of differences, i.e., how human information processing and intelligence differ from those of the many possible and specific variants of AI systems. Only when humans develop a proper understanding of these “interspecies” differences can they effectively capitalize on the potential benefits of AI in (future) human-AI teams. Given the high flexibility, versatility, and adaptability of humans relative to AI systems, the first challenge then becomes how to ensure human adaptation to the more rigid abilities of AI. 4 In other words: how can we achieve a proper conception of the differences between human and artificial intelligence?

Framework for Intelligence Awareness Training

To answer this question, the issue of Intelligence Awareness in human professionals needs to be addressed more vigorously. Next to computer tools for the distribution of relevant awareness information in human-machine systems ( Collazos et al., 2019 ), this requires better education and training on how to deal with the very new and different characteristics, idiosyncrasies, and capacities of AI systems. This includes, for example, a proper understanding of the basic characteristics, possibilities, and limitations of the AI’s cognitive system properties, without anthropocentric and/or anthropomorphic misconceptions. In general, this “Intelligence Awareness” is highly relevant for better understanding, investigating, and dealing with the manifold possibilities and challenges of machine intelligence. This practical human-factors challenge could, for instance, be tackled by developing new, targeted, and easily configurable (adaptive) training forms and learning environments for human-AI systems. These flexible training forms and environments (e.g., simulations and games) should focus on developing knowledge, insight, and practical skills concerning the specific, non-human characteristics, abilities, and limitations of AI systems, and on how to deal with these in practical situations. People will have to understand the critical factors determining the goals, performance, and choices of AI. This may in some cases even include the simple notion that an AI is about as excited about achieving its goals as your refrigerator is about keeping your milkshake cold. People have to learn when and under what conditions decisions are safe to leave to AI and when human judgment is required or essential. And, more generally: how does the AI “think” and decide? The relevance of this kind of knowledge, skills, and practices will only grow as the degree of autonomy (and generality) of advanced AI systems increases.

What would such an Intelligence Awareness training curriculum look like? It needs to include at least a module on the cognitive characteristics of AI, a subject basically similar to those included in curricula on human cognition. This broad module on the “Cognitive Science of AI” may involve a range of sub-topics, starting with a revision of the concept of “Intelligence,” stripped of anthropocentric and anthropomorphic misunderstandings. In addition, this module should focus on providing knowledge about the structure and operation of the AI operating system, the “AI mind.” This may be followed by subjects like: perception and interpretation of information by AI; AI cognition (memory, information processing, problem solving, biases); dealing with AI possibilities and limitations in “human” areas like creativity, adaptivity, autonomy, reflection, and (self-)awareness; dealing with goal functions (valuation of actions in relation to cost-benefit); AI ethics; and AI security. Such a curriculum should also include technical modules providing insight into the working of the AI operating system. Due to the enormous speed with which AI technology and its applications develop, the content of such a curriculum is very dynamic, continuously evolving on the basis of technological progress. This implies that the curriculum and its training aids and environments should be flexible, experiential, and adaptive, which makes serious gaming an ideally suited working format. Below, we provide a global framework for the development of new educational curricula on AI awareness. These subtopics go beyond learning to effectively “operate,” “control,” or interact with specific AI applications (i.e., conventional human-machine interaction):

‐Understanding the underlying system characteristics of the AI (the “AI brain”) and the specific qualities and limitations of AI relative to human intelligence.

‐Understanding the complexity of the tasks and of the environment from the perspective of AI systems.

‐Understanding the problem of biases in human cognition, relative to biases in AI.

‐Understanding the problems associated with the control of AI, predictability of AI behavior (decisions), building trust, maintaining situation awareness (complacency), dynamic task allocation (e.g., taking over each other’s tasks), and responsibility (accountability).

‐How to deal with possibilities and limitations of AI in the field of “creativity”, adaptability of AI, “environmental awareness”, and generalization of knowledge.

‐Learning to deal with perceptual and cognitive limitations and possible errors of AI which may be difficult to comprehend.

‐Trust in the performance of AI (possibly in spite of limited transparency or ability to “explain”) based on verification and validation.

‐Learning to deal with our natural inclination to anthropocentrism and anthropomorphism (“theory of mind”) when reasoning about human-robot interaction.

‐How to capitalize on the powers of AI in order to deal with the inherent constraints of human information processing (and vice versa).

‐Understanding the specific characteristics and qualities of the man-machine system, and being able to decide when, for what, and how the integrated combination of human and AI faculties may best realize overall system potential.
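Because such a curriculum must evolve continuously with the technology, it helps to represent it in an easily re-configurable form. The following sketch encodes part of the framework above as a simple Python data structure (the module and topic names are drawn from the list; the structure itself is our illustrative assumption, not a prescribed format):

```python
from dataclasses import dataclass, field

@dataclass
class Module:
    title: str
    topics: list[str]
    work_forms: list[str] = field(default_factory=lambda: ["serious game"])

# Illustrative curriculum skeleton based on the framework above; modules can
# be added, re-ordered, or updated as AI technology evolves.
curriculum = [
    Module("The AI brain: system characteristics",
           ["qualities and limitations of AI vs. human intelligence"]),
    Module("Task and environment complexity from the AI's perspective",
           ["difficulty vs. complexity", "structured vs. open tasks"]),
    Module("Biases in human cognition vs. biases in AI",
           ["anchoring, availability, confirmation", "dataset and model bias"]),
    Module("Control, trust, and task allocation",
           ["predictability", "complacency", "dynamic task allocation",
            "accountability", "verification and validation"]),
]

for module in curriculum:
    print(module.title, "->", ", ".join(module.topics))
```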

In conclusion: due to the enormous speed with which AI technology and its applications evolve, we need a more versatile conceptualization of intelligence and an acknowledgment of its many possible forms and combinations. A revised conception of intelligence also includes a good understanding of the basic characteristics, possibilities, and limitations of different (biological, artificial) cognitive system properties, without anthropocentric and/or anthropomorphic misconceptions. This “Intelligence Awareness” is highly relevant for better understanding and dealing with the manifold possibilities and challenges of machine intelligence, for instance when deciding whether to use or deploy AI for given tasks in a given context. The development of educational curricula with new, targeted, and easily configurable training forms and learning environments for human-AI systems is therefore recommended. Further work should focus on training tools, methods, and content that are flexible and adaptive enough to keep up with the rapid changes in the field of AI and with the wide variety of target groups and learning goals.

Author Contributions

The literature search, analysis, conceptual work, and writing of the manuscript were done by JEK. All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors want to thank J. van Diggelen and L. J. H. M. Kester for their useful inputs for this manuscript. The present paper was a deliverable of the BIHUNT program (Behavioral Impact of NIC Teaming, V1719), funded by the Dutch Ministry of Defense, and of the Wise Policy Making program, funded by the Netherlands Organization for Applied Scientific Research (TNO).

1 Narrow AI can be defined as the production of systems displaying intelligence regarding specific, highly constrained tasks, like playing chess, facial recognition, autonomous navigation, or locomotion ( Goertzel et al., 2014 ).

2 Cognitive abilities involve deliberate, conceptual, or analytic thinking (e.g., calculation, statistics, analysis, reasoning, abstraction).

3 Unless of course AI will be deliberately constrained or degraded to human-level functioning.

4 Next to the issue of Human-Aware AI, i.e. tuning AI to the cognitive characteristics of humans.

Ackermann, N. (2018). Artificial intelligence framework: a visual introduction to machine learning and AI. Retrieved from: https://towardsdatascience.com/artificial-intelligence-framework-a-visual-introduction-to-machine-learning-and-ai-d7e36b304f87 (Accessed September 9, 2019).

Aliman, N-M. (2020). Hybrid cognitive-affective Strategies for AI safety . PhD thesis . Utrecht, Netherlands: Utrecht University . doi:10.33540/203

Bao, J. X., Kandel, E. R., and Hawkins, R. D. (1997). Involvement of pre- and postsynaptic mechanisms in posttetanic potentiation at Aplysia synapses. Science 275, 969–973. doi:10.1126/science.275.5302.969

Bar, M. (2007). The proactive brain: using analogies and associations to generate predictions. Trends Cogn. Sci. 11, 280–289. doi:10.1016/j.tics.2007.05.005

Baron, J., and Ritov, I. (2004). Omission bias, individual differences, and normality. Organizational Behav. Hum. Decis. Process. 94, 74–85. doi:10.1016/j.obhdp.2004.03.003

Belkom, R. v. (2019). Duikboten zwemmen niet: de zoektocht naar intelligente machines [Submarines don’t swim: the search for intelligent machines]. Den Haag: Stichting Toekomstbeeld der Techniek (STT).

Bergstein, B. (2017). AI isn’t very smart yet. But we need to get moving to make sure automation works for more people . Cambridge, MA, United States: MIT Technology Retrieved from: https://www.technologyreview.com/s/609318/the-great-ai-paradox/

Bieger, J. B., Thorisson, K. R., and Garrett, D. (2014). “Raising AI: tutoring matters,” in 7th International Conference, AGI 2014, Quebec City, QC, Canada, August 1–4, 2014, Proceedings. Editors B. Goertzel, L. Orseau, and J. Snaider (Berlin, Germany: Springer). doi:10.1007/978-3-319-09274-4

Boden, M. (2017). Principles of robotics: regulating robots in the real world. Connect. Sci. 29 (2), 124–129.

Bostrom, N. (2014). Superintelligence: paths, dangers, strategies. Oxford, United Kingdom: Oxford University Press.

Bradshaw, J. M., Dignum, V., Jonker, C. M., and Sierhuis, M. (2012). Introduction to special issue on human-agent-robot teamwork. IEEE Intell. Syst. 27, 8–13. doi:10.1109/MIS.2012.37

Brodal, A. (1981). Neurological anatomy in relation to clinical medicine . New York, NY, United States: Oxford University Press .

Brown, T. B., et al. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165v4.

Cialdini, R. D. (1984). Influence: the psychology of persuasion. New York, NY, United States: Harper.

Coley, J. D., and Tanner, K. D. (2012). Common origins of diverse misconceptions: cognitive principles and the development of biology thinking. CBE Life Sci. Educ. 11 (3), 209–215. doi:10.1187/cbe.12-06-0074

Collazos, C. A., Gutierrez, F. L., Gallardo, J., Ortega, M., Fardoun, H. M., and Molina, A. I. (2019). Descriptive theory of awareness for groupware development. J. Ambient Intelligence Humanized Comput. 10, 4789–4818. doi:10.1007/s12652-018-1165-9

Damasio, A. R. (1994). Descartes’ error: emotion, reason and the human brain . New York, NY, United States: G. P. Putnam’s Sons .

Elands, P., Huizing, A., Kester, L., Oggero, S., and Peeters, M. (2019). Governing ethical and effective behavior of intelligent systems: a novel framework for meaningful human control in a military context. Militaire Spectator 188 (6), 302–313.

Feldman-Barret, L. (2017). How emotions are made: the secret life of the brain . Boston, MA, United States: Houghton Mifflin Harcourt .

Fink, J. (2012). “Anthropomorphism and human likeness in the design of robots and human-robot interaction,” in Social robotics. ICSR 2012 . Lecture notes in computer science . Editors S. S. Ge, O. Khatib, J. J. Cabibihan, R. Simmons, and M. A. Williams (Berlin, Germany: Springer ), 7621. doi:10.1007/978-3-642-34103-8_20

Fischetti, M. (2011). Computers vs. brains. Scientific American, 175th anniversary issue. Retrieved from: https://www.scientificamerican.com/article/computers-vs-brains/.

Furnham, A., and Boo, H. C. (2011). A literature review of the anchoring effect. The J. Socio-Economics 40, 35–42. doi:10.1016/j.socec.2010.10.008

Gerla, M., Lee, E-K., and Pau, G. (2014). Internet of vehicles: from intelligent grid to autonomous cars and vehicular clouds. WF-IoT 12, 241–246. doi:10.1177/1550147716665500

Gibson, J. J. (1979). The ecological approach to visual perception . Boston, MA, United States: Houghton Mifflin .

Gibson, J. J. (1966). The senses considered as perceptual systems . Boston, MA, United States: Houghton Mifflin.

Gigerenzer, G., and Gaissmaier, W. (2011). Heuristic decision making. Annu. Rev. Psychol. 62, 451–482. doi:10.1146/annurev-psych-120709-145346

Goertzel, B. (2007). Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's the singularity is near, and McDermott’s critique of Kurzweil. Artif. Intelligence 171 (18), 1161–1173. doi:10.1016/j.artint.2007.10.011

Goertzel, B., Orseau, L., and Snaider, J., (Editors). (2014). Preface. 7th international conference, AGI 2014 Quebec City, QC, Canada, August 1–4, 2014 Proceedings Springer .

Grind, W. A. van de (1997). Natuurlijke intelligentie: over denken, intelligentie en bewustzijn van mensen en andere dieren [Natural intelligence: on thinking, intelligence, and consciousness of humans and other animals]. 2nd Edn. Amsterdam, Netherlands: Nieuwezijds. Retrieved from: https://www.nieuwezijds.nl/boek/natuurlijke-intelligentie/ (Accessed July 9, 2019).

Hardin, G. (1968). The tragedy of the commons. The population problem has no technical solution; it requires a fundamental extension in morality. Science 162, 1243–1248. doi:10.1126/science.162.3859.1243

Haring, K. S., Watanabe, K., Velonaki, M., Tosell, C. C., and Finomore, V. (2018). FFAB—the form function attribution bias in human-robot interaction. IEEE Trans. Cogn. Dev. Syst. 10 (4), 843–851. doi:10.1109/TCDS.2018.2851569

Haselton, M. G., Bryant, G. A., Wilke, A., Frederick, D. A., Galperin, A., Frankenhuis, W. E., et al. (2009). Adaptive rationality: an evolutionary perspective on cognitive bias. Soc. Cogn. 27, 733–762. doi:10.1521/soco.2009.27.5.733

Haselton, M. G., Nettle, D., and Andrews, P. W. (2005). “The evolution of cognitive bias,” in The handbook of evolutionary psychology . Editor D.M. Buss (Hoboken, NJ, United States: John Wiley & Sons ), 724–746.

Henshilwood, C., and Marean, C. (2003). The origin of modern human behavior. Curr. Anthropol. 44 (5), 627–651. doi:10.1086/377665

Hoffman, R. R., and Johnson, M. (2019). “The quest for alternatives to “levels of automation” and “task allocation,” in Human performance in automated and autonomous systems . Editors M. Mouloua, and P. A. Hancock (Boca Raton, FL, United States: CRC Press ), 43–68.

Hoffrage, U., Hertwig, R., and Gigerenzer, G. (2000). Hindsight bias: a by-product of knowledge updating? J. Exp. Psychol. Learn. Mem. Cogn. 26, 566–581. doi:10.1037/0278-7393.26.3.566

Horowitz, M. C. (2018). The promise and peril of military applications of artificial intelligence. Bulletin of the atomic scientists Retrieved from https://thebulletin.org/militaryapplications-artificial-intelligence/promise-and-peril-military-applications-artificial-intelligence (Accessed March 27, 2019).

Isaacson, J. S., and Scanziani, M. (2011). How inhibition shapes cortical activity. Neuron 72, 231–243. doi:10.1016/j.neuron.2011.09.027

Johnson, M., Bradshaw, J. M., Feltovich, P. J., Jonker, C. M., van Riemsdijk, M. B., and Sierhuis, M. (2014). Coactive design: designing support for interdependence in joint activity. J. Human-Robot Interaction 3 (1), 43–69. doi:10.5898/JHRI.3.1.Johnson

Kahle, W. (1979). “Band 3: Nervensysteme und Sinnesorgane [Vol. 3: Nervous systems and sensory organs],” in Taschenatlas der Anatomie [Pocket atlas of anatomy]. Editors W. Kahle, H. Leonhardt, and W. Platzer (Stuttgart/New York: Thieme Verlag).

Kahneman, D., and Klein, G. (2009). Conditions for intuitive expertise: a failure to disagree. Am. Psychol. 64, 515–526. doi:10.1037/a0016755

Kahneman, D. (2011). Thinking, fast and slow . New York, NY, United States: Farrar, Straus and Giroux .

Katz, B., and Miledi, R. (1968). The role of calcium in neuromuscular facilitation. J. Physiol. 195, 481–492. doi:10.1113/jphysiol.1968.sp008469

Kiesler, S., and Hinds, P. (2004). Introduction to this special issue on human–robot interaction. Int J Hum-Comput. Int. 19 (1), 1–8. doi:10.1080/07370024.2004.9667337

Klein, G., Woods, D. D., Bradshaw, J. M., Hoffman, R. R., and Feltovich, P. J. (2004). Ten challenges for making automation a ‘team player’ in joint human-agent activity. IEEE Intell. Syst. 19 (6), 91–95. doi:10.1109/MIS.2004.74

Korteling, J. E. (1994). Multiple-task performance and aging . Bariet, Ruinen, Netherlands: Dissertation. TNO-Human Factors Research Institute/State University Groningen https://www.researchgate.net/publication/310626711_Multiple-Task_Performance_and_Aging .

Korteling, J. E., and Toet, A. (2020). Cognitive biases. in Encyclopedia of behavioral neuroscience . 2nd Edn (Amsterdam-Edinburgh: Elsevier Science ) doi:10.1016/B978-0-12-809324-5.24105-9

Korteling, J. E., Brouwer, A. M., and Toet, A. (2018a). A neural network framework for cognitive bias. Front. Psychol. 9, 1561. doi:10.3389/fpsyg.2018.01561

Korteling, J. E., van de Boer-Visschedijk, G. C., Boswinkel, R. A., and Boonekamp, R. C. (2018b). Effecten van de inzet van Non-Human Intelligent Collaborators op Opleiding and Training [Effects of the deployment of non-human intelligent collaborators on education and training] [V1719]. Report TNO 2018 R11654. Soesterberg, Netherlands: TNO Defense, Safety and Security.

Korteling, J. E., Gerritsma, J., and Toet, A. (2021). Retention and transfer of cognitive bias mitigation interventions: a systematic literature study. Front. Psychol. 1–20. doi:10.13140/RG.2.2.27981.56800

Kosslyn, S. M., and Koenig, O. (1992). Wet Mind: the new cognitive neuroscience . New York, NY, United States: Free Press .

Krämer, N. C., von der Pütten, A., and Eimler, S. (2012). “Human-agent and human-robot interaction theory: similarities to and differences from human-human interaction,” in Human-computer interaction: the agency perspective . Studies in computational intelligence . Editors M. Zacarias, and J. V. de Oliveira (Berlin, Germany: Springer ), 396, 215–240. doi:10.1007/978-3-642-25691-2_9

Kurzweil, R. (2005). The singularity is near . New York, NY, United States: Viking press .

Kurzweil, R. (1990). The age of intelligent machines . Cambridge, MA, United States: MIT Press .

Lake, B. M., Ullman, T. D., Tenenbaum, J. B., and Gershman, S. J. (2017). Building machines that learn and think like people. Behav. Brain Sci. 40, e253. doi:10.1017/S0140525X16001837

Lichtenstein, S., and Slovic, P. (1971). Reversals of preference between bids and choices in gambling decisions. J. Exp. Psychol. 89, 46–55. doi:10.1037/h0031207

McBrearty, S., and Brooks, A. (2000). The revolution that wasn't: a new interpretation of the origin of modern human behavior. J. Hum. Evol. 39 (5), 453–563. doi:10.1006/jhev.2000.0435

McClelland, J. L. (1978). Perception and masking of wholes and parts. J. Exp. Psychol. Hum. Percept Perform. 4, 210–223. doi:10.1037//0096-1523.4.2.210

McDowd, J. M., and Craik, F. I. M. (1988). Effects of aging and task difficulty on divided attention performance. J. Exp. Psychol. Hum. Percept. Perform . 14, 267–280.

Minsky, M. (1986). The Society of Mind . London, United Kingdom: Simon and Schuster .

Moravec, H. (1988). Mind children . Cambridge, MA, United States: Harvard University Press .

Moravec, H. (1998). When will computer hardware match the human brain? J. Evol. Tech. 1. Retrieved from: https://jetpress.org/volume1/moravec.htm.

Müller, V. C., and Bostrom, N. (2016). Future progress in artificial intelligence: a survey of expert opinion. Fundamental issues of artificial intelligence . Cham, Switzerland: Springer . doi:10.1007/978-3-319-26485-1

Nickerson, R. S. (1998). Confirmation bias: a ubiquitous phenomenon in many guises. Rev. Gen. Psychol. 2, 175–220. doi:10.1037/1089-2680.2.2.175

Nosek, B. A., Hawkins, C. B., and Frazier, R. S. (2011). Implicit social cognition: from measures to mechanisms. Trends Cogn. Sci. 15 (4), 152–159. doi:10.1016/j.tics.2011.01.005

Patt, A., and Zeckhauser, R. (2000). Action bias and environmental decisions. J. Risk Uncertain. 21, 45–72. doi:10.1023/a:1026517309871

Peeters, M. M., van Diggelen, J., van den Bosch, K., Bronkhorst, A., Neerincx, M. A., Schraagen, J. M., et al. (2020). Hybrid collective intelligence in a human–AI society. AI and Society 38, 217–238. doi:10.1007/s00146-020-01005-y

Petraglia, M. D., and Korisettar, R. (1998). Early human behavior in global context . Oxfordshire, United Kingdom: Routledge .

Pomerantz, J. (1981). “Perceptual organization in information processing,” in Perceptual organization . Editors M. Kubovy, and J. Pomerantz (Hillsdale, NJ, United States: Lawrence Erlbaum ).

Pronin, E., Lin, D. Y., and Ross, L. (2002). The bias blind spot: perceptions of bias in self versus others. Personal. Soc. Psychol. Bull. 28, 369–381. doi:10.1177/0146167202286008

Reicher, G. M. (1969). Perceptual recognition as a function of meaningfulness of stimulus material. J. Exp. Psychol. 81, 274–280.

Rich, E., and Knight, K. (1991). Artificial intelligence . 2nd edition. New York, NY, United States: McGraw-Hill .

Rich, E., Knight, K., and Nair, S. B. (2009). Artificial intelligence. 3rd Edn. New Delhi, India: Tata McGraw-Hill.

Risen, J. L. (2015). Believing what we do not believe: acquiescence to superstitious beliefs and other powerful intuitions. Psychol. Rev. 123, 182–207. doi:10.1037/rev0000017

Roese, N. J., and Vohs, K. D. (2012). Hindsight bias. Perspect. Psychol. Sci. 7, 411–426. doi:10.1177/1745691612454303

Rogers, R. D., and Monsell, S. (1995). Costs of a predictable switch between simple cognitive tasks. J. Exp. Psychol. Gen. 124, 207–231. doi:10.1037/0096-3445.124.2.207

Rubinstein, J. S., Meyer, D. E., and Evans, J. E. (2001). Executive control of cognitive processes in task switching. J. Exp. Psychol. Hum. Percept Perform. 27, 763–797. doi:10.1037//0096-1523.27.4.763

Russell, S., and Norvig, P. (2014). Artificial intelligence: a modern approach . 3rd ed. Harlow, United Kingdom: Pearson Education .

Shafir, E., and LeBoeuf, R. A. (2002). Rationality. Annu. Rev. Psychol. 53, 491–517. doi:10.1146/annurev.psych.53.100901.135213

Shatz, C. J. (1992). The developing brain. Sci. Am. 267, 60–67. doi:10.1038/scientificamerican0992-60

Shneiderman, B. (2020a). Design lessons from AI’s two grand goals: human emulation and useful applications. IEEE Trans. Tech. Soc. 1, 73–82. doi:10.1109/TTS.2020.2992669

Shneiderman, B. (2020b). Human-centered artificial intelligence: reliable, safe & trustworthy. Int. J. Human–Computer Interaction 36 (6), 495–504. doi:10.1080/10447318.2020.1741118

Siegel, A., and Sapru, H. N. (2005). Essential neuroscience. Philadelphia, PA, United States: Lippincott Williams and Wilkins.

Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., et al. (2017). Mastering the game of go without human knowledge. Nature 550 (7676), 354. doi:10.1038/nature24270

Simon, H. A. (1955). A behavioral model of rational choice. Q. J. Econ. 69, 99–118. doi:10.2307/1884852

Taylor, D. M., and Doria, J. R. (1981). Self-serving and group-serving bias in attribution. J. Soc. Psychol. 113, 201–211. doi:10.1080/00224545.1981.9924371

Tegmark, M. (2017). Life 3.0: being human in the age of artificial intelligence . New York, NY, United States: Borzoi Book published by A.A. Knopf .

Toet, A., Brouwer, A. M., van den Bosch, K., and Korteling, J. E. (2016). Effects of personal characteristics on susceptibility to decision bias: a literature study. Int. J. Humanities Soc. Sci. 8, 1–17.

Tooby, J., and Cosmides, L. (2005). “Conceptual foundations of evolutionary psychology,” in Handbook of evolutionary psychology . Editor D.M. Buss (Hoboken, NJ, United States: John Wiley & Sons ), 5–67.

Tversky, A., and Kahneman, D. (1974). Judgment under uncertainty: heuristics and biases. Science 185 (4157), 1124–1131. doi:10.1126/science.185.4157.1124

Tversky, A., and Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science 211, 453–458. doi:10.1126/science.7455683

Tversky, A., and Kahneman, D. (1973). Availability: a heuristic for judging frequency and probability. Cogn. Psychol. 5, 207–232. doi:10.1016/0010-0285(73)90033-9

van den Bosch, K., and Bronkhorst, K. (2018). Human-AI cooperation to benefit military decision making. Soesterberg, Netherlands: TNO.

van den Bosch, K., and Bronkhorst, K. (2019). Six challenges for human-AI Co-learning. Adaptive instructional systems 11597, 572–589. doi:10.1007/978-3-030-22341-0_45

Weisstein, N., and Harris, C. S. (1974). Visual detection of line segments: an object-superiority effect. Science 186, 752–755. doi:10.1126/science.186.4165.752

Werkhoven, P., Neerincx, M., and Kester, L. (2018). Telling autonomous systems what to do. Proceedings of the 36th European Conference on Cognitive Ergonomics, ECCE 2018 , Utrecht, Nehterlands , 5–7 September, 2018 , 1–8. doi:10.1145/3232078.3232238

Wheeler, D., (1970). Processes in word recognition Cogn. Psychol. 1, 59–85.

Williams, A., and Weisstein, N. (1978). Line segments are perceived better in a coherent context than alone: an object-line effect in visual perception. Mem. Cognit 6, 85–90. doi:10.3758/bf03197432

Wingfield, A., and Byrnes, D. (1981). The psychology of human memory . New York, NY, united States: Academic Press .

Wood, R. E., Mento, A. J., and Locke, E. A. (1987). Task complexity as a moderator of goal effects: a meta-analysis. J. Appl. Psychol. 72 (3), 416–425. doi:10.1037/0021-9010.72.3.416

Wyrobek, K. A., Berger, E. H., van der Loos, H. F. M., and Salisbury, J. K. (2008). Toward a personal robotics development platform: rationale and design of an intrinsically safe personal robot. Proceedinds of 2008 IEEE International Conference on Robotics and Automation , Pasadena, CA, United States , 19-23 May 2008 . doi:10.1109/ROBOT.2008.4543527

Keywords: human intelligence, artificial intelligence, artificial general intelligence, human-level artificial intelligence, cognitive complexity, narrow artificial intelligence, human-AI collaboration, cognitive bias

Citation: Korteling JE, van de Boer-Visschedijk GC, Blankendaal RAM, Boonekamp RC and Eikelboom AR (2021) Human- versus Artificial Intelligence. Front. Artif. Intell. 4:622364. doi: 10.3389/frai.2021.622364

Received: 29 October 2020; Accepted: 01 February 2021; Published: 25 March 2021.


Copyright © 2021 Korteling, van de Boer-Visschedijk, Blankendaal, Boonekamp and Eikelboom. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: J. E. (Hans). Korteling, [email protected]


Artificial Intelligence and the Future of Humans

Experts say the rise of artificial intelligence will make most people better off over the next decade, but many have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will

Table of contents

  • 1. Concerns about human agency, evolution and survival
  • 2. Solutions to address AI’s anticipated negative impacts
  • 3. Improvements ahead: How humans and AI might evolve together in the next decade
  • About this canvassing of experts
  • Acknowledgments


Digital life is augmenting human capacities and disrupting eons-old human activities. Code-driven systems have spread to more than half of the world’s inhabitants in ambient information and connectivity, offering previously unimagined opportunities and unprecedented threats. As emerging algorithm-driven artificial intelligence (AI) continues to spread, will people be better off than they are today?

Some 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists answered this question in a canvassing of experts conducted in the summer of 2018.

The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities. They spoke of wide-ranging possibilities: that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation. They said “smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more-customized future.

Many focused their optimistic remarks on health care and the many possible applications of AI in diagnosing and treating patients or helping senior citizens live fuller and healthier lives. They were also enthusiastic about AI’s role in contributing to broad public-health programs built around massive amounts of data that may be captured in the coming years about everything from personal genomes to nutrition. Additionally, a number of these experts predicted that AI would abet long-anticipated changes in formal and informal education systems.

Yet, most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.


Specifically, participants were asked to consider the following:

“Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties.

Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today?”

Overall, and despite the downsides they fear, 63% of respondents in this canvassing said they are hopeful that most individuals will be mostly better off in 2030, and 37% said people will not be better off.

A number of the thought leaders who participated in this canvassing said humans’ expanding reliance on technological systems will only go well if close attention is paid to how these tools, platforms and networks are engineered, distributed and updated. Some of the powerful, overarching answers included those from:

Sonia Katyal , co-director of the Berkeley Center for Law and Technology and a member of the inaugural U.S. Commerce Department Digital Economy Board of Advisors, predicted, “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”


Erik Brynjolfsson , director of the MIT Initiative on the Digital Economy and author of “Machine, Platform, Crowd: Harnessing Our Digital Future,” said, “AI and related technologies have already achieved superhuman performance in many areas, and there is little doubt that their capabilities will improve, probably very significantly, by 2030. … I think it is more likely than not that we will use this power to make the world a better place. For instance, we can virtually eliminate global poverty, massively reduce disease and provide better education to almost everyone on the planet. That said, AI and ML [machine learning] can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons. Neither outcome is inevitable, so the right question is not ‘What will happen?’ but ‘What will we choose to do?’ We need to work aggressively to make sure technology matches our values. This can and must be done at all levels, from government, to business, to academia, and to individual choices.”

Bryan Johnson , founder and CEO of Kernel, a leading developer of advanced neural interfaces, and OS Fund, a venture capital firm, said, “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”

Marina Gorbis, executive director of the Institute for the Future, said, “Without significant changes in our political economy and data governance regimes [AI] is likely to create greater economic inequalities, more surveillance and more programmed and non-human-centric interactions. Every time we program our environments, we end up programming ourselves and our interactions. Humans have to become more standardized, removing serendipity and ambiguity from our interactions. And this ambiguity and complexity are the essence of being human.”

Judith Donath , author of “The Social Machine, Designs for Living Online” and faculty fellow at Harvard University’s Berkman Klein Center for Internet & Society, commented, “By 2030, most social situations will be facilitated by bots – intelligent-seeming programs that interact with us in human-like ways. At home, parents will engage skilled bots to help kids with homework and catalyze dinner conversations. At work, bots will run meetings. A bot confidant will be considered essential for psychological well-being, and we’ll increasingly turn to such companions for advice ranging from what to wear to whom to marry. We humans care deeply about how others see us – and the others whose approval we seek will increasingly be artificial. By then, the difference between humans and bots will have blurred considerably. Via screen and projection, the voice, appearance and behaviors of bots will be indistinguishable from those of humans, and even physical robots, though obviously non-human, will be so convincingly sincere that our impression of them as thinking, feeling beings, on par with or superior to ourselves, will be unshaken. Adding to the ambiguity, our own communication will be heavily augmented: Programs will compose many of our messages and our online/AR appearance will [be] computationally crafted. (Raw, unaided human speech and demeanor will seem embarrassingly clunky, slow and unsophisticated.) Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us. Able to mimic emotion expertly, they’ll never be overcome by feelings: If they blurt something out in anger, it will be because that behavior was calculated to be the most efficacious way of advancing whatever goals they had ‘in mind.’ But what are those goals? Artificially intelligent companions will cultivate the impression that social goals similar to our own motivate them – to be held in good regard, whether as a beloved friend, an admired boss, etc. But their real collaboration will be with the humans and institutions that control them. Like their forebears today, these will be sellers of goods who employ them to stimulate consumption and politicians who commission them to sway opinions.”

Andrew McLaughlin , executive director of the Center for Innovative Thinking at Yale University, previously deputy chief technology officer of the United States for President Barack Obama and global public policy lead for Google, wrote, “2030 is not far in the future. My sense is that innovations like the internet and networked AI have massive short-term benefits, along with long-term negatives that can take decades to be recognizable. AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment.”

Michael M. Roberts , first president and CEO of the Internet Corporation for Assigned Names and Numbers (ICANN) and Internet Hall of Fame member, wrote, “The range of opportunities for intelligent agents to augment human intelligence is still virtually unlimited. The major issue is that the more convenient an agent is, the more it needs to know about you – preferences, timing, capacities, etc. – which creates a tradeoff of more help requires more intrusion. This is not a black-and-white issue – the shades of gray and associated remedies will be argued endlessly. The record to date is that convenience overwhelms privacy. I suspect that will continue.”

danah boyd , a principal researcher for Microsoft and founder and president of the Data & Society Research Institute, said, “AI is a tool that will be used by humans for all sorts of purposes, including in the pursuit of power. There will be abuses of power that involve AI, just as there will be advances in science and humanitarian efforts that also involve AI. Unfortunately, there are certain trend lines that are likely to create massive instability. Take, for example, climate change and climate migration. This will further destabilize Europe and the U.S., and I expect that, in panic, we will see AI be used in harmful ways in light of other geopolitical crises.”

Amy Webb, founder of the Future Today Institute and professor of strategic foresight at New York University, commented, “The social safety net structures currently in place in the U.S. and in many other countries around the world weren’t designed for our transition to AI. The transition through AI will last the next 50 years or more. As we move farther into this third era of computing, and as every single industry becomes more deeply entrenched with AI systems, we will need new hybrid-skilled knowledge workers who can operate in jobs that have never needed to exist before. We’ll need farmers who know how to work with big data sets. Oncologists trained as roboticists. Biologists trained as electrical engineers. We won’t need to prepare our workforce just once, with a few changes to the curriculum. As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging. It’s easy to look back on history through the lens of the present – and to overlook the social unrest caused by widespread technological unemployment. We need to address a difficult truth that few are willing to utter aloud: AI will eventually cause a large number of people to be permanently out of work. Just as generations before witnessed sweeping changes during and in the aftermath of the Industrial Revolution, the rapid pace of technology will likely mean that Baby Boomers and the oldest members of Gen X – especially those whose jobs can be replicated by robots – won’t be able to retrain for other kinds of work without a significant investment of time and effort.”

Barry Chudakov , founder and principal of Sertain Research, commented, “By 2030 the human-machine/AI collaboration will be a necessary tool to manage and counter the effects of multiple simultaneous accelerations: broad technology advancement, globalization, climate change and attendant global migrations. In the past, human societies managed change through gut and intuition, but as Eric Teller, CEO of Google X, has said, ‘Our societal structures are failing to keep pace with the rate of change.’ To keep pace with that change and to manage a growing list of ‘wicked problems’ by 2030, AI – or using Joi Ito’s phrase, extended intelligence – will value and revalue virtually every area of human behavior and interaction. AI and advancing technologies will change our response framework and time frames (which in turn, changes our sense of time). Where once social interaction happened in places – work, school, church, family environments – social interactions will increasingly happen in continuous, simultaneous time. If we are fortunate, we will follow the 23 Asilomar AI Principles outlined by the Future of Life Institute and will work toward ‘not undirected intelligence but beneficial intelligence.’ Akin to nuclear deterrence stemming from mutually assured destruction, AI and related technology systems constitute a force for a moral renaissance. We must embrace that moral renaissance, or we will face moral conundrums that could bring about human demise. … My greatest hope for human-machine/AI collaboration constitutes a moral and ethical renaissance – we adopt a moonshot mentality and lock arms to prepare for the accelerations coming at us. My greatest fear is that we adopt the logic of our emerging technologies – instant response, isolation behind screens, endless comparison of self-worth, fake self-presentation – without thinking or responding smartly.”

John C. Havens , executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the Council on Extended Intelligence, wrote, “Now, in 2018, a majority of people around the world can’t access their data, so any ‘human-AI augmentation’ discussions ignore the critical context of who actually controls people’s information and identity. Soon it will be extremely difficult to identify any autonomous or intelligent systems whose algorithms don’t interact with human data in one form or another.”


Batya Friedman , a human-computer interaction professor at the University of Washington’s Information School, wrote, “Our scientific and technological capacities have and will continue to far surpass our moral ones – that is our ability to use wisely and humanely the knowledge and tools that we develop. … Automated warfare – when autonomous weapons kill human beings without human engagement – can lead to a lack of responsibility for taking the enemy’s life or even knowledge that an enemy’s life has been taken. At stake is nothing less than what sort of society we want to live in and how we experience our humanity.”

Greg Shannon , chief scientist for the CERT Division at Carnegie Mellon University, said, “Better/worse will appear 4:1 with the long-term ratio 2:1. AI will do well for repetitive work where ‘close’ will be good enough and humans dislike the work. … Life will definitely be better as AI extends lifetimes, from health apps that intelligently ‘nudge’ us to health, to warnings about impending heart/stroke events, to automated health care for the underserved (remote) and those who need extended care (elder care). As to liberty, there are clear risks. AI affects agency by creating entities with meaningful intellectual capabilities for monitoring, enforcing and even punishing individuals. Those who know how to use it will have immense potential power over those who don’t/can’t. Future happiness is really unclear. Some will cede their agency to AI in games, work and community, much like the opioid crisis steals agency today. On the other hand, many will be freed from mundane, unengaging tasks/jobs. If elements of community happiness are part of AI objective functions, then AI could catalyze an explosion of happiness.”

Kostas Alexandridis , author of “Exploring Complex Dynamics in Multi-agent-based Intelligent Systems,” predicted, “Many of our day-to-day decisions will be automated with minimal intervention by the end-user. Autonomy and/or independence will be sacrificed and replaced by convenience. Newer generations of citizens will become more and more dependent on networked AI structures and processes. There are challenges that need to be addressed in terms of critical thinking and heterogeneity. Networked interdependence will, more likely than not, increase our vulnerability to cyberattacks. There is also a real likelihood that there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control.”

Oscar Gandy , emeritus professor of communication at the University of Pennsylvania, responded, “We already face an ungranted assumption when we are asked to imagine human-machine ‘collaboration.’ Interaction is a bit different, but still tainted by the grant of a form of identity – maybe even personhood – to machines that we will use to make our way through all sorts of opportunities and challenges. The problems we will face in the future are quite similar to the problems we currently face when we rely upon ‘others’ (including technological systems, devices and networks) to acquire things we value and avoid those other things (that we might, or might not be aware of).”

James Scofield O’Rourke , a professor of management at the University of Notre Dame, said, “Technology has, throughout recorded history, been a largely neutral concept. The question of its value has always been dependent on its application. For what purpose will AI and other technological advances be used? Everything from gunpowder to internal combustion engines to nuclear fission has been applied in both helpful and destructive ways. Assuming we can contain or control AI (and not the other way around), the answer to whether we’ll be better off depends entirely on us (or our progeny). ‘The fault, dear Brutus, is not in our stars, but in ourselves, that we are underlings.’”

Simon Biggs, a professor of interdisciplinary arts at the University of Edinburgh, said, “AI will function to augment human capabilities. The problem is not with AI but with humans. As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified. Given historical precedent, one would have to assume it will be our worst qualities that are augmented. My expectation is that in 2030 AI will be in routine use to fight wars and kill people, far more effectively than we can currently kill. As societies we will be less affected by this than we currently are, as we will not be doing the fighting and killing ourselves. Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing. We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully. My other primary concern is to do with surveillance and control. The advent of China’s Social Credit System (SCS) is an indicator of what is likely to come. We will exist within an SCS as AI constructs hybrid instances of ourselves that may or may not resemble who we are. But our rights and affordances as individuals will be determined by the SCS. This is the Orwellian nightmare realised.”

Mark Surman , executive director of the Mozilla Foundation, responded, “AI will continue to concentrate power and wealth in the hands of a few big monopolies based on the U.S. and China. Most people – and parts of the world – will be worse off.”

William Uricchio , media scholar and professor of comparative media studies at MIT, commented, “AI and its related applications face three problems: development at the speed of Moore’s Law, development in the hands of a technological and economic elite, and development without benefit of an informed or engaged public. The public is reduced to a collective of consumers awaiting the next technology. Whose notion of ‘progress’ will prevail? We have ample evidence of AI being used to drive profits, regardless of implications for long-held values; to enhance governmental control and even score citizens’ ‘social credit’ without input from citizens themselves. Like technologies before it, AI is agnostic. Its deployment rests in the hands of society. But absent an AI-literate public, the decision of how best to deploy AI will fall to special interests. Will this mean equitable deployment, the amelioration of social injustice and AI in the public service? Because the answer to this question is social rather than technological, I’m pessimistic. The fix? We need to develop an AI-literate public, which means focused attention in the educational sector and in public-facing media. We need to assure diversity in the development of AI technologies. And until the public, its elected representatives and their legal and regulatory regimes can get up to speed with these fast-moving developments we need to exercise caution and oversight in AI’s development.”

The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education. Some responses are lightly edited for style.



Difference Between Artificial Intelligence and Human Intelligence

Artificial Intelligence:

Artificial intelligence models human reasoning in ways that machines can carry out, from simple tasks to far more complex ones. Its goals include learning, problem-solving, reasoning, and perception.

The term is applied to any machine that exhibits traits associated with the human mind, such as analysis and decision-making, and that increases efficiency.

AI covers tasks such as robotics, control systems, face recognition, scheduling, and data mining, among many others.

Advantages of Artificial Intelligence (AI):

  • AI can process vast amounts of data much faster than humans.
  • AI can work around the clock without needing breaks or rest.
  • AI can perform tasks that are too dangerous or difficult for humans.

Disadvantages of Artificial Intelligence (AI):

  • AI lacks the creativity and intuition that humans possess.
  • AI is limited by its programming and may not be able to adapt to new or unexpected situations.
  • AI may make errors if not programmed and trained properly.

Human Intelligence:  

Human intelligence, reflected in human behavior, is grounded in past experience and in actions suited to situation and environment. It rests on the ability to change one's surroundings through acquired knowledge.

It yields diverse kinds of information: knowledge tied to a particular skill or domain, insight into other people and, in fields such as espionage, access to privileged information. Above all, it provides understanding of interpersonal relationships and networks of interest.

Advantages of Human Intelligence (HI):

  • HI has creativity, intuition, and emotional intelligence that AI lacks.
  • HI can adapt to new and unexpected situations.
  • HI can provide ethical and moral considerations in decision-making.

Disadvantages of Human Intelligence (HI):

  • HI is limited by its physical and mental capabilities.
  • HI is prone to biases and may make errors or poor decisions.
  • HI requires rest and breaks, which can slow down processes.

Similarities between Artificial Intelligence (AI) and Human Intelligence (HI):

  • Both AI and HI can learn and improve over time.
  • Both AI and HI can be used to solve complex problems and make decisions.
  • Both AI and HI can process and interpret information from the world around them.

Below is a table of differences between artificial intelligence and human intelligence:

| Feature | Artificial Intelligence | Human Intelligence |
| --- | --- | --- |
| Emergence | An artifact of human intelligence; its early development is credited to Norbert Wiener, who theorized about feedback mechanisms. | Humans are born with the innate capacity to think, reason and remember. |
| Nature | Strives to build machines that can mimic human behavior and carry out human-like tasks. | Seeks to adapt to new situations by combining a variety of cognitive processes. |
| State | Machines are digital. | The human brain is analog. |
| Processing | Based on algorithms and mathematical models; machines rely on input data and instructions. | Based on cognitive processes and biological structures; humans draw on memory, processing power and cognitive ability. |
| Learning | Acquires knowledge through data, feedback loops and frequent training; it cannot reason abstractly from past experience the way people do. | Grounded in experience, intuition and creativity; learning from varied events and prior experience is its foundation. |
| Speed | Processes large amounts of data far faster than people; if a person solves one math problem in five minutes, an AI can solve ten in a minute. | Slower at processing large volumes of data, but able to make complex decisions quickly. |
| Decision-making | Highly objective, analyzing purely on the basis of accumulated data. | Choices may be influenced by subjective factors that are not based on figures alone. |
| Accuracy | Often produces precise results because it operates on a set of programmed rules. | There is usually room for “human error,” since details may be missed at one point or another. |
| Energy consumption | A modern computer draws tens to hundreds of watts, and large AI systems far more. | The human brain runs on roughly 20–25 watts. |
| Adaptability | Takes much longer to adjust to new changes. | Adapts flexibly to changes in the environment, which lets people learn and master diverse skills. |
| Versatility | Performs a limited number of tasks at a time, since a system learns each duty one at a time. | Supports multitasking, as shown by people's diverse and concurrent roles. |
| Social interaction | Has not mastered picking up on social and emotional cues. | As social creatures, humans process abstract information, have self-awareness and are sensitive to others' feelings. |
| Emotions | Lacks emotions and empathy. | Capable of feeling emotions and empathy. |
| Creativity | Optimizes systems; limited ability to be creative or think outside the box. | Innovative and creative; capable of imagination. |
| Ethics | Has no moral code or conscience. | Has a moral code and conscience that guide decision-making. |
| Physical limitations | No physical limitations; can operate 24/7. | Limited by physical capabilities; requires rest and maintenance. |
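To make the Learning row concrete, here is a minimal sketch of machine learning as a data-and-feedback loop: a one-parameter model nudged toward smaller error on each pass. The data and numbers are invented for illustration, and the code shows only the bare mechanism, not any production library.

```python
# Minimal sketch: machine "learning" as a feedback loop.
# A one-parameter model repeatedly nudges its weight in the
# direction that reduces prediction error (gradient descent).

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (input, target) pairs, roughly y = 2x

weight = 0.0          # initial guess
learning_rate = 0.01  # how strongly each error nudges the weight

for epoch in range(1000):
    for x, target in data:
        error = weight * x - target          # feedback signal
        weight -= learning_rate * error * x  # gradient step for squared error

print(f"learned weight: {weight:.2f}")  # converges near 2.0
```

The contrast with the human column is precisely what this loop does not do: the program never steps outside the single relationship it was tuned on, whereas a person can notice the pattern, question it and transfer it to a new problem.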

Conclusion:

Artificial Intelligence (AI) mimics specific cognitive abilities but lacks the depth of the human mind. Human intelligence encompasses cognitive abilities, emotions, creativity, and adaptability. The true potential lies in synergizing AI’s data processing prowess with human intuition, ethics, and contextual understanding to augment capabilities while aligning with societal values.


The present and future of AI

Finale Doshi-Velez on how AI is shaping our lives and how we can shape AI


Finale Doshi-Velez, the John L. Loeb Professor of Engineering and Applied Sciences. (Photo courtesy of Eliza Grinnell/Harvard SEAS)

How has artificial intelligence changed and shaped our world over the last five years? How will AI continue to impact our lives in the coming years? Those were the questions addressed in the most recent report from the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted at Stanford University, that will study the status of AI technology and its impacts on the world over the next 100 years.

The 2021 report is the second in a series that will be released every five years until 2116. Titled “Gathering Strength, Gathering Storms,” the report explores the various ways AI is increasingly touching people’s lives in settings that range from movie recommendations and voice assistants to autonomous driving and automated medical diagnoses.

Barbara Grosz, the Higgins Research Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is a member of the standing committee overseeing the AI100 project, and Finale Doshi-Velez, Gordon McKay Professor of Computer Science, is part of the panel of interdisciplinary researchers who wrote this year’s report.

We spoke with Doshi-Velez about the report, what it says about the role AI is currently playing in our lives, and how it will change in the future.  

Q: Let's start with a snapshot: What is the current state of AI and its potential?

Doshi-Velez: Some of the biggest changes in the last five years have been how well AIs now perform in large data regimes on specific types of tasks.  We've seen [DeepMind’s] AlphaZero become the best Go player entirely through self-play, and everyday uses of AI such as grammar checks and autocomplete, automatic personal photo organization and search, and speech recognition become commonplace for large numbers of people.  

In terms of potential, I'm most excited about AIs that might augment and assist people.  They can be used to drive insights in drug discovery, help with decision making such as identifying a menu of likely treatment options for patients, and provide basic assistance, such as lane keeping while driving or text-to-speech based on images from a phone for the visually impaired.  In many situations, people and AIs have complementary strengths. I think we're getting closer to unlocking the potential of people and AI teams.


Q: Over the course of 100 years, these reports will tell the story of AI and its evolving role in society. Even though there have only been two reports, what's the story so far?

There's actually a lot of change even in five years.  The first report is fairly rosy.  For example, it mentions how algorithmic risk assessments may mitigate the human biases of judges.  The second has a much more mixed view.  I think this comes from the fact that as AI tools have come into the mainstream — both in higher stakes and everyday settings — we are appropriately much less willing to tolerate flaws, especially discriminatory ones. There's also been questions of information and disinformation control as people get their news, social media, and entertainment via searches and rankings personalized to them. So, there's a much greater recognition that we should not be waiting for AI tools to become mainstream before making sure they are ethical.

Q: What is the responsibility of institutes of higher education in preparing students and the next generation of computer scientists for the future of AI and its impact on society?

First, I'll say that the need to understand the basics of AI and data science starts much earlier than higher education!  Children are being exposed to AIs as soon as they click on videos on YouTube or browse photo albums. They need to understand aspects of AI such as how their actions affect future recommendations.

But for computer science students in college, I think a key thing that future engineers need to realize is when to demand input and how to talk across disciplinary boundaries to get at often difficult-to-quantify notions of safety, equity, fairness, etc.  I'm really excited that Harvard has the Embedded EthiCS program to provide some of this education.  Of course, this is an addition to standard good engineering practices like building robust models, validating them, and so forth, which is all a bit harder with AI.


Q: Your work focuses on machine learning with applications to healthcare, which is also an area of focus of this report. What is the state of AI in healthcare? 

A lot of AI in healthcare has been on the business end, used for optimizing billing, scheduling surgeries, that sort of thing.  When it comes to AI for better patient care, which is what we usually think about, there are few legal, regulatory, and financial incentives to do so, and many disincentives. Still, there's been slow but steady integration of AI-based tools, often in the form of risk scoring and alert systems.
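(For readers unfamiliar with such tools, a deliberately toy sketch of a risk-scoring alert follows; the vitals, weights and threshold are invented for illustration, and real clinical systems are validated on patient data and regulated.)

```python
# Toy sketch of a clinical early-warning risk score (illustrative only;
# all thresholds and weights here are invented for this example).

def risk_score(heart_rate: int, respiratory_rate: int, systolic_bp: int) -> int:
    """Combine a few vitals into a single risk score."""
    score = 0
    if heart_rate > 110 or heart_rate < 50:
        score += 2
    if respiratory_rate > 24:
        score += 2
    if systolic_bp < 90:
        score += 3
    return score

ALERT_THRESHOLD = 4  # invented cutoff for this sketch

patient = {"heart_rate": 118, "respiratory_rate": 26, "systolic_bp": 86}
score = risk_score(**patient)
if score >= ALERT_THRESHOLD:
    print(f"ALERT: risk score {score}, flag for clinician review")
```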

In the near future, two applications that I'm really excited about are triage in low-resource settings — having AIs do initial reads of pathology slides, for example, if there are not enough pathologists, or get an initial check of whether a mole looks suspicious — and ways in which AIs can help identify promising treatment options for discussion with a clinician team and patient.

Q: Any predictions for the next report?

I'll be keen to see where currently nascent AI regulation initiatives have gotten to. Accountability is such a difficult question in AI that it's tricky to nurture both innovation and basic protections. Perhaps the most important innovation will be in approaches for AI accountability.


How close are we to AI that surpasses human intelligence?

Jeremy Baum, Undergraduate Student, UCLA; Researcher, UCLA Institute for Technology, Law, and Policy

John Villasenor, Nonresident Senior Fellow, Governance Studies, Center for Technology Innovation

July 18, 2023

  • Artificial general intelligence (AGI) is difficult to precisely define but refers to a superintelligent AI recognizable from science fiction.
  • AGI may still be far off, but the growing capabilities of generative AI suggest that we could be making progress toward its development.
  • The development of AGI will have a transformative effect on society and create significant opportunities and threats, raising difficult questions about regulation.

For decades, superintelligent artificial intelligence (AI) has been a staple of science fiction, embodied in books and movies about androids, robot uprisings, and a world taken over by computers. As far-fetched as those plots often were, they played off a very real mix of fascination, curiosity, and trepidation regarding the potential to build intelligent machines.

Today, public interest in AI is at an all-time high. With the headlines in recent months about generative AI systems like ChatGPT, there is also a different phrase that has started to enter the broader dialog: artificial general intelligence, or AGI. But what exactly is AGI, and how close are today’s technologies to achieving it?

Despite the similarity in the phrases generative AI and artificial general intelligence, they have very different meanings. As a post from IBM explains, “Generative AI refers to deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on.” However, the ability of an AI system to generate content does not necessarily mean that its intelligence is general.

To better understand artificial general intelligence, it helps to first understand how it differs from today’s AI, which is highly specialized. For example, an AI chess program is extraordinarily good at playing chess, but if you ask it to write an essay on the causes of World War I, it won’t be of any use. Its intelligence is limited to one specific domain. Other examples of specialized AI include the systems that provide content recommendations on the social media platform TikTok, navigation decisions in driverless cars, and purchase recommendations from Amazon.

AGI: A range of definitions

By contrast, AGI refers to a much broader form of machine intelligence. There is no single, formally recognized definition of AGI—rather, there is a range of definitions that include the following:

“…highly autonomous systems that outperform humans at most economically valuable work”
“[a] hypothetical computer program that can perform intellectual tasks as well as, or better than, a human.”
“…any intelligence (there might be many) that is flexible and general, with resourcefulness and reliability comparable to (or beyond) human intelligence.”
“…systems that demonstrate broad capabilities of intelligence, including reasoning, planning, and the ability to learn from experience, and with these capabilities at or above human-level.”

While the OpenAI definition ties AGI to the ability to “outperform humans at most economically valuable work,” today’s systems are nowhere near that capable. Consider Indeed’s list of the most common jobs in the U.S. As of March 2023, the first 10 jobs on that list were: cashier, food preparation worker, stocking associate, laborer, janitor, construction worker, bookkeeper, server, medical assistant, and bartender. These jobs require not only intellectual capacity but, crucially, most of them require a far higher degree of manual dexterity than today’s most advanced AI robotics systems can achieve.

None of the other AGI definitions specifically mention economic value. Another contrast evident among these definitions is that while the OpenAI definition requires outperforming humans, the others only require AGI to perform at levels comparable to humans. Common to all of the definitions, either explicitly or implicitly, is the concept that an AGI system can perform tasks across many domains, adapt to the changes in its environment, and solve new problems—not only the ones in its training data.

GPT-4: Sparks of AGI?

A group of industry AI researchers recently made a splash when they published a preprint of an academic paper titled, “Sparks of Artificial General Intelligence: Early experiments with GPT-4.” GPT-4 is a large language model that has been publicly accessible to ChatGPT Plus (paid upgrade) users since March 2023. The researchers noted that “GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting,” exhibiting “strikingly close to human-level performance.” They concluded that GPT-4 “could reasonably be viewed as an early (yet still incomplete) version” of AGI.

Of course, there are also skeptics: As quoted in a May New York Times article , Carnegie Mellon professor Maarten Sap said, “The ‘Sparks of A.G.I.’ is an example of some of these big companies co-opting the research paper format into P.R. pitches.” In an interview with IEEE Spectrum, researcher and robotics entrepreneur Rodney Brooks underscored that in evaluating the capabilities of systems like ChatGPT, we often “mistake performance for competence.”

GPT-4 and beyond

While the version of GPT-4 currently available to the public is impressive, it is not the end of the road. There are groups working on additions to GPT-4 that are more goal-driven, meaning that you can give the system an instruction such as “Design and build a website on (topic).” The system will then figure out exactly what subtasks need to be completed, and in what order, to achieve that goal. Today, these systems are not particularly reliable, as they frequently fail to reach the stated goal. But they will certainly get better in the future.
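As a loose sketch of that goal-driven pattern (not any specific product’s implementation; `ask_llm` below is a hypothetical stand-in for a call to a text-generation model):

```python
# Sketch of a goal-driven agent loop: decompose a goal into subtasks,
# then execute them in order, feeding results back into the context.
# ask_llm() is a hypothetical placeholder, not a real API.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a call to a language model")

def run_agent(goal: str) -> str:
    # 1. Ask the model to break the goal into ordered subtasks.
    plan = ask_llm(f"List, in order, the subtasks needed to: {goal}")
    subtasks = [line.strip("-. ") for line in plan.splitlines() if line.strip()]  # naive list parsing

    # 2. Work through the subtasks, accumulating results as context.
    context = ""
    for task in subtasks:
        result = ask_llm(f"Goal: {goal}\nDone so far: {context}\nNow do: {task}")
        context += f"\n{task}: {result}"

    # Reliable systems need a verification step here; as noted above,
    # today's agents frequently fail to reach the stated goal without one.
    return context
```

The loop itself is simple to write; the hard part, as the paragraph above notes, is reliability, since without verification the chain of subtasks tends to drift away from the goal.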

In a 2020 paper, Yoshihiro Maruyama of the Australian National University identified eight attributes a system must have for it to be considered AGI: logic, autonomy, resilience, integrity, morality, emotion, embodiment, and embeddedness. The last two attributes—embodiment and embeddedness—refer to having a physical form that facilitates learning and understanding of the world and human behavior, and a deep integration with social, cultural, and environmental systems that allows adaptation to human needs and values.

It can be argued that ChatGPT displays some of these attributes, like logic. For example, GPT-4 with no additional features reportedly scored 163 on the LSAT and 1410 on the SAT. For other attributes, the determination is tied as much to philosophy as to technology. For instance, is a system that merely exhibits what appears to be morality actually moral? If asked to provide a one-word answer to the question “is murder wrong?” GPT-4 will respond by saying “Yes.” This is a morally correct response, but it doesn’t mean that GPT-4 itself has morality; rather, it has inferred the morally correct answer from its training data.

A key subtlety that often goes missing in the “How close is AGI?” discussion is that intelligence exists on a continuum, and therefore assessing whether a system displays AGI requires considering a continuum as well. On this point, the research done on animal intelligence offers a useful analog. We understand that animal intelligence is far too complex for us to meaningfully convey animal cognitive capacity by classifying each species as either “intelligent” or “not intelligent”: animal intelligence exists on a spectrum that spans many dimensions, and evaluating it requires considering context. Similarly, as AI systems become more capable, assessing the degree to which they display generalized intelligence will involve more than simply choosing between “yes” and “no.”
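One way to picture a continuum-based assessment, offered here only as a toy sketch with invented dimensions and numbers: score capabilities along several axes and ask how evenly they spread, rather than applying a single label.

```python
# Sketch: capability as a multi-dimensional profile rather than a
# binary "intelligent / not intelligent" label. All values invented.

profiles = {
    "chess engine": {"reasoning": 0.9, "language": 0.0, "planning": 0.7, "adaptability": 0.1},
    "chatbot LLM":  {"reasoning": 0.6, "language": 0.9, "planning": 0.4, "adaptability": 0.3},
    "human adult":  {"reasoning": 0.8, "language": 0.9, "planning": 0.8, "adaptability": 0.9},
}

def generality(profile: dict) -> float:
    """Crude generality index: 1.0 means evenly spread capability, 0.0 means one-trick."""
    values = profile.values()
    return min(values) / max(values)

for name, profile in profiles.items():
    print(f"{name}: generality = {generality(profile):.2f}")
```

On this toy index the chess engine comes out fully narrow while the human profile approaches 1.0; a real assessment would need principled dimensions and measurements, which is exactly the open problem.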

AGI: Threat or opportunity?

Whenever and in whatever form it arrives, AGI will be transformative, impacting everything from the labor market to how we understand concepts like intelligence and creativity. As with so many other technologies, it also has the potential to be harnessed in harmful ways. For instance, the need to address the potential biases in today’s AI systems is well recognized, and that concern will apply to future AGI systems as well. At the same time, it is important to recognize that AGI also offers enormous promise to amplify human innovation and creativity. In medicine, for example, new drugs that would have eluded human scientists working alone could be more easily identified by scientists working with AGI systems.

AGI can also help broaden access to services that previously were accessible only to the most economically privileged. For instance, in the context of education, AGI systems could put personalized, one-on-one tutoring within easy financial reach of everyone, resulting in improved global literacy rates. AGI could also help broaden the reach of medical care by bringing sophisticated, individualized diagnostic care to much broader populations.

Regulating emergent AGI systems

At the May 2023 G7 summit in Japan, the leaders of the world’s seven largest democratic economies issued a communiqué that included an extended discussion of AI, writing that “international governance of new digital technologies has not necessarily kept pace.” Proposals regarding increased AI regulation are now a regular feature of policy discussions in the United States , the European Union , Japan , and elsewhere.

In the future, as AGI moves from science fiction to reality, it will supercharge the already-robust debate regarding AI regulation. But preemptive regulation is always a challenge, and this will be particularly so in relation to AGI—a technology that escapes easy definition, and that will evolve in ways that are impossible to predict.

An outright ban on AGI would be bad policy. For example, AGI systems that are capable of emotional recognition could be very beneficial in a context such as education, where they could discern whether a student appears to understand a new concept, and adjust an interaction accordingly. Yet the EU Parliament’s AI Act, which passed a major legislative milestone in June, would ban emotional recognition in AI systems (and therefore also in AGI systems) in certain contexts like education.

A better approach is to first gain a clear understanding of potential misuses of specific AGI systems once those systems exist and can be analyzed, and then to examine whether those misuses are addressed by existing, non-AI-specific regulatory frameworks (e.g., the prohibition against employment discrimination provided by Title VII of the Civil Rights Act of 1964). If that analysis identifies a gap, then it does indeed make sense to examine the potential role in filling that gap of “soft” law (voluntary frameworks) as well as formal laws and regulations. But regulating AGI based only on the fact that it will be highly capable would be a mistake.


Human and Artificial Intelligence: A Critical Comparison

Thomas Fuchs

Advances in artificial intelligence and robotics increasingly call into question the distinction between simulation and reality of the human person. On the one hand, they suggest a computeromorphic understanding of human intelligence, and on the other, an anthropomorphization of AI systems. In other words: We increasingly conceive of ourselves in the image of our machines, while conversely we elevate our machines to new subjects. So what distinguishes human intelligence from artificial intelligence? The essay sets out a number of criteria for this.

Abridged version of an essay in the volume: T. Fuchs (2020). Verteidigung des Menschen. Grundfragen einer verkörperten Anthropologie . Frankfurt/M.: Suhrkamp, pp. 21–70.



Fuchs, T. (2022). Human and Artificial Intelligence: A Critical Comparison. In: Holm-Hadulla, R.M., Funke, J., Wink, M. (eds) Intelligence - Theories and Applications. Springer, Cham. https://doi.org/10.1007/978-3-031-04198-3_14

Artificial Intelligence Essay

500+ Words Essay on Artificial Intelligence

Artificial intelligence (AI) has come into our daily lives through mobile devices and the Internet. Governments and businesses are increasingly making use of AI tools and techniques to solve business problems and improve many business processes, especially online ones. Such developments bring about new realities of social life that may not have been experienced before. This essay on Artificial Intelligence will help students understand the various advantages of using AI and how it has made our lives easier and simpler. At the end, it also describes the future scope of AI and the harmful effects of overusing it. To get a good command of essay writing, students must practise CBSE essays on different topics.

Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. It is concerned with getting computers to do tasks that would normally require human intelligence. AI systems are essentially software systems (or controllers for robots) that use techniques such as machine learning and deep learning to solve problems in particular domains without hard-coding all possibilities (i.e. algorithmic steps) in software. Because of this, AI has begun to show promising solutions for industry and business as well as our daily lives.
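To make that contrast concrete, here is a minimal sketch in Python of the difference between hard-coding and learning. The tiny message set and labels are invented for illustration, and scikit-learn is an assumed dependency; this is a sketch of the workflow, not a production spam filter.

```python
# A hedged sketch: learning a "spam" rule from labeled examples instead
# of hard-coding an if/else branch for every possible message.
# The toy data below is invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now", "limited offer, click here",
    "meeting moved to 3pm", "see you at lunch tomorrow",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

# The pipeline turns text into word counts, then lets the classifier
# infer which words signal spam from the data itself.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["free prize, click now"]))  # likely [1]
```

The point is the workflow rather than the model: no rule for "free prize" was ever written by hand; it was inferred from the examples.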

Importance and Advantages of Artificial Intelligence

Advances in computing and digital technologies have a direct influence on our lives, businesses and social life. They have shaped our daily routines, such as using mobile devices and engaging actively on social media. AI systems are among the most influential digital technologies. With AI systems, businesses can handle large data sets and quickly feed essential input into operations. Moreover, businesses can adapt to constant change and are becoming more flexible.

As Artificial Intelligence systems are introduced into devices, new business processes are becoming automated. A new paradigm emerges as a result of such intelligent automation, which now dictates not only how businesses operate but also who does the work. Many manufacturing sites can now operate fully automated, with robots and without any human workers. Artificial Intelligence now brings unheard-of and unexpected innovations to the business world, which many organizations will need to integrate to remain competitive and move ahead of their competitors.

Artificial Intelligence shapes our lives and social interactions through technological advancement. There are many AI applications developed specifically to provide better services to individuals, such as mobile phones, electronic gadgets and social media platforms. We delegate our activities to intelligent applications, such as personal assistants and intelligent wearable devices. AI systems that operate household appliances help us at home with cooking or cleaning.

Future Scope of Artificial Intelligence

In the future, intelligent machines will replace or enhance human capabilities in many areas. Artificial intelligence is becoming a popular field in computer science because it has enhanced human capabilities. Its applications are having a huge impact on many fields of life, solving complex problems in areas such as education, engineering, business, medicine and weather forecasting. The work of many labourers can be done by a single machine. But Artificial Intelligence has another aspect: it can be dangerous for us. If we become completely dependent on machines, it can ruin our lives; we will no longer be able to do any work ourselves and will grow lazy. Another disadvantage is that it cannot replicate human feeling. So machines should be used only where they are actually required.



Artificial Intelligence: History, Challenges, and Future Essay


In the editorial “A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence” by Michael Haenlein and Andreas Kaplan, the authors explore the history of artificial intelligence (AI), the current challenges firms face, and the future of AI. The authors classify AI into analytical, human-inspired, humanized AI, and artificial narrow, general, and superintelligent AI. They address the AI effect, which is the phenomenon in which observers disregard AI behavior by claiming that it does not represent true intelligence. The article also uses the analogy of the four seasons (spring, summer, fall, and winter) to describe the history of AI.

The article provides a useful overview of the history of AI and its current state. The authors offer a helpful framework for understanding AI by dividing it into categories based on the types of intelligence it exhibits or its evolutionary stage.

The central claim made by Michael Haenlein and Andreas Kaplan is that AI can be classified into different types based on the kinds of intelligence it exhibits or its evolutionary stage. The authors argue that AI has evolved significantly since its birth in the 1940s, but that the field has also seen ups and downs (Haenlein). The evidence used to support this claim is the historical overview of AI. The authors also discuss the challenges faced by firms today and the future of AI. They qualify their claims by acknowledging that only time will tell whether AI will reach Artificial General Intelligence, and that early systems, such as expert systems, had limitations. If one takes their claims to be true, it suggests that AI has the potential to transform various industries, but that there may also be ethical and social implications to consider. Overall, the argument is well supported with evidence, and the authors acknowledge the limitations of AI; the result is an informative overview of the history and potential of AI.

The article can be beneficial for research on the ethical and social implications of AI in society. It offers a historical overview of AI, which can help me understand how AI has evolved and what developments have occurred in the field. Additionally, the article highlights the potential of AI and the challenges that firms face today, which can help me understand the practical implications of AI. The authors also classify AI into distinct categories, which can help me understand the types of AI that exist and how they can be used in different contexts.

The article raises several questions that I would like to explore further, such as the impact of AI on the workforce and job displacement. It also provides a new framework for looking at AI, which can help me understand the potential of AI and its implications for society. I do not disagree with the authors’ ideas, and I do not see myself arguing against the ideas presented.

Personally, I find the topic of AI fascinating, and I believe that it has the potential to transform society in numerous ways. However, I also believe that we need to approach AI with caution and be mindful of its potential negative impacts. As the editorial suggests, we need to develop clear AI strategies and ensure that ethical considerations are taken into account. In this way, we can ensure that the benefits of AI are maximized while its negative impacts are minimized.

Haenlein, Michael, and Andreas Kaplan. “ A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence .” California Management Review , vol. 61, no. 4, 2019, pp. 5–14, Web.



Inspired, but not mimicking: a conversation between artificial intelligence and human intelligence


Weijie Zhao, Inspired, but not mimicking: a conversation between artificial intelligence and human intelligence, National Science Review , Volume 9, Issue 6, June 2022, nwac068, https://doi.org/10.1093/nsr/nwac068


How intelligent is artificial intelligence (AI)? How intelligent will it become in the future? What is the relationship between AI and human intelligence (HI)? These questions have been a hot topic of discussion in recent years, but no consensus has yet been reached. To discuss these issues, we should first understand the concept of intelligence as well as the underlying mechanisms for both HI and AI. In this NSR Forum, experts from both disciplines gathered to discuss these issues; in particular, the similarities and differences between AI and HI, how these two disciplines could benefit from each other, and the emerging social and ethical challenges of AI.

Xiaolan Fu, Professor at the Institute of Psychology, Chinese Academy of Sciences (CAS)

Yong Gu, Professor at the Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, CAS

Sheng He, Professor at the Institute of Biophysics, CAS

Zhuojun Liu, Professor at the Academy of Mathematics and Systems Science, CAS

Tieniu Tan, Professor at the Institute of Automation, CAS

Zhi-Hua Zhou, Professor at the National Key Laboratory for Novel Software Technology, Nanjing University

Huimin Lin (Chair), Professor at the Institute of Software, CAS

Lin: Welcome to this panel discussion. I am a computer scientist, but not specialised in AI. AI, especially deep learning, has achieved great success in the past decade and many people in and out of this field have begun to think about the relationship between AI and HI. Here, we invite experts in neuroscience, cognitive science, psychology and AI to discuss various relevant issues.

Lin: First, what's human intelligence?

He: The concept and measurement of HI have been controversial for a long time. Loosely speaking, intelligence is the ability to acquire and use knowledge, including recognition, problem solving and so on. Historically, some researchers conceptualized that there could be a single index to measure intelligence. For example, Charles Spearman proposed his Theory of Intelligence in 1904, with the key idea that a single general factor (the g factor) underlies one’s intelligence. Naturally, not everyone accepted this theory.

Other researchers tried to divide HI into different components. Robert Sternberg proposed the Triarchic Theory of Intelligence in the 1980s, suggesting that there are three types of intelligence: analytical intelligence, creative intelligence and practical intelligence. These abilities are related but also distinct. For instance, some artists can be extremely creative but lack the practical intelligence to deal with daily problems. This is likely related to the somewhat modular organization of the human brain, both structurally and functionally, which supports different cognitive abilities.

So I think we can have a general definition of intelligence, but also more specialized definitions when it comes to specific problem-solving issues.

Fu: I think intelligence should contain two key components: knowledge, and the ability to obtain knowledge and use it to solve problems. These two components are the key abilities needed by all types of cognitive tasks.

Gu: I think intelligence is not just having as much knowledge as possible; rather, it is learning general rules from knowledge and applying them to new tasks. In neuroscience, there is a good example: the ‘cognitive map’ proposed by Edward Tolman in 1948. The concept was first proposed based on observations of rats’ behavior as they wandered around a maze. During this spatial navigation task, rats first store a series of spatial and temporal events in egocentric coordinates to form ‘episodic memory’, which is then turned into more abstract ‘semantic memory’ in the form of a cognitive map. Based on this allocentric map, rats and other animals can use structured knowledge to navigate new environments, or plan new routes when certain paths in the maze are blocked.

Now we know that a cognitive map is not only a map for spatial navigation, but also a map for abstract navigation, for example through a social or value space. In a recent Cell article, scientists found that monkeys use the same brain areas, including the hippocampus, to navigate through a space, whether physical or abstract. These brain areas are responsible for abstracting general laws and forming real knowledge that can be transferred to solve different problems. That is how humans and other animals possess the ability of meta-learning, or learning to learn, which is really the key to intelligence, in particular the general intelligence that allows us to master multiple tasks.

Lin: Suppose we can create a machine to scan a person's brain and read the status and interconnections of all neurons. Can we then decode the knowledge he/she has learned, and the problems he/she could solve using that knowledge?

Gu: I think it's possible theoretically, but impossible practically. The human brain contains as many as 86 billion neurons, and each neuron can form as many as 1000 synapses with other neurons. So the possible structural combinations are nearly infinite. I do not think that we would be able to map all of that within 50 or 100 years.

He: It's impractical, but it may even be impossible theoretically. The human brain is extremely complex. Take a much simpler brain, such as the brain of a zebrafish; even if we could map all the neurons and synapses, based on our current understanding, we would still be unable to tell what knowledge or memory it contains.

Lin: Do computers have intelligence? What are the differences and similarities between computer intelligence and HI?

I would like to talk about this first. A computer is a machine created to perform computation. Since computation is an important aspect of human intelligence, computers do have intelligence. However, the intelligence of a computer is mechanical: computers can only do what humans instruct them to do. They are fast and accurate, but they do not have the abilities of creation and abstraction.

AI programs have been extremely successful. AlphaGo beat top human Go players Lee Sedol and Ke Jie, but it is only a program written by the DeepMind team; it is a fruit of human creativity. It has been proven impossible for computers to automatically generate programs from arbitrary requirements. Programs can only be designed and written by humans, because this requires creativity, which computers do not possess. Computers are becoming more and more powerful because we have created more and more programs for them to run, but computers cannot extend their capabilities by themselves.

Tan: Since the term was first coined at the 1956 Dartmouth Conference, AI has been developing for 65 years. It has been an extremely hot topic in recent years and has been listed as a national strategic priority by many countries. But we have to note that its recent success is mostly due to breakthroughs in practical applications, especially pattern recognition (such as face recognition), while on the theoretical side there have been no major breakthroughs for a long time. Some people, especially those who are not real AI experts, are somewhat overoptimistic about AI.

Whether computers or machines possess intelligence or not depends on how we define intelligence. We can say that an AI system has intelligence in terms of the functional aspect of its behaviors, but essentially the system does not really understand what it is doing and why, so from this point of view it seems that we cannot say it has intelligence.

Zhou: As an AI researcher coming from the computer science discipline, I would say that machines can undoubtedly exhibit intelligent behaviors, such as reasoning and Go playing. However, it is hard to judge whether they can really have the so-called ‘intelligence’. Indeed, we computer scientists do not care much about this, because our motivation, at least my own, is just to make intelligent tools that can help people. I often use the analogy that human beings saw birds flying in the sky and were inspired to build aircraft that help us fly, but the flying mechanisms of aircraft are very different from those of birds, and there is no need to demand that aircraft fly in the same way birds do.

Liu: In the 1992 textbook Artificial Intelligence , Patrick Winston of MIT gave a definition of AI: ‘Artificial Intelligence is the study of the computation that makes it possible to perceive, reason and act’. It means that AI is a technology that applies algorithms, models and other computational methods to realize certain human intelligent behaviors, including receiving external information (looking and listening), reasoning (thinking) and producing outputs (language and behavior). I think that is an appropriate understanding of AI: it is a tool inspired by the human brain and empowered by mathematical and computational methods that can realize multiple intelligent behaviors.


Lin: There is a famous statement in the first paragraph of A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence : ‘The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.’ But I think this conjecture is not true. For instance, a human’s creative intelligent behavior cannot ‘be so precisely described that a machine can be made to simulate it’, and the behaviors that can be ‘precisely described’ must be mechanical, which are exactly those that can be done by computers.

Fu: I agree that computers have intelligence, but that intelligence is not as comprehensive as HI. With regard to knowledge, the knowledge that has been stored and mastered by computers is still quite limited, and does not include knowledge of communication, sociality and emotion. With regard to the ability to obtain and use knowledge, there is an essential difference between HI and AI: humans and animals need this ability to stay alive, but computers do not have that desire; they just follow instructions.

Lin: Is it necessary for AI to have consciousness and emotion?

Zhou: Computer scientists’ aim in developing AI is not to create human-like organisms, but to make intelligent tools that obey and help humans. From this perspective, there seems to be no need to try to create intelligent organisms with self-consciousness, which might disobey or even take the place of human beings on this planet.

Tan: I agree. The aim of AI and other technologies is to enhance human capability and performance, not to completely replace humans. Many people are talking about strong AI or general-purpose AI. It's good to use one AI system for multiple tasks, but if general AI means an AI agent that can realize HI as a whole, I think it is neither possible nor necessary. We should not consider the development of such general AI systems as a major future research direction. In fact, it is sufficient and convenient to use specific-purpose AI algorithms to help us with different tasks.

Fu: But in some scenarios, if we hope the AI agent can integrate into human society and communicate and cooperate with human beings as a counterpart, it seems that they would need to have self-consciousness. They would need to recognize human emotion and give appropriate emotional responses. If they cannot do that, it would be impossible to achieve human–machine coexistence.

Zhou: There is a sub-direction of AI known as ‘affective computing’. Algorithms are being developed to make people feel that they are communicating with AI agents with emotions. However, these are just behaviors ‘exhibiting emotions’. It is hard to say that the agents really have the so-called ‘emotion’.

Lin: Technically, every activity a computer performs is implemented by electronic circuits. A computer can execute a human-written program to convince people interacting with it that it has ‘emotion’, but that does not mean the computer itself has emotion.

He: At this point, we do not yet have a good scientific definition for consciousness, so it would be difficult to discuss whether computers can have consciousness. I think we should first investigate the functions of human consciousness. Is consciousness a mere epiphenomenon of brain activity, or an important component of cognitive functions? In what way can consciousness benefit or enable cognitive functions?

If we can answer this question and list the functions, such as A, B and C, that require or depend on consciousness, but not function D, then we will have a better understanding of the role of consciousness in cognitive functions, so we can identify the benefits of consciousness (as seen in A, B and C) in AI.

Gu: That's right. Consciousness is not well defined. I think some better-defined parts of it are probably needed by AI. For example, the Theory of Mind says that in human society, people need to differentiate their own thoughts from the thoughts of others, and to understand those thoughts. Autonomous driving probably needs this ability too: an autonomous vehicle needs to recognize the intentions of other vehicles to make appropriate decisions.

Another example is that robotic arms and other intelligent robots may need self-awareness of their own limbs, which can help them adapt to new tasks quickly and accurately. A recent study by a Columbia University group involved the construction of a robotic arm with 100 joints. It was asked to perform random motion trajectories repeatedly. During this process, the robot gradually built up an internal model of its arm status, and was then able to perform the new task of grabbing balls and moving them to a desired cup with high accuracy, without any feedback. Interestingly, modifying the arm could cause the robot to relearn and build a new internal model. This is quite like how human babies develop self-awareness of their own arms and, when they get hurt, recover via plasticity.


Lin: The application of deep learning and pattern recognition has been a big success of AI. Machine-learning programs train classifiers with giant labeled data sets to perform recognition. Does the human brain use similar strategies to recognize objects?

He: Yes, there are some similarities between the structures of deep neural networks and the ventral occipitotemporal object-recognition pathway in the primate visual cortex. Both are hierarchical, multilevel structures. The primary visual cortex processes line segments with different orientations; at the next stage, neurons represent configurations of medium complexity; further upstream, the information is classified into categories such as cats and dogs, and in the case of human faces this may eventually support individual recognition of somebody we know.

But there are also many differences. For example, there is extensive feedback modulation in human visual information processing. The current deep networks are primarily based upon feedforward signals, although there are efforts to incorporate feedback processing into the network. Human brains are very good at using contextual information or prior knowledge. If we see a fuzzy triangular shape in the kitchen, we may recognize it as a kitchen knife, but if we see the same shape in the bathroom, we may recognize it as a hairdryer. If we can make better use of feedback or contextual modulation in pattern recognition, it may help to avoid recognition errors caused by over-emphasizing local information such as textures.

Moreover, there are major parallel pathways in the human brain where different information is processed somewhat independently, which may help to prevent issues such as catastrophic forgetting when a network is trained sequentially on different tasks.

Zhou: The multilayered structure seems to be a similarity between deep neural networks and the human neural system, although multilayered structures had been used in neural networks for a long time. Until around 2005, people did not know how to train a neural network with more than five layers, because of the gradient vanishing problem in deeper networks. The deep learning breakthrough started from a computational trick: train one layer at a time and then stack the layers together for global refinement.

Gu: Besides recognition, AI systems and the human brain share similarities in spatial navigation and autonomous driving. My lab studies multisensory integration, in particular how the brain integrates visual and vestibular information for effective self-motion perception during locomotion or spatial navigation. Similar ideas have been used in autonomous driving systems, which integrate information from multiple sources, including GPS, inertial measurement units, video cameras and radar, to drive in complex and dynamic scenarios.

Another example is the grid cells discovered in 2005 by the 2014 Nobel Prize winners May-Britt Moser and Edvard Moser in the entorhinal cortex of rodents. The discovery of grid cells led to a flurry of computational work aiming to understand their function in navigation. For example, in 2018, DeepMind published a paper in Nature showing that in a recurrent neural network trained with reinforcement learning on rats’ behavioral data, ‘grid-like cells’ appeared in the hidden layer. Interestingly, the appearance of these ‘AI grid cells’ significantly improved the overall performance of the network, which exhibited animal- and human-like intelligence when navigating new environments, or environments with changed contexts such as blocked paths. Thus, the brain’s hippocampal-entorhinal system provides a very useful guide for AI in spatial navigation tasks.

Lin: The artificial neural network was proposed in the early 1940s, and tremendous progress has been made in neuroscience since then. Now, to what extent is an AI system similar to a human nervous system?

He: I think they are more different than similar.


Gu: Yes. Compared with biological nervous systems, artificial neural networks are still over-simplified models. In a human nervous system, in addition to the typical bottom-up projections, neurons within a layer form many lateral connections, and there are many top-down feedback projections. The brain also contains many different types of neurons: excitatory neurons, inhibitory neurons and their subgroups, with different shapes and projection targets, indicating heterogeneous functions. There are also many different types of neurotransmitters, such as dopamine, serotonin and norepinephrine, that affect the brain's state and information processing. Many of these traits have not been implemented in AI systems.

It is said that only 10% of human brain potential has been exploited, and I think only 10% of the human brain, or even less, has been simulated by AI. So I am very optimistic about AI, in the sense that it still has a huge space to develop.

Lin: Have AI researchers tried to mimic and make use of these complex structures and functions of the human brain?

Zhou: AI researchers are eager to draw inspiration from neuroscience, and there have been success stories. For example, the MP model proposed in 1943, inspired by neuronal mechanisms, is still the basis of almost all current deep neural networks. In most cases, however, direct mimicking rarely succeeds. For example, there have been efforts to mimic the pulse-spiking mechanism of neurons, in which not only the potential but also the peak time is considered. Though this offers advantages in terms of biological plausibility, after half a century of exploration the algorithmic success has been very limited, and more exploration is needed.

Generally, it is often very difficult to transform neuroscience findings directly into effective computer algorithms, and the return is often marginal. It seems that the most important thing AI researchers can get from neuroscience is directional inspiration, motivating us about what things can be tried. As for how to tackle them, we usually need to resort to mathematics and engineering approaches, rather than trying to directly mimic the human brain.


Lin: We talked about brain science as being an inspiration for AI. What about the other direction? Can AI help with brain science?

Gu: AI can help brain science research in at least two ways. First, AI offers a great tool. For example, it can help with the image recognition work involved in constructing mesoscopic neural connection maps with high efficiency and speed. AI also helps doctors analyze medical scan images.

Second, I think AI can in turn help us understand the human brain. I believe that HI is only one of many possible forms of intelligence, generated in specific environments after a long period of evolution. By analyzing AI systems and comparing them with the human brain, we have a chance to see possible mechanisms for forming intelligence that differ from the human brain's. We have seen some hints in AlphaGo Zero, the second version of AlphaGo. With reinforcement learning, AlphaGo Zero did not need any training data from humans. Instead, it learned by playing Go against itself. After only a few days, it not only defeated top human Go players but also created moves that had never been thought of by humans. Because of this, AI is now used to train human professionals, improving their playing ability much faster than a human coach could. I think we should appreciate this difference in strategy between AI and HI. If we can further analyze these differences, it will also help us better understand the functions of the human brain and why it works the way it does.

Lin: I have a few more comments about AlphaGo Zero. A Go board has 361 grid points, so there are as many as 3^361 possible configurations, which cannot be exhausted by the world's fastest computer in a reasonable time. But to determine the winner, it needs only to count the 361 points, which can be accomplished by a personal computer in no time. So it is easy for AlphaGo Zero to play against itself, record the strategies of both sides and, at the end of a game, keep the winner's strategies and discard the loser's. But this approach is applicable to very few problems.

He: Besides, we can also use AI as a ‘sand table’ for modeling the nervous system. It is more flexible and costs less than using animal models. By testing its process and observing different outputs under different inputs, we could gain some insight into neuroscience questions.

Lin: What are the possible social and ethical problems that may be caused by AI?

Tan: This has been a long-discussed issue. I think one of the most urgent problems is deepfakes, or automatic content generation. AI is already able to generate text, voice, images, video and other content with an incredibly high level of realism. If we cannot strictly control its usage, it will present real risks to public security and national security. There are also other social problems that may be caused by the use of AI, such as privacy issues, employment issues and issues of equality: how can different nations, organizations and individuals get equal access to AI technologies and avoid an intelligence divide?

Researchers are also trying to keep AI controllable and avoid these problems. Some are developing algorithms to identify deepfake content. There are also technologies that can encode and transform collected private data, such as biometric data, before its use, so as to avoid privacy disclosure and personal information leakage.

We should take both technical and managerial measures to deal with these challenges. The good news is that China and many other countries have already started to address these issues and are beginning to set up supervisory measures. Challenges are inevitable, and the most important thing is to formulate necessary laws and regulations in time.

Actually, as with AI, the development of other technologies brings similar challenges. Whether AI is a devil or an angel depends on whether the person using it is a devil or an angel. So we researchers should be responsible; that is essential to guaranteeing that the tools we make are controllable and beneficial.

Liu: It is important to keep AI controllable. We cannot authorize algorithms to make fundamental decisions, such as whether or not to launch a guided missile. I have talked with some companies and helped them create standards with regard to AI. I think it is important to have companies involved in the process of policy making, so that we can better supervise them while promoting the healthy development of the AI industry.

Fu: Many end users worry a lot about privacy risks; some are even afraid of the possibility that stronger machines will harm people.

Zhou: As I have mentioned, most AI researchers just want to make intelligent tools that can help human beings, rather than man-made intelligent organisms that come with big risks or may even take the place of human beings on the planet. This, however, is not clear to the public, and so people can feel threatened. We should make more communication efforts to help the public gain a better understanding of AI research.

Gu: AI chatbots are developing fast, but an emerging problem is that they may learn ‘bad words’ or improper world views from overly large data sets that lack supervision. As a result, they may talk in a violent or racially discriminatory way. It is a big challenge to keep these chatbots controllable.

Zhou: This is a challenge for current technology. Chatbots generally learn from extremely large corpora, and it is hard to find enough human resources to screen and clean those corpora in advance.

Tan: It should be AI researchers’ responsibility to cope with this issue. We should develop more efficient methods for selecting and building a corpus.

He: There are issues that may not pose an immediate danger but should also be considered. For example, should we apply AI to every task that it is capable of? In some industries, the application of AI may cause mass unemployment. Should we slow down AI application in these fields? Moreover, AI seems to be getting better and better at creating literature and art. Algorithms can paint pictures, write poems and compose music. Should we encourage AI in these activities? Or should we leave those creative works for ourselves, so that we can enjoy the fun of creation? These are questions that we should consider.

Lin: Thanks for the discussion today on HI and AI, their relationship, and the social and ethical challenges regarding AI. Such interdisciplinary communication is thought-provoking and can expand our vision.



What Is Artificial Intelligence (AI)? Definition, Uses, and More


Once reserved for the realms of science fiction, artificial intelligence (AI) is now a very real, emerging technology, with a vast array of applications and benefits. From generating vast quantities of content in mere seconds to answering queries, analyzing data, automating tasks, and providing personal assistance, there’s so much it’s capable of.

Unsurprisingly, with such versatility, AI technology is swiftly becoming part of many businesses and industries, playing an increasingly large part in the processes that shape our world. But what is artificial intelligence, exactly? How does it work and what are some of the types, pros, and cons of AI systems? Let’s find out.

What Exactly Is Artificial Intelligence (AI)?

AI is quite a broad umbrella term with multiple interpretations. At a fundamental level, however, it can be defined as the representation of human intelligence through the medium of machines.

In other words, AI is about enabling devices like computers to carry out tasks and processes that would usually demand human-level intelligence, such as reading, understanding and responding to language, analyzing data, and solving problems, all while learning and improving as they go.

That’s the key part – “learning and improving.” For years, we’ve been able to code or program machines to carry out tasks, but with AI, there’s no need to do so. AI-powered systems can learn as they go, gaining knowledge from the data they work with to become more efficient and intelligent.

A Brief History of AI So Far

As a concept, the history of AI goes back many decades. It’s been talked about and dreamed of throughout history, with famed computer scientist Alan Turing developing what became known as the “ Turing test ” and exploring the possibilities of AI in solving problems and making decisions back in 1950.

Turing’s work, especially his paper, “Computing Machinery and Intelligence,” effectively demonstrated that some sort of machine or artificial intelligence was a plausible reality. In the years that followed, many more researchers and scientists built on his discoveries.

They focused primarily on the science of “machine learning.” This is the process of effectively teaching machines to learn new skills from data without the need for specific programming, aiming to recreate aspects of the human brain in machine form.

Yet, for all the research efforts, the idea of functional AI remained something of a pipe dream for several decades. It was written about in sci-fi books or imagined in movies and TV without any tangible impacts on real life.

However, after so many years and efforts, that dream has become a reality. The release of popular generative AI tools like OpenAI’s ChatGPT and other AI solutions has ushered in a modern age of AI, and this tech is now evolving at remarkable speed, with new uses discovered daily.

How Does AI Work?

Now that you have an answer to the question of what artificial intelligence is, you may be eager to learn more about how it works.


At its core, AI’s functionality revolves around algorithms and data. Huge amounts of data have to first be collected and then applied to algorithms (mathematical models), which analyze that data, noting patterns and trends. This is the “training” process for AI, effectively teaching AI models and systems to carry out certain tasks.
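As a rough illustration of that training loop, here is a minimal, self-contained Python sketch that fits a one-parameter model y = w * x to a handful of invented data points by gradient descent. The data, learning rate and step count are all assumptions chosen to keep the example tiny, not a recipe for real systems.

```python
# A minimal sketch of "training": repeatedly adjust the model parameter
# w so that predictions w * x move closer to the observed values y.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # invented data, roughly y = 2x plus noise

w = 0.0    # initial guess for the model parameter
lr = 0.01  # learning rate: the size of each correction step

for _ in range(1000):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step in the direction that reduces the error

print(round(w, 2))  # ends up near 2.0: the pattern found in the data
```

The loop is the whole story in miniature: data goes in, the algorithm notes the pattern (here, "y is about twice x"), and the resulting parameter is the trained model.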

To dig deeper, let’s explore the key components and technologies behind AI systems:

Machine Learning

As discussed previously, machine learning is essentially the process used to create AI. It’s the ability to combine data with algorithms to “teach” artificial intelligence, helping it get progressively smarter, more precise, and more efficient based on usage and historical data.
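Below is a hedged sketch of that workflow in Python, using scikit-learn's bundled iris dataset as stand-in "historical data"; the library, dataset and decision-tree algorithm are illustrative choices, not a prescribed stack.

```python
# Machine learning in miniature: combine data with an algorithm, then
# measure how well the learned rules generalize to unseen examples.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Hold out a quarter of the data so the evaluation uses examples the
# model has never seen during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)  # "teach" the model from labeled examples

print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```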

Deep Learning

Deep learning is a type of machine learning. It uses additional layers of algorithms to allow machines to learn at an even deeper level, recognizing more complicated patterns and understanding more complex processes, like image recognition. It’s a huge driving force in the evolution of AI.

Neural Networks

Machine learning and deep learning are done via neural networks. These involve multiple algorithms and consist of layers of interconnected nodes that imitate the neurons of the brain. Each node can receive and transmit data to those around it, giving AI new and ever-enhancing abilities.
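To show what "layers of interconnected nodes" means in practice, here is a toy forward pass through a two-layer network in NumPy. The layer sizes and random weights are arbitrary assumptions made for the demonstration; a real network would learn its weights from data.

```python
# Data flowing through a tiny two-layer neural network: each layer's
# nodes weight the incoming values and transmit the result onward.
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=4)        # input: 4 features
W1 = rng.normal(size=(4, 8))  # layer 1: 4 inputs -> 8 hidden nodes
W2 = rng.normal(size=(8, 3))  # layer 2: 8 hidden nodes -> 3 outputs

hidden = np.maximum(0, x @ W1)  # ReLU: a node "fires" only if its sum > 0
output = hidden @ W2            # output layer combines the hidden signals

print(output.shape)  # (3,) -- one score per output node
```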

Natural Language Processing (NLP)

NLP is the process of teaching computers to understand language at the human level so that they can, for example, answer questions or conduct conversations in real time. It involves a mix of computational linguistics, machine learning, and deep neural networks.
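As a small taste of what sits underneath NLP, the sketch below shows one of its earliest steps: converting raw text into numbers a model can work with, here via a naive bag-of-words count. The sentence is invented, and real systems use far richer representations than this.

```python
# Turning text into numbers: a naive bag-of-words representation.
from collections import Counter

sentence = "the cat sat on the mat"
tokens = sentence.split()    # crude whitespace tokenization
vocab = sorted(set(tokens))  # ['cat', 'mat', 'on', 'sat', 'the']

counts = Counter(tokens)
vector = [counts[word] for word in vocab]

print(vocab)
print(vector)  # [1, 1, 1, 1, 2] -- 'the' occurs twice
```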

Computer Vision

Finally, computer vision is the concept of enabling machines to “see” or scan images and other forms of visual media, extracting data and insights. Computer vision has numerous applications, like facial recognition, image interpretation, and even self-driving cars.
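Here is a minimal sketch of the computer-vision idea, assuming an invented 4x4 grayscale "image": convolving it with a simple difference filter makes the vertical edge, where brightness jumps, stand out. Real vision systems learn thousands of such filters from data rather than hand-picking one.

```python
# "Seeing" an edge: slide a difference filter across a tiny image and
# note where the response is large (i.e. where brightness changes).
import numpy as np

image = np.array([
    [0.0, 0.0, 9.0, 9.0],
    [0.0, 0.0, 9.0, 9.0],
    [0.0, 0.0, 9.0, 9.0],
    [0.0, 0.0, 9.0, 9.0],
])

kernel = np.array([-1.0, 1.0])  # responds where brightness jumps

edges = np.zeros((4, 3))
for i in range(4):
    for j in range(3):
        edges[i, j] = np.sum(image[i, j:j + 2] * kernel)

print(edges)  # the column where 0 -> 9 lights up with value 9.0
```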

Types of Artificial Intelligence

We can classify or divide AI into various types and categories.

For example, there’s the division of strong AI vs. weak AI, where strong AI refers to AI systems that are able to comprehend a range of concepts, acquire varied knowledge, and apply it in numerous ways. This, in many ways, is the ultimate aim and form of AI – for now, though, it’s only a fantasy. Weak AI, meanwhile, is AI trained for specific functions like content generators and language models.

We can also classify AI into four different types according to its level of intelligence:

  • Reactive: A reactive machine is the most primitive form of AI. As the name suggests, it reacts to the provided data, which means it can only perform roles of limited scope. It may also be referred to as “narrow AI.”
  • Limited Memory: Limited memory machines are those with a certain understanding of historical events and data. They can use that “experience” to build their knowledge, but cannot fully comprehend the world or broaden their reach beyond their primary functions.
  • Theory of Mind: This is the next stage of AI evolution, in which AI can essentially think beyond its main functions. It could broaden its understanding into numerous fields, moving towards a form of artificial general intelligence (AGI).
  • Self-Aware: The final form and pinnacle of AI as we know it would be self-aware machines that understand the world, themselves, and more. While such machines are far from reality, once developed, they could change the world by being able to process large amounts of data and understand almost any concept imaginable.

Examples of AI

Artificial intelligence is an immensely powerful and versatile form of technology with far-reaching applications and impacts on both personal and professional lives. Here are just some of the ways in which it’s being used.

Generative AI

One of the most well-known examples of AI in action is in the form of generative models. These tools generate content according to user prompts, like writing essays in an instant, creating images according to user needs, responding to queries, or coming up with ideas. Such technology is proving invaluable in fields such as marketing, product design, and education, among others.


Smart Assistants

AI can also be implemented in the form of smart assistants like Siri and Alexa. Powered by AI technology, these virtual companions can do so much, from answering queries to sending messages, playing music, checking the weather, or carrying out various tedious tasks, freeing workers to focus on more important matters.

Image Recognition

Through the power of computer vision, AI can interpret pictures and videos, extracting data from the very pixels themselves. Again, this has numerous possible applications. Those in interior design, for example, can turn to AI for guidance on how to decorate a space. Such tech also has applications in healthcare and the automotive industry.

Translation

Through natural language processing, AI can be used to not only hear and understand speech but also to transcribe and translate it into other languages. In effect, an AI model or assistant could serve as a reliable interpreter, facilitating discussion and collaboration between people with different native languages.

Stats and Analytics

AI is also strikingly effective at dealing with large amounts of data. It can take huge data sets or massive amounts of statistics, then clean, organize, and analyze them in seconds to extract valuable, actionable insights. This process can help businesses arrive at smarter decisions regarding their future, making it that much easier to not merely survive, but prosper in any industry.

Pros and Cons of AI

Like any technology, AI brings both pros and cons to the table.

  • Versatility: As shown by the examples above, AI is a versatile force applicable to many industries. It has so many uses, with so many more yet to be uncovered.
  • Speed: AI can carry out tasks dramatically faster than humans ever could. It can analyze vast datasets in seconds, for example.
  • Accuracy: When trained well and implemented correctly, AI rarely makes mistakes. It’s reliable and accurate, reducing the risk of human error.
  • Potential: Arguably the biggest benefit of AI is its long-term potential. There’s so much room for it to grow and transform the world for the better in fields like healthcare and education.
  • Misuse: AI’s remarkable functions and features can be misused. For example, content generators can be used to spread misinformation.
  • Job Loss: There’s a general fear in many industries that AI could eventually replace many human jobs, as it can carry out human duties much faster and more efficiently.
  • Uncertainty: Despite best guesses and predictions regarding the future of AI, we can’t say for sure how it might impact the world, and there’s much uncertainty about the risks it brings.

What Does the Future Look Like for AI?

The future looks bright for AI. This technology is still in its infancy, and it’s already having a massive impact on the world. As it becomes better and more intelligent, new uses will inevitably be discovered, and the part that AI has to play in society will only grow bigger.

Of course, we can’t predict the future with absolute certainty, but it seems a good bet that its development will change the global job market in more ways than one. There’s already an increasing demand for AI experts, with many new AI-related roles emerging in fields like tech and finance.

At a broader, society-wide level, we can expect AI to shape the future of human interactions, creativity, and capabilities. It’ll undoubtedly have numerous transformative impacts, both good and bad, perhaps solving problems that have plagued mankind for centuries, while also presenting new challenges for us to overcome.

A Look at AI in the Workforce

As discussed earlier, AI is already a prevalent force in the working world, with AI-powered tools embraced across numerous industries, including:

  • Healthcare: AI-powered solutions are helping researchers discover new treatment methods. In the future, AI robotics could even carry out complex surgeries.
  • Finance: AI helps financial experts with forecasting, modeling, budgeting, fraud detection, and data analysis.
  • Education: In education, AI helps teachers and academic professionals unlock new learning opportunities, create lesson plans, and devise more effective teaching methods.
  • Marketing: In marketing, AI can do everything from in-depth audience analysis to optimizing content for search engines.
  • Entertainment: AI models can also be used to create or fine-tune entertainment content like TV scripts, on-screen visual effects, video game character profiles, and beyond.


How to Elevate Your Business AI Understanding and Use

AI is arguably the most influential technological force in the business world today and is likely to remain so for years to come. Now is the ideal time to learn more about AI and gain the skills and knowledge necessary to implement it effectively in a business context.

Unlock the Power of AI in Business with a Graduate Certificate

The University of Cincinnati’s Carl H. Lindner College of Business offers an online Artificial Intelligence in Business Graduate Certificate designed for business professionals seeking to enhance their knowledge and skills in AI. This program provides essential tools for leveraging AI to increase productivity and develop AI-driven solutions for complex business challenges.

To deepen your understanding of artificial intelligence in the business world, contact a UC Online Enrollment Services Advisor to learn more or get started today.

Contact UC Online to Learn More or Get Started

Now that you understand “What is artificial intelligence?”, UC Online is ready to help you learn more. Our graduate certificate in AI opens countless doors to business opportunities. Contact us today to learn more about our AI in Business Graduate Certificate.

Frequently Asked Questions (FAQs)

What is artificial intelligence in simple words?

Artificial intelligence is a field of technology that focuses on helping machines think and react like people.

What are examples of artificial intelligence?

Examples of artificial intelligence include generative AI tools like ChatGPT, smart assistants such as Siri and Alexa, and image recognition systems used in facial recognition technology. AI is also present in machine learning algorithms for data analysis, natural language processing, and self-driving cars.

Where is AI used in everyday life?

AI is integrated into everyday life through smart assistants that manage tasks, recommendation systems on streaming platforms, and navigation apps that optimize routes. It is also utilized in personalized shopping experiences, automated customer service, and social media algorithms that curate content.

Why do we need AI?

AI offers numerous benefits for the future in fields like healthcare, education, and scientific research. It will help save time, money, and resources and could create helpful innovations and solutions.


A world-first law in Europe is targeting artificial intelligence. Other countries can learn from it

Rita Matulionyte, Associate Professor in Law, Macquarie University

Disclosure statement

Rita Matulionyte does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Macquarie University provides funding as a member of The Conversation AU.


Around the world, governments are grappling with how best to manage the increasingly unruly beast that is artificial intelligence (AI).

This fast-growing technology promises to boost national economies and make completing menial tasks easier. But it also poses serious risks, such as AI-enabled crime and fraud, increased spread of misinformation and disinformation, increased public surveillance and further discrimination against already disadvantaged groups.

The European Union has taken a world-leading role in addressing these risks. In recent weeks, its Artificial Intelligence Act came into force.

This is the first law internationally designed to comprehensively manage AI risks – and Australia and other countries can learn much from it as they too try to ensure AI is safe and beneficial for everyone.

AI: a double-edged sword

AI is already widespread in human society. It is the basis of the algorithms that recommend music, films and television shows on applications such as Spotify or Netflix. It is in cameras that identify people in airports and shopping malls. And it is increasingly used in hiring, education and healthcare services.

But AI is also being used for more troubling purposes. It can create deepfake images and videos, facilitate online scams, fuel massive surveillance and violate our privacy and human rights.

For example, in November 2021 the Australian Information and Privacy Commissioner, Angelene Falk, ruled that a facial recognition tool, Clearview AI, had breached privacy laws by scraping people’s photographs from social media sites for training purposes. However, a Crikey investigation earlier this year found the company is still collecting photos of Australians for its AI database.

Cases such as this underscore the urgent need for better regulation of AI technologies. Indeed, AI developers have even called for laws to help manage AI risks.


The EU Artificial Intelligence Act

The European Union’s new AI law came into force on August 1.

Crucially, it sets requirements for different AI systems based on the level of risk they pose. The more risk an AI system poses for health, safety or human rights of people, the stronger requirements it has to meet.

The act contains a list of prohibited high-risk systems. This list includes AI systems that use subliminal techniques to manipulate individual decisions. It also includes unrestricted, real-time facial recognition systems used by law enforcement authorities, similar to those currently used in China.

Other AI systems, such as those used by government authorities or in education and healthcare, are also considered high risk. Although these aren’t prohibited, they must comply with many requirements.

For example, these systems must have their own risk management plan, be trained on quality data, meet accuracy, robustness and cybersecurity requirements and ensure a certain level of human oversight.

Lower-risk AI systems, such as various chatbots, need to comply with only certain transparency requirements. For example, individuals must be told they are interacting with an AI bot and not an actual person. AI-generated images and text also need to include an explanation that they were generated by AI, not by a human.
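For readers who think in data structures, the tiered logic described above can be sketched as a simple mapping. This is purely illustrative: the tier names and obligations are simplified paraphrases of the summary above, not legal text.

```python
# Illustrative only: a simplified model of the EU AI Act's risk tiers
# as summarized above. Obligations are paraphrased, not legal text.
RISK_TIERS = {
    "prohibited": [
        "subliminal techniques that manipulate individual decisions",
        "unrestricted real-time facial recognition by law enforcement",
    ],
    "high_risk": [
        "maintain a risk management plan",
        "train on quality data",
        "meet accuracy, robustness and cybersecurity requirements",
        "ensure a level of human oversight",
    ],
    "limited_risk": [
        "tell individuals they are interacting with an AI bot",
        "label AI-generated images and text as such",
    ],
}

def obligations_for(tier: str) -> list[str]:
    """Return the (simplified) obligations attached to a risk tier."""
    return RISK_TIERS.get(tier, [])

print(obligations_for("high_risk"))
```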

Designated EU and national authorities will monitor whether AI systems used in the EU market comply with these requirements and will issue fines for non-compliance.

Other countries are following suit

The EU is not alone in taking action to tame the AI revolution.

Earlier this year the Council of Europe, an international human rights organisation with 46 member states, adopted the first international treaty requiring AI to respect human rights, democracy and the rule of law.

Canada is also discussing the AI and Data Bill. Like the EU law, this will set rules for various AI systems, depending on their risks.

Instead of a single law, the US government recently proposed a number of different laws addressing different AI systems in various sectors.

Australia can learn – and lead

In Australia, people are deeply concerned about AI, and steps are being taken to put necessary guardrails on the new technology.

Last year, the federal government ran a public consultation on safe and responsible AI in Australia. It then established an AI expert group, which is currently working on the first proposed legislation on AI.

The government also plans to reform laws to address AI challenges in healthcare, consumer protection and creative industries.

The risk-based approach to AI regulation, used by the EU and other countries, is a good start when thinking about how to regulate diverse AI technologies.

However, a single law on AI will never be able to address the complexities of the technology in specific industries. For example, AI use in healthcare will raise complex ethical and legal issues that will need to be addressed in specialised healthcare laws. A generic AI Act will not suffice.

Regulating diverse AI applications in various sectors is not an easy task, and there is still a long way to go before all countries have comprehensive and enforceable laws in place. Policymakers will have to join forces with industry and communities around Australia to ensure AI brings the promised benefits to Australian society – without the harms.


HADRON: Human-Friendly Control and Artificial Intelligence for Military Drone Operations

13 Aug 2024 · Ana M. Casado Faulí, Mario Malizia, Ken Hasselmann, Emile Le Flécher, Geert De Cubber, Ben Lauwens

As drones are getting more and more entangled in our society, more untrained users require the capability to operate them. This scenario is to be achieved through the development of artificial intelligence capabilities assisting the human operator in controlling the Unmanned Aerial System (UAS) and processing the sensor data, thereby alleviating the need for extensive operator training. This paper presents the HADRON project that seeks to develop and test multiple novel technologies to enable human-friendly control of drone swarms. This project is divided into three main parts. The first part consists of the integration of different technologies for the intuitive control of drones, focusing on novice or inexperienced pilots and operators. The second part focuses on the development of a multi-drone system that will be controlled from a command and control station, in which an expert pilot can supervise the operations of the multiple drones. The third part of the project will focus on reducing the cognitive load on human operators, whether they are novice or expert pilots. For this, we will develop AI tools that will assist drone operators with semi-automated real-time data processing.


Real Artificial Intelligence Can’t Exist Without Human Collaboration: Here’s Why

Forbes Technology Council


David Joosten is President and CEO of Vodafone US Inc. , leading Vodafone Business commercial operations throughout North America.

These are exciting times: AI has moved from a hot topic on TED Talks to making a real-world impact in a matter of months. We’ve witnessed the latest generation of models emerge—generative tools, advanced computer vision and language models, with rapidly evolving multimodal systems close behind them.

When ChatGPT first appeared on the scene in 2022, however, despite its instant popularity, using it was a pretty lonely experience. It would only remember things in the context of the data you were feeding it and didn’t have access to the repository of information it does today. But OpenAI moved quickly to create enterprise-friendly instances that are secure for teams to use, and those teams only have to feed it data once for all members to benefit. ChatGPT, like so many other generative AI platforms, has now become a drastically improved collaboration tool.

As a result, we’re starting to find meaningful real-world uses. AI is now being used to analyze medical imagery like X-rays and CT scans and to put together personalized treatment plans for patients. In finance, AI is increasingly used for fraud detection and algorithmic trading and in retail, for personalized shopping experiences and inventory management. A few years ago, these use cases were pipe dreams. Today, they’re very real.

These accomplishments represent only a few steps in the grander scheme of things. Yes, we’ve gone from large language models with 117 million parameters to 1.76 trillion in just five years. However, the vision for what lies ahead is even more remarkable: A world where humans and AI come together to break boundaries. To fully unlock AI’s potential—the very reason telecommunications companies and so many others invest so heavily in this technology—we must recognize that the next steps can't be taken alone. Collaboration between the people using these AI systems is key.

How To Collaborate On AI

AI thrives at the intersection of disciplines, where departments like HR and legal or marketing and finance share insights, knowledge and expertise to make the most of the tech. No matter how specialized a company is within its field, the intricate nature of any AI project demands that the application surpass the mere sum of its individual components.

The key challenge to getting AI to where it needs to be is ensuring that enough people from the broadest possible spectrum are involved in its development. The solution is, as always, collaboration.

At Vodafone, we achieve this by opening the floor. The conversation about AI is for everyone, and we see it as a leveler. It makes more things possible for more people—colleagues who can’t code now can, and colleagues who aren’t writers by trade now can be. Our most junior colleagues speak about it like our executives do. For AI to grow, we must let it break down every progress-stunting barrier it can, thereby leveling the playing field.

Joint Efforts For Game-Changing Results

Navigating the complexities of an AI endeavor requires a diverse array of perspectives, voices and expertise. There’s a lot to consider, and you’ll need a lot of information that’s hard to come by internally. You simply don’t know what you don’t know, and you won’t without working with others.

The pros in the space are acutely aware of this. In recent developments, IBM announced a groundbreaking partnership with SAP, integrating WatsonX, an IBM generative AI platform, into SAP’s service cloud homepage, SAP Start. This strategic collaboration aims to empower users with WatsonX's natural language capabilities while utilizing SAP's suite of cloud applications. Similarly, Microsoft has forged and continues to cultivate partnerships with the renowned OpenAI, working together to develop cutting-edge AI solutions on a large scale.

It’s clear that collaboration is the key to unlocking the next generation of AI. We're already seeing instances of truly meaningful deployments made in partnership toward common goals. It makes undeniable sense.

The Role Of International Collaboration

International collaboration facilitates access to specialized datasets, enriches training data, addresses cultural and language gaps and helps ensure alignment with industry standards. In some instances, regulatory compliance mandates reliance on specific datasets, underscoring the indispensable role of collaboration in this realm.

However, developing a world-class product is one challenge; successfully bringing it to market is another. International collaboration helps here too, thanks to the scale of exposure it brings. Access to each other’s customer base and audiences can significantly influence adoption rates. This exchange is mutually beneficial, as AI tools are iterative by nature. With more users comes more input data, resulting in more comprehensive testing and feedback.

International collaboration harmonizes standards, helps developers address global issues like ethical AI use and fosters innovation. Several things must happen for this to occur. A universally accepted set of ethical principles for AI development must be established, and data governance must be standardized. Collaborative research and development must be promoted and encouraged, too. The Global Partnership on Artificial Intelligence (GPAI) is a great example. Founded in 2020 and boasting over 15 members, including the EU, the U.S. and India, the GPAI is a truly global initiative to bridge the gap between theory and practice on AI policy.

Governments are vital in establishing regulatory frameworks and promoting research and development, while tech companies need to develop and deploy AI systems responsibly. Academia and research bodies contribute too by conducting research on the social and ethical implications of AI and helping develop best practices. At the same time, civil society organizations must advocate for public interest and raise awareness.

The European Centre for Algorithmic Transparency, the Montreal Declaration for a Responsible Development of Artificial Intelligence and the EU Artificial Intelligence Act are examples of AI collaboration at its best. They work because they keep transparency, open dialogue, public engagement and continuous evaluation at the fore. For AI to be what we need it to be, we must all collaborate to achieve the same harmony.


MIT Technology Review


Here’s how people are actually using AI

Something peculiar and slightly unexpected has happened: people have started forming relationships with AI systems.

Melissa Heikkilä

This story is from The Algorithm, our weekly newsletter on AI. To get it in your inbox first, sign up here.

When the generative AI boom started with ChatGPT in late 2022, we were sold a vision of superintelligent AI tools that know everything, can replace the boring bits of work, and supercharge productivity and economic gains. 

Two years on, most of those productivity gains haven’t materialized. And we’ve seen something peculiar and slightly unexpected happen: People have started forming relationships with AI systems. We talk to them, say please and thank you, and have started to invite AIs into our lives as friends, lovers, mentors, therapists, and teachers. 

We’re seeing a giant, real-world experiment unfold, and it’s still uncertain what impact these AI companions will have either on us individually or on society as a whole, argue Robert Mahari, a joint JD-PhD candidate at the MIT Media Lab and Harvard Law School, and Pat Pataranutaporn, a researcher at the MIT Media Lab. They say we need to prepare for “addictive intelligence,” or AI companions that have dark patterns built into them to get us hooked. You can read their piece here. They look at how smart regulation can help us prevent some of the risks associated with AI chatbots that get deep inside our heads.

The idea that we’ll form bonds with AI companions is no longer just hypothetical. Chatbots with even more emotive voices, such as OpenAI’s GPT-4o, are likely to reel us in even deeper. During safety testing, OpenAI observed that users would use language that indicated they had formed connections with AI models, such as “This is our last day together.” The company itself admits that emotional reliance is one risk that might be heightened by its new voice-enabled chatbot.

There’s already evidence that we’re connecting on a deeper level with AI even when it’s confined to text exchanges. Mahari was part of a group of researchers that analyzed a million ChatGPT interaction logs and found that the second most popular use of AI was sexual role-playing. Aside from that, by far the most popular use case for the chatbot was creative composition. People also liked to use it for brainstorming and planning, and for asking for explanations and general information.

These sorts of creative and fun tasks are excellent ways to use AI chatbots. AI language models work by predicting the next likely word in a sentence. They are confident liars and often present falsehoods as facts, make stuff up, or hallucinate. This matters less when making stuff up is kind of the entire point. In June, my colleague Rhiannon Williams wrote about how comedians found AI language models to be useful for generating a first “vomit draft” of their material; they then add their own human ingenuity to make it funny.
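That next-word mechanism is easy to see directly. The following sketch, assuming the Hugging Face transformers library and PyTorch, uses the small open GPT-2 model to print the tokens the model considers most likely to come next; the prompt is illustrative.

```python
# A minimal sketch of next-token prediction, the mechanism described
# above. Assumes transformers and PyTorch; the prompt is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The comedian opened her set with", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The model scores every token in its vocabulary as a possible
# continuation; show the five it rates as most likely.
top5 = torch.topk(logits[0, -1], k=5).indices
print([tokenizer.decode(t.item()) for t in top5])
```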

But these use cases aren’t necessarily productive in the financial sense. I’m pretty sure smutbots weren’t what investors had in mind when they poured billions of dollars into AI companies, and, combined with the fact that we still don’t have a killer app for AI, it’s no wonder Wall Street has been feeling a lot less bullish about the technology recently.

The use cases that would be “productive,” and have thus been the most hyped, have seen less success in AI adoption. Hallucination becomes a real problem in some of these use cases, such as code generation, news and online searches, where it matters a lot to get things right. Some of the most embarrassing failures of chatbots have happened when people have started trusting AI chatbots too much, or considered them sources of factual information. Earlier this year, for example, Google’s AI Overviews feature, which summarizes online search results, suggested that people eat rocks and add glue to pizza.

And that’s the problem with AI hype. It sets our expectations way too high, and leaves us disappointed and disillusioned when the quite literally incredible promises don’t come true. It also tricks us into thinking AI is a technology mature enough to bring about instant changes. In reality, it might be years until we see its true benefit.

Now read the rest of The Algorithm

Deeper Learning

AI “godfather” Yoshua Bengio has joined a UK project to prevent AI catastrophes

Yoshua Bengio, a Turing Award winner who is considered one of the godfathers of modern AI, is throwing his weight behind a project funded by the UK government to embed safety mechanisms into AI systems. The project, called Safeguarded AI, aims to build an AI system that can check whether other AI systems deployed in critical areas are safe. Bengio is joining the program as scientific director and will provide critical input and advice. 

What are they trying to do: Safeguarded AI’s goal is to build AI systems that can offer quantitative guarantees, such as risk scores, about their effect on the real world. The project aims to build AI safety mechanisms by combining scientific world models, which are essentially simulations of the world, with mathematical proofs. These proofs would include explanations of the AI’s work, and humans would be tasked with verifying whether the AI model’s safety checks are correct. Read more from me here.

Bits and Bytes

Google DeepMind trained a robot to beat humans at table tennis

Researchers managed to get a robot wielding a 3D-printed paddle to win 13 of 29 games against human opponents of varying abilities in full games of competitive table tennis. The research represents a small step toward creating robots that can perform useful tasks skillfully and safely in real environments like homes and warehouses, which is a long-standing goal of the robotics community. (MIT Technology Review)

Are we in an AI bubble? Here’s why it’s complex.

There’s been a lot of debate recently, and even some alarm, about whether AI is ever going to live up to its potential, especially given tech stocks’ recent nosedive. This nuanced piece explains why, although the sector faces significant challenges, it’s far too soon to write off AI’s transformative potential. (Platformer)

How Microsoft spread its bets beyond OpenAI

Microsoft and OpenAI have one of the most successful partnerships in AI. But following OpenAI’s boardroom drama last year, the tech giant and its CEO, Satya Nadella, have been working on a strategy that will make Microsoft more independent of Sam Altman’s startup. Microsoft has diversified its investments and partnerships in generative AI, built its own smaller, cheaper models, and hired aggressively to develop its consumer AI efforts. (Financial Times)

Humane’s daily returns are outpacing sales



Title: Problem Solving Through Human-AI Preference-Based Cooperation

Abstract: While there is a widespread belief that artificial general intelligence (AGI) -- or even superhuman AI -- is imminent, complex problems in expert domains are far from being solved. We argue that such problems require human-AI cooperation and that the current state of the art in generative AI is unable to play the role of a reliable partner due to a multitude of shortcomings, including inability to keep track of a complex solution artifact (e.g., a software program), limited support for versatile human preference expression and lack of adapting to human preference in an interactive setting. To address these challenges, we propose HAI-Co2, a novel human-AI co-construction framework. We formalize HAI-Co2 and discuss the difficult open research problems that it faces. Finally, we present a case study of HAI-Co2 and demonstrate its efficacy compared to monolithic generative AI models.
Comments: 16 pages (excluding references)
Subjects: Artificial Intelligence (cs.AI); Human-Computer Interaction (cs.HC)

