The Impact of Artificial Intelligence on Learning, Teaching, and Education


This report describes the current state of the art in artificial intelligence (AI) and its potential impact on learning, teaching, and education. It provides conceptual foundations for well-informed policy-oriented work, research, and forward-looking activities that address the opportunities and challenges created by recent developments in AI. The report is aimed at policy developers, but it also makes contributions that are of interest to AI technology developers and researchers studying the impact of AI on the economy, society, and the future of education and learning.

Introduction

All human actions are based on anticipated futures. We cannot know the future because it does not exist yet, but we can use our current knowledge to imagine futures and make them happen. The better we understand the present and the history that has created it, the better we can understand the possibilities of the future. To appreciate the opportunities and challenges that artificial intelligence (AI) creates, we need a good understanding both of what AI is today and of what the future may bring when AI is widely used in society.

AI can enable new ways of learning, teaching, and education, and it may also change society in ways that pose new challenges for educational institutions. It may amplify skill differences and polarize jobs, or it may equalize opportunities for learning. The use of AI in education may generate insights into how learning happens, and it can change the way learning is assessed. It may re-organize classrooms or make them obsolete; it can increase the efficiency of teaching, or it may force students to adapt to the requirements of technology, depriving humans of the powers of agency and the possibilities for responsible action. All this is possible.

Now is a good time to start thinking about what AI could mean for learning, teaching, and education. There is a lot of hype, and the topic is not an easy one. It is, however, important, interesting, and worth the effort.

Since 2013, when Frey and Osborne5 estimated that almost half of U.S. jobs were at a high risk of becoming automated, AI has been at the top of policymakers’ agendas. Many studies have replicated and refined this analysis, and the general consensus now is that AI will generate major transformations in the labor market.6 Many skills that were important in the past are becoming automated, and many jobs and occupations will become obsolete or be transformed as AI is increasingly used. At the same time, there has been a tremendous demand for people with skills in AI development, leading to seven-figure salaries and signing bonuses. China has announced that it aims to become the world leader in AI and to grow a 150 billion USD AI ecosystem by 2030. The U.S. Department of Defense invested about 2.5 billion USD in AI in 2017, and total private investment in the U.S. is now probably over 20 billion USD per year. In 2017, there were about 1,200 AI start-ups in Europe,7 and the European Commission aims to increase the total public and private investment in AI in the EU to at least 20 billion euros by the end of 2020.

In limited tasks, AI already exceeds human capabilities. In 2017, with only about one month of system development, researchers at Stanford were able to use AI to diagnose 14 types of medical conditions from frontal-view X-ray images, exceeding human diagnostic accuracy for pneumonia.9 In 2017, given no domain knowledge except the game rules, an artificial neural network system, AlphaZero, achieved within 24 hours a superhuman level of play in the games of chess, shogi, and Go.10 In May 2018, Google CEO Sundar Pichai caused a firestorm when he demonstrated in his keynote an AI system, Duplex, that can autonomously schedule appointments on the phone, fooling people into thinking they are talking with another human. Amid self-driving cars, speaking robots, and the flood of AI miracles, it is easy to think that AI is rapidly becoming superintelligent, gaining all the good and evil powers attributed to it in popular culture. This, of course, is not the case. The current AI systems are severely limited, and there are technical, social, scientific, and conceptual limits to what they can do. As one recent author noted, AI may be riding a one-trick pony, as almost all AI advances reported in the media are based on ideas that are more than three decades old.11 A particular challenge of the currently dominant learning models used in AI is that they can only see the world as a repetition of the past. The categories and success criteria used for their training are supplied by humans. Personal and cultural biases are thus an inherent element in AI systems. A three-level model of human action presented in the full report suggests that norms and values are often tacit and expressed through unarticulated emotional reactions. Perhaps surprisingly, the recent successes in AI also represent the oldest approach to AI, one where almost all the intelligence comes from humans.

Instead of the beginning of an AI revolution, we could be at the end of one. This, of course, depends on what we mean by revolution. Electricity did not revolutionize the world when Volta found a way to store it in 1800 or when the Edison General Electric Company was incorporated in 1889. The transformative impact of general-purpose technologies becomes visible only gradually, as societies and economies reinvent themselves as users of new technologies. Technological change requires a cultural change that is reflected in lifestyles, norms, policies, social institutions, skills, and education. Because of this, AI, now often called the “new electricity,” may revolutionize many areas of life when it is taken into use, even if it keeps riding its “one-trick pony” for the foreseeable future. Many interesting things will happen when already existing technologies are adopted, adapted, and applied to learning, teaching, and education. For example, AI may enable new learning and teaching practices, and it may generate a new social, cultural, and economic context for education.

Below we ask simple questions that illustrate the relevance of AI for educational policies and practices. Which vocations and occupations will become obsolete in the near future? What are the 21st Century skills in a world where AI is widely used? How should AI be incorporated into the K-12 curriculum? How will AI change teaching? Should real-time monitoring of student emotions be allowed in classrooms? Can AI fairly assess students? Do we need fewer classrooms because of AI? Does AI reduce the impact of dyslexia, dyscalculia, or other learning difficulties? These questions are simple to ask and relevant for understanding the future of learning, teaching, and education. The answers, of course, are more complex.

The main aim of this report is to put these and other similar questions in a context where they can be meaningfully addressed. We do not aim to provide final answers; instead, we hope to provide a background that will facilitate discussion on these and other important questions that need to be asked as AI becomes increasingly visible in the society and economy around us. To do this, we have to first open the “black box” of AI and peek inside. There are several things AI can do well, and many things it cannot do. At present, there is an avalanche of reports and newspaper articles on AI, and it is not always easy to distinguish important messages from noise. It is, however, important to understand some key characteristics of current AI to be able to imagine realistic futures. In the next sections, we put AI in the context of learning, teaching, and education, and then focus on the specific form of AI, adaptive artificial neural networks, that have generated the recent interest in AI.

Policy challenges

The current excitement about AI easily leads to a technology push, where AI is viewed as a solution to a wide variety of problems in education and learning. It is probably fair to say that the potential and challenges of AI in education are still not adequately understood. AI can be understood as a general-purpose technology, and it can be applied in many different ways. Although the characteristics of the technology itself may push development in specific directions, it is always possible to use technology in many ways and for many different purposes, in education as elsewhere. For policy development, it is therefore probably more important to understand why and for what we use technology than how it is used. The future promises of technology, in this view, have to be justified by making explicit the motivation for using the technology, as well as the key assumptions that underpin the stated motivation. This lifts technology to the level of policy, and we have to ask what the objectives and goals of using it are. Only with such a bird's-eye view of technical development can we say where we want to go and how technology can help us on the way. When the assumptions and motivations are made explicit, they can also be critically assessed.

A continuous dialogue on the appropriate and responsible uses of AI in education is therefore needed. As technology and its uses change, important contributions to this dialogue may emerge from “outsiders” who do not represent current stakeholder interests. Enabling and funding independent research on, for example, the politics, ethics, social implications, and economy of AI may be a practical way to create useful inputs to this dialogue.

In the domain of educational policy, it is important for educators and policymakers to understand AI in the broader context of the future of learning. To a large extent, the debate about AI is now about the ongoing informationalization, digitalization, and computer-mediated globalization. The current estimates of the impact of AI and other digital technologies on the labor market highlight the point that the demand for skills and competences is changing fast, and the educational system needs to adapt, in particular when education aims to create skills for work. AI enables the automation of many productive tasks that in the past have been done by humans. As AI will be used to automate productive processes, we may need to reinvent current educational institutions. It is, for example, possible that formal education will play a diminishing role in creating job-related competences. This could mean that the future role of education will increasingly be in supporting human development.

For example, the current AI systems make an almost continuous assessment of student progress possible. Instead of high-stakes testing that functions as a social filter, AI-supported assessment can be used to help learners to develop their skills and competences and keep students on effective learning paths. With such ongoing assessment, high-stakes testing may become redundant, and broader evidence may be used for assessing skills and competences. This may be important in particular for assessing transversal key competencies that are now relatively difficult to assess. As AI and other information technologies facilitate informal learning, it also becomes important to ask what the division of labor between formal and informal learning will be in the future.

In general, the balance may thus shift from the instrumental role of education towards its more developmental role. Perhaps more importantly, it is possible that the industrial age link between work and education is changing. Current institutions of education to a large extent address the needs of an industrial world. As knowledge and data are now created, used, and learned in ways that have not been possible before, it is important that AI is not understood only as a solution to problems in the current educational systems.

In general, the profound changes in society and the economy that AI and related technologies are now making possible will create a world where many social institutions will change and people will have to adapt. When a similarly broad change occurred almost two centuries ago, the social and human costs were high. Although, with hindsight, we now often neglect the negative consequences of technical development and emphasize its positive consequences, it is important to realize that general-purpose technologies can have a fundamental transformative impact on social life and human development. The rather poetic declaration in 1848 that “all that is solid melts into air” was not just a vision; it was based on careful empirical observation of the everyday consequences of industrialization.85 A general policy challenge, thus, is to increase awareness of AI technologies and their potential impact among educators and policymakers. One way of doing this is to participate in processes that generate images of the future, develop concepts that can be used to describe them, and design scenarios and experiments where such imagined futures can be tested. A rather simple proposal for policy development, thus, is to launch explicitly future-oriented processes that generate an understanding of the possibilities of the present.

AI provides new means for research on learning, but it is also important to rethink the capabilities of AI systems using existing knowledge about learning.86 In particular, almost all currently developed AI systems rely on associative and behaviouristic models of learning. The long history of neural AI contains many attempts to go beyond these simple models of learning. Learning sciences could have much to offer to research on AI, and such mutual interaction would enable a better understanding of how to use AI for learning and in educational settings, as well as in other domains of application.

Data that is needed for machine learning is often highly personal. If it is used for assessing student performance, data security can become a key bottleneck in using AI, learning analytics, and educational data mining. As neural AI systems do not understand the data they process, it is also easy to forge data that fools the decision process.87 AI security is an important topic, but it is also challenging as neural AI systems typically use complex internal representations of data that are difficult or impossible to interpret. Because of this, there is now considerable interest in creating “explainable AI.” The current systems, however, lack all the essential reflective and metacognitive capabilities that would be needed to explain what they do or don’t do.88 To rephrase Descartes, it is, therefore, as futile to ask a clock on the wall why it just struck seven or eight as it is to ask a deep learning AI system why it gave a specific grade to a student. Clocks are not built to explain their ticking, and AI systems, as we know them, have no explanatory capabilities. At best they can support humans in explaining what happened and why. As there may be fundamental theoretical and practical limits in designing AI systems that can explain their behavior and decisions, it is important to keep humans in the decision-making loop.

As several recent reports have emphasized, ethical considerations become highly relevant when AI is applied in society or in educational settings.89 From a policy perspective, the ethics of AI is a generic challenge, but it has specific relevance for educational policies. From a regulatory point of view, ethical considerations provide the fundamental basis from which new regulations and laws are created and justified. From a developmental point of view, ethics and value judgments underpin fundamental concepts such as agency, responsibility, identity, freedoms, and human capabilities. In supervised AI learning models, the possible choice outcomes need to be provided to the system before it starts to learn. This means that the world becomes described in closed terms, based on predefined interests and categories. Furthermore, the categories are based on data collected in the past. Neural AI categorizes people into clusters, where data from other people whom the system considers similar is used to predict individual characteristics and behaviour.
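To make this closed-world point concrete, the following sketch (ours, not from the report; all data, labels, and names are hypothetical) shows a minimal nearest-neighbour predictor. Whatever new case it is given, it can only ever answer with an outcome category already present in its historical training data, predicted from the most "similar" past person:

```python
# Illustrative sketch: a 1-nearest-neighbour "predictor" that can only
# output labels that humans supplied with the historical data.

def nearest_neighbour_predict(history, features):
    """Copy the label of the most similar historical record
    (similarity measured by squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(history, key=lambda record: dist(record[0], features))
    return closest[1]  # label of the closest past person

# Hypothetical historical data: (features, predefined outcome category).
history = [
    ((1.0, 0.0), "pass"),
    ((0.9, 0.2), "pass"),
    ((0.1, 1.0), "fail"),
]

# A new person is inevitably assigned one of the pre-existing categories;
# the system cannot invent an outcome it has never seen.
print(nearest_neighbour_predict(history, (0.2, 0.9)))
```

However a new learner actually behaves, the set of possible answers was fixed before learning started, which is exactly the sense in which such systems see the world as a repetition of the past.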

From political and ethical points of view, this is highly problematic. Human agency means that we can make choices about future acts and thus become responsible for them. When AI systems predict our acts using historical data averaged over a large number of other persons, they cannot account for people who make genuine choices or who break out of historical patterns of behaviour. AI can therefore also limit the domain where humans can express their agency.

As has been emphasized above, the recent successes in AI have to a large extent been based on the availability of vast amounts of data. AI-based products and services can be created in the educational sector only if appropriate data is available. At present, some of the existing datasets can be considered natural monopolies, and they are often controlled by a few large corporations. An important policy challenge is how such large datasets, needed for the development and use of AI-based systems, could be made more widely available. One potential solution is to build on the current General Data Protection Regulation, which requires that data subjects can obtain a copy of their personal data from data controllers in a commonly used electronic form. Technically, this would make it possible for users to access their personal data, anonymize it locally, and submit it in an appropriate format to platforms used for AI learning and educational purposes. Such functionality might be relatively easily embedded, for example, in commonly used web browsers, if platforms for data aggregation were available. One possibility could be to pilot such aggregation platforms on a suitable scale and, if successful, provide them at the EU level.
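As one illustration of the local anonymization step mentioned above, the following sketch (ours, not from the report; the record fields and the salt are hypothetical) drops direct identifiers from an exported personal-data record and replaces the user id with a salted one-way hash before the data would be submitted to an aggregation platform:

```python
# Illustrative sketch of local anonymization of exported personal data.
# Field names ("user_id", "name", etc.) and the salt are hypothetical.
import hashlib

def anonymise(record, salt, drop=("name", "email")):
    """Return a copy of `record` with direct identifiers removed and
    the user id replaced by a salted one-way SHA-256 hash."""
    out = {k: v for k, v in record.items() if k not in drop}
    digest = hashlib.sha256((salt + record["user_id"]).encode()).hexdigest()
    out["user_id"] = digest[:16]  # pseudonymous, not reversible locally
    return out

# Hypothetical record exported from a data controller under the GDPR.
exported = {"user_id": "u-123", "name": "Jane", "email": "j@example.org",
            "quiz_scores": [7, 9, 8]}

safe = anonymise(exported, salt="local-secret")
print(safe)  # learning data preserved, direct identifiers removed
```

Note that such pseudonymization alone does not guarantee anonymity against re-identification from the remaining data; a deployed platform would need stronger techniques and governance, which is precisely why this remains a policy challenge rather than a purely technical one.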

 

About KSRA

The Kavian Scientific Research Association (KSRA) is a non-profit research organization founded in December 2013 to provide research and educational services. Its members initially formed a virtual group on the Viber social network, and these members became the founding core of the association. Led by Professor Siavosh Kaviani, they decided to launch a scientific and research association with an emphasis on education.

KSRA, as a non-profit research organization, is committed to providing research services in the field of knowledge. The main beneficiaries of the association are public and private knowledge-based companies, students, researchers, professors, universities, and industrial and semi-industrial centers around the world.

Our main services are based on education for people across the whole spectrum of society. We want to integrate research and education. We believe education is a fundamental human right, so our services concentrate on inclusive education.

The KSRA team partners with under-served local communities around the world to improve access to, and the quality of, knowledge-based education, to amplify and augment learning programs where they exist, and to create new opportunities for e-learning where traditional education systems are lacking or non-existent.

Full paper PDF file:

The Impact of Artificial Intelligence on Learning, Teaching, and Education

 

Bibliography

Author: TUOMI I., CABRERA GIRALDEZ Marcelino, VUORIKARI Riina, PUNIE Yves

Year: 2018

Title: The Impact of Artificial Intelligence on Learning, Teaching, and Education

Published in: The Impact of Artificial Intelligence on Learning, Teaching, and Education. Policies for the Future, Eds. Cabrera, M., Vuorikari, R., & Punie, Y., EUR 29442 EN, Publications Office of the European Union, Luxembourg, 2018, ISBN 978-92-79-97257-7, doi:10.2760/12297, JRC113226

DOI: 10.2760/12297 (online)

PDF reference and original file: Click here

 


Nasim Gazerani was born in 1983 in Arak. She holds a Master's degree in Software Engineering from UM University of Malaysia.


Professor Siavosh Kaviani was born in 1961 in Tehran. He held a professorship, and he holds a Ph.D. in Software Engineering from the QL University of Software Development Methodology and an honorary Ph.D. from the University of Chelsea.