OECD: Opportunities & guidelines for effective use of AI in education


A new OECD publication outlines opportunities and guidelines for the effective use of AI in education.

As artificial intelligence continues its rapid advancement, the question is no longer if it will impact education – but how. AI is poised to transform teaching and learning as we know it, for better or worse. But how can we maximize its benefits while avoiding potential pitfalls?

In this article, we’ll take a detailed look at AI’s role in classrooms from multiple perspectives. We’ll examine both the opportunities it provides as well as the risks that require careful consideration and oversight. We’ll also explore recommendations from organizations like UNICEF and OECD on how to guide AI’s application in a way that strengthens education systems while protecting students.

From personalized lesson plans and accessibility tools to early intervention systems, AI holds promise for enhancing education in powerful ways. But its rise also brings potential issues like inequity, privacy concerns and bias if unchecked. By understanding AI’s effects through reports from groups focused on children’s rights and wellbeing, we can help ensure its application supports students fully.

By the end, our goal is for you to have a nuanced view of AI in education – recognizing both its potential upsides and downsides. With knowledge of the emerging guidelines and guardrails, we can help AI fulfil its role in service of learners everywhere. Let’s dive into what the future may bring!

Summary – OECD: effective use of AI in education

The report titled “Opportunities, Guidelines and Guardrails on Effective and Equitable Use of AI in Education” was jointly developed by the OECD and Education International. It focuses on the use of artificial intelligence and digital technology in teaching and learning.

The document aims to facilitate collaboration between education authorities and the teaching profession, inspire further discussions, and provide guidelines for effective use of AI in education.

It was reviewed by various stakeholders and experts in the field. The document is part of the OECD’s efforts to promote effective and equitable educational recovery and the development of a sustainable digital education ecosystem.


What are the guidelines for effective use of AI in Education?

The Guidelines for effective use of AI in Education aim to help educational jurisdictions and organizations navigate the fast-moving developments in AI. They build on the OECD Council Recommendations on Artificial Intelligence and Broadband Connectivity. The Guidelines emphasize the importance of inclusive education and equity, the need for trained and qualified teachers, and the potential of technology to support personalized teaching, collaboration, and professional growth. Transparency, explainability, and negotiation are also highlighted as important factors in the use of AI in education.

What are the risks to effective use of AI in Education?

The risks to the effective use of AI in education include:

  1. Inequalities: Unequal access to technology, and unequal effectiveness in its use, can create disparities among students. Students and educators who are intimidated by technology may use it less, or struggle to make full use of its potential.
  2. Privacy and security: Concerns exist regarding the privacy, security, and use of learners’ and teachers’ personal information and data. Excessive time spent on technology-based activities, especially for young children, is also a concern.
  3. Bias and discrimination: The use of algorithms in making automated decisions, such as identifying potential early school leavers or determining progression or admission, can carry risks of bias. This can result from biases in the developers, society, and past datasets, leading to discrimination and potentially amplifying existing biases.
  4. Ethical concerns: Risk identification can itself enable unethical behaviour, such as expelling students flagged as likely to drop out or stigmatizing students with special needs instead of providing support.
  5. Effectiveness and impact on learning outcomes: The efficacy of AI-enabled tools in improving learning outcomes or decreasing educators’ workload is not always well established. Not all forms of knowledge or learning processes are easily transferable to a digital format, potentially leading to a prioritization of easily digitized forms of learning and jeopardizing the breadth and quality of education.
  6. Social isolation and mental health: Excessive time spent on technology can lead to social isolation, negatively impacting mental health and learning outcomes, especially for younger learners.
  7. Teacher workload and well-being: AI-enabled tools may add to teachers’ workload rather than serve as an aide, especially when not designed in collaboration with the teaching profession. There are also risks for teachers regarding access to technology, well-being, professional development opportunities, and teacher data use.

These risks highlight the importance of addressing issues such as equitable access to technology, data protection and privacy, ethical guidelines, and integrating digital technologies in curricula with a focus on equity, quality, and efficiency.

Explaining equitable access to affordable, high-quality connectivity

Educational jurisdictions should create digital learning infrastructures that provide accessible and affordable high-quality connectivity to all learners and educators, both in and outside of school. This ensures equitable access to advanced technology for learning and enables a quick and fair transition to remote learning if needed.

Governments should prioritize the development of comprehensive and reliable physical infrastructure to ensure equity in the availability, quality, and affordability of devices and connectivity. Additionally, innovative solutions can be designed to make digital technology accessible to teachers and learners, even in areas with uneven connectivity distribution or equipment.

What is equitable access to and equitable use of digital learning resources?

Educational jurisdictions should ensure that teachers and learners have equitable access to high-quality digital learning resources. This includes making digital learning platforms and resources easily usable on mobile devices and providing various resources catering to teaching and learning preferences.

While equitable access to decent-quality learning resources must be a key objective, inequity can also arise from variation in how those resources are used across classrooms and schools. While respecting teachers’ pedagogical autonomy, jurisdictions should provide clear guidance about the types of digital competences learners should develop, and how.

OECD-Education International (2023), Opportunities, Guidelines and Guardrails on Effective and Equitable Use of AI in Education, OECD Publishing, Paris, p. 7.

Additionally, adaptive learning systems can be considered to support learners with special needs, and digital tools can assist teachers in designing lessons and generating learning materials. Mutualizing expensive resources, such as augmented or virtual reality tools, across schools can also promote equity.

It is important for jurisdictions to provide clear guidance on developing digital competencies for learners and offer training on the use of generative AI to avoid equity gaps based on differing abilities or confidence in using such applications.

What does it recommend for Teacher agency and professional learning?

The paper recommends that the critical and pedagogical use of digital learning resources become an integral part of teachers’ professional competencies, fostered both in initial teacher education and in continuous professional learning opportunities.

Teachers’ AI literacy should be cultivated, so they understand AI techniques, can critically assess AI productions and recommendations, and creatively use AI in their teaching.

While initial education is important, learning on the job is what makes a good teacher a great one, and continuous professional learning for teachers should include the use of technology in teaching and learning.

OECD-Education International (2023), Opportunities, Guidelines and Guardrails on Effective and Equitable Use of AI in Education, OECD Publishing, Paris, p. 8.

Recognizing the importance of teacher agency, efficacy, and leadership is key to allowing teachers to make critical use of digital learning resources and design rich learning scenarios with their students. The paper also emphasizes the need for a trained and qualified workforce that is trusted and supported to apply AI-enabled tools in teaching and enhance the relational and social experience of learning.

What does it recommend for Student and teacher wellbeing?

The document recommends using and developing AI-enabled technology to prioritize learners’ and teachers’ wellbeing and mental health. It suggests creating ethical guidelines on digital communications in partnership with teachers and their organizations, recognizing that learning is a relational and social experience involving human-to-human interactions.

While digital technology has the potential to improve teaching and learning, for example by diversifying learning scenarios for students or by making education more aligned with contemporary society, the excessive usage of digital technology and expanded possibilities of diffusion of unethical content present risks for the wellbeing of learners and teachers.

OECD-Education International (2023), Opportunities, Guidelines and Guardrails on Effective and Equitable Use of AI in Education, OECD Publishing, Paris, p. 9.

It highlights the importance of maintaining a good balance between digital and non-digital activities and limiting excessive usage of digital technology to ensure the well-being of learners and teachers.

Additionally, it mentions that specific tools could be designed to detect bullying and cyberbullying. AI can help address student well-being through data analytics connected to digital tools and human services related to socio-emotional learning.

Overall, the document emphasizes the need for a proactive approach to AI literacy for teachers and students to ensure safe and conducive learning environments.

What does it recommend for the Co-creation of AI-enabled digital learning tools?

The documentation recommends that jurisdictions encourage the involvement of teachers, students, and other end users as co-designers in the technology research and development process to ensure the usefulness and use of AI-enabled digital tools.

Education technology companies have technology competences that many teachers typically do not have. This is why a constructive dialogue with them is necessary and desirable. For education technology companies to develop “useful” tools for teachers, teachers need to be involved in the design process, piloting and monitoring and evaluation of these tools.

OECD-Education International (2023), Opportunities, Guidelines and Guardrails on Effective and Equitable Use of AI in Education, OECD Publishing, Paris, p. 10.

It suggests creating an innovation-friendly ecosystem that allows technology developers to experiment and pilot tools with the support of teachers and learners. It also emphasizes the importance of a constructive dialogue with education technology companies and involving teachers in the design process, piloting, and evaluation of these tools.

Additionally, it suggests that government-funded institutional programs could involve various stakeholders in defining and researching the effective use of these tools within schools.


Opportunities and guidelines for effective and equitable use of AI in education

The report sets out several opportunities, guidelines, and guardrails for the effective and equitable use of AI in education.

Privacy and data protection must be balanced against other important educational objectives such as equity or effectiveness, which may require the collection of personal data, including sensitive data.

OECD-Education International (2023), Opportunities, Guidelines and Guardrails on Effective and Equitable Use of AI in Education, OECD Publishing, Paris, p. 12.

These include:

  1. AI-enabled technologies can support inclusive education and equity by providing accessibility tools for visually- or hearing-impaired learners and addressing learning difficulties such as dyslexia, dyscalculia, and dysgraphia.
  2. Early warning systems powered by AI can help identify at-risk students and support interventions, particularly for students from disadvantaged backgrounds.
  3. Technology can make learning resources and knowledge accessible to broader audiences, including in lower-income countries, and help develop social and collaborative skills among students and teachers.
  4. The effective use of AI tools in education depends on trained and qualified teachers who have the autonomy to choose digital tools and how they are applied in the classroom.
  5. Technology applications can help personalize teaching, provide feedback, and reduce administrative tasks for teachers, freeing up time for instructional design and activities.
  6. Technology can build communities of learners, enhance goal orientation, motivation, persistence, and effective learning strategies, and facilitate collaboration among educators.
  7. Technology can support system leaders and governments in sharing best practices, curriculum design, policy, pedagogy, and research.
  8. Generative AI applications, such as large language models, can support teachers in generating lesson plans and developing critical thinking skills in the classroom.
  9. There is a need to monitor and evaluate the effectiveness of digital tools, address the risk of human stigmatization, and provide guidance on personal data protection and other policies.
  10. Transparency and explainability are crucial when using digital tools, particularly in high-stakes evaluations and assessments. Educational jurisdictions should be transparent about the objectives and processes of algorithms and negotiate their use with stakeholders.

What does it recommend for Transparency, explainability and negotiation?

The documentation recommends that educational jurisdictions should be transparent about the objectives and processes by which algorithms reach their recommendations when using digital tools for evaluation and assessment.

One challenge of AI-enabled technologies is that most people do not understand how they work and what can be expected of them. Generative AI is a striking example as it is currently difficult to fully explain the details of how it operates.

OECD-Education International (2023), Opportunities, Guidelines and Guardrails on Effective and Equitable Use of AI in Education, OECD Publishing, Paris, p. 13.

Transparency is essential, especially for high-stakes cases, and the accuracy of their performance should be verified for all sub-groups of the target populations in education. It is essential to explain to teachers, students, and families how these tools work and provide information, education, and training about them. Policymakers should balance the expected effectiveness of tools against their explainability or transparency.
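The subgroup verification the report calls for can be made concrete in a few lines of code. The following is an illustrative sketch only, not part of the OECD report: it computes the accuracy of a hypothetical automated decision tool separately for each student subgroup, so that performance gaps become visible. The data, group names, and function are all invented for illustration.

```python
from collections import defaultdict

def accuracy_by_subgroup(records):
    """Compute prediction accuracy separately for each subgroup.

    `records` is a list of (subgroup, predicted_label, true_label) tuples.
    Returns a dict mapping each subgroup to its fraction of correct predictions.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Hypothetical predictions from an automated admission tool:
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 1),
]
print(accuracy_by_subgroup(records))
# group_a: 3/4 = 0.75, group_b: 1/4 = 0.25, a gap that would warrant review
```

In practice a jurisdiction would run checks like this on every relevant subgroup of the target population before a high-stakes tool is deployed, and publish the results as part of the transparency measures the report describes.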

What does the report recommend for Human support and human alternatives?

The documentation recommends that jurisdictions should consider providing human alternatives to AI-enabled technology when appropriate.

It is not always possible or desirable to allow people to “opt out” of the use of digital tools. For example, using data to improve education, particularly for disadvantaged groups, relies on comprehensive participation in data gathering.

It is also not practical for families to individually opt out of digital solutions chosen by educational institutions to support their children’s learning. This does not mean that human alternatives should not continue to be considered.

OECD-Education International (2023), Opportunities, Guidelines and Guardrails on Effective and Equitable Use of AI in Education, OECD Publishing, Paris, p. 14.

This is particularly important in remote proctoring for exams or tests, as students from different households may have varying levels of connectivity, living space, and examination conditions at home. Therefore, it is suggested that human proctoring options should be available alongside AI-enabled remote proctoring.

Key Findings & Conclusion

Here are the key findings from the OECD report on opportunities, guidelines and guardrails for effective and equitable use of AI in education:

  • AI has the potential to improve learning outcomes, reduce teachers’ workload, and personalize teaching if used appropriately.
  • However, there are also risks like inequalities in access, privacy and data issues, bias and discrimination, effectiveness concerns, and impacts on well-being.
  • Guidelines focus on supporting inclusive education, qualified teachers with autonomy, transparency and explainability, negotiated use, monitoring effectiveness, and supporting human alternatives.
  • Opportunities include using AI to provide accessibility tools, identify at-risk students, make resources widely accessible, personalize learning, build communities, and support system goals like sharing best practices.
  • Transparency is crucial, especially for high-stakes uses like assessments. Performance should be verified for all student subgroups.
  • While opting out of data use may not always be possible, human alternatives should still be considered, especially for tasks like remote exam proctoring.
  • Jurisdictions are encouraged to involve teachers, students and other stakeholders as co-designers in AI research and development to ensure tools are useful, used appropriately and effectively.

Here are some additional details from the OECD report on opportunities, guidelines and guardrails for AI in education:

  • The report defines AI broadly as systems “capable of performing tasks that normally require human intelligence”. This includes machine learning techniques.
  • It aims to inform collaboration between education authorities and unions and help establish responsible AI policies through shared guidelines.
  • When discussing accessibility tools, it notes AI can help address needs like visual or hearing impairments as well as learning difficulties.
  • For early identification of at-risk students, it says data-driven systems could support targeting interventions, especially for disadvantaged groups.
  • In making resources accessible, it emphasizes not just technical accessibility but also ensuring quality, local relevance and addressing digital/AI literacy needs.
  • When discussing personalization, it cautions tools should maintain breadth/quality of education and avoid narrowing curricula or prioritizing easily digitized skills.
  • It emphasizes human and data protection laws, not just effectiveness alone, must guide decision-making involving the collection/use of personal learner data.
  • Guidelines also stress AI risks and biases can amplify existing inequities, so jurisdictions must balance effectiveness with explainability and alternatives.

In summary, the report provides guidelines and opportunities for jurisdictions to maximize AI’s educational benefits while guarding against risks through collaborative and evidence-based policies emphasizing explainability, alternatives and stakeholder involvement.

Frequently Asked Questions

How will AI impact teachers’ roles?

While AI could potentially replace some administrative tasks, most experts believe it will augment teachers’ work rather than replace them. AI is better suited to personalized instruction and feedback, freeing teachers to focus on creative and social aspects of learning.

Will AI exacerbate the digital divide?

There is a risk that unequal access to technology could widen inequities if not addressed. However, AI also holds potential to boost remote and personalized learning, potentially reaching more students. With proactive policies around connectivity and accessibility, AI’s impact could promote inclusion.

How can bias in AI be addressed?

Transparency into algorithms and opportunities for human oversight are crucial. Diverse, representative data used to train systems can also help. It will take diligence across the board to establish accountability and oversight measures that minimize discrimination while enabling AI to benefit learners.


Comparing Papers from OECD with UNESCO

The report has received mixed feedback in the AI-in-education community. While many appreciate the publication, some criticized the OECD for being “superficial” and stuck in old structures that need to be overcome to tackle future challenges.

Overall, the community had higher expectations, and many continue to praise UNESCO’s “Guidance for Generative AI in Education and Research” report, which stands out for its holistic, forward-looking structure and practical utility.

Here are some key comparisons between the OECD report on AI in education and similar reports from UNESCO:

Scope

  • The OECD report focuses specifically on guidelines and opportunities for using AI/digital technologies in educational contexts.
  • UNESCO reports tend to take a broader view of the implications of emerging technologies on children’s rights and well-being.

Focus

  • OECD emphasizes policy recommendations, collaboration frameworks and practical opportunities within educational systems.
  • UNESCO places more emphasis on risks to children, strategies for mitigating harms, and calls for governments to uphold children’s digital rights.

Perspectives

  • OECD collaborates directly with education authorities and teachers’ unions to represent their needs and viewpoints.
  • UNESCO consults experts in children’s issues but represents the interests of children independently from governments.

Outlook

  • OECD sees carefully managed educational AI applications as opportunities if risks are addressed.
  • UNESCO takes a more precautionary stance, emphasizing potential downsides if children’s rights and development are not prioritized.

Overall, while both aim to maximize benefits and minimize harm, the OECD focuses more narrowly on enabling educational use of emerging technologies if proper safeguards are established.

UNESCO advocates a child-centred approach, emphasizing potential downsides if children’s rights are neglected.

You can find the complete OECD report under the following reference:

OECD-Education International (2023), Opportunities, Guidelines and Guardrails on Effective and Equitable Use of AI in Education, OECD Publishing, Paris.

About the author

Herbert

Ph.D. in philosophy, author, wine expert, former poker professional, and co-founder of 11Heroes.com. On Griffl, I discuss Instructional Design & AI tools.
