

AI Resources for Teaching and Learning

2024 EDUCAUSE AI Landscape Study: Introduction and Key Findings

This inaugural report summarizes the higher education community’s current sentiments and experiences related to strategic planning and readiness, policies and procedures, workforce, and the future of AI in higher education.
Jenay Robert. 2024 EDUCAUSE AI Landscape Study. Research report. Boulder, CO: EDUCAUSE, February 2024.


The U.S. Department of Education Office of Educational Technology’s new policy report, Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations, addresses the clear need for sharing knowledge, engaging educators, and refining technology plans and policies for artificial intelligence (AI) use in education. The report describes AI as a rapidly advancing set of technologies for recognizing patterns in data and automating actions, and it guides educators in understanding what these emerging technologies can do to advance educational goals while evaluating and limiting key risks.

Office of Educational Technology Recommendations: A Call to Action for Education Leaders


Recommendation #1: Emphasize Humans in the Loop

We reject the notion of AI as replacing teachers. Teachers and other people must be “in the loop” whenever AI is applied to notice patterns or automate educational processes. We call upon all constituents to adopt Humans in the Loop as a key criterion.

Recommendation #2: Align AI Models to a Shared Vision for Education

We call upon educational decision makers, researchers, and evaluators to determine the quality of an educational technology based not only on outcomes but also on the degree to which the models at the heart of AI tools and systems align to a shared vision for teaching and learning. The figure below describes the important qualities of AI models for education leaders to consider.

Figure: A circular graphic with “Center Students & Teachers” at its core, surrounded by overlapping circles: Privacy & Data Security; Aligned to Our Vision for Learning; Inspectable, Explainable, Overridable; Minimize Bias & Promote Fairness; Context-Aware & Effective Across Contexts; and Transparent, Accountable & Responsible Use. Two outer rings are labeled Humans in the Loop and Within an Educational Systems Perspective.

Recommendation #3: Design Using Modern Learning Principles

Achieving effective systems requires more than processing “big data” and more than data science. Applications of AI must be based on established, modern learning principles and the wisdom of educational practitioners, and they should leverage the expertise of the educational assessment community in detecting bias and improving fairness. Going forward, we also must seek to create AI systems that are culturally responsive and culturally sustaining, leveraging the growing body of published techniques for doing so. Further, most early AI systems had few specific supports for students with disabilities and English learners, and we must ensure that AI-enabled learning resources are intentionally inclusive of these students.

Recommendation #4: Prioritize Strengthening Trust

Technology can only help us achieve educational objectives when we trust it. Yet we learned through a series of public listening sessions that distrust of educational technology and AI is commonplace. Because trust develops as people meet and relate to each other, we call for a focus on building trust and establishing criteria for the trustworthiness of emerging educational technologies within the associations, convenings, and professional organizations that bring educators, innovators, researchers, and policymakers together.

Recommendation #5: Inform and Involve Educators

We call on educational leaders to prioritize informing and involving educational constituents so they are prepared to investigate how and when AI fits specific teaching and learning needs, and what risks may arise. Now is the time to show the respect and value we hold for educators by informing and involving them in every step of the process of designing, developing, testing, improving, adopting, and managing AI-enabled educational technology. This includes involving educators in reviewing existing AI-enabled systems, tools, and data use in schools; designing new applications of AI based on teacher input; carrying out pilot evaluations of proposed new instructional tools; collaborating with developers to increase the trustworthiness of the deployed system; and raising issues about risks and unexpected consequences as the system is implemented.

Recommendation #6: Focus R&D on Addressing Context and Enhancing Trust and Safety

Research that focuses on how AI-enabled systems can adapt to context (diversity among learners, variability in instructional approaches, differences in educational settings) is essential to answering the question “Do specific applications of AI work in education, and if so, for whom and under what conditions?” We call upon researchers and their funders to prioritize investigations of how AI can address the long tail of learning variability and to seek advances in how AI can incorporate contextual considerations when detecting patterns and recommending options to students and teachers. Further, researchers should accelerate their attention to how to enhance trust and safety in AI-enabled systems for education.

Recommendation #7: Develop Education-Specific Guidelines and Guardrails

Data privacy regulation already covers educational technology; further, data security is already a priority of school educational technology leaders. Modifications and enhancements to the status quo will be required to address the new capabilities alongside the risks of AI. We call for involvement of all perspectives in the ecosystem to define a set of guidelines (such as voluntary disclosures and technology procurement checklists) and guardrails (such as enhancements to existing regulations or additional requirements) so that we can achieve safe and effective AI for education.


Listening Sessions

The U.S. Department of Education’s Office of Educational Technology, with support from Digital Promise, held listening sessions about Artificial Intelligence (AI). We connected with all constituents involved in making decisions about technology in education, including but not limited to teachers, educational leaders, students, parents, technologists, researchers, and policymakers.

The goal of these listening sessions was to gather input and ideas and to engage in conversations that will help the Department shape a vision for AI policy that is inclusive of cutting-edge research and practices while also being informed by AI's opportunities and risks.