
This glossary defines key terms from artificial intelligence, educational psychology, and instructional design used throughout AI Thinking for Educators. Terms are arranged alphabetically within lettered sections. Definitions are written in plain language for educators at all levels of technical familiarity.


0–9

504 Plan — A legal accommodation plan provided under Section 504 of the Rehabilitation Act of 1973, designed for students with disabilities who do not qualify for special education services but still require adjustments to access learning. Common accommodations include extended time on tests, preferential seating, or modified assignments. Unlike an IEP, a 504 Plan does not include specialized instruction — it focuses on removing barriers to participation. AI tools can support 504 students by offering flexible pacing, text-to-speech, and alternative presentation formats.


A

Agent (AI) — An AI agent is a software system that can perceive its environment, make decisions, and take actions autonomously to achieve a goal — often across multiple steps without constant human input. Unlike a simple chatbot that answers one question at a time, an agent can browse the web, run code, send emails, and chain tasks together. In education, AI agents could help students conduct multi-step research or assist teachers in automating administrative workflows. They represent a shift from AI as a tool to AI as a collaborator.

Andragogy — Andragogy is Malcolm Knowles’ theory of adult learning, which holds that adult learners are self-directed, motivated by internal goals, and bring rich life experience to the classroom. Adults learn best when they can see the immediate relevance of new knowledge to their lives and when they have some control over the learning process. This is distinct from pedagogy, which historically assumed learners are children who need to be guided by a teacher. AI tools that offer personalized pacing and real-world application align well with andragogical principles.

Artificial Intelligence (AI) — Artificial Intelligence is the field of computer science focused on creating systems that can perform tasks that normally require human intelligence — such as understanding language, recognizing images, making decisions, and solving problems. Modern AI systems learn from large amounts of data rather than following hand-written rules. In education, AI powers tools like intelligent tutors, writing assistants, grading support systems, and content generators. It is not a single technology but a broad umbrella covering many approaches and applications.


B

Bloom’s Taxonomy — Bloom’s Taxonomy is a hierarchical framework for classifying educational learning objectives, originally developed by Benjamin Bloom in 1956 and revised in 2001. The six levels — from lowest to highest — are: Remember, Understand, Apply, Analyze, Evaluate, and Create. It helps educators design assessments and learning activities that target specific cognitive depths. When integrating AI, educators should consider which levels AI can support (e.g., remembering facts) versus which demand distinctly human reasoning (e.g., evaluating ethical trade-offs).


C

Cognitive Load — Cognitive load refers to the total amount of mental effort required to process information in working memory at any given moment. Because working memory is limited, tasks that demand too much at once can overwhelm learners and impede understanding. John Sweller’s Cognitive Load Theory divides load into three types: intrinsic, extraneous, and germane. Effective instructional design — and thoughtful AI integration — aims to reduce unnecessary cognitive burden while preserving the productive challenge that supports deep learning.

Connectivism — Connectivism is a learning theory developed by George Siemens and Stephen Downes that frames learning as the process of forming connections between nodes of information across networks — including people, databases, and digital systems. In a connectivist view, knowing where to find knowledge is as important as storing it internally. This theory is especially relevant in the AI age, where learners must navigate vast information ecosystems. It challenges educators to teach not just content, but the ability to critically curate and connect information from distributed sources.

Constructivism — Constructivism is a learning theory holding that people build new knowledge by connecting it to what they already know, rather than passively receiving information. Associated with theorists like Jean Piaget and Lev Vygotsky, it emphasizes active exploration, problem-solving, and meaning-making. A constructivist classroom encourages students to question, experiment, and construct understanding through experience. AI tools are most aligned with constructivist principles when they prompt reflection and exploration rather than simply delivering correct answers.

Context Engineering — Context engineering is the practice of deliberately shaping all the information fed into an AI model — including instructions, examples, background knowledge, and conversation history — to guide the model toward more accurate and relevant responses. It goes beyond writing a single good prompt; it involves architecting the entire information environment the AI works within. For educators, context engineering might mean preloading an AI tutor with a student’s learning goals, prior knowledge, and the specific curriculum being studied. It is an emerging professional skill for AI-literate educators and designers.

Context Rot — Context rot refers to the gradual degradation of an AI conversation’s quality as the conversation grows very long and earlier instructions, context, or corrections get pushed out of or de-emphasized within the model’s context window. Over a long session, an AI may “forget” constraints set at the beginning, contradict earlier outputs, or lose track of the specific goals of the interaction. Educators and students should be aware that very long AI sessions may produce less reliable results, and starting a fresh session with a clear prompt may restore quality. This is a practical consideration when using AI for extended tutoring or research sessions.

Context Window — The context window is the total amount of text — including your instructions, the conversation history, and any documents you’ve uploaded — that an AI model can “see” and reason about at one time. Think of it as the AI’s working memory for a single session. Larger context windows allow users to share longer documents and have extended conversations without losing earlier information. Understanding context windows helps educators know why an AI might seem to “forget” something said earlier in a very long conversation.


D

Desirable Difficulties — Desirable difficulties are learning strategies that appear to slow down initial learning but actually produce stronger long-term retention and transfer. Coined by Robert Bjork, the concept includes techniques like spaced practice, interleaving different topics, retrieval practice (testing), and varying the conditions of learning. These strategies work because they require the learner’s brain to work harder to retrieve and apply information, strengthening the memory trace. AI tools that adapt too quickly to make things easy may inadvertently remove these productive challenges.

Differentiation — Differentiation is the instructional practice of tailoring content, process, products, or the learning environment to meet the varying needs, readiness levels, interests, and learning profiles of individual students. It is not about giving some students less work — it is about giving all students appropriately challenging work. AI is a powerful differentiation tool because it can adapt explanations, adjust difficulty, and provide personalized feedback at scale. Effective differentiation still requires a teacher who knows their students deeply and makes thoughtful decisions about when and how AI support is appropriate.


E

Extraneous Load — Extraneous load is the portion of cognitive load created by how information is presented, rather than by the inherent complexity of the content itself. Cluttered slides, confusing instructions, irrelevant examples, and poor formatting all increase extraneous load without adding educational value. Reducing extraneous load is a core goal of good instructional design. When using AI to generate content, educators should review outputs to ensure they are clean, focused, and free of unnecessary complexity that could distract or confuse learners.


F

Fine-Tuning — Fine-tuning is the process of taking a pre-trained AI model and continuing its training on a smaller, specialized dataset to make it perform better on a specific task or in a specific domain. For example, a general-purpose language model might be fine-tuned on medical literature to make it more accurate for healthcare applications. In education, fine-tuned models could be adapted to a specific institution’s curriculum, style guide, or student population. Fine-tuning is distinct from prompt engineering — it changes the model’s internal parameters rather than just the instructions given at runtime.

Foundational Knowledge (Fink) — In L. Dee Fink’s Taxonomy of Significant Learning, foundational knowledge refers to the essential facts, concepts, principles, and relationships that students must understand and remember as the basis for deeper learning. It is roughly equivalent to the “Remember” and “Understand” levels of Bloom’s Taxonomy. While foundational knowledge alone is not sufficient for significant learning, it is necessary — students cannot analyze or evaluate ideas they do not first understand. AI can be an effective tool for helping students build foundational knowledge through explanation, examples, and practice.


G

Gem (Gemini) — In Google’s Gemini AI platform, a “Gem” is a customized version of the Gemini assistant configured for a specific purpose, persona, or set of instructions. Educators can create Gems pre-loaded with subject-specific context, a defined role (e.g., Socratic tutor, writing coach, debate partner), and behavioral guidelines. Gems allow teachers to deploy AI assistants that stay on-task and behave consistently without students needing to re-enter instructions each session. They represent a practical tool for embedding pedagogical intent directly into an AI interface.

Generative AI — Generative AI refers to AI systems that can produce new content — including text, images, audio, video, code, and more — in response to a prompt. These systems learn patterns from vast amounts of existing human-created content and use that learning to generate novel outputs. Tools like ChatGPT, Gemini, Claude, DALL·E, and Suno are all examples of generative AI. In education, generative AI opens powerful possibilities for content creation, personalized tutoring, and creative exploration — but also raises important questions about authorship, accuracy, and academic integrity.

Germane Load — Germane load is the cognitive effort devoted to constructing and automating schemas — the mental frameworks that allow us to organize and retrieve knowledge efficiently. Unlike extraneous load (which should be minimized), germane load is productive and represents genuine learning work. When a student works hard to see how new information connects to what they already know, that effort is germane. Well-designed instruction — and thoughtfully used AI — can direct cognitive resources toward germane processing by reducing distractions and providing just enough scaffolding.

Grounded AI — Grounded AI refers to AI systems whose outputs are tied to specific, verifiable sources of information — such as a provided document, a database, or a real-time search — rather than relying solely on the model’s internal training data. Grounding reduces the risk of hallucination by anchoring the AI’s responses to retrievable facts. Tools like NotebookLM and RAG-based systems are examples of grounded AI. For educators, using grounded AI is an important strategy for ensuring that AI-generated content is accurate, citable, and aligned with specific course materials.


H

Hallucination (AI) — In AI, hallucination refers to instances where a model generates text that is confident-sounding but factually incorrect, fabricated, or not supported by its source material. This happens because language models are optimized to produce fluent, coherent text — not necessarily true text. A hallucinating AI might invent a citation, misstate a historical date, or describe a study that does not exist. Teaching students to verify AI outputs critically is one of the most important AI literacy skills educators can develop, and grounded AI systems are one technical approach to reducing this problem.


I

IEP (Individualized Education Plan) — An Individualized Education Plan is a legally binding document developed under the Individuals with Disabilities Education Act (IDEA) for students with qualifying disabilities who require specialized instruction. The IEP outlines the student’s present level of performance, annual learning goals, specific accommodations and modifications, and the services the school will provide. It is developed collaboratively by educators, specialists, parents, and — when appropriate — the student. AI tools can support IEP implementation by offering flexible, personalized learning experiences, but they must be used in alignment with each student’s documented plan.

Intrinsic Load — Intrinsic load is the inherent cognitive difficulty of the material being learned — determined by the complexity of the content and the learner’s existing knowledge. Learning calculus has higher intrinsic load than learning to add fractions, and intrinsic load decreases as learners develop expertise. Unlike extraneous load, intrinsic load cannot simply be eliminated; it is part of what makes the subject challenging. Effective instruction manages intrinsic load by breaking complex topics into sequenced components and building on prior knowledge before introducing new complexity.


L

Large Language Model (LLM) — A Large Language Model is a type of AI trained on massive amounts of text data — books, websites, articles, code, and more — to understand and generate human language. LLMs like GPT-4, Claude, Gemini, and Llama work by predicting the most likely next word (or token) given everything that came before it, which allows them to produce surprisingly coherent and knowledgeable responses. They are the engine behind most modern AI writing assistants, chatbots, and tutoring tools. Understanding that LLMs are fundamentally prediction machines — not databases of truth — is essential AI literacy for educators and students alike.


M

Metacognition — Metacognition is the awareness and regulation of one’s own thinking processes — often described as “thinking about thinking.” A metacognitive learner can monitor their own understanding, identify gaps in their knowledge, select appropriate strategies, and adjust their approach when something isn’t working. It is one of the most powerful predictors of academic success. Educators should be intentional about whether AI use supports or undermines metacognition: AI can scaffold reflection, but it can also bypass the effortful thinking that builds it.

More Knowledgeable Other (MKO) — The More Knowledgeable Other is a concept from Lev Vygotsky’s sociocultural theory referring to anyone — teacher, peer, parent, or tool — who has greater knowledge or skill in a particular area and can support a learner’s development. The MKO works within the learner’s Zone of Proximal Development to provide scaffolding and guidance. AI can function as an MKO by offering explanations, hints, and feedback tailored to a student’s current level. However, unlike a human MKO, AI lacks genuine relational awareness and cannot fully perceive a student’s emotional state or social context.

Multimodal AI — Multimodal AI refers to AI systems that can process and generate multiple types of information — such as text, images, audio, and video — rather than working in a single format. A multimodal model can analyze a photograph, read a document, listen to speech, and generate a written response all within the same system. Tools like Gemini and GPT-4o are multimodal. For educators, multimodal AI opens new possibilities for accessibility, creative projects, and richer forms of assessment that go beyond text-based interaction.


N

NotebookLM — NotebookLM is a Google AI research and study tool that allows users to upload their own documents — notes, PDFs, slides, articles — and then ask questions, generate summaries, and explore ideas grounded specifically in those materials. Because NotebookLM is designed to stay anchored to the sources you provide, it significantly reduces the risk of hallucination compared to general-purpose AI chatbots. Educators can use it to help students engage deeply with course readings, and students can use it to synthesize and review their own notes. It is a strong example of grounded AI applied to academic work.


P

Productive Failure — Productive failure is a learning design approach developed by Manu Kapur in which students are intentionally asked to attempt a problem before they have been taught the solution method. Even though students typically “fail” to reach the correct answer, the struggle activates prior knowledge, surfaces misconceptions, and prepares the brain to understand the solution when it is finally taught. Productive failure challenges the intuition that clear instruction should always come first. In AI-integrated classrooms, this principle cautions against immediately turning to AI for answers before attempting genuine intellectual effort.

Productive Struggle — Productive struggle refers to the cognitive effort involved when a learner works through a challenging problem that is within their ability to eventually solve — with persistence. It is “productive” because the struggle itself builds understanding, problem-solving skills, and resilience. Educators must distinguish between struggle that leads to growth and frustration that leads to shutdown. AI tutors that give answers too quickly can short-circuit productive struggle; those designed to ask guiding questions and scaffold thinking can support it.

Prompt Engineering — Prompt engineering is the practice of designing, refining, and optimizing the text instructions given to an AI model to produce better, more useful outputs. A well-engineered prompt might specify a role for the AI, provide relevant context, give examples of the desired output format, and set constraints. For educators, prompt engineering is a teachable skill — students who learn to write clear, specific, and structured prompts get dramatically better results from AI tools. It is both a technical skill and a communication skill, requiring clarity of thought before clarity of language.
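The components named above (role, context, format, constraints) can be assembled mechanically. This sketch uses illustrative field names of my own choosing, not a standard template:

```python
def build_prompt(role, context, task, output_format, constraints):
    """Assemble a structured prompt from the components a
    well-engineered prompt typically includes. Field names
    here are illustrative, not a fixed standard."""
    return (f"Role: {role}\n"
            f"Context: {context}\n"
            f"Task: {task}\n"
            f"Output format: {output_format}\n"
            f"Constraints: {constraints}")

prompt = build_prompt(
    role="You are a high-school biology teacher.",
    context="Students have just studied cell organelles.",
    task="Write five multiple-choice review questions.",
    output_format="Numbered list; each question has options A-D.",
    constraints="Grade-9 reading level; no trick questions.",
)
```

Making each component explicit is also a useful classroom exercise: students can see exactly which part of a prompt they are changing when an output improves.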


R

RAG (Retrieval-Augmented Generation) — Retrieval-Augmented Generation is a technique that combines a language model with a retrieval system — allowing the AI to search an external knowledge base (such as a document library or database) before generating a response, and then ground its answer in the retrieved information. RAG systems are more accurate and up-to-date than models relying solely on training data, and their outputs can be traced back to specific sources. In education, RAG is the technology behind tools like NotebookLM and institution-specific AI assistants built on course materials.
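The retrieve-then-generate pipeline can be illustrated with a toy example. Production RAG systems use vector (embedding) search; the word-overlap scoring below is a deliberately simple stand-in to show the shape of the technique:

```python
def retrieve(query, documents, k=1):
    """Rank documents by word overlap with the query and return
    the top k -- a toy stand-in for the embedding-based vector
    search real RAG systems use."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def grounded_prompt(query, documents):
    """Build a prompt that instructs the model to answer only
    from the retrieved sources, anchoring the generation step."""
    sources = "\n".join(retrieve(query, documents))
    return (f"Answer using only these sources:\n{sources}\n\n"
            f"Question: {query}")

docs = ["Photosynthesis converts light energy into chemical energy.",
        "The mitochondrion is the powerhouse of the cell."]
grounded_prompt("What does photosynthesis convert?", docs)
```

The key design point is that retrieval happens before generation: the model's answer is shaped by what the search step put into the prompt, which is also what makes the answer traceable to a source.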

RLHF (Reinforcement Learning from Human Feedback) — Reinforcement Learning from Human Feedback is a training technique used to align AI model behavior with human preferences and values. Human trainers rate AI outputs as better or worse, and the model is then fine-tuned using these ratings as a reward signal — teaching it to produce responses that humans find helpful, safe, and accurate. RLHF is one of the key reasons modern AI assistants feel more natural and less harmful than earlier models. Understanding RLHF helps educators appreciate that AI behavior reflects human choices embedded in the training process — choices that carry their own biases and assumptions.


S

Self-Determination Theory — Self-Determination Theory (SDT), developed by Edward Deci and Richard Ryan, holds that human motivation and well-being are rooted in three basic psychological needs: autonomy (feeling in control of one’s actions), competence (feeling capable and effective), and relatedness (feeling connected to others). Learning environments that support these needs tend to foster intrinsic motivation and deeper engagement. When integrating AI in education, SDT suggests that students should have meaningful choice over how they use AI, experience genuine mastery — not just AI-assisted success — and maintain human connection in their learning community.

Self-Efficacy — Self-efficacy, a concept central to Albert Bandura’s Social Cognitive Theory, refers to a person’s belief in their own ability to succeed at a specific task or in a specific domain. High self-efficacy leads to greater effort, persistence, and resilience in the face of difficulty. Low self-efficacy leads to avoidance and learned helplessness. AI tools can either strengthen or undermine self-efficacy: when AI provides scaffolded support that helps students succeed at meaningful challenges, it can build confidence; when AI does the work for students, it may erode their sense of their own capability.

System Prompt — A system prompt is a set of instructions given to an AI model before the user’s conversation begins — typically by a developer, teacher, or platform designer rather than the end user. It shapes the AI’s persona, constraints, areas of focus, and behavioral guidelines for the entire interaction. A student using an AI tutor may never see the system prompt, but it is quietly determining how the AI responds. Educators who build or configure AI tools need to understand the power of system prompts in shaping the learning experience.
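In most chat-style AI APIs, the system prompt travels as the first message in the conversation, marked with a `system` role the end user never sees. The tutor instructions below are illustrative:

```python
# The system prompt is the first entry in the message list -- a
# convention shared by most chat-style AI APIs. The student only
# ever types the "user" messages; the system message silently
# shapes every response.
messages = [
    {"role": "system",
     "content": ("You are a Socratic math tutor. Never give the "
                 "final answer; respond with guiding questions.")},
    {"role": "user",
     "content": "What is the slope of y = 3x + 2?"},
]
```

A teacher configuring an AI tutor is, in effect, writing this hidden first message.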


T

Temperature (AI) — Temperature is a setting in AI language models that controls how random or creative the model’s outputs are. At a low temperature (close to 0), the model consistently chooses the most probable next word, producing predictable and factual-sounding text. At a high temperature (closer to 1 or above), the model takes more risks, producing more varied, creative, and sometimes surprising — or erroneous — outputs. Educators and developers configuring AI tools for academic use often lower the temperature to improve reliability and reduce hallucination, while creative writing applications may benefit from higher temperatures.
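Mathematically, temperature divides the model's raw scores (logits) before they are converted into probabilities. A minimal sketch with made-up scores shows the effect:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into probabilities,
    scaled by temperature. Low temperature sharpens the
    distribution; high temperature flattens it."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical next-word scores
low = softmax_with_temperature(logits, 0.2)
high = softmax_with_temperature(logits, 2.0)
# at low temperature the top choice dominates;
# at high temperature the probabilities even out
```

This is why a near-zero temperature makes outputs repeatable: the most probable word wins almost every time.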

Token — A token is the basic unit of text that AI language models process. Tokens roughly correspond to words or word fragments — for example, “education” might be one token, while “unbelievable” might be split into two or three. Both the input you send to an AI and the response it generates are measured in tokens. Context window sizes, processing costs, and usage limits are all measured in tokens. Understanding tokens helps educators and students make sense of why AI tools have length limits and why longer conversations or documents consume more of the model’s available capacity.
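A widely quoted rule of thumb is that one token corresponds to roughly four characters of English text. The helper below uses that heuristic only for estimation; real tokenizers (byte-pair encoding) split text by learned subword patterns, not character counts:

```python
def rough_token_estimate(text: str) -> int:
    """Estimate token count using the rough rule of thumb that
    one token is about 4 characters of English text. Real
    tokenizers split by learned subword patterns, so treat
    this strictly as a ballpark figure."""
    return max(1, round(len(text) / 4))

rough_token_estimate("AI Thinking for Educators")  # roughly half a dozen tokens
```

Even a ballpark estimate like this is enough to predict whether a pasted document will fit inside a model's context window.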

Transformative Learning — Transformative Learning is a theory of adult learning developed by Jack Mezirow, which describes learning as a process of critically examining and revising our deeply held assumptions, beliefs, and worldviews — what Mezirow called “frames of reference.” True transformative learning results in a fundamental shift in how a person sees and acts in the world, not just an addition of new facts. It is often triggered by a disorienting dilemma — an experience that challenges existing assumptions. The emergence of AI in society and education is, for many educators and students, exactly this kind of disorienting dilemma.


Z

Zone of Proximal Development (ZPD) — The Zone of Proximal Development is a concept from Lev Vygotsky describing the space between what a learner can accomplish independently and what they can accomplish with guidance from a More Knowledgeable Other. Learning is most effective when it occurs within this zone — challenging enough to promote growth, but not so difficult as to be overwhelming without support. AI tutors have the potential to operate precisely within a student’s ZPD by adapting difficulty in real time and providing scaffolding tailored to the individual. However, calibrating AI support to stay within — rather than collapse — the ZPD requires thoughtful design and teacher oversight.


This glossary will be updated as the field of AI in education continues to evolve. Educators are encouraged to treat these definitions as starting points for deeper exploration rather than final answers.