Unit 1: AI Fluency Orientation — How to Use AI Without Getting Owned by It
Lesson at a glance
| Item | Detail |
| --- | --- |
| Suggested length | 3 × 60 minutes (or 4 × 45 minutes) |
| Recommended placement | Week 1 of AI Fluency |
| Prerequisite | None. This is the on-ramp for the whole course. |
| Companion artifacts | Student Worksheet, Scenario Packet (10 cards), Quiz, Answer Key |
| Required signed forms | AI Acceptable Use Agreement (this unit) |
| Materials | Projector, student devices with one frontier LLM accessible, the printed AI Use Agreement |
Safety: No student uses AI on graded work in this course until the AI Acceptable Use Agreement is signed by the student and a guardian and on file with the teacher. This is the gate for the entire course. AI tools that require account creation must follow your school's age policy (most require age 13+ or 18+; check your district's rules before student sign-up).
Standards & credential alignment
- ISTE Standards for Students — Empowered Learner (1.1), Digital Citizen (1.2), Knowledge Constructor (1.3).
- AI4K12 Five Big Ideas in AI — Perception, Representation & Reasoning, Learning, Natural Interaction, Societal Impact (foundational coverage).
- CSTA K-12 CS Standards — Impacts of Computing (3A-IC-24, 25, 28).
- NIST AI Risk Management Framework (AI RMF 1.0) — AI literacy and responsible use anchors.
Learning objectives
By the end of this unit, students can:
- Define a Large Language Model (LLM) in plain English and give two examples of products that use one.
- Explain — without jargon — why an LLM is a "next-token predictor" and not a search engine, calculator, or oracle.
- Distinguish AI assistance (allowed) from AI plagiarism (not allowed) using the disclose, verify, contribute test.
- Identify three categories of task where AI is currently strong and three where it currently fails.
- Apply the school's AI Acceptable Use Policy to a realistic scenario.
- Sign and articulate the meaning of the classroom AI Acceptable Use Agreement.
Vocabulary (pin to the wall)
- Artificial Intelligence (AI) — Software that performs tasks that normally require human intelligence (recognizing images, understanding language, planning).
- Machine Learning (ML) — A branch of AI where a system learns patterns from examples instead of being explicitly programmed.
- Large Language Model (LLM) — An ML model trained on enormous amounts of text that predicts the next token (roughly, the next chunk of a word) given everything before it.
- Token — The basic unit an LLM reads and writes. Roughly ¾ of a word in English.
- Prompt — The text you give the model. The single biggest lever you have over output quality.
- Hallucination — When an LLM produces text that sounds confident and correct but is factually wrong or made up.
- Frontier model — One of the largest, most capable hosted LLMs (e.g., GPT-class, Claude-class, Gemini-class).
- Open-source / open-weight model — A model whose weights are released publicly so anyone can run it locally (e.g., Llama, Mistral, Qwen, Gemma).
- Local LLM — An open-source model running on your own computer instead of a company's server.
- Disclosure — Telling your reader (teacher, audience, employer) when and how you used AI.
- Attribution — Crediting AI for specific contributions, the same way you'd credit a human source.
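For the Token entry above, teachers who want a concrete demo can show a back-of-envelope token estimator. The 4-characters-per-token figure is a common rule of thumb for English, not how real tokenizers (which use byte-pair encoding) actually work; this sketch is illustrative only:

```python
def estimate_tokens(text: str) -> int:
    """Rough estimate: English text averages about 4 characters per token,
    which is where the 'a token is ~3/4 of a word' rule of thumb comes from.
    Real tokenizers (BPE-based) will give different exact counts."""
    return max(1, round(len(text) / 4))

sentence = "An LLM predicts the next token over and over."
print(estimate_tokens(sentence))  # roughly 11 tokens for a 9-word sentence
```

Having students estimate a paragraph's token count by hand, then compare against a real tokenizer if one is available, makes the "token ≈ ¾ of a word" claim tangible.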
Teacher background (read this before the lesson)
Most students will arrive with one of three pre-loaded mental models: (1) AI is magic and will replace everyone, (2) AI is cheating and using it makes you a fraud, or (3) AI is just a fancy autocomplete that doesn't matter. None of those are wrong, exactly — they're just incomplete. Your job in Unit 1 is to install the scaffolding the rest of the course will hang on: AI is a power tool. Power tools amplify whoever is holding them. A skilled woodworker plus a table saw produces a chair. An unskilled person plus a table saw produces an ER visit. Same tool. The student is the variable.
The single most useful framing for the year:
An LLM is a brilliant intern who has read the entire internet, has no memory between conversations, will never tell you it doesn't know, and is paid in tokens. Your job is to be a good manager.
Two specific landmines to watch for:
- The "AI did the work" frame. Some students will try to outsource thinking. Others will refuse to touch AI because they think it's cheating. Both are wrong. The professional standard — and the one this course teaches — is disclose, verify, contribute: you disclose AI's role, you verify every factual claim, and the final intellectual contribution is yours. If those three are true, AI is a legitimate tool. If any one fails, it's plagiarism.
- The "AI is always right" frame. LLMs are extraordinarily confident-sounding. They are also wrong constantly — about citations, dates, math, code, and things they have no way to know. Demonstrate this on Day 1 with a live hallucination.
The other frame to plant early: AI does not "know" things the way a person knows things. It produces statistically likely text. When a student understands this in their bones, every other lesson in the course gets easier — prompt engineering, hallucination, RAG, agents, all of it.
Materials checklist
- [ ] Printed copies of `template-ai-use-agreement.pdf` (one per student, plus a guardian signature copy)
- [ ] Printed `scenarios.pdf` (one packet per group of 3–4 students)
- [ ] Worksheet PDFs (one per student)
- [ ] Quiz PDFs (one per student, hold until Day 3)
- [ ] Wall poster: the three-question test (Disclose? Verify? Contribute?)
- [ ] One student device per pair with access to a frontier LLM (school-approved)
- [ ] Pen for signing (yes, real ink — the ceremony matters)
Pacing — Day 1 (60 minutes): What is AI, really?
| Time | Segment | What's happening |
| --- | --- | --- |
| 0:00 – 0:05 | Hook — "Watch this lie to me" | Teacher demos a live hallucination. Students react. |
| 0:05 – 0:20 | Mini-lesson — Next-token prediction | Direct instruction with the "brilliant intern" frame. |
| 0:20 – 0:45 | Activity — AI Strong vs. AI Weak | Pairs test 8 prompts and classify: strong, weak, dangerous. |
| 0:45 – 0:55 | Discussion — Where it broke, where it shone | Cold-call results. |
| 0:55 – 1:00 | Exit ticket — "One thing AI got wrong." | Index card collected at the door. |
Day 1 — Hook (5 min)
Teacher says, verbatim: "I'm going to ask the most powerful AI in the world a question. Watch what it does."
Live, in front of the class, ask a frontier LLM something like:
- "What were the three songs played at the encore of the Taylor Swift Eras Tour show in Richmond, Virginia on August 17, 2025?"
- "What is the citation for the Supreme Court case Smith v. Jones, 2023, in which the court ruled on AI-generated evidence?"
Both will almost certainly produce a fluent, confident, completely fabricated answer. (If the model instead admits it doesn't know, praise it on the spot — then note how rarely that happens.) Read the answer aloud. Then say: "Every word of that was made up. The model did not say 'I don't know.' It said the most plausible-sounding thing. We're going to spend ten weeks learning how to use this tool without it lying to you."
The hook lands the entire course in 90 seconds. Don't soften it.
Day 1 — Mini-lesson: next-token prediction (15 min)
Define LLM. Walk the vocabulary wall. Hit these talking points:
- "An LLM does not look things up. It predicts the next token. Over and over. Billions of times."
- "It was trained by being shown an enormous amount of human text and being asked, again and again, 'what comes next?' That's it. That's the whole training objective."
- "When you ask it a question, it generates the statistically most plausible continuation. Sometimes that continuation is true. Sometimes it sounds true. Those are two different things."
- "It has no memory of you between conversations unless you give it one. It cannot Google. It cannot run code unless you wired it to. It cannot see the world unless you showed it a picture."
- "The brilliant intern frame: it has read the internet, will work tirelessly, and will never admit it doesn't know. You are the manager. You verify."
Drop one whiteboard graphic:
Prompt → [TOKENIZE] → [LLM PREDICTS NEXT TOKEN] → [APPEND] → [REPEAT] → Output
That's the whole loop. Tell them you'll fill in the inside of that box in Unit 2. For Unit 1, the loop is enough.
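For teachers who want to make the loop concrete, the whiteboard graphic can be acted out with a toy "model." Everything here is invented for illustration — a real LLM replaces the lookup table with billions of learned parameters — but the tokenize → predict → append → repeat structure is the same:

```python
import random

# Toy stand-in for a trained model: given the last token, list the tokens
# that plausibly come next. All words here are invented for the demo.
BIGRAMS = {
    "the": ["cat", "dog"],
    "cat": ["sat", "ran"],
    "dog": ["ran", "sat"],
    "sat": ["down"],
    "ran": ["away"],
    "down": ["<end>"],
    "away": ["<end>"],
}

def generate(prompt: str, max_tokens: int = 10) -> str:
    """The whole autoregressive loop: predict next token, append, repeat."""
    tokens = prompt.split()                       # TOKENIZE (crudely)
    for _ in range(max_tokens):
        candidates = BIGRAMS.get(tokens[-1], ["<end>"])
        next_token = random.choice(candidates)    # PREDICT next token
        if next_token == "<end>":
            break
        tokens.append(next_token)                 # APPEND, then REPEAT
    return " ".join(tokens)

print(generate("the"))
```

Note that the toy model never says "I don't know" — it always emits its most plausible next token. That is the Day 1 hook in eight lines.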
Day 1 — Activity: AI Strong vs. AI Weak (25 min)
Pairs work the worksheet. Each pair tests these eight prompts on a frontier LLM and classifies the result as Strong (use it), Weak (don't trust it without verification), or Dangerous (do not use without an expert):
1. "Rewrite this paragraph to sound more formal." (their own paragraph)
2. "What's the molecular weight of caffeine?"
3. "Cite three peer-reviewed studies on the effects of social media on teen sleep."
4. "Explain Newton's second law to a 9th grader."
5. "Give me Python code that prints the Fibonacci sequence to 100."
6. "What's a good dosage of ibuprofen for my 8-year-old cousin?"
7. "Help me brainstorm 10 ideas for a science fair project on local water quality."
8. "What is the current weather in Richmond, Virginia?"
Expected results, for the answer key:
- Strong: 1, 4, 5, 7
- Weak: 2 (often correct, sometimes off — verify), 8 (most LLMs cannot access live data unless tools are enabled)
- Dangerous: 3 (LLMs hallucinate citations constantly), 6 (medical advice — always verify with a professional)
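For grading prompt 5, note that "to 100" is ambiguous — it could mean up to the value 100 or the first 100 terms, and the model may ask or just pick one. Under the up-to-the-value-100 reading, a correct answer looks something like:

```python
# Fibonacci numbers up to the value 100.
fib = []
a, b = 0, 1
while a <= 100:
    fib.append(a)
    a, b = b, a + b
print(fib)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```

A pair can verify this without trusting the model: trace the first few additions by hand (0+1=1, 1+1=2, 1+2=3, …) and confirm the last printed value is ≤ 100. That hand-check is exactly the "verify" habit the unit is building.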
Day 1 — Exit ticket (5 min)
Name one task where the AI gave you a clearly correct answer and one where it gave you a wrong, weird, or made-up answer. Bring both back tomorrow.
Pacing — Day 2 (60 minutes): The disclose, verify, contribute test
| Time | Segment | What's happening |
| --- | --- | --- |
| 0:00 – 0:05 | Recap | Three vocab words, cold-call. |
| 0:05 – 0:25 | Mini-lesson — Help vs. plagiarism | The three-question test. School AI policy. FERPA. Privacy. |
| 0:25 – 0:50 | Activity — Scenario cards | Groups work the 10-card packet. Apply the test. |
| 0:50 – 1:00 | Discussion — Hot scenarios | Cards #3, #6, #9 in particular. |
Day 2 — Mini-lesson: help vs. plagiarism (20 min)
Drop the three-question test on the board and leave it there for the rest of the year:
- Disclose — Did I tell my teacher / reader / boss what AI did and what I did?
- Verify — Did I personally check every factual claim AI produced before submitting?
- Contribute — Is the final intellectual product mine? Could I defend it without AI in the room?
If all three are yes, AI is a tool, the same way Grammarly or a calculator is a tool. If any one is no, you are submitting work that isn't yours. That's plagiarism, regardless of school policy.
Walk the four anchors students need to know by name:
- Your school's AI Acceptable Use Policy — read the actual policy. The teacher highlights what's allowed (e.g., brainstorming, rewording, study help) vs. what isn't (e.g., submitting AI-generated essays as your own work). If your school doesn't have one yet, this course's AI Use Agreement is the policy.
- FERPA — student records privacy. Do not paste another student's grades, IEP, or identifiable info into a public AI tool. Their data is not yours to share with a vendor.
- Privacy — what you paste into a free AI tool may be used to train the next model. Treat it like posting on a public forum. Do not paste passwords, medical info, home addresses, or anything you wouldn't want a stranger to read.
- Copyright & attribution — AI output is a derivative of the training data. Do not pass it off as original creative work without disclosure. Do not paste copyrighted text wholesale and ask AI to "rewrite" it to dodge the source.
Land the line: "You don't have to memorize a flowchart. Every time you use AI on a school assignment, ask yourself the three questions. If any answer is no, stop."
Day 2 — Activity: scenario cards (25 min)
Hand out the Scenario Packet (10 cards). Groups apply the three-question test to each card, decide Allowed / Not allowed / Allowed with disclosure, and write a one-sentence justification per card. Use the answer key to debrief. Cards #3 (using AI to write a college essay), #6 (pasting a friend's homework into AI for "help"), and #9 (using a local LLM to study for a closed-book test) are the ones that will produce the best classroom argument; deliberately leave time for those.
Safety: Walk the room while groups discuss. If a student says some version of "well, I already submitted something AI wrote," that is your cue for a private, calm, no-discipline conversation about what they submitted, when, and to which class. Reset, don't punish — but document, and loop in the appropriate teacher per your school's policy.
Day 2 — Discussion: hot scenarios (10 min)
For each of #3, #6, #9, the answer turns on disclosure and contribution. Press on why — using AI to brainstorm a college essay is fine if you write it yourself and disclose; using it to draft the essay and submitting that draft is not. Pasting your own homework for tutoring is fine; pasting someone else's without permission is a privacy violation.
Pacing — Day 3 (60 minutes): The agreement, the quiz, the close
| Time | Segment | What's happening |
| --- | --- | --- |
| 0:00 – 0:10 | Mini-lesson — How to disclose well | The "AI use note" pattern. |
| 0:10 – 0:25 | Activity — Disclosure rewrite | Pairs add disclosure to a draft. |
| 0:25 – 0:45 | AI Use Agreement walk-through | Read aloud, sign, collect. |
| 0:45 – 0:55 | Quiz | 10 questions, individual. |
| 0:55 – 1:00 | Close — careers in the AI economy | Job spread, what's next. |
Day 3 — Mini-lesson: how to disclose well (10 min)
Teach the AI Use Note as a script. Every assignment that used AI gets a one-paragraph footer:
AI Use: I used [model name] to [specific task — e.g., "brainstorm three counterarguments," "rewrite my conclusion for clarity," "explain a concept I didn't understand"]. I verified [what you verified — e.g., "every cited source by reading the original," "the math by recomputing it on paper"]. The thesis, structure, and final wording are mine.
That's the entire pattern. It is short, professional, and disarms 95% of the "did you cheat?" question before it's asked.
Day 3 — Activity: disclosure rewrite (15 min)
Give pairs a fictional submitted paper that used AI without disclosure. They rewrite the AI Use Note for it honestly, in three sentences. Five-minute share-out — read two strong examples aloud.
Day 3 — AI Acceptable Use Agreement walk-through (20 min)
Read the agreement aloud, line by line. Stop and explain anything technical. When you reach the signature line, this is the moment of the unit:
Teacher says: "When you sign this, you are not signing a permission slip. You are signing a professional commitment. The engineers, lawyers, doctors, and writers who use AI in their work all operate under something like this. You are joining the people who use AI like adults."
Collect signed copies. Send a duplicate home for guardian signature. Do not allow students into Unit 6 (local LLMs) labs without both signatures on file.
Day 3 — Quiz (10 min)
Use the included quiz PDF. 6 multiple-choice + 4 short-answer. 14 points. See answer key.
Day 3 — Career connection close (5 min)
Project the salary banner:
AI Prompt Engineer $80K–$135K · ML Engineer (Jr) $95K–$140K · AI Product Manager $100K–$155K · AI Trust & Safety Analyst $70K–$110K · AI Research Engineer $130K–$200K+
Land it: "Every one of these jobs requires the same thing: someone who can use AI without being fooled by it. That's what we're building for the next nine units."
Differentiation, IEP, and 504 supports
- Read-aloud students: every artifact in this unit is available as a screen-reader-friendly PDF. The vocabulary wall doubles as a reference.
- EL students: the three-question test is the load-bearing element. Translate it into the student's home language and post it next to the English version.
- Students without home internet: the agreement and worksheets are paper-first. The frontier-LLM activity in Day 1 can be teacher-demonstrated rather than student-driven if device access is a constraint.
- Students with device-restricted IEPs: pair them with a partner for Day 1's activity; the discussion and ethics work doesn't require independent device use.
Assessment & evidence
- Formative: Day 1 exit ticket, Day 2 scenario card classifications, Day 3 disclosure rewrite.
- Summative: Day 3 quiz (14 points). Signed AI Acceptable Use Agreement on file.
A student "passes" Unit 1 with a quiz score ≥ 10/14 and a signed agreement on file. Anything less, you re-teach the three-question test before they touch a tool in Unit 6.
What's next
Unit 2 cracks open the box. Now that students know what an LLM does, they get to learn how — tokens, embeddings, attention, training vs. inference, and why the model hallucinates. After that we go straight into prompt engineering and never look back.
