
Writing an AI Acceptable Use Policy for Your School

EduGenius Team · 13 min read

A 2024 RAND Corporation survey found that only 25% of U.S. school districts had a formal AI policy in place, another 35% had one "in development," and the remaining 40% had no policy at all. The result is predictable: teachers making individual decisions about AI use with no shared guidelines, students facing inconsistent expectations across classrooms, and administrators unable to respond clearly when problems arise because there's nothing written down to reference.

The absence of policy is itself a policy — one that says "figure it out yourself." That approach might have worked when a few teachers were experimenting with ChatGPT on their own time, but with AI tools now embedded in learning platforms, productivity suites, and even search engines, every teacher and student is making AI decisions daily. Your school needs a written, practical, reasonable policy — not to restrict innovation, but to create the shared understanding that makes innovation sustainable.

This guide walks you through writing an AI Acceptable Use Policy (AUP) from scratch, with ready-to-adapt language you can customize for your school's context.


Why Your School Needs an AI-Specific Policy Now

Your existing technology AUP probably doesn't cover AI. Traditional AUPs address hardware use, internet access, social media, and digital citizenship. AI introduces fundamentally different questions:

| Traditional AUP Concern | AI-Specific Concern |
|---|---|
| "Don't access inappropriate websites" | "How should students disclose when they've used AI assistance?" |
| "Don't share passwords" | "What student data can be entered into AI tools?" |
| "Don't cyberbully" | "Who owns AI-generated content? Can a student submit it as their own?" |
| "Use school devices responsibly" | "Which AI tools have been approved for classroom use?" |
| "Don't plagiarize" | "Where is the line between AI assistance and AI doing the work?" |

ISTE's 2023 guidance on AI in schools explicitly recommends separate AI policies rather than amendments to existing technology AUPs. The reasoning: AI changes so rapidly that AI policy sections need more frequent review cycles than traditional technology policies. Bundling them together means either reviewing the entire AUP too frequently or letting AI guidance become outdated.


The Seven Essential Sections of an AI AUP

A complete AI AUP should address seven areas. You don't need to write a 40-page document — most effective school AI policies run 4-8 pages total. Clarity and practicality matter more than comprehensiveness.

Section 1: Purpose and Scope

State clearly what the policy covers, who it applies to, and what it aims to achieve.

TEMPLATE LANGUAGE — Purpose and Scope:

This policy establishes guidelines for the use of artificial
intelligence tools by [school name] staff and students. It applies
to all AI-powered tools used for instructional planning, content
creation, student learning activities, assessment, and
administrative tasks — whether on school devices or personal
devices used for school purposes.

This policy aims to:
• Encourage responsible innovation with AI tools
• Protect student data and privacy
• Ensure academic integrity in student work
• Provide clear expectations for staff and students
• Create a framework for evaluating and approving AI tools

This policy does NOT:
• Ban AI use (AI is a reality of modern education and work)
• Replace professional judgment (teachers remain the
  decision-makers)
• Cover every possible AI scenario (it provides principles
  for new situations)

Pro tip: Include a "This policy does NOT" section. It preemptively addresses the fear that a policy means a ban.

Section 2: Approved and Prohibited Tools

Maintain a list of approved AI tools, tools under review, and tools that are prohibited. This is the most frequently updated section of the policy.

| Category | Examples | Update Frequency |
|---|---|---|
| Approved — Vetted for data privacy, educational value, and age-appropriateness | Tools like EduGenius for AI-powered content generation (quiz creation, worksheet design, lesson planning); school-licensed LMS AI features; approved writing assistants | Add as vetted; review quarterly |
| Under Review — Being evaluated; not yet approved for student use | Newly released tools; tools requested by staff | Review within 30 days of request |
| Prohibited — Failed a privacy review, are age-inappropriate, or pose unacceptable risks | Tools requiring student accounts without parental consent; tools that store student data without FERPA-compliant agreements; tools with no content moderation | Review annually (may be reconsidered) |

Pro tip: Create a simple one-page "Approved AI Tools" reference sheet that teachers can tape next to their computers. Update it monthly. The full policy stays in the handbook; the reference sheet goes in the classroom.
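If your IT team keeps the tool list in a structured format, generating that reference sheet can be automated so the classroom copy never drifts from the master list. The sketch below is illustrative only — the tool names, fields, and status labels are assumptions, not an official format:

```python
# Sketch: generate a one-page "Approved AI Tools" reference sheet from a
# structured master list. Tool names and fields are illustrative examples.

tools = [
    {"name": "EduGenius", "status": "approved",
     "use": "Quiz creation, worksheet design, lesson planning"},
    {"name": "LMS AI features", "status": "approved",
     "use": "School-licensed platform tools"},
    {"name": "NewChatTool", "status": "under_review",
     "use": "Staff request pending 30-day review"},
    {"name": "UnvettedBot", "status": "prohibited",
     "use": "No FERPA-compliant data agreement"},
]

def reference_sheet(tools):
    """Return the printable classroom sheet: approved tools only."""
    lines = ["APPROVED AI TOOLS — updated monthly"]
    for t in tools:
        if t["status"] == "approved":
            lines.append(f"  ✓ {t['name']}: {t['use']}")
    return "\n".join(lines)

print(reference_sheet(tools))
```

Because only `approved` entries reach the printed sheet, tools under review or prohibited stay in the full policy document rather than on the classroom wall.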

Section 3: Teacher AI Use Guidelines

TEMPLATE LANGUAGE — Teacher AI Use:

Staff may use approved AI tools for:
✓ Lesson planning and material creation
✓ Generating differentiated content for diverse learners
✓ Creating assessments, rubrics, and study guides
✓ Administrative tasks (emails, reports, meeting notes)
✓ Professional development and research
✓ Adapting materials for IEP/504 accommodations

Staff must:
✓ Review ALL AI-generated content for accuracy before
  classroom use
✓ Verify that AI-generated materials align with curriculum
  standards
✓ Use only approved tools for student-facing activities
✓ Never enter personally identifiable student information
  (names, IDs, grades) into non-approved AI tools
✓ Disclose AI use to colleagues when co-planning
  ("I used AI to draft this, then I modified it for our needs")

Staff must NOT:
✗ Use AI to generate IEP documents, formal evaluations,
  or legal communications without administrator review
✗ Enter student data into non-approved AI platforms
✗ Rely solely on AI for grading subjective assessments
  (AI can assist, not replace human judgment)
✗ Present AI-generated communication to parents as if it
  were personally written (brief disclosure is sufficient)

Section 4: Student AI Use Guidelines

This is the hardest section to write because it must vary by grade level. A K-2 student's AI interaction is fundamentally different from a 7th grader's.

STUDENT AI USE BY GRADE BAND:

GRADES K-2:
• Students do not directly interact with AI tools
• Teachers use AI to generate age-appropriate materials
• AI-powered adaptive learning platforms may be used under
  teacher supervision (e.g., reading programs, math apps)
• No student accounts on AI tools without parental consent
  AND administrative approval

GRADES 3-5:
• Teacher-supervised AI use permitted for approved tools
• Students may use AI for brainstorming and idea generation
  with teacher guidance
• All AI-assisted work must be identified: "I used AI to
  help me with ___"
• Students may NOT use AI to generate final written
  products

GRADES 6-9:
• Students may use approved AI tools independently for
  designated assignments
• Assignment categories:
  - AI PROHIBITED: Assessments, exams, diagnostic writing
  - AI ASSISTED: Research, drafts where student revision
    is required, brainstorming
  - AI INTEGRATED: Projects explicitly designed to include
    AI tools as part of the learning
• Students must cite AI use (teacher provides format)
• Students must be able to explain and defend their work

Section 5: Academic Integrity

TEMPLATE LANGUAGE — Academic Integrity:

AI-generated content submitted as entirely student-created work
is an academic integrity violation, equivalent to submitting
another person's work as one's own.

HOWEVER, using AI as a tool — for brainstorming, feedback,
revision support, or research — is permitted when:
1. The assignment allows AI assistance (see assignment
   categories above)
2. The student discloses their AI use
3. The student can demonstrate their own learning and
   understanding when asked

RESPONSE TO VIOLATIONS:
Level 1 (Likely unintentional): Conference with student;
  re-teach AI use expectations; resubmit assignment
Level 2 (After instruction): Parent notification; reduced
  credit; required AI literacy lesson
Level 3 (Repeated/deliberate): Existing school discipline
  procedures apply; assignment resubmitted without AI

IMPORTANT: AI detection tools (GPTZero, Turnitin AI Detection,
etc.) are unreliable and produce significant false positives
(Liang et al., 2023, found 10-20% false positive rates). They
should NEVER be the sole basis for an academic integrity
accusation. Conversation with the student is the primary
investigation method.

Section 6: Data Privacy and FERPA Compliance

| Data Type | Can Enter Into Approved AI Tools? | Can Enter Into Non-Approved AI Tools? |
|---|---|---|
| General curriculum content (lesson topics, standards) | ✅ Yes | ✅ Yes |
| De-identified student data ("a 4th grader reading at 2nd-grade level") | ✅ Yes | ⚠️ Use caution |
| Student names, IDs, or specific grades | ⚠️ Only tools with FERPA-compliant data agreements | ❌ Never |
| IEP/504 details | ❌ Never enter into any AI tool | ❌ Never |
| Student behavioral or disciplinary records | ❌ Never enter into any AI tool | ❌ Never |
| Student-created work (for feedback purposes) | ✅ Approved tools only | ❌ Never |

Include a simple rule teachers can remember: "If you wouldn't put it on a postcard, don't put it in an AI tool."
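Some districts that route AI requests through a shared proxy or browser extension also automate a first-pass version of the postcard rule. The sketch below is a minimal illustration under assumed patterns — a real screen would need your student information system's actual ID formats and roster data, and would supplement, not replace, teacher judgment:

```python
import re

# Sketch: a first-pass "postcard rule" screen for text bound for an AI tool.
# The patterns are illustrative assumptions, not a complete PII detector.

PII_PATTERNS = [
    (re.compile(r"\b\d{6,9}\b"), "possible student ID number"),
    (re.compile(r"\bIEP\b|\b504\b"), "IEP/504 reference"),
]

def postcard_check(text):
    """Return a list of warnings; an empty list means no obvious red flags."""
    return [label for pattern, label in PII_PATTERNS if pattern.search(text)]

# A generic lesson-planning prompt passes; an IEP detail with an ID does not.
print(postcard_check("Draft a 4th-grade worksheet on fractions"))
print(postcard_check("Summarize the IEP goals for student 12345678"))
```

Note that this kind of filter can only catch obvious patterns; a student's name in plain prose would sail through, which is exactly why the postcard rule is taught to people rather than delegated to software.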

Section 7: Policy Review and Update Schedule

REVIEW SCHEDULE:

QUARTERLY (every 3 months):
• Update approved/prohibited tools list
• Review any AI-related incidents
• Collect teacher feedback on policy practicality

ANNUALLY (start of each school year):
• Full policy review by AI committee
• Update student-facing language for age-appropriateness
• Incorporate new legal requirements or district guidance
• Present updated policy to staff during opening PD

TRIGGERED REVIEWS (as needed):
• New state or federal AI legislation
• Significant AI-related incident at school
• Major new AI tool adoption by the district
• Parent or community concern that reveals a policy gap

What to Avoid

1. Writing policy in isolation. A policy drafted solely by administrators will miss critical classroom realities. Include teachers, an IT representative, a parent, and — for middle school — a student voice in the drafting process. CoSN recommends a diverse AI policy committee of 7-10 members.

2. Making the policy so restrictive that nobody follows it. A policy that prohibits all AI use is unenforceable and counterproductive. Teachers will use AI regardless — the question is whether they'll do so within a framework or without one. Design policy that channels innovation rather than blocking it.

3. Relying on AI detection tools as enforcement. Multiple studies (including Weber-Wulff et al., 2023) have demonstrated that AI detection tools are unreliable, particularly for non-native English speakers and students with learning disabilities — the populations most likely to be falsely accused. Use conversation, not detection software, as your primary investigation method.

4. Treating the policy as a one-time document. AI capabilities change quarterly. A policy written in September may be outdated by January. Build in the review cycles described above, and assign a specific person (not "the committee") responsibility for triggering reviews.


Key Takeaways

  • An AI-specific AUP is essential — your existing technology policy almost certainly doesn't address the unique questions AI raises about academic integrity, content generation, and data privacy. Write a separate AI policy with its own review cycle.
  • Seven sections cover the essentials: purpose and scope, approved/prohibited tools, teacher use, student use (by grade band), academic integrity, data privacy, and review schedule. Keep the total document to 4-8 pages.
  • Grade-banded student expectations are critical. K-2 students shouldn't directly interact with AI tools. Grades 3-5 need supervised access. Grades 6-9 can use AI independently with clear assignment categories (prohibited, assisted, integrated). See AI for School Leaders — A Strategic Guide to Transforming Education Administration for the broader strategic context.
  • Academic integrity requires nuance, not detection software. AI detection tools produce unacceptable false positive rates. Use assignment design (in-class components, process documentation) and student conversation as your primary integrity methods.
  • Data privacy is non-negotiable. Never enter identifiable student information into non-approved AI tools. Train teachers on the "postcard rule" — if you wouldn't write it on a postcard, don't type it into an AI tool.
  • Build review cycles into the policy itself. Quarterly tool list updates, annual full reviews, and triggered reviews for legislation or incidents. Assign a specific person (not a committee) as the review owner.

For practical implementation strategies, see Building a Culture of Innovation — Leading AI Adoption in Schools. For KPIs to measure your policy's effectiveness, see Measuring AI Tool Effectiveness — KPIs for Education Leaders. For the tools evaluation that feeds your approved list, see Best AI Content Generation Tools for Educators — Head-to-Head Comparison.


Frequently Asked Questions

Should our AI policy be a standalone document or part of our existing technology AUP?

Standalone is recommended by ISTE and most education technology organizations. The primary reason: AI policy needs more frequent updates than your general technology AUP, and bundling them creates either too-frequent full reviews or outdated AI guidance. A standalone document can reference your general AUP for broader technology expectations while maintaining its own review cycle. Keep the AI AUP to 4-8 pages so it remains readable and practical.

How do we handle parents who want to opt their children out of all AI use?

Accommodate the request while being transparent about limitations. A student can be excluded from assignments where AI tools are explicitly used — but they cannot be shielded from AI that's embedded in standard educational platforms (Google Docs' smart compose, calculator apps, adaptive learning platforms). Clarify the distinction between "AI as the lesson" (where opt-out is possible) and "AI in the background of standard tools" (where it's not). Document the conversation and accommodation.

What if teachers disagree with the policy?

Build disagreement into the process, not around it. Include a formal feedback mechanism (quarterly survey, suggestion box, designated AI committee representative from each grade level or department). A policy that 100% of teachers love is either too vague or imaginary. The goal is a policy that teachers find reasonable, clear, and protective — not one they find exciting. See AI Professional Development Workshop Plans for Staff Training Days for how to use training days to gather input.

How do we enforce the policy without becoming AI police?

Focus on design rather than detection. Instead of trying to catch AI use after the fact, design assignments that make unauthorized AI use obvious or irrelevant: in-class writing components, process portfolios, oral defenses of written work, revision histories. When teachers design assessments well, enforcement becomes unnecessary because the assessment itself reveals understanding (or lack thereof). See How Principals Can Champion AI Without Being Tech Experts for leadership approaches that don't require technical expertise.

#AI-policy #acceptable-use #school-technology-policy #education-leadership #responsible-AI