If your Acceptable Use Policy was written before November 2022, it has a structural problem. It probably governs internet use, software installation, email, and remote access — categories that made sense for the policy's drafting date. It almost certainly does not cover generative AI tool use, because generative AI was not a meaningful workplace category at the time.
The result: your AUP is asking employees to follow rules about a 2018-era workplace, while employees are operating in a 2026-era workplace where AI tools are everywhere. The policy is technically still in force. Practically, it offers no guidance on what your team is actually doing with their workdays.
This post is for COOs, GCs, HR leaders, and IT directors who do not want to throw the whole AUP out and rewrite from scratch. It walks through five specific clauses to add to an existing 2023-era AUP. Each clause includes sample language. The goal is a tactical upgrade — finish in an afternoon, ship to legal review, distribute by end of month — not a six-month policy overhaul.
Why the AUP matters now
Three forces converged in 2025–26 to make AI-specific AUP language a near-universal expectation in regulated and contracted SMBs.
Cyber insurance carriers added AI riders to commercial policies, with documented AUP coverage as a standard underwriting question. Auditors started requiring documented AI governance to satisfy SOC 2 common criteria (for example, CC9.2 on vendor and business-partner risk) and ISO 27001 (Annex A control 5.1, policies for information security). Enterprise customers began including AI use questions in their vendor security questionnaires.
In each case, the question is the same in substance: "Show us the policy that governs your employees' use of AI tools." Pointing to an AUP that does not specifically address AI is not an answer. It tells the carrier, auditor, or customer that you have not yet had the conversation internally — which they assume (correctly) means the conversation is not happening at the desk level either.
The good news: the fix is not complicated. The five clauses below cover roughly 90% of what AUP-related questions are asking about. Sample language is provided as a starting point, not a substitute for review by your own counsel.
Clause 1 — Definitions
The first thing missing from a 2023 AUP is a definition of what counts as an "AI tool." Without this, every other clause becomes ambiguous.
The definition has to be broad enough to cover the actual surface area (chatbots, transcription tools, AI-enabled browser extensions, AI features in existing SaaS tools, AI-powered analytics features, AI development tools) without being so abstract that employees can't recognize a specific tool as falling within the definition.
Sample language:
"AI Tool" means any software application, service, browser extension, embedded feature, or API that uses machine learning, large language models, generative AI, or similar technologies to produce outputs based on input data. This includes but is not limited to: (a) chat-based AI assistants such as ChatGPT, Claude, Gemini, and Microsoft Copilot; (b) AI-enabled transcription, summarization, and meeting-notes tools; (c) AI-powered browser extensions; (d) AI features within otherwise non-AI software (for example, the AI summarization feature in Outlook, AI features in Salesforce, AI capabilities in Zoom); and (e) AI-powered development tools such as GitHub Copilot, Cursor, and similar code assistants.
The breadth matters. Employees reading the policy need to be able to look at a new tool they encountered yesterday and answer "does this fit the definition" clearly. AI features embedded in otherwise non-AI vendor software are the most common edge case — make sure (d) is explicit.
Clause 2 — Data classification rules
The single most important AUP clause for AI is the one that maps data classifications to tool eligibility. Without this, an employee staring at a sanctioned tool has no idea whether they can paste customer data into it.
The clause should reference your existing data classification scheme (if you have one) or define a four-tier scheme inline (if you don't). Typical tiers: Restricted (regulated data like PHI/PII/payment card data, source code, credentials), Confidential (employee data, contracts, financial information not yet public, strategic plans), Internal (non-public operational documents), and Public.
Sample language:
Employees may not enter Restricted data into any AI Tool that does not appear on the Sanctioned Tools List. Employees may enter Confidential data into Sanctioned Tools only where a Data Processing Agreement is on file. Internal data may be entered into Sanctioned Tools at the employee's discretion. Public data has no AI restriction.
Restricted data specifically includes, but is not limited to: customer personal information, customer financial information, protected health information, payment card data, social security numbers and other government identifiers, internal credentials, intellectual property classified as trade secret, and ongoing legal or regulatory matter information.
Where a 2023 AUP often goes wrong is in treating "all customer data" as a single classification. Modern AUPs need the tier system because, in practice, some tools (for example, Microsoft 365 Copilot under an enterprise agreement with a BAA on file) can lawfully handle some Restricted data, while others (consumer ChatGPT) cannot lawfully handle any. The tier system gives the team a way to act on that distinction.
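The tier-to-tool mapping can be sketched as a small lookup. This is a minimal illustration, not a recommended ruleset: the tool names, the BAA/DPA flags, and the rule that Restricted data requires a BAA on file are all hypothetical assumptions layered on the sample language above.

```python
# Illustrative sketch only — tool names and per-tool conditions are hypothetical.
RESTRICTED, CONFIDENTIAL, INTERNAL, PUBLIC = "Restricted", "Confidential", "Internal", "Public"

SANCTIONED_TOOLS = {
    # tool -> metadata an IT team might track per Sanctioned Tools List entry
    "ms-copilot-enterprise": {"dpa_on_file": True, "baa_on_file": True},
    "sanctioned-transcriber": {"dpa_on_file": True, "baa_on_file": False},
}

def may_enter(classification: str, tool: str) -> bool:
    """Apply the Clause 2 rules: which data tiers may go into which tools."""
    meta = SANCTIONED_TOOLS.get(tool)
    if meta is None:                      # not on the Sanctioned Tools List
        return classification == PUBLIC   # only Public data is unrestricted
    if classification == RESTRICTED:
        return meta["baa_on_file"]        # assumption: Restricted requires a BAA
    if classification == CONFIDENTIAL:
        return meta["dpa_on_file"]        # Confidential requires a DPA on file
    return True                           # Internal and Public: permitted
```

The point of the sketch is that the decision is mechanical once the tiers and per-tool conditions exist — which is exactly what the clause is meant to make true at the desk level.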
Clause 3 — Sanctioned tools and the sanctioning process
Once you've defined what an AI tool is and what data goes where, you need a list of the tools that are actually approved — and a process for adding to that list.
The Sanctioned Tools List itself does not belong in the AUP body (it changes too often). The AUP references the list as a separately maintained document and defines the process for adding to it.
Sample language:
The Sanctioned Tools List enumerates the AI Tools currently approved for use within the Company, subject to the conditions stated for each tool. The List is maintained by the IT function and reviewed quarterly. Inclusion on the List reflects that (a) a Business Associate Agreement, Data Processing Agreement, or equivalent vendor agreement is on file where required; (b) the vendor has been subjected to a security review per the Vendor Procurement Review procedure; and (c) the use case described is governed by appropriate operational controls.
Employees who wish to propose a new AI Tool for inclusion on the Sanctioned Tools List should submit a request via the IT ticketing system, including: vendor name, intended use case, data types involved, anticipated user count, and whether the vendor offers a BAA or DPA. The average review cycle is 8 business days. New AI Tools may not be used in the course of Company work prior to addition to the Sanctioned Tools List.
In our experience, the 8-business-day benchmark is achievable for SMBs. If the review consistently stretches past two weeks, employees will route around it — and the AUP loses operational force. Make the review fast enough to be respected.
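The intake fields from the sample language above can be captured as a minimal structured record for the ticketing workflow. This is a sketch only — the class name and the validation rules are invented for illustration, not taken from any particular ticketing system.

```python
from dataclasses import dataclass

@dataclass
class ToolRequest:
    """One Sanctioned Tools List request, mirroring the Clause 3 ticket fields."""
    vendor_name: str
    intended_use_case: str
    data_types: list            # classifications involved, e.g. ["Confidential"]
    anticipated_user_count: int
    offers_baa: bool
    offers_dpa: bool

    def missing_fields(self) -> list:
        """Name the required fields left empty, so triage can bounce the ticket."""
        missing = []
        if not self.vendor_name.strip():
            missing.append("vendor_name")
        if not self.intended_use_case.strip():
            missing.append("intended_use_case")
        if not self.data_types:
            missing.append("data_types")
        if self.anticipated_user_count <= 0:
            missing.append("anticipated_user_count")
        return missing
```

Requiring these fields up front is what keeps the review cycle short: most of the 8 days is usually spent chasing information the requester could have supplied on day one.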
Clause 4 — Prohibited use
This clause is where 2023 AUPs fail most directly. The traditional "do not use Company resources for personal purposes" language does not address the modern problem, which is employees using personal AI accounts for Company purposes.
The 2026 prohibited-use clause needs to be specific and aimed at the actual failure modes that surface in incidents and breaches.
Sample language:
Employees may not, in the course of Company work:
(a) Use consumer accounts of OpenAI's ChatGPT, Anthropic's Claude, Google's Gemini, or Microsoft Copilot (non-enterprise tiers), or any other consumer LLM service, including free-tier and paid personal-subscription accounts, to process Company data of any classification;
(b) Install or use browser extensions that submit data to AI APIs not under a Company vendor contract, including but not limited to extensions installed from the Chrome Web Store, Edge Add-ons store, or Firefox Add-ons without prior IT approval;
(c) Use AI features embedded in tools whose Company-contracted version does not include those features (for example, the AI summarization feature in personal Otter.ai accounts is prohibited; the AI scribe feature in Otter.ai Enterprise+ — if sanctioned — is permitted);
(d) Access AI Tools via personal devices for Company work, regardless of convenience;
(e) Bypass any network or endpoint control intended to restrict AI Tool access, including but not limited to use of personal hotspots, VPNs to personal networks, or alternative browsers.
This is the clause that gets a 2023 AUP from "we have a policy" to "we have a policy that addresses the actual risk vectors." Carriers and auditors look for (a) through (e) specifically. Vague "no unauthorized AI use" language is treated as effectively no language.
Clause 5 — No-blame reporting
The fifth clause is the most underrated in AUP design and the one most directly tied to incident response capability.
Most AI incidents are inadvertent. An employee pastes regulated data into the wrong tool and realizes 30 seconds later that they probably shouldn't have. The question that determines whether your firm has a manageable incident or a regulatory disaster is whether that employee tells someone in the next 24 hours, or hides it until the carrier's renewal questionnaire surfaces it 7 months later.
Punitive policy language drives concealment. No-blame language enables disclosure.
Sample language:
Employees who suspect they may have inadvertently transmitted Restricted or Confidential data to an unsanctioned AI Tool, or otherwise violated this policy in an inadvertent manner, are required to report the incident to the Security Officer (or designated equivalent) within 24 hours of discovery.
No disciplinary action will be taken solely for the act of reporting a suspected violation. The Company explicitly favors disclosure over concealment to enable timely incident assessment and any required regulatory reporting. Reports made in good faith do not constitute admission of intent and will not be used as the sole basis for disciplinary action.
Disciplinary action remains possible where (a) the underlying behavior was willful or grossly negligent, (b) the employee has a documented history of similar violations, or (c) the reporting was untimely without justification.
The "in good faith" framing is important. It is what makes the no-blame standard durable — employees know that genuinely inadvertent mistakes are forgivable, but the policy is not a free pass for repeated or willful violations.
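If incident tooling records a discovery timestamp and a report timestamp, the 24-hour window is trivial to check automatically. A minimal sketch, with the function name assumed:

```python
from datetime import datetime, timedelta

# Clause 5: suspected violations must be reported within 24 hours of discovery.
REPORTING_WINDOW = timedelta(hours=24)

def report_is_timely(discovered_at: datetime, reported_at: datetime) -> bool:
    """True if the report landed within the 24-hour window after discovery."""
    return timedelta(0) <= reported_at - discovered_at <= REPORTING_WINDOW
```

Timeliness is the one element of Clause 5 worth measuring, because it is the input to criterion (c) in the disciplinary carve-out above.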
What to do next
Five clauses. Twenty to forty paragraphs of new text. An afternoon's drafting, a week of legal review, a distribution cycle with required acknowledgment. That is the work to get a 2023 AUP to 2026 standard.
If you want a fully drafted starting point rather than building from this template, the Shadow AI Labs $47 Quick Start Guide includes an AUP template with all five clauses pre-drafted, plus the Sanctioned Tools List template, vendor review checklist, and employee acknowledgment form.
For organizations facing a forcing function — cyber insurance renewal in the next six months, audit prep, enterprise customer DD ask — the AI Risk Sprint produces a custom-drafted AUP based on your specific tool inventory and regulatory context, alongside the discovery, risk classification, and remediation roadmap. Two weeks, $5,500.
Either way: an AUP that predates generative AI is not a policy gap to defer. It is a policy gap that surfaces at every 2026 renewal, audit, or vendor review until it gets filled. Better to fill it on your schedule than the carrier's.