Compliance

What Cyber Insurance Underwriters Want to See for AI Governance in 2026

Carriers are introducing AI Security Riders. Here's what underwriters are actually asking for, and what SMBs need to document before renewal.

Peter Kwidzinski
8 min read
[Illustration: a cyber insurance policy with an AI Security Rider addendum being attached]

If you renewed your cyber policy in 2024, the AI question on the renewal questionnaire was probably one line: "Do you use any artificial intelligence tools in your business operations?" Yes or no. Move on.

If you're renewing in 2026, that one line has become a section. Sometimes a separate addendum. Increasingly, an entirely new policy rider with its own underwriting criteria.

This post is for two audiences: cyber insurance brokers and producers trying to help clients navigate the new questions, and SMB owners who just got an AI security rider questionnaire and aren't sure how to fill it out without losing coverage.

What changed between 2024 and 2026

Three things converged.

First, AI tool adoption in SMBs went vertical. The median 100-employee company now shows 14 to 22 generative AI services in browser telemetry, of which one or two are formally approved. Underwriters know this. They also know that most of those tools have permissive data policies, and that employees routinely paste customer data, source code, and contract terms into them.

Second, claims started landing. Not many — AI-specific cyber claims are still a small fraction of the book — but enough that carriers built actuarial models. The pattern: a senior employee (often unintentionally) exposes regulated data through an AI tool, the breach triggers HIPAA / GLBA / state notification, the cost runs $400K–$1.2M depending on industry, and the carrier discovers there's no documented governance to fall back on.

Third, the EU AI Act enforcement deadline (August 2026) and US state laws (Colorado AI Act, California AB 2013, Texas RAIGA) gave carriers the regulatory hook to require documented controls. Insurers don't write coverage for activities that violate applicable law. If your AI use is governed and documented, you're insurable. If it's not, you're not — or you are, but with an AI exclusion.

The result: AI Security Riders as a standard feature of 2026 cyber policies, with their own underwriting questionnaire and their own conditions for coverage.

What underwriters are actually asking

Across the carrier programs we've seen — Coalition, At-Bay, Resilience, plus several MGAs and wholesale markets — the AI security rider questions cluster into seven categories. Specific phrasing varies; the categories don't.

1. Documented AI Acceptable Use Policy

The first question on every rider questionnaire is some variant of:

"Has the Insured adopted a written policy governing employee use of artificial intelligence tools? If yes, please attach. If no, please describe the controls in place."

What underwriters want: a written AUP that defines AI tools, distinguishes sanctioned from unsanctioned use, lists prohibited uses (specifically: submission of regulated data to consumer-tier AI), establishes a registration process for new tools, and requires employee acknowledgment.

What gets flagged: "No formal policy" or "Use is governed by a general acceptable-use policy that does not specifically address AI." Either answer typically results in an AI exclusion endorsement or a coverage condition requiring policy adoption within 90 days.

2. AI Tool Inventory with Sanctioning Status

"Please describe your inventory of AI tools currently in use across the organization, including the data types processed and the sanctioning/approval status of each."

What underwriters want: a current list of every generative AI service in active use, who uses it, what data flows through it, and whether it's formally sanctioned.

What gets flagged: "We don't maintain a formal inventory." Underwriters interpret this as evidence that shadow AI is uncontrolled — which is actuarially correct, since the median SMB has 8–14 unsanctioned AI tools per IBM's 2025 research.
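To make the expectation concrete, here is a minimal sketch of what a machine-readable version of that inventory could look like. The tool names, users, and data types below are illustrative examples, not carrier requirements:

```python
# Hypothetical AI tool inventory; every entry here is an illustrative example.
from dataclasses import dataclass, field

@dataclass
class AITool:
    name: str
    users: str                       # team or role that uses it
    data_types: list = field(default_factory=list)  # what flows through it
    sanctioned: bool = False         # formally approved?

inventory = [
    AITool("ChatGPT (consumer)", "marketing", ["draft copy", "customer emails"], False),
    AITool("Microsoft 365 Copilot", "all staff", ["documents", "email"], True),
    AITool("Otter (free tier)", "sales", ["call transcripts"], False),
]

# Flag unsanctioned tools -- the gap underwriters are probing for.
unsanctioned = [t.name for t in inventory if not t.sanctioned]
print(f"{len(unsanctioned)} unsanctioned tool(s): {', '.join(unsanctioned)}")
```

Even a spreadsheet with these four columns, kept current, answers the question; the structured form just makes it easy to regenerate at each renewal.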

3. Employee Training on Safe AI Use

"What percentage of employees have received training on the secure use of AI tools, including data handling restrictions? When was the most recent training delivered?"

What underwriters want: documented training, ideally tracked in HRIS or LMS, refreshed annually, with content covering Restricted Data, Sanctioned Tier tools, prohibited uses, and incident reporting.

What gets flagged: under 50% trained, training over 18 months old, or training that doesn't specifically cover AI use cases.
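A hypothetical sketch of how that coverage figure might be computed from an LMS export; the employee names, dates, and the rough encoding of the 18-month staleness threshold are all illustrative assumptions:

```python
# Hypothetical LMS export: employee -> date of last AI-security training (or None).
from datetime import date

training_records = {
    "alice": date(2025, 11, 3),
    "bob": date(2024, 2, 10),   # stale: well over 18 months old at this renewal
    "carol": None,              # never trained
    "dan": date(2026, 1, 15),
}

renewal_date = date(2026, 6, 1)
MAX_AGE_DAYS = 18 * 30  # crude encoding of the ~18-month threshold underwriters flag

current = [
    name for name, trained in training_records.items()
    if trained and (renewal_date - trained).days <= MAX_AGE_DAYS
]
pct = 100 * len(current) // len(training_records)
print(f"{pct}% of staff have current AI training")  # flagged if under 50%
```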

4. Data Loss Prevention Covering AI Egress

"Do you have technical controls (DLP, egress filtering, browser policy) that monitor or block submission of regulated data to AI services?"

What underwriters want: at minimum, a documented egress policy that blocks known-high-risk AI domains (consumer ChatGPT, free Otter, free Read.ai) on managed endpoints. Better: DLP rules that scan for regulated data patterns before submission to any AI domain.

What gets flagged: "No technical controls" — and increasingly, this is a coverage condition rather than a price modifier.
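As a rough illustration of the "better" case, here is a toy pre-submission check that combines a domain blocklist with regulated-data pattern matching. The domain list, patterns, and `check_egress` function are assumptions for the sketch; production DLP products use far richer detection:

```python
# Toy pre-submission DLP check: scan outbound text for regulated-data
# patterns before it reaches an AI domain. Everything here is illustrative.
import re

BLOCKED_AI_DOMAINS = {"chat.openai.com", "otter.ai", "read.ai"}  # example list

PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_egress(domain: str, payload: str) -> list:
    """Return a list of reasons to block this submission (empty = allow)."""
    reasons = []
    if domain in BLOCKED_AI_DOMAINS:
        reasons.append(f"domain {domain} is on the high-risk AI blocklist")
    for label, pattern in PATTERNS.items():
        if pattern.search(payload):
            reasons.append(f"payload matches {label} pattern")
    return reasons

print(check_egress("chat.openai.com", "Customer SSN is 123-45-6789"))
```

Even this crude two-layer check (domain first, content second) mirrors the structure underwriters describe: a documented blocklist as the floor, pattern-aware DLP as the stronger control.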

5. Vendor Governance for Embedded AI

"Have you reviewed Data Processing Agreements with vendors that have added AI-enabled features in the past 24 months?"

What underwriters want: documentation that you've reviewed Salesforce Einstein, HubSpot AI, Zoom AI Companion, Microsoft 365 Copilot (and others) for data handling, training-reuse, and your specific use case. Most of these vendors added AI features without active client renotification — your DPA from 2023 likely doesn't cover what's running today.

What gets flagged: "We rely on vendor terms." Underwriters know you didn't read them.

6. AI Incident Response Plan

"Does your incident response plan address AI-specific incidents (data leakage to AI tools, prompt injection, model abuse)?"

What underwriters want: explicit IR playbook sections covering: (a) data exposed to AI, including takedown requests where supported, (b) AI tools used in social engineering against your staff, (c) generative AI misuse by employees that may trigger HR or regulatory action.

What gets flagged: generic IR plan that doesn't mention AI.

7. Documented AI Risk Owner

"Who in your organization has accountability for AI use governance?"

What underwriters want: a named executive (typically COO, CIO, or CISO) with documented responsibility for the AUP, the inventory, the training, and the incident response. The role doesn't need to be full-time, but it needs to exist on an org chart.

What gets flagged: "No formal owner." Underwriters interpret this as a governance failure regardless of any other controls in place.

What this means at renewal

Three patterns we're seeing in 2026 renewals:

Best case: Insured produces documentation across all seven categories. Renewal proceeds at standard terms, possibly with a small premium credit for demonstrated AI governance maturity.

Common case: Insured can document 3–5 of the 7. Renewal proceeds with either (a) a coverage condition requiring closure of the remaining gaps within 60–90 days, or (b) a small premium increase reflecting elevated risk.

Worst case: Insured can document 0–2 of the 7. Carrier issues either an AI exclusion endorsement (excluding AI-related incidents from coverage entirely) or — increasingly — declines to renew, forcing the broker to remarket. Remarketed AI-uncovered SMBs are seeing 25–60% premium increases in 2026.

What brokers should be telling clients (right now)

The 60- to 90-day window before renewal is when the AI rider questionnaire shows up. By the time a client sees it, they have two weeks at most to produce documentation. Most can't.

The play, if you're a broker:

  1. Identify clients with renewals in the next 6 months. Especially: tech companies, professional services firms, healthcare practices, financial advisors, anyone with regulated data and AI tool adoption. They're at highest risk.

  2. Pre-warn them. A short note from you — "your renewal questionnaire is going to look different this year, here's what to expect" — positions you as the trusted advisor. It also gives them time to act.

  3. Have an AI specialist in your referral network. Most general MSPs and IT consultants don't have the framework expertise (NIST AI RMF, ISO 42001) to produce rider-compliant documentation. A productized 2-week assessment closes the gap before renewal hits.

  4. Document the engagement. Whatever specialist your client engages, the deliverable should explicitly map to the seven categories above. That's what underwriters want to see.

What SMBs should do

If you're an SMB owner reading this and your renewal is in the next 6 months, the honest answer is: start now. Producing the documentation underwriters want takes 2–4 weeks if you have help, and 2–4 months if you're DIY-ing it from a blank page.

Three options:

  • DIY with a structured toolkit — works if you have internal capacity and 30–90 hours to invest. We sell a toolkit at $47–$497 if you want a head start with templates, playbooks, and policy frameworks.
  • Productized assessment — works if you want speed (2 weeks), don't want to staff it internally, and want a deliverable formatted exactly for your underwriter. Our AI Risk Sprint is built for this; $5,500 fixed, includes the rider gap analysis. Many cyber brokers in our partner network refer their renewal-cycle clients here.
  • Custom security firm engagement — works if you have an existing relationship and they have AI specialty depth. Most don't, in our experience.

Whichever path you pick, the deadline is fixed by your renewal date. Insurance carriers don't extend deadlines for governance gaps.


Shadow AI Labs runs a partner program for cyber insurance brokers and producers. If you're seeing AI rider questions on client renewals and want a specialist to refer to, schedule a 20-min partner intro or email partners@shadowailabs.com.

Built by Peter Kwidzinski — AMD Fellow, founding contributor to Caliptra (open-source hardware root of trust now used across the cloud-and-silicon industry), 20+ years in platform security architecture.

Tags: cyber insurance, AI governance, AI Security Rider, underwriting, compliance
Peter Kwidzinski

Founder, Shadow AI Labs

AMD Fellow with twenty years in platform security architecture, confidential computing, and hardware attestation. Founded Shadow AI Labs to help SMBs navigate AI security and governance challenges.

