AI vendor due diligence in 2026: the 5 questions procurement now asks

By Kootechnikel Solutions Β· 10 min read


The shift that happened quietly

The mid-market AI procurement conversation looked very different in 2024 than it does today.

In 2024, an AI vendor could land a $40K-$200K annual contract with a polished demo, a SOC 2 Type I in process, and a "data is processed in the US" line on the security page. The procurement function was learning what to ask. The legal function was new to AI-specific clauses. The cyber insurance carrier had not yet broken AI out as a separate risk class.

By early 2026, that posture is dead. ISO/IEC 42001 is becoming the universal enterprise standard for AI governance and is showing up in 2026 RFPs in a way it was not last year. The EU AI Act's full high-risk-system obligations land August 2, 2026 β€” anyone with EU customers, EU data subjects, or EU-resident employees is in scope for at least the literacy obligations that took effect in February 2025. State-level US regulation is multiplying. Cyber insurance questionnaires now have AI-specific sections.

The result: serious mid-market RFPs ($50M-$500M revenue companies, the band Cowbell calls "Prime One" eligible) now ask five specific questions about every AI vendor under consideration. The questions are not new individually. The combination has crystallized into a standard pattern that most AI vendors are not yet ready to answer.

If you are deploying AI in 2026, these are the questions your procurement team is asking your vendors. If you are an AI vendor selling into mid-market, these are the questions you need to be ready to answer in the first sales call.

Question 1: "Will our data be used to train your model β€” or anyone else's?"

What's really being asked. We do not want our customer list, our pricing, our IP, our internal Slack messages, our source code, or our financial models showing up in someone else's chat session next quarter. We do not want our data captured in a model that gets fine-tuned on a competitor's behalf six months from now. We need a contractual commitment, not a marketing-page assurance.

The acceptable answer. "No. Customer data is not used for training. The contractual no-training language is in Section 3.4 of the MSA. The same commitment is reproduced in our Trust Center documentation. The technical enforcement happens at the API gateway via tenant-isolated data flows. We can provide a SOC 2 Type II attestation that confirms the controls are operating as described."

What disqualifies you. A vague "we don't sell your data" answer (selling and training are different things). A "data may be used to improve the service" clause without explicit no-training language. A free-tier-only product where the no-training commitment requires upgrading. A contract that grants the vendor broad rights to "telemetry" without bounding it.

The Microsoft Copilot Enterprise, ChatGPT Enterprise, Claude for Work, and Gemini for Workspace tiers all pass this question. Most other AI tools pass it on enterprise tiers but fail on free or pro tiers. This question is the single most common reason a vendor gets disqualified before the technical review begins.


Question 2: "Where is the data physically stored and processed, and can you prove it?"

What's really being asked. Our regulator, our auditor, and our cyber insurance carrier all want to know. "The cloud" is not an answer. PHIPA-regulated Canadian healthcare needs in-Canada processing. HIPAA-regulated US healthcare needs business-associate agreements with documented controls. EU customers need GDPR-compliant data flows. We need a region attestation we can give to our auditor, and increasingly we need bring-your-own-key configurations for the highest-sensitivity data.

The acceptable answer. "We process customer prompts in [specific region]. The choice of region is configurable by tenant. The same region serves model inference, prompt logs, and the audit trail. The encryption keys are managed via [KMS or BYOK option]. We can provide a current data flow diagram and the specific cloud provider regions on request. The data residency commitments are in Section 5 of the MSA."

What disqualifies you. "We use AWS, Azure, and GCP" without specific region commitments. A "best efforts" attestation that does not commit to specific geography in writing. No BYOK option for sensitive deployments. A data flow diagram that you have to ask for three times to receive.

The big enterprise vendors all pass this on their enterprise tiers. The interesting fail mode is when a vendor processes prompts in their committed region but routes to a third-party model API (an OpenAI or Anthropic endpoint) that processes in a different region without disclosing the chain. The procurement team finds this out by reading the sub-processor list on page 14 of the Trust Center documentation.
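The sub-processor fail mode above is mechanical enough to check programmatically. Below is a minimal sketch, assuming a hypothetical attestation document with illustrative field names (nothing here is a real vendor API): it flags every data flow, including disclosed sub-processors, that leaves the committed region.

```python
# Hypothetical sketch: validate a vendor's region attestation against the
# committed residency region, including any disclosed sub-processors.
# All field names are illustrative, not a real vendor schema.

COMMITTED_REGION = "ca-central-1"  # e.g. PHIPA requires in-Canada processing

attestation = {
    "model_inference": "ca-central-1",
    "prompt_logs": "ca-central-1",
    "audit_trail": "ca-central-1",
    # The fail mode from above: a third-party model endpoint
    # quietly processing in a different region.
    "sub_processors": [
        {"name": "third-party-model-api", "region": "us-east-1"},
    ],
}

def residency_violations(attestation: dict, committed: str) -> list[str]:
    """Return every data flow that leaves the committed region."""
    violations = [
        flow for flow, region in attestation.items()
        if flow != "sub_processors" and region != committed
    ]
    violations += [
        f"sub-processor:{sp['name']}"
        for sp in attestation.get("sub_processors", [])
        if sp["region"] != committed
    ]
    return violations

print(residency_violations(attestation, COMMITTED_REGION))
```

A non-empty result is exactly the page-14 discovery described above, surfaced before the contract is signed instead of after.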

Question 3: "What happens when it gets something wrong β€” and who's liable?"

What's really being asked. The Air Canada precedent looms over this one: a Canadian tribunal held the airline liable for a refund policy its website chatbot invented. We need to know your human-in-the-loop checkpoints, your audit log of model outputs, and β€” the part vendors hate β€” your indemnification language for hallucinations that cause downstream financial harm.

The acceptable answer. "Our product surfaces AI outputs as drafts that require human approval before being committed to any customer-facing or regulator-facing surface. The audit log captures the prompt, the AI draft, the human approver, and the final output. Our liability for AI-output errors is capped at [12 months / 24 months / contract value] of fees paid, and the cap explicitly applies to claims arising from inaccurate outputs we deliver. The relevant contract sections are 8.3 and 8.4 of the MSA."

What disqualifies you. A blanket "we are not liable for AI outputs" clause. No human-in-the-loop checkpoint built into the workflow. Audit logs that do not survive the API call (the vendor logs the request internally but you cannot pull the log into your own SIEM). A liability cap so low it would not pay for a single Air Canada-style ruling against you.

This question is the one that exposes whether the vendor has actually thought about the post-Air-Canada world. The vendors who have built workflows around verification gates pass it easily. The vendors who built around "we generate the answer, you ship it" struggle.


Question 4: "Show me your SOC 2 Type II, your ISO 42001 alignment, and your EU AI Act risk classification."

What's really being asked. Our procurement and our cyber insurance carrier both ask for this paperwork now. "We're working on it" does not get us through the gate. ISO/IEC 42001 specifically β€” the AI management system standard β€” is becoming a 2026 RFP requirement in a way it was not last year.

The acceptable answer. "Our SOC 2 Type II report is current and available under NDA β€” it covers the AI service offerings as well as the underlying platform. We are aligned to ISO/IEC 42001 with a published Statement of Applicability; external certification is in progress with audit completion targeted by Q3 2026. Our EU AI Act risk classification for this product is [General Purpose AI / Limited Risk / High Risk] β€” see the documentation packet for the full assessment."

What disqualifies you. SOC 2 Type I "in process" with no Type II date. No mention of ISO 42001. No EU AI Act classification at all. A security page that is mostly marketing copy with no underlying audit reports referenced.

The vendors who pass this question are the ones that recognized in 2024 that compliance was about to fork. The vendors who are scrambling now to meet the standard are visibly behind.

Question 5: "How will we know what our employees are actually doing with this β€” and how do we stop the bad uses?"

What's really being asked. Shadow AI terrifies us, and buying your tool does not solve it; it might make it worse by legitimizing the category. We need to see DLP integration, usage analytics, and the ability to block patterns (PII, PHI, source code, financial documents) at the prompt-submission layer β€” not just trust that a written policy will hold employees back.

The acceptable answer. "Usage analytics are surfaced in the admin console with per-user, per-team, per-prompt-type granularity. We integrate with [Microsoft Purview / Google Workspace DLP / Netskope / Zscaler / Forcepoint] to inspect prompts before submission and block patterns matching your DLP policies. We support custom regex blocking on top of the default PII / PCI / PHI patterns. Audit logs are exportable to your SIEM via [Sentinel / Splunk / Datadog / generic syslog]. We can demo the controls in your sandbox during the technical evaluation."

What disqualifies you. "We trust our users" as the answer. No DLP integration. No SIEM export. Usage analytics that show only license activation, not actual prompt patterns. Inability to block specific data types at submission time.

This question separates the AI vendors that have built for enterprise procurement from the AI vendors that bolted "enterprise" onto a consumer product after the fact. The honest answer to "we are afraid of Shadow AI" is "here are the controls that prevent it on our platform" β€” not "your employees won't do that."

What this means for the AI you are deploying

The five questions above are how serious mid-market procurement is now operating. Three implications for your AI strategy:

1. Pay attention to your vendor's enterprise-tier readiness. The free and pro tiers of most AI products fail at least three of these five questions. The enterprise tier passes them. The cost difference is meaningful but not material to a $250M+ revenue company. The contract difference is enormous. Standardize on enterprise-tier purchasing as policy.

2. Negotiate the contract clauses, do not just sign the standard MSA. The vendors who lead with "we don't customize the contract" are signaling that they are not yet operating at a level of sophistication that warrants your trust. Push back. Insist on the no-training language being in the MSA, not just the Trust Center documentation. Insist on a meaningful liability cap. Insist on data residency in writing.

3. Build the procurement playbook once and reuse it. The five questions above will apply to every AI vendor you evaluate from now on. Make them part of your standard RFP template. Make the answers part of your vendor risk file. Update the file at every renewal. The cost of building this once is dramatically less than the cost of doing AI vendor diligence ad-hoc forever.
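One way to make the playbook reusable is to encode the five questions as a structured checklist and score every vendor against it at intake and at each renewal. The sketch below is an assumed shape for that vendor risk file, not a standard schema.

```python
# A sketch of the reusable playbook: the five questions as a structured
# checklist, scored per vendor. Field names are illustrative only.

FIVE_QUESTIONS = [
    "no_training_commitment",   # Q1: contractual no-training language in the MSA
    "data_residency_proof",     # Q2: region attestation, BYOK option
    "liability_and_hitl",       # Q3: human-in-the-loop, meaningful liability cap
    "compliance_paperwork",     # Q4: SOC 2 Type II, ISO 42001, EU AI Act class
    "usage_controls",           # Q5: DLP integration, SIEM export, prompt blocking
]

def vendor_risk_file(vendor: str, answers: dict[str, bool]) -> dict:
    """Score a vendor; unanswered questions count as failures."""
    failures = [q for q in FIVE_QUESTIONS if not answers.get(q, False)]
    return {
        "vendor": vendor,
        "passes": len(FIVE_QUESTIONS) - len(failures),
        "renegotiate": failures,   # the prioritized list for the next renewal
    }

print(vendor_risk_file("example-ai-vendor", {
    "no_training_commitment": True,
    "data_residency_proof": True,
    "liability_and_hitl": False,
    "compliance_paperwork": True,
    "usage_controls": False,
}))
```

Treating a missing answer as a failure mirrors how procurement actually scores it: "we'll get back to you" and "no" land in the same column.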

The work, and the offer

The free 90-minute IT health check we run for prospective clients includes an AI vendor portfolio review: which AI vendors you currently have under contract, where each one stands on the five questions above, and a prioritized renegotiation list for your next renewal cycle. Yours to keep either way.

The full picture of how we govern AI across vendors lives at /ai/governance. The 6 vendor-specific deep dives β€” Anthropic, OpenAI, Google, Perplexity, GitHub, and self-hosted local LLMs β€” live at /ai/vendors.

AI procurement in 2026 is not what it was in 2024. The vendors have started catching up. The buyers should too.

Related Topics

AI Β· Procurement Β· Due Diligence Β· Vendor Risk Β· Governance