Anthropic Takes On the World, One Sector at a Time — Starting with Wall Street

Part 2: Regulatory Landscape & Technical Architecture — May 5, 2026

The Regulatory Gauntlet: Key Deadlines and High-Risk Classifications

Anthropic's 10 financial services AI agents face a multi-jurisdictional regulatory landscape with hard enforcement deadlines arriving within weeks. At least 4 of the 10 agents likely qualify as high-risk AI systems under the EU AI Act, creating immediate compliance obligations for every deploying institution.

  • Aug 2 (EU AI Act enforcement): high-risk obligations become enforceable for financial AI systems
  • Jun 30 (Colorado AI Act): first comprehensive US state AI law takes effect; $20K per violation
  • Jun 9 (FinCEN AML comment deadline): last chance to influence AI-in-AML compliance standards
  • 4 of 10 agents rated high-risk: KYC Screener, Model Builder, Valuation Reviewer, Statement Auditor

Under Annex III of the EU AI Act, AI systems used for creditworthiness evaluation, credit scoring, risk assessment, and pricing in insurance are explicitly designated as high-risk. Four of Anthropic's agents fall squarely within these categories:

| Agent | Classification | Reasoning |
| --- | --- | --- |
| KYC Screener | HIGH-RISK | Processes personal data for regulatory compliance decisions affecting natural persons; feeds credit/risk determinations |
| Model Builder | HIGH-RISK | Financial models that feed creditworthiness or lending decisions on natural persons |
| Valuation Reviewer | HIGH-RISK (context-dependent) | Valuations affecting credit, insurance, or investment decisions impacting natural persons |
| Statement Auditor | HIGH-RISK (when feeding consumer decisions) | Primarily institutional but triggers high-risk if output feeds consumer-affecting decisions |
| Pitch Builder | LIMITED RISK | Client-facing content generation; transparency obligations only |
| FIS Financial Crimes Agent | NOT HIGH-RISK | Annex III explicitly excepts fraud detection AI from the credit scoring high-risk category |
| Others (Meeting Prep, Earnings, Market Research, GL Reconciler, Month-End) | MINIMAL/LIMITED | Primarily internal analytical or operational tools |

Penalty Exposure

High-risk system violations: up to EUR 15 million or 3% of global annual turnover (whichever is higher). For JPMorgan (2025 revenue ~$180B), 3% = a potential $5.4 billion penalty. Prohibited AI practices carry even steeper fines: EUR 35 million or 7% of global turnover.

AIG tested Claude against professional claims adjusters on 100 insurance claims. Claude agreed with human experts 88% of the time. The FCA has explicitly warned that algorithmic systems embedding or amplifying bias, or delivering opaque pricing, will be treated as direct breaches of the Consumer Duty.

What 12% Wrong Means Under FCA Rules

Each wrongly denied claim is a potential Consumer Duty violation. SM&CR creates personal accountability — a named senior manager is responsible for AI-driven claims outcomes. The FCA can impose unlimited fines, require firms to stop using specific AI systems, and mandate consumer redress programs.

The 12% error rate must be analyzed: are errors systematic? Do they disproportionately affect vulnerable customers? The FCA will assess whether the error rate represents acceptable performance with adequate human review, or a systemic failure requiring intervention.
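A first-pass check of whether errors are "systematic" can be run with a standard two-proportion z-test. The sketch below uses invented claim counts for illustration; the FCA prescribes no particular test, and the group sizes and error counts here are hypothetical.

```python
import math

def two_proportion_z(err_a: int, n_a: int, err_b: int, n_b: int) -> float:
    """Z statistic for H0: error rates in groups A and B are equal."""
    p_a, p_b = err_a / n_a, err_b / n_b
    pooled = (err_a + err_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical numbers: 18 wrongful denials in 100 claims from vulnerable
# customers vs 9 in 150 claims from the rest of the book.
z = two_proportion_z(18, 100, 9, 150)
# |z| > 1.96 would flag a disparity at the 5% significance level.
```

A significant disparity would push the assessment toward "systemic failure requiring intervention" rather than "acceptable performance with adequate human review."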

Switzerland follows a sector-specific, technology-neutral approach. FINMA Guidance Note 08/2024 (issued December 18, 2024) is the operative compliance framework, requiring:

  • Governance Framework: Cross-functional AI governance integrated into existing risk management
  • Transparency and Explainability: Institutions must explain AI-driven decisions
  • Non-Discrimination: AI must not produce discriminatory outcomes; testing required
  • Outsourcing Compliance: Using Anthropic (US-based) triggers FINMA Circular 2018/3 on outsourcing — risk assessment, contractual audit rights, business continuity planning, and data protection under nDSG

A limited Swiss AI bill and non-binding measures are expected by end-2026. For Swiss-based funds, FINMA guidance applies directly to any regulated activities using AI agents.

The SEC withdrew the proposed Predictive Data Analytics rule in June 2025. There is currently no AI-specific SEC rulemaking. Instead, the SEC pursues an enforcement-led approach through existing frameworks.

However, AI is now a primary focus of SEC 2026 examinations, explicitly addressing AI supervision, AI-driven recommendations, AI vendor due diligence, and documentation of AI system oversight.

FINRA 2026 Oversight Expectations

Firms must demonstrate: formal review and approval processes for each AI use case, comprehensive documentation throughout the AI lifecycle, ongoing monitoring of prompts/outputs/performance, prompt and output logging for accountability, and change management controls when AI models are updated.
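As a sketch of what "prompt and output logging for accountability" might look like in practice (an illustrative design, not a FINRA-mandated format), each record can be hash-chained to the previous one so that after-the-fact edits are detectable:

```python
import hashlib
import json
import time

class AuditLog:
    """Minimal hash-chained prompt/output log (a sketch, not a certified design)."""

    def __init__(self):
        self.entries = []
        self._prev = "genesis"

    def record(self, agent: str, prompt: str, output: str) -> str:
        # Each entry embeds the previous entry's hash, forming a chain.
        entry = {"agent": agent, "prompt": prompt, "output": output,
                 "ts": time.time(), "prev": self._prev}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append({"entry": entry, "hash": digest})
        self._prev = digest
        return digest

    def verify(self) -> bool:
        # Recompute the chain; editing any entry breaks every later hash.
        prev = "genesis"
        for e in self.entries:
            if e["entry"]["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(e["entry"], sort_keys=True).encode()).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("pitch-builder", "Draft a teaser for ...", "DRAFT: ...")
log.record("pitch-builder", "Revise slide 3", "DRAFT v2: ...")
```

Tamper evidence of this kind supports both the ongoing-monitoring and books-and-records expectations described above.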

All client-facing AI output is treated as regulated communications requiring principal review under FINRA Rule 2210. AI-generated pitchbooks must have pre-use review by a registered principal, fair and balanced presentation, no misleading statements, and full record retention.

Ten Regulatory Frameworks, One AI Deployment

Each framework applies differently to Anthropic's agent suite. Below is a detailed analysis of obligations, penalties, and action items under each regime.

EU AI Act

Regulation (EU) 2024/1689. Entered into force August 1, 2024. High-risk obligations enforceable August 2, 2026.

Annex III, Section 5(b) explicitly designates AI systems for creditworthiness evaluation as high-risk. Section 5(c) covers risk assessment and pricing for life and health insurance.

Compliance requirements for high-risk systems (Articles 6-15):

  • Risk Management System (Art. 9): Continuous, iterative process throughout AI lifecycle
  • Data Governance (Art. 10): Training data must be representative, free of errors, address biases
  • Technical Documentation (Art. 11): Detailed documentation before market placement
  • Record-Keeping/Logging (Art. 12): Automatic recording of events for traceability
  • Transparency (Art. 13): Enable deployers to interpret output appropriately
  • Human Oversight (Art. 14): Effective oversight by competent natural persons
  • Accuracy, Robustness, Cybersecurity (Art. 15): Resilience to unauthorized alteration
  • Conformity Assessment (Art. 43): Internal control or third-party assessment

GPAI Model Considerations: Claude Opus 4.7 likely qualifies as a General Purpose AI Model under Article 51, triggering technical documentation, copyright policy transparency, and training data summary requirements. If classified as having systemic risk (trained with >10^25 FLOPs): additional model evaluation, adversarial testing, incident reporting, and cybersecurity obligations.

Penalties

Prohibited AI practices: EUR 35M or 7% global turnover. High-risk violations: EUR 15M or 3%. Incorrect information: EUR 7.5M or 1%.

US Securities Regulation: SEC and FINRA

Key regulations: Securities Exchange Act Section 15(c), Investment Advisers Act, FINRA Rules 2210 (Communications) & 3110 (Supervision), Regulation Best Interest, Regulation S-P (privacy, compliance deadline June 3, 2026).

Current posture: No AI-specific rulemaking. SEC withdrew the Predictive Data Analytics rule in June 2025. AI is a 2026 examination priority through existing frameworks.

Agent-specific implications:

  • Pitch Builder: FINRA Rule 2210 applies to all communications regardless of human or AI origin. AI-generated pitchbooks are regulated communications requiring pre-use approval by registered principal.
  • Model Builder / Valuation Reviewer: If models feed retail recommendations, Regulation Best Interest applies. AI-generated models subject to same supervisory review as human-generated.
  • Market Researcher: If output qualifies as "research" under FINRA Rule 2241, specific conflict-of-interest disclosures apply. Must monitor for MNPI exposure.

What firms must do: Treat all client-facing AI output as regulated communications. Document supervisory procedures for each agent. Comply with Reg S-P amendments by June 3, 2026. Maintain prompt/output logs as books and records under Rules 17a-3/17a-4.

Switzerland: FINMA

FINMA Guidance Note 08/2024 (December 18, 2024). Swiss Financial Market Supervisory Authority Act. Swiss Data Protection Act (nDSG). "Same business, same risks, same rules" principle.

Requirements:

  • Governance: Cross-functional AI governance integrated into risk management
  • Transparency: Institutions must explain AI-driven decisions
  • Robustness: Monitor for drift and degradation
  • Non-Discrimination: Test for discriminatory outcomes
  • Accountability: Clear chains; senior management oversight
  • Outsourcing: Third-party AI (Anthropic) triggers FINMA Circular 2018/3 — risk assessment, audit rights, BCP, data protection

Swiss AI bill expected end-2026 — limited bill and non-binding measures. Organizations should build AI governance into operational design now, not retrofit later.

United Kingdom: FCA Consumer Duty

FCA Consumer Duty (PS22/9), effective July 31, 2023. FCA Principles for Businesses (PRIN 2A). Senior Managers and Certification Regime (SM&CR).

Firms must deliver good outcomes across four areas: products and services, price and value, consumer understanding, and consumer support.

AI-specific enforcement posture:

  • Algorithmic bias or opaque pricing = direct breach of Consumer Duty
  • Firms must record how customer outcomes were considered at design and deployment
  • SM&CR creates personal accountability for AI outcomes — a named senior manager is responsible
  • FCA can impose unlimited fines, require stopping AI use, mandate consumer redress

AIG 88% accuracy: 12% error rate requires analysis. Are errors systematic? Disproportionately affect vulnerable customers? Each wrongly denied claim is a potential violation. FCA assessing AI in underwriting, claims, and consumer services throughout 2026.

US Model Risk Management: SR 11-7

SR 11-7 (Federal Reserve, April 2011). OCC Bulletin 2011-12. OCC Bulletin 2026-13 (updated guidance).

Is Claude a "model"? Yes, when producing quantitative outputs for risk, capital, or financial reporting decisions. However, 2026 OCC guidance explicitly states: "generative AI and agentic AI models are novel and rapidly evolving, and as such, they are not within the scope of this guidance." This creates a regulatory gap — examiners apply SR 11-7 principles through examination findings rather than formal guidance.

Industry Readiness

Only 26.4% of financial institutions express confidence in their AI compliance readiness (Wolters Kluwer Q1 2026). 58.8% say clearer guidance is the single biggest barrier. Examiners are issuing MRAs (Matters Requiring Attention) based on SR 11-7 principles applied to AI.

LLM validation challenges: Outputs are probabilistic (reproducibility issue), historical back-testing is difficult, input space is infinite (natural language), model architecture is proprietary, and Anthropic updates change behavior without bank control.

US Anti-Money Laundering: BSA and FinCEN

Bank Secrecy Act, USA PATRIOT Act Section 326, FinCEN Final Rule (effective Jan 1, 2026), FinCEN Proposed Rule (April 13, 2026; comments due June 9, 2026).

Can AI replace human SAR filing decisions? No. The regulatory framework requires:

  • Human-in-the-loop for SAR determinations — no fully autonomous filing permitted
  • Qualified BSA/AML analyst must review AI case packages
  • Human must exercise independent judgment (not rubber-stamp)
  • Human must have authority to override AI recommendations
  • Review process must be documented and auditable
  • Institutions must demonstrate reviewers have adequate training and time

FinCEN 2026 proposed rule creates incentive for AI: Focuses on effectiveness over checklists, allows AI to automate alert triage, permits AI for "lower-risk SARs" with oversight, ties enforcement to material/systemic failures rather than isolated issues.

Penalties

Civil: up to $1 million per BSA violation. Criminal: up to $500,000 and/or 10 years for willful violations. Individual liability for compliance officers who fail to supervise AI properly.

Data Protection: GDPR, nDSG, and DORA

GDPR (EU) 2016/679, Swiss nDSG, EU-US Data Privacy Framework, Digital Operational Resilience Act (DORA, effective January 17, 2025).

The Moody's 600M+ company dataset issue: When the KYC Screener processes data on EU-based entities or natural persons via Moody's MCP connection:

  • Lawful Basis (Art. 6): "Legal obligation" (Art. 6(1)(c)) or "legitimate interest" (Art. 6(1)(f)) — each deployer must document
  • Data Processing Agreements (Art. 28): Required between bank/Anthropic and bank/Moody's; must specify purposes, duration, data types
  • Cross-Border Transfer (Chapter V): EU personal data processed by US-based Claude infrastructure requires transfer mechanisms (DPF adequacy or SCCs + TIA)
  • DPIA (Art. 35): Mandatory for systematic profiling with significant effects (KYC screening). Must be completed before deployment.
  • Automated Decision-Making (Art. 22): Data subjects have right to not be subject to solely automated decisions with legal effects; right to human intervention and right to contest

DORA obligations: ICT risk assessments for third-party AI providers, contractual audit rights, incident reporting for AI failures, operational resilience testing.

Penalties: GDPR: EUR 20 million or 4% of global annual turnover.

Insurance Regulation

Solvency II Directive, NAIC Model Bulletin on AI (December 2023), State Unfair Claims Settlement Practices Acts.

AIG liability for wrongful AI denials:

  • The insurer cannot delegate its duty of good faith and fair dealing to AI — a wrongful denial is the insurer's denial regardless of whether human or AI made it
  • Courts increasingly hold that AI-driven denials without meaningful human review constitute evidence of bad faith
  • Class action exposure for systematic AI-driven wrongful denials

ISO AI Exclusion Endorsements (2026): CG 40 47, CG 40 48, CG 35 08 create AI-specific exclusions in commercial general liability policies. Firms deploying AI face potential coverage gaps — traditional E&O/D&O policies may also exclude AI-related claims.

Non-Delegable Duty

The duty of good faith in insurance claims processing is non-delegable. AI is a tool, not a substitute for the insurer's obligations. Every AI-denied claim must have a meaningful human review pathway and clear disclosure to policyholders.

Product Liability

EU Revised Product Liability Directive (entered into force December 2024; transposition by December 9, 2026). US product liability (state-by-state). Professional negligence frameworks.

When Claude generates a bad credit memo — liability layers:

1. Bank/GP (most exposed): Primary liability. Fiduciary duty cannot be delegated to AI. "AI told me to" is not a defense. The firm adopted and relied on AI output. Defense: robust human review, independent verification, documentation of overrides.

2. Anthropic (product liability): Under the revised EU PLD, AI/software is now explicitly a "product." Non-compliance with the AI Act creates a presumption of defect. Strict liability — no need to prove fault, only defect and causation. A claimant can compel Anthropic to disclose technical evidence.

3. Data providers (Moody's): If the credit memo relied on inaccurate Moody's data, contractual claims are possible. Standard disclaimers may not fully insulate against negligence claims in a financial services context.

4. GP (personal liability): Mechanical reliance on AI without independent judgment = breach of fiduciary duty to LPs, breach of duty of care, potential securities fraud if material misrepresentations result.

Key Principle

Under the revised EU PLD, non-compliance with the AI Act directly informs defectiveness. A high-risk AI system lacking required safety features is presumed defective. This creates a direct regulatory-to-liability pipeline: fail compliance = face strict liability.

Immediate deadlines:

| Date | Regulation | Impact |
| --- | --- | --- |
| June 3, 2026 | SEC Regulation S-P amendments | AI vendors touching client data must comply with updated privacy/breach rules |
| June 9, 2026 | FinCEN proposed AML rule comment period | Last chance to influence AI-in-AML standards |
| June 30, 2026 | Colorado AI Act (SB 24-205) | First comprehensive US state AI law; financial services in scope; algorithmic discrimination prevention |
| August 2, 2026 | EU AI Act high-risk obligations | Conformity assessments, human oversight, logging required |
| December 9, 2026 | EU Revised PLD transposition | AI/software as "product" under strict liability across EU member states |

Emerging landscape:

  • US: State AI legislation proliferating; SEC enforcement-led (no new rules); FINRA intensifying AI documentation focus; FinCEN AML reform likely finalized 2027
  • EU: Full AI Act enforcement by August 2, 2027; DORA creating third-party AI risk management obligations; AMLR effective 2027
  • Switzerland: Federal AI bill expected end-2026; FINMA likely to update Guidance Note 08/2024
  • UK: FCA enforcement under Consumer Duty active and intensifying; AI Safety Institute developing evaluation frameworks
  • International: ISO/IEC 42001 adoption growing; Basel Committee likely to issue AI-specific model risk guidance

Under the Hood: Claude Opus 4.7 and the Financial Services Stack

Anthropic's financial services play is not just a model — it is a full-stack platform: model capability (Opus 4.7) + data layer (MCP/Moody's) + distribution (M365) + services (FDE JV) + vertical templates (10 agents). Each piece reinforces the others and creates switching costs.

  • 1M-token context window: an entire 10-K plus years of priors in one query
  • 64.4% on the Vals AI Finance Agent benchmark (v1.1): industry-leading for multi-step financial workflows
  • 82.7% FinanceBench score: financial document extraction and calculation accuracy
  • 77.3% tool orchestration (MCP-Atlas benchmark): leads GPT-5.4 (68.1%)

Shipped April 16, 2026. The 1 million token context window is transformative for finance: an entire 10-K filing (80,000-120,000 words) plus supplementary filings, comparables, and the analyst's existing model — all in a single query. Previous models forced document chunking and lost cross-reference ability.

| Benchmark | Opus 4.7 | GPT-5.4 | Gemini 3.1 Pro |
| --- | --- | --- | --- |
| SWE-bench Pro (coding) | 64.3% | 57.7% | 54.2% |
| MCP-Atlas (tool orchestration) | 77.3% | 68.1% | 73.9% |
| GPQA Diamond (knowledge) | 94.2% | 94.4% | 94.3% |
| BrowseComp (web research) | 79.3% | 89.3% | 85.9% |
| Vals AI Finance Agent v1.1 | 64.4% | n/a | n/a |
| FinanceBench | 82.7% | n/a | n/a |

The pattern: Opus 4.7 leads on agentic work (coding, tool use) and is competitive on knowledge. It trails on web research. For financial services, the agentic lead matters most — building models, orchestrating multi-step analyses, and using tools like Moody's connectors requires exactly the capabilities where Opus leads.

What 64.4% Actually Means

The Finance Agent benchmark tests multi-step workflows: planning, tool calling, producing coherent output. 64.4% means Claude can complete roughly two-thirds of complex financial workflows autonomously. The remaining third requires human intervention — ambiguous tasks, judgment calls, or edge cases. No model has broken 70%. These agents are accelerants for analysts, not replacements.

Model Context Protocol (MCP) is an open protocol standardizing how AI models connect to external data sources — "USB for AI." Architecture:

  • Protocol: JSON-RPC with stateful sessions per connection
  • Authentication: OAuth with audience-restricted tokens; DPoP or mTLS prevents token theft/replay
  • Transport: TLS 1.3 required; AES-256 encryption at rest and in transit
  • Session isolation: Each client-server pair has its own session — Goldman's queries cannot leak into Citi's session
  • Data scope: Moody's provides 600 million entities and 2 billion ownership links
  • Enterprise gateway: Centralized control plane for monitoring, policy enforcement, and risk mitigation

How a credit memo works: Claude parses the request, invokes Moody's MCP server with structured requests, retrieves ratings/financials/ownership chains, spawns methodology-check subagents, and assembles the formatted output — all within the same session context with full audit logging.
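MCP frames each of those tool calls as a JSON-RPC 2.0 request. The sketch below shows the wire shape; `tools/call` is MCP's standard tool-invocation method, but the tool names and arguments here are invented for illustration, not Moody's actual connector API.

```python
import itertools
import json

_ids = itertools.count(1)

def mcp_tool_call(name: str, arguments: dict) -> str:
    """Frame one MCP tool invocation as a JSON-RPC 2.0 request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),            # unique per request within the session
        "method": "tools/call",      # MCP's tool-invocation method
        "params": {"name": name, "arguments": arguments},
    })

# The credit-memo flow as three sequential, individually auditable calls
# against a Moody's-style MCP server (tool names are hypothetical).
requests = [
    mcp_tool_call("get_ratings", {"entity": "ACME Holdings"}),
    mcp_tool_call("get_financials", {"entity": "ACME Holdings", "years": 3}),
    mcp_tool_call("get_ownership_chain", {"entity": "ACME Holdings"}),
]
```

Because every request and response is a discrete, structured message within one session, each step can land in the audit log, which is what makes the per-call traceability described above possible.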

Latency: MCP adds network round-trip per tool call. For 5-10 Moody's queries, expect seconds of additional latency per query. Fine for back-office work; unacceptable for execution desks.

Claude operates as Office add-ins embedded within Excel, PowerPoint, and Word (Outlook in beta). The April 2026 upgrade: full conversation context now carries across applications.

What this means: Analyzing an LBO model in Excel, then opening PowerPoint — Claude carries the enterprise value, debt structure, and return assumptions without re-explaining. Persistent session state across Microsoft applications.

| Dimension | Claude for M365 | Microsoft Copilot |
| --- | --- | --- |
| Underlying model | Claude Opus 4.7 | GPT-4o (OpenAI) |
| Financial reasoning | Stronger (FinanceBench 82.7%) | Adequate for general tasks |
| Integration depth | Add-in (requires installation) | Native (built into M365) |
| External data access | MCP connectors (Moody's, etc.) | Microsoft Graph ecosystem |
| Agentic capability | Multi-step financial workflows | General productivity |

Copilot conversion problem: Microsoft has 345 million M365 seats but only ~3.3% Copilot conversion. This leaves a massive addressable market for Claude add-ins — but reaching it requires per-seat installation rather than native integration.

An agent template packages three components — closer to microservices architecture than simple prompt templates:

1. Skills: structured domain expertise. The KYC Screener has a "kyc-rules" skill encoding firm-specific KYC/AML rules applied to parsed onboarding records. Not generic prompts — structured knowledge.

2. Connectors: governed access to live data sources. Each connector defines what data the agent can access and what it cannot. Per-tool permissions gate destructive actions.

3. Subagents: smaller Claude instances handling specific subtasks. The Valuation Reviewer spawns a comparables-selection subagent, then passes results to a methodology-check subagent. Composable workflows with multiple Claude instances coordinating through defined interfaces.

State management: Claude Managed Agents provide long-running sessions (not single-query interactions). Audit logging records every tool call and decision. Per-tool permissions ensure agents can read but not write without human approval. Managed credential vaults control system access.

Human-in-the-loop enforced: Agents produce draft outputs, not final deliverables. Per-tool permissions gate destructive actions (posting entries, filing documents).
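A minimal sketch of that per-tool gating (agent and tool names are hypothetical): read-only tools execute directly, while write-capable tools fall back to a draft unless a human has explicitly approved the action.

```python
from enum import Enum

class Action(Enum):
    READ = "read"
    WRITE = "write"

# Hypothetical grant table: which tools an agent may touch, and how.
POLICY = {
    ("gl-reconciler", "fetch_ledger"): Action.READ,
    ("gl-reconciler", "post_journal_entry"): Action.WRITE,
}

def invoke(agent: str, tool: str, human_approved: bool = False) -> str:
    """Gate a tool call: reads run; writes need explicit human approval."""
    action = POLICY.get((agent, tool))
    if action is None:
        raise PermissionError(f"{agent} has no grant for {tool}")
    if action is Action.WRITE and not human_approved:
        return "DRAFT_ONLY"   # output stays a draft, nothing is posted
    return "EXECUTED"
```

The design choice is that denial is the default: an unlisted tool raises, and a listed write-capable tool degrades to a draft rather than failing silently.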

| Certification | Status |
| --- | --- |
| SOC 2 Type I and Type II | Certified |
| ISO 27001:2022 | Certified |
| ISO/IEC 42001:2023 (AI Management Systems) | Certified |
| HIPAA | BAA available |
| FedRAMP High | Authorized (Claude for Government) |

Key risks:

  • Prompt injection in financial documents: Malicious actors could embed adversarial text in KYC docs, earnings filings, or market reports to manipulate analysis. Opus 4.7 includes automated detection/blocking of high-risk requests, but no model is immune.
  • KYC document adversarial inputs: Documents come from the entities being screened — fraudulent entities could craft evasion documents. Human-in-the-loop for compliance escalations is the critical safeguard.
  • No on-premise option: Regulated institutions requiring air-gapped environments must use Bedrock GovCloud or similar government-certified channels.

Cost at scale (10,000 daily KYC checks):

  • Input: 1.5 billion tokens/day at $5/million = $7,500/day
  • Output: 100 million tokens/day at $25/million = $2,500/day
  • Raw total: ~$3.6 million/year
  • With Batch API (50% off) and prompt caching (up to 90% reduction for repeated inputs): $1-2 million/year
  • Equivalent human team (50-100 KYC analysts): $4-12 million/year
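Reproducing the back-of-envelope estimate above (token volumes, per-million prices, and discount percentages are the figures quoted in the text):

```python
# Daily volumes and list prices from the estimate above.
input_tokens = 1_500_000_000               # input tokens per day
output_tokens = 100_000_000                # output tokens per day
in_price, out_price = 5 / 1e6, 25 / 1e6    # $ per token

daily = input_tokens * in_price + output_tokens * out_price  # $10,000/day
annual = daily * 365            # ~$3.65M/year, the "raw total"
with_batch = annual * 0.5       # Batch API's 50% discount
# Prompt caching (up to 90% off repeated *input* tokens) pushes the
# effective figure further down, toward the $1-2M/year range cited above.
```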

The AIG 88% Statistical Problem

AIG's 88% accuracy claim was tested on only 100 claims. The confidence interval on 88/100 is roughly 80-94%. This is not statistically robust enough to make production deployment decisions. More data is needed before drawing conclusions about error rates at scale. The true accuracy could be anywhere in that range.
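The interval quoted above can be reproduced with a Wilson score interval (one standard choice for a binomial proportion; an exact Clopper-Pearson interval gives similar bounds):

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

lo, hi = wilson_interval(88, 100)
# lo ≈ 0.80, hi ≈ 0.93: agreement anywhere from about 80% to 93% is
# consistent with the 88/100 result, so the 12% error rate is itself
# a highly uncertain estimate.
```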

Economics work only at volume: Small institutions will find unit economics challenging. The model is designed for large banks processing thousands of transactions daily, not boutique firms with dozens.

The Deployment War: Two AI Giants, Two Financial Services Strategies

On May 4-5, 2026, both Anthropic and OpenAI made major financial services announcements within hours of each other. Same day, radically different strategies.

  • $1.5B (Anthropic JV): forward-deployed engineers with Blackstone, Goldman, H&F
  • $10B (OpenAI JV): "The Deployment Company" with TPG, Brookfield, Advent, Bain
  • 17.5% guaranteed return: OpenAI guarantees annual IRR to PE backers over 5 years
  • 345M M365 seats: Microsoft plays both sides; only 3.3% Copilot conversion
| Dimension | Anthropic | OpenAI |
| --- | --- | --- |
| Capital raised | $1.5 billion | $10 billion |
| Delivery model | Forward-deployed engineers (Palantir model) | Consulting firm partnerships (PwC, McKinsey) |
| Financial guarantee | None disclosed | 17.5% annual IRR guaranteed to PE backers |
| Anchor investors | Blackstone, Goldman Sachs, Hellman & Friedman | TPG, Brookfield, Advent, Bain Capital |
| Agent templates | 10 purpose-built for finance | PwC co-developed agents |
| Data partnerships | Moody's (MCP native), S&P, Bloomberg, LSEG | Not announced for finance specifically |
| M365 integration | Add-ins (Excel, PPT, Word) | Native via Copilot relationship |
| Target market | Mid-market PE portfolio companies | Enterprise (Big 4 clients) |
| Consulting partners | Accenture (30K trained), Deloitte (470K access) | PwC, McKinsey |
| Revenue scale | ~$2B ARR (estimated) | $25B+ ARR |
Where Anthropic leads:

1. Model quality for finance: 64.4% on Vals AI Finance Agent (industry-leading), 82.7% FinanceBench, 77.3% tool orchestration vs 68.1% for GPT-5.4. For the agentic work that defines financial services automation — building models, orchestrating analyses, using data connectors — Opus leads.

2. Wall Street credibility: Goldman Sachs and Blackstone as anchor investors. Jamie Dimon publicly endorsing. JPMorgan, Goldman, Citi, AIG, and Visa as named production customers. These are not pilots — these are production deployments at the firms that ARE Wall Street.

3. Purpose-built vertical agents: 10 agents designed specifically for financial workflows: KYC screening, credit modeling, pitchbook generation, month-end close, valuation review. Not general-purpose agents with a finance prompt — pre-configured, workflow-specific, with domain skills and data connectors built in.

4. Data partnership moat: Moody's MCP integration creates switching costs. Once workflows depend on MCP connectors to Moody's through Claude, moving to a competitor requires rebuilding data pipelines. 600M entities + 2B ownership links accessible in-session.

Where OpenAI leads:

1. Scale and capital: $10 billion vs $1.5 billion. $25B+ ARR vs ~$2B. OpenAI has 6-7x more deployment capital and 12x more recurring revenue. The scale advantage means more engineers, faster iteration, broader geographic coverage.

2. Distribution via consulting partnerships: PwC already has partners embedded at every major bank. McKinsey has relationships at the C-suite level across financial services. These consulting channels can deploy AI solutions across thousands of clients simultaneously — scale that forward-deployed engineers cannot match.

3. Broader enterprise footprint: OpenAI is cross-industry. A bank deploying OpenAI for finance also uses it for legal, HR, operations, and marketing. One vendor, one relationship, one security review. Anthropic's finance-specific approach means banks need multiple AI vendors.

Microsoft plays both sides. It is the largest investor in OpenAI AND hosts Claude on Azure (via Anthropic's multi-cloud strategy). With 345 million M365 seats but only ~3.3% Copilot conversion (~11.4 million paying users), the vast majority of enterprise seats remain unmonetized for AI.

The Copilot Conversion Problem

3.3% conversion on 345 million seats means ~334 million seats are not paying for AI assistance. Claude's add-in model competes for these unconverted seats — offering superior financial reasoning without requiring the full Copilot license. Microsoft benefits regardless: it earns cloud revenue whether customers choose Copilot or Claude-via-Azure.

Anthropic's FDE Model

Puts AI engineers directly inside institutions. They understand the codebase, data architecture, regulatory constraints. The agent templates are production-ready on day one. The downside: it does not scale. You cannot embed engineers in 10,000 companies. Deeper integration at fewer clients.

OpenAI's Consulting Model

Leverages existing relationships — PwC already has partners at every major bank. Scales through existing distribution channels. The downside: adds a translation layer. Consultants are not AI engineers. There is friction between "what the AI can do" and "what the consultant configures." Broader reach at shallower integration.

For large banks with internal engineering teams, Anthropic's approach is likely superior — the FDE works alongside the bank's own engineers to build production systems.

For mid-market firms without AI engineering capacity, OpenAI's consulting channel may be more accessible.

The 17.5% guaranteed return is notable. Either OpenAI is so confident in revenue generation that it guarantees PE-grade returns, or it needed to offer PE-grade returns to attract capital at $10B scale. The market will determine which interpretation is correct.