Anthropic made meaningful policy changes in late 2025 (privacy/model training and retention) and followed with access enforcement in early 2026 that broke many unofficial Claude “harnesses.” If you use third-party tools or care about data retention, this matters.

Timeline

  • Aug 28, 2025: Anthropic announces updated consumer terms and privacy policy, with a required choice by October 8 for whether new or resumed chats can be used for model improvement.
  • Oct 8, 2025 (effective): Privacy-policy changes for consumer plans take effect, including the model-training choice and a 5-year retention window for data used in training.
  • Jan 9, 2026: Anthropic deploys technical safeguards that block third-party tools spoofing the official Claude Code client.
  • Jan 12, 2026: Healthcare Privacy Policy updates take effect (Consumer Health Data Privacy Policy for US states with health data laws), separate from the model-training changes.

What changed in policy (late 2025)

For consumer products (Claude Free, Pro, and Max, including Claude Code when used with those accounts), Anthropic introduced a model-improvement setting that controls whether chats and coding sessions can be used to improve Claude. If you enable that setting, Anthropic may retain your data (de-identified) for up to 5 years in training pipelines. Otherwise, deleted conversations are removed from backend systems within 30 days. This applies to new or resumed chats after the setting is enabled; incognito chats are not used for model training.

The choice architecture matters: Anthropic structured the consent flow to maximize opt-ins. The model-improvement toggle was pre-checked, and the prominent ‘Accept’ button locked in that default, making it significantly easier to agree to data sharing than to decline. Privacy advocates criticize this kind of pattern as undermining meaningful consent.

What got blocked (Jan 2026 enforcement)

Anthropic confirmed it tightened technical safeguards to prevent third-party tools from spoofing the Claude Code client. This severed access for tools that used consumer OAuth tokens outside official interfaces (e.g., OpenCode and similar “harnesses”). Anthropic also acknowledged some false positives that led to account bans, which it said were being reversed.

The economic tension: Subscription arbitrage

The enforcement action targets a fundamental economic mismatch. Anthropic’s $200/month Max subscription offers effectively flat-rate usage through Claude Code, while the same volume billed through the API can exceed $1,000 per month for heavy users. Third-party harnesses like OpenCode removed Claude Code’s artificial speed limits, enabling autonomous agents to run high-intensity loops overnight (coding, testing, and fixing errors) that would be cost-prohibitive on metered plans.

As one developer noted: “In a month of Claude Code, it’s easy to use so many LLM tokens that it would have cost you more than $1,000 if you’d paid via the API.” By blocking these harnesses, Anthropic forces high-volume automation toward metered API pricing or its controlled Claude Code environment.
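
To make the arbitrage concrete, here is a minimal sketch that estimates a month of metered API spend and compares it against a flat subscription. The per-token prices and the usage figures are illustrative placeholders, not Anthropic’s published rates; substitute current pricing before drawing conclusions.

```python
# Rough break-even estimate: flat-rate subscription vs. metered API billing.
# Prices below are illustrative placeholders, NOT Anthropic's published rates.

INPUT_PRICE_PER_MTOK = 3.00    # assumed $ per million input tokens
OUTPUT_PRICE_PER_MTOK = 15.00  # assumed $ per million output tokens
SUBSCRIPTION_PRICE = 200.00    # flat monthly plan price cited in the article


def monthly_api_cost(input_mtok: float, output_mtok: float) -> float:
    """Metered cost for one month of usage, volumes given in millions of tokens."""
    return input_mtok * INPUT_PRICE_PER_MTOK + output_mtok * OUTPUT_PRICE_PER_MTOK


if __name__ == "__main__":
    # Hypothetical heavy agentic workload: 250M input + 30M output tokens/month.
    cost = monthly_api_cost(input_mtok=250, output_mtok=30)
    print(f"Metered API cost:  ${cost:,.2f}")
    print(f"Flat subscription: ${SUBSCRIPTION_PRICE:,.2f}")
    print(f"Arbitrage margin:  ${cost - SUBSCRIPTION_PRICE:,.2f}")
```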

Precedent: This continues a pattern

January 2026’s enforcement wasn’t isolated. Anthropic had previously blocked competitors from accessing Claude through similar technical and contractual means:

  • August 2025: Anthropic revoked OpenAI’s access to the Claude API, saying OpenAI’s use of Claude for benchmarking and internal testing violated Terms of Service restrictions on building competing products
  • June 2025: Windsurf faced a sudden blackout when Anthropic “cut off nearly all of our first-party capacity” for Claude 3.x models with less than a week’s notice, forcing Windsurf to pivot to a “Bring-Your-Own-Key” model

These actions established a clear boundary: third-party tools may coexist with Claude, but the company reserves the right to sever access when usage threatens its competitive advantage or business model sustainability.

Ecosystem response: Adaptation and alternatives

The enforcement triggered rapid adaptation. OpenCode—the popular open-source coding agent most affected—immediately launched OpenCode Black, a $200/month premium tier routing traffic through an enterprise API gateway to bypass consumer OAuth restrictions. Additionally, OpenCode announced a partnership with OpenAI to integrate Codex as an alternative to Claude, demonstrating how platform consolidation drives competitive realignment.

What still works

  • Claude.ai web/app and the official Claude Code CLI
  • Anthropic API (metered pricing under commercial terms; see the sketch below)
  • Authorized integrations that use official, documented access
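
The main programmatic path is the metered API, which authenticates with an API key issued under the commercial terms rather than a consumer-plan OAuth token. Here is a minimal sketch following the publicly documented Messages endpoint; the model id is a placeholder, so check Anthropic’s documentation for a current one.

```python
# Minimal sketch of the supported path: metered access to the Messages API
# with a standard API key (no consumer OAuth token, no client spoofing).
# Endpoint and headers follow Anthropic's public API docs; the model id is
# a placeholder -- substitute a current one from the documentation.
import os
import requests

resp = requests.post(
    "https://api.anthropic.com/v1/messages",
    headers={
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],  # commercial API key
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-sonnet-4-20250514",  # placeholder; check current model ids
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Summarize this diff."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["content"][0]["text"])
```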

The data retention change

If you enable model improvement, Anthropic may retain your de-identified chats or coding sessions for up to 5 years in model training pipelines. If you disable it, standard retention for deleted chats is up to 30 days, and deleted chats are not used for future model training. You can also delete chats at any time, and Incognito mode keeps those sessions out of training even if model improvement is enabled.
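
The branching can be summarized in a short decision sketch. The rules mirror the policy described above; the function name and return values are purely illustrative and do not correspond to anything in Anthropic’s systems.

```python
# Illustrative summary of the retention rules described above.
# The function and its return values are hypothetical, not an Anthropic API.

def retention_terms(model_improvement_enabled: bool, incognito: bool) -> dict:
    """Map a chat's settings to the training/retention treatment described in the policy."""
    if incognito:
        # Incognito sessions are excluded from training even if the toggle is on.
        return {"used_for_training": False, "retention": "not used for training"}
    if model_improvement_enabled:
        # Opted-in chats may be kept, de-identified, in training pipelines.
        return {"used_for_training": True, "retention": "up to 5 years (de-identified)"}
    # Opted-out: deleted chats leave backend systems within ~30 days, never trained on.
    return {"used_for_training": False, "retention": "deleted chats removed within ~30 days"}


print(retention_terms(model_improvement_enabled=True, incognito=False))
```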

Why this matters

Power users who built workflows around third-party tools now face a choice:

  1. Use official access paths: Claude Code or the commercial API
  2. Re-architect workflows around supported integrations
  3. Switch providers if unofficial tooling is a hard requirement

The broader lesson: unofficial access is increasingly fragile. If your workflow depends on spoofed clients or unsupported tokens, expect breakage and plan for official paths or alternatives.

  • Claude terms hub: /verify/claude-terms/
  • Anthropic API terms: /verify/anthropic-api-terms/
  • Verify official clients: /verify/anthropic-api-clients/
  • Verify policy claims: /verify/anthropic-policy-claims/
  • Third-party access risk: /risks/anthropic/third-party-access/
  • Account ban risk: /risks/anthropic/account-bans/

Ecosystem Impact

The enforcement documented above created immediate demand for alternatives. Within weeks, OpenClaw (formerly ClawdBot, MoltBot) became the fastest-growing open-source AI project in history—reaching 100K+ GitHub stars in 48 hours.

OpenClaw’s self-hosted architecture sidesteps the OAuth restrictions that broke third-party harnesses. For analysis of how Anthropic’s policy changes shifted the ecosystem toward self-hosted agents, see:

This article will be updated as enforcement actions continue.