TL;DR

  • Verify whether model-improvement training is opt-in or enabled by default for Claude Max.
  • Confirm retention periods for deleted chats and logs, and whether your data can be used for training.
  • Ensure your access path is officially supported for this plan.

Scope

This page covers terms you should verify for Claude Max. It is a snapshot checklist, not legal advice.

How to use this page

  1. Confirm you are on the Claude Max plan in Claude.ai.
  2. Work through the checklist and capture the current policy date.
  3. Re-check monthly or after any policy update (a change-detection sketch follows this list).
  4. Return to the Claude terms hub for plan comparisons.
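One way to carry out step 3 is to fingerprint the policy pages you track and compare digests between checks. A minimal sketch using only Python's standard library; the URL is an assumption, so substitute the pages you actually track:

```python
# Minimal change-detection sketch: hash a policy page and compare it
# against the digest stored at the previous check.
import hashlib
import urllib.request

POLICY_URL = "https://www.anthropic.com/legal/privacy"  # assumed URL; verify

def policy_fingerprint(url: str) -> str:
    """Fetch the page body and return its SHA-256 hex digest."""
    with urllib.request.urlopen(url) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

previous = "<digest saved from the prior check>"  # load from your own notes
current = policy_fingerprint(POLICY_URL)
if current != previous:
    print("Policy page changed since last check; re-verify the checklist.")
```

Hashing raw HTML also flags cosmetic page changes, so treat a digest mismatch as a prompt to re-read the policy, not as proof of a terms change.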

What changed recently

See /posts/anthropic-tos-changes-2025/ for the 2025-2026 timeline and enforcement shifts.

Terms snapshot checklist (as of Feb 2026)

Training use: The model-improvement setting is pre-checked by default, i.e. an opt-out design. Even on the Max plan, users must actively disable training in account settings. Note: high-volume usage makes opting out of training particularly important.

Retention (a short sketch of these rules follows the list):

  • If training enabled: Up to 5 years retention in de-identified form
  • If training disabled: Deleted conversations removed within 30 days
  • Incognito mode: Excluded from training regardless of settings
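The bullets above reduce to a small decision table. A minimal sketch of that mapping, using figures from this checklist, all of which still need verification:

```python
# Retention rules from this checklist, expressed as code. The figures are
# unverified assumptions mirroring the bullets above.
from datetime import timedelta

def training_eligible(training_enabled: bool, incognito: bool) -> bool:
    """Incognito chats are excluded from training regardless of settings."""
    return training_enabled and not incognito

def retention_window(training_enabled: bool) -> timedelta:
    """Up to ~5 years (de-identified) with training on; ~30 days for
    deleted chats with training off."""
    return timedelta(days=5 * 365) if training_enabled else timedelta(days=30)

print(training_eligible(training_enabled=True, incognito=True))  # False
print(retention_window(training_enabled=False))                  # 30 days
```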

Third-party access:

  • Official Claude.ai web/mobile apps
  • Claude Code CLI with higher rate limits than Pro
  • Critical: Third-party harnesses that spoof Claude Code are blocked as of Jan 9, 2026
  • For API access, you must use a separate Anthropic API account; the Max plan does not include API credits (see the sketch after this list)
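Because the Max plan grants no API credits, programmatic access goes through a separate pay-as-you-go API account and key. A minimal sketch using the official anthropic Python SDK; the model id is an assumption, so check the current models list:

```python
# Requires a separate Anthropic API account; a Max subscription does not
# grant API credits. Expects ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY automatically

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model id; verify
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello, Claude"}],
)
print(message.content[0].text)
```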

Data export/deletion: Full conversation history export; priority support for data inquiries.

Enterprise/DPA: Reported as available for Max plan users through account settings, though the snapshot table below still marks availability as unconfirmed; verify with sales/support. Commercial terms apply.

Pricing & Limits

  • Cost: $200/month
  • Usage: roughly 20x Pro's usage limits at 10x the price (break-even vs the API at high volume)
  • Model access: all current Claude models + extended context windows (verify the current model list)
  • Claude Code: Included with highest rate limits and priority processing
  • Economic reality: the same usage via the API would cost $1,000+/month, which is why the spoofing crackdown occurred (a rough break-even sketch follows this list)
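The 20x/10x framing and the $1,000+ figure imply a simple break-even calculation. A back-of-envelope sketch; every number is an assumption taken from this checklist, so substitute current pricing:

```python
# Rough break-even arithmetic for the claims above (all figures assumed).
PRO_MONTHLY = 20.0        # $/month for Pro
MAX_MONTHLY = 200.0       # $/month for Max (10x Pro's price)
USAGE_MULTIPLIER = 20     # Max allows ~20x Pro usage
API_EQUIVALENT = 1000.0   # $/month the same usage would cost via the API

print(f"Max vs Pro: {MAX_MONTHLY / PRO_MONTHLY:.0f}x the price for "
      f"{USAGE_MULTIPLIER}x the usage")
print(f"Flat-rate saving at this volume: "
      f"${API_EQUIVALENT - MAX_MONTHLY:.0f}/month")
```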

Evidence / signals

  • Official Terms of Service and Privacy Policy.
  • Claude pricing or plan page defining Max tier boundaries.
  • Account settings that show model-improvement toggles.

How to verify

  • Read the current Privacy Policy and Terms of Service.
  • Check account settings for model-improvement toggles.
  • Confirm official client lists and API documentation.

If you already shared data

  • Review current settings and disable training if needed.
  • Rotate keys or revoke access for unofficial tools.
  • Document what data was shared and when (see the sketch after this list).
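For that last step, a structured record beats loose notes. A minimal sketch; the field names and values are hypothetical, so adapt them to your own incident log:

```python
# Hypothetical record format for documenting shared data (names invented).
from dataclasses import dataclass, asdict
import json

@dataclass
class ExposureRecord:
    tool: str          # client or harness that sent the data
    shared_on: str     # ISO date the data was shared
    data_summary: str  # what the conversations contained, at a high level
    action_taken: str  # e.g. "rotated key", "revoked access"

record = ExposureRecord(
    tool="unofficial-cli",  # hypothetical tool name
    shared_on="2026-01-15",
    data_summary="internal config snippets included in prompts",
    action_taken="rotated API key; disabled model-improvement setting",
)
print(json.dumps(asdict(record), indent=2))
```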

What would invalidate this

Any new policy effective date, plan change, or terms update should replace this checklist.

Terms deltas (auto)

Verified deltas vs baseline

No verified deltas yet.

Potential deltas to verify

No pending deltas detected.

Terms snapshot

Field | Value | Status | Source
Training use | Consumer plans: model-improvement setting controls training (verify current policy). | needs_verification | /posts/anthropic-tos-changes-2025/
Retention (deleted data) | Consumer plans: deleted chats removed within ~30 days when training is disabled (verify). | needs_verification | /posts/anthropic-tos-changes-2025/
Retention (training data) | Consumer plans: training data may be retained up to 5 years if enabled (verify). | needs_verification | /posts/anthropic-tos-changes-2025/
Third-party access | Official clients only; unofficial access may be blocked (verify). | needs_verification | /posts/anthropic-tos-changes-2025/
Official clients | Claude app/web, Claude Code, Anthropic API (verify current list). | needs_verification | /posts/anthropic-tos-changes-2025/
Enterprise DPA | Ask sales/support if you require a DPA; availability unconfirmed. | unknown | (none)

Sources