TL;DR
- Verify whether model-improvement training is opt-in or default for Claude Code.
- Confirm retention for deleted chats or logs and whether data can be used for training.
- Ensure your access path is officially supported for this plan.
Scope
This page covers terms you should verify for Claude Code. It is a snapshot checklist, not legal advice.
How to use this page
- Confirm you are using Claude Code via an official install.
- Work through the checklist and capture the current policy date.
- Re-check monthly or after any policy update.
- Return to the Claude terms hub for plan comparisons.
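The re-check step above can be sketched as a small snapshot routine: hash the policy text you read and record the date, so next month's check is a hash comparison rather than a re-read. This is a minimal illustration; the `snapshot`/`changed` names and the dict layout are my own, not part of any Anthropic tooling.

```python
import hashlib
from datetime import date

def snapshot(policy_text: str, source: str) -> dict:
    """Record one checklist entry: where the text came from, when it
    was checked, and a content hash for later comparison."""
    return {
        "source": source,
        "checked_on": date.today().isoformat(),
        "sha256": hashlib.sha256(policy_text.encode("utf-8")).hexdigest(),
    }

def changed(baseline: dict, current: dict) -> bool:
    """True if the policy text differs from the stored baseline snapshot."""
    return baseline["sha256"] != current["sha256"]
```

A hash only tells you *that* something changed, not *what*; when `changed` returns True, diff the saved text against the live page by hand.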
What changed recently
See /posts/anthropic-tos-changes-2025/ for the 2025-2026 timeline and enforcement shifts.
Terms snapshot checklist (as of Feb 2026)
Training use: Model-improvement training is enabled by default for Claude Code sessions; users must opt out via the model-improvement toggle in account settings.
Retention:
- If training enabled: Up to 5 years retention in de-identified form
- If training disabled: Logs removed within 30 days
- Incognito mode: Excluded from training
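The retention windows above can be turned into a concrete deadline calculation. This is a sketch using the figures as stated on this page (30 days with training disabled, 5 years if enabled); the function name and the 5-years-as-1825-days approximation are my own assumptions, so verify the current figures before relying on any computed date.

```python
from datetime import date, timedelta

def deletion_deadline(requested: date, training_enabled: bool) -> date:
    """Latest expected removal date under the retention terms above.
    Assumes 30 days (training off) or ~5 years = 1825 days (training on)."""
    window = timedelta(days=1825) if training_enabled else timedelta(days=30)
    return requested + window
```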
Third-party access:
- Critical: As of Jan 9, 2026, third-party tools that spoof Claude Code client identity are blocked
- Only official Claude Code CLI (installed via Anthropic) is supported
- Rate limits are enforced; attempts to bypass them via spoofing result in account bans
- Safe: Official Claude Code from anthropic.com
- Blocked: Tools like OpenCode, Cursor (in certain configurations), or any tool spoofing client headers
Data export/deletion: Conversation export via the Claude.ai interface covers Claude Code sessions.
Enterprise/DPA: Available for organizations using Claude Code via commercial API or enterprise plans.
Evidence / signals
- Official Terms of Service and Privacy Policy.
- Claude Code documentation defining supported access paths.
- Account settings that show model-improvement toggles.
How to verify
- Read the current Privacy Policy and Terms of Service.
- Check account settings for model-improvement toggles.
- Confirm official client lists and API documentation.
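One quick local check for the "official client" step is to confirm which binary your shell actually resolves. The sketch below only verifies presence and location on PATH, not authenticity; the default binary name `claude` is an assumption about the official CLI's install name, so adjust it to match your installation.

```python
import shutil
from typing import Optional

def client_path(binary: str = "claude") -> Optional[str]:
    """Resolve the CLI binary on PATH; None means nothing by that
    name is installed in a standard location."""
    return shutil.which(binary)
```

If the resolved path points somewhere unexpected (e.g. inside a third-party tool's directory), treat that as a signal to re-verify your access path.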
If you already shared data
- Review current settings and disable training if needed.
- Rotate keys or revoke access for unofficial tools.
- Document what data was shared and when.
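For the documentation step above, a structured record beats a loose note. This is a minimal sketch; the `SharedDataRecord` name and fields are illustrative, not a prescribed format.

```python
from dataclasses import dataclass, asdict

@dataclass
class SharedDataRecord:
    tool: str                  # client that sent the data (your own label)
    data_kind: str             # e.g. "source files", "chat transcripts"
    shared_on: str             # ISO date the data left your machine
    key_rotated: bool = False  # flip to True once the associated key is rotated
```

`asdict` makes each record trivially serializable, so a list of these can be dumped to JSON as an audit trail.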
What would invalidate this
Any new policy effective date, plan change, or terms update invalidates this checklist; re-verify and replace it with a fresh snapshot.
Terms deltas (auto)
Verified deltas vs baseline
No verified deltas yet.
Potential deltas to verify
No pending deltas detected.
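The delta detection above amounts to comparing the current snapshot's fields against a stored baseline. A minimal sketch, assuming snapshots are kept as field-to-value dicts (my representation, not the page's actual tooling):

```python
def terms_deltas(baseline: dict, current: dict) -> dict:
    """Map each changed field to its (baseline, current) value pair.
    Fields present only in `current` count as changes too."""
    return {
        field: (baseline.get(field), value)
        for field, value in current.items()
        if baseline.get(field) != value
    }
```

An empty result corresponds to "no pending deltas detected"; a non-empty one lists exactly which fields need manual verification.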
Terms snapshot
| Field | Value | Status | Source |
|---|---|---|---|
| Training use | Consumer plans: model-improvement setting controls training (verify current policy). | needs_verification | /posts/anthropic-tos-changes-2025/ |
| Retention (deleted data) | Consumer plans: deleted chats removed within ~30 days when training is disabled (verify). | needs_verification | /posts/anthropic-tos-changes-2025/ |
| Retention (training data) | Consumer plans: training data may be retained up to 5 years if enabled (verify). | needs_verification | /posts/anthropic-tos-changes-2025/ |
| Third-party access | Official clients only; unofficial access may be blocked (verify). | needs_verification | /posts/anthropic-tos-changes-2025/ |
| Official clients | Claude app/web, Claude Code, Anthropic API (verify current list). | needs_verification | /posts/anthropic-tos-changes-2025/ |
| Enterprise DPA | Ask sales/support if you require a DPA; availability unconfirmed. | unknown | n/a |
Related links
- Verification methodology
- Claude terms overview
- Anthropic policy claims
- Anthropic API client analysis
- Third-party access risk
- Anthropic TOS changes timeline
- Claude vs OpenAI pricing
- /value/smart-spend/