TL;DR
- Verify whether model-improvement training is opt-in or enabled by default for Claude Pro.
- Confirm retention periods for deleted chats and logs, and whether your data can be used for training.
- Ensure your access path is officially supported for this plan.
Scope
This page covers terms you should verify for Claude Pro. It is a snapshot checklist, not legal advice.
How to use this page
- Confirm you are on the Claude Pro plan in Claude.ai.
- Work through the checklist and capture the current policy date.
- Re-check monthly or after any policy update.
- Return to the Claude terms hub for plan comparisons.
What changed recently
See /posts/anthropic-tos-changes-2025/ for the 2025-2026 timeline and enforcement shifts.
Terms snapshot checklist (as of Feb 2026)
Training use: The model-improvement setting is pre-checked by default (effectively opt-out). Users must actively disable it in account settings to prevent training use. Same flow as the Free tier.
Retention:
- If training enabled: Up to 5 years retention in de-identified form for model training
- If training disabled: Deleted conversations removed within 30 days
- Incognito mode: Excluded from training regardless of settings
Third-party access:
- Official Claude.ai web/mobile apps
- Claude Code CLI now available on Pro plan ($20/month tier)
- Third-party clients that spoof official interfaces are blocked
- API access requires a separate Anthropic API account under commercial terms (see the sketch after this checklist)
Data export/deletion: Full conversation history export is available; individual deletions take effect immediately in the interface, with backend removal within 30 days.
Enterprise/DPA: Not available on the Pro tier for individual accounts. Team/Enterprise plans offer a DPA.
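On the API point above: a Claude.ai Pro subscription does not grant API access. Below is a minimal sketch of what separate API access looks like, assuming the official `anthropic` Python SDK and a key issued from a separate Anthropic Console account; the model alias is illustrative, so check current model names.

```python
import os

from anthropic import Anthropic  # official Python SDK, installed via `pip install anthropic`

# The API key comes from an Anthropic Console account under commercial terms,
# not from a Claude.ai Pro login, and usage is billed separately.
client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative alias; verify current model names
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello from a separate API account."}],
)
print(message.content[0].text)
```

API traffic falls under Anthropic's commercial terms, which carry their own retention and training defaults; verify those separately from the consumer checklist above.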
Pricing & Limits
- Cost: $20/month
- Usage: 5x Free tier rate limits
- Model access: Claude 3.5 Sonnet, Claude 3.5 Haiku, Claude 3 Opus (limited); verify the current model lineup
- Claude Code: Now included (a major change; previously required the Max plan)
- Priority: Higher than Free, lower than Max
Evidence / signals
- Official Terms of Service and Privacy Policy.
- Claude pricing or plan page defining Pro tier boundaries.
- Account settings that show model-improvement toggles.
How to verify
- Read the current Privacy Policy and Terms of Service (a change-detection sketch follows this list).
- Check account settings for model-improvement toggles.
- Confirm official client lists and API documentation.
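A minimal change-detection sketch for the first step, assuming Python 3 and the policy URLs shown (treat them as placeholders and confirm the canonical pages first). Hashing raw HTML will also flag cosmetic page changes, so use it as a prompt to re-read, not as proof of a terms change.

```python
import hashlib
import json
import urllib.request
from datetime import date

# Placeholder policy pages to watch; confirm the canonical URLs before relying on this.
POLICY_PAGES = {
    "consumer_terms": "https://www.anthropic.com/legal/consumer-terms",
    "privacy_policy": "https://www.anthropic.com/legal/privacy",
}

SNAPSHOT_FILE = "policy-hashes.json"


def check_policies() -> None:
    """Hash each policy page and compare against the previously stored snapshot."""
    try:
        with open(SNAPSHOT_FILE) as f:
            previous = json.load(f)
    except FileNotFoundError:
        previous = {}

    current = {"checked": date.today().isoformat()}
    for name, url in POLICY_PAGES.items():
        request = urllib.request.Request(url, headers={"User-Agent": "policy-checklist/1.0"})
        with urllib.request.urlopen(request) as response:
            digest = hashlib.sha256(response.read()).hexdigest()
        current[name] = digest
        if name in previous and previous[name] != digest:
            print(f"CHANGED: {name} ({url}) -> re-verify the checklist")
        else:
            print(f"no recorded change: {name}")

    with open(SNAPSHOT_FILE, "w") as f:
        json.dump(current, f, indent=2)


if __name__ == "__main__":
    check_policies()
```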
If you already shared data
- Review current settings and disable training if needed.
- Rotate keys or revoke access for unofficial tools.
- Document what data was shared and when (a minimal record sketch follows).
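For the last step, a minimal sketch of a structured record you could keep alongside this checklist; the field names are illustrative, not any official schema.

```python
import json
from datetime import date

# Illustrative log of what was shared and through which tool; not an official schema.
shared_data_log = [
    {
        "date_shared": "2026-01-14",                 # when the data left your control
        "access_path": "unofficial desktop client",  # which tool or integration was used
        "data_description": "project notes pasted into chats",  # describe it, don't copy it
        "training_setting_at_time": "enabled",       # state of the model-improvement toggle
        "remediation": "disabled training, revoked the client's access",
        "remediation_date": date.today().isoformat(),
    }
]

with open("shared-data-log.json", "w") as f:
    json.dump(shared_data_log, f, indent=2)
```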
What would invalidate this
Any new policy effective date, plan change, or terms update should replace this checklist.
Terms deltas (auto)
Verified deltas vs baseline
No verified deltas yet.
Potential deltas to verify
No pending deltas detected.
Terms snapshot
| Field | Value | Status | Source |
|---|---|---|---|
| Training use | Consumer plans: model-improvement setting controls training (verify current policy). | needs_verification | /posts/anthropic-tos-changes-2025/ |
| Retention (deleted data) | Consumer plans: deleted chats removed within ~30 days when training is disabled (verify). | needs_verification | /posts/anthropic-tos-changes-2025/ |
| Retention (training data) | Consumer plans: training data may be retained up to 5 years if enabled (verify). | needs_verification | /posts/anthropic-tos-changes-2025/ |
| Third-party access | Official clients only; unofficial access may be blocked (verify). | needs_verification | /posts/anthropic-tos-changes-2025/ |
| Official clients | Claude app/web, Claude Code, Anthropic API (verify current list). | needs_verification | /posts/anthropic-tos-changes-2025/ |
| Enterprise DPA | Ask sales/support if you require a DPA; availability unconfirmed. | unknown | n/a |
Related links
- Verification methodology
- Claude terms overview
- Anthropic policy claims
- Anthropic API client analysis
- Third-party access risk
- Anthropic TOS changes timeline
- Claude vs OpenAI pricing
- Smart spend guide