aihackers.net

practical notes on building with AI

LLM Pricing

tag: LLM Pricing

  • 2026-02-01 | Gemini 3 Flash: 78% SWE-bench, $0.50/1M Input, 8x Cheaper Than Claude Opus
    Google's Gemini 3 Flash delivers frontier coding performance with $0.50/1M input tokens, a 65K output limit, and 1M context. Full benchmarks, a pricing breakdown, and when to choose it over Claude Opus 4.5 or Kimi k2.5.
  • 2026-02-01 | GLM 4.7: Thinking Mode, 73.8% SWE-bench, Free via OpenCode & Z.AI
    Z.AI's GLM 4.7 delivers 73.8% SWE-bench coding performance with unique thinking-mode visibility. Free via OpenCode Zen and Z.AI's free tier, with a $3/month coding plan for heavy usage.
  • 2026-01-30 | Claude Opus 4.5: 80.9% SWE-bench, Pricing & When to Use
    Anthropic's flagship reasoning model delivers the highest verified SWE-bench score at 80.9%. Complete pricing breakdown ($5/1M input), subscription vs. API analysis, and comparison to Kimi k2.5 and Gemini 3 Flash.
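The per-million-token input prices quoted in these posts translate directly into a monthly bill. Below is a minimal Python sketch using only the input prices listed above ($0.50/1M for Gemini 3 Flash, $5/1M for Claude Opus 4.5); output-token and cached-token pricing are covered in the individual posts, and the 20M-token monthly workload is an illustrative assumption, not a benchmark.

```python
# Illustrative only: input-token prices as quoted in the posts above.
# Output and cached-token pricing are not included here.
INPUT_PRICE_PER_M = {
    "gemini-3-flash": 0.50,   # USD per 1M input tokens
    "claude-opus-4.5": 5.00,  # USD per 1M input tokens
}

def input_cost(model: str, input_tokens: int) -> float:
    """Dollar cost of the input side of `input_tokens` sent to `model`."""
    return INPUT_PRICE_PER_M[model] * input_tokens / 1_000_000

# Example: an assumed 20M input tokens per month of coding traffic.
for model in INPUT_PRICE_PER_M:
    print(f"{model}: ${input_cost(model, 20_000_000):.2f}/month")
# gemini-3-flash: $10.00/month
# claude-opus-4.5: $100.00/month
```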