LLM Models
Feb 2026 - Model tracking is no longer feasible at broad scope
It is effectively impossible to track 600+ potentially interesting models.
The space is moving too fast for comprehensive monitoring to remain useful.
2026 Jan/Feb
- Arcee AI’s Trinity Large (Jan 27, 2026)
- Moonshot AI’s Kimi K2.5 (Jan 27, 2026)
- StepFun Step 3.5 Flash (Feb 1, 2026) https://arxiv.org/html/2602.10604v2#S2
- Notably fast model: 100 tokens/sec throughput at 128k context length (Hopper GPUs), with cheap output at $0.30
- Qwen3-Coder-Next (Feb 3, 2026)
- z.AI’s GLM-5 (Feb 12, 2026)
- MiniMax M2.5 (Feb 12, 2026)
- Nanbeige 4.1 3B (Feb 13, 2026)
- Qwen 3.5 (Feb 15, 2026)
- Ant Group’s Ling 2.5 1T & Ring 2.5 1T (Feb 16, 2026)
- Cohere’s Tiny Aya (Feb 17, 2026)
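The Step 3.5 Flash throughput and price figures above can be sanity-checked with a back-of-the-envelope sketch. Assumptions not stated in the note: the $0.30 is USD per million output tokens, and 100 tokens/sec is per-request decode speed.

```python
# Rough latency/cost estimate for one response, given a decode rate and an
# output price. Both defaults come from the Step 3.5 Flash note above;
# the per-million-token interpretation of $0.30 is an assumption.

def estimate(output_tokens: int,
             tokens_per_sec: float = 100.0,
             usd_per_million_output: float = 0.30) -> tuple[float, float]:
    """Return (seconds to generate, USD cost) for a single response."""
    seconds = output_tokens / tokens_per_sec
    cost = output_tokens / 1_000_000 * usd_per_million_output
    return seconds, cost

secs, usd = estimate(2_000)
print(f"{secs:.0f} s, ${usd:.4f}")  # 2k output tokens -> 20 s, $0.0006
```

At these numbers, even long responses cost fractions of a cent, which is what makes the "cheap" label above concrete.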
https://sebastianraschka.com/blog/2026/a-dream-of-spring-for-open-weight.html
Open
Proprietary
- 2025-12-11 42a3⁝ GPT-5.2
- 2025-12-18 42a4⁝ GPT-5.2 codex
- 2025-11-24 Opus 4.5
- 2025-11 GPT-5.1-Codex-Max
- 2025-11 GPT-5.1
- 2025-11 Google Nano Banana
- 2025-11 20251118100704⁝ Gemini 3
- 2025-02 o3-mini
- 2025-01 DeepSeek R1 (R1-Zero, R1-Distill)
- 2024-12 OpenAI o1
- 2023-12 Gemini (DeepMind)
- 2023-12 LLaVA-v1.5-13B
- 2025-11 20251116093417⁝ Chinese AI Labs (ecosystem)