A coding benchmark built from real competitive programming problems, continuously updated to prevent data contamination. Scores are accuracy (%).

Source: Artificial Analysis

| Rank | Model | Accuracy |
|---|---|---|
| #1 | Google Gemini 3 Flash | 90.8% |
| #2 | Anthropic Claude Opus 4.5 | 87.1% |
| #3 | DeepSeek V3.2 | 86.2% |
| #4 | xAI Grok 4.1 Fast (Reasoning) | 82.2% |
| #5 | Baidu ERNIE 5.0 Thinking | 81.2% |
| #6 | Google Gemini 2.5 Pro | 80.1% |
| #7 | OpenAI GPT-5 Nano | 76.3% |
| #8 | Anthropic Claude Sonnet 4.5 | 71.4% |
| #9 | OpenAI GPT-OSS 120B | 70.7% |
| #10 | Google Gemini 2.5 Flash | 69.5% |
| #11 | Anthropic Claude Sonnet 4 | 65.5% |
| #12 | Anthropic Claude Opus 4.1 | 65.4% |
| #13 | Google Gemini 2.5 Flash Lite | 64.1% |
| #14 | Anthropic Claude Opus 4 | 63.6% |
| #15 | Anthropic Claude Haiku 4.5 | 61.5% |
| #16 | OpenAI GPT-5 Mini | 54.5% |
| #17 | OpenAI GPT-5 | 54.3% |
| #18 | Baidu ERNIE 4.5 300B A47B | 46.7% |
| #19 | OpenAI GPT-4.1 | 45.7% |
| #20 | xAI Grok 4.1 Fast | 39.9% |
| #21 | Meta AI Llama 4 Maverick | 39.7% |
| #22 | Amazon Nova 2 Lite | 34.6% |
| #23 | Meta AI Llama 4 Scout | 29.9% |
| #24 | Mistral AI Mistral Small 4 | 11.1% |
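For working with rows like the ones above programmatically, here is a minimal sketch that parses markdown pipe-table rows into (model, accuracy) pairs and checks that the ranking is sorted by descending accuracy. The row contents below are placeholders, not data from the table; the only assumption is the `| rank | model | score% |` layout used above.

```python
# Placeholder rows in the same "| rank | model | score% |" format as the table.
rows = [
    "| #1 | Model A | 90.8% |",
    "| #2 | Model B | 87.1% |",
    "| #3 | Model C | 86.2% |",
]

def parse_row(line: str) -> tuple[str, float]:
    # Splitting on "|" yields ['', ' #1 ', ' Model A ', ' 90.8% ', ''];
    # keep the model cell and convert the percentage cell to a float.
    _, _, model, score, _ = [cell.strip() for cell in line.split("|")]
    return model, float(score.rstrip("%"))

leaderboard = [parse_row(r) for r in rows]
print(leaderboard)  # [('Model A', 90.8), ('Model B', 87.1), ('Model C', 86.2)]

# Sanity check: ranks should be in descending order of accuracy.
assert all(a[1] >= b[1] for a, b in zip(leaderboard, leaderboard[1:]))
```

The same parser works on any row copied verbatim from the table, since every row follows the same three-cell layout.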