A hard-difficulty benchmark evaluating AI agents' ability to execute complex shell commands, file operations, and system tasks in a real terminal environment. The score is the task success rate (%).
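The benchmark's actual harness is not shown here, but the scoring idea (run each terminal task, verify the outcome, report the fraction that pass as a percentage) can be illustrated with a minimal sketch. The two tasks below are hypothetical stand-ins, far simpler than real benchmark tasks, and the `cmd`/`check` structure is an assumption for illustration only:

```python
import subprocess
import tempfile

# Hypothetical task format: a shell command to perform, plus a verification
# command that exits 0 iff the task succeeded. Real benchmark tasks involve
# multi-step file operations and system configuration.
TASKS = [
    {"cmd": "printf 'a\\nb\\nc\\n' > letters.txt",
     "check": "test $(wc -l < letters.txt) -eq 3"},
    {"cmd": "mkdir -p proj/src && touch proj/src/main.c",
     "check": "test -f proj/src/main.c"},
]

def success_rate(tasks):
    """Run each task in a fresh temp directory; return % of checks passing."""
    passed = 0
    for task in tasks:
        with tempfile.TemporaryDirectory() as workdir:
            # Attempt the task (an agent would generate this command itself).
            subprocess.run(task["cmd"], shell=True, cwd=workdir,
                           capture_output=True, timeout=30)
            # Score only what the verification command observes.
            result = subprocess.run(task["check"], shell=True, cwd=workdir,
                                    capture_output=True, timeout=30)
            if result.returncode == 0:
                passed += 1
    return 100.0 * passed / len(tasks)

print(f"success rate: {success_rate(TASKS):.1f}%")
```

Running each task in an isolated temporary directory mirrors how terminal benchmarks keep tasks from interfering with one another; only the verification command's exit status contributes to the score. (POSIX shell assumed.)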
| Rank | Organization | Model |
|------|--------------|-------|
| 1 | ChatGPT | GPT-5.4 |
| 2 | Google | Gemini 3.1 Pro |
| 3 | Anthropic | Claude Opus 4.5 |
| 4 | Anthropic | Claude Opus 4.6 |
| 5 | Meta AI | Muse Spark |
| 6 | Qwen | Qwen3.6 Plus |
| 7 | Z.ai | GLM-5 |
| 8 | Anthropic | Claude Sonnet 4.6 |
| 9 | Xiaomi | MiMo-V2-Pro |
| 10 | MiniMax | MiniMax M2.7 |
| 11 | Google | Gemini 3 Flash |
| 12 | Grok | Grok 4.20 (Reasoning) |
| 13 | Google | Gemma 4 31B |
| 14 | Anthropic | Claude Sonnet 4.5 |
| 15 | DeepSeek | DeepSeek V3.2 |
| 16 | Qwen | Qwen3.5 397B A17B |
| 17 | Moonshot AI | Kimi K2.5 |
| 18 | MiniMax | MiniMax M2.5 |
| 19 | Anthropic | Claude Opus 4.1 |
| 20 | ChatGPT | GPT-5.4 Mini |
| 21 | ChatGPT | GPT-5.4 Nano |
| 22 | Anthropic | Claude Opus 4 |
| 23 | Anthropic | Claude Sonnet 4 |
| 24 | NVIDIA | Nemotron 3 Super |
| 25 | Anthropic | Claude Haiku 4.5 |
| 26 | Google | Gemini 2.5 Pro |
| 27 | Baidu | ERNIE 5.0 Thinking |
| 28 | Google | Gemini 3.1 Flash Lite |
| 29 | Grok | Grok 4.1 Fast (Reasoning) |
| 30 | ChatGPT | GPT-5 Nano |
| 31 | Grok | Grok 4.20 |
| 32 | ChatGPT | GPT-5 Mini |
| 33 | Grok | Grok 4.1 Fast |
| 34 | Google | Gemini 2.5 Flash |
| 35 | ChatGPT | GPT-4.1 |
| 36 | ChatGPT | GPT-5 |
| 37 | Meituan | Longcat Flash Chat |
| 38 | Google | Gemini 2.5 Flash Lite |
| 39 | Meta AI | Llama 4 Maverick |
| 40 | Amazon | Nova 2 Lite |
| 41 | Baidu | ERNIE 4.5 300B A47B |
| 42 | ChatGPT | GPT OSS 120B |
| 43 | Meta AI | Llama 4 Scout |