Public Enterprise LLM Benchmarks

01/30/2026 (Benchmark): ProofBench Released: Evaluating Formal Mathematical Reasoning

Best Performing Models

Top-performing models from the Vals Index, covering tasks across finance, coding, and law.


Vals Index, as of 1/27/2026:

1. Claude Opus 4.5 (Thinking), Anthropic: 63.09%
2. GPT 5.2, OpenAI: 62.21%
3. Kimi K2.5, Moonshot AI: 58.84%

Best Open Weight Models

Top-performing open weight models from the Vals Index, covering tasks across finance, coding, and law.


Vals Index, as of 1/27/2026:

1. Kimi K2.5, Moonshot AI: 58.84%
2. GLM 4.7, zAI: 53.98%
3. MiniMax-M2.1, MiniMax: 50.60%

Pareto Efficient Models

The top-performing models from the Vals Index that are also cost-efficient, plotted by accuracy against cost per test.


Vals Index, as of 1/27/2026 (chart: accuracy vs. cost per test):

1. Claude Opus 4.5 (Thinking), Anthropic: 63.09% accuracy, $0.90 per test
2. GPT 5.2, OpenAI: 62.21% accuracy, $0.76 per test
3. Kimi K2.5, Moonshot AI: 58.84% accuracy, $0.12 per test
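As a minimal sketch of what "Pareto efficient" means here: a model is on the frontier when no other model is both at least as accurate and at most as expensive (and strictly better on one axis). The data below is taken from the list above; the `pareto_frontier` helper is illustrative, not part of the Vals tooling.

```python
# (name, cost per test in USD, accuracy in %) from the list above
models = [
    ("Claude Opus 4.5 (Thinking)", 0.90, 63.09),
    ("GPT 5.2", 0.76, 62.21),
    ("Kimi K2.5", 0.12, 58.84),
]

def pareto_frontier(points):
    """Return the points not dominated by any other point.

    A point is dominated when another point has cost <= its cost and
    accuracy >= its accuracy, with a strict improvement on at least
    one of the two axes.
    """
    frontier = []
    for name, cost, acc in points:
        dominated = any(
            c <= cost and a >= acc and (c < cost or a > acc)
            for n, c, a in points
            if n != name
        )
        if not dominated:
            frontier.append((name, cost, acc))
    return frontier
```

For the three models listed, each one trades accuracy against cost (the most accurate is the most expensive, the cheapest is the least accurate), so all three sit on the frontier.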

Industry Leaderboard


Updates

01/30/2026 (Benchmark): ProofBench Released: Evaluating Formal Mathematical Reasoning


Model benchmarks are seriously lacking. With Vals AI, we report how language models perform on the industry-specific tasks where they will be used.
