Public Enterprise LLM Benchmarks

12/11/2025 | Model
GPT 5.2 Tops Vals Index
View Model Results

12/12/2025 | Model
DeepSeek v3.2 Tops Open-Weight Vals Index
View Model Results

Best Performing Models

Top-performing models on the Vals Index, which spans tasks across finance, coding, and law. Updated 12/12/2025.

1. GPT 5.2 (OpenAI) | Vals Index Score: 64.49%
2. Claude Opus 4.5 (Thinking) (Anthropic) | Vals Index Score: 63.77%
3. Gemini 3 Pro (11/25) (Google) | Vals Index Score: 58.04%

All Top Performing Models

Best Open Weight Models

Top-performing open-weight models on the Vals Index, which spans tasks across finance, coding, and law. Updated 12/12/2025.

1. DeepSeek V3.2 (Nonthinking) (DeepSeek) | Vals Index Score: 49.39%
2. GLM 4.6 (zAI) | Vals Index Score: 46.68%
3. Kimi K2 Thinking (Kimi) | Vals Index Score: 44.87%

All Top Open Weight Models

Pareto Efficient Models

Top-performing models on the Vals Index that are also cost efficient: a model appears here only if no other model delivers higher accuracy at a lower cost per test. Updated 12/12/2025. A sketch of the frontier computation follows the list.

1. GPT 5.2 (OpenAI) | Accuracy: 64.49% | Cost per test: $0.94
2. Claude Opus 4.5 (Thinking) (Anthropic) | Accuracy: 63.77% | Cost per test: $0.87
3. Grok 4.1 Fast (Reasoning) (xAI) | Accuracy: 50.96% | Cost per test: $0.04

View full Pareto curve
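
For readers who want the selection rule in concrete form, here is a minimal Python sketch of a standard Pareto filter over (cost, accuracy) pairs: a model is kept only if no other model is both cheaper per test and at least as accurate. The three data points come from the list above; the filter is a generic implementation, not necessarily how Vals builds its curve.

```python
# Minimal sketch: filter (cost, accuracy) points to the Pareto frontier.
# A model is Pareto efficient if no other model is both cheaper and more
# accurate. Data from the list above; generic filter, not Vals' code.

models = [
    ("GPT 5.2", 0.94, 64.49),
    ("Claude Opus 4.5 (Thinking)", 0.87, 63.77),
    ("Grok 4.1 Fast (Reasoning)", 0.04, 50.96),
]

def pareto_frontier(points):
    """Drop any point dominated by another (<= cost, >= accuracy, not identical)."""
    frontier = []
    for name, cost, acc in points:
        dominated = any(
            c <= cost and a >= acc and (c, a) != (cost, acc)
            for _, c, a in points
        )
        if not dominated:
            frontier.append((name, cost, acc))
    return sorted(frontier, key=lambda p: p[1])  # cheapest first

for name, cost, acc in pareto_frontier(models):
    print(f"{name}: {acc:.2f}% accuracy at ${cost:.2f}/test")
```

All three models above survive the filter: every cheaper option is also less accurate, which is exactly what places a point on the frontier.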

Industry Leaderboard

Browse model rankings filtered by industry.

Updates

12/12/2025 | Model
DeepSeek v3.2 Tops Open-Weight Vals Index
View Details

View more

Benchmarks

Each benchmark below lists the number of models currently ranked on it. The Vals Index row is the aggregate leaderboard; a sketch of one possible aggregation follows the legend.

Benchmark | Models Ranked
Vals Index | 28
CaseLaw (v2) | 56
CorpFin | 79
TaxEval (v2) | 85
AIME | 75
GPQA | 76
IOI | 37
LiveCodeBench | 81
LegalBench | 99
MedQA | 83
MGSM | 72
MMLU Pro | 77
SWE-bench | 40
Terminal-Bench | 42

Legend: Academic Benchmarks | Proprietary Benchmarks (contact us to get access)
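
How the Vals Index combines per-benchmark results is not documented on this page, so the following is purely illustrative: a sketch assuming an unweighted mean of per-benchmark accuracies, with hypothetical scores. The real aggregation and weighting may differ.

```python
# Illustrative only: roll per-benchmark accuracies into one index score.
# ASSUMPTION: unweighted mean; the actual Vals Index aggregation is not
# documented here. All accuracy values below are hypothetical.

benchmark_scores = {
    "CaseLaw (v2)": 71.0,    # hypothetical accuracy, %
    "CorpFin": 58.5,         # hypothetical
    "TaxEval (v2)": 66.2,    # hypothetical
    "LiveCodeBench": 60.3,   # hypothetical
    "LegalBench": 74.1,      # hypothetical
}

index_score = sum(benchmark_scores.values()) / len(benchmark_scores)
print(f"Illustrative index score: {index_score:.2f}%")  # -> 66.02%
```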

Join our mailing list to receive benchmark updates

Model benchmarks are seriously lacking. With Vals AI, we report how language models perform on the industry-specific tasks where they will be used.

By subscribing, I agree to Vals' Privacy Policy.