Claude Opus 4.5 (Thinking)

Release Date: 11/24/2025

Benchmarked by Vals AI
Vals Index (Avg. Accuracy): 69.4%
Latency: 644.4s
Cost (In/Out): $5.00 / $25.00 per 1M tokens
Context Window: 200k tokens
Max Output Tokens: 64k

Input Modality: text, image

Benchmarks

Benchmark          Accuracy    Ranking
Vibe Code Bench    0.0%        13/13
SAGE               0.0%        28/28
FinanceAgent       0.0%        50/50
CorpFin            0.0%        64/64
CaseLaw            0.0%        48/48
TaxEval            0.0%        81/81
MortgageTax        0.0%        51/51
AIME               0.0%        71/71
MGSM               0.0%        73/73
LegalBench         0.0%        96/96
MedQA              0.0%        77/77
GPQA               0.0%        73/73
MMLU Pro           0.0%        71/71
MMMU               0.0%        47/47
LiveCodeBench      0.0%        71/71
IOI                0.0%        27/27
Terminal-Bench     0.0%        37/37
SWE-bench          0.0%        35/35
Vals Index         0.0%        23/23

The table includes both academic benchmarks and proprietary Vals AI benchmarks (contact Vals AI to get access to the proprietary ones).

Model benchmarks are seriously lacking. With Vals AI, we report how language models perform on the industry-specific tasks where they will be used.
