o1

Release Date: 12/17/2024

Benchmarked by Vals AI

Accuracy (Average): 75.88%

Latency (Average): 50.19s

Avg. Cost (In/Out): $15 / $60 per 1M tokens

Context Window: 200k

Max Output Tokens: 100k

Input Modality: Text, Image
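
At the In/Out rates above, a hypothetical request with 1,000 input tokens and 5,000 output tokens would cost about 0.001 × $15 + 0.005 × $60 = $0.315.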

Hyperparameter settings

Default Provider: OpenAI

Temperature: Default
Top P: Default
Top K: Default
Max Output Tokens: 100,000
Reasoning Effort: high
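
As a minimal sketch, these settings map onto an OpenAI Chat Completions request roughly as follows (assuming the openai Python SDK; the prompt is hypothetical, and o1 only accepts default temperature/top_p, which is why those rows read "Default"):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# temperature and top_p are left unset: o1 rejects non-default values,
# and the OpenAI API has no top_k parameter at all.
response = client.chat.completions.create(
    model="o1",
    reasoning_effort="high",        # "Reasoning Effort: high"
    max_completion_tokens=100_000,  # "Max Output Tokens: 100,000"
    messages=[{"role": "user", "content": "Summarize the key facts of this case."}],
)
print(response.choices[0].message.content)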

Benchmarks

Benchmark        Accuracy    Margin    Rank
TaxEval (v2)     n/a         ± 0.86    14/96
AIME             n/a         ± 1.46    45/88
GPQA             n/a         ± 2.23    44/91
LiveCodeBench    n/a         ± 1.39    72/98
LegalBench       n/a         ± 0.58    43/110
MedQA            n/a         ± 0.33    1/94
MMLU Pro         n/a         ± 0.36    32/89
MMMU             n/a         ± 1.01    22/60

Academic Benchmarks are shown above. Proprietary Benchmarks are available on request (contact us to get access).


Model benchmarks are seriously lacking. With Vals AI, we report how language models perform on the industry-specific tasks where they will be used.
