o1

Release Date: 12/17/2024

Benchmarked by Vals AI

o1 is an earlier full-size o-series reasoning model, trained with reinforcement learning to perform complex reasoning.

Avg. Accuracy: 73.41%
Latency: 43.77s
Cost (In/Out): $15 / $60 per 1M tokens
Context Window: 200k tokens
Max Output Tokens: 100k tokens
Input Modality: Text, Image
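
As a quick illustration of the listed pricing, the sketch below estimates the cost of a single request from its token counts, assuming the $15 / $60 figures above are USD per 1M input and output tokens. The helper name and the example token counts are illustrative, not part of Vals' published data.

```python
# Rough per-request cost estimate for o1, using the rates listed above:
# $15 per 1M input tokens and $60 per 1M output tokens (assumed USD).

INPUT_RATE_PER_M = 15.00   # USD per 1M input tokens
OUTPUT_RATE_PER_M = 60.00  # USD per 1M output tokens

def estimate_request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the approximate USD cost of one request at the listed rates."""
    return (
        input_tokens / 1_000_000 * INPUT_RATE_PER_M
        + output_tokens / 1_000_000 * OUTPUT_RATE_PER_M
    )

# Example: a 10,000-token prompt with a 2,000-token response
# costs roughly 0.15 + 0.12 = 0.27 USD.
print(f"${estimate_request_cost(10_000, 2_000):.2f}")  # -> $0.27
```

Note that for reasoning models such as o1, hidden reasoning tokens are billed as output tokens, so real costs can exceed what the visible completion length alone suggests.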

Benchmarks

Benchmark         Accuracy   Ranking
Finance Agent     0.0%       55/55
TaxEval (v2)      0.0%       86/86
AIME              0.0%       76/76
GPQA              0.0%       77/77
LiveCodeBench     0.0%       84/84
LegalBench        0.0%       100/100
MATH 500          0.0%       59/59
MedQA             0.0%       84/84
MGSM              0.0%       73/73
MMLU Pro          0.0%       78/78
MMMU              0.0%       55/55

Results cover both academic benchmarks and proprietary benchmarks (contact Vals AI to get access to the latter).


Generic model benchmarks often fail to reflect real-world use; Vals AI reports how language models perform on the industry-specific tasks where they will actually be used.
