GPT 4.1 Mini

Release Date: 4/14/2025

Benchmarked by Vals AI

Accuracy (average): 65.09%
Latency (average): 30.41 s
Avg. cost (input/output): $0.40 / $1.60 per 1M tokens
Context window: 1M tokens
Max output tokens: 33k

Input Modality

Hyperparameter settings
Default provider: OpenAI

Temperature: 1
Top P: default
Top K: default
Max output tokens: 32,768
Reasoning effort: high
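As a concrete sketch, the settings above map onto an OpenAI chat-completions request roughly as follows. This is an illustration, not the benchmark harness: the model identifier "gpt-4.1-mini" is an assumption, and top_p/top_k are simply omitted so the provider defaults apply, matching "Default" above.

```python
# Request parameters mirroring the hyperparameter settings listed above.
# "gpt-4.1-mini" is an assumed model name, not taken from this page.
params = {
    "model": "gpt-4.1-mini",  # assumed OpenAI model identifier
    "temperature": 1,         # as listed
    "max_tokens": 32768,      # 32,768 max output tokens
}

# Example call (requires an API key; commented out for illustration only):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     messages=[{"role": "user", "content": "ping"}], **params
# )
```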

Benchmarks

Benchmark        Accuracy   Std. Err.   Rank
CorpFin          0.0%       ± 0.97      45/88
MortgageTax      0.0%       ± 0.94      19/63
TaxEval (v2)     0.0%       ± 0.88      38/96
AIME             0.0%       ± 1.12      55/88
GPQA             0.0%       ± 2.35      57/91
LiveCodeBench    0.0%       ± 1.18      65/98
LegalBench       0.0%       ± 0.42      60/110
MedQA            0.0%       ± 0.33      60/94
MMLU Pro         0.0%       ± 0.41      62/89
MMMU             0.0%       ± 1.10      38/60
SWE-bench        0.0%       ± 2.13      51/58
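If the ± figures are standard errors on the measured accuracy, they can be turned into approximate confidence intervals. A minimal sketch, assuming a normal approximation (the input numbers below are illustrative, not results from this page):

```python
def confidence_interval(accuracy: float, std_err: float, z: float = 1.96):
    """Approximate 95% CI (z = 1.96) for an accuracy given its standard error.

    Both inputs are percentages; returns (lower, upper) rounded to 2 decimals.
    """
    half_width = z * std_err
    return (round(accuracy - half_width, 2), round(accuracy + half_width, 2))

# Illustrative values only: 65.0% accuracy with a standard error of 2.35
lower, upper = confidence_interval(65.0, 2.35)
print(f"95% CI: [{lower}%, {upper}%]")
```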

The benchmarks above include both academic and proprietary evaluations; contact Vals AI for access to the proprietary benchmark data.


Model benchmarks are seriously lacking. With Vals AI, we report how language models perform on the industry-specific tasks where they will be used.
