o4 Mini (openai/o4-mini-2025-04-16)

Release Date: 4/16/2025

Avg. Accuracy: 70.5%
Latency: 116.86s

Performance by Benchmark

Benchmark        Accuracy   Rank (position / models evaluated)
FinanceAgent     36.5%      12 / 33
CorpFin          70.1%       6 / 44
CaseLaw          64.0%      21 / 26
TaxEval          78.8%       4 / 60
MortgageTax      77.1%       8 / 35
Math500          94.2%       9 / 54
AIME             83.7%      11 / 50
MGSM             93.4%       4 / 53
LegalBench       79.0%      25 / 75
MedQA            96.0%       5 / 56
GPQA             74.5%      10 / 52
MMLU Pro         80.6%      14 / 50
LiveCodeBench    82.2%       5 / 51
IOI               5.3%       6 / 11
MMMU             79.7%       4 / 32
SWE-bench        33.4%      10 / 13

Benchmark types: Academic Benchmarks; Proprietary Benchmarks (contact us to get access).
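The headline Avg. Accuracy appears to be the simple, unweighted mean of the 16 per-benchmark accuracies listed above; a minimal sketch, assuming that aggregation:

```python
# Unweighted mean of the per-benchmark accuracies reported above.
# Assumption: the "Avg. Accuracy" figure is a simple mean; the site may
# actually weight or filter benchmarks differently.
accuracies = {
    "FinanceAgent": 36.5, "CorpFin": 70.1, "CaseLaw": 64.0, "TaxEval": 78.8,
    "MortgageTax": 77.1, "Math500": 94.2, "AIME": 83.7, "MGSM": 93.4,
    "LegalBench": 79.0, "MedQA": 96.0, "GPQA": 74.5, "MMLU Pro": 80.6,
    "LiveCodeBench": 82.2, "IOI": 5.3, "MMMU": 79.7, "SWE-bench": 33.4,
}

avg = sum(accuracies.values()) / len(accuracies)
print(f"Avg. Accuracy: {avg:.1f}%")  # → Avg. Accuracy: 70.5%
```

The listed figures reproduce the reported 70.5% under this assumption, which suggests no benchmark is weighted more heavily than another.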

Cost Analysis

Input Cost:             $1.10 / M tokens
Output Cost:            $4.40 / M tokens
Input Cost (per char):  N/A
Output Cost (per char): N/A
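To estimate what a run costs at these per-million-token rates, a minimal sketch (the function name and example token counts are illustrative, not from the source):

```python
# Per-million-token rates listed above for o4 Mini.
INPUT_COST_PER_M = 1.10   # $ per 1M input tokens
OUTPUT_COST_PER_M = 4.40  # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the listed rates."""
    return (input_tokens * INPUT_COST_PER_M
            + output_tokens * OUTPUT_COST_PER_M) / 1_000_000

# Example: a request with 10,000 input tokens and 2,000 output tokens.
print(f"${request_cost(10_000, 2_000):.4f}")  # → $0.0198
```

Output tokens cost 4x input tokens here, so long generations dominate the bill for reasoning-heavy workloads.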
