o4 Mini (openai/o4-mini-2025-04-16)

Release Date: April 16, 2025

Avg. Accuracy: 78.1%
Latency: 26.13 s

Performance by Benchmark

Benchmark      Accuracy   Rank
FinanceAgent   36.5%      4 / 22
CorpFin        70.1%      3 / 38
CaseLaw        81.1%      23 / 60
ContractLaw    68.9%      15 / 67
TaxEval        78.8%      3 / 47
MortgageTax    77.1%      8 / 26
Math500        94.2%      5 / 43
AIME           83.7%      6 / 37
MGSM           93.4%      1 / 41
LegalBench     79.0%      18 / 65
MedQA          96.0%      3 / 45
GPQA           74.5%      6 / 38
MMLU Pro       80.6%      7 / 38
MMMU           79.7%      3 / 24

The table includes both academic benchmarks and proprietary benchmarks (contact us to get access to the proprietary sets).
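The headline Avg. Accuracy appears consistent with a simple unweighted mean of the fourteen per-benchmark scores above; the exact aggregation used by the leaderboard is an assumption. The sketch below, with the scores hard-coded for illustration, reproduces the 78.1% figure.

```python
# Unweighted mean of the per-benchmark accuracies listed above.
# Assumption: the leaderboard's "Avg. Accuracy" is a simple mean; the
# scores below are copied from the table purely for illustration.
scores = {
    "FinanceAgent": 36.5, "CorpFin": 70.1, "CaseLaw": 81.1,
    "ContractLaw": 68.9, "TaxEval": 78.8, "MortgageTax": 77.1,
    "Math500": 94.2, "AIME": 83.7, "MGSM": 93.4, "LegalBench": 79.0,
    "MedQA": 96.0, "GPQA": 74.5, "MMLU Pro": 80.6, "MMMU": 79.7,
}

avg = sum(scores.values()) / len(scores)
print(f"Unweighted average accuracy: {avg:.1f}%")  # -> 78.1%
```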

Cost Analysis

Input Cost                $1.10 / M tokens
Output Cost               $4.40 / M tokens
Input Cost (per char)     $0.67 / M chars
Output Cost (per char)    N/A
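For a rough sense of what these rates mean per request, the sketch below estimates the cost of a single call from its input and output token counts. The token counts in the example are hypothetical, chosen only to illustrate the arithmetic.

```python
# Estimate the cost of one request at the listed o4 Mini rates
# ($1.10 per million input tokens, $4.40 per million output tokens).
INPUT_COST_PER_M = 1.10   # USD per 1M input tokens
OUTPUT_COST_PER_M = 4.40  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    return (input_tokens / 1_000_000) * INPUT_COST_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_COST_PER_M

# Hypothetical example: a 10,000-token prompt with a 2,000-token completion.
print(f"${request_cost(10_000, 2_000):.4f}")  # -> $0.0198
```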
