openai/nectarine-mini-alpha-2025-08-04 (GPT-5 Mini)

Release Date: 8/7/2025

Avg. Accuracy: 84.2%
Latency: 32.67s

Performance by Benchmark

Benchmark        Accuracy   Ranking
CaseLaw          82.1%      22 / 69
TaxEval          80.1%       1 / 56
MortgageTax      75.4%      10 / 33
Math500          94.8%       4 / 52
AIME             90.8%       2 / 46
MGSM             92.6%       7 / 49
LegalBench       81.7%      12 / 72
GPQA             80.3%       5 / 48
MMLU Pro         82.5%       9 / 46
LiveCodeBench    86.6%       1 / 47
MMMU             78.9%       5 / 30

Benchmarks are split into Academic Benchmarks and Proprietary Benchmarks (contact us for access).
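Assuming the headline figure is the unweighted mean of the eleven per-benchmark scores above (the page does not state the aggregation method), it can be sanity-checked in a few lines:

```python
# Per-benchmark accuracies from the table above (percent).
scores = {
    "CaseLaw": 82.1, "TaxEval": 80.1, "MortgageTax": 75.4,
    "Math500": 94.8, "AIME": 90.8, "MGSM": 92.6,
    "LegalBench": 81.7, "GPQA": 80.3, "MMLU Pro": 82.5,
    "LiveCodeBench": 86.6, "MMMU": 78.9,
}

# Unweighted mean across all listed benchmarks.
avg = sum(scores.values()) / len(scores)
# round(avg, 1) gives 84.2, matching the Avg. Accuracy headline.
```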

Cost Analysis

Input Cost:             $0.25 / M tokens
Output Cost:            $2.00 / M tokens
Input Cost (per char):  $0.07 / M chars
Output Cost (per char): N/A
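The per-token prices above translate directly into a per-request estimate. A minimal sketch; the helper name and the example token counts are hypothetical, only the prices come from the table:

```python
# Published per-million-token prices for this model (USD).
INPUT_COST_PER_M = 0.25   # $ per 1M input tokens
OUTPUT_COST_PER_M = 2.00  # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from its token counts."""
    return (input_tokens * INPUT_COST_PER_M
            + output_tokens * OUTPUT_COST_PER_M) / 1_000_000

# Example: a 10,000-token prompt producing a 2,000-token response
# costs 10,000 * 0.25/1M + 2,000 * 2.00/1M = $0.0065.
cost = request_cost(10_000, 2_000)
```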

Join our mailing list to receive benchmark updates and stay up to date as new benchmarks and models are released.