o3 Mini (openai/o3-mini-2025-01-31)

OpenAI's most recent small reasoning model, providing high intelligence at the same cost and latency targets as o1-mini.

Release Date: 1/31/2025

Avg. Accuracy: 73.0%

Latency: 58.06s

Performance by Benchmark

Benchmark        Accuracy   Ranking
FinanceAgent     12.7%      25 / 34
CorpFin          55.7%      28 / 45
TaxEval          73.8%      29 / 62
Math500          91.8%      16 / 54
AIME             86.5%       6 / 52
MGSM             91.3%      21 / 55
LegalBench       70.9%      55 / 77
MedQA            94.8%       6 / 58
GPQA             75.0%      10 / 54
MMLU Pro         78.7%      25 / 52
LiveCodeBench    71.5%      10 / 54

The table mixes academic benchmarks and proprietary benchmarks (contact us to get access to the proprietary benchmarks).
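
The headline Avg. Accuracy of 73.0% matches the unweighted mean of the eleven per-benchmark scores above. The short Python sketch below reproduces that figure under our assumption that no weighting is applied; it is an inference from the listed numbers, not a documented formula.

```python
# Per-benchmark accuracies from the table above (percent).
accuracies = {
    "FinanceAgent": 12.7,
    "CorpFin": 55.7,
    "TaxEval": 73.8,
    "Math500": 91.8,
    "AIME": 86.5,
    "MGSM": 91.3,
    "LegalBench": 70.9,
    "MedQA": 94.8,
    "GPQA": 75.0,
    "MMLU Pro": 78.7,
    "LiveCodeBench": 71.5,
}

# Unweighted mean across the 11 benchmarks (assumption: no per-benchmark weighting).
avg = sum(accuracies.values()) / len(accuracies)
print(f"Avg. Accuracy: {avg:.1f}%")  # -> Avg. Accuracy: 73.0%
```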

Cost Analysis

Input Cost: $1.10 / M tokens
Output Cost: $4.40 / M tokens
Input Cost (per char): N/A
Output Cost (per char): N/A
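
As a rough illustration of these rates, the sketch below estimates the price of a single request with hypothetical token counts (3,000 input, 1,200 output); the counts are ours, not from the benchmark.

```python
# Listed rates: $1.10 per 1M input tokens, $4.40 per 1M output tokens.
INPUT_RATE = 1.10 / 1_000_000   # USD per input token
OUTPUT_RATE = 4.40 / 1_000_000  # USD per output token

# Hypothetical request size (illustrative only).
input_tokens = 3_000
output_tokens = 1_200

# Note: for reasoning models, hidden reasoning tokens are generally billed as
# output tokens, so real output counts can exceed the visible completion length.
cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
print(f"Estimated cost: ${cost:.4f}")  # -> Estimated cost: $0.0086
```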

Overview

o3 Mini is the latest model in OpenAI's reasoning series. Like o1-mini, it is a smaller, more cost-efficient model, and OpenAI claims it is also faster than o1-mini.

New in this model, users can select the level of reasoning effort they want the model to apply: low, medium, or high.
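
A minimal sketch of selecting the effort level through the OpenAI Python SDK's Chat Completions interface, assuming the reasoning_effort parameter exposed for o-series models (check your SDK version; the prompt here is purely illustrative):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Request a completion from o3-mini with an explicit reasoning effort level.
# "reasoning_effort" accepts "low", "medium", or "high".
response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="high",
    messages=[
        {"role": "user", "content": "Summarize the key risks in this filing excerpt: ..."},
    ],
)

print(response.choices[0].message.content)
```

Lower effort levels trade some accuracy for reduced latency and token usage, while higher levels spend more reasoning tokens on harder problems.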

Key Specifications

  • Context Window: 200,000 tokens
  • Max Output Tokens: 100,000 tokens
  • Training Cutoff: October 2023
  • Pricing:
    • Input: $1.10 / 1M tokens
    • Output: $4.40 / 1M tokens