GPT-4o Mini (openai/gpt-4o-mini-2024-07-18)

gpt-4o-mini currently points to this version.

Release Date: July 18, 2024

Avg. Accuracy: 54.5%

Latency: 17.36s

Performance by Benchmark

| Benchmark     | Accuracy | Rank    |
|---------------|----------|---------|
| FinanceAgent  | 10.8%    | 25 / 33 |
| CorpFin       | 55.0%    | 28 / 44 |
| TaxEval       | 64.9%    | 48 / 60 |
| MortgageTax   | 69.2%    | 21 / 35 |
| Math500       | 72.6%    | 43 / 54 |
| AIME          | 11.5%    | 42 / 50 |
| MGSM          | 86.2%    | 43 / 53 |
| LegalBench    | 76.2%    | 41 / 75 |
| MedQA         | 72.4%    | 43 / 56 |
| GPQA          | 44.2%    | 43 / 52 |
| MMLU Pro      | 62.7%    | 46 / 50 |
| MMMU          | 56.6%    | 29 / 32 |
| LiveCodeBench | 26.4%    | 48 / 52 |

The table includes both academic benchmarks and proprietary benchmarks (contact us to get access to the proprietary ones).

Cost Analysis

| Metric                 | Cost             |
|------------------------|------------------|
| Input Cost             | $0.15 / M tokens |
| Output Cost            | $0.60 / M tokens |
| Input Cost (per char)  | N/A              |
| Output Cost (per char) | $0.18 / M chars  |
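As a back-of-the-envelope check, the per-character output rate is consistent with the per-token rate at roughly 3.3 characters per token (assuming both rates describe the same underlying pricing):

```python
# Implied characters-per-token ratio from the two listed output rates.
output_per_m_tokens = 0.60  # USD per million output tokens
output_per_m_chars = 0.18   # USD per million output characters

chars_per_token = output_per_m_tokens / output_per_m_chars
print(f"{chars_per_token:.2f}")  # → 3.33
```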

Overview

GPT-4o Mini is OpenAI's cheaper, more lightweight counterpart to GPT-4o. It offers a compelling balance of performance and cost, making it particularly suitable for production deployments where both quality and economics matter.

Key Specifications

  • Context Window: 128,000 tokens
  • Output Limit: 16,384 tokens
  • Training Cutoff: October 2023
  • Pricing:
    • Input: $0.15 per million tokens
    • Output: $0.60 per million tokens
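Given these rates and limits, per-request cost is easy to estimate. A minimal sketch (the helper name and example token counts are illustrative, not part of any API):

```python
# Published rates and limits for gpt-4o-mini-2024-07-18.
INPUT_RATE = 0.15       # USD per million input tokens
OUTPUT_RATE = 0.60      # USD per million output tokens
CONTEXT_WINDOW = 128_000  # max input + output tokens
OUTPUT_LIMIT = 16_384     # max output tokens per request

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request at the listed rates."""
    if output_tokens > OUTPUT_LIMIT:
        raise ValueError("output exceeds the 16,384-token output limit")
    if input_tokens + output_tokens > CONTEXT_WINDOW:
        raise ValueError("request exceeds the 128K-token context window")
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${estimate_cost(2_000, 500):.6f}")  # → $0.000600
```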

Performance Highlights

  • Cost Efficiency: Significantly cheaper than GPT-4 while maintaining strong performance
  • Legal Tasks: Shows strong performance on legal reasoning tasks
  • Consistency: Reliable performance across various benchmark categories

Benchmark Results

The model demonstrates competitive performance across our benchmarks, generally ranking mid-pack. It is strongest on multilingual math (MGSM, 86.2%) and legal reasoning (LegalBench, 76.2%), and weakest on agentic finance tasks (FinanceAgent, 10.8%) and competition math (AIME, 11.5%).

Use Case Recommendations

Best suited for:

  • High-volume production deployments
  • Cost-sensitive applications
  • Tasks requiring balance of performance and efficiency
  • Legal document analysis at scale

Limitations

  • Lower performance ceiling compared to GPT-4o
  • May struggle with highly complex legal and financial tasks

Comparison with Other Models

  • More capable than GPT-3.5 Turbo
  • More cost-effective than GPT-4
  • Competitive with Claude 3.5 Haiku in terms of performance/cost ratio