
This is the first gpt-4o snapshot to support Structured Outputs; the gpt-4o alias currently points to this version.
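Structured Outputs constrains the model's reply to a caller-supplied JSON Schema. A minimal request-body sketch follows; the dated snapshot name is inferred from the release date below, and the schema name and fields are illustrative (actually sending the request requires an API key and the OpenAI SDK or a raw HTTP POST):

```python
# Minimal Chat Completions request body using the json_schema
# response format (Structured Outputs). Field names here are
# illustrative, not part of the API.
payload = {
    "model": "gpt-4o-2024-08-06",
    "messages": [
        {
            "role": "user",
            "content": "Extract the company and fiscal year from: "
                       "'Apple's FY2023 10-K filing.'",
        }
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "filing_reference",
            "strict": True,  # strict mode: reply must match the schema exactly
            "schema": {
                "type": "object",
                "properties": {
                    "company": {"type": "string"},
                    "fiscal_year": {"type": "integer"},
                },
                # strict mode requires every property listed in "required"
                # and additionalProperties set to False
                "required": ["company", "fiscal_year"],
                "additionalProperties": False,
            },
        },
    },
}
```

With strict mode enabled, the API guarantees the reply parses as JSON conforming to the schema, which removes a class of output-validation retries in production pipelines.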

Release Date: August 6, 2024

Avg. Accuracy: 65.4%
Latency: 16.46s

Performance by Benchmark

Benchmark       Accuracy   Ranking
FinanceAgent    19.3%      11 / 22
CorpFin         49.3%      30 / 38
CaseLaw         83.3%      16 / 60
ContractLaw     61.7%      55 / 67
TaxEval         75.0%      16 / 47
MortgageTax     75.2%       9 / 26
Math500         75.2%      29 / 43
AIME            14.0%      27 / 37
MGSM            90.6%      17 / 41
LegalBench      79.0%      20 / 65
MedQA           88.2%      18 / 45
MMLU Pro        74.1%      22 / 38
MMMU            65.5%      17 / 24

The suite mixes academic benchmarks with proprietary benchmarks (contact us to get access to the proprietary sets).
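The 65.4% headline figure matches the unweighted mean of the 13 per-benchmark scores, which can be checked directly:

```python
# Per-benchmark accuracies from the table above, in percent,
# in table order (FinanceAgent through MMMU).
accuracies = [19.3, 49.3, 83.3, 61.7, 75.0, 75.2, 75.2,
              14.0, 90.6, 79.0, 88.2, 74.1, 65.5]

avg = sum(accuracies) / len(accuracies)
print(round(avg, 1))  # → 65.4
```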

Cost Analysis

Input Cost:              $2.50 / M tokens
Output Cost:             $10.00 / M tokens
Input Cost (per char):   $1.05 / M chars
Output Cost (per char):  $2.25 / M chars

Overview

GPT-4o is OpenAI’s latest flagship model, optimized for multi-step tasks. It represents a sweet spot between performance and efficiency, making it particularly attractive for production deployments that require high intelligence but need to manage costs.

Key Specifications

  • Context Window: 128,000 tokens
  • Output Limit: 16,384 tokens
  • Training Cutoff: October 2023
  • Pricing:
    • Input: $2.50 per million tokens
    • Cached Input: $1.25 per million tokens
    • Output: $10.00 per million tokens
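Given the pricing above, a request's dollar cost follows directly from its token counts, with cached input tokens billed at the discounted rate. A minimal sketch (the function name and example token counts are illustrative):

```python
# Per-token prices in dollars, from the pricing list above.
PRICE_INPUT = 2.50 / 1_000_000
PRICE_CACHED_INPUT = 1.25 / 1_000_000
PRICE_OUTPUT = 10.00 / 1_000_000

def estimate_cost(input_tokens: int, output_tokens: int,
                  cached_tokens: int = 0) -> float:
    """Estimate request cost in dollars; cached input tokens
    are billed at the discounted cached-input rate."""
    uncached = input_tokens - cached_tokens
    return (uncached * PRICE_INPUT
            + cached_tokens * PRICE_CACHED_INPUT
            + output_tokens * PRICE_OUTPUT)

# e.g. 10,000 input tokens (4,000 of them cached) and 1,000 output tokens:
cost = estimate_cost(10_000, 1_000, cached_tokens=4_000)  # ≈ $0.03
```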

Performance Highlights

  • Speed: Faster inference than standard GPT-4
  • Cost Efficiency: input tokens priced 4x lower than GPT-4 Turbo
  • Reasoning: Strong performance on complex logical tasks
  • Consistency: Reliable outputs across different domains

Benchmark Results

Strong results on several of our benchmarks:

  • TaxEval: 75.0% accuracy in tax reasoning
  • LegalBench: 79.0% in legal analysis
  • ContractLaw: 61.7% accuracy in contract interpretation
  • CaseLaw: 83.3% in case law understanding

Use Case Recommendations

Best suited for:

  • Production API deployments
  • Complex reasoning tasks
  • Legal document analysis
  • Financial modeling
  • Tasks requiring balance of cost and capability

Limitations

  • Does not match o1 on complex, multi-step reasoning tasks

Comparison with Other Models

  • More powerful than GPT-4o Mini
  • Competitive with Claude 3.5 Sonnet
  • Better performance/cost ratio than most competitors