First snapshot that supports Structured Outputs. gpt-4o currently points to this version.
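Since this snapshot introduced Structured Outputs, here is a minimal sketch of what a request body using the Chat Completions `response_format` field might look like. The schema and field values are hypothetical, chosen only to illustrate the shape of the payload; Structured Outputs requires `"strict": true` and `"additionalProperties": false` on the schema.

```python
import json

# Hypothetical extraction schema for illustration: pull a contract's
# parties and effective date as guaranteed-valid JSON.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "contract_extraction",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "parties": {"type": "array", "items": {"type": "string"}},
                "effective_date": {"type": "string"},
            },
            "required": ["parties", "effective_date"],
            "additionalProperties": False,
        },
    },
}

request_body = {
    "model": "gpt-4o-2024-08-06",  # the snapshot described on this page
    "messages": [
        {"role": "user", "content": "Extract the parties and effective date."}
    ],
    "response_format": response_format,
}

print(json.dumps(request_body, indent=2))
```

With this payload, the model's response is constrained to match the schema, which is what makes the feature useful for the document-extraction workloads benchmarked below.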

Release Date: 8/6/2024

Avg. Accuracy: 69.2%

Latency: 14.85s

Performance by Benchmark

Benchmark        Accuracy   Ranking
CorpFin          49.3%      23 / 31
CaseLaw          83.3%      11 / 50
ContractLaw      61.7%      47 / 57
TaxEval          75.0%      11 / 37
MortgageTax      75.2%       5 / 18
Math500          75.2%      19 / 33
AIME             14.0%      19 / 29
MGSM             90.6%      12 / 31
LegalBench       79.0%      13 / 55
MedQA            88.2%      12 / 35
MMLU Pro         74.1%      15 / 30
MMMU             65.5%      12 / 17

Benchmarks span academic and proprietary suites (contact us to get access to the proprietary benchmarks).
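The headline Avg. Accuracy is presumably an aggregate of the per-benchmark scores above. As a sanity check, an unweighted mean of the twelve listed accuracies comes out to roughly 69.3%, in line with the reported 69.2% (the published figure may use different weighting or rounding):

```python
# Per-benchmark accuracies (%) copied from the table above.
scores = {
    "CorpFin": 49.3, "CaseLaw": 83.3, "ContractLaw": 61.7, "TaxEval": 75.0,
    "MortgageTax": 75.2, "Math500": 75.2, "AIME": 14.0, "MGSM": 90.6,
    "LegalBench": 79.0, "MedQA": 88.2, "MMLU Pro": 74.1, "MMMU": 65.5,
}

mean = sum(scores.values()) / len(scores)
print(f"Unweighted mean accuracy: {mean:.1f}%")  # → 69.3%
```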

Cost Analysis

Input Cost: $2.50 / M tokens

Output Cost: $10.00 / M tokens

Input Cost (per char): $0.61 / M chars

Output Cost (per char): $2.69 / M chars

Overview

GPT-4o is OpenAI’s latest flagship model, optimized for multi-step tasks. It hits a sweet spot between capability and efficiency, making it particularly attractive for production deployments that need high intelligence while keeping costs manageable.

Key Specifications

  • Context Window: 128,000 tokens
  • Output Limit: 16,384 tokens
  • Training Cutoff: October 2023
  • Pricing:
    • Input: $2.50 per million tokens
    • Cached Input: $1.25 per million tokens
    • Output: $10.00 per million tokens
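At these rates, a per-request cost estimate is simple arithmetic; the sketch below assumes cached input tokens are billed at the cached rate and everything else at the standard rates listed above.

```python
# Price per million tokens for this snapshot (from the pricing list above).
INPUT_PER_M = 2.50
CACHED_INPUT_PER_M = 1.25
OUTPUT_PER_M = 10.00

def request_cost(input_tokens: int, output_tokens: int,
                 cached_tokens: int = 0) -> float:
    """Estimate USD cost of one request; cached_tokens is the portion of
    input_tokens billed at the cached-input rate."""
    fresh = input_tokens - cached_tokens
    return (fresh * INPUT_PER_M
            + cached_tokens * CACHED_INPUT_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. 10k input tokens (half of them cached) plus 1k output tokens:
print(f"${request_cost(10_000, 1_000, cached_tokens=5_000):.5f}")  # → $0.02875
```

Note that the per-character rates in the Cost Analysis section imply roughly 4 characters per token ($2.50 / $0.61 ≈ 4.1), a useful rule of thumb when estimating costs from raw document sizes.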

Performance Highlights

  • Speed: Faster inference than standard GPT-4
  • Cost Efficiency: Roughly 3x to 4x cheaper than GPT-4 Turbo ($2.50 vs. $10 input, $10 vs. $30 output per million tokens)
  • Reasoning: Strong performance on complex logical tasks
  • Consistency: Reliable outputs across different domains

Benchmark Results

Solid performance across our benchmarks:

  • TaxEval: 75.0% accuracy (rank 11/37) in tax reasoning
  • LegalBench: 79.0% (13/55) in legal analysis
  • ContractLaw: 61.7% (47/57) in contract interpretation
  • CaseLaw: 83.3% (11/50) in case law understanding

Use Case Recommendations

Best suited for:

  • Production API deployments
  • Complex reasoning tasks
  • Legal document analysis
  • Financial modeling
  • Tasks requiring balance of cost and capability

Limitations

  • Unable to perform the same complex, multi-step reasoning as o1

Comparison with Other Models

  • More powerful than GPT-4o Mini
  • Competitive with Claude 3.5 Sonnet
  • Better performance/cost ratio than most competitors