openai/o1-2024-12-17 o1

Points to the most recent snapshot of the o1 model: o1-2024-12-17

Release Date: 12/17/2024

Avg. Accuracy: 81.1%

Latency: 34.34s

Performance by Benchmark

Benchmark     Accuracy   Ranking
CaseLaw       81.9%      13 / 50
ContractLaw   72.7%      6 / 57
TaxEval       78.6%      2 / 37
Math500       90.4%      6 / 33
AIME          71.5%      5 / 29
MGSM          88.9%      17 / 31
LegalBench    77.6%      22 / 55
MedQA         96.5%      1 / 35
GPQA          73.0%      6 / 30
MMLU Pro      83.5%      2 / 30
MMMU          77.7%      2 / 17


Cost Analysis

Input Cost: $15.00 / M tokens
Output Cost: $60.00 / M tokens
Input Cost (per char): $3.75 / M chars
Output Cost (per char): N/A

Overview

OpenAI o1 is the production-ready successor to the previously released o1-preview. OpenAI claims it generally has lower latency than its predecessor, and unlike o1-preview it supports system prompts and the temperature parameter.

It also supports a new "reasoning effort" parameter, which lets users control how much reasoning the model performs before answering.
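As a sketch of how this parameter is used, the snippet below assembles chat-completion arguments for the OpenAI Python SDK. The helper `build_request` is hypothetical (not part of the SDK); the `reasoning_effort` parameter and its "low"/"medium"/"high" values follow OpenAI's documented API for o-series models.

```python
# Sketch: passing the "reasoning effort" parameter to an o1 request.
# Assumes the openai package is installed and OPENAI_API_KEY is set.

def build_request(prompt: str, effort: str = "medium") -> dict:
    """Assemble chat-completion arguments; effort is one of low/medium/high."""
    if effort not in {"low", "medium", "high"}:
        raise ValueError("reasoning_effort must be 'low', 'medium', or 'high'")
    return {
        "model": "o1",
        "reasoning_effort": effort,  # higher effort -> more reasoning tokens spent
        "messages": [
            # o1 (unlike o1-preview) accepts a system prompt
            {"role": "system", "content": "You are a careful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

# The actual call would then be:
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(**build_request("Prove 7 is prime.", "high"))
```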

Key Specifications

  • Context Window: 200,000 tokens
  • Output Limit: 100,000 tokens
  • Training Cutoff: October 2023
  • Pricing:
    • Input: $15.00 per million tokens
    • Cached Input: $7.50 per million tokens
    • Output: $60.00 per million tokens
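The listed per-token prices make per-request cost easy to estimate. The helper below is illustrative (not an official tool) and simply applies the input, cached-input, and output rates from the pricing list above.

```python
# Sketch: estimate an o1 request's USD cost from the prices listed above
# (USD per million tokens: input $15.00, cached input $7.50, output $60.00).

PRICE_PER_M = {"input": 15.00, "cached_input": 7.50, "output": 60.00}

def estimate_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Return the estimated USD cost for one request."""
    uncached = input_tokens - cached_tokens  # cached tokens bill at the lower rate
    usd = (
        uncached * PRICE_PER_M["input"]
        + cached_tokens * PRICE_PER_M["cached_input"]
        + output_tokens * PRICE_PER_M["output"]
    ) / 1_000_000
    return round(usd, 6)

# e.g. 10k input tokens (half cached) plus 30k output tokens:
# estimate_cost(10_000, 30_000, cached_tokens=5_000)  -> 1.9125 USD
```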