LegalBench Benchmark

| Rank | Provider | Model | Accuracy | Cost ($ per 1M tokens, input / output) | Latency |
|------|----------|-------|----------|----------------------------------------|---------|
| 1 | OpenAI | o1 Preview | 81.7% | $15.00 / $60.00 | 10.33 s |
| 2 | OpenAI | GPT 4o (2024-11-20) | 79.8% | $2.50 / $10.00 | 0.35 s |
| 3 | Google | Gemini 2.0 Flash (001) | 79.7% | $0.10 / $0.70 | 0.34 s |
| 4 | Anthropic | Claude 3.7 Sonnet (Thinking) | 79.3% | $3.00 / $15.00 | 6.60 s |
| 5 | Alibaba | Qwen 2.5 Instruct Turbo (72B) | 79.2% | $1.20 / $1.20 | 0.62 s |
| 6 | Meta | Llama 3.1 Instruct Turbo (405B) | 79.0% | $3.50 / $3.50 | 0.81 s |
| 7 | OpenAI | GPT 4o (2024-08-06) | 79.0% | $2.50 / $10.00 | 0.39 s |
| 8 | Anthropic | Claude 3.5 Sonnet Latest | 78.8% | $3.00 / $15.00 | 0.83 s |
| 9 | OpenAI | o1 Mini | 78.7% | $3.00 / $12.00 | 3.81 s |
| 10 | Meta | Llama 3.1 Instruct Turbo (70B) | 78.2% | $0.88 / $0.88 | 3.01 s |
| 11 | Anthropic | Claude 3.7 Sonnet | 78.1% | $3.00 / $15.00 | 1.13 s |
| 12 | Meta | Llama 3.3 Instruct Turbo (70B) | 78.0% | $0.88 / $0.88 | 0.56 s |
| 13 | OpenAI | GPT 4 Turbo | 78.0% | $10.00 / $30.00 | 1.01 s |
| 14 | Anthropic | Claude 3.5 Sonnet | 77.6% | $3.00 / $15.00 | 1.40 s |
| 15 | OpenAI | o1 | 77.6% | $15.00 / $60.00 | 8.47 s |
| 16 | Anthropic | Claude 3 Opus | 77.5% | $15.00 / $75.00 | 4.61 s |
| 17 | OpenAI | GPT 4o Mini | 76.2% | $0.15 / $0.60 | 0.40 s |
| 18 | AI21 Labs | Jamba 1.5 Large | 74.8% | $2.00 / $8.00 | 0.73 s |
| 19 | Google | Gemini 1.5 Pro (001) | 74.4% | $1.25 / $5.00 | 1.54 s |
| 20 | Google | Gemini 1.0 Pro 002 | 73.4% | $0.50 / $1.50 | 0.44 s |
| 21 | Anthropic | Claude 3.5 Haiku Latest | 73.0% | $1.00 / $5.00 | 1.00 s |
| 22 | Alibaba | Qwen 2.5 Instruct Turbo (7B) | 72.4% | $0.30 / $0.30 | 0.33 s |
| 23 | Anthropic | Claude 3 Sonnet | 72.3% | $3.00 / $15.00 | 1.98 s |
| 24 | Google | Gemini 1.5 Pro (002) | 71.6% | $1.25 / $5.00 | 0.62 s |
| 25 | OpenAI | o3 Mini | 70.9% | $1.10 / $4.40 | 9.80 s |
| 26 | Cohere | Command R+ | 70.5% | $2.50 / $10.00 | 0.27 s |
| 27 | Google | Gemma 2 (9B) | 70.0% | $0.20 / $0.20 | 0.43 s |
| 28 | DeepSeek | DeepSeek R1 | 69.9% | $8.00 / $8.00 | 18.66 s |
| 29 | Cohere | Command | 68.8% | $1.00 / $2.00 | 2.00 s |
| 30 | AI21 Labs | Jamba 1.5 Mini | 68.1% | $0.20 / $0.40 | 0.25 s |
| 31 | Google | Gemma 2 (27B) | 65.5% | $0.50 / $0.50 | 0.59 s |
| 32 | OpenAI | GPT 3.5 | 64.8% | $0.50 / $1.50 | 0.48 s |
| 33 | Google | Gemini 1.5 Flash (002) | 62.0% | $0.07 / $0.30 | 0.41 s |
| 34 | Meta | Llama 2 (70B) | 59.4% | $0.90 / $0.90 | 13.08 s |
| 35 | Mistral | Mixtral (8x7B) | 53.3% | $0.60 / $0.60 | 13.81 s |
| 36 | Meta | Llama 3.1 Instruct Turbo (8B) | 52.5% | $0.18 / $0.18 | 0.56 s |
| 37 | Meta | Llama 2 (13B) | 51.6% | $0.20 / $0.20 | 1.35 s |
| 38 | Mistral | Mistral (7B) | 51.0% | $0.18 / $0.18 | 1.95 s |
| 39 | Technology Innovation Institute | Falcon (40B) | 50.9% | $0.80 / $0.80 | 5.05 s |
| 40 | Meta | Llama 2 (7B) | 49.3% | $0.20 / $0.20 | 0.85 s |
| 41 | Technology Innovation Institute | Falcon (7B) | 47.1% | $0.20 / $0.20 | 1.24 s |
| 42 | Databricks | DBRX Instruct | 35.6% | $2.25 / $6.75 | 0.82 s |
| 43 | Cohere | Command R | 35.0% | $0.15 / $0.60 | 0.19 s |



Key Takeaways

  • o1 Preview has taken the lead on LegalBench by 1.7 percentage points. It does especially well on rule-application tasks, most likely because of its enhanced reasoning capabilities.
  • Gemini 2.0 Flash takes third place, a big improvement over Google's previous models. It is also the fastest and most affordable model relative to its performance.
  • GPT-4o Mini and Llama 3.1 70B stand out as great budget models, achieving strong performance while being significantly cheaper than flagship models.
  • A given model's performance can vary dramatically across different legal tasks. There is still significant room for improvement before these models perform well across all legal tasks.

Context

There has been considerable effort to measure language model performance on academic tasks and in chatbot settings, but these high-level benchmarks do not carry over to specific industry use cases. Here we start to remedy this by reporting our application-specific findings and live leaderboard results on LegalBench, a large crowd-sourced collection of legal reasoning tasks. The dataset is quite comprehensive, covering six major categories:

  • Issue-spotting, where a model must determine if a fact has relevance to a particular law or legal entity.
  • Rule-recall, where a model must identify a relevant rule or state its characteristics.
  • Rule-conclusion, where a model must predict a legal outcome.
  • Rule-application, where a model must analyze how a rule was applied to reach a conclusion.
  • Interpretation, where a model must parse and understand legal text.
  • Rhetorical understanding, where a model must determine whether a legal argument performs a certain function.

Overall Results

On the overall task, o1 Preview is the clear winner, outperforming the other models on every question type.

o1 Preview

Release date: 9/12/2024

Accuracy: 81.7%
Latency: 10.3 s
Cost: $15.00 / $60.00

  • o1 Preview is the new state of the art, performing especially well on Rule tasks.
  • It is very expensive, costing much more than GPT-4o or Claude 3.5 Sonnet, and it also comes with a high latency cost.
  • It is also harder to prompt; it lacks the fine-grained controls we have collectively become accustomed to.
  • o1 Preview was very verbose on the non-multiple-choice tasks (see the model output example below).



Llama 3.1 Instruct Turbo (405B)

Release date: 7/23/2024

Accuracy: 79.0%
Latency: 0.8 s
Cost: $3.50 / $3.50

  • The Llama 3.1 405B model had a standout performance on LegalBench and was previously state of the art before being dethroned by o1 Preview.
  • The model is priced higher than other open-source models, at $3.50 per million tokens for both input and output. This is still in line with GPT-4o and Claude 3.5 Sonnet pricing, though.
  • We will see how the closed-source providers respond to this new entrant in the foundation model market.


The results per question type are summarized in the graph below.

[Chart: LegalBench results by question type]

On a task-by-task basis, o1 Preview was often the best, but GPT-4o and Llama 3.1 also claimed two of the top task-specific spots. A lot of the bump in performance for o1 Preview came from tasks in the “Rule” category; for example, it scored a nearly perfect 98% on “Rule QA”, one of the few free-response tasks in LegalBench.

Gemini Pro, Claude Sonnet, and Cohere Command R+ competed for the middle spots on the leaderboard. The open-source models also generally had average performance, although among them the Llama models were significantly better. Interestingly, the Llama models did not perform well on Rule tasks, and the upgraded Claude 3.5 Sonnet did not perform well on Issue tasks.

[Chart: LegalBench cost vs. accuracy]

The cost-accuracy graph shows a few models that define a Pareto curve of tradeoffs: GPT-4o Mini, Llama 3.1 70B, Llama 3.1 405B, and o1 Preview. Just off the Pareto curve are GPT-4o and Claude 3.5 Sonnet. The objective difference in performance among these models is small (a matter of a few percentage points), yet their prices differ wildly. GPT-4o Mini particularly stands out as having a very high quality-to-price ratio. A cheaper model may still be the better choice in domains with high token usage or cost sensitivity.
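
To make the Pareto framing concrete, here is a minimal sketch of how such a frontier can be identified from the accuracy and price columns in the table above. It is restricted to the models named in this paragraph, and the 3:1 input-to-output token blend used to collapse the two prices into a single cost number is an assumption for illustration, not part of the leaderboard's methodology.

```python
# Minimal illustrative sketch (not the leaderboard's own analysis code).
# Accuracy and prices are copied from the table above; the 3:1 input:output
# token mix behind the blended cost is an assumption.
ENTRIES = {
    # name: (input $/1M tokens, output $/1M tokens, accuracy %)
    "o1 Preview":          (15.00, 60.00, 81.7),
    "Llama 3.1 405B":      (3.50,   3.50, 79.0),
    "GPT 4o (2024-08-06)": (2.50,  10.00, 79.0),
    "Claude 3.5 Sonnet":   (3.00,  15.00, 78.8),
    "Llama 3.1 70B":       (0.88,   0.88, 78.2),
    "GPT 4o Mini":         (0.15,   0.60, 76.2),
}

def blended_cost(inp: float, out: float, input_share: float = 0.75) -> float:
    """Collapse input/output prices into one number, assuming 3 input tokens per output token."""
    return input_share * inp + (1.0 - input_share) * out

def pareto_frontier(entries: dict) -> list[str]:
    """A model is on the frontier if no other model is at least as cheap and at least
    as accurate while being strictly better on one of the two axes."""
    points = {name: (blended_cost(i, o), acc) for name, (i, o, acc) in entries.items()}
    frontier = []
    for name, (cost, acc) in points.items():
        dominated = any(
            c2 <= cost and a2 >= acc and (c2 < cost or a2 > acc)
            for other, (c2, a2) in points.items()
            if other != name
        )
        if not dominated:
            frontier.append(name)
    return sorted(frontier)

print(pareto_frontier(ENTRIES))
# -> ['GPT 4o Mini', 'Llama 3.1 405B', 'Llama 3.1 70B', 'o1 Preview']
```

With these figures and that assumed token mix, the four models named above come out undominated, while GPT-4o and Claude 3.5 Sonnet are dominated by Llama 3.1 405B, matching the graph's frontier.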

Among the other models, there is a somewhat logarithmic trend, with the more expensive models seeing diminishing returns for marginal cost. Also, models of previous “generations” (Claude 2, GPT 3.5, etc.) perform strictly worse for their price than the newer models; providers are likely disincentivizing their use.

Gemini 1.5 does not perform significantly better than Gemini 1.0: although it does better on certain tasks and categories, it performs significantly worse on others. It is often overly verbose, or it does not understand the in-context examples without additional prompting.


Model Output Example

We show example responses from one of the LegalBench free-response tasks (Rule QA). The models were asked: “Where in the Federal Rules of Civil Procedure are notice requirements described?”

This question is tricky because both Rules 4 and 5 talk about the service process and notice requirements. However, Rule 4 is the main rule that defines the first notice in a proceeding, whereas Rule 5 gives the procedures for filing pleadings, etc.

This question is also difficult because it is only implicit that the “notice” in question is notice of a suit. GPT correctly determines that the question is about the initial notice, whereas Opus does not. Gemini also gets the answer wrong, but it is much more terse than the other two models. Despite its strong overall performance, Llama 3.1 makes the same mistake as Opus, citing Rule 5 instead of Rule 4. Interestingly, o1 Preview is a lot more verbose than the other models on this question.

It is important to note that this is one question from a very large evaluation dataset and is not representative of overall performance.

Q

Where in the Federal Rules of Civil Procedure are notice requirements described?

A

The notice requirements in federal civil litigation are primarily described in Rule 5 of the Federal Rules of Civil Procedure (FRCP). Specifically: 1. Rule 5(a) outlines the service requirements for pleadings and other papers, stating that unless the rules provide otherwise, each paper must be served on every party. 2. Rule 5(b) describes how service can be made, including the methods of service and the permissible means of electronic service. 3. Rule 5(c) states that all papers after the complaint required to be served must be filed with the court within a reasonable time after service. 4. Rule 5(d) covers the filing requirements, including electronic filing and the filing of discovery materials. Additionally, other rules in the FRCP may contain specific notice requirements for certain situations, such as Rule 4 for serving a summons and complaint, Rule 6 for computing and extending time, and Rule 55 for default judgments.

INCORRECT


Additional Notes

Gemini

The Gemini documentation and error handling were extremely poor. Even after turning content moderation down to the least restrictive setting, the API frequently returned an “unsafe content” error. In some cases we could work around this, because the error payload included the supposedly unsafe generation.

Often, we would get an index out-of-bounds error originating from within Gemini's own Python SDK rather than a more meaningful error message. After further debugging, we believe this to be an additional layer of content moderation that is not exposed to the user in any capacity.

In general, Gemini applies much stricter content moderation to its output than most other models, even with content blocking set to the lowest possible settings. Because of this, the model was unable to produce outputs for many tests in the learned_hands tasks and others; these cases are treated as failures. The Gemini Pro results may improve considerably if the content moderation were better calibrated.
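
For reference, a minimal sketch of what “lowest possible settings” looks like with the google-generativeai Python SDK; the model name and prompt are illustrative, and this is not the benchmark harness itself.

```python
import google.generativeai as genai
from google.generativeai.types import HarmBlockThreshold, HarmCategory

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro-001")  # illustrative model name

# Dial every adjustable harm category down to its least restrictive threshold.
least_restrictive = {
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
}

response = model.generate_content(
    "Does this post describe a housing-law issue? 'My landlord changed the locks without notice.'",
    safety_settings=least_restrictive,
)

# Generations can still be withheld even with BLOCK_NONE; treat those cases as failures.
if response.candidates and response.candidates[0].content.parts:
    print(response.text)
else:
    print("Blocked:", response.prompt_feedback)
```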

Gemini Pro pricing is per character, not per token. We go by the pricing listed here, and assume an average of 4 characters per token.
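
The per-character to per-token conversion is simple arithmetic; a small sketch is below, with an illustrative per-character price.

```python
# Convert a per-character price into an approximate per-token price,
# assuming an average of 4 characters per token (the assumption stated above).
CHARS_PER_TOKEN = 4

def per_million_tokens(price_per_1k_chars: float) -> float:
    """$ per 1K characters -> approximate $ per 1M tokens."""
    price_per_char = price_per_1k_chars / 1_000
    return price_per_char * CHARS_PER_TOKEN * 1_000_000

# Illustrative: $0.000125 per 1K characters works out to about $0.50 per 1M tokens.
print(per_million_tokens(0.000125))  # 0.5
```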

Claude 2

Almost all LegalBench tasks are multiple-choice and expect a single word or choice as the output. Claude 2 had extreme difficulty producing outputs in this format. Even when explicit instructions were provided, such as “Don't include an explanation for your choice” or “Answer in a single word only”, the model reliably produced a paragraph-length output with some explanation.

To give the model a chance, we wrote a custom parser for Claude 2. We asked it to produce output in a JSON format with ‘explanation’ and ‘answer’ keys, then extracted the ‘answer’ key. We did not do this for any subsequent Claude models; Claude 3 and 3.5 were evaluated normally.
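
The exact parser is not reproduced here, but a minimal sketch of the approach described above might look like this (the prompt framing, regex, and example output are assumptions):

```python
import json
import re

def extract_answer(raw_output: str) -> str | None:
    """Pull the 'answer' value out of a verbose response that wraps the requested JSON."""
    match = re.search(r"\{.*\}", raw_output, flags=re.DOTALL)  # first '{' to last '}'
    if not match:
        return None
    try:
        parsed = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    answer = parsed.get("answer")
    return str(answer).strip() if answer is not None else None

# Example of the kind of verbose Claude 2 output this is meant to handle.
raw = 'Sure! Here is my analysis:\n{"explanation": "The clause limits liability.", "answer": "Yes"}'
print(extract_answer(raw))  # -> Yes
```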

When asked for a single-word response, the Claude 3 models still struggled to consistently follow these directions. These problems were solved once we provided a system prompt with these expectations.
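
A minimal sketch of that fix with the Anthropic Python SDK follows; the model ID, system prompt wording, and example question are illustrative, and this is not the benchmark harness itself.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-opus-20240229",  # illustrative model ID
    max_tokens=5,
    # The system prompt pins the expected output format up front.
    system=(
        "You are answering multiple-choice legal benchmark questions. "
        "Respond with a single word only. Do not explain your choice."
    ),
    messages=[
        {
            "role": "user",
            "content": (
                "Answer Yes or No: does the clause 'The Receiving Party shall not "
                "disclose Confidential Information' impose a confidentiality obligation?"
            ),
        }
    ],
)
print(response.content[0].text)
```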
