Public Enterprise LLM Benchmarks

02/05/26 | Model: Claude Opus 4.6 is the new SOTA
02/11/26 | Benchmark: FAB v1.1 Released!

Best Performing Models

The top-performing models on the Vals Index, which spans a range of tasks across finance, coding, and law.

Vals Index (as of 02/10/26)

Rank | Model | Provider | Vals Index Score
1 | Claude Opus 4.6 (Thinking) | Anthropic | 65.98%
2 | GPT 5.2 | OpenAI | 63.71%
3 | Gemini 3 Pro (11/25) | Google | 61.47%
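
This page does not state how the Vals Index is computed. As a purely illustrative sketch, assuming the index were an unweighted mean of per-domain accuracies (the real weighting may well differ), the aggregation could look like the following Python; vals_index and the per-domain numbers are hypothetical, not real Vals data:

def vals_index(domain_scores: dict[str, float]) -> float:
    """Aggregate per-domain accuracies (in percent) into one index score.

    Illustrative only: assumes an unweighted mean across domains; the
    actual Vals Index methodology is not documented on this page.
    """
    return sum(domain_scores.values()) / len(domain_scores)

# Hypothetical per-domain numbers, not real Vals data:
print(vals_index({"finance": 70.0, "coding": 64.0, "law": 61.0}))  # 65.0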

Best Open Weight Models

The top-performing open-weight models on the Vals Index, which spans a range of tasks across finance, coding, and law.

Vals Index (as of 02/10/26)

Rank | Model | Provider | Vals Index Score
1 | Kimi K2.5 | Moonshot AI | 59.74%
2 | GLM 4.7 | zAI | 54.28%
3 | MiniMax-M2.1 | MiniMax | 51.50%

Pareto Efficient Models

The top-performing models from the Vals Index that are also cost-efficient: no other model delivers higher accuracy at a lower cost per test.

Vals Index (as of 02/10/26), plotted as accuracy (y-axis) against cost per test (x-axis):

Rank | Model | Provider | Accuracy | Cost per test
1 | Claude Opus 4.6 (Thinking) | Anthropic | 65.98% | $1.00
2 | GPT 5.2 | OpenAI | 63.71% | $0.78
3 | Gemini 3 Pro (11/25) | Google | 61.47% | $0.34
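
Pareto efficiency here means that no other model is both cheaper per test and more accurate. A minimal Python sketch of that filter, seeded with the three entries above; pareto_frontier is an illustrative helper, not Vals' implementation:

def pareto_frontier(models):
    """models: list of (name, cost_per_test, accuracy). Returns the
    Pareto-efficient entries: those not beaten on both cost and
    accuracy by any other model."""
    frontier = []
    for name, cost, acc in models:
        dominated = any(
            o_cost <= cost and o_acc >= acc and (o_cost, o_acc) != (cost, acc)
            for _, o_cost, o_acc in models
        )
        if not dominated:
            frontier.append((name, cost, acc))
    return sorted(frontier, key=lambda m: m[1])  # cheapest first

models = [
    ("Claude Opus 4.6 (Thinking)", 1.00, 65.98),
    ("GPT 5.2", 0.78, 63.71),
    ("Gemini 3 Pro (11/25)", 0.34, 61.47),
]
for name, cost, acc in pareto_frontier(models):
    print(f"{name}: {acc:.2f}% at ${cost:.2f}/test")

All three entries survive the filter, consistent with the table above: each step up in accuracy comes only at a higher cost per test.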

Industry Leaderboard

Interactive leaderboard of model results, filterable by industry.

Updates

02/11/26 | Benchmark: FAB v1.1 Released!


Join our mailing list to receive benchmark updates

Model benchmarks are seriously lacking. At Vals AI, we report how language models perform on the industry-specific tasks where they will actually be used.
