
Vals Index

10/09/2025

A benchmark of weighted performance across finance, law, and coding tasks, showing the potential impact that LLMs can have on the economy.

Motivation

As AI capabilities rapidly advance, understanding their potential to transform economic sectors has become increasingly critical for organizations making deployment decisions. Unlike existing aggregated metrics that treat all capabilities equally, the Vals Index is designed to reflect the potential economic impact of AI models on the U.S. economy. We accomplish this by computing a weighted average of model performance across key sectors, where the weights correspond to each sector’s contribution to the U.S. economy in trillions of dollars.

Vals AI has developed a comprehensive suite of benchmarks measuring AI models’ ability to perform real-world tasks across finance, law, and software engineering. These benchmarks were designed to evaluate practical performance on actual professional workflows, making them well-suited for assessing economic impact. The Vals Index leverages this existing work to provide a high-signal measure that accounts for the real-world tradeoffs between capability, latency, and cost that practitioners face when deploying AI systems.

Results

Industry Average Accuracy Comparison

Key Takeaways

AI models are advancing rapidly in their ability to handle complex, real-world tasks across critical economic sectors. The results demonstrate that frontier models are becoming increasingly capable at automating work in finance, law, and software engineering—domains that collectively represent a substantial portion of economic activity. While current performance levels suggest transformative potential for many industries, there remains significant room for continued improvement across all evaluated dimensions.

[Leaderboard: Vals Index scores by provider — AI21 Labs, Alibaba, Anthropic, Cohere, DeepSeek, Google, Kimi, Meta, Mistral, OpenAI, xAI, zAI]

Methodology

Benchmark Selection, Economic Weighting, and Formula

The Vals Index aggregates performance across three major economic sectors, weighted by their approximate contribution to U.S. GDP. Market-size estimates were derived from data published by the Federal Reserve Economic Data (FRED) service and the Bureau of Labor Statistics. While this is a vast oversimplification of how AI might impact the economy, it provides a useful proxy for the potential economic significance of model capabilities:

Finance (weight: 2.0): ~$2T contribution to U.S. GDP

  • CorpFin: Corporate finance document analysis
  • Finance Agent: Agentic financial research tasks

Law (weight: 0.3): ~$360B contribution to U.S. GDP

  • CaseLaw: Legal document analysis and reasoning

Coding (weight: 1.4): ~$1.4T contribution to U.S. GDP

  • SWE-Bench: Resolving real-world software engineering issues
  • TBench: Agentic coding tasks in a terminal environment

These weights combine in the following formula:

Vals_Index = (2.0 * AVG(CorpFin, FinanceAgent) + 0.3 * CaseLaw + 1.4 * AVG(SWE_Bench, TBench)) / 3.7

The denominator (3.7 = 2.0 + 0.3 + 1.4) is the sum of the sector weights, so the index is a weighted average on a 0-100 scale, with each sector's influence proportional to its economic contribution.

Subset Selection Process

To enable efficient and cost-effective evaluation while maintaining strong correlation with full benchmark performance, we developed representative subsets for three benchmarks:

Selection Methodology: To balance evaluation efficiency with accuracy, we created representative subsets for select benchmarks using a sampling process that maximizes correlation with full benchmark scores. We validated this approach using holdout models to ensure that subset performance reliably predicts full benchmark results.

Benchmark-Specific Subsets:

  • SWE-Bench: 33 randomly sampled instances from each of the first three difficulty levels (categorized by solution time: <15 min, 15 min-1 hr, 1-4 hr), plus all 3 instances from the hardest (>4 hr) category
  • CorpFin: 2 randomly selected questions per unique document from the original test set
  • Finance Agent: 10 questions per task category (9 categories total, 90 questions)
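The selection process described above is not fully specified here; one plausible implementation is a random search over candidate subsets, keeping the subset whose per-model scores correlate best with full-benchmark scores across a panel of models. The function names and data shapes below are illustrative assumptions, not the actual Vals pipeline:

```python
import random
import statistics

def corr(xs, ys):
    """Pearson correlation (hand-rolled; statistics.correlation needs 3.10+)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def pick_subset(per_item_scores, full_scores, k, trials=2000, seed=0):
    """per_item_scores: {model: [0/1 per instance]};
    full_scores: {model: full-benchmark score}.
    Sample many size-k subsets; keep the one whose subset means best
    correlate with full-benchmark scores across the model panel."""
    rng = random.Random(seed)
    models = list(per_item_scores)
    n = len(per_item_scores[models[0]])
    full = [full_scores[m] for m in models]
    best_idx, best_r = None, -1.0
    for _ in range(trials):
        idx = rng.sample(range(n), k)
        sub = [statistics.mean(per_item_scores[m][i] for i in idx) for m in models]
        r = corr(sub, full)
        if r > best_r:
            best_idx, best_r = idx, r
    return best_idx, best_r
```

The holdout validation mentioned above would then check that, for models excluded from the search, the chosen subset's scores still track full-benchmark scores.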

Full Benchmarks: The remaining benchmarks, CaseLaw and TBench, are evaluated on their complete test sets.

This methodology ensures the Vals Index provides a rapid, cost-effective evaluation framework while maintaining the predictive validity needed for reliable model comparison.
