Llama 4 Scout

Release Date: April 5, 2025

Benchmarked by Vals AI

Llama 4 Scout 17B 16E Instruct FP8

Avg. Accuracy: 55.19%
Latency: 10.59s
Cost (In/Out): $0.18 / $0.59 per 1M tokens
Context Window: 10M tokens
Max Output Tokens: 16k

Input Modality: Text, Image
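
Given the listed prices, per-request cost is simple arithmetic over token counts. Below is a minimal sketch of that calculation, assuming the Cost (In/Out) figures above are USD per one million tokens; the helper name and the example token counts are illustrative and not part of the Vals report.

```python
# Illustrative cost estimate for Llama 4 Scout, assuming the listed rates
# ($0.18 input / $0.59 output) are USD per 1M tokens. Names and example
# token counts are hypothetical, not taken from the Vals page.

INPUT_PRICE_PER_M = 0.18   # USD per 1M input tokens (assumed unit)
OUTPUT_PRICE_PER_M = 0.59  # USD per 1M output tokens (assumed unit)

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single request from its token counts."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

if __name__ == "__main__":
    # Example: a 100k-token prompt with a 2k-token completion.
    print(f"${estimate_cost_usd(100_000, 2_000):.4f}")  # -> $0.0192
```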

Benchmarks

| Benchmark | Accuracy | Rank |
|---|---|---|
| CorpFin | 0.0% | 80/80 |
| MortgageTax | 0.0% | 56/56 |
| SAGE | 0.0% | 36/36 |
| TaxEval (v2) | 0.0% | 86/86 |
| AIME | 0.0% | 76/76 |
| GPQA | 0.0% | 77/77 |
| LiveCodeBench | 0.0% | 84/84 |
| LegalBench | 0.0% | 100/100 |
| MATH 500 | 0.0% | 59/59 |
| MedQA | 0.0% | 84/84 |
| MGSM | 0.0% | 73/73 |
| MMLU Pro | 0.0% | 78/78 |
| MMMU | 0.0% | 55/55 |

The results above include both academic benchmarks and proprietary benchmarks (contact us to get access).

Model benchmarks are seriously lacking. With Vals AI, we report how language models perform on the industry-specific tasks where they will be used.