Llama 4 Maverick
Release Date: 4/5/2025
Llama 4 Maverick is a state-of-the-art 128-expert mixture-of-experts (MoE) model built for multilingual image and text understanding.
Accuracy (Average)
49.38%
Latency (Average)
90.07s
Avg. Cost (In/Out, USD per 1M tokens)
$0.22 / $0.88
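The In/Out rates above can be turned into a per-request cost estimate. A minimal sketch, assuming the figures are USD per 1M tokens (verify against the provider's pricing page):

```python
# Assumption: rates are USD per 1M tokens, taken from the table above.
IN_RATE = 0.22   # USD per 1M input tokens
OUT_RATE = 0.88  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single request."""
    return (input_tokens / 1_000_000) * IN_RATE + (output_tokens / 1_000_000) * OUT_RATE

# Example: a request with 10k input tokens and 2k output tokens.
print(f"${estimate_cost(10_000, 2_000):.4f}")
```

Output tokens dominate the bill at these rates, since they cost four times as much as input tokens.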
Context Window
1M
Max Output Tokens
16k
Input Modality
Text, Image
Hyperparameter settings
Default Provider:
Meta
Some benchmarks may use a different provider or different parameters; see the individual benchmark pages for details.
Temperature
Default
Top P
Default
Top K
Default
Max Output Tokens
16,384
Benchmarks
Accuracy
Rankings
Academic Benchmarks
Proprietary Benchmarks (contact us to get access)