European Union excluded from Llama 4 multimodal models
Meta's Llama 4 is groundbreaking and open source, but not for the EU: companies and residents there are excluded from accessing the models.
Published on April 8, 2025

Meta's Llama 4 models are another step forward in AI innovation, with impressive multimodal capabilities surpassing competing models such as GPT-4.5 and Claude 3.7 Sonnet. Meta's first mixture-of-experts models, inspired by DeepSeek, are offered as open source. Can anyone download and install them? No: EU companies and residents are excluded.
Llama 4: A new generation of AI models
Meta's Llama 4 models, including the Scout and Maverick variants, represent a significant leap forward in the evolution of artificial intelligence. These models are built on a multimodal architecture capable of simultaneously processing text and visual input within a single interaction.
The Scout version, with 17 billion active parameters and 16 experts, delivers impressive performance in long-context text processing and visual comprehension. Moreover, it can run efficiently on a single NVIDIA H100 GPU, making it very accessible.
The Maverick version is more advanced, with 128 experts, enabling it to surpass the performance of leading models such as GPT-4o at a competitive cost. Both models are among the most capable open-source AI models currently available.
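To give a sense of what "accessible" means in practice, the sketch below shows how such a checkpoint is typically pulled and run with the Hugging Face transformers library. The repository name, the use of the plain text-generation pipeline for this multimodal checkpoint, and the hardware assumptions are all illustrative, not confirmed details; downloading the weights requires accepting Meta's Llama 4 Community License, which by its own terms is not granted to EU companies or residents.

```python
# Illustrative sketch only: loading a Llama 4 checkpoint with Hugging Face transformers.
# The repository ID is assumed, and access to the weights is gated behind Meta's
# Llama 4 Community License, which excludes EU-based companies and residents.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",  # assumed repository name
    torch_dtype=torch.bfloat16,  # half-precision weights, to fit on a single H100-class GPU
    device_map="auto",           # let the library place the weights on available devices
)

print(generator("Explain mixture-of-experts in one sentence.", max_new_tokens=64)[0]["generated_text"])
```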

Outside the European market: restrictions in the license conditions
Despite the technological breakthroughs the Llama 4 models represent, European companies and individuals are denied a license to install these models or fine-tune them. This limitation follows from the Llama 4 Community License Agreement, which states that the granted rights do not apply to individuals residing in the EU or to companies headquartered there. The restriction appears to be motivated by the transparency and compliance requirements of the EU's AI Act, which came into force in August 2024 and introduces strict rules for AI systems on the European market. End users in the EU can, however, still use services in which Llama 4 models are integrated, provided those services originate outside the Union.
Innovation through Mixture-of-Experts architecture
The mixture-of-experts (MoE) architecture is a core element of Llama 4's innovation. It uses a collection of experts, only a portion of which are activated depending on the input, leading to more efficient use of resources and potentially greater scalability. This design redefines the balance between cost and performance by using only a subset of the parameters for each operation, which reduces the compute per token and significantly improves response time and inference efficiency. In the case of Llama 4 Maverick, this translates into significant cost savings compared to other top models such as GPT-4o and Gemini 2.0 Flash.
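As an illustration of the principle (not Meta's actual implementation), the sketch below shows top-k expert routing in a few lines of Python: a gating network scores all experts for each token, but only the highest-scoring ones are evaluated, so the compute per token stays well below the model's total parameter count. All sizes and routing details here are assumptions made for the example.

```python
# Toy sketch of mixture-of-experts routing (illustrative only, not Meta's code).
# A gating network scores every expert per token, but only the top-k experts run.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 16   # e.g. Llama 4 Scout advertises 16 experts
TOP_K = 2          # number of experts activated per token (assumed for this sketch)
D_MODEL = 64       # hidden size, kept tiny for illustration

# Each "expert" is reduced to a small feed-forward weight matrix here.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.02 for _ in range(NUM_EXPERTS)]
gate_w = rng.standard_normal((D_MODEL, NUM_EXPERTS)) * 0.02

def moe_layer(token: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-k experts and mix their outputs."""
    scores = token @ gate_w                           # one gating score per expert
    top = np.argsort(scores)[-TOP_K:]                 # indices of the selected experts
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax over the selection
    # Only the selected experts are evaluated -- this is where the efficiency gain comes from.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D_MODEL)
print(moe_layer(token).shape)  # (64,): same shape as the input, like a dense feed-forward layer
```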
Differences between Scout, Maverick, and Behemoth
The various models in the Llama 4 series each have their own strengths. Llama 4 Scout is optimized for long context, with a window of 10 million tokens, making it well suited to extensive document processing and visual question answering. Maverick, in turn, excels at image understanding and creative writing while keeping the number of active parameters low, making it a cost-effective model for multimodal tasks. Behemoth is still in training, but with its roughly 2 trillion parameters it has the potential to push the boundaries of AI capacity even further and to offer stiff competition to models such as GPT-4.5 and Claude 3.7 Sonnet on benchmarks such as MATH-500 and GPQA Diamond.
Performance comparison with competing models
The Llama 4 models deliver impressive performance compared to some of today's top models. Maverick outperforms GPT-4o on benchmarks such as ChartQA and DocVQA while using less than half the active parameters of DeepSeek v3.1. The Behemoth, in turn, shows encouraging benchmark results that further raise expectations of this new generation of AI, even if it lags just behind DeepSeek R1 on specific benchmarks such as MMLU. The models not only deliver a substantial improvement in performance at efficient deployment cost, but also raise the bar for future developments in the AI field.
