The AI Office: Structure and Investigative Powers
Deep dive into how the European Commission's new body is tracking systemic risk.
As we navigate the first quarter of 2026, the European Union Artificial Intelligence Act (AI Act) is no longer a looming legislative threat; it is an operational reality reshaping the global technology ecosystem. Today, March 5, 2026, we stand at a critical juncture. The transition period for General Purpose AI (GPAI) ended in mid-2025, and the monumental 24-month compliance deadline for "High-Risk AI Systems" arrives in August, now only months away.
What began as a regional regulatory framework has triggered the much-anticipated "Brussels Effect." American tech conglomerates, Asian hardware manufacturers, and international software vendors are fundamentally restructuring their machine learning operations, not just within the EU but globally, to avoid regulatory fragmentation and draconian fines.
Crucially, the EU AI Act operates on an extraterritorial basis. If an American or Asian company develops an AI system and places it on the EU market, puts it into service in the EU, or if the output of that system is used within the EU, the provider is fully subject to the Act's compliance rules and penalties.
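In practice, the scope test is a simple disjunction of three triggers. The sketch below is a hypothetical simplification for illustration only; the class and function names are invented, and the real jurisdictional analysis under the Act is considerably more nuanced.

```python
from dataclasses import dataclass

@dataclass
class AISystemFacts:
    """Hypothetical facts about a provider's system (illustrative only)."""
    placed_on_eu_market: bool      # offered or sold in the EU
    put_into_service_in_eu: bool   # deployed for first use in the EU
    output_used_in_eu: bool        # the system's output is used within the EU

def in_scope_of_ai_act(facts: AISystemFacts) -> bool:
    """Simplified sketch of the extraterritorial triggers: any one of the
    three conditions brings the provider into scope, regardless of where
    the company itself is established."""
    return (facts.placed_on_eu_market
            or facts.put_into_service_in_eu
            or facts.output_used_in_eu)

# A US-based provider whose model's output is consumed by EU users:
print(in_scope_of_ai_act(AISystemFacts(False, False, True)))  # True
```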
Since the six-month grace period expired in early 2025, several categories of system have been strictly prohibited. These include AI systems deploying subliminal techniques to distort behavior, biometric categorization systems based on sensitive traits (e.g., race, political opinions), untargeted scraping of facial images from the internet or CCTV, and emotion recognition in workplaces and educational institutions.
Fines operate on a sliding scale based on the severity of the violation. Engaging in prohibited AI practices can result in fines of up to €35 million or 7% of a company's total worldwide annual turnover, whichever is higher. Violations of High-Risk system obligations carry fines of up to €15 million or 3%, while supplying incorrect information to authorities is capped at €7.5 million or 1%.
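Because each tier applies whichever is higher of a fixed cap and a share of worldwide annual turnover, the maximum exposure is straightforward to compute. The minimal sketch below uses only the tiers quoted above; the tier labels and function name are invented for illustration, and this is not a legal calculator.

```python
# Fine tiers: (fixed cap in EUR, share of total worldwide annual turnover).
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),   # prohibited AI practices
    "high_risk_obligation": (15_000_000, 0.03),  # High-Risk system obligations
    "incorrect_information": (7_500_000, 0.01),  # misleading info to authorities
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Return the ceiling for a violation: whichever is higher of the
    fixed cap and the turnover-based percentage."""
    cap, pct = FINE_TIERS[violation]
    return max(cap, pct * annual_turnover_eur)

# A firm with EUR 2 billion turnover engaging in a prohibited practice:
# 7% of 2e9 = EUR 140M, which exceeds the EUR 35M fixed cap.
print(f"{max_fine('prohibited_practice', 2_000_000_000):,.0f}")  # 140,000,000
```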
Indeed, the 12-month transition period for GPAI and foundation models concluded in mid-2025. Providers of GPAI models must now maintain up-to-date technical documentation, comply with EU copyright law, and publish sufficiently detailed summaries of the content used for training. Models designated as posing "systemic risk" face even stricter auditing and red-teaming requirements enforced by the European AI Office.
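How a provider tracks those training-content disclosures internally is up to them. The following is a hypothetical, simplified sketch of such a record, with invented field names; it is not any official template.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingSourceRecord:
    """One training-data source (hypothetical, illustrative fields only)."""
    source_name: str          # e.g., a licensed corpus or a crawled domain set
    modality: str             # "text", "image", "audio", ...
    licensed: bool            # obtained under a licence rather than crawled
    opt_out_respected: bool   # machine-readable copyright opt-outs honoured

@dataclass
class TrainingContentSummary:
    """Internal ledger behind a public training-content summary (hypothetical)."""
    model_name: str
    sources: list = field(default_factory=list)

    def flagged_sources(self) -> list:
        """Sources needing review before publication: unlicensed material
        where copyright opt-outs were not honoured."""
        return [s.source_name for s in self.sources
                if not s.licensed and not s.opt_out_respected]

ledger = TrainingContentSummary("example-model-v1")
ledger.sources.append(TrainingSourceRecord("web_crawl_2025", "text", False, True))
ledger.sources.append(TrainingSourceRecord("scraped_forum_dump", "text", False, False))
print(ledger.flagged_sources())  # ['scraped_forum_dump']
```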
Since its inception, the European AI Office has grown from a bureaucratic concept into a formidable enforcement mechanism. Operating within the European Commission, the Office is currently fully staffed with technologists, legal experts, and AI ethicists. As of March 2026, the Office has shifted from drafting secondary legislation and harmonized standards to active investigation and auditing.
At the national level, Member States have designated their national competent authorities (NCAs). We are seeing a divergence in enforcement styles; for example, Germany's data protection authorities and France's CNIL are taking aggressive stances, blending GDPR enforcement with the AI Act's data governance requirements. This dual-threat regulatory environment is forcing enterprise compliance officers to merge their privacy and AI risk management silos.
The "Brussels Effect"—the phenomenon where EU regulations become the de facto global standard due to market size—is vividly apparent in 2026. Global tech giants have realized that developing parallel AI systems (one compliant with the EU, one for the rest of the world) is economically and technically unfeasible.
Instead, we are witnessing a global harmonization of AI products. Top-tier LLM providers in the United States have adjusted their core model training practices, particularly regarding transparency and copyright data logging, to ensure global market access. Similarly, software developers in Japan, South Korea, and the UK are voluntarily adopting the EU's "High-Risk" conformity assessments as a mark of quality and safety for international clients, even outside the European bloc.
While the bans on unacceptable risk AI and the GPAI transparency mandates are already in full force, the tech industry is currently bracing for the expiration of the 24-month transition period for High-Risk AI Systems. Systems falling under Annex III—which include AI used in critical infrastructure, education, employment (e.g., CV sorting algorithms), essential private and public services (credit scoring, healthcare), and law enforcement—will face intense scrutiny.
By mid-2026, providers of these systems must have established a comprehensive Quality Management System (QMS), completed conformity assessments, implemented human oversight interfaces, and registered their systems in the EU database. The backlog for third-party conformity assessment bodies (Notified Bodies) is currently causing bottlenecks in the market, making March 2026 a highly stressful period for AI product managers worldwide.
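For teams tracking readiness against the mid-2026 deadline, the gating logic reduces to a conjunction of the four obligations above. Below is a minimal, hypothetical sketch; the field names are invented and do not come from any official checklist.

```python
from dataclasses import dataclass

@dataclass
class HighRiskReadiness:
    """Hypothetical pre-market checklist for an Annex III provider."""
    qms_established: bool        # quality management system in place
    conformity_assessed: bool    # conformity assessment completed
    human_oversight_built: bool  # human oversight interfaces implemented
    registered_in_eu_db: bool    # system registered in the EU database

    def blockers(self) -> list:
        """Obligations still outstanding before the system may be placed
        on the EU market."""
        return [name for name, done in vars(self).items() if not done]

status = HighRiskReadiness(True, True, True, False)
print(status.blockers())  # ['registered_in_eu_db']
```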
The enforcement of rules surrounding General Purpose AI has sparked some of the most intense legal battles of early 2026. The requirement to publish a "sufficiently detailed summary" of training data has armed copyright holders with the evidence needed to pursue licensing fees and infringement lawsuits.
Furthermore, while the EU AI Act provides certain exemptions for open-source AI models, the "systemic risk" threshold remains a point of contention. Major open-source AI platforms are currently lobbying the AI Office for clearer guidance, because models trained with more than 10^25 FLOPs (floating-point operations) of cumulative compute are presumed to pose systemic risk, effectively stripping away open-source leniency and triggering mandatory red-teaming and stringent cybersecurity obligations.
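Providers trying to anticipate classification often reach for a back-of-the-envelope estimate: the scaling-laws literature commonly approximates dense-transformer training compute as roughly 6 × parameters × training tokens. The sketch below applies that heuristic against the 10^25 FLOPs presumption; the approximation is an assumption of this illustration, not a method prescribed by the Act.

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training FLOPs presumed "systemic risk"

def estimate_training_flops(params: float, tokens: float) -> float:
    """Rough dense-transformer estimate: ~6 FLOPs per parameter per training
    token (a common heuristic from the scaling-laws literature, not from the Act)."""
    return 6.0 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True if the estimated cumulative training compute crosses the threshold."""
    return estimate_training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD

# A 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, below the 1e25 presumption.
print(presumed_systemic_risk(7e10, 1.5e13))  # False
```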
As we look past March 2026, the global AI landscape will be defined by enforcement precedents. The first major fine levied under the AI Act will send a shockwave through the industry, establishing exactly how aggressively the EU intends to police international tech firms.
Concurrently, other jurisdictions are accelerating their own frameworks to remain competitive. The US is relying heavily on sector-specific agency guidelines and executive orders, while the UK continues its "pro-innovation" agile regulatory approach. However, for multinational corporations, the European Union's prescriptive, risk-based approach remains the highest bar to clear, making it the definitive baseline for global AI compliance through the end of the decade.
The EU AI Act was formally adopted by the European Parliament in March 2024 and entered into force on August 1, 2024. Its rules were phased in gradually: 6 months for prohibited practices, 12 months for GPAI obligations, and 24 months for High-Risk systems.
Prohibited systems include social scoring by governments, real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions), emotion recognition in workplaces, and AI designed to manipulate human behavior or exploit vulnerabilities.
GPAI refers to AI models, including large generative AI models and foundation models, capable of competently performing a wide range of distinct tasks, regardless of the way they are placed on the market. They face transparency and copyright compliance rules.
Under the Act, a GPAI model poses systemic risk if it has high-impact capabilities, presumed when the cumulative compute used for training exceeds the current threshold of 10^25 FLOPs. These models require rigorous auditing and risk mitigation.
Open-source models do benefit from carve-outs, but with caveats. Free and open-source AI models are exempt from many of the transparency requirements unless they are classified as High-Risk or fall under the criteria for GPAI with systemic risk.
The rules are enforced primarily by National Competent Authorities (NCAs) in each EU member state, coordinated and overseen by the newly established European AI Office at the EU Commission level.