Implementation of the EU AI Act: Spain’s and Ireland’s regulatory approach

Flags of the European Union (EU).

The European Union’s Artificial Intelligence Act (AI Act), a landmark regulatory framework for governing AI technologies, is entering its implementation phase across member states. Spain and Ireland have emerged as early adopters, unveiling distinct yet structurally comparable approaches to operationalizing the legislation. While both countries prioritize alignment with EU-wide standards, their national strategies reflect nuanced adaptations to balance centralized oversight, sectoral expertise, and innovation facilitation.

AI Act implementation in Spain: A hybrid governance model

Spain has positioned itself as a frontrunner in AI Act implementation through legislation that combines centralized coordination with decentralized, sector-specific enforcement. The Spanish framework establishes the Spanish Agency for the Supervision of Artificial Intelligence (AESIA) as the primary regulatory authority. This agency will function as the national Single Point of Contact for EU coordination, chair a cross-sectoral Joint Committee for Coordination, and oversee compliance with broad AI governance principles.

To complement this centralized structure, Spain has designated specialized authorities to regulate AI systems within their domains. The Spanish Data Protection Agency (AEPD) will oversee AI applications involving personal data, while the Bank of Spain will govern financial sector AI tools, such as algorithmic trading systems or credit risk models. Additional sectoral regulators, including those in healthcare and telecommunications, are expected to assume oversight roles for AI systems within their jurisdictions. This bifurcated model aims to leverage institutional expertise while maintaining consistency through AESIA’s coordinating function.

Notably, Spain’s legislation extends beyond baseline EU requirements in several areas:

  • Biometric surveillance: Mandates judicial authorization for real-time biometric identification systems in public spaces, introducing an additional layer of accountability.
  • Right to disconnect: Grants individuals the right to opt out of interactions with AI systems deemed harmful or non-compliant.
  • Enforcement mechanisms: Implements a graduated penalty system, with fines scaling according to violation severity, and establishes anonymous reporting channels for whistleblowers.
  • Phased implementation: Prohibited AI systems (e.g., social scoring) will face bans starting in August 2025, with high-risk system regulations rolling out incrementally through 2027.

AI Act implementation in Ireland: A coordinated regulatory approach

Ireland’s recently announced strategy shares structural similarities with Spain’s model, even though it was initially framed as a decentralized system. The Irish government has since clarified that its framework will feature a central “super regulator” to harmonize oversight across sectors. This authority will collaborate with existing regulators, including the Central Bank of Ireland (financial AI systems), the Data Protection Commission (privacy-related AI), and sector-specific bodies in healthcare, education, and transportation.

The Irish model emphasizes interagency collaboration, with the central regulator tasked with resolving jurisdictional overlaps, developing standardized risk assessment protocols, and ensuring alignment with EU guidelines. Like Spain, Ireland’s approach avoids creating entirely new regulatory infrastructures, instead building on established institutions’ domain-specific knowledge.

Comparative observations

Structural parallels:

Both countries adopt a two-tier governance architecture:

  • A central authority ensures EU alignment and cross-sector coordination.
  • Sectoral regulators address technical and domain-specific challenges (e.g., financial stability, data privacy).

This reflects a pragmatic recognition of AI’s cross-cutting impact while mitigating regulatory fragmentation.

Sectoral alignment:

Financial regulators—the Bank of Spain and Central Bank of Ireland—are positioned to play analogous roles in overseeing AI-driven financial technologies, underscoring the prioritization of economic stability. Similarly, data protection authorities in both jurisdictions will handle AI systems involving personal data processing.

Divergences in scope:

Spain’s framework introduces judicial oversight for biometric systems and user empowerment mechanisms absent from Ireland’s current proposal. Conversely, Ireland’s model emphasizes procedural harmonization across sectors, potentially reducing compliance complexity for multinational enterprises.

Enforcement strategies:

Spain has defined a detailed sanction regime with fines up to €50 million for severe violations, whereas Ireland’s penalty structure remains under development. Both jurisdictions, however, prioritize incentivizing proactive compliance over punitive measures.

Broader implications for the European Union’s AI governance

The Spanish and Irish models exemplify a growing consensus among EU member states toward hybrid regulatory frameworks. By integrating centralized oversight with sectoral delegation, these strategies aim to address several challenges:

  • Technical complexity: Domain-specific regulators are better equipped to evaluate AI risks in specialized contexts (e.g., healthcare diagnostics vs. financial fraud detection).
  • Regulatory agility: Central bodies can streamline updates to governance protocols as AI technologies evolve.
  • Innovation balance: Graduated enforcement and phased implementation seek to mitigate compliance burdens for startups and SMEs.

Critics, however, note potential challenges, including jurisdictional conflicts between central and sectoral authorities, inconsistent enforcement across industries, and varying compliance costs for multinational organizations operating in multiple EU markets.

Summary

Spain’s and Ireland’s implementation strategies highlight the flexibility permitted under the EU AI Act while underscoring shared priorities: harmonization with EU standards, risk-based oversight, and support for ethical innovation. Their approaches suggest a convergence toward hybrid governance models that blend centralized coordination with decentralized execution—a structure likely to influence other member states as the AI Act’s implementation progresses.

As these frameworks mature, ongoing monitoring will be critical to assess their effectiveness in addressing emerging AI risks, such as generative AI systems and their impact on society, while maintaining the EU’s competitiveness in the global tech market. The success of these models may ultimately depend on their ability to adapt to technological advancements without compromising regulatory clarity or stakeholder trust.



Insights published by Media Scope Group are only a small taste of what we do. If you need comprehensive reports, forecasts and monitoring of global affairs, you can contact us and we will come back to you shortly.
