Scope: Who Is Caught?
The EU AI Act (Regulation 2024/1689) applies to any provider that places an AI system on the EU market or puts it into service within the EU — irrespective of where that provider is established. The extraterritorial reach is explicit: a US-incorporated company whose software-as-a-service product is used by European businesses or consumers is within scope.
Risk Classification Tiers
The regulation establishes a tiered risk hierarchy that determines the compliance obligations applicable to each system:
- Unacceptable risk — prohibited outright (e.g. social scoring, and real-time remote biometric identification in publicly accessible spaces, subject to narrow law-enforcement exceptions)
- High risk — subject to mandatory conformity assessment, technical documentation, and post-market monitoring
- Limited risk — subject to transparency obligations only (e.g. chatbots must disclose their AI nature)
- Minimal risk — no mandatory obligations; voluntary codes of practice apply
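The tiered hierarchy above can be sketched as a simple lookup. This is a hypothetical illustration of how a compliance team might encode the tiers internally; the tier names and obligation labels paraphrase the list above and are not drawn from the Act's text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical mapping of tiers to headline obligations, paraphrasing
# the list above; not a substitute for legal analysis of the Act.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "conformity assessment",
        "technical documentation",
        "post-market monitoring",
    ],
    RiskTier.LIMITED: ["transparency disclosures"],
    RiskTier.MINIMAL: [],  # voluntary codes of practice only
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the headline obligations attached to a given risk tier."""
    return OBLIGATIONS[tier]
```

A lookup like this can anchor an internal triage workflow, with the caveat that tier assignment itself requires case-by-case legal analysis.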
Many common enterprise AI applications — those used in hiring, credit scoring, critical infrastructure management, and certain customer-service contexts — fall into the high-risk category.
The Authorised Representative Obligation
Non-EU providers of high-risk AI systems must appoint an EU-based authorised representative by written mandate. The representative serves as the formal point of contact for EU market surveillance authorities and verifies that the provider's obligations under the Act have been fulfilled.
The representative must be able to:
- Cooperate with national competent authorities on request
- Provide access to technical documentation within prescribed timelines (as short as 15 working days)
- Forward complaints and incidents from EU users to the provider
- Maintain an up-to-date register of high-risk AI systems placed on the EU market
This obligation mirrors the structure established under the GDPR for non-EU data controllers and the EU MDR for medical device manufacturers. Companies that have already established Article 27 GDPR representatives will find the structural template familiar, but the AI Act representative must have deeper technical engagement with the system's conformity assessment process.
Documentation and Conformity Assessment
High-risk AI systems must be accompanied by technical documentation that is maintained and updated throughout the system's lifecycle. For certain high-risk categories, third-party conformity assessment by a notified body is mandatory rather than self-certification.
Required Technical Documentation
- System architecture and component description
- Training data governance and data quality measures
- Performance metrics disaggregated across relevant demographic groups
- Cybersecurity measures and known vulnerabilities
- Post-market monitoring plan and incident reporting procedures
- Human oversight mechanisms and override capabilities
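Because documentation gaps tend to surface late, some teams track dossier completeness programmatically. The sketch below is a hypothetical checklist tracker; the section keys are illustrative paraphrases of the list above, not field names mandated by the Act.

```python
# Hypothetical checklist for the technical-documentation items listed
# above; section names are illustrative, not prescribed by the Act.
REQUIRED_SECTIONS = [
    "system_architecture",
    "training_data_governance",
    "disaggregated_performance_metrics",
    "cybersecurity_measures",
    "post_market_monitoring_plan",
    "human_oversight_mechanisms",
]

def missing_sections(dossier: dict[str, str]) -> list[str]:
    """Return the required sections that are absent or empty in a dossier."""
    return [s for s in REQUIRED_SECTIONS if not dossier.get(s)]
```

Running this against a draft dossier before engaging a notified body gives an early signal of where the conformity assessment is likely to stall.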
Greece has designated EETT (the Hellenic Telecommunications and Post Commission) and ELOT (the Hellenic Organisation for Standardisation) as national competent bodies with roles in AI Act oversight, providing an accessible entry point for companies establishing EU compliance infrastructure through a Greek entity.
Practical Steps for Technology Companies
Companies should prioritise four actions in the near term to build a defensible compliance position before the August 2026 enforcement date.
- Conduct an AI Act inventory — catalogue all AI systems deployed to or accessible by EU users and classify each against the risk hierarchy. This scoping exercise defines the compliance perimeter.
- Appoint an authorised representative in an EU member state with the legal mandate, technical access, and regulatory relationships required to discharge the obligation credibly — not merely a postal address.
- Initiate technical documentation programmes for high-risk systems immediately, as documentation gaps are the most common reason conformity assessments stall.
- Build post-market monitoring mechanisms into system architecture — ongoing incident reporting and performance tracking are continuing obligations, not one-time compliance activities.
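The inventory exercise in the first step can be modelled as a simple record per system, which then feeds the later steps. This is a minimal sketch under assumed field names — nothing here is a schema required by the Act.

```python
from dataclasses import dataclass, field

# Hypothetical inventory record for the AI Act scoping exercise;
# field names are illustrative, not mandated by the regulation.
@dataclass
class AISystemRecord:
    name: str
    eu_users: bool                  # deployed to or accessible by EU users?
    risk_tier: str                  # "unacceptable" | "high" | "limited" | "minimal"
    docs_complete: bool = False     # technical documentation programme finished?
    incidents: list[str] = field(default_factory=list)  # post-market reports

def high_risk_gaps(inventory: list[AISystemRecord]) -> list[str]:
    """Names of in-scope high-risk systems still missing documentation."""
    return [
        r.name
        for r in inventory
        if r.eu_users and r.risk_tier == "high" and not r.docs_complete
    ]
```

Keeping records like these current turns the one-off inventory into the ongoing register the authorised representative is expected to maintain.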
Greece offers a practical base for EU authorised representative functions: English-language proficiency among legal and regulatory professionals is high, time-zone alignment with both European and Middle Eastern markets is advantageous, and operating costs are meaningfully lower than in Western European regulatory hubs.
Sources
- Regulation (EU) 2024/1689 — Artificial Intelligence Act — Official Journal of the European Union (2024)
- AI Act Implementation Timeline and Transitional Provisions — Guidance Note — European AI Office (2025)
- Authorised Representatives Under EU Product Regulation: Comparative Analysis — Fieldfisher LLP (2025)
- EETT Designation as National Competent Authority for AI Act Oversight — Hellenic Telecommunications and Post Commission (2025)