What the Act actually is
The Artificial Intelligence Act – formally Regulation (EU) 2024/1689 – was adopted by the European Parliament on 13 March 2024 and published in the EU's Official Journal on 12 July 2024. It entered into force on 1 August 2024. It is the first horizontal, comprehensive AI law in any major jurisdiction, and its structure is being studied (and in some cases copied) by regulators in the UK, the US, Japan, and Singapore.
The Act is built around a risk-based framework. Every AI system that falls in scope is sorted into one of four buckets, and the obligations scale with the bucket. This is the lens every legal, product, and engineering lead should internalise first – most of the headline-grabbing coverage is really about what each bucket requires.
The four risk tiers
Understanding which tier your product sits in is the single most important determination under the Act. The difference between "limited risk" and "high risk" is the difference between a transparency notice and a full conformity-assessment and quality-management regime. The four tiers, with an illustrative triage sketch after the list:
- Unacceptable risk – prohibited outright (e.g. social scoring by public authorities, real-time remote biometric identification in public spaces with narrow exceptions, certain manipulative or exploitative AI). Prohibitions have applied since 2 February 2025.
- High risk – permitted but heavily regulated. Includes AI used in critical infrastructure, education, employment/HR, essential services, law enforcement, migration/asylum, and administration of justice. Obligations cover data governance, technical documentation, human oversight, accuracy, robustness, and post-market monitoring.
- Limited risk – transparency obligations only. Users must be told they're interacting with AI (chatbots), and synthetic content such as deepfakes must be labelled.
- Minimal risk – no additional obligations. The vast majority of AI systems (spam filters, game AI, inventory optimisers) fall here.
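For teams that want to operationalise this triage, the sorting logic fits in a few lines. The sketch below is purely illustrative – the flags are our own shorthand for questions a classification questionnaire might ask, not statutory tests, and the real determination requires legal analysis against Article 5 and Annex III.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # full conformity regime
    LIMITED = "limited"            # transparency duties only
    MINIMAL = "minimal"            # no additional obligations

def classify(prohibited_practice: bool,
             annex_iii_use_case: bool,
             user_facing_or_synthetic_content: bool) -> RiskTier:
    # Hypothetical triage flags, checked in order of severity; a real
    # classification needs legal review, not a three-question function.
    if prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if annex_iii_use_case:
        return RiskTier.HIGH
    if user_facing_or_synthetic_content:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# A CV-screening model falls under employment (Annex III):
print(classify(False, True, False))  # RiskTier.HIGH
```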
Why this reaches APAC
Article 2 of the Act is the provision that brings in teams outside the EU. The regulation applies not only to providers established in the EU, but also to providers and deployers established in third countries when the output produced by the AI system is used in the Union. In practice, that pulls in APAC enterprises with European customers, European partners, or SaaS flows that ultimately touch European end-users.
Concretely: if a Hanoi-based ML team builds a CV-screening model used by a German client, the client is a deployer in the EU and the Vietnamese team is a provider under the Act. Both sides carry obligations. The same logic applies to Sydney fintechs scoring EU-resident borrowers, Singapore logistics firms routing parcels through Rotterdam, or Bangkok media companies serving generative content into European markets.
The enforcement timeline you should pin to the wall
The Act applies in waves. This staggered approach is deliberate – it gives providers time to bring systems into compliance – but it also means "the AI Act is in force" and "the AI Act applies to my system" are different statements. Map your products against these dates (a small date-check sketch follows the list):
- 2 February 2025 – Prohibitions (unacceptable-risk systems) and the AI-literacy obligation for staff already apply.
- 2 August 2025 – General-Purpose AI (GPAI) model obligations apply. Providers of foundation models must publish training-data summaries, respect EU copyright law, and (for systemic-risk models) meet additional testing and incident-reporting duties.
- 2 August 2026 – Most remaining provisions become applicable, including the full high-risk regime for Annex III systems.
- 2 August 2027 – Full application, including high-risk AI embedded in regulated products (e.g. medical devices, machinery) under Annex I.
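Because the same product portfolio can straddle several of these dates, it helps to encode them once and query which waves already bind you. A minimal sketch, where the wave labels are our shorthand for the milestones above, not official terms:

```python
from datetime import date

# The Act's staggered application dates, labelled with our shorthand
# for each wave of obligations described above.
MILESTONES = {
    date(2025, 2, 2): "prohibitions + AI-literacy duty",
    date(2025, 8, 2): "GPAI model obligations",
    date(2026, 8, 2): "full high-risk regime (Annex III)",
    date(2027, 8, 2): "high-risk AI embedded in Annex I products",
}

def waves_in_force(on: date) -> list[str]:
    """Return the obligation waves already applicable on a given date."""
    return [label for start, label in sorted(MILESTONES.items()) if on >= start]

print(waves_in_force(date(2025, 9, 1)))
# ['prohibitions + AI-literacy duty', 'GPAI model obligations']
```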
What high-risk actually requires
If your system lands in the high-risk tier, the operational burden is real. You need to run (and document) a risk-management process across the lifecycle, operate a quality-management system, demonstrate data-governance practices on training, validation, and test sets, keep technical documentation that lets authorities reconstruct your development, provide automatic event logging, design for human oversight, and hit accuracy, robustness, and cybersecurity thresholds.
On top of that, you must register the system in the EU database, complete a conformity assessment before placing it on the market, and run post-market monitoring and incident reporting. Deployers of high-risk AI (the organisations using it) carry their own parallel obligations – including a fundamental-rights impact assessment for public bodies and entities performing public services.
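One pragmatic way to track all of this is to hold the obligations as an explicit per-system checklist and let the gap analysis fall out of it. A sketch, with keys that are our own shorthand rather than statutory terms:

```python
# Per-system gap-analysis checklist; keys are our shorthand for the
# high-risk obligations above, not statutory language.
HIGH_RISK_CHECKLIST = {
    "risk_management_process": True,
    "quality_management_system": False,
    "data_governance": False,
    "technical_documentation": True,
    "event_logging": True,
    "human_oversight_design": False,
    "accuracy_robustness_cybersecurity": False,
    "eu_database_registration": False,
    "conformity_assessment": False,
    "post_market_monitoring": False,
}

gaps = [item for item, done in HIGH_RISK_CHECKLIST.items() if not done]
print(f"{len(gaps)} open items: {gaps}")
```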
The penalties
Non-compliance is not priced like a GDPR-lite cost of doing business. The Act sets three tiers of administrative fines, and they are cumulative with other EU regimes (a worked example of the cap arithmetic follows the list):
- Up to €35 million or 7% of global annual turnover (whichever is higher) for violations of the Article 5 prohibitions.
- Up to €15 million or 3% of global annual turnover (again, whichever is higher) for breaches of provider/deployer obligations on high-risk systems or GPAI models.
- Up to €7.5 million or 1% of global annual turnover (same mechanic) for supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities.
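The "whichever is higher" mechanic matters because, for any large enterprise, the turnover percentage dominates the fixed figure. A quick worked example (the turnover number is invented):

```python
def fine_cap(fixed_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    # Each tier's ceiling is the fixed amount or the percentage of
    # worldwide annual turnover, whichever is higher.
    return max(fixed_eur, turnover_pct * global_turnover_eur)

# Top tier for a firm with EUR 2bn global turnover:
print(fine_cap(35_000_000, 0.07, 2_000_000_000))  # 140000000.0 – the 7% dominates
```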
What APAC teams should be doing now
The practical advice we give clients in Hanoi, Brighton VIC, and across the region is simple: do the classification work first, then the gap analysis. Most of the scary operational burden only applies to systems in the high-risk tier or to GPAI models. Getting a confident read on which of your AI products fall into which bucket reduces the compliance surface dramatically.
For any high-risk system, the long-lead items are data governance and technical documentation. Both take months to retrofit and are the items auditors will ask to see first. For GPAI deployments, the urgent item is supplier diligence – if you're routing requests through a third-party foundation model, you need contractual assurance that the upstream provider will meet the 2 August 2025 obligations, because downstream deployers cannot cure them.
Governance is the final piece. AI-literacy obligations are already in effect; documented training for staff who develop or operate AI systems is no longer optional. And fundamental-rights impact assessments, where required, are not a form-filling exercise – they're the audit trail regulators will want to see.