EU AI Act: Extension of Deadlines for High-Risk AI Systems – What to Do Now
Last week, on 7 May 2026, the European Parliament and the Council reached an agreement on postponing key deadlines in the AI Act. The agreement still needs to be formally approved – this is expected in the coming weeks. For companies that already use AI systems, this initially provides some relief – but those who do not use the extra time strategically risk significant compliance gaps.
Current Status: What Already Applies, What Is Still to Come?
With the AI Act, the EU is pursuing a phased implementation plan. Since February 2025, certain AI practices have been prohibited, including social scoring systems and specific forms of emotion recognition in the workplace. In parallel, AI literacy requirements have taken effect: companies must ensure that their teams have sufficient AI skills. Since August 2025, documentation and information obligations have also applied to providers of general-purpose AI models (GPAI), such as the models underlying ChatGPT.
The obligations for high-risk AI systems, originally scheduled to apply from 2 August 2026, are now expected to be postponed until 2 December 2027. This postponement is part of the so-called Digital Omnibus, through which the EU is responding to the reality that technical standards and testing procedures are not yet widely available. For product-related high-risk AI systems as defined in Annex I – such as those embedded in medical devices, machinery or toys – the deadline is even to be extended to 2 August 2028.
At the same time, new bans are being introduced: from December 2026, AI systems that generate non-consensual sexual or intimate content will be prohibited, as will systems that generate depictions of child sexual abuse or assist in their creation. For existing AI systems, a grace period until December 2026 is also provided for the labelling of AI-generated content.
Why the Extension Is Not a Free Pass
The postponement provides breathing room, but does not change the substance of the requirements. High-risk AI systems include applications in particularly sensitive areas: recruitment and human resources management, access to education, healthcare, credit provision, insurance, as well as law enforcement and migration. Many companies already deploy such systems without being aware of their regulatory classification.
Examples from HR and recruitment illustrate the scope: automated candidate pre-selection, AI-supported aptitude diagnostics, performance management systems, or algorithmic decision support for promotions and terminations may all fall under the high-risk category. These systems will in future require comprehensive measures, including risk management, technical documentation, human oversight, transparency obligations, and high standards for data quality and cybersecurity.
The Central Challenge: Responsibility Also Lies with the Deployer
A common misunderstanding concerns the distribution of roles. The AI Act distinguishes between providers (who develop an AI system or GPAI model, or have one developed, and place it on the market or put it into service under their own name or trademark) and deployers (who use an AI system under their own authority in a professional context). Many companies assume that compliance responsibility lies solely with the provider. This is incorrect. As a deployer, you bear independent obligations – regardless of whether your provider is compliant.
This responsibility often catches organizations off guard. Frequently, AI tools were not introduced as part of a deliberate digital strategy but arrived as part of standard software that has gradually been extended, such as an HR system that suddenly gains an AI module for candidate selection. In such cases, the company is the deployer – and thus carries its own obligations.
Concrete Steps for Preparation
- Conduct an inventory: Comprehensively capture all AI systems that make or influence decisions about individuals. Start with recruitment and HR, then examine access control systems, performance evaluations, health diagnostics, and credit scoring.
- Clarify roles and assign responsibility: Determine for each system whether your company acts as a provider, deployer, or both. Name specific individuals with a clear mandate for monitoring, escalation, and, if necessary, halting a system's operation.
- Conduct risk assessments: Document for each high-risk system what harm could arise, who could be affected, and what protective measures apply. Request risk documentation from providers, but conduct your own assessments for your specific deployment context.
- Establish transparency and explanation mechanisms: Create processes through which affected individuals can receive comprehensible explanations. Clearly define who answers such inquiries, according to what criteria, and how comprehensibility is ensured.
- Build documentation: Begin systematic documentation now. The requirements for technical documentation, conformity proofs, monitoring logs, and evidence of human oversight are extensive and cannot be created retroactively at short notice.
Why Act Now?
The extension until December 2027 is helpful. But it is not a signal to relax. Experience shows that companies significantly underestimate the effort required. Retrofitting AI governance into systems already in operation is more complex than implementing new systems with compliance built in from the start. Companies that start now have time to test, adjust, and refine their processes.
The provisional agreement must now be formally adopted by the Council and the European Parliament.
The additional time is valuable. But only if it is used.
At ATTYS, we are happy to assist you in preparing for the requirements of the AI Act and in establishing AI governance structures that comply with the law.
