AI Act officially comes into force!
On July 12, 2024, the official text of the AI Act was published in the Official Journal of the European Union. We can therefore set out with confidence the key dates for the regulation's entry into force and application:
- As a general rule, the regulation applies from August 2, 2026. However:
- Chapters I (General provisions) and II (Prohibited practices) apply from February 2, 2025.
- Chapter III, Section 4 (High-risk AI systems – Notifying authorities and notified bodies), Chapter V (General-purpose AI models), Chapter VII (Governance), and Chapter XII (Penalties), as well as Article 78 (Confidentiality), apply from August 2, 2025, except for Article 101 (Fines for providers of general-purpose AI models).
- Article 6, paragraph 1 (Classification rules for high-risk AI systems) and the corresponding obligations established in this regulation apply from August 2, 2027.
What are the potential implications for businesses?
Below are the obligations arising from the AI Act for entrepreneurs who create AI solutions and for entities that use AI-supported solutions.
| Category | Entrepreneurs creating AI solutions | Entities using AI-supported solutions |
| --- | --- | --- |
| Risk Management | Implement a risk management system to identify and mitigate risks associated with AI systems. | Ensure the AI systems used comply with risk management protocols and are regularly assessed for risks. |
| Data Governance | Ensure high-quality data sets are used for training AI models, including proper data management and documentation. | Validate that AI systems use reliable data and verify the data governance practices of AI solution providers. |
| Transparency and Documentation | Maintain comprehensive documentation of AI systems, including design, development, and testing phases. | Require access to AI system documentation to understand the functionality and limitations of the AI solutions used. |
| Human Oversight | Implement measures for effective human oversight to intervene and manage AI systems when necessary. | Establish protocols for human oversight to monitor AI systems' performance and intervene if required. |
| Safety and Performance | Regularly test AI systems to ensure they meet safety and performance standards. | Verify that the AI systems in use are tested and certified to meet safety and performance criteria. |
| Accountability | Assign clear responsibilities for compliance with AI Act requirements within the organization. | Ensure accountability for the AI systems used, including compliance with legal and regulatory requirements. |
| Reporting Obligations | Report serious incidents and malfunctions of AI systems to the relevant authorities. | Maintain a procedure for reporting any issues or malfunctions encountered with AI systems to the provider and the authorities. |
| User Information | Provide users with clear information on how to operate AI systems and understand their outputs. | Seek comprehensive information on the operation and output of AI systems from providers to ensure proper usage. |
AI = new responsibilities for businesses
Regardless of whether your organization provides or uses AI-based solutions, a set of obligations will soon apply to ensure such practices are lawful. The AI Act's requirements naturally depend on the specific field of application and the level of risk involved; determining their scope, however, starts with a thorough inventory of your processes and systems.
Our law firm’s experienced team is available to provide guidance on both this specific matter and broader corporate compliance issues. Please direct any inquiries to [email protected].