
Millions in fines for failing to comply with AI Act – Check out the new regulations!

Artificial intelligence (AI) technology is advancing at a dizzying pace, and one of its most controversial and potentially harmful uses is deepfakes. Under the AI Act, a deepfake is "AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful". This broad definition covers videos and images as well as audio content that could mislead the viewer.

While deepfakes are used in many legitimate industries, such as film, marketing and education, their growing abuse – including crimes of image manipulation, fraud and blackmail – has prompted the European Union to introduce regulation.

The AI Act aims not only to protect against such abuses, but also to introduce severe penalties for their unethical use.


AI Act – When will the new regulations apply?

Although the AI Act entered into force on 1 August 2024, its full implementation will be gradual. This staggered timetable gives companies and AI users time to adapt to the new legal requirements.

From 2 February 2025, regulations on prohibited uses of AI will come into force. These bans include, among other things, the use of AI technology to influence people without their awareness. Examples include advertisements or messages that subtly change our decisions without our fully understanding how this happens.

It will also be prohibited to use so-called social scoring, or judging people based on their personal data and behaviour, which the EU considers unethical and unfair.

From 2 August 2026, the labelling of AI-generated content will also become mandatory. Under the new regulation, any content generated or manipulated by artificial intelligence that could mislead the viewer must be clearly labelled as artificially created. For example, if a company creates a deepfake for marketing purposes, it must clearly indicate that the video is not authentic.

Serious consequences of deepfakes

In recent years, deepfakes have become a tool for financial fraud, political manipulation and blackmail.

Imagine a situation where a fake video depicting a well-known person, such as a politician, spreads on social media, causing a huge commotion and influencing public opinion. This kind of use of AI shows how serious a threat deepfakes can be to democratic processes and political stability.

In the private context, deepfakes can be used for blackmail, for example by creating compromising material that did not actually happen. Victims of such actions often find themselves in dramatic situations, and the violation of their privacy can have long-lasting psychological and social consequences. It is this kind of criminal activity that makes AI technology of increasing concern and requires precise regulation.

The AI Act introduces an obligation to cooperate with providers of AI systems. They will be required to implement mechanisms to protect against deepfakes.

Technological challenges and collaboration with AI providers

One of the biggest challenges is the detection of deepfakes. AI technology is constantly evolving, making the generated content more and more realistic. It is for this reason that the AI Act introduces a duty to cooperate with AI system providers who must develop tools to identify and flag deepfakes. Technology companies will be required to implement security mechanisms that detect manipulated content.

Harsh penalties for non-compliance

The AI Act provides for very severe penalties for non-compliance. For prohibited practices, such as social scoring or subliminal manipulation, fines can reach €35 million or 7% of the company's total worldwide annual turnover, whichever amount is higher.

These are among the highest penalties foreseen in EU legislation. They are designed to deter companies and users from engaging in illegal practices.

For non-compliance with other obligations, such as transparency requirements or the labelling of AI-generated content, fines can reach €15 million or 3% of annual turnover, whichever amount is higher.


How to prepare for the new AI Act regulations?

Companies and individuals using AI technologies, including deepfakes, should take the appropriate steps to comply with the upcoming regulations. Here are 4 of the most important ones:

  • Labeling of AI-generated content: Any content generated or manipulated by AI in a way that suggests authenticity must be appropriately labeled so as not to mislead the audience.
  • Conducting a compliance audit: Those using artificial intelligence should review their AI practices and technologies to ensure they comply with all new requirements. An audit will identify potential areas of risk and help develop an action plan for compliance. Solutions may include implementing automated labeling tools for AI-generated content.
  • Staff training: Employees should be adequately trained on the new regulations so that they use AI lawfully.
  • Monitoring future changes: The AI Act provides for further changes and updates, so it is important to regularly follow the regulations and their implementation.
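The labeling obligation described above is technology-neutral: the AI Act requires a machine-readable disclosure but does not prescribe a specific format. As an illustration only, a content pipeline could attach a simple disclosure record to each AI-generated item; all names here (`AIDisclosure`, `label_content`, `ExampleGen`) are hypothetical, not part of the regulation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical machine-readable disclosure record. The AI Act mandates
# labeling of AI-generated content but leaves the technical format open.
@dataclass
class AIDisclosure:
    generator: str                          # AI system that produced the content
    generated_at: str                       # ISO 8601 timestamp of generation
    label: str = "AI-generated content"     # human-readable disclosure

def label_content(content: bytes, generator: str) -> dict:
    """Bundle a piece of AI-generated content with its disclosure record."""
    disclosure = AIDisclosure(
        generator=generator,
        generated_at=datetime.now(timezone.utc).isoformat(),
    )
    return {"content": content, "disclosure": disclosure}

item = label_content(b"<video bytes>", generator="ExampleGen v1")
print(item["disclosure"].label)  # AI-generated content
```

In practice the disclosure would be embedded in the media file's metadata or shown alongside the content; the point of the sketch is that the label travels with the content rather than being added manually afterwards.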

Consequences of the new legislation

The AI Act is an EU-wide regulation that comprehensively governs the use of artificial intelligence. It aims to ensure responsible and safe use of AI, prevent abuse and mitigate the risks posed by deepfakes. The legislation makes it mandatory to clearly label content generated or modified by AI in order to prevent manipulation and misinformation.

In addition, the AI Act sets strict legal standards that must be met not only by users but also by AI technology providers; non-compliance will result in heavy fines.

The introduction of the new regulations will certainly also increase public awareness of the impact of AI on everyday life. They will also contribute to a better understanding of both the risks and opportunities of the technology. As a result, this will allow for a more responsible and informed approach to its use.

Want to keep up to date with the latest artificial intelligence regulations and important legal developments? Subscribe to our newsletter and receive the most interesting news and legal advice straight to your email!

Author: Marcin Waszak, PhD, Team Leader at DKP Legal
