Last updated: 10.07.2025

AI Law in Poland 2025

Is there specific AI regulation in Poland?

As of June 2025, Poland does not yet have a dedicated, comprehensive law specifically regulating artificial intelligence (AI), big data, or machine learning.

However, Poland is actively working to implement such legislation, primarily to align with the European Union’s AI Act, which entered into force in August 2024.


What is the recent legal framework of AI in Poland?

Until this dedicated AI-specific law is enacted, AI-related activities in Poland are governed by existing legal frameworks that apply to various aspects of AI, big data, and machine learning. Key legal acts include:

  1. EU Artificial Intelligence Act (AI Act): Although national implementing and enforcement provisions are still being adopted, it is the primary legislative reference for AI regulation across the EU and is already shaping Poland’s approach.
  2. General Data Protection Regulation (GDPR): This EU regulation governs the processing of personal data, including issues related to profiling and automated processing.
  3. Personal Data Protection Act: This national law complements the GDPR and provides additional provisions specific to Poland.
  4. Act on Combating Unfair Competition: This act allows for the protection of algorithms, models, and training data as trade secrets.
  5. Telecommunications Act and Act on the Provision of Electronic Services: These laws regulate data processing in digital services.
  6. Civil Code: This code may apply to liability issues arising from damages caused by artificial intelligence systems.
  7. Copyright and Industrial Property Law: These laws govern the legal status of source code, data, and models, including aspects of intellectual property rights and data quality.

Is there any soft law in Poland with respect to AI?

Yes, in addition to the above legal frameworks, Poland is developing guidelines and policies to support the ethical and responsible implementation of AI technologies. For instance, the ‘AI Development Policy in Poland’ was updated in 2024 to emphasize administrative competence and ethical considerations.

Furthermore, the Polish Data Protection Authority (UODO) has issued guidelines on profiling and has intervened in cases involving AI-based systems and automated processing.

These initiatives are aligned with Poland’s national AI strategy, which outlines long-term goals for AI development and governance.


Is Poland about to enact dedicated AI regulation?

Yes, Poland is preparing to introduce dedicated AI legislation and is in the process of adapting its national law to align with the European Union’s AI Act and the proposed AI Liability Directive.

In October 2024, the Ministry of Digital Affairs opened a public consultation on a draft law focused on AI systems. This consultation concluded in mid-November, and the feedback received was used to refine the draft. The revised version was made public in February 2025.

The proposed law outlines the creation of a dedicated body – the AI Office, officially named the Commission on the Development and Safety of Artificial Intelligence – which will be responsible for overseeing the AI sector in Poland.

It also includes mechanisms for system notifications, compliance evaluations, and penalties for non-compliance. These steps clearly show that Poland is moving toward implementing comprehensive AI legislation to govern the use and development of evolving AI technologies.

The new body is also expected to support AI innovation by offering guidance and oversight without stifling technological growth.


Ownership of AI-Created Works under Polish Law

The issue of ownership of works created by artificial intelligence (AI) remains legally complex and largely unsettled in the Polish legal system.

There is currently no dedicated legal framework that directly addresses the authorship or ownership of AI-generated works. Instead, these matters are governed by general principles of copyright law, industrial property law, trade secret protection, and contractual arrangements.

In the absence of AI-specific legislation, organisations rely on well-defined agreements and internal IP policies to manage rights and obligations.

AI as a Non-Legal Entity

Artificial intelligence systems do not possess legal personality and therefore cannot hold rights or ownership. Any output generated by AI must, from a legal standpoint, be attributed to a human or legal person.

Human Involvement and Copyright Eligibility

Where a human uses AI as a tool and exercises creative control over the generated output, the resulting work may qualify for copyright protection. In such cases, the human creator is deemed the author, and standard copyright rules apply pursuant to the Act on Copyright and Related Rights of 4 February 1994.

However, if the output is produced autonomously by an AI system, without sufficient human creative input, the work may not be eligible for copyright protection. According to Article 1(2¹) of the Copyright Act, ideas, procedures, methods of operation, or mathematical concepts themselves are excluded from protection.

Therefore, purely AI-generated works – absent human originality – may fall outside the scope of copyright.

This legal gap has led to calls for new AI policy at both national and EU levels.

Ownership of the AI Algorithm and Model

The ownership of the underlying AI algorithm or trained model depends on several factors:

  • Employment Relationship: If the algorithm is created by an employee as part of their professional duties, the employer is typically the holder of the economic rights under Article 12 of the Copyright Act.
  • Originality Requirement: To be eligible for copyright, the code or model must demonstrate originality and creative character. If it does not meet this threshold, it may still be protected through know-how or trade secrets, as governed by the Act on Combating Unfair Competition.
  • Contractual Provisions: In collaborative environments – such as research consortia or technology partnerships – ownership and rights to use, modify, sublicense, or commercialise AI-based solutions are typically regulated by contract. These agreements are especially important when AI tools are deployed across borders or industries with varying legal obligations.

Protection of Non-Copyrightable Elements in Poland

When AI-generated works or components (e.g. model architecture, training data, output) do not qualify as protected works under copyright law, companies often resort to:

  • Confidentiality Agreements
  • Internal IP Policies
  • Trade Secret Protection

This layered approach is particularly common in the context of proprietary source code, datasets, and model structures, where system security and the protection of economic value are essential.

Regulatory Context

Current European Union regulations, including the AI Act, do not explicitly regulate the ownership of AI-generated content or AI algorithms. These matters are left to be governed by national regulations and private agreements.


Who Owns AI-Generated Content? A Case-by-Case Overview Under Polish Law

Under Polish law, the ownership of works created by generative AI depends on the nature of the creation process, the level of human involvement, and the applicable legal or contractual framework. In summary:

  • AI used as a tool with human creativity – the human author holds the copyright
  • Fully autonomous AI-created content – likely unprotected by copyright; may be treated as data or know-how
  • Algorithm developed by an employee – the employer holds the economic rights
  • Joint or consortium-based development – ownership is governed by contract
  • Non-copyrightable elements – protection through trade secrets or confidentiality measures

In the absence of specific legislation, clearly defined contracts and internal IP policies remain the most reliable tools for safeguarding rights in AI-related developments.


Does AI-Generated Content Benefit from Intellectual Property Protection in Poland?

Under the current Polish legal framework, AI-generated content does not generally benefit from intellectual property (IP) protection unless it meets specific criteria tied to human authorship or inventiveness. This creates significant legal uncertainty, particularly in areas such as copyright, patent law, and ownership rights.

AI and Copyright Protection in Poland

Polish copyright law, in line with European legal standards, recognizes only works that are the result of creative activity by a natural person. Pursuant to Article 1 of the Act on Copyright and Related Rights, a work must exhibit originality and individual character, which presumes a human creator.

As a result, content autonomously produced by current AI systems – such as literary texts, images, music, or code – is not protected by copyright, unless a human being has played a significant and creative role in its production.

Minimal human involvement, such as selecting prompts or editing outputs, may in some cases support a claim of authorship, but the boundaries remain legally undefined and subject to interpretation.

Patentability of AI-Related Inventions in Poland

Under Polish and EU patent law, only natural persons can be named as inventors. The European Patent Office (EPO) has confirmed in decisions such as the DABUS case that AI systems cannot be recognised as inventors, even if they autonomously generate technical solutions.

Nevertheless, inventions developed with the assistance of AI – where a human has guided the creative process or made essential decisions – can be patentable, provided they meet the standard criteria of novelty, inventive step, and industrial applicability.

Ownership and IP Challenges in Poland

Several key IP issues arise in the context of AI-generated content:

  • Legal Status and Ownership: The lack of human authorship disqualifies purely AI-generated works from copyright or patent protection. This raises questions about who – if anyone – can claim legal rights to such content, particularly in commercialisation or licensing contexts.
  • Training Data and IP Infringement: The use of copyrighted or patented material in training AI models introduces risks of infringement. While the Directive on Copyright in the Digital Single Market (2019/790) allows limited text and data mining (TDM), its commercial use is subject to licensing and may be restricted by rights holders. Increasingly, legal scrutiny also concerns the ownership and licensing status of the input data used to train AI models, particularly when sourced from copyrighted databases or publicly accessible content.
  • Co-Authorship and Attribution: In cases where content is generated through human–AI collaboration, determining the extent of human creative contribution remains complex. Without clear legal guidelines, disputes over co-authorship and rights allocation are likely to increase.

Alternative Forms of Protection in Poland

Given the limitations of traditional IP regimes, organisations increasingly rely on alternative legal instruments to safeguard AI-generated content and related assets:

  • Trade Secret Protection: Elements such as model architecture, training datasets, prompts, fine-tuning parameters, and generated outputs may be protected under the Act on Combating Unfair Competition, provided adequate confidentiality measures are in place.
  • Contractual Safeguards: In B2B settings, ownership and usage rights are frequently defined by licence agreements, non-disclosure agreements (NDAs), and specific contractual clauses addressing the creation, modification, and commercialisation of AI-generated outputs.

AI Law in Poland – Conclusion

At present, AI-generated content in Poland enjoys only limited or indirect IP protection, contingent on the involvement of a human creator. Fully autonomous outputs lack formal recognition under both copyright and patent law.

In response, stakeholders are increasingly turning to trade secret law and contractual solutions to secure rights and mitigate legal risk. As AI technologies continue to evolve, the need for a clearer and more harmonised legal framework will become increasingly urgent.


FAQ: AI legislation in Poland

How Is AI Technology Protected in Poland?

In the absence of dedicated AI-specific intellectual property laws, companies in Poland protect their AI solutions and data through a combination of contractual, regulatory, technical, and organisational measures:

  • Contractual Protections: IP ownership, licensing, and rights to source code, models, and data are clearly defined in contracts – particularly in outsourcing, R&D, and joint ventures. Clauses typically address modification rights, sublicensing, and confidentiality.
  • Data Protection and GDPR Compliance: Where training data includes personal or sensitive information, companies must comply with the GDPR and the EU Data Act, applying principles such as data minimisation, purpose limitation, and anonymisation or pseudonymisation (a simplified technical sketch follows this list). These safeguards are not only essential for legal compliance but also serve as critical components of consumer protection.
  • Technical Safeguards: AI systems are increasingly secured through internal audits, model accuracy checks, data poisoning resistance, and monitoring of input/output data flows.
  • AI Governance: Many organisations adopt formal governance frameworks, including MLOps practices, model documentation, ethics committees, and internal policies to ensure transparency and accountability in the use of AI.
  • Standards and Compliance: Industry certifications and compliance-by-design approaches are gaining importance, especially in anticipation of the EU AI Act, which promotes robust oversight for high-risk AI systems. This includes applications such as credit scoring, biometric identification, and facial recognition technology, all of which may trigger mandatory conformity assessments due to their potential impact on privacy and individual freedoms.
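
The data-minimisation and pseudonymisation measures mentioned above can be illustrated with a short, simplified sketch. It shows one possible technical approach, not a legal requirement or a mandated method; the field names, the keyed-hash technique, and the key-handling arrangement are assumptions made purely for illustration.

```python
# Illustrative sketch only (assumed field names and key handling): pseudonymising
# direct identifiers and minimising a record before it is used for model training.
import hmac
import hashlib

# Pseudonymisation key kept separately from the dataset; without it, the keyed
# hashes cannot be re-linked to individuals (pseudonymisation, not anonymisation).
SECRET_KEY = b"store-this-key-separately"

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a keyed hash; re-identification requires the key."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def minimise_and_pseudonymise(record: dict) -> dict:
    """Keep only the fields needed for training (data minimisation) and mask identifiers."""
    return {
        "customer_ref": pseudonymise(record["email"]),  # direct identifier replaced
        "age_band": record["age"] // 10 * 10,            # exact age coarsened to a band
        "features": record["features"],                   # non-personal model inputs kept as-is
    }

if __name__ == "__main__":
    raw = {"email": "jan.kowalski@example.pl", "age": 37, "features": [0.2, 0.8]}
    print(minimise_and_pseudonymise(raw))
```

In practice, the choice between such pseudonymisation and full anonymisation, and the way the key is stored and accessed, should follow the organisation’s own GDPR assessment.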

These layered strategies enable Polish companies to safeguard their AI assets while ensuring legal compliance and operational integrity.

Is the Use of AI Allowed in the Workplace in Poland?

Yes, the use of artificial intelligence in the workplace is legally permitted in Poland, provided that it complies with applicable laws, particularly those related to labour law, data protection, and non-discrimination.

Employers may implement AI systems to support or automate various functions, including recruitment, HR analytics, productivity monitoring, and customer service. However, such practices must adhere to key legal and ethical standards:

  • Labour Law Compliance: The use of AI must respect employee rights under the Polish Labour Code, including protection from unjustified monitoring, profiling, or discriminatory treatment. Use of facial recognition or productivity-tracking tools must be justified and proportionate to avoid infringing on fundamental rights.
  • GDPR and Privacy: AI systems processing employee data must comply with the General Data Protection Regulation (GDPR). This includes ensuring transparency, lawful data processing, data minimisation, and the right to human review in case of automated decision-making. Employers must also respect the rights of data subjects, meaning employees or job applicants whose personal data is being processed, particularly when AI is used for profiling or monitoring purposes.
  • Equal Treatment and Non-Discrimination: Employers must ensure that algorithmic tools do not result in biased or discriminatory outcomes, especially in recruitment, promotion, or termination processes.

While AI adoption is accelerating across sectors, including administration, finance, and logistics, its growing role in decision-making raises concerns about transparency, fairness, and accountability.

As of now, there is no comprehensive statutory framework in Poland regulating the use of AI in employment, though policy discussions and EU-wide initiatives (such as the AI Act) are shaping future regulatory expectations.

In parallel, public institutions in Poland are promoting reskilling, digital education, and labour market adaptation strategies to prepare workers for the evolving demands of AI-integrated workplaces.

Who is liable for damage caused by AI systems in Poland?

Liability for damage caused by AI systems in Poland depends on the context – tort law, contract law, or product liability – and is assessed on a case-by-case basis.

Currently, Poland does not have a unified legal framework specifically designed to regulate AI liability.

However, under forthcoming EU legislation – particularly the AI Act and the proposed AI Liability Directive – providers or users of high-risk AI systems may face presumed fault, especially in cases involving unacceptable risk to public safety or rights.

Until these regulations are fully implemented, companies should mitigate risk through contractual indemnities, clear user instructions, and robust testing, documentation, and compliance practices.

These measures are especially critical in industries like the financial sector, defence, and healthcare, where the consequences of system failure may affect fundamental rights, national security, or public trust.

What happens if an AI system commits a crime?

Since AI systems do not have legal personality, they cannot be held criminally liable under Polish law. Responsibility is attributed to natural persons or legal entities involved in the development, deployment, or use of the AI.

If a crime occurs due to faulty programming, lack of oversight, or intentional misuse, liability may fall on the developer, operator, or user – typically under theories of negligence, aiding and abetting, or organisational fault.

Criminal responsibility is assessed based on human actions or omissions, not the AI’s autonomous behaviour.

What is the AI Office?

The AI Office – formally known as the Commission on the Development and Safety of Artificial Intelligence – is a planned regulatory body responsible for overseeing AI implementation in Poland.

It will manage system notifications, compliance checks, and coordinate with other national and EU institutions, including conformity assessment bodies.

It may also consult with civil society groups during policy development.

How is Poland ensuring the safe development of AI?

Poland’s draft AI bill includes provisions aimed at enhancing AI safety, particularly in relation to high-risk systems. The legislation promotes transparency, oversight, and accountability while encouraging scientific research in areas such as general-purpose AI models and trustworthy data practices.

These measures reflect a broader strategy to align with EU standards and foster responsible AI development at the national level.

What conditions must be met to legally deploy AI systems in Poland?

To deploy AI systems in Poland, organisations must comply with relevant local laws and EU regulations. This includes ensuring data protection, respecting fundamental rights, and demonstrating appropriate technical knowledge to meet safety, transparency, and documentation standards.

High-risk systems may require conformity assessments before entering the market.

Who monitors AI use in the financial sector in Poland?

In Poland, AI systems used in banking or insurance may be subject to additional oversight by the Polish Financial Supervision Authority, which can issue guidance to ensure compliance with sector-specific risk, transparency, and accountability standards.

What role does the EU play in shaping Poland’s AI regulations?

The European Commission plays a central role in setting the regulatory framework for artificial intelligence across the EU.

Its adoption of the AI Act has directly influenced Poland’s efforts to create national AI laws that align with European standards and legal obligations.

Michał Dudkowiak, Expert Team Leader, D&P Legal