The Artificial Intelligence Act is now in effect: how will it affect your company’s digital transformation?
On August 1, 2024, Regulation (EU) 2024/1689 of the European Parliament and of the Council of June 13, 2024, entered into force, establishing harmonized rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139, and (EU) 2019/2144, and Directives 2014/90/EU, (EU) 2016/797, and (EU) 2020/1828 (Artificial Intelligence Regulation) (Text with EEA relevance), hereinafter referred to as the EU Artificial Intelligence Act, or AI Act.
The regulation applies to all 27 member states of the European Union, including Spain. On the SAIMA SYSTEMS blog, we explain the key aspects covered by this regulation: objectives, implementation deadlines, penalties, and more. This information is essential for understanding how the regulation will affect companies undergoing digital transformation. In particular, companies must pay close attention to how the regulation categorizes AI systems and applications by risk level, as well as to their obligations regarding citizens' right to be informed about the use of AI systems.
The AI Act is the world’s first comprehensive regulation on artificial intelligence. After years of negotiation, in December 2023 the European Parliament and the Council reached a political agreement on this pioneering and complex legislation, which seeks a delicate balance between protecting citizens’ rights, ensuring the ethical use of this technology, and, at the same time, fostering its development and innovation.
Purpose of the AI Act
The AI Act aims to provide AI developers and implementers with clear requirements and obligations regarding specific uses of AI. The European Commission notes that “the Regulation aims to reduce administrative and financial burdens on businesses, particularly small and medium-sized enterprises (SMEs)”.
According to the official statement issued by the European Commission: “The AI Act is the world’s first comprehensive legal framework on AI. The goal of the new rules is to foster trustworthy AI in Europe and beyond, ensuring that AI systems respect fundamental rights, safety, and ethical principles, and addressing the risks posed by highly powerful and impactful AI models”.
The provisions of the European AI legislation address issues such as:
- Risks specifically created by AI applications.
- The prohibition of AI practices that pose “unacceptable risks”.
- The identification of a list of high-risk AI applications.
- The establishment of clear requirements for AI systems used in high-risk applications.
- Specific obligations for implementers and providers of high-risk AI applications.
- Prior assessment, before a given AI system is put into service or placed on the market.
- The establishment of a governance structure at the European and national levels.
It should be noted that the Act is part of a broader package of policy measures designed to support the development of trustworthy AI. This package also includes the AI Innovation Package and the Coordinated Plan on AI (see the More Information section below).
AI risk levels
The law identifies four levels of risk for AI systems: unacceptable, high, limited, and minimal. In the most restrictive scenario, the law prohibits all artificial intelligence systems that pose a clear threat to people’s safety, livelihoods, and rights. This includes everything from systems that generate social scores for governments to toys that use voice assistance and encourage dangerous behavior.
High risk
AI systems identified as high-risk include AI technology used in:
- Critical infrastructure (e.g., transportation), which could endanger the lives and health of citizens.
- Educational or vocational training, which can determine a person’s access to education and career path (e.g., exam scoring).
- Product safety components (e.g., AI applications in robot-assisted surgery).
- Employment and workforce management, and access to self-employment (e.g., CV screening software for hiring procedures).
- Essential public and private services (e.g., credit scoring that denies citizens the opportunity to obtain a loan).
- Law enforcement that may interfere with individuals’ fundamental rights (e.g., evaluation of the reliability of evidence).
- Migration, asylum, and border control management (e.g., automated review of visa applications).
- Administration of justice and democratic processes (e.g., AI solutions for searching judicial rulings).
These high-risk AI systems will be subject to strict assessments and requirements before they can be placed on the market.
Biometric Identification: High Risk
All remote biometric identification systems not intended for the verification of a person are considered high risk and are subject to strict requirements. The use of remote biometric identification in public spaces for law enforcement purposes is, in principle, prohibited.
However, the regulation strictly defines and governs limited exceptions, such as when necessary to search for a missing child, prevent a specific and imminent terrorist threat, or detect, locate, identify, or prosecute a suspect in a serious crime.
Limited risk
Limited risk refers to the risks associated with a lack of transparency in the use of AI. The AI Act introduces specific transparency obligations to ensure that individuals are informed when necessary, thereby fostering trust.
For example, when AI systems such as chatbots are used, users must be made aware that they are interacting with a machine so that they can decide, on an informed basis, whether to continue. Providers will also have to ensure that AI-generated content is identifiable.
Audio and Video
Furthermore, AI-generated text published to inform the general public about matters of public interest must be labeled as artificially generated. The same obligation applies to audio and video content that constitutes a deepfake: it must be clearly disclosed as artificially generated or manipulated.
Minimal or no risk
The AI Act allows for the free use of low-risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently in use in the EU fall into this category.
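The four risk tiers described above can be pictured as a simple lookup table. The following Python sketch is purely illustrative: the category names and the triage logic are our own assumptions for the purpose of this example, not an official classification tool, and real classification under the AI Act requires legal analysis of each system.

```python
# Illustrative only: a simplified lookup of the AI Act's four risk tiers,
# using example use cases mentioned in the sections above.
RISK_TIERS = {
    "unacceptable": {"social_scoring", "dangerous_voice_toys"},
    "high": {"critical_infrastructure", "exam_scoring", "cv_screening",
             "credit_scoring", "remote_biometric_identification"},
    "limited": {"chatbot", "ai_generated_content"},
    "minimal": {"video_game", "spam_filter"},
}

def risk_tier(use_case: str) -> str:
    """Return the illustrative risk tier for a known example use case."""
    for tier, use_cases in RISK_TIERS.items():
        if use_case in use_cases:
            return tier
    return "unclassified"  # would require case-by-case legal assessment

print(risk_tier("spam_filter"))     # minimal
print(risk_tier("credit_scoring"))  # high
```

In practice, the tier determines the obligations that follow: prohibition, conformity assessment, transparency duties, or none at all.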
Implementation timeline for the AI Act: a phased approach
Although the AI Act took effect on August 1, 2024, it will not become fully applicable until 24 months later, that is, in August 2026, with certain obligations for high-risk systems extending to August 2027.
The European Union’s plan provides for a phased implementation of these obligations:
- As of February 2, 2025, the general provisions apply and AI practices posing unacceptable risk are prohibited.
- As of May 2, 2025, codes of practice must be ready.
- As of August 2, 2025, the governance rules and the obligations for general-purpose AI models apply. By that date, each Member State must also have its penalty regime in place for companies that fail to comply with the regulation.
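The phased timeline above can be summarized in a short date-based lookup. This is a minimal sketch for illustration, assuming the milestone dates listed in this article; it is not a compliance tool.

```python
from datetime import date

# Illustrative only: milestone dates from the phased timeline above.
MILESTONES = [
    (date(2024, 8, 1), "AI Act enters into force"),
    (date(2025, 2, 2), "General provisions apply; unacceptable-risk practices prohibited"),
    (date(2025, 5, 2), "Codes of practice must be ready"),
    (date(2025, 8, 2), "Governance rules apply; national penalty regimes in place"),
    (date(2026, 8, 2), "AI Act fully applicable"),
]

def milestones_reached(on: date) -> list[str]:
    """Return descriptions of all milestones whose date has passed by `on`."""
    return [desc for deadline, desc in MILESTONES if deadline <= on]

print(milestones_reached(date(2025, 3, 1)))
```

Running the example for March 1, 2025 returns the first two milestones, reflecting the obligations already in force at that date.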
More information
Official Journal of the European Union
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
European Commission, December 9, 2023. The Commission welcomes the political agreement on the Artificial Intelligence Act
https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473
Innovation Package
https://digital-strategy.ec.europa.eu/en/policies/plan-ai
Coordinated Plan on AI
https://ec.europa.eu/commission/presscorner/detail/en/ip_24_383