The EU's new AI regulation (the AI Act), which entered into force in August 2024, aims to ensure the trustworthiness of AI systems used within the EU and to safeguard fundamental rights. The regulation takes a risk-based approach, imposing obligations primarily on companies that provide high-risk AI systems.
AI is a rapidly advancing technology that offers businesses numerous opportunities while introducing new regulatory challenges. The EU's long-awaited AI Act (Regulation (EU) 2024/1689) finally entered into force on August 1, 2024. Its provisions will apply gradually over the two years following entry into force, with certain exceptions. The Act is the world's first comprehensive regulatory framework for AI systems, designed to ensure that AI is safe, transparent, and respectful of fundamental rights. It also seeks to create a harmonized internal market for AI within the EU, promote the adoption of AI technologies, and establish an environment that supports innovation and investment (European Commission, 2024).
It is crucial for small and medium-sized enterprises (SMEs) to understand the practical implications of this regulation and how it will affect the use of AI in their business operations. With responsible AI practices, companies can position themselves as industry leaders and differentiate themselves from competitors. This article outlines the key aspects of the AI Act that SMEs need to be aware of and offers guidance on how to prepare for the changes it will bring.
Company Obligations Are Based on Risk Classification and Role
The AI Act classifies AI systems into four risk categories based on their intended use: prohibited, high-risk, specific transparency risk, and minimal-risk AI systems. Each category carries a different level of obligations. A company's obligations also depend on its role in the AI value chain, for example whether it acts as a provider or a deployer of an AI system.
SMEs need to accurately identify which category each of their AI systems falls into in order to meet the applicable regulatory requirements; a simple internal inventory, as sketched below, can be a practical first step.
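As a concrete starting point, an SME might maintain a lightweight inventory mapping each AI system to its assigned risk category and the company's role for that system. The Python sketch below illustrates one possible structure; the system names, intended uses, and assigned categories are invented for illustration only and are not legal classifications under the Act.

    from dataclasses import dataclass
    from enum import Enum


    class RiskCategory(Enum):
        # The four risk tiers defined by the AI Act.
        PROHIBITED = "prohibited"
        HIGH_RISK = "high-risk"
        TRANSPARENCY = "specific transparency risk"
        MINIMAL = "minimal risk"


    @dataclass
    class AISystemRecord:
        # One entry in a company's internal AI inventory (hypothetical schema).
        name: str
        intended_use: str
        category: RiskCategory
        role: str  # e.g. "provider" or "deployer"; obligations differ by role


    # Hypothetical example inventory for a small company.
    inventory = [
        AISystemRecord("cv-screener", "ranking job applicants",
                       RiskCategory.HIGH_RISK, "deployer"),
        AISystemRecord("support-chatbot", "customer service chat",
                       RiskCategory.TRANSPARENCY, "provider"),
        AISystemRecord("spam-filter", "filtering inbound email",
                       RiskCategory.MINIMAL, "deployer"),
    ]

    # Flag the systems that need the closest compliance attention.
    for record in inventory:
        if record.category in (RiskCategory.PROHIBITED, RiskCategory.HIGH_RISK):
            print(f"Review required: {record.name} "
                  f"({record.category.value}, {record.role})")

Keeping such an inventory up to date makes it easier to spot when a new or changed use of AI moves a system into a stricter category.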
Read a summary of the risk classifications under the AI Act here.