Following the introduction of the risk-focused Artificial Intelligence Act, which came into effect on 1st August, businesses that are based in or connected to the EU and use AI in their products or operations now face a range of new regulations. Here are the details.
The European Union’s new cross-sector regulations concerning the use of artificial intelligence were voted in on 21st May and have since been approved by the European Council. Now recognised as the Artificial Intelligence Act, the new rules formally came into effect on 1st August.
The act is the first of its kind globally and incorporates a definition of AI based on the “product safety and risk-based approach” championed by the EU. It categorises AI-related risk into four levels – minimal, specific transparency, high and unacceptable – that determine which rules businesses must now follow.
From minimal to unacceptable risk
Many companies fall into the minimal risk category and are not required to take any action at all, though they may choose to comply with additional rules voluntarily.
For AI systems classified under specific transparency risk, such as chatbots, there is a requirement to clearly disclose to users that they are interacting with a machine. Certain AI-generated content, including deepfakes, must be clearly identified, and users must be notified when biometric categorisation or emotion recognition systems are in use.
High-risk systems, including those used for recruitment or determining eligibility for bank loans, must adhere to stringent regulations. These include implementing risk-mitigation systems, ensuring high-quality data sets, maintaining activity logs, providing detailed documentation, offering clear user information, enabling human oversight and upholding a high level of cybersecurity.
AI systems with unacceptable risk are banned outright due to their potential threats to fundamental human rights or user safety. Examples include AI that manipulates behaviour to circumvent users’ free will, such as toys that use voice assistance to encourage dangerous behaviour in minors, systems enabling ‘social scoring’ by governments or companies, predictive policing systems based on profiling, and real-time remote facial recognition systems used in public spaces for law enforcement.
“The Act is designed to ensure that AI developed and used in the EU is trustworthy, with safeguards to protect people’s fundamental rights,” reads an EU press release. “The regulation aims to establish a harmonised internal market for AI in the EU, encouraging the uptake of this technology and creating a supportive environment for innovation and investment.”
The new law applies to all 27 Member States, as well as to non-EU businesses with a customer base in the bloc. EEA countries such as Liechtenstein, Norway and Iceland will also need to comply.
Non-compliance with the rules will result in fines of up to 7% of global annual turnover for violations involving banned AI applications, up to 3% for other violations, and up to 1.5% for providing incorrect information.
Companies will have between six months and three years to adjust, depending on the risk category: bans on unacceptable-risk systems apply after six months, rules for general-purpose AI models after one year, and the majority of the act’s provisions after two years.