Legal Zyte-geist #4: Overview of the EU AI Act
What is the EU AI Act and how does it impact my business?
Welcome to the monthly column about web scraping and legal themes by Sanaea Daruwalla. She is the Chief Legal & People Officer at Zyte. Sanaea has over 15 years of experience representing a wide variety of clients and is one of the leading experts on web data extraction laws.
Disclaimer: This post is for informational purposes only. The content is not legal advice and does not create an attorney-client relationship.
What is the EU AI Act?
The EU AI Act is a regulation to govern the development and usage of artificial intelligence (AI). The Act is designed to ensure that AI systems used in the EU are safe and properly regulated.
Who needs to follow the EU AI Act?
The Act applies to (1) providers of AI (companies that develop AI systems) and (2) deployers of AI (companies who use AI systems in their operations or in the course of their professional activities).
What if I’m not based in the EU?
Like GDPR, the Act has what is called “extraterritorial effect.” It applies to all companies, regardless of where they are based, if they are offering AI services in the EU or to EU citizens. As such, it is very much a global regulation that will touch almost all businesses developing, offering, or deploying AI across the world.
When does it go into effect?
The Act has been approved and will be published in May or June of 2024. It will then enter into force 20 days after its publication, with staggered compliance deadlines built in: general compliance is required 2 years after it enters into force, provisions related to generative AI apply after 1 year, and provisions related to prohibited uses of AI apply after 6 months.
What does the act mean for my business?
The EU AI Act takes a risk-based approach to compliance: the riskier the system, the more heavily the Act regulates it. The Act breaks risk down into four categories: (1) Unacceptable Risk, (2) High Risk, (3) Limited Risk, and (4) Minimal Risk. Below is a very basic chart showing how the risk-based approach works and your obligations under each risk category. Note that if your system sits in the High Risk or Limited Risk category, you must also adhere to the obligations of the categories below it.
What are my obligations for a High Risk AI system?
High Risk AI requires what is called a conformity assessment, which includes various obligations: a risk management system, risk assessments, data quality standards, security standards, detailed documentation and record keeping, human oversight ("human in the loop"), and transparency.
Is ChatGPT High Risk?
The Act categorizes chatbots as Limited Risk systems. As such, companies using AI chatbots of any kind must be transparent about their usage. This means that you need to be clear when you are using a chatbot so that individuals know they are engaging with an AI system rather than a human.
What level of transparency is required?
As stated above, companies must be transparent about when AI is being used. But often more is required, so explainability statements are used to explain the rationale behind the AI system, where its data comes from, and how it was trained. For automated processing, controllers are also expected to explain the logic behind their decision-making, and this would normally be provided in an explainability statement as well.
How can my company prepare for the EU AI Act?
First, you need to determine whether your company is required to comply with the Act, and if so, whether you are using or building any AI systems that fall under its purview. You will need to create a register of all AI that you use and document each system from end to end. Next, determine which risk category your AI falls under and adhere to the corresponding obligations listed above.
For Limited and Minimal Risk systems, your obligations are limited to a code of conduct and transparency. These obligations are significantly easier to comply with and are worth following for any AI system, even one that doesn't fall under the Act: it is simply best practice to document your AI, have policies governing your use of AI, and be transparent about its usage.
High risk systems will present more of a challenge, as the full conformity assessment is required. If you are creating or utilizing a high risk system, you will need to ensure that you have the proper documentation and that you keep that up to date, and that you have the requisite security, accuracy, and transparency measures in place.
Over time, you will want an audit system in place and a policy governing the introduction of any new AI. These will allow you to continue using AI while maintaining compliance with the Act. You should also conduct annual employee training so that your workforce understands the legal and ethical requirements.
For more helpful resources, check out my post on the legality of web scraping and Zyte’s more detailed compliant web scraping checklist.
Explore, connect, and collaborate with us. Join us on LinkedIn and in our Extract Data Community on Discord.