On 1 August 2024, the EU AI Act entered into force across the European Union, with enforcement phased in over the following two to three years.
The EU AI Act places limits on how AI can be used, and it has been met with heavy lobbying from companies that develop and maintain AI tools. While a great deal has been said about the larger corporations that have developed or are developing AI, far less has been said about the smaller businesses working on their own models or planning to scale with AI in the future.
Here’s a comprehensive explanation of what the EU AI Act means for small businesses, and whether or not it affects your business.
What is the EU AI Act?
The EU AI Act is a comprehensive regulation governing how AI is used within the European Union. Certain applications of AI are banned outright under the Act, while others must meet specific requirements in order to keep operating.
The aim of the EU AI Act is to provide a baseline for the protection of human rights and democracy, limit the use of ‘high-risk’ AI, and still allow companies within Europe the freedom to create innovative products.
Under the Act, AI systems are classed into one of four risk categories:
- Unacceptable risk – these systems are banned outright under the EU AI Act, and include biometric categorisation based on sensitive characteristics, the untargeted scraping of images to build facial recognition databases, emotion recognition in workplaces and schools, social scoring, and manipulative AI systems designed to circumvent free will. There are some narrow exceptions for law enforcement and safeguarding, but these require judicial approval.
- High risk – systems that can cause serious harm if misused. The category covers AI built into regulated products as a safety component, as well as stand-alone systems that can affect the safety of people or the environment. For most businesses, this is the category that matters: organisations adopting high-risk AI must follow strict obligations, and the systems themselves must meet conformity requirements before they are allowed to operate in the EU.
- Limited risk – systems that could pose a risk of manipulation, such as chatbots and generative AI tools. Within the EU, tools in this category must disclose that they are AI when interacting with humans.
- Minimal risk – every other AI system that doesn’t fit into the other three categories. These carry no legal restrictions, though the EU AI Act strongly encourages providers to follow voluntary codes of conduct on fairness and non-discrimination.
What about generative AI tools?
Generative AI tools, which have become an indispensable part of many workplaces, do not fall neatly into a single one of these categories. Instead, the EU AI Act places generative AI and other general-purpose AI (GPAI) tools under the ‘limited risk’ category as a starting point.
However, the Act then divides these tools into two further categories based on risk level: those without systemic risk and those with it.
All generative and general-purpose AI systems must first adhere to the following requirements:
- Outputs labelled as AI-generated in a machine-readable format (one illustrative approach is sketched after this list).
- Well-documented training, testing, and evaluation processes to be made publicly available and maintained with up-to-date information.
- A publicly available and detailed summary of the content used to train the model.
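The Act does not prescribe a single labelling format, so how you satisfy the machine-readable labelling requirement is largely a design decision. As a purely illustrative sketch – the function name, field names, and model name below are our own assumptions, not an official schema – one simple approach for text outputs is to attach a small provenance record to every piece of generated content:

```python
import json
from datetime import datetime, timezone

def label_ai_output(text: str, model_name: str) -> dict:
    """Wrap generated text with a machine-readable provenance record.

    Illustrative only: the EU AI Act does not define a specific schema,
    so adapt the field names to whatever labelling convention your
    provider or an industry standard (e.g. C2PA content credentials) uses.
    """
    return {
        "content": text,
        "provenance": {
            "generated_by_ai": True,  # explicit machine-readable flag
            "model": model_name,      # which model produced the output
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

# Example: labelling a generated product description before storing or publishing it.
record = label_ai_output("Here is your draft product description...", "example-gpt")
print(json.dumps(record, indent=2))
```

For images, audio, and video, emerging provenance standards such as C2PA offer a more formal way to embed this kind of label directly in the file.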
General-purpose AI models that fall into the systemic-risk category must meet much stricter obligations: providers must notify the European Commission about the model so it can be included in public databases, and must report any serious incidents. Systemic-risk GPAI models are also required to undertake comprehensive risk assessments, put cybersecurity and infrastructure security measures in place, and demonstrate compliance with the AI Act.
Some of these obligations can be relaxed if providers release their model under a free and open-source licence, depending on how publicly accessible and distributable the model is – though models that actually pose systemic risk remain subject to the stricter requirements.
What does the EU AI Act mean for small businesses using generative AI or general purpose AI?
The bulk of the EU AI Act directly affects the creators of AI tools and systems, but small businesses that build products on generative AI are also at risk of non-compliance. The Act treats any business that uses a generative AI system as the foundation for a new product as a distributor or downstream provider, which means you can be subject to the same legal requirements as bigger companies.
However, it depends on what you use the tool for.
A general-purpose AI that is not itself classified as high-risk, but becomes high-risk through the way you use it, must meet the requirements set out for high-risk AI systems – that is, you must inform the relevant authorities, provide all the necessary information, and cooperate with their requests.
For limited-risk AI, the obligations are lighter: labelling AI-generated content and informing users when they are interacting with an AI system.
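In practice, the disclosure side of this can be as simple as showing users a clear notice before an AI assistant responds. The snippet below is a minimal sketch of that idea – the helper name and the wording of the notice are our own assumptions, not text prescribed by the Act:

```python
# Hypothetical disclosure helper: the wording and placement of the notice
# are product design choices, not text mandated by the EU AI Act.
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "Answers are generated by a language model and may contain mistakes."
)

def start_chat_session() -> list[dict]:
    """Open a chat session with an up-front AI disclosure shown to the user."""
    print(AI_DISCLOSURE)  # surfaced before the first AI-generated reply
    # Keep the notice in the transcript so the disclosure is auditable later.
    return [{"role": "notice", "content": AI_DISCLOSURE}]

if __name__ == "__main__":
    transcript = start_chat_session()
```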
What happens if I do not comply with the EU AI Act?
As with other major EU regulations such as the GDPR, breaking the rules can attract substantial fines: up to 3% of your business’ global annual turnover (or €15 million) for most violations, rising to 7% (or €35 million) for prohibited uses of AI, with SMEs and start-ups facing the lower of the percentage and the fixed amount. For example, a business with €2 million in annual turnover could face a fine of up to €60,000 under the 3% tier. Either way, breaching the EU AI Act has a direct financial impact, and that can be an existential risk for a small business.
How do I make sure I’m using AI legally?
The EU AI Act sets out a comprehensive approach to deploying AI safely, but if you’re worried you might miss something, we recommend rolling out AI tools with the help of a seasoned provider who can guide you towards compliant use. This significantly reduces the risk of bringing AI into your business.
For small businesses that are building their own AI models, we recommend working with an experienced partner to make sure you meet the necessary documentation requirements.
Need more guidance?
AI is a highly effective tool for many small businesses, but the legal ramifications can have a significant impact – and there is a way through to the other side. If you’re interested in leveraging AI’s potential within the bounds of the law, work with us! At AIRO, we are seasoned IT professionals who specialise in tailored guidance and consultation, and we have been tracking the EU AI Act as it has developed. Partnering with us gives you confidence that the work you’re doing with AI is both effective and compliant, and can elevate your business to the next level.
Safeguard your business’ future – reach out to AIRO Ltd today!