Setting the stage for global AI regulation standards, ensuring safety, rights, and ethical AI use, shaping the future of AI governance.
As the first legislation of its kind, the AI Act aims to establish a legal framework and set guardrails that ensure AI systems are trustworthy, respect fundamental rights, safety, and ethical principles, and address the risks posed by powerful AI models. It seeks to provide clear requirements for AI developers and deployers before their systems are released to the public.
The AI Act introduces a risk-based approach to AI regulation, categorising applications according to their risk levels. This categorisation ensures that the regulatory response is proportionate to the potential risk posed by different AI systems.
The four risk categories are:

- Unacceptable risk: AI practices that are banned outright, such as social scoring and manipulative systems.
- High risk: systems used in sensitive areas such as critical infrastructure, employment, and law enforcement, subject to strict obligations.
- Limited risk: systems such as chatbots, subject to transparency obligations so users know they are interacting with AI.
- Minimal risk: the vast majority of AI applications, which face no additional requirements.
High-risk AI systems must undergo a rigorous process before market introduction, including further risk assessment, data quality management, traceability, detailed documentation, clear information for deployers, human oversight, and robustness and accuracy measures.
Unlike the fragmented approach by China or the sector-specific AI policies in the U.S., the EU's legislation offers a comprehensive framework, highlighting its leadership in tech governance.
In line with the new “risk level” approach, the use of biometric data is tightly controlled. Prohibited practices include indiscriminate scraping of facial images and emotion recognition in sensitive areas like schools and workplaces. By setting clear boundaries on what is permissible, particularly in sensitive contexts like law enforcement and public spaces, the Act reinforces its commitment to safeguarding individual rights while harnessing the benefits of AI.
For organisations globally, the EU's AI Act is a call to align with stringent new standards, reflecting the GDPR's influence on data privacy.
Organisations must ensure they have an AI policy and framework to rigorously vet their AI technologies for safety, transparency, and compliance, especially for high-risk applications. This obligation extends beyond EU borders, affecting any entity whose AI interacts with EU citizens. Non-compliance carries severe penalties, with fines of up to €35 million or 7% of global annual turnover for the most serious violations, pressing businesses to adapt swiftly. This urgency is amplified if you work in government, law enforcement, or national infrastructure and plan to use AI.
This act will reshape the AI landscape, with similar legislation in the pipeline in multiple countries, and even Sam Altman (OpenAI CEO) calling for further regulation in the US and warning that things could go ‘horribly wrong’.
This law isn't just for the EU; it sends a message to the whole world about the importance of controlling AI technology wisely. It's designed to grow and adapt as technology evolves, making sure that as AI advances, it remains in line with what's good for society.
Essentially, this act is a big step toward a future where technology is developed with care and respect for everyone's rights, encouraging other countries to think about how they handle AI too. As the AI Act begins to take effect, it will gradually introduce these rules, allowing time for adjustments and ensuring that the technology benefits us while keeping our values in check.
------------------------------------------------------------------------------------------------------
To navigate AI's complexities while ensuring ethical use, it's important for organizations to have a clear AI policy. We invite you to download our AI Policy & Checklist template, a practical tool to help you implement AI responsibly.
This template will help you:
- Set up guidelines to protect your organisation and its data.
- Address bias to ensure fairness in AI applications.
- Foster transparency in how AI decisions are made, building trust.
Use our template as a starting point for adopting AI in a way that's mindful of risks and committed to ethical practices. In the rapidly evolving AI landscape, having clear policies is key to leveraging AI's benefits while avoiding pitfalls.