Andrew Patel, Senior Researcher at WithSecure Intelligence, offers the following insight: "Some of the requirements set forth by the AI Act are going to be difficult to enforce. These include disclosing that content was generated by an AI and ensuring that models cannot generate illegal content."
He adds, "Many open-source generative models are now available to the public, and these models can be fine-tuned and repurposed by anyone with enough know-how. Large language models have already been customised by extremists into AI systems capable of enabling and amplifying online radicalisation. Regulations probably aren't going to be able to stop those efforts."
Interpret EU laws
Paul Brucciani, Cyber Security Advisor at WithSecure, agrees with Andrew, adding: "Ensuring that LLMs can't generate illegal content is a technical and legal minefield. Good luck to the developer who needs to interpret EU hate laws in each jurisdiction."
"I would like to make two points:
- Given the glacial pace at which bureaucracies typically move relative to the pace of technology evolution, the EU has acted with foresight and leopard-like agility on the matter of AI.
- The EU's position is very different from the US regulatory position, and it will take years before a transatlantic agreement on regulating AI can be reached. This will affect AI investment in the EU."
Proposed EU AI Act
The AI Act aims to provide a legal framework that is innovation-friendly, future-proof and resilient to disruption. The act will classify AI systems based on their perceived risk and require the companies responsible for building the most impactful tools to disclose important data about safety, interpretability, performance and cyber security.
As with previous tech regulations pushed by the EU, such as GDPR, the AI Act will undoubtedly have an effect on how tech companies do business. It is likely to come into force in 2025 or later.
EU’s forthcoming AI Act
A late provision reportedly added to the EU’s forthcoming AI Act would force companies like OpenAI to disclose their use of copyrighted training data. Already, a number of high-profile AI firms have been hit by copyright lawsuits.
Some of the AI world’s biggest players, like OpenAI, have avoided scrutiny by simply refusing to detail the data used to create their software. But legislation proposed in the EU to regulate AI could force companies to disclose this information, according to reports from Reuters and Euractiv.
AI Bill of Rights
On October 4, 2022, the White House published a Blueprint for an AI Bill of Rights (the “Blueprint”).
It identifies five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.
- Safe and effective systems
- Algorithmic discrimination protections
- Data privacy
- Notice of automated system use and explanation of its outcome
- Human alternatives, consideration, and fallback
EU's planned AI Act
Unlike the EU's planned AI Act, the Blueprint is non-binding, but it breaks new ground by framing AI regulation as a civil-rights issue. It asserts that communities have the right to protection and redress against harm to the same extent that individuals do.
The choice of language and framing clearly positions the Blueprint as a framework for understanding AI governance broadly as a civil-rights issue, one that deserves new and expanded protections under American law.