SAN JOSE, Calif. -- The European Union has released the world's first set of rules for the use of artificial intelligence. Companies around the world can now see regulations designed to govern the technology.
Joseph Thacker is the Principal AI engineer for the San Mateo, California-based company AppOmni. Thacker is a security researcher who specializes in both application security and AI.
"I really love that they are focusing on AI and they're willing to just take a stab at it even though there is a lot of uncertainty around it," Thacker said.
The Artificial Intelligence Act takes a risk-based approach - the higher the risk, the stricter the rules.
The "Unacceptable risk" category includes AI systems that will be banned, such as certain biometric identification systems, like facial recognition or social scoring systems.
The "High-risk" AI systems include critical infrastructure like gas and electricity or medical devices.
And then there's "Specific transparency risk" which means users on chatbots for example should be aware that they are interacting with a machine.
"I think it feels very focused on AI safety and on kind of fighting disinformation and the ways we can prevent AI from being used to discriminate or invade people's privacy," Thacker said. "But I didn't see a strong focus on like core components the practical aspects of AI security."
Thacker said the AI Act provision didn't include prompt injection which is something we're going to see more of going forward.
"Right now we don't see much of it because it hasn't been an attack vector," Thacker said.
MORE: Biden executive order imposes new rules for AI. Here's what they are
When looking at AI companies broadly, Thacker said they tend to fall short on certain security vulnerabilities.
"These companies that are large like OpenAI or Google, or Meta, I think that the biggest security vulnerabilities they're going to introduce are ones that we don't really have a full understanding of," Thacker said. "When you introduce AI to a system - it's intentionally manipulatable. Cause the goal of Large Language Models, LLMs, and other generative AI even like AI art is to do what the user wants to do. So when you have a malicious user telling it to do something malicious, you're in a conundrum because the AI wants to comply with what the user wants but if the user is malicious then you're stuck between a rock and hard place."
On the other side of the spectrum, Thacker said some AI startups have been focused on the AI feature and not enough on security.
"In a lot of these small startups, like AI-based startups, they're failing just traditional security in regards to things like authentication, bypasses, you can access other user's data, you can get features for free or you can use the AI models that they're paying a lot of money for without being charged or you can rack up a huge bill for the service provider because they just want it to be a frictionless sign-on," Thacker said.
Thacker said the top-level experts would rate the existential risk of AI as significant.
"Historically government is very slow to respond to technology and I don't think we can afford to do that this time," Thacker said.
The AI Act is a provisional agreement. San Jose State Professor and tech expert Ahmed Banafa said it's like a 'wish list' with changes to come.
"There's a lot of protection for the user more than for the technology. This is where you're going to see the tech companies are really holding the flag and saying let's go there and explain to them this is going to hurt us," Banafa said.
Banafa thinks the 'biometric identification' method, like facial recognition, will be negotiated more.
"This is a provisional agreement and the civil rights organization and agencies and also the tech companies still have negotiations and I think this is one of the sticking points about how far they can allow that," Banafa said.
The AI Act won't take effect until 2025.
Companies not complying with the rules could draw fines of up to 35 million euros or 7 percent of the company's revenue.