OPINION | MARKET & TECHNOLOGY TRENDS
Europe Can Lead the Way in Regulation of AI

A decisive EU stand on regulation of rapidly evolving AI technologies would safeguard citizens’ rights and make Europe a hub for trustworthy AI.

By Sally Ward-Foxton

THE POTENTIAL BENEFITS of state-of-the-art AI technologies to society are huge, from developing new drugs and vaccines to transforming education. But as AI systems rapidly evolve to become more powerful, levels of responsibility also rise. While most AI systems pose no risk to people’s safety or fundamental rights, certain systems create a risk that needs to be addressed.

The EU has recently published proposed regulations for the development and use of AI systems, arguing that the current legal uncertainty surrounding AI’s use could lead to a slower uptake of AI technologies. The proposed regulations have the stated aim of turning Europe into a global hub for trustworthy AI that is designed to guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, innovation, and investment across the EU.

“On artificial intelligence, trust is a must, not a nice-to-have,” Margrethe Vestager, executive vice president of the European Commission for a Europe Fit for the Digital Age, said in a statement. “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”

The proposed regulations split AI systems into four categories based on the level of risk those systems pose.

Unacceptable-risk AI systems are those that pose a clear threat to the safety, livelihoods, and rights of people. These include systems that use subliminal techniques to manipulate people or circumvent their free will, as well as systems that perform “social scoring” on behalf of governments.

High-risk systems include AI used for safety-critical applications such as in transport infrastructure and robot-assisted surgery. AI systems that affect the course of people’s lives, such as exam scoring, credit scoring, and recruitment systems, are also in this category, alongside law enforcement systems used to evaluate evidence or verify personal documents.

All biometric identification systems are considered high-risk, with live use in publicly accessible spaces for law enforcement purposes prohibited in principle. There are narrowly defined, heavily regulated exceptions, such as using facial recognition to search for a missing child, but those must be approved by a judge.

Makers of high-risk systems must demonstrate regulatory compliance before the systems can go on the market. The regulations stipulate adequate risk assessment; high quality of training datasets to minimize the possibility of bias; high levels of robustness, security, and accuracy; traceability of results; and appropriate human oversight.

Putting a system on the market that does not conform to the regulations will attract fines of up to €30 million or 6% of total worldwide annual turnover, whichever is higher (for the FAANG companies, these could amount to billions of dollars).

Limited-risk systems include chatbots and deepfakes, which will need to be labeled appropriately to comply.

Minimal-risk systems, which can be freely used, include the vast majority of AI systems, such as AI-powered video games and email spam filters.
Here’s why these regulations are so important: Where the EU leads, others follow. And in the realm of digital technology, local laws have wider global impact. In the past couple of years, for example, the EU’s General Data Protection Regulation (GDPR) has become a de facto standard around the world. Because it’s the most stringent regulation of its type, many companies simply comply with GDPR worldwide to avoid having to navigate a hodgepodge of laws in different territories. As a result, GDPR-copycat regulations have been adopted in territories outside the EU, including Brazil, India, and several U.S. states.

Could the proposed EU regulations on the use of high-risk AI systems become a de facto global standard like GDPR? As Vestager said, public trust is essential to the successful rollout of any AI technology, high-risk or otherwise. The best way to build that trust is to develop AI systems within a regulatory framework that considers people’s safety and fundamental rights at its core, along with a necessary focus

[Image caption: Law enforcement use of biometric identification systems will be banned in publicly accessible spaces in the EU, unless a judge’s order can be obtained.]