‘Trust Is a Must’: European Commission Proposes Stringent Regulations on AI
The European Commission’s first legal framework on Artificial Intelligence aims for “human-centric, sustainable, secure, inclusive and trustworthy AI”
The artificial intelligence market is skyrocketing—so much so that it may beat NASA’s Artemis mission to the moon.
With a five-year compound annual growth rate of 17.5%, the market is expected to cross the $500 billion mark by 2024, ushering in an era in which AI pervades every industry, ecosystem, and walk of life.
Taking cognisance of the exponential growth in AI investment across Europe, the European Union is confronting a pressing question: will this AI omnipresence come with a hidden cost?
The proposed regulation, unveiled alongside the 2021 Coordinated Plan on Artificial Intelligence, is the EU's landmark step toward legislating 'trustworthy AI': safeguarding the fundamental rights of citizens and the values of the Union while facilitating the development of a 'single market for lawful, safe and trustworthy' AI.
The wide-ranging set of regulations will apply to hardware and software-based AI solutions and services, ranging from autonomous vehicles to facial recognition software across the 27 member states of the EU.
The proposed new regulations come from the executive body of the bloc, the European Commission.
“On Artificial Intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” said the Executive Vice President for ‘A Europe fit for the Digital Age’, Margrethe Vestager, in a press statement on Wednesday.
“By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”
The proposal singles out AI practices that contravene Union values, such as subliminal manipulation techniques, social scoring, and real-time remote biometric identification, deeming them an 'unacceptable risk' and prohibiting most of them outright. Below that tier sits a broad 'high-risk' category of AI applications subject to close scrutiny.
Systems in the high-risk category would be subject to strict obligations, including a series of risk-mitigation measures, detailed documentation, and adequate human oversight to attest to their reliability.
The European Commission has empowered member states to impose stringent penalties on firms found to infringe the regulations, with fines of up to 6% of a company's global annual revenue.
The key objectives of the framework are to strengthen public-private partnerships, restore public trust in artificial intelligence systems, and encourage investment in AI technology across the Union, while simultaneously developing ethical guidelines to safeguard civil rights.
Implementation and enforcement of the regulation would be overseen by a new EU body, the European Artificial Intelligence Board.
The proposed legislation is a precedent-setting measure that alters the paradigm of the global AI market. That said, without amendment it may fall short of its ambitions: while the Union's horizontal approach is balanced and designed not to impede technological progress, its reliance on providers' self-assessment for conformity is no credible guarantor of compliance.
While the framework may have as many holes as Swiss cheese, it succeeds in setting a global standard for AI, one where governments come together to hold companies accountable for what they bring to society.