“By fostering innovation and safeguarding fundamental rights, Europe is positioning itself as a global leader in the responsible use of artificial intelligence,” concludes Paul Berenguer, Business Innovation Manager and author of this opinion piece.

Artificial Intelligence (AI) has rapidly become a key technology that is transforming the way industries operate, offering both revolutionary advances and significant challenges, with economic and social implications that are as obvious as they are uncertain in their medium- and long-term realisation. To address these complexities, the European Union (EU) has taken a visionary approach with the AI Regulation (AI Act), a regulatory effort that seeks to balance innovation with the protection of citizens’ fundamental rights and to ensure the technology’s safe and beneficial integration into our society.

The EU regulatory framework, exemplified by initiatives such as the European AI Regulation, the General Data Protection Regulation (GDPR) and the Cyber Resilience Act, provides a comprehensive roadmap for the development of artificial intelligence, prioritising security, transparency and ethical governance. This risk-based approach treats AI systems as products, focusing on their application and marketable use rather than on the development of the technology itself. The regulation therefore divides them into three distinct categories (low, high and unacceptable risk) with the aim of establishing appropriate levels of oversight and intervention.

The first of these categories, low-risk AI, covers systems that pose a low risk to fundamental rights and personal security, and includes applications that generate audiovisual content or images without a significant impact on critical decisions. Chatbots and virtual assistants based on generative AI, such as OpenAI’s ChatGPT or Microsoft’s Copilot, are examples of systems in this category.

Secondly, there are high-risk AI applications: systems that could directly affect the rights and freedoms of individuals and that are used in sensitive sectors such as health (medical diagnostics and patient management), transport (autonomous vehicles and navigation systems), critical infrastructure (energy and water supply), banking and insurance (credit assessment and fraud detection) and human resources management (employee selection and evaluation processes), among others.

And finally, there are AI products with unacceptable risk: systems whose implementation and commercialisation (but not their development) would be prohibited on European territory due to their potential to infringe fundamental rights. Examples include subliminal manipulation (systems that influence behaviour without consent), social scoring (the assessment of individuals based on behaviour or personal characteristics) and real-time biometric identification in public spaces without legal justification.
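To make the three-tier structure concrete, the sketch below models it as a simple classification over a system’s intended use. This is purely illustrative and in no way an official or legal mapping: the tier names, the example use cases and the `classify_use_case` helper are all hypothetical, and a real assessment would follow the Act’s annexes and involve legal review, not a lookup table.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers loosely mirroring the AI Act's three categories."""
    LOW = "low"                    # e.g. generative chatbots, content generation
    HIGH = "high"                  # e.g. credit assessment, medical diagnostics
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring, subliminal manipulation


# Hypothetical mapping from intended use to risk tier, for exposition only.
USE_CASE_TIERS = {
    "marketing_copy_generation": RiskTier.LOW,
    "customer_support_chatbot": RiskTier.LOW,
    "credit_assessment": RiskTier.HIGH,
    "employee_screening": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
    "realtime_public_biometric_id": RiskTier.UNACCEPTABLE,
}


def classify_use_case(use_case: str) -> RiskTier:
    """Return the illustrative tier for a use case, defaulting unknown
    uses to HIGH so they attract scrutiny rather than slip through."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)


if __name__ == "__main__":
    for uc in ("customer_support_chatbot", "credit_assessment", "social_scoring"):
        print(f"{uc}: {classify_use_case(uc).value} risk")
```

Defaulting unknown uses to the high-risk tier is a deliberate choice in this sketch: it errs on the side of scrutiny, in keeping with the precautionary spirit of the regulation described above.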

These regulations signal the EU’s commitment to proactive governance in the face of rapid technological development. As AI technologies penetrate sensitive areas of everyday life, the need for clear, adaptable and enforceable rules becomes increasingly pressing. In this regard, the EU’s decision to classify AI risks according to the severity of their potential impact serves as a model of prudent regulatory practice that other regions could (or should) emulate.

However, regulation also poses challenges, particularly in terms of compliance with existing legislation such as the GDPR. AI’s insatiable demand for data sits in tension with legal principles such as data minimisation and transparency, which are central to the European regulatory framework as a whole.

While the regulation initially applies only in the European context, its acceptance is gaining ground internationally, and it is being adopted as best practice in other regulatory frameworks and corporate environments. At the same time, the ‘black box’ nature of AI makes it difficult for users to understand how decisions are made, which in turn makes it difficult to ensure accountability and to uphold data subjects’ rights.

Another consideration is that regulation is not about stifling innovation, but about building trust in the use of this technology. For businesses, adapting to this new reality is not just a matter of compliance, but an opportunity to lead responsibly and gain a competitive advantage. By proactively embracing responsible AI governance, organisations can demonstrate to customers and other stakeholders their commitment to ethical, secure and transparent practices, thereby building trust while meeting their legal obligations.

In addition, the responsible use of AI requires cross-sector collaboration: technology developers, policymakers and end users must work together to continually refine these rules. Such cooperation is essential to ensure that this technology remains a beneficial tool for society, while protecting our rights and security.

In conclusion, the EU’s AI regulatory framework represents a balanced approach to managing the risks and opportunities that artificial intelligence undoubtedly presents. By fostering innovation and safeguarding fundamental rights, Europe is positioning itself as a global leader in the responsible use of AI. The onus is on businesses to embrace these changes and not only ensure compliance, but also lead with integrity in an increasingly AI-driven world.
