Melih Yönet

The EU AI Act: Regulatory standard for responsible AI

December 12, 2024

On March 13, 2024, the European Parliament passed the EU Artificial Intelligence Act (“EU AI Act”), marking the world’s first comprehensive legislation aimed at governing artificial intelligence (AI) systems. It was later published in the Official Journal (OJ) of the European Union on July 12, 2024.

As a regulatory standard for responsible AI use, the EU AI Act is a legal framework designed to regulate AI across sectors, with a focus on human rights, safety, and transparency.

This new regulation categorizes AI applications by risk level, imposing strict requirements on high-risk applications while applying minimal oversight to low-risk technologies.

This blog offers a thorough overview of the EU AI Act, detailing its objectives, scope, and risk-based approach, and highlights its relevance to Intenseye’s mission of enhancing workplace safety through responsible AI.

What is the EU AI Act?

The EU AI Act, as part of the European Union’s broader digital strategy, represents the world’s first comprehensive attempt to regulate artificial intelligence on a large scale.

The Act introduces a set of rules applicable across various industries, ensuring that AI technologies align with the EU’s commitment to safeguarding human rights, protecting public safety, and fostering responsible AI development.

By emphasizing ethical AI practices, the EU AI Act seeks to balance the advancement of AI with the protection of fundamental rights.

Core objectives of the EU AI Act

The EU AI Act is built on three core objectives:

1. Protecting fundamental rights and public safety: The Act emphasizes protecting individuals from potential harm and abuse, including privacy invasions and discrimination.

2. Fostering safe and transparent AI: High-risk AI applications must comply with strict regulatory requirements to prevent misuse, improve transparency, and uphold accountability.

3. Promoting innovation responsibly: By providing clear guidelines, the Act gives AI developers and companies the assurance needed to innovate responsibly, reducing regulatory uncertainty while ensuring AI applications align with European values.

Stakeholders under the EU AI Act

The EU AI Act outlines various compliance roles within the AI ecosystem, focusing on two primary actors—providers and deployers—each with distinct responsibilities and obligations.

Provider: According to the EU AI Act, a provider is any natural or legal person, public authority, agency, or entity that develops or commissions the development of an AI system or general-purpose AI model and places it on the market or into service under its own name or trademark, whether for commercial purposes or free of charge. Intenseye, as the developer and provider of workplace safety AI solutions, falls into this category.

As a provider, Intenseye strictly adheres to the rules set forth in the EU AI Act, ensuring that its AI technology meets the Act’s comprehensive standards, including risk management, data governance, and transparency. Intenseye’s responsibilities include conducting rigorous conformity assessments and ongoing monitoring to verify that its AI systems align with safety, ethical, and regulatory requirements both before and during deployment.

Deployer: A deployer, as defined by the EU AI Act, is a natural or legal person, public authority, agency, or entity that uses an AI system under its authority, excluding those using AI in a personal, non-professional capacity.

Intenseye’s customers, typically organizations and EHS teams in workplace environments, are categorized as deployers under the EU AI Act. They are responsible for using Intenseye’s AI systems in a compliant and transparent manner, ensuring that all usage aligns with ethical and regulatory expectations. This includes the obligation to notify employees or relevant stakeholders when AI systems are in use, reinforcing transparency and fostering trust in the workplace.

By working together within this structured provider-deployer framework, Intenseye and its customers collaboratively uphold the EU AI Act’s standards.

Intenseye's dedication to safety and regulatory compliance ensures that its technology not only supports workplace safety objectives but also fully aligns with the EU AI Act’s rigorous requirements, establishing a strong model of responsible AI deployment across jurisdictions.

Risk-based categorization of AI systems

To manage AI effectively, the EU AI Act employs a risk-based framework, classifying AI systems into four distinct levels: Unacceptable Risk, High Risk, Limited Risk, and Minimal or No Risk. Each category comes with specific compliance requirements, ensuring that the most restrictive rules apply to applications posing the greatest risks.

1. Prohibited AI systems

The EU AI Act classifies AI systems based on their potential risks to fundamental rights and safety. Unacceptable-risk systems, which are strictly prohibited, pose significant threats to fundamental rights and freedoms. Examples of these unacceptable AI applications include:

• Social scoring systems: Systems designed to rate or classify individuals based on behavior, which could lead to discriminatory practices or invasion of privacy.

• Real-time biometric identification in public spaces: This includes the use of facial recognition for surveillance, which is only permitted under specific, narrowly defined circumstances.

• Manipulative and exploitative AI: AI systems targeting vulnerable individuals or those using subliminal techniques to manipulate behavior are strictly prohibited.

Intenseye’s AI-powered EHS management software is designed to enhance workplace safety by detecting unsafe acts and conditions, thereby preventing accidents and injuries. It operates to safeguard employee well-being without infringing on their fundamental rights or freedoms. Intenseye’s AI technology enhances workplace safety through transparent, non-intrusive safety analysis, which aligns with the EU AI Act’s commitment to upholding fundamental rights. It does not involve facial recognition, surveillance practices, or real-time biometric identification, thus avoiding the unacceptable risk category.

2. High-risk AI systems

High-risk AI systems face significant regulatory requirements due to their potential impact on safety, well-being, or fundamental rights. These include applications in areas like critical infrastructure, education, and employment. For high-risk systems, the EU AI Act mandates:

• Risk management: Comprehensive risk assessments and thorough documentation to mitigate harm.

• Transparency and explainability: Clear and detailed explanations of how AI systems make decisions.

• Data governance: Ensuring data quality, representativeness, and freedom from bias.

• Human oversight: Human operators must have the authority to intervene as needed.

Intenseye’s safety AI software is designed to improve safety without compromising it. By seamlessly integrating with existing cameras, it analyzes workplace environments in real time, identifying potential safety hazards and enabling EHS teams to take timely preventive actions. Intenseye’s focus on risk identification and privacy-conscious design allows EHS teams to respond quickly without infringing on workers’ privacy. This proactive approach to safety risk management, combined with strong data security practices, demonstrates Intenseye’s alignment with the EU AI Act’s goals of promoting safety while upholding rights.
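To make these high-risk obligations concrete, the sketch below shows, in Python, the general shape of a privacy-conscious real-time safety loop: anonymize footage first, run hazard detection, and route every alert to a human reviewer. It is a minimal illustration only; the function names (blur_faces, detect_hazards, notify_ehs_team) and the confidence threshold are hypothetical and do not describe Intenseye’s actual product or API.

```python
# Hypothetical sketch of a privacy-conscious real-time safety loop.
# None of these names come from Intenseye's actual product or API;
# they illustrate the pattern the EU AI Act's high-risk rules imply:
# anonymize first, detect hazards, and keep a human in the loop.

from dataclasses import dataclass
from typing import List


@dataclass
class Frame:
    camera_id: str
    pixels: bytes  # raw image data from an existing CCTV feed


@dataclass
class HazardEvent:
    camera_id: str
    hazard_type: str   # e.g. "missing_hard_hat", "blocked_exit"
    confidence: float


def blur_faces(frame: Frame) -> Frame:
    """Anonymize people before any analysis (stubbed here)."""
    return frame  # a real system would run a face detector and blur


def detect_hazards(frame: Frame) -> List[HazardEvent]:
    """Placeholder for a computer-vision hazard model."""
    return []  # a real model would return detections for this frame


def notify_ehs_team(event: HazardEvent) -> None:
    """Human oversight: route every alert to a person, not an actuator."""
    print(f"[ALERT] {event.hazard_type} on {event.camera_id} "
          f"(confidence {event.confidence:.2f}) - review required")


def process(frame: Frame) -> None:
    anonymized = blur_faces(frame)           # privacy first
    for event in detect_hazards(anonymized):
        if event.confidence >= 0.8:          # illustrative threshold
            notify_ehs_team(event)           # humans decide the response


process(Frame(camera_id="dock-3", pixels=b""))
```

The structure matters more than the stubs: the model never acts on its own. The human-oversight requirement is reflected in the pipeline terminating in a notification to a reviewer rather than an automated intervention.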

3. Limited risk AI systems

Limited-risk systems, such as chatbots, face fewer restrictions but must meet transparency requirements. For example, users must be informed whenever they interact with an AI system. This transparency measure helps prevent undue reliance on or misuse of these systems.

Although Intenseye’s AI-powered EHS management software primarily serves a high-stakes purpose in safety, it also embraces the transparency principles seen in limited-risk systems. Safety managers and users are fully informed about how Intenseye’s AI-powered leading safety indicators operate, reinforcing trust and responsible use.

4. Minimal or no-risk AI systems

Minimal-risk systems, like spam filters or recommendation engines, are largely exempt from specific AI regulations. These applications pose negligible risk to users and public welfare, so they do not require specialized compliance measures.
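Taken together, the four tiers described above amount to a simple lookup from risk level to obligations. The Python sketch below is an informal paraphrase for orientation, not legal text; the tier names follow the Act, while the obligation summaries are ours.

```python
# Minimal sketch of the Act's four risk tiers and the obligations
# described above. The tier names follow the Act; the obligation
# summaries are informal paraphrases, not legal text.

from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # strict pre- and post-market duties
    LIMITED = "limited"             # transparency duties only
    MINIMAL = "minimal"             # largely exempt


OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["banned (e.g. social scoring, manipulative AI)"],
    RiskTier.HIGH: [
        "risk management and documentation",
        "transparency and explainability",
        "data governance and bias controls",
        "human oversight",
    ],
    RiskTier.LIMITED: ["inform users they are interacting with AI"],
    RiskTier.MINIMAL: ["no AI-specific obligations"],
}

for tier in RiskTier:
    print(tier.value, "->", "; ".join(OBLIGATIONS[tier]))
```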

Extraterritorial scope of the EU AI Act

Similar to the General Data Protection Regulation (“GDPR”), the EU AI Act incorporates an extraterritorial scope, extending its provisions beyond the EU’s geographic boundaries under certain conditions.

Specifically, the EU AI Act applies when an AI provider places an AI system or a general-purpose AI model on the EU market, regardless of whether the provider is based within the EU or in a third country.

Additionally, the Act applies to deployers of AI systems with an EU establishment, as well as to providers or deployers located in third countries whose AI systems’ output is used within the EU, a principle known as the “effects principle.”

The implications of this extraterritorial reach are significant, as the EU AI Act’s extensive jurisdiction applies to both European and non-European entities that provide AI products, services, or outputs to the EU.

Relocating operations outside the EU does not exempt entities from the EU AI Act’s requirements if their systems impact EU consumers or markets.

Consequently, non-EU companies serving EU customers must align their AI practices with the EU AI Act’s standards to maintain compliance, underscoring the Act’s role as a benchmark for global AI governance.

How the EU AI Act impacts businesses

The EU AI Act encourages all businesses utilizing AI to focus on ethical practices, data governance, and risk management. By promoting responsible innovation, the EU AI Act provides clear compliance guidelines while emphasizing human oversight and accountability.

Businesses, especially those in high-risk categories, must adhere to rigorous standards of data governance, transparency, and human-centered AI practices.

Additionally, organizations must prepare to document their AI systems thoroughly, which involves explaining data sources, training processes, and decision-making pathways. This level of transparency is designed to build public trust, enabling companies to innovate ethically. By emphasizing ethical AI standards, the EU AI Act is expected to influence AI regulation worldwide, setting a precedent for accountability and fairness in technology.
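Such documentation often resembles a model card. The sketch below shows, purely as a hypothetical example, the kinds of fields an organization might record for a high-risk system; the field names are illustrative and are not a template prescribed by the Act.

```python
# Hypothetical sketch of the kind of system documentation the Act
# expects for high-risk AI. Field names are illustrative, not a
# template prescribed by the regulation.

ai_system_record = {
    "system_name": "example-hazard-detector",   # hypothetical name
    "intended_purpose": "detect unsafe acts on industrial sites",
    "risk_tier": "high",
    "data_sources": ["on-site CCTV footage (anonymized)"],
    "training_process": "supervised learning on labeled safety events",
    "decision_pathway": "model flags hazard -> EHS reviewer confirms",
    "human_oversight": "all alerts reviewed by a person before action",
    "conformity_assessment_date": "YYYY-MM-DD",  # filled per release
}

for field, value in ai_system_record.items():
    print(f"{field}: {value}")
```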

Intenseye and the EU AI Act: Ensuring responsible, safe, and ethical AI

The EU AI Act’s focus on risk categorization highlights the importance of thoughtful AI design, especially in areas where safety and privacy are paramount. Intenseye’s workplace safety solution aligns with the Act’s regulatory intent, designed to enhance, not undermine, worker well-being. Here’s how:

1. Clear ethical boundaries: Intenseye’s safety AI solution is not classified as an unacceptable risk because it avoids real-time biometric identification, social scoring, and manipulative practices. Instead, our solution enhances workplace safety by identifying unsafe acts and conditions, empowering EHS teams to take proactive measures to prevent accidents.

2. Proactive safety support: Intenseye’s goal is risk reduction, aligning it with the core objectives of the EU AI Act. By integrating seamlessly with existing security systems, it offers real-time insights that enable swift preventive action, enhancing the safety and security of workplace environments.

3. Data privacy and security: Our safety AI software is built with data privacy and security at its core. Video data is encrypted both in transit and at rest, with faces blurred to protect individual privacy. Data not indicative of safety risks is deleted immediately, while evidence-related media follows a strict, limited retention schedule. These practices not only ensure safety compliance but also demonstrate Intenseye’s commitment to responsible data management (a minimal sketch of such a retention check follows this list).

4. Transparency and user control: Intenseye promotes transparency and control, which are essential for responsible AI. Users are fully informed of how the AI system functions and its scope, reinforcing our commitment to transparency and empowering users to take full ownership of their safety technology.

5. Commitment to continuous improvement: As regulations evolve, so do our practices. Intenseye is dedicated to maintaining the highest standards of safety and privacy, continually refining our technology and ensuring ongoing compliance with regulatory requirements.
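As an illustration of point 3 above, a retention policy like the one described can be enforced with a simple age check against per-category retention windows. The sketch below is hypothetical: the category names and windows are invented for the example and are not Intenseye’s actual schedule.

```python
# Minimal sketch of a retention policy like the one described above.
# The retention windows below are invented for illustration; they are
# not Intenseye's actual schedule.

from datetime import datetime, timedelta, timezone

RETENTION = {
    "no_safety_finding": timedelta(0),        # delete immediately
    "safety_evidence": timedelta(days=30),    # illustrative window
}


def should_delete(label: str, captured_at: datetime) -> bool:
    """Return True once a clip has outlived its retention window."""
    age = datetime.now(timezone.utc) - captured_at
    return age >= RETENTION[label]


clip_time = datetime.now(timezone.utc) - timedelta(days=2)
print(should_delete("no_safety_finding", clip_time))  # True: purge now
print(should_delete("safety_evidence", clip_time))    # False: keep for now
```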

Leading the way in workplace safety with responsible AI

The EU AI Act represents a groundbreaking regulatory approach to AI, setting global standards for ethical and accountable AI deployment.

Through its risk-based classification, the EU AI Act provides a structured framework that allows companies to innovate within well-defined ethical boundaries, ensuring AI serves as a tool for good while safeguarding fundamental rights.

At Intenseye, we see the EU AI Act not just as a regulatory mandate but also as an affirmation of our commitment to responsible AI. By aligning our technology with the EU AI Act’s principles of transparency, data privacy, and human oversight, we are able to meet and exceed industry standards.

Our proactive approach to workplace safety exemplifies how AI can be both transformative and ethically sound, and we are proud to be a leader in this evolving field.

As AI technology continues to grow, Intenseye remains dedicated to pioneering AI solutions that prioritize safety, uphold rights, and strengthen public trust.

The EU AI Act provides a valuable framework, and we are fully committed to advancing a responsible, future-oriented vision for AI in workplace safety.