
Inside the EU AI Act and its implications for EHS

May 14, 2024

The European Parliament's approval of the Artificial Intelligence Act on March 13, 2024, marked an unprecedented milestone in AI regulation. The legislation sets forth new rules aimed at reducing risk while supporting innovation in AI technologies, and it holds considerable implications for workplace safety across industries.

Let’s take a closer look at key elements of the EU AI Act, examine Intenseye’s perspective, and assess what this legislation could mean for the future of environmental health and safety (EHS) initiatives in Europe and beyond.

What is the EU AI Act?

The EU AI Act is the world's first major piece of legislation aimed at regulating artificial intelligence. It seeks to ensure safety, protect fundamental human rights, and foster innovation by introducing stringent rules for high-risk AI applications while promoting transparency, accountability, and ethical deployment of AI technologies. 

When will the AI Act come into force?

Although the Act has been approved by the European Parliament, it has not yet been published in the EU Official Journal. It will enter into force 20 days after its publication there. Its provisions on prohibited AI systems will become applicable after 6 months, those on general-purpose AI after 12 months, and the rest after 2 years.

Who will need to comply with the Act?

The Act applies to providers/developers, importers, distributors, and deployers of AI systems that are marketed, sold, and/or used within the EU. As an AI solution provider and developer with a robust operational footprint and install base in the EU, Intenseye will need to demonstrate compliance with this legislation within the deadlines it sets out once the Act enters into force.

The same goes for our dozens of customers located in the EU. However, some cases fall outside the scope of the Act, such as AI developed and put into service exclusively for military purposes or for scientific research and development, as well as research, testing, and development activities carried out before an AI system is placed on the market or put into service (real-world testing excluded).

Key elements of the EU AI Act

The EU AI Act is extensive, comprising 12 core sections with dozens of articles and annexes spanning hundreds of pages of text. Intenseye has identified the following elements as among the most relevant to workplace safety and the broader operational landscape:

The Act classifies AI systems by defined levels of risk 

Harm reduction is the overarching objective of this legislation, which sets forth criteria that classify different types and uses of AI systems according to whether they pose unacceptable, high, limited, or minimal risks of harm. The riskier the classification, the more stringent the Act's requirements. (The tiers are summarized in a short sketch after this list.)

Systems with unacceptable risks present a clear threat to the fundamental rights and freedoms of people and are prohibited.

High-risk systems can negatively affect safety or fundamental rights. They include systems used in products covered by the EU's product safety legislation (such as toys, cars, and medical devices) and systems in specific areas that must be registered in an EU database (such as critical infrastructure, education and vocational training, law enforcement, and employment). Most of the Act addresses high-risk systems.

For limited-risk systems, providers must ensure that users are informed of the nature of the system, for example that they are interacting with an AI or that the content they see was generated by AI.

Minimal-risk AI systems face no additional restrictions.
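
For a quick mental model, the four tiers and the broad treatment each receives can be condensed into a short sketch. This is an illustrative simplification of the descriptions above, not legal guidance:

```python
from enum import Enum

# Illustrative simplification of the Act's four risk tiers, as summarized
# above. Tier names follow the Act; the one-line treatments are condensed
# from the descriptions in this section, not quoted from the legislation.

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "EU database registration plus extensive compliance obligations"
    LIMITED = "transparency duties, e.g., disclosing AI involvement"
    MINIMAL = "no additional restrictions"

for tier in RiskTier:
    print(f"{tier.name}: {tier.value}")
```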

Intenseye’s perspective

Harm reduction should always be central to both policy and business decisions, so the fact that the Act is rooted in this objective is encouraging. At the same time, it will be crucial for policymakers to continually reassess the Act’s definitions and categorization of risk levels to ensure they remain current. Static legislation is no match for the rapid pace at which AI innovations – and the risks they pose – are emerging and evolving.

The Act bans AI systems that pose “unacceptable risks”

The Act prohibits AI systems that employ manipulative, subliminal, or deceptive techniques to distort people's behavior by impairing their ability to make informed decisions; systems that perform social scoring; systems that exploit people's vulnerabilities due to age, disability, or socioeconomic situation in ways that may cause significant harm; systems that predict criminal behavior; systems for biometric categorization, biometric identification, or facial recognition; and systems that infer the emotions of a natural person in workplaces and educational institutions (except for medical or safety reasons). Limited exceptions are made for law enforcement and national security purposes.

Intenseye’s perspective

We have long recognized the critical risks posed by technologies that violate privacy or other personal rights, especially in the context of EHS, which is exactly why we've irreversibly embedded privacy-by-design principles into our workplace safety solutions and use techniques such as pseudonymization and 3D anonymization. We do, and will continue to, ensure that our product complies with applicable legislation and that the risks arising from its operation are minimal.
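
As a generic illustration of pseudonymization (not a description of Intenseye's actual implementation), a keyed hash can replace a direct identifier with a stable pseudonym that cannot be reversed without a separately held secret:

```python
import hashlib
import hmac
import os

# Generic sketch of pseudonymization, not Intenseye's implementation:
# a keyed hash (HMAC) replaces a direct identifier with a stable pseudonym.
# Without the separately held secret key, the mapping cannot be reversed.

SECRET_KEY = os.environ.get("PSEUDONYM_KEY", "example-secret").encode()

def pseudonymize(identifier: str) -> str:
    """Return a stable, non-reversible pseudonym for an identifier."""
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

# Same input always yields the same pseudonym, enabling analytics
# without exposing the underlying identity.
print(pseudonymize("camera-42"))
```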

The Act sets forth additional requirements for AI systems in “high-risk” sectors

By definition, high-risk systems pose a significant risk of harm to the health, safety, or fundamental rights of natural persons.

The Act sets forth criteria for determining whether a system should be considered high-risk; systems explicitly listed as high-risk in the Act and its annexes (such as those in the areas of biometrics, critical infrastructure, education and vocational training, employment, essential private and public services, law enforcement, migration, and the administration of justice and democratic processes) are deemed high-risk outright.

The Commission will provide guidelines for the identification of high-risk systems after the Act comes into force.

High-risk systems are subject to transparency, governance, risk management, documentation, accuracy, cybersecurity, oversight, quality management, and data management obligations under the Act. Compliance with these requirements ensures accountability and fosters trust in AI technologies. The obligations vary depending on the actor (deployer, provider, importer, distributor, etc.).
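
To make the breadth of these duties concrete, here is a minimal, hypothetical sketch of how a provider or deployer might track them. The obligation categories come from the paragraph above; the data structure and helper function are assumptions for illustration, not a prescribed compliance process:

```python
# Hypothetical compliance checklist: the obligation categories below are
# taken from the Act's named duty areas for high-risk systems; the structure
# and helper function are illustrative assumptions only.

OBLIGATION_AREAS = [
    "transparency", "governance", "risk management", "documentation",
    "accuracy", "cybersecurity", "oversight", "quality management",
    "data management",
]

def open_items(status: dict) -> list:
    """Return the obligation areas not yet marked as addressed."""
    return [area for area in OBLIGATION_AREAS if not status.get(area, False)]

# Example: a deployer has addressed two areas so far.
status = {"transparency": True, "risk management": True}
print(open_items(status))
```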

Intenseye’s perspective

As mentioned above, the AI Act mostly regulates high-risk systems, providing a comprehensive, detailed, and protective legal framework for their production, release, sale, distribution, and use, particularly to ensure transparency, quality of service, and security. The proposed framework protects the rights of natural persons without significantly undermining the development of the market.

As the Act comes into force and its implementation becomes clearer, Intenseye will take the necessary steps to ensure its services are compliant and its customers' legal rights are protected. In general, we regard the framework stipulated by the Act as a positive development for protecting the health, safety, and fundamental rights of natural persons. However, it will be a substantial undertaking for Intenseye, its customers, and business partners to ensure the Act is fully complied with.

The Act requires clear labeling of AI-generated content

Providers of AI systems will be required to label AI-generated content and to make such content detectable, in order to combat misinformation.

Intenseye’s perspective

Establishing trust is crucial for fostering innovation, productivity, and a positive culture of safety in the workplace. Distinguishing and clearly communicating AI-generated content is essential for mitigating misinformation and promoting ethical AI usage in the workplace and beyond. All alerts, visual analytics, and other outputs of Intenseye's AI-powered solutions have always been, and will continue to be, clearly marked as such.
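
As a hypothetical sketch of what such labeling can look like in practice (the field names and the SafetyAlert type below are illustrative assumptions, not Intenseye's actual schema), an output can carry an explicit, machine-readable "AI-generated" label alongside basic provenance metadata:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of attaching a machine-readable "AI-generated" label
# to a system output. Field names and the SafetyAlert type are assumptions
# for illustration, not Intenseye's actual schema.

@dataclass
class SafetyAlert:
    message: str
    ai_generated: bool = True           # explicit, machine-readable label
    model: str = "example-model-v1"     # placeholder model identifier
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

alert = SafetyAlert(message="Possible missing hard hat detected in Zone 3.")
# The label travels with the content and is surfaced to the end user.
print(f"[AI-generated] {alert.message}")
```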

The Act regulates general-purpose AI

General-purpose AI systems are not necessarily classified as high-risk, but they will have to comply with transparency and copyright requirements (such as disclosing that content was generated by AI and preventing the model from generating illegal content).

Intenseye’s perspective

General-purpose AI is on the rise and is used extensively across sectors, both as a standalone service and as a supportive tool integrated into other products. Considering the risks associated with general-purpose AI (including hallucinations, i.e., erroneous outputs), the transparency and copyright requirements set forth by the Act are certainly needed to mitigate those risks and protect users.

The Act creates processes for reporting AI misuse 

The Act establishes the European AI Office, which will monitor the implementation of the Act and will allow EU citizens to lodge complaints about AI systems' adverse effects. This initiative aims to empower individuals and drive accountability in AI usage across industries.

Intenseye’s perspective

Empowering citizens in the face of AI is a significant step. However, achieving this will require promoting societal AI literacy so that people can understand algorithmic harms and hold AI systems accountable.

The Act envisions a regulatory sandbox

The Act specifies that member states should establish at least one AI regulatory sandbox at the national level, which can also be established jointly with other member states. Additional sandboxes at the local, regional, or EU level can also be established.

Intenseye’s perspective

Establishing sandboxes for AI development will encourage innovation, entrepreneurship and competition while facilitating supervised development and ensuring that such systems are configured in compliance with the Act. 

Looking ahead

As businesses navigate the implications of the EU AI Act, it will remain essential to prioritize transparency, compliance, and ethical development in all uses and initiatives involving AI. Intenseye is proud to have long been a vocal proponent of these principles, embracing them within our workplace safety solutions, and we look forward to continuing to support our customers on their safety journeys as the regulatory landscape evolves.