Latest News Update: EU AI Act Coming into Action

The European AI Act, the world's first comprehensive AI regulation, ensures trustworthy AI in the EU, protecting fundamental rights and promoting innovation.

UBB Staff

Finally, it’s here: A risk-based regulation for the use of artificial intelligence in the European Union takes effect on Thursday, August 1, 2024.

Compliance deadlines are staggered for different AI developers and applications, starting today. Most provisions are expected to become fully applicable by mid-2026.

Nevertheless, the first deadline, which bans the use of AI in specific contexts such as remote biometric identification, falls just six months in.

Most AI applications are not considered high-risk under the European Union’s approach, so they won’t be covered by the regulation.



Details from the Press About the EU’s AI Act

Several AI applications are deemed high-risk, such as biometrics, facial recognition, and educational or employment applications. These systems must be registered in an EU database, and their developers must comply with quality and risk management requirements.

The third tier of “limited risk” applies to AI technologies, such as chatbots and tools that create deep fakes. The transparency requirements for these will ensure that users are not deceived.

Another important aspect of the law applies to developers of general-purpose AI (GPAI) models. GPAI developers face light transparency requirements, again in line with the EU’s risk-based approach. Only a fraction of models, the most powerful ones, are expected to be required to carry out risk assessment and mitigation.

How GPAI Developers Will Comply with the Act

As the Codes of Practice for GPAI developers have yet to be drafted, it is unclear exactly what will be required under the AI Act. The AI Office, which provides strategic oversight and supports the AI ecosystem, announced this week that it is commencing consultations and inviting participation in the rule-making process.


AI Act Evaluates Companies Based on Risk

AI systems used by EU companies are assigned rules based on four levels of risk, which also determine their compliance timelines.

Risks come in four categories: minimal risk, limited risk, high risk, and prohibited AI. The EU plans to ban certain practices completely by the end of February 2025, including AI that manipulates a user’s decision-making and the scraping of internet images to expand facial recognition databases.

In addition, AI systems designated high-risk, such as those that collect biometrics or are used for critical infrastructure or employment decisions, will be regulated most stringently.


Companies will also be required to demonstrate human oversight of these systems and document the datasets used for AI training.

Thomas Regnier, a spokesperson for the European Commission, explains that 85 percent of AI companies fall within the “minimal risk” category, which requires very little regulation.
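As a rough illustration (not part of the regulation itself), the tiered scheme described above can be sketched as a simple lookup table; the tier names and obligation summaries below are paraphrased from this article, not official legal text:

```python
# Illustrative sketch of the AI Act's four risk tiers as described in this
# article; the wording is a paraphrase, not the regulation's own language.
RISK_TIERS = {
    "prohibited": "banned outright (e.g. manipulative AI, facial-recognition scraping)",
    "high": "EU database registration, quality and risk management, human oversight",
    "limited": "transparency requirements (e.g. chatbots, deep-fake tools)",
    "minimal": "little to no additional regulation",
}

def obligations(tier: str) -> str:
    """Return this article's summary of obligations for a given risk tier."""
    return RISK_TIERS[tier]

print(obligations("high"))
```

The point of the lookup is simply that obligations attach to the tier, not to the technology: the same chatbot could move tiers depending on how it is used.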

What Does OpenAI Have to Say?

OpenAI, the company behind ChatGPT and the GPT large language models, wrote in its primer on the AI Act that it expects to work closely with the EU AI Office as the new legislation is implemented.

EU Artificial Intelligence Act 2024

Additionally, the company will prepare technical documentation and other guidance to facilitate the use of its GPAI models by downstream providers and implementers.

When determining how your organization will comply with the AI Act, OpenAI’s compliance guidance recommends beginning by classifying any AI systems subject to the Act: identify which GPAI and other AI systems you use, determine how they are classified, and consider what obligations follow from your use cases.

For any AI systems within the scope of your project, you should also decide whether you will provide or deploy them. These questions can be complicated, so you should seek legal advice.
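The inventory-and-classify workflow described above can be sketched as a toy triage script; the class, field names, and example systems here are hypothetical illustrations, not part of OpenAI’s guidance:

```python
# Hypothetical compliance triage following the steps in the text:
# inventory your AI systems, record tier and role, flag what needs review.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    risk_tier: str       # e.g. "minimal", "limited", "high", "prohibited"
    role: str            # "provider" or "deployer"
    is_gpai: bool = False

def triage(systems: list[AISystem]) -> list[str]:
    """Return the names of systems that warrant legal review first."""
    return [s.name for s in systems
            if s.risk_tier in ("high", "prohibited") or s.is_gpai]

inventory = [
    AISystem("support-chatbot", "limited", "deployer"),
    AISystem("cv-screening", "high", "deployer"),
    AISystem("gpt-api", "minimal", "deployer", is_gpai=True),
]
print(triage(inventory))  # ['cv-screening', 'gpt-api']
```

A real inventory would of course be a compliance exercise, not a script, but the ordering matters: classification comes before any question about obligations.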


Application of AI Rules and Enforcement

Member States must designate national competent authorities by 2 August 2025; these authorities will oversee the application of the rules for AI systems and conduct market surveillance. At the EU level, the Commission’s AI Office will implement the AI Act and enforce the rules for general-purpose AI models.

The rules will be implemented with the support of three advisory bodies. As the main body for EU-State cooperation regarding the AI Act, the European Artificial Intelligence Board ensures that the AI Act is applied uniformly across EU Member States. A scientific advisory panel of independent experts will provide technical advice and input into enforcement.

In particular, the panel can issue alerts about general-purpose AI models and their risks. The AI Office may also receive advice from an advisory forum comprising a range of stakeholders.

Companies that fail to follow the rules face fines: up to 7% of global annual turnover for deploying a prohibited AI application, up to 3% for breaching other requirements, and up to 1.5% for supplying inaccurate information.
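The penalty percentages above translate into a simple upper-bound calculation; the rates mirror this article, while the function name and the example turnover figure are purely illustrative:

```python
# Illustrative upper bounds on AI Act fines, expressed as a share of
# global annual turnover, using the percentages cited in the article.
FINE_RATES = {
    "prohibited_use": 0.07,    # up to 7% for prohibited AI applications
    "other_breach": 0.03,      # up to 3% for other requirements
    "inaccurate_info": 0.015,  # up to 1.5% for inaccurate information
}

def max_fine(turnover_eur: float, violation: str) -> float:
    """Upper bound on the fine for a given violation type."""
    return turnover_eur * FINE_RATES[violation]

# A company with EUR 2 billion global annual turnover facing the top rate:
print(max_fine(2_000_000_000, "prohibited_use"))  # roughly EUR 140 million
```

Note that the article only cites percentages; the regulation’s actual penalty provisions may also involve fixed euro ceilings, which this sketch does not model.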

Next Steps in EU’s AI Act

The AI Act’s rules become fully applicable on 2 August 2026. However, prohibitions on AI systems that present an unacceptable risk take effect after six months, and the rules for general-purpose AI models after twelve months.

The Commission has launched the AI Pact to ensure a smooth transition into full implementation. A key goal of this initiative is to encourage AI developers to adopt the AI Act’s key obligations before the deadline.

Besides developing guidelines for implementing the AI Act, the Commission is also developing co-regulatory instruments such as standards.

The Commission has formally invited expressions of interest in helping develop the first general-purpose AI Code of Practice, followed by a multi-stakeholder consultation in which all stakeholders were invited to submit input.
