EU Assumes Leadership in Artificial Intelligence

Lilu Anderson

EU Member States Reach Agreement on Landmark Artificial Intelligence Act

Representatives from EU Member States have reached an agreement on the Artificial Intelligence Act (AI Act), setting the stage for its entry into force. The legislation establishes a comprehensive framework for the regulation of artificial intelligence across the EU, with a focus on ensuring safety, transparency, and accountability in AI technologies.

Toomas Seppel, an attorney-at-law at Hedman Law Firm, describes the AI Act as groundbreaking legislation, the first of its kind in the world, regulating the development, placement on the market, supply, and use of AI systems.

What Does the EU AI Act Entail?

The AI Act classifies AI systems based on their level of risk, from minimal to unacceptable. High-risk AI applications, such as those used in healthcare, policing, and employment, will be subject to strict requirements, including transparency, data governance, and human oversight standards.

The Act also introduces the concept of general-purpose artificial intelligence (GPAI) and requires GPAI systems to comply with transparency obligations. More powerful GPAI models that may pose systemic risks, such as those developed by OpenAI, DeepMind, and IBM, will face additional obligations.

Breaches of the AI Act will carry fines of up to €7.5 million or 1.5% of a company’s annual turnover, and up to €35 million or 7% of global annual turnover for the most serious breaches by large global companies.
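To make the arithmetic concrete, here is a minimal sketch of how the applicable cap might be computed. It assumes the higher of the fixed amount and the turnover percentage applies, which is typical of EU enforcement regimes but not stated in the article; the function name and the two-tier flag are illustrative only.

```python
def max_fine(annual_turnover_eur: float, severe: bool = False) -> float:
    """Illustrative cap on an AI Act fine.

    Assumes the higher of a fixed amount and a share of annual turnover
    applies (an assumption based on typical EU enforcement regimes, not
    stated in the article). Figures are those quoted in the article.
    """
    fixed, pct = (35_000_000, 0.07) if severe else (7_500_000, 0.015)
    return max(fixed, pct * annual_turnover_eur)

# A company with €2 billion in global turnover facing the top tier:
print(f"€{max_fine(2_000_000_000, severe=True):,.0f}")  # €140,000,000
```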

Enhancing Transparency and Accountability

The AI Act aims to enhance transparency and accountability for end-users. AI systems interacting with humans, such as border control identity verification and chatbots in customer service, must disclose that they are machines. Additionally, users of AI-generated “deepfakes” must indicate that the content was generated by AI.

Beyond minimal-risk systems, the Act distinguishes three further categories: limited risk, high risk, and prohibited (unacceptable) risk, each carrying different obligations and prohibitions. High-risk AI systems, used in critical infrastructure, medical devices, law enforcement, and the administration of justice, will be subject to extensive obligations and human oversight.
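To summarize the tiering described above, here is a minimal illustrative sketch; the enum and the obligation mapping are paraphrases of the article’s description, not the Act’s own terminology.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal risk"          # largely unregulated
    LIMITED = "limited risk"          # transparency duties, e.g. chatbots must self-identify
    HIGH = "high risk"                # critical infrastructure, medical devices, law enforcement, justice
    PROHIBITED = "unacceptable risk"  # development and use banned in the EU

# Obligations per tier, as described in the article (illustrative mapping):
OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["disclose AI interaction", "label AI-generated content"],
    RiskTier.HIGH: ["transparency", "data governance", "human oversight"],
    RiskTier.PROHIBITED: ["development and use banned"],
}
```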

Prohibited AI Systems

AI systems that pose a threat to fundamental human rights will be categorized as prohibited, and their development and use will be banned in the EU. Examples include biometric categorization systems using sensitive data, the untargeted scraping of facial images to build facial recognition databases, emotion recognition in the workplace and educational institutions, and systems that manipulate people’s behavior or exploit their vulnerabilities.

Estonia’s AI Strategy and Future Adoption

The AI Act now awaits further clarification and adoption by EU lawmakers. Estonia is also developing its own AI strategy based on the AI Act to provide guidance, set standards, and ensure the trustworthiness and risk mitigation of AI development and use.

The AI Act is expected to be approved at committee level in the next two weeks and will then proceed to a plenary vote in the European Parliament in April.

Further information on the AI Act can be found on the website of Hedman Law Firm, a law firm specializing in commercial and corporate law, investment fundraising, technology law, and intellectual property matters.

Analyst comment

Positive news: EU Member States Reach Agreement on Landmark Artificial Intelligence Act.

From an analyst’s perspective, this development indicates a strong commitment to regulating AI and ensuring safety, transparency, and accountability in AI technologies. The AI Act will classify AI systems by risk level, impose strict requirements on high-risk applications, enhance transparency and accountability for end-users, and prohibit AI systems that threaten fundamental human rights. This framework will provide guidance, set standards, and mitigate risks in AI development and use. The market can expect increased regulation and compliance measures in the AI sector.

Lilu Anderson is a technology writer and analyst with over 12 years of experience in the tech industry. A graduate of Stanford University with a degree in Computer Science, Lilu specializes in emerging technologies, software development, and cybersecurity. Her work has been published in renowned tech publications such as Wired, TechCrunch, and Ars Technica. Lilu’s articles are known for their detailed research, clear articulation, and insightful analysis, making them valuable to readers seeking reliable and up-to-date information on technology trends. She actively stays abreast of the latest advancements and regularly participates in industry conferences and tech meetups. With a strong reputation for expertise, authoritativeness, and trustworthiness, Lilu Anderson continues to deliver high-quality content that helps readers understand and navigate the fast-paced world of technology.