European Politicians Approve New Rules to Regulate Artificial Intelligence
In a significant development, European politicians in two key committees have approved new rules to regulate artificial intelligence (AI), setting the stage for the world’s first legislation on the technology. The provisional legislation, endorsed by the European Parliament’s committees on civil liberties and consumer protection, aims to ensure that AI systems respect “fundamental rights”. The legislation is scheduled for a vote in the full assembly in April.
The proposed AI Act seeks to establish guardrails for the use of AI in various industries, including banking, automotive, electronics, and aviation, as well as for security and police purposes. Furthermore, the law aims to not only regulate the technology but also foster innovation, positioning Europe as a leader in the AI field.
The European Commission proposed the legislation in 2021, but approval was delayed by disagreements over how to regulate language models that scrape online data and over the use of AI by police and intelligence services. The rules will also cover foundation models, the generative AI systems such as those built by Microsoft-backed OpenAI, which are trained on large datasets and can learn from new data to perform a variety of tasks.
Eva Maydell, the MEP for Tech, Innovation, and Industry, expressed pride in the approval, stating that it encourages social trust in AI while allowing companies the freedom to create and innovate. Deirdre Clune, the MEP for Ireland South, added that this approval brings Europe a step closer to comprehensive rules on AI.
European Union countries have already endorsed the AI Act, backing tighter controls on governments’ use of AI for biometric surveillance and broader regulation of AI systems. Concessions were made to ease the administrative burden on high-risk AI systems and to protect business secrets. The act imposes transparency obligations on foundation models and general-purpose AI systems before they are placed on the market, including compliance with EU copyright law and the publication of detailed summaries of the content used for training.
Though tech companies have expressed reservations about the requirements and their potential impact on innovation, the law mandates disclosure of the data used to train AI systems and testing of products, particularly in high-risk applications such as autonomous vehicles and healthcare. The legislation also prohibits the untargeted scraping of facial images to build facial recognition databases, while permitting narrow law enforcement use of biometric identification to investigate terrorism and serious crimes.
The approval of these regulations marks a significant step toward harnessing the benefits of AI while addressing concerns such as disinformation, job displacement, and copyright infringement. The legislation is expected to serve as a global benchmark for governments seeking to regulate AI.
Analyst comment
Positive news. Short-term impact: Increased regulatory compliance costs for businesses, particularly in industries like banking, automotive, electronics, and aviation. Long-term impact: Boost to innovation and positioning Europe as a leader in the AI field, fostering social trust in AI, and addressing concerns related to disinformation, job displacement, and copyright infringement.