European lawmakers pass AI Act, world’s first comprehensive AI law

European lawmakers approved the world’s most comprehensive legislation yet on artificial intelligence, setting out sweeping rules for developers of AI systems and new restrictions on how the technology can be used.

The European Parliament on Wednesday voted to give final approval to the law after reaching a political agreement last December with European Union member states. The rules, which are set to take effect gradually over several years, ban certain AI uses, introduce new transparency rules and require risk assessments for AI systems that are deemed high-risk.

The law comes amid a global debate about the future of AI and its potential risks and benefits as the technology is increasingly adopted by companies and consumers. Elon Musk recently sued OpenAI and its chief executive, Sam Altman, alleging the company broke its founding agreement by prioritizing profit over AI's benefits for humanity. Altman has said that AI should be developed with great caution and that it offers immense commercial possibilities.

The new legislation applies to AI products in the EU market, regardless of where they were developed. It is backed by fines of up to 7% of a company’s worldwide revenue.

The AI Act is "the first regulation in the world that is putting a clear path towards a safe and human-centric development of AI," said Brando Benifei, an EU lawmaker from Italy who helped lead negotiations on the law.

The law still needs final approval from EU member states, but that process is expected to be a formality since they already gave the legislation their political endorsement.

While the law only applies in the EU, it is expected to have a global impact because large AI companies are unlikely to want to forgo access to the bloc, which has a population of about 448 million. Other jurisdictions could also use the new law as a model for their AI regulations, contributing to a ripple effect. 

"Anybody that intends to produce or use an AI tool will have to go through that rulebook," said Guillaume Couneson, a partner at law firm Linklaters.

Several jurisdictions worldwide have introduced or are considering new rules for AI. The Biden administration last year signed an executive order requiring major AI companies to notify the government when developing a model that could pose serious risks. Chinese regulators have set out rules focused on generative AI.

The EU’s AI Act is the latest example of the bloc’s role as an influential global rule maker. A separate competition law that came into effect for certain tech giants earlier this month is pushing Apple to change its App Store policies and Alphabet’s Google to modify how search results appear for users in the bloc. Another law focused on online content is pushing large social-media companies to report on what they are doing to address illegal content and disinformation on their platforms.

The AI Act won’t take effect right away. The prohibitions in the legislation, which include bans on the use of emotion-recognition AI in schools and workplaces and on untargeted scraping of images for facial-recognition databases, are expected to become enforceable later this year. Other obligations are expected to kick in gradually between next year and 2027.

The new rules will eventually require providers of general-purpose AI models, which are trained on vast data sets and underpin more specialized AI applications, to have up-to-date technical documentation on their models. They will also have to publish a summary of the content they used to train the model.

Makers of the most powerful AI models, deemed to have what the EU calls a "systemic risk," will be required to put those models through state-of-the-art safety evaluations and to notify regulators of serious incidents involving their models. They will also have to implement mitigations for potential risks and cybersecurity protections, according to the law.

The bloc’s initial proposal for the legislation was published in 2021, before the widespread popularity of OpenAI’s ChatGPT and other AI-powered chatbots, and provisions on general-purpose AI were added during the legislative process.

Industry groups and some European governments pushed back against the introduction of blanket rules for general-purpose AI, saying legislators should focus on risky uses of the technology—rather than the models that underpin its use.

France, home to Mistral AI, and Germany sought to water down some of the legislation's proposals. Mistral Chief Executive Arthur Mensch said recently that, after changes in the final negotiations lightened some obligations, the AI Act would be a manageable burden for his company, though he believes the law should have remained focused on how AI is used rather than on the underlying technology.

Lawmakers said the AI Act was among the most heavily lobbied pieces of legislation the bloc has dealt with in recent years.

Corporate watchdogs and some lawmakers said they wanted the legislation to include tougher requirements—such as the rules for safety evaluations and risk mitigation—for all general-purpose AI models and not just the most powerful models.

Lobby group BusinessEurope said Wednesday that it supports the law’s risk-based approach to regulating AI, although it said there are questions about how the law will be interpreted in practice. Digital-rights group Access Now said the final text of the legislation was full of loopholes and failed to adequately protect people from some of the most dangerous uses of AI.

Another element of the law requires clear labeling of so-called deepfakes: images, audio, or video that have been generated or manipulated by AI and might otherwise appear authentic. Providers of AI systems deemed high-risk by legislators, such as those used for immigration or critical infrastructure, must conduct risk assessments and ensure they are using high-quality data, among other requirements.

Lawmakers in Europe said they sought to make the legislation flexible so that it can adapt to rapidly evolving technology. For example, one part of the law says the European Commission—the EU’s executive arm—can update technical elements of its definition of general-purpose AI models based on market and technological developments.
