LONDON (AP) – European Union lawmakers are expected to give final approval to the 27-nation bloc’s artificial intelligence law on Wednesday, putting the world-leading rules on track to take effect later this year.
Members of the European Parliament are poised to vote in favor of the Artificial Intelligence Act, five years after it was first proposed. The AI Act is expected to serve as a global guidepost for other governments grappling with how to regulate the rapidly evolving technology.
“The AI Act moves the future of AI in a human-centric direction, where humans control the technology and the technology helps us harness new discoveries, economic growth and social progress, and unlock human potential,” said Dragos Tudorache, a Romanian lawmaker who co-led the parliament’s negotiations on the bill.
Big tech companies have generally supported the need to regulate AI while lobbying to ensure any rules work in their favor. OpenAI CEO Sam Altman caused a small stir last year by suggesting the ChatGPT maker could pull out of Europe if it could not comply with the AI Act, before backtracking to say there were no plans to leave.
Here’s a look at the world’s first comprehensive set of AI rules.
How does the AI Act work?
Like many EU regulations, the AI Act was initially intended to act as a consumer safety law, taking a “risk-based approach” to products and services that use artificial intelligence.
The riskier an AI application, the more scrutiny it faces. Low-risk systems, such as content recommendation engines and spam filters, face only light-touch rules, such as disclosing that they are powered by AI. The EU expects most AI systems to fall into this category.
High-risk uses of AI, such as in medical devices or critical infrastructure like water and electrical networks, face tougher requirements, including using high-quality data and providing clear information to users.
Some uses of AI are deemed to pose an unacceptable risk and are banned, including social scoring systems that govern people’s behavior and certain kinds of predictive policing and emotion recognition systems in schools and workplaces.
Other prohibited uses include police scanning faces in public places with AI-powered remote “biometric identification” systems, except in cases of serious crimes such as kidnapping or terrorism.
What about generative AI?
Early drafts of the law focused on AI systems that perform narrowly defined tasks, such as scanning resumes and job applications. The surprising rise of general-purpose AI models, such as OpenAI’s ChatGPT, sent EU policymakers scrambling to keep up.
They added provisions for so-called generative AI models, the technology underpinning AI chatbots that can produce unique and seemingly lifelike responses, images and more.
Developers of general-purpose AI models, from European startups to OpenAI and Google, will have to provide a detailed summary of the text, images, video and other internet data used to train their systems, and will have to comply with EU copyright law.
Deepfake photos, videos or audio of existing people, places or events must be labeled as artificially manipulated.
Extra scrutiny falls on the biggest and most powerful AI models that pose “systemic risks,” which include OpenAI’s GPT-4, its most advanced system, and Google’s Gemini.
The EU said it was concerned that these powerful AI systems could “cause serious accidents or be exploited for widespread cyber-attacks.” It also worries that generative AI could spread “harmful bias” across many applications, affecting many people.
Companies providing these systems will have to assess and mitigate the risks; report serious incidents, such as malfunctions that cause someone’s death or serious harm to health or property; put cybersecurity measures in place; and disclose how much energy their models use.
Do European rules impact other parts of the world?
Brussels first proposed AI regulations in 2019, taking on its familiar global role of tightening oversight of emerging industries while other governments scramble to keep up.
In the United States, President Joe Biden signed a sweeping executive order on AI in October that is expected to be backed up by legislation and global agreements. Meanwhile, lawmakers in at least seven U.S. states are working on their own AI legislation.
Chinese President Xi Jinping has proposed a Global AI Governance Initiative, and authorities have issued “interim measures” for managing generative AI, which apply to text, images, audio, video and other content generated for people inside China.
Other countries, from Brazil to Japan, as well as global bodies such as the United Nations and the Group of Seven industrialized nations, are also moving to draw up AI guardrails.
What happens next?
The AI Act is expected to officially become law by May or June, after a few final formalities, including approval by EU member countries. Its provisions will start taking effect in stages, with countries required to ban prohibited AI systems six months after the rules enter into force.
Rules for general-purpose AI systems such as chatbots will start applying a year after the law takes effect. By mid-2026, the complete set of regulations, including requirements for high-risk systems, will be in force.
When it comes to enforcement, each EU country will set up its own AI watchdog, where citizens can file a complaint if they believe they are the victim of a violation of the rules. Meanwhile, Brussels will create an AI Office tasked with enforcing and supervising the law for general-purpose AI systems.
Violations of the AI Act could draw fines of up to 35 million euros ($38 million), or 7% of a company’s global revenue.