These reports come after the Financial Stability Oversight Council in Washington said AI could cause “direct consumer harm,” and just weeks after Securities and Exchange Commission (SEC) Chairman Gary Gensler publicly warned of threats to financial stability if large numbers of companies rely on similar AI models to make buy and sell decisions.
“AI could play a central role in the post-mortem reports of a future financial crisis,” he said in a December speech.
AI is one of the central themes at the World Economic Forum’s annual conference, which brings together CEOs, politicians and billionaires at a top Swiss ski resort, and it features in many of the panels and events.
In a report released last week, the forum said a survey of about 1,500 policymakers and industry leaders found that fake news and propaganda written and spread by AI chatbots pose the biggest short-term risk to the global economy. With about half the world’s population voting in elections this year in countries including the United States, Mexico, Indonesia and Pakistan, disinformation researchers are concerned that AI will make it easier to spread falsehoods and deepen social conflict.
Chinese propaganda actors are already trying to use generative AI to influence Taiwanese politics, The Washington Post reported on Friday. AI-generated content is appearing in fake news videos in Taiwan, government officials say.
The forum’s report came a day after FINRA said in its annual report that while AI offers potential cost and efficiency gains, it raises “concerns about accuracy, privacy, bias, and intellectual property.”
And in December, the Treasury Department’s FSOC, which monitors risky behavior in the financial system, said undetected design flaws in AI could produce biased decisions, such as denying loans to qualified applicants.
Generative AI trained on large datasets can also draw completely wrong conclusions that sound convincing, the council added. The FSOC, chaired by Treasury Secretary Janet L. Yellen, recommended that regulators and the financial industry pay increased attention to tracking potential risks arising from AI development.
The SEC’s Gensler has been one of the most outspoken regulators on AI. His agency requested information on the use of AI from several investment advisers in December, said Karen Barr, president of the Investment Advisers Association, an industry group. The request for information, known as a “sweep,” came five months after the commission proposed new rules to prevent conflicts of interest between advisers and their clients arising from the use of a type of AI known as predictive data analytics.
“Conflicts of interest could harm investors in more pronounced ways and on a broader scale than in the past,” the SEC said in its proposed rulemaking.
Barr said investment advisers are already required under existing regulations to put their clients’ interests first and avoid such conflicts. Her group is calling on the SEC to withdraw the proposed rule and base any future action on what it learns from the sweep. “The SEC’s rulemaking misses the point,” she said.
Financial services companies see opportunities to improve customer communications, back-office operations, and portfolio management. But AI also brings greater risks. Algorithms that make financial decisions could produce biased lending decisions that deny minorities access to credit, or even trigger a global market meltdown if dozens of institutions relying on the same AI system sell at the same time.
“This is unlike anything we’ve seen before. AI has the ability to do things without human intervention,” said Jeremiah Williams, a former SEC official who is now an attorney at Ropes & Gray in Washington.
The Supreme Court also believes there is cause for concern.
“AI obviously has great potential to dramatically increase access to key information for lawyers and non-lawyers alike. But just as obviously, it risks invading privacy interests and dehumanizing the law,” Chief Justice John G. Roberts Jr. wrote in his year-end report on the U.S. court system.
Hilary Allen, associate dean at American University’s Washington College of Law, said humans may be trusting AI too much with their finances, just as drivers who blindly follow GPS directions end up at dead ends. “There’s a mystique that AI is smarter than us,” she said.
AI may be no better than humans at spotting unlikely hazards, or “tail risks,” Allen said. Before 2008, few on Wall Street foresaw the end of the housing bubble, in part because Wall Street’s models assumed such an across-the-board decline could not occur: home prices had never before fallen nationwide. Even the best AI system, Allen said, is only as good as the data it is built on.
As AI becomes more complex and sophisticated, some experts worry about “black box” automation that cannot explain how it reached its decisions, leaving humans unable to judge whether the system is working properly. Richard Berner, a clinical professor of finance at New York University’s Stern School of Business, said poorly designed and managed systems could undermine the trust between buyer and seller that any financial transaction requires.
Berner, the first director of the Treasury Department’s Office of Financial Research, added: “No one has ever run a stress scenario where the machines go out of control.”
Discussions about the potential dangers of AI are not new in Silicon Valley. But they have intensified in the months since OpenAI released ChatGPT in late 2022, showing the world the capabilities of the next generation of the technology.
Amid the artificial intelligence boom fueling the tech industry, some executives have warned that AI’s potential to cause social chaos rivals nuclear weapons or deadly pandemics. Many researchers say these concerns distract from AI’s real-world harms. Other experts and entrepreneurs say fears about the technology are overblown and risk pushing regulators to block innovations that could help people and increase tech companies’ profits.
Over the last year, politicians and policymakers around the world have also worked to understand how AI fits into society. Congress held multiple hearings. President Biden issued an executive order calling AI “the most important technology of our time.” The UK convened a global AI summit, where Prime Minister Rishi Sunak warned that “humanity could completely lose control of AI.” The concerns include the risk that “generative” AI, which can create text, video, images, and audio, could be misused to spread misinformation, eliminate jobs, or even help create dangerous biological weapons.
Technology critics say some of the leaders sounding the alarm, such as OpenAI CEO Sam Altman, are nonetheless pressing ahead with developing and commercializing the technology. Smaller companies have accused AI giants OpenAI, Google and Microsoft of overhyping AI risks to invite regulation that would make it harder for new entrants to compete.
“The whole point of hype is that there is a disconnect between what is said and what is actually possible,” said Margaret Mitchell, chief ethics scientist at Hugging Face, a New York-based open-source AI startup. “We went through a honeymoon period where generative AI was very new to the general public and people could only see the positives. Now that people are actually using it, they can see all of its problems.”