The risks posed by artificial intelligence chatbots are being formally investigated by US regulators for the first time, after the Federal Trade Commission launched a wide-ranging probe into ChatGPT maker OpenAI.
In a letter sent to the Microsoft-backed company, the FTC said it would examine whether OpenAI has engaged in "unfair or deceptive" privacy and data security practices, and whether people have been harmed by false information about themselves generated by its AI chatbots.
Generative AI products are in the crosshairs of regulators around the world. AI experts and ethicists have warned about the vast amounts of personal data consumed by the technology, as well as its potentially harmful output, which ranges from misinformation to sexist and racist comments.
In May, the FTC issued a warning to the industry, saying it was "focusing intensely on how companies may choose to use AI technology, including new generative AI tools, in ways that can have actual and substantial impact on consumers".
In its letter, the US regulator asked OpenAI to share internal material, including details of how the group retains user information, and the steps it has taken to address the risk that its models produce statements that are "false, misleading or disparaging".
The FTC declined to comment on the letter, which was first reported by The Washington Post. Writing on Twitter late on Thursday, OpenAI chief executive Sam Altman called it "very disappointing that the FTC's request started with a leak", adding that it "doesn't help build trust". He said: "It's super important to us that our technology is safe and pro-consumer, and we are confident we follow the law. Of course we will work with the FTC."
FTC chair Lina Khan testified before the House judiciary committee on Thursday morning, facing strong criticism from Republicans over her tough enforcement stance.
Asked about the investigation at the hearing, Khan declined to comment, but said regulators' broader concerns involved ChatGPT and other AI services "being fed a huge trove of data" while there were "no checks on what type of data is being fed into these companies".
She added: "We've heard about reports where people's sensitive information is showing up in response to an inquiry from somebody else. It's the type of fraud and deception that we're concerned about."
Khan was questioned by lawmakers about her mixed record in court this week, after the FTC suffered a crushing defeat in its attempt to block Microsoft's $75bn acquisition of Activision Blizzard. The FTC appealed against the decision on Thursday.
Meanwhile, the committee's chair, Republican Jim Jordan, accused Khan of "harassing" Twitter after the company alleged in court filings that the FTC had engaged in "irregular and improper" conduct when implementing a consent order it imposed last year.
Khan declined to comment on Twitter's filing, but said the FTC's only concern was "that they are following the law".
Experts have been concerned by the huge trove of data consumed by the language models behind ChatGPT. OpenAI surpassed 100mn monthly active users within two months of the product's launch. Microsoft's new Bing search engine, also powered by OpenAI technology, was used by more than 1mn people in 169 countries within two weeks of its release in January.
Users have reported that ChatGPT fabricates names, dates and facts, as well as fake links to news websites and references to academic papers, a problem known in the industry as "hallucination".
The FTC's probe digs into technical details, including how ChatGPT was designed, the company's efforts to fix hallucinations, and the oversight of human reviewers, insofar as these directly affect consumers. It also asks for information on consumer complaints and on the company's efforts to assess consumers' understanding of the chatbot's accuracy and reliability.
Repeating an earlier acknowledgment that ChatGPT can make errors, Altman tweeted that the company's capped-profit structure meant it was "not incentivised to make unlimited returns". He said the chatbot was built on "years of safety research", adding: "We protect user privacy and design our systems to learn about the world, not private individuals."