The maker of ChatGPT, OpenAI, is being investigated by the Federal Trade Commission for allegedly breaking consumer protection laws. The FTC is requesting extensive records about how OpenAI handles user data, the possibility that it will provide users with inaccurate information, and its “risks of harm to consumers, including reputational harm.”
The investigation poses a threat to the company’s relationship with legislators, many of whom have been impressed by the technology and by OpenAI’s CEO, Sam Altman. It could also draw more attention to OpenAI’s place in the broader debate over the potential dangers of generative AI to democracy, national security, and the economy.
The FTC sent OpenAI a 20-page investigative demand this week, asking it to address dozens of inquiries ranging from how it acquires the data used to train its large language models to explanations of ChatGPT’s “capacity to generate statements about real individuals that are false, misleading, or disparaging.”
The document’s authenticity was confirmed to another news outlet by a source familiar with the matter after it was first reported by The Washington Post last week. OpenAI did not immediately respond to a request for comment, and the FTC declined to comment.
The information request, which functions much like an administrative subpoena, also asks OpenAI to provide details about any public complaints it has received, a list of lawsuits to which it is a party, and information about a data leak the company disclosed in March 2023, which it said briefly exposed users’ chat histories and payment information.
It requests detailed explanations of how OpenAI evaluates, adjusts, and manipulates its algorithms, particularly to generate different outputs or respond to risks. The company is also asked to describe any steps it has taken to address “hallucination,” an industry term for instances in which an AI model produces false or misleading information.
The FTC probe is the clearest instance of direct US government regulation of AI to date, as lawmakers in Congress scramble to catch up with a rapidly expanding sector ahead of a push this autumn to craft new legislation affecting it. The US has generally lagged behind other international policymakers in these efforts. Legislators in the European Union, for instance, are finalizing landmark legislation that bans the use of AI in predictive policing and imposes restrictions on high-risk use cases. The FTC launched its probe after issuing multiple warnings to companies about making exaggerated claims about AI or using the technology unfairly.
In blog posts and public statements, the agency has said that companies using AI will be held responsible for any unfair or deceptive practices involving the technology. As the country’s leading consumer protection watchdog, the FTC has the authority to pursue legal action over privacy violations, deceptive marketing, and other harms.
According to FTC Chair Lina Khan, the agency’s existing congressional mandate gives it more than enough authority to pursue harmful applications of AI. Khan told Congress in April, for instance, that while AI might “turbocharge” fraud and scams, the FTC already has a long history of taking legal action against con artists. “Although these tools are novel, they are not exempt from existing rules, and the F.T.C. will vigorously enforce the laws we are charged with administering, even in this new market,” Khan wrote in a New York Times op-ed the following month.
Several OpenAI critics had previously complained to the FTC, arguing that ChatGPT’s propensity for hallucinations, algorithmic bias, and privacy problems may violate US consumer protection law. OpenAI has been fairly candid about some of its products’ shortcomings. The white paper accompanying its most recent release, GPT-4, notes that the model may “produce content that is nonsensical or untrue in relation to certain sources.” OpenAI makes similar disclosures about the potential for technologies like GPT to discriminate against minorities and other vulnerable groups.