The Federal Trade Commission on Thursday launched an investigation into OpenAI’s business practices and its ChatGPT platform over privacy and consumer harm concerns. Many industry observers applaud the move but worry that regulatory efforts may be too weak or too late.
The Federal Trade Commission (FTC) on Thursday launched an investigation into OpenAI, the developer of the popular ChatGPT artificial intelligence (AI) platform, citing concerns about privacy violations, data collection practices and the publication of false information about individuals.
The FTC investigation seeks to determine whether OpenAI violated consumer protection laws.
The FTC informed OpenAI about the impending investigation in a 20-page letter to the company earlier this week, in what The New York Times described as “the most potent regulatory threat to date to OpenAI’s business in the United States.”
The letter asks OpenAI to answer a series of questions about its business and security practices and provide numerous documents and other internal company details. The company was given 14 days to respond.
Sam Altman, CEO of OpenAI, tweeted his disappointment at the news, but said his company would work with the agency.
it is very disappointing to see the FTC’s request start with a leak and does not help build trust.
that said, it’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law. of course we will work with the FTC.
— Sam Altman (@sama) July 13, 2023
According to the Washington Post, “If the FTC finds that a company violates consumer protection laws, it can levy fines or put a business under a consent decree, which can dictate how the company handles data.”
Eleanor Fox, LL.B., a law professor emeritus at New York University and antitrust expert, told The Defender the FTC’s action is a positive step:
