The GPT-4 model released by OpenAI allegedly breaches consumer protection laws, according to a US-based AI policy organization. The group has asked the US Federal Trade Commission (FTC) to promptly bar OpenAI from releasing any new GPT model until the system can be independently assessed. The Center for AI and Digital Policy (CAIDP) filed a complaint today claiming that OpenAI's text-generation tool is "biased, deceptive, and harmful to public safety" and that GPT-4 flouts consumer protection laws.
A few days ago, there were reports of an open letter requesting a pause in the development of AI systems such as GPT-4. Its signatories include AI experts, Tesla co-founder Elon Musk, and CAIDP President Marc Rotenberg. Like the letter, the complaint demands slower development of generative AI models and greater government regulation.
CAIDP flags the GPT-4 generative text model, released by OpenAI in mid-March, as posing several threats. These include GPT-4's ability to produce malicious code and its biased training data, which may entrench stereotypes or unfair racial and gender bias in areas such as hiring. The complaint also highlights significant privacy problems with OpenAI's product interface, such as a recently discovered bug that exposed ChatGPT users' chat histories and possibly payment details to other users.
OpenAI is aware of the dangers of GPT-4's AI-generated text
OpenAI has already voiced concern about the potential dangers of AI-generated text. But CAIDP argues that GPT-4 has gone too far in harming users and should prompt regulatory action. The complaint accuses OpenAI of violating Section 5 of the FTC Act, which prohibits unfair and deceptive trade practices. "OpenAI made GPT-4 available to the public for commercial purposes in full awareness of these risks," the complaint states.
In the complaint, CAIDP asks the FTC to halt any further commercial rollout of GPT models and to require independent assessments before any new models are released. It also calls for a publicly accessible reporting mechanism, similar to the system that lets consumers file fraud complaints. On top of the FTC's ongoing, though still largely informal, research into and evaluation of AI tools, the complaint asks the agency to clarify the rules governing generative AI systems.
At an event this week with the Department of Justice, FTC Chair Lina Khan said the agency would be on the lookout for signs that large tech firms are trying to drive out competition. The FTC has voiced its desire to regulate AI tools and warned that biased AI systems could trigger law enforcement action. Advocacy groups now have serious concerns about the potential downsides of AI-generated text; the benefits are well known, but the risks deserve to be the centre of focus now.