

OpenAI says it will make changes to the way it updates the AI models that power ChatGPT, following an incident that caused the platform to become overly sycophantic for many users.

Last weekend, after OpenAI rolled out a tweaked GPT-4o, the default model powering ChatGPT, users on social media noted that ChatGPT had begun responding in an overly validating and agreeable way. It quickly became a meme, with users posting screenshots of ChatGPT applauding all sorts of problematic, even dangerous, decisions and ideas.

In a post on X last Sunday, CEO Sam Altman acknowledged the problem and said that OpenAI would work on fixes “ASAP.” On Tuesday, Altman announced that the GPT-4o update was being rolled back and that OpenAI was working on “additional fixes” to the model’s personality.

The company published a postmortem on Tuesday, and in a blog post on Friday, OpenAI expanded on the specific adjustments it plans to make to its model deployment process.

OpenAI says it plans to introduce an opt-in “alpha phase” for some models, which would allow certain ChatGPT users to test them and give feedback prior to launch. The company also says it will include explanations of “known limitations” for future incremental updates to models in ChatGPT, and will adjust its safety review process to formally consider “model behavior issues” such as personality, deception, reliability, and hallucination (i.e., when a model makes things up) as “launch-blocking” concerns.

“Going forward, we’ll proactively communicate about the updates we’re making to the models in ChatGPT, whether ‘subtle’ or not,” OpenAI wrote in the blog post. “Even if these issues aren’t perfectly quantifiable today, we commit to blocking launches based on proxy measurements or qualitative signals, even when metrics like A/B testing look good.”

The pledged fixes come as more people turn to ChatGPT for advice. According to one recent survey by lawsuit financier Express Legal Funding, 60% of U.S. adults have used ChatGPT to seek counsel or information. That growing reliance on ChatGPT, combined with the platform’s enormous user base, raises the stakes when issues like extreme sycophancy emerge, not to mention hallucinations and other technical shortcomings.


As one mitigating step, OpenAI said earlier this week that it would experiment with ways to let users give “real-time feedback” to “directly influence their interactions” with ChatGPT. The company also said it would refine techniques to steer models away from sycophancy, potentially allow people to choose from multiple model personalities in ChatGPT, build additional safety guardrails, and expand its evaluations to help identify issues beyond sycophancy.

“One of the biggest lessons is fully recognizing how people have started to use ChatGPT for deeply personal advice, something we didn’t see as much even a year ago,” OpenAI continued in its blog post. “At the time, this wasn’t a primary focus, but as AI and society have co-evolved, it’s become clear that we need to treat this use case with great care. It’s now going to be a more meaningful part of our safety work.”