OpenAI says it ignored the concerns of its expert testers when it rolled out an update to its flagship ChatGPT artificial intelligence model that made it excessively agreeable.
The company released an update to its GPTโ4o model on April 25 that made it โnoticeably more sycophantic,โ which it then rolled back three days later due to safety concerns, OpenAI said in a May 2 postmortem blog post.
The ChatGPT maker said its new models undergo safety and behavior checks, and its โinternal experts spend significant time interacting with each new model before launch,โ meant to catch issues missed by other tests.
During the latest modelโs review process before it went public, OpenAI said that โsome expert testers had indicated that the modelโs behavior โfeltโ slightly offโ but decided to launch โdue to the positive signals from the users who tried out the model.โ
โUnfortunately, this was the wrong call,โ the company admitted. โThe qualitative assessments were hinting at something important, and we shouldโve paid closer attention. They were picking up on a blind spot in our other evals and metrics.โ
Broadly, text-based AI models are trained by being rewarded for giving responses that are accurate or rated highly by their trainers. Some rewards are given a heavier weighting, impacting how the model responds.
OpenAI said introducing a user feedback reward signal weakened the modelโs โprimary reward signal, which had been holding sycophancy in check,โ which tipped it toward being more obliging.
โUser feedback in particular can sometimes favor more agreeable responses, likely amplifying the shift we saw,โ it added.
OpenAI is now checking for suck up answers
After the updated AI model rolled out, ChatGPT users had complained online about its tendency to shower praise on any idea it was presented, no matter how bad, which led OpenAI to concede in an April 29 blog post that it โwas overly flattering or agreeable.โ
For example, one user told ChatGPT it wanted to start a business selling ice over the internet, which involved selling plain old water for customers to refreeze.

In its latest postmortem, it said such behavior from its AI could pose a risk, especially concerning issues such as mental health.
โPeople have started to use ChatGPT for deeply personal advice โ something we didnโt see as much even a year ago,โ OpenAI said. โAs AI and society have co-evolved, itโs become clear that we need to treat this use case with great care.โ
Related: Crypto users cool with AI dabbling with their portfolios: Surveyย
The company said it had discussed sycophancy risks โfor a while,โ but it hadnโt been explicitly flagged for internal testing, and it didnโt have specific ways to track sycophancy.
Now, it will look to add โsycophancy evaluationsโ by adjusting its safety review process to โformally consider behavior issuesโ and will block launching a model if it presents issues.
OpenAI also admitted that it didnโt announce the latest model as it expected it โto be a fairly subtle update,โ which it has vowed to change.ย
โThereโs no such thing as a โsmallโ launch,โ the company wrote. โWeโll try to communicate even subtle changes that can meaningfully change how people interact with ChatGPT.โ
AI Eye: Crypto AI tokens surge 34%, why ChatGPT is such a kiss-assย