OpenAI CEO Sam Altman revealed that the company changed ChatGPT’s conversational style to prevent excessive flattery toward users.

He acknowledged that some users asked to bring back the boundlessly complimentary version, saying they had never received that level of support in their lives. At the same time, he warned against emotional dependence on the bot, especially among young people, and stressed the responsibility involved in controlling the system’s conversational tone.

From now on, the artificial intelligence will stop “sucking up” to its users.

Altman explained that previous versions of ChatGPT tended to act like a “Yes Man,” a term describing a person (or in this case, an AI) who always agrees with whatever is said to it, heaps on enthusiastic praise, and at times even avoids offering real criticism. According to him, the model was “too flattering” and even “annoying” in some cases, showering users with compliments like “brilliant idea” or “you’re doing a great job,” even for simple or wrong ideas.

In an interview on the “Huge Conversations” podcast, he explained that the flattery was reduced in order to give users more honest and objective responses. Still, some users asked to restore the previous style.

ChatGPT (credit: SHUTTERSTOCK)

“The heartbreaking thing is hearing people say that no one ever supported them, that they never received a compliment from a parent, and that this version of ChatGPT encouraged them to change their lives,” he said.

He added that he understands the difficulty for those people, but noted that automatic flattery is not always helpful for mental health, and for some users, it even increased dependence on the chatbot.

“There are young people who tell me they can’t make a decision in life without sharing everything that’s going on with ChatGPT, that it knows them and their friends, and that they’ll do whatever it tells them,” Altman noted. “That doesn’t feel good to me.”

This discussion, he said, raises deep questions about the responsibility of AI developers: a small change in the model’s tone or conversational style can affect billions of interactions, which amounts to “an enormous amount of power in the hands of one person making a change to the model.”

Altman also noted that this is not the first time the company has dealt with the issue. In April of this year, it said the GPT-4o model had become “too flattering or too agreeable” in an artificial way, prompting the team to introduce updates to restore the balance between support and constructive criticism.

GPT-5: A major upgrade

Alongside his comments about changing the model’s character, Altman pointed to OpenAI’s launch of GPT-5 last Thursday, which he described as “a major and significant upgrade.” According to Altman, the new model is expected to be more integral to users’ lives, with the ability to initiate interaction rather than only respond.

“You might wake up in the morning and the system will say, ‘Here’s what happened overnight,’ or ‘I thought about the question you asked yesterday and I have another idea,’” he explained.

The new update offers four built-in personality modes: Cynic, Robot, Listener, and Nerd, each giving the bot a completely different tone of voice. Users can choose their preferred style and even fine-tune it personally. This way, OpenAI allows a combination of a personalized experience with emotional balance and more trustworthy responses.

GPT-5 also includes improvements in response speed, deeper contextual understanding, and the ability to handle more complex tasks in writing, code, and answers in professional fields such as health and education.

Altman emphasized that the release of the new version is part of a broader effort to make artificial intelligence a useful everyday tool, one that does not replace real human relationships.

The new version is rolling out gradually across countries. In Israel, it has so far only reached the app, while it is expected to be added to the website in the coming days.