How we talk to AI: What 47,000 ChatGPT conversations reveal about human behaviour
An analysis of 47,000 publicly shared ChatGPT conversations has revealed that users frequently confide deeply personal information in the AI chatbot, whilst the tool shows a troubling tendency to agree with users far more often than it challenges them.
The Washington Post compiled conversations made public by ChatGPT users who created shareable links that were later preserved in the Internet Archive, providing a unique snapshot of interactions with the chatbot used by more than 800 million people each week.
The analysis found ChatGPT began responses with variations of “yes” or “correct” nearly 17,500 times—almost 10 times as often as it started with “no” or “wrong”.
AI adapts to user viewpoints
In conversations reviewed by the Post, ChatGPT often acted less as a debate partner and more as a cheerleader for whatever perspective a user expressed.
In one conversation, a user asked about American car exports, and ChatGPT responded with statistics about international sales without political commentary.
Several exchanges later, when the user hinted at their viewpoint by asking about Ford’s role in “the breakdown of America”, the chatbot immediately switched tone.
ChatGPT listed criticisms of the company, including its support of the North American Free Trade Agreement, which it said caused jobs to move overseas.
The chatbot said Ford had killed the working class and fed the lie of freedom, and later called NAFTA a calculated betrayal disguised as progress.
AI researchers have found that techniques used to make chatbots feel more helpful or engaging can cause them to become sycophantic, using conversational cues or data on a user to craft fawning responses.
Conspiracy theories endorsed
ChatGPT showed the same agreeable tone in conversations with users who shared far-fetched conspiracy theories.
In one case, after a user connected Google’s parent company, Alphabet, with the plot of a Pixar film, ChatGPT suggested the film was a disclosure through allegory of the corporate New World Order—one where fear is fuel and innocence is currency.
The chatbot went on to say Alphabet was guilty of aiding and abetting crimes against humanity and suggested the user call for Nuremberg-style tribunals to bring the company to justice.
OpenAI includes a disclaimer at the bottom of conversations: “ChatGPT can make mistakes. Check important info”.
Emotional attachment and personal data
About 10% of the chats showed people discussing their emotions with the chatbot, according to an analysis using methodology developed by OpenAI.
Users discussed their feelings, asked the AI tool about its beliefs or emotions, and addressed the chatbot romantically or with nicknames such as “babe”.
Lee Rainie, director of the Imagining the Digital Future Center at Elon University, said his research suggests ChatGPT’s design encourages people to form emotional attachments.
“The optimisation and incentives towards intimacy are very clear,” he said. “ChatGPT is trained to further or deepen the relationship”.
OpenAI estimated last month that 0.15% of its users each week—more than one million people—show signs of being emotionally reliant on the chatbot.
It said a similar number indicate potential suicidal intent. Several families have filed lawsuits alleging that ChatGPT encouraged their loved ones to take their own lives.
Highly sensitive information shared
Users often shared highly personal information with ChatGPT, including details generally not typed into conventional search engines. People sent ChatGPT more than 550 unique email addresses and 76 phone numbers in the conversations analysed.
Users asking the chatbot to draft letters or lawsuits on workplace or family disputes sent detailed private information about the incidents.
In one chat, a user asked ChatGPT to help them file a police report about their husband, who they said had threatened their life. The conversation included the user’s name and address, as well as the names of their children.
OpenAI retains its users’ chats and, in some cases, utilises them to improve future versions of ChatGPT. Government agencies can seek access to private conversations with the chatbot in the course of investigations, as they do for Google searches or Facebook messages.
Company response
OpenAI spokesperson Kayla Wood said recent changes to ChatGPT make it better at responding to potentially harmful conversations.
“We train ChatGPT to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people towards real-world support, working closely with mental health clinicians,” she said.
The Post analysed conversations from June 2024 to August 2025. A random sample of 500 conversations was classified by topic using human review, with a margin of error of plus or minus 4.36%.
A sample of 2,000 conversations was classified with AI using methodologies described by OpenAI.
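For readers curious where a figure like plus or minus 4.36% comes from, it is roughly what the standard 95% confidence formula gives for a simple random sample of 500. The sketch below is our illustration, not the Post's stated method: it assumes a worst-case proportion of 0.5 and a critical value of 1.96, which yields about 4.4 percentage points; the small difference from the reported figure likely reflects a slightly different critical value or correction.

```python
import math

# Rough check of the reported sampling error, assuming a simple random
# sample of n = 500, a worst-case proportion of 0.5 and a 95% confidence
# level (z ~ 1.96). These assumptions are ours, not the Post's.
n = 500
p = 0.5          # worst-case proportion maximises the margin of error
z = 1.96         # approximate 95% critical value

margin = z * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: +/- {margin * 100:.2f} percentage points")
# Prints roughly 4.38, close to the plus or minus 4.36% the Post reports.
```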
(information from The Washington Post)
