Huge Study
*Looks inside
This latest study examined the chat logs of 19 real users of chatbots — primarily OpenAI’s ChatGPT — who reported experiencing psychological harm as a result of their chatbot use.
Pretty small sample size. Despite the large dataset they pulled from, it’s still data from just 19 people.
AI sucks in a lot of ways, sure, but this feels like fud.
The hugeness is probably
391,562 messages across 4,761 different conversations
That’s a lot of messages
I remember my old stats book saying a minimum of 30 data points is needed to assume a normal distribution. Also, these small sets are typically about proof of concept, so yeah, you’ve still got a point.
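For context, that n ≈ 30 figure is the usual central limit theorem rule of thumb from intro stats. A quick simulation (my own sketch, nothing to do with the study’s data) shows the distribution of sample means tightening up as n grows, even when the underlying population is skewed:

```python
# Toy demo of the n >= 30 rule of thumb (central limit theorem), not from the study.
import random
import statistics

random.seed(0)

def sample_means(n, trials=2_000):
    # Draw `trials` samples of size n from a skewed (exponential) population
    # and return the means of those samples.
    return [statistics.mean(random.expovariate(1.0) for _ in range(n))
            for _ in range(trials)]

for n in (5, 30, 100):
    means = sample_means(n)
    print(f"n={n:>3}: mean of sample means={statistics.mean(means):.3f}, "
          f"spread (stdev)={statistics.stdev(means):.3f}")
```

The spread of the sample means shrinks roughly with the square root of n, which is why 19 users is on the thin side for anything beyond proof of concept.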
I wonder if the headline was written by an AI
*hugely funded?
…fud?
fud: Fear, Uncertainty and Doubt. A tactic for denigrating a thing, usually by implication of hypothetical or exaggerated harms, often in vague language that is either tautological or not falsifiable.
It’s crypto bro speak.
What? The term FUD has been around since at least the 90s, though I think it’s significantly older than that.
Are you unironically saying “fud”
Where are you hearing it so much? (And ideally can you describe it in a little more detail than saying it’s crypto bros again?)
Crypto bros are infamous for describing any criticism as FUD, no matter the criticism. It’s like a verbal tic. https://primal.net/search/FUD
When all this FUD ends and Bitcoin goes 🚀
Quantum FUD is at ATH
FUD Busters [NFT]
Flokicoin is built to last… Don’t follow the FUD.
I have a friend that’s really taken to ChatGPT to the point where “the AI named itself so I call it by that name”. Our friend group has tried to discourage her from relying on it so much but I think that’s just caused her to hide it.
“Centaurs”
They think they are getting mythical abilities
They’re right but not in the way they think
As the researchers wrote in a summary of their findings, the “most common sycophantic code” they identified was the propensity for chatbots to rephrase and extrapolate “something the user said to validate and affirm them, while telling them they are unique and that their thoughts or actions have grand implications.”
There’s a certain irony in all the alt-right techbros really just wanting to be told they were “stunning and brave” this whole time.
Anthropic has some similar findings, and they propose an architectural change (activation capping) that apparently helps keep the Assistant character away from dark traits (sometimes). But it hasn’t been implemented in any models, I assume because of the cost of scaling it up.
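For what it’s worth, here is a toy sketch of what I understand “activation capping” to mean: clamping the component of a hidden activation along a learned trait direction so the Assistant character can’t drift too far toward that trait. Every name below is my own placeholder; this is a guess at the idea, not Anthropic’s actual implementation.

```python
# Toy sketch (my own guess at the mechanism, not Anthropic's code):
# cap how far a hidden activation can extend along a learned "dark trait" direction.
import numpy as np

def cap_activation(hidden: np.ndarray, trait_dir: np.ndarray, cap: float) -> np.ndarray:
    """Clamp the projection of `hidden` onto `trait_dir` at `cap`.

    hidden:    activation vector from one layer of the model (placeholder).
    trait_dir: direction associated with the unwanted persona trait (placeholder).
    cap:       maximum allowed coefficient along that direction.
    """
    trait_dir = trait_dir / np.linalg.norm(trait_dir)
    coeff = hidden @ trait_dir                # how strongly the trait is expressed
    if coeff > cap:                           # only intervene past the threshold
        hidden = hidden - (coeff - cap) * trait_dir
    return hidden

# Toy usage with random vectors standing in for real activations.
rng = np.random.default_rng(0)
h = rng.normal(size=64)
d = rng.normal(size=64)
capped = cap_activation(h, d, cap=0.5)
```

If that reading is right, the cost concern makes sense: you would need to find and monitor those directions at inference time for every deployed model.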
When you talk to a large language model, you can think of yourself as talking to a character
But who exactly is this Assistant? Perhaps surprisingly, even those of us shaping it don’t fully know
Fuck me that’s some terrifying anthropomorphising for a stochastic parrot
The study could also be summarised as “we trained our LLMs on biased data, then honed them to be useful, then chose some human qualities to map models to, and would you believe they align along a spectrum of being useful assistants!?”. They built the thing to be that way, then are shocked? Who reads this and is impressed, besides the people that want another exponential-growth investment?
To be fair, I’m only about a third of the way through and struggling to continue reading it, so I haven’t got to the interesting research, but the intro is, I think, terrible.
stochastic parrot
A phrase that throws more heat than light.
What they are predicting is not the next word; they are predicting the next idea.
The paper is more rigorous with language but can be a slog.