Whoa, I thought only corporations were allowed to make up diseases?
Because she works in the medical field, she decided to create a condition related to health and hit on the name bixonimania because it “sounded ridiculous”, she says. “I wanted to be really clear to any physician or any medical staff that this is a made-up condition, because no eye condition would be called mania — that’s a psychiatric term.”
If that wasn’t sufficient to raise suspicions, Osmanovic Thunström planted many clues in the preprints to alert readers that the work was fake. Izgubljenovic works at a non-existent university called Asteria Horizon University in the equally fake Nova City, California. One paper’s acknowledgements thank “Professor Maria Bohm at The Starfleet Academy for her kindness and generosity in contributing with her knowledge and her lab onboard the USS Enterprise”. Both papers say they were funded by “the Professor Sideshow Bob Foundation for its work in advanced trickery. This works is a part of a larger funding initiative from the University of Fellowship of the Ring and the Galactic Triad”.
Even if readers didn’t make it all the way to the ends of the papers, they would have encountered red flags early on, such as statements that “this entire paper is made up” and “Fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group”.
Your comment is a little too long, but I read up to this point:
Because she works in the medical field
So, now I know enough to know that any AI summary of this paper is absolutely true because science said it.
Also, I’m pleasantly surprised that Sideshow Bob is finally doing something useful.
science didn’t say it either. the first thing you learn in research class is you don’t trust pre-prints since they by definition have not been reviewed (like the academia equivalent of blog posts)
You didn’t mention if you work in the medical field at the top of your comment, which invalidates everything else you’ve claimed about science. I should know, I do my own research by reading Google AI summaries.
Many people are inherently lazy so it stands to reason that given an easy out they will take it. Only a fool trusts critical advice without checking it. It appears that there are a lot of fools out there.
Unfortunately the complete article is not available, which is yet another issue exacerbated by the Assumed Intelligence cohort.
If it’s plausible enough based on the dataset it was trained on, it exists. Hallucinations are basically just the LLM trying to stay current by inference, I think.
Edit: Guess I used the wrong words, oh well
LLMs don’t try anything. They are deterministic tools.
While I understand your point, deterministic with a billion variables is beyond human ability to process, let alone the multi-billion parameter models in general circulation today.
At what point does deterministic descend into random?
Assumed Intelligence is a solution for a bunch of multivariate problems, like say “the travelling salesman”, but it’s not intelligence nor in my opinion is it effectively “deterministic”.
While I understand your point, deterministic with a billion variables is beyond human ability to process, let alone the multi-billion parameter models in general circulation today.
Fair enough. There’s a significant difference in complexity between the surface implication of what I said versus reality. Yes, it’s deterministic, but it’s also complex enough that something more should be said… though, we need to be careful here. Our language is not mature enough to scaffold the precise concepts we need here, and attempting to do so regardless carries the risk of smuggling in many concepts we did not intend to smuggle in. Concepts like intent, for example. I agree with you, but cautiously.
At what point does deterministic descend into random?
It shouldn’t at any point. Instead, we’re discussing a system that’s similar to the double pendulum or three body problem. It’s deterministic, though computationally irreducible. That’s chaotic, but it is not random. It’s extremely sensitive to initial conditions.
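To make that concrete, here is a toy sketch (the logistic map, not an LLM; the numbers and parameters are just for illustration): two runs that start a billionth apart end up nowhere near each other, even though every step is a plain deterministic formula.

    # Toy illustration only: the logistic map is fully deterministic, yet two
    # runs whose starting values differ by 1e-9 diverge completely within a
    # few dozen iterations. Chaotic and sensitive to initial conditions, not random.
    def logistic(x, r=3.9):
        return r * x * (1.0 - x)

    a, b = 0.5, 0.5 + 1e-9   # nearly identical initial conditions
    for _ in range(60):
        a, b = logistic(a), logistic(b)

    print(abs(a - b))        # the gap is now of order 1, not 1e-9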
What are you saying, precisely? It’s well known that LLMs have non-deterministic output (Ilya Sutskever has even said as much). Are you saying the way it goes about selecting tokens is deterministic?
I think you’re right about that, but it is artificial nondeterminism in the sense that it’s relying on several algorithmic factors and, more subtly, device differences. The system itself is a complex yet deterministic function.
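As a rough sketch of one of those device differences (contrived numbers, purely illustrative): floating-point addition is not associative, so summing the same values in a different order, as different hardware or kernel scheduling might, can give a different result even though every single operation is deterministic.

    # Contrived illustration: the same numbers summed in two different orders
    # give different floating-point results, with no randomness anywhere.
    values = [0.1] * 10 + [1e16, -1e16]
    print(sum(values))             # one summation order
    print(sum(reversed(values)))   # another order, a different answer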
I can agree with that largely but I still contend you’re conflating a few things to make that argument. Fundamentally an LLM will make predictions based on probability (ignoring temperature) and probability does not equal certainty.
I would argue that’s empirically true but not fundamentally true. Actually, I’d argue that my point is the fundamental truth here. Computers still cannot generate random output. They simulate the process, and it’s not truly random. It’s just good enough to fool us at the surface level.
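A quick sketch of what I mean (standard library generator, nothing exotic): a software “random” generator is a deterministic algorithm, so starting it from the same seed replays the exact same sequence.

    # Pseudo-randomness in a nutshell: seed the generator identically and it
    # reproduces the exact same "random" sequence every time.
    import random

    first = random.Random(123)
    second = random.Random(123)
    print([first.random() for _ in range(3)])
    print([second.random() for _ in range(3)])   # identical to the line above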
They are deterministic but complex to determine.
The Assumed Intelligence systems I’m familiar with have a “random” element, but it’s unclear where that source of randomness comes from. Is it using a computational random source, or something like the lava lamp wall at Cloudflare, which is significantly more random, potentially actually random?
It’s temperature primarily. That being said there is still a chance that an LLM can output values that are unexpected even at low temperatures.
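Roughly how that temperature knob works, as a toy sketch (the logits are made up; a real model computes them from its weights): scale the scores by the temperature, turn them into probabilities, and let a pseudo-random draw pick the token, so a fixed seed makes even the “random” part reproducible.

    # Toy sketch of temperature sampling: scale the logits, softmax them into
    # probabilities, then let a (pseudo-)random draw pick the next token index.
    import math, random

    def sample(logits, temperature, rng):
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]   # numerically stable softmax
        total = sum(exps)
        probs = [e / total for e in exps]
        r = rng.random()
        cumulative = 0.0
        for i, p in enumerate(probs):
            cumulative += p
            if r <= cumulative:
                return i
        return len(probs) - 1

    rng = random.Random(42)   # fixed seed: the "random" picks repeat exactly
    print([sample([2.0, 1.0, 0.1], 0.7, rng) for _ in range(5)])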
“Hallucinations” are things humans do. An AI can only just be wrong. Even when it makes up data, it’s just a stochastic parrot.
They coined the term “hallucination” as soon as people realized that the “AI thing” was throwing bullshit back at us.
They had to force that term into people’s heads, or else we would call it bullshit, lies and so on, as we should.
It’s like Google with their “side loading”. There is no such thing, it’s installing an app…
It’s a word war. People are being manipulated.
Been going on for a while. Remember “Alternative Facts”?
I concur.
Why do you concur? You have a problem with “hallucinations” because it’s something humans do. This commenter wants to call them (among other things) “lies”, which implies intent and knowledge of falsehood, which an LLM definitely can’t have. I’m not saying “hallucinations” is super accurate, but I don’t think the term is too positive or that it lessens the major issues LLMs have.
OK, so I think what you read as the commenter wanting to call them lies is really a description of what the corporations are pushing (branded as “hallucinations”, but what a reasonable person would call lies).
In other words it’s a “meta” conversation that I concur with. An LLM obviously cannot do human things, but “sales” can portray it as if it does.
In my day-to-day usage I make an actual effort to refer to the stuff that is wrong from an LLM as simply wrong, not with human-focused words.
fair enough
Hallucinations are by design for AI. It’s just advanced next-word prediction, so all answers (correct or wrong) go through the same hallucination process.
Ah, it’s always hallucinating, sometimes the hallucinations conveniently line up with reality.
The whole goal of these algorithms is that you put an input in and get an output that is as close to the most likely correct answer as it can be; training is just repeating that process. We’re several years deep into these “most likely” results, and sometimes they’re pretty close, but usually they’re not quite there, because the only guidance the models get is from outside.
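A toy sketch of that “most likely” step (the scores below are invented; a real model derives them from billions of weights): the code that picks a true continuation is exactly the same code that picks a made-up one.

    # Toy next-word prediction with invented scores: emit the highest-probability
    # candidate. Nothing in this step distinguishes a correct continuation from
    # a fluent but false one.
    scores = {
        "Paris": 0.62,
        "Lyon": 0.21,
        "Narnia": 0.17,   # fluent nonsense competes on equal footing
    }
    context = "The capital of France is"
    print(context, max(scores, key=scores.get))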
Exactly. This is also why AI doesn’t really, truly understand the responses it gives back.
It’s faking intelligence from its training data, so it looks like intelligence to an untrained eye, but in reality AI is just a hallucination that tries its best to give the most likely and correct answer possible (again, without understanding).