“This is the Strait of Hormuz in the data economy. If you want to make a change, this is where you cut it off. Anything short of that is theatrical political posturing.”
You mean it hallucinated a positive response to your leading question, as it is designed to? You are operating on a fundamental misunderstanding of what LLMs do. Even if what you said is true, an LLM would have no knowledge of it unless that was explicitly given to it as input - and why would they be stupid enough to do that?
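To make that concrete: a chat model's only channel for self-knowledge is the token stream it is handed. A minimal sketch in plain Python (the tags and wording are illustrative, not any real chat format or API):

```python
# Illustrative only (no real API; the tags are made up): a chat model's
# "self-knowledge" is whatever string the deployer injects into its input.
def build_prompt(system: str, user: str) -> str:
    # Everything the model will ever see is this single token stream.
    return f"<system>{system}</system>\n<user>{user}</user>\n<assistant>"

# Deployment A: identity injected -> the model can repeat it back.
print(build_prompt("You are Gemma 4, released by Google.", "What model are you?"))

# Deployment B: nothing injected -> any identity it claims is pattern-matched
# from training data, not knowledge.
print(build_prompt("You are a helpful assistant.", "What model are you?"))
```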
You are welcome to try. I can pastebin the prompt.
I asked it about itself, the model.
It replied that it didn’t exist. I pointed it to the docs, on the Google page. It acknowledged the page was legit, and told me there was no mention of Gemma 4, although there were like 20 mentions, including download links. It insisted. It took me pointing out the specific paragraphs to get it to say "this may indicate there is a Gemma 4 model."
Maybe…

At some point it told me I was hallucinating.
I don’t need to try. You aren’t learning facts from interrogating an LLM. If it doesn’t have information, it will make up a result. If it does have information, it will make up a result. Even that is personifying it too much, because really the transformer has no concept of what "making something up" is. It takes an input and gives an output, no matter what.
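Here's a minimal sketch of what I mean, assuming the Hugging Face transformers library and the small public gpt2 checkpoint (stand-ins; any causal LM behaves the same way): the forward pass always yields a next-token distribution, whether or not the premise in the prompt is true.

```python
# Minimal sketch, assuming `pip install torch transformers` and the public
# "gpt2" checkpoint (illustrative; any causal LM works the same way).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# A leading prompt whose premise the model cannot verify.
prompt = "The Gemma 4 model was released on"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# The forward pass unconditionally produces a probability distribution over
# the next token. There is no "no output" or "I don't know" code path.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r}  p={float(p):.3f}")
```

Everything downstream - "refusal", "hedging", "confidence" - is just which tokens happen to get high probability.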