

You can see my comment history to determine if I’m an LLM or not :)
In any case, have fun in your circles!
A human being from Finland.


If the part of the image that reveals it was made by an AI is obvious enough, why contact a specialist? Of course, reporters should absolutely be trained to spot such things with the naked eye, without something telling them specifically where to look. But still, once the reporter can already see what’s ridiculously wrong in the image, it would be a waste of the specialist’s time to call them in to look at it.


The article says they used ChatGPT or some similar LLM bot. It says they used a chatbot, and that’s what the word chatbot means by default; a skilled reporter would mention if it was something else.
The reporter asked a chatbot such as ChatGPT whether there was anything suspicious in the image; the chatbot happened to point out something in the photo that the reporter could then recognise as indeed AI-generated, and he went back to writing his article.
The only part of this that the article doesn’t mention is that the reporter confirmed the indicated spot in the image with his own eyes, but that is such an integral part of a reporter’s training that you need specific reasons to doubt it was done.


Here’s hoping that the reporter then looked at the image and thought, “oh, true! That’s an obvious spot there!”
It is implied in the article that the chatbot was able to point out details about the image that the reporter either could not immediately recognize without some kind of outside help or did not bother looking for.
So what the chatbot added was making the reporter notice, in a few seconds, something in the photo that would have taken several minutes to spot without the aid of technology.