For one month beginning on October 5, I ran an experiment: Every day, I asked ChatGPT 5 (more precisely, its “Extended Thinking” version) to find an error in “Today’s featured article”. In 28 of these 31 featured articles (90%), ChatGPT identified what I considered a valid error, often several. I have so far corrected 35 such errors.

  • Echo Dot@feddit.uk · 11 hours ago

    The problem is that a lot of this is almost impossible to actually verify. After all, if an article says a skyscraper has 70 stories, even people working in the building may not necessarily be able to verify that.

    I have worked in a building where the elevator only went to every other floor, and I must have been in that building for at least three months before I noticed, because the ground floor obviously had access and the floor I worked on just happened to have an elevator stop, so it never occurred to me that there might be other floors not listed.

    For something the size of a 63-story (or whatever it actually was) building, it's not really visually apparent from the outside either; you'd really have to put in the effort to count the windows. Plus, oftentimes the facade makes it look like there are more stories, so even counting the windows doesn't necessarily give you an accurate answer, not that anyone would have the inclination to do so anyway. So yeah, I'm not surprised that errors like that exist.

    More to the point, the bigger issue is whether the AI can actually prove that it is correct. In the article there was contradictory information in official sources, so how does the AI know which one was right? Could somebody be employed to go and check? Presumably even the building management don't know the article is incorrect, otherwise they would have been inclined to fix it.