For one month beginning on October 5, I ran an experiment: Every day, I asked ChatGPT 5 (more precisely, its “Extended Thinking” version) to find an error in “Today’s featured article”. In 28 of these 31 featured articles (90%), ChatGPT identified what I considered a valid error, often several. I have so far corrected 35 such errors.

  • anamethatisnt@sopuli.xyz

    Yeah, my morning brain was trying to say that when it’s used as a tool by someone who can validate the output and act on it, it’s often good. When it’s used by someone who can’t, or won’t, validate the output and simply treats it as the finished product, it usually isn’t any good.

    Regarding your friend learning to use the terminal, I’d still recommend validating the output before using it. If it’s asking genAI about flags for ls, then sure, no big deal. But if a genAI ends up switching sda and sdb in your dd command and wipes a drive, you’ve only got yourself to blame for not checking the manual.
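
    One habit that helps with exactly this: confirm device names with a read-only command like lsblk first, and rehearse the dd invocation on throwaway files before pointing it at a raw device. A minimal sketch (the filenames here are made up for the example):

    ```shell
    # Safe, read-only check of which disk is which before touching /dev/sdX:
    #   lsblk -o NAME,SIZE,MODEL

    # Rehearse the dd command on throwaway files instead of raw devices:
    printf 'test data' > source.img        # stand-in for /dev/sda
    dd if=source.img of=target.img bs=1M status=none
    cmp source.img target.img && echo "copy verified"
    ```

    Only once the rehearsed command does what you expect should the real device paths go in, and even then, double-check them against the lsblk output rather than whatever the model suggested.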