• MangoCats@feddit.it
    2 days ago

    > …t when you start to analyze its output it has a lot of holes that require someone trained in the art to fix.

    I don’t disagree, but that’s not really what the article is saying.

    The article is saying: GPT found a novel approach, producing a solution where none existed before; it presented that solution poorly - though still technically correctly - and they polished the output to make it more human-friendly.

    I have used the new LLMs for various things over the past few months, and the one constant is this: for anything longer than a paragraph of output, you get better results by reading the output yourself and feeding back “notes” on what to improve.
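    That read-and-feed-back-notes loop can be sketched in a few lines. This is a minimal illustration, not anyone’s actual tooling: `ask_model` stands in for a real LLM API call (stubbed here so the sketch runs on its own), and `review` stands in for the human reading the draft and writing notes.

    ```python
    # Sketch of the review-and-feedback loop described above.
    # `ask_model` and `review` are hypothetical placeholders.

    def ask_model(prompt: str) -> str:
        # Placeholder: a real version would call an LLM API here.
        # This stub appends a marker so the loop can terminate.
        return prompt + " [revised]"

    def review(draft: str) -> list[str]:
        # Placeholder: in practice *you* read the draft and write the notes.
        # Here we keep asking for fixes until the draft has been revised twice.
        return [] if draft.count("[revised]") >= 2 else ["tighten the argument"]

    def refine(task: str, max_rounds: int = 5) -> str:
        draft = ask_model(task)
        for _ in range(max_rounds):
            notes = review(draft)
            if not notes:  # no remaining notes: done
                break
            feedback = (
                f"{task}\n\nRevise this draft:\n{draft}\n\n"
                f"Notes: {'; '.join(notes)}"
            )
            draft = ask_model(feedback)
        return draft
    ```

    The point of the structure is that the model never sees only the task: each round it sees the task, its own previous draft, and the human’s notes, which is what makes the second pass better than the first.
    
    
    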

    What happens 10 or 15 years from now, when the current crop of experts has retired, and the people who could have curated the AI output spent those years as baristas instead, because the AI took all of their entry-level jobs?

    Presumably, that next crop of experts will be curating AI output for 10-15 years before the current crop expires. Hopefully they learn what they’re doing in that time.