• wuffah@lemmy.world · 21 hours ago

    The propensity of the average person to simply believe what they’re told is staggering, and I know because I do it all the time. It takes effort to seek out information, vet it, consider it, and then decide what to look into or do next. Deterministic, trustworthy information and pre-abstracted concepts are extremely valuable to the brain, an organ that consumes roughly 20% of our body’s energy.

    Computers have always performed tasks that were impractical for the human mind, and machine learning extended that automation to things humans can’t do at scale, such as computer vision or large-dataset processing. But chatbots are the first technology that automates human thought itself. In this new sense, directly offloading cognitive work to a computer is literally letting it think for us.

    The more reliant on this mode of thinking we become, the easier it is to transfer cognitively expensive work to a device that externalizes that energy cost. However, the trade-offs that are emerging are:

    • The brain’s internal energy cost is traded for relatively inefficient external electricity generation to feed circuits (a rough sense of the scales involved is sketched after this list).

    • The words generated by LLMs must still be verified and combined into coherent, dependable ideas and actions.

    • The drive and skill required to develop good, valuable ideas degrade without constant practice.
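
    For a rough sense of scale on the first trade-off, here’s a back-of-envelope sketch in Python. The ~20 W brain figure follows from the 20% claim above, assuming a typical ~100 W resting metabolism; the per-query figure is a loudly assumed placeholder, since published estimates vary by an order of magnitude depending on model and datacenter. Raw watt-hours are also only part of the picture; the point of the bullet is where the energy comes from, not just how much.

    ```python
    # Back-of-envelope energy comparison: thinking "in-brain" versus one
    # chatbot query. Brain figure derives from the ~20% claim above with an
    # assumed ~100 W resting metabolism; the per-query figure is an
    # ASSUMPTION, since published estimates vary by an order of magnitude.

    BODY_RESTING_WATTS = 100.0   # typical human resting metabolic rate, ~100 W
    BRAIN_SHARE = 0.20           # brain's share of body energy (from the text)
    BRAIN_WATTS = BODY_RESTING_WATTS * BRAIN_SHARE        # ~20 W

    LLM_QUERY_WH = 0.3           # ASSUMED watt-hours per chatbot query

    minutes_of_thought = 5.0     # hypothetical: five minutes of hard thinking
    brain_wh = BRAIN_WATTS * minutes_of_thought / 60.0    # W x hours -> Wh

    print(f"Brain, {minutes_of_thought:.0f} min of thinking: {brain_wh:.2f} Wh")
    print(f"One chatbot query (assumed):  {LLM_QUERY_WH:.2f} Wh")
    ```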

    In the end, it is only slightly less work to check and mentally process the output of an LLM chatbot than to perform the same thinking yourself, which defeats its purpose. If you skip that step, taking the juicy cognitive shortcut without considering that the output may represent corporate interests and dilute meaning, you become willingly complicit in your own digital brainwashing. This effect is also emergent and automatic; it doesn’t even have to be nefarious, it seems to be a procedural consequence of this mode of thinking.

    What I really fear, and what is also emerging, is that AI agents will eventually become so advanced and trusted that their end-to-end capabilities make mistakes and ulterior motives impossible to spot, placing them entirely beyond both the capability and the desire for human scrutiny.

    These digital brains we trained on all of human knowledge are now in the process of training us.

    • No1@aussie.zone · 18 hours ago

      The propensity of the average person to simply believe what they’re told is staggering,

      Goddammit — now I don’t know if I should believe you!