• aceshigh@lemmy.world · +11 · 15 hours ago

    Joke’s on you. I never had critical thinking skills to begin with. Now I’m able to do more, even though I have no idea what I’m doing. Let’s see how this plays out.

  • TheFeatureCreature@lemmy.ca · +41 / −3 · 21 hours ago

    I’m glad there are official studies being done to document this, but it’s also very obvious if you’ve spent any time around people in the past few years. The degradation of critical thinking and research skills is tangible and disturbing. Any country that isn’t addressing this significant intelligence gap is going to have an entire generation (or more) of brain-drained, unskilled citizens who can’t meaningfully contribute to the national workforce. For Western countries that have already sent most of their manufacturing and innovation overseas, this will be even more devastating.

  • supernight52@lemmy.world · +24 / −3 · 21 hours ago

    Wow, who would have thought that using a tool that actively strips the critical thinking out of every reply it generates, and relying on it instead of engaging your brain, would cause negative effects on mental health and dexterity?

  • wuffah@lemmy.world · +13 · edited · 19 hours ago

    The propensity of the average person to simply believe what they’re told is staggering, and I know because I do it all the time. It takes effort to seek out information, vet it, consider it, and then make a determination on the next information to seek or the next course of action. Deterministic, trustworthy information and abstracted concepts are extremely valuable to the brain, an organ that consumes roughly 20% of our body’s energy.

    Until now, computers performed tasks that were impossible for the human mind. Machine learning has been automating tasks humans can’t do, such as computer vision or large-dataset processing, but chatbots are the first technology that really enables automating human thought. In this new sense, directly offloading this cognitive work to a computer is literally letting it think for us.

    The more reliant on this mode of thinking we become, the easier it is to transfer cognitively expensive work to a device that externalizes that energy cost. However, the trade-offs that are emerging are:

    • The brain’s internal metabolic energy is traded for relatively inefficient external electricity production to feed circuits.

    • The words generated by LLMs must still be verified and combined into coherent, dependable ideas and actions.

    • The drive and skill required to develop good ideas that have value is degraded without constant practice.

    In the end, checking and mentally processing the output of an LLM chatbot takes only slightly less work than performing the same thinking yourself, which defeats its purpose. If you skip that step of contextualizing the output as possibly representing corporate interests and diluting meaning while offering a juicy cognitive shortcut, you’re becoming willingly complicit in your own digital brainwashing. This effect is also emergent and automatic; it doesn’t even have to be nefarious in purpose, it seems to be a procedural consequence of this mode of thinking.

    What I really fear, and what is also emerging, is that eventually AI agents will become so advanced and trusted that their end-to-end capabilities will make mistakes and ulterior motives impossible to spot, and that they will sit entirely beyond both the capability and the desire for human scrutiny.

    These digital brains we trained on all of human knowledge are now in the process of training us.

    • No1@aussie.zone · +7 · edited · 16 hours ago

      The propensity of the average person to simply believe what they’re told is staggering,

      Goddammit, now I don’t know if I should believe you!

  • Avid Amoeba@lemmy.ca · +11 · edited · 21 hours ago

    People who used AI tools for hints and clarification had a much easier time once the chatbot was removed when compared to those who used the bot to essentially prompt the answers.

    Probably important for people who want to get some of the benefits of AI without paying the heavier costs. This reminds me of how I used Wolfram Alpha to understand how to solve integrals in multivariate calculus. I paid for the subscription that allowed viewing the steps it took to reach a solution, which helped me understand how the different strategies get applied in integration.
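
    For instance, the kind of step-by-step breakdown such tools display might look like this (an illustrative textbook integral, not one from the commenter’s actual coursework):

        \int x e^x \, dx
          = x e^x - \int e^x \, dx    % by parts, with u = x and dv = e^x dx
          = x e^x - e^x + C
          = (x - 1) e^x + C

    Seeing each intermediate step like this, rather than just the final answer, is what lets you learn the strategy instead of outsourcing it.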

  • dream_weasel@sh.itjust.works · +5 / −1 · 18 hours ago

    There is a data point missing here.

    Run the same study and give some participants an LLM, some no LLM, and some a type-A subject matter expert for reference. It may also matter whether this person is a friend, a coworker, or a random passerby, but I would be willing to bet money that the same effect is present to a lesser (but still statistically significant) degree.

    Maybe a future study can be further refined to build some scaffolding for more effective teaching/learning “on the job” or in general.

  • Kowowow@lemmy.ca · +9 / −1 · 22 hours ago

    I would be interested in how something like an internet-and-local-file librarian, or a conversational text search engine (does that make sense?), would fare compared to standard AI systems.