• Uriel238 [all pronouns]@lemmy.blahaj.zone
    +2 · 1 hour ago

    We use about 20% of our caloric intake (at rest, not doing math) for our bio intelligence. Having superpowers of social organization is expensive and power hungry.

    So it’s really no surprise that the computation machines that can run AI require tens of megawatts to think.
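
A quick sketch of the arithmetic behind this comment (assuming a ~2000 kcal/day intake; the figures are ballpark, not measurements):

```python
# Rough estimate of the brain's power draw from the "20% of caloric intake" claim.
KCAL_TO_JOULES = 4184          # 1 kcal = 4184 J
daily_intake_kcal = 2000       # assumed typical adult intake
brain_fraction = 0.20          # ~20% goes to the brain at rest

brain_joules_per_day = daily_intake_kcal * KCAL_TO_JOULES * brain_fraction
brain_watts = brain_joules_per_day / (24 * 3600)  # joules per second = watts

print(f"Brain power draw: ~{brain_watts:.0f} W")  # ~19 W

datacenter_watts = 20e6  # "tens of megawatts", per the comment
print(f"Datacenter-to-brain ratio: ~{datacenter_watts / brain_watts:,.0f}x")
```

So a brain runs on roughly the power of a dim light bulb, about six orders of magnitude below a large AI datacenter.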

  • rumba@lemmy.zip
    +6 · 4 hours ago

    It’s really good at writing termination notices without making middle managers feel bad about letting their employees go.

  • MonkderVierte@lemmy.zip
    +11/-3 · 9 hours ago (edited)

    It’s not artificial intelligence. A Large Language Model is not intelligent.

    And yes, yes, scientifically LLMs belong under that umbrella and whatnot. But what matters is what people expect.

    • Capybara_mdp@reddthat.com
      +2 · 2 hours ago

      Not to be pedantic, but the original use of the word intelligence in this context was “gathered digested information.”

      Unfortunately, during the VC funding rounds for this, “intelligence” became the “thinky meat brain” type, and a marketing term associated with personhood, and the intense personalization along with it.

    • luciferofastora@feddit.org
      +6 · 4 hours ago

      That’s the typical discrepancy between “definition of technical term” and “popular expectations evoked by term”. The textbook example used to be “theory”, but I guess AI is set to replace that job too…

    • Tinidril@midwest.social
      +1 · 4 hours ago

      I completely agree that LLMs aren’t intelligent. On the other hand, I’m not sure most of what we call intelligence in human behavior is any more intelligent than what LLMs do.

      We are certainly capable of a class of intelligence that LLMs can’t even approach, but most of us aren’t using it most of the time. Even much (not all) of our boundary pushing science is just iterating algorithms that made the last discoveries.

      • UnderpantsWeevil@lemmy.world
        +4 · 3 hours ago

        On the other hand, I’m not sure most of what we call intelligence in human behavior is any more intelligent than what LLMs do.

        Human intelligence is analog and predicated on a complex, constantly changing, highly circumstantial manifestation of consciousness rooted in brain chemistry.

        Artificial Intelligence (a la LLMs) is digital and predicated on a single massive pre-compiled graph that seeks to approximate existing media from descriptive inputs.

        The difference is comparable to the gulf between a body builder’s quad muscle and a piston.

  • Melvin_Ferd@lemmy.world
    +2/-3 · 4 hours ago

    Isn’t it more like they’re comparing all the hamburgers and everything else you’ve eaten since you were born?

    That’s what they’re doing with AI energy usage, isn’t it? I thought it included the training, which is where the greatest costs come from, versus just the daily running.

    • lovely_reader@lemmy.world
      +4 · 3 hours ago (edited)

      No. “In practice, inference [which is to say, queries, not training] can account for up to 90% of the total energy consumed over a model’s lifecycle.” Source.
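
A toy calculation shows how per-query inference costs can swamp a one-time training cost. All numbers here are made up purely for illustration, not measured figures:

```python
# Hypothetical lifecycle energy split: one-time training vs ongoing inference.
training_energy_kwh = 1_000_000   # assumed one-time training cost
energy_per_query_kwh = 0.003      # assumed per-query inference cost
queries = 3_000_000_000           # queries served over the model's lifetime

inference_total_kwh = energy_per_query_kwh * queries
share = inference_total_kwh / (inference_total_kwh + training_energy_kwh)
print(f"Inference share of lifecycle energy: {share:.0%}")  # 90%
```

The point is just that even a tiny per-query cost, multiplied by billions of queries, dominates the total.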

      • rumba@lemmy.zip
        +1 · 4 hours ago

        I have about a gallon of liquid caffeine; it comes with a pump so you can add it to homemade soda one dose at a time.

        I suspect you could do the same with coke…

  • krooklochurm@lemmy.ca
    +42 · 17 hours ago

    See, the thing is, I watch piss porn. Hear me out. I told my friend that the thing is, to do piss porn, you kind of have to be into it. You could try and fake it, but it wouldn’t be very convincing. So, my contention is, piss porn is more genuine than other types of porn, because the people partaking are statistically more likely to enjoy doing that type of porn. Which is great, I think, because then they really get into it, which is hot. It’s that enjoyment that gets me off. Their enjoyment.

    She said, “Krooklochurm, you’re an idiot. Anyone can fake liking getting pissed in the face.”

    So I said, “Well, if you’re so adamant, get in the tub and I’ll piss in your mouth, and let’s see if it’s as easy as you claim.”

    So she said, “All right. If I can fist you in the ass afterwards.”

    Which I felt was a fair deal, so I took it.

    My (formal) position was strengthened significantly by the former event. And I can also attest that I could not convincingly fake enjoying being ass-fisted.

    What does that have to do with anything, you ask? Genuinity. The real deal. That’s what.

  • Una@europe.pub
    +77 · 24 hours ago

    So, you are saying, I should mix my cocaine with twix bars for maximum efficiency? (Would still be stupid, but now more efficiently)??

  • the_q@lemmy.zip
    +65 · 23 hours ago

    To be fair, a lot of people think they’re intelligent, and they really, really aren’t.

    • Aneb@lemmy.world
      +10 · 21 hours ago

      I’m not trying to grade on potential, but betting on human potential vs. AI potential feels like rewarding ourselves for being better than a machine. Would we have Albert Einstein if we didn’t have Isaac Newton?

      • applebusch@lemmy.blahaj.zone
        +2 · 10 hours ago

        That’s kind of a false dichotomy. They may be separate today, but there’s no reason to believe we won’t augment human minds with artificial neural networks in the future. Not in the magical cure-all way techbros like to sell it, but for really boring and mundane things initially. Think replacing a small damaged part of some brain region, like the visual or auditory cortex, to repair functional deficiencies.

        Once they get the basic technology worked out to be reliable, repeatable, and not require too much maintenance (cough, subscriptions and software licenses), there’s no reason to believe we won’t progress rapidly to other augmentations and improvements. A simple graphical interface for a heads-up display, or a simple audio interface for direct communications, both come to mind, but I’m sure our imaginations will be comically optimistic about some things and comically pessimistic about others.

        All that to say: any true AI potential will be human potential in time. We won’t stop at making superintelligent AGI. We will want to BE superintelligent AGI. Since we already know highly efficient and capable intelligence is possible (see: yourself), it’s only a matter of time until we make it ourselves, provided we don’t kill ourselves somehow along the way.

  • 0nt0p0fth3w0rld@feddit.org
    +30/-1 · 23 hours ago

    and some of the most intelligent people are cast out from society because they don’t fit the culture of arrogance.

  • wabafee@lemmy.world
    +6 · 11 hours ago (edited)

    I think we’re at a point where the hardware doesn’t fit the algorithm being used. LLMs take so much power partly because our computers are digital: a transistor that holds only one of two states (usually 0 V or 5 V) is inefficient, and the heat adds up as you multiply, especially with LLMs. There seems to be potential in analog computing, where a transistor operates over a continuous 0–5 V range. In theory that could store more information, or directly represent what LLMs run on (floating point). For context, one float is typically 32 bits; at one transistor per bit that’s 32 transistors per float, while an analog scheme could in principle represent one float with a single transistor.
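
The 32-bits-per-float figure can be checked directly; Python’s `struct` module packs an IEEE-754 single-precision float into exactly 4 bytes:

```python
import struct

# An IEEE-754 single-precision ("f") float occupies 4 bytes = 32 bits.
packed = struct.pack("f", 3.14)
print(len(packed) * 8)  # 32
```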

  • But_my_mom_says_im_cool@lemmy.world
    +11 · 20 hours ago

    I think the entire idea of AI, and the Internet in general, taking up power and water needs to be fleshed out and explained to everyone. Even to me it’s a vague notion; I heard about it a few years back, but I couldn’t explain it to someone like my parents, who would have no idea the Internet requires water to run.

    • asmoranomar@lemmy.world
      +9 · 19 hours ago

      It’s not too hard. AI requires a LOT of work. Work requires energy. Some energy is wasted along the way, and the byproduct is heat. That heat has to be removed for many reasons, and water is very good at doing that.

      It’s like sweating: it cools you down, but you need water to sweat.
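
A back-of-the-envelope sketch of the heat-removal point, assuming a 1 MW load and a 10 K coolant temperature rise (both made-up numbers for illustration):

```python
# How much water does it take to carry away a given heat load?
SPECIFIC_HEAT_WATER = 4186       # J per kg per kelvin
LATENT_HEAT_VAPORIZATION = 2.26e6  # J per kg (water at ~100 C, approximate)

heat_load_watts = 1_000_000  # 1 MW of waste heat (assumed)
delta_t = 10                 # coolant warms by 10 K (assumed)

# Liquid cooling: heat absorbed = mass * specific heat * temperature rise
kg_per_second = heat_load_watts / (SPECIFIC_HEAT_WATER * delta_t)
print(f"Liquid cooling: ~{kg_per_second:.1f} kg of water per second")

# Evaporative (sweat-like) cooling: heat absorbed = mass * latent heat
evap_kg_per_second = heat_load_watts / LATENT_HEAT_VAPORIZATION
print(f"Evaporative cooling: ~{evap_kg_per_second:.2f} kg per second")
```

Evaporation carries far more heat per kilogram than simply warming the water, which is exactly why sweating works so well, and why evaporative cooling towers consume water rather than just circulating it.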

  • mechoman444@lemmy.world
    +3 · 15 hours ago

    Yeah, evolution is pretty cool.

    And on that note, human physiology sacrifices quite a bit for its intelligence.

    For example, the reason humans are born as helpless babies is that if they came out with full-sized brains, they’d kill the mother!

    It’s all about the most proficient use of energy.