  • hume_lemmy@lemmy.ca · +9 · edited · 4 hours ago

    The article, with the Musk section, points out what nearly everyone else has identified as the primary problem: the people saying that AI will obsolete all workers, and the people saying that those who don’t work don’t deserve to eat, ARE THE EXACT SAME PEOPLE.

    Even the most dumbfuck Magat is going to eventually figure out where that goes and react accordingly.

  • GreenKnight23@lemmy.world · +3/-1 · 4 hours ago

    push your local governments to tax companies that replace workers with AI at a higher percentage.

    this tax can then be used to offset the socioeconomic stress that the job losses will impose on your region.

  • BarneyPiccolo@lemmy.today · +17 · edited · 5 hours ago

    “Our job at OpenAI and in the AI space — and we need to do a much better job — is to explain to people why … this is going to be really good for them, for their families and for society writ large,”

    And here is the crux of the problem - they are lying to us. After making it very clear that they wanted us to integrate AI into our jobs, it has also become clear that their ultimate objective is to replace as many jobs as possible with AI, even if the AI’s results are substandard, because the AI is so much more profitable.

    We KNOW the objective is to fire as many of us as possible, so the general public has become extremely hostile toward AI. Now the AI companies want to re-brand as family friendly assistants to our lives. Too late, assholes, we’re already onto you. Tell your lies walking.

    It must be awful to have fought to become a billionaire, thinking you could relax on the bodies of your vanquished foes, and enjoy the tranquility that you’ve earned, only to find out that you have created an endless supply of enemies who want you dead. You have to pay millions for security, only to find that someone can still put a bullet through your front window where you were standing only five minutes before. All that money, and the best it can do is buy you a windowless bunker to cower in.

      • eleitl@lemmy.zip · +4/-1 · 4 hours ago

        It is very profitable in certain enterprise roles. That is orthogonal to it being a massive bubble that’s about to blow up.

        • e461h@sh.itjust.works · +1 · edited · 2 hours ago

          https://www.wheresyoured.at/the-subprime-ai-crisis-is-here/

          It could be, but it doesn’t look promising - and the fact that it’s pretty much impossible to know what the actual costs are is, in itself, very telling.

          When you use these services, the company in question then pays for access to the AI models in question, either at a per-million-token rate to an AI lab, or (in the case of Anthropic and OpenAI) whatever cloud provider is renting them the GPUs to run the models. A token is basically ¾ of a word.

          As a user, you do not experience token burn, just the process of inputs and outputs. AI labs obfuscate the cost of their services with “tokens”, “messages”, or 5-hour rate limits with percentage gauges, so you, as the user, do not really know how much any of it costs. On the back end, AI startups are annihilating cash: until recently, Anthropic allowed you to burn upwards of $8 in compute for every dollar of your subscription. OpenAI allows you to do the same, though it’s hard to gauge by how much.
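          To put rough numbers on that ¾-of-a-word rule, here is a back-of-envelope sketch in Python. The $10-per-million-token rate below is a made-up placeholder, not any lab’s actual price, and the words-to-tokens ratio is just the rule of thumb quoted above:

```python
# Rough sketch of per-token billing as described above.
# Assumes the "a token is ~3/4 of a word" rule of thumb;
# the default price is a hypothetical placeholder, not a real rate.

WORDS_PER_TOKEN = 0.75  # rule of thumb: one token is about 3/4 of a word

def estimate_cost(words: int, usd_per_million_tokens: float = 10.0) -> float:
    """Estimate what a chunk of text costs at a per-million-token rate."""
    tokens = words / WORDS_PER_TOKEN
    return tokens / 1_000_000 * usd_per_million_tokens

# A 750-word answer is roughly 1,000 tokens; at the placeholder
# $10/M rate that's about a cent -- invisible to the user, but it
# compounds fast across millions of requests.
print(f"${estimate_cost(750):.4f}")  # -> $0.0100
```

          The point of the sketch is that a single response costs a fraction of a cent, which is exactly why users never feel the burn while the aggregate compute bill can dwarf subscription revenue.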

      • BarneyPiccolo@lemmy.today · +4 · 5 hours ago

        Not yet, but wait until they’ve reduced their workforce by 75%, and they can save all those associated expenses.

        It won’t work, of course, but they’ve deluded themselves into believing it.

        • e461h@sh.itjust.works · +5 · 5 hours ago

          Certainly part of the sales pitch. But so far it turns out humans are more efficient (they cost less). I think the appeal to companies is the control (and the cost, while it’s so heavily subsidized by the industry pushing it). The appeal to the major AI investors and execs is to… privatize the profits and socialize the losses. They will golden-parachute themselves and leave the people with their mess.

        • ripcord@lemmy.world · +2 · 5 hours ago

          The vast majority of the costs are HW and infra

          I think they’re hoping that reaches more of a steady state

          • Passerby6497@lemmy.world · +4 · 5 hours ago

            I think they’re hoping that reaches more of a steady state

            With how quickly tech advances and hardware degrades under heavy use, they’re going to be pushing that rock up a hill for a good while lol

  • deathbird@mander.xyz · +24 · 15 hours ago

    “Oh no what if someone believes my hype about building a Torment Nexus and, instead of throwing more money on my money fire, tries setting me on fire instead.”

  • Aatube@lemmy.dbzer0.com · +79 · 19 hours ago

    Have the comments here read the article? It’s arguing that the CEOs themselves have spread the doomer narrative and are now being molotov’d as a result. The subject of the title is/includes Altman, hence the Altman cover photo. This was way way better than I expected of Gizmodo (bravo Gizmodo), warning us that execs are only toning down their AI dooming for self-protection.

    Whatever happens, it feels like the AI executives have painted themselves into a corner. They’ve told everyone their product has the potential to destroy everything. They were the doomers, if we want to call it that, at least when it was convenient. And now we seem to be entering a different era where the same people who told us about the dangers of AI try to get us to look exclusively at what they claim are enormous benefits for society; so far, with little to show.

    @[email protected] @[email protected]

    • Iconoclast@feddit.uk · +11/-2 · edited · 13 hours ago

      Have the comments here read the article?

      You serious? Of course not - but they did see the letters “AI” in the title.

    • EvergreenGuru@lemmy.world · +29/-1 · 19 hours ago

      They should’ve chosen a lane. OpenAI was about free LLMs, then they went LLC and decided that AI could make money. It doesn’t make money, though, so now we’re watching the idiots realize they’ve burned all this money investing in AI.

      All the experts told us it couldn’t do any of the things sci-fi writers love to write stories about. Nothing changed except perception, and by directing perception they managed to use an old technology to temporarily buttress the economy.

    • XLE@piefed.social · +4/-2 · 15 hours ago

      It’s an understandable conclusion if you only read the title of the article. Surely an AI doomer is someone that thinks it’s garbage, right?

      But if people familiarize themselves with what professional AI doomers and AI safety groups actually look like, it becomes abundantly clear that they are all pro-industry. They will only ever criticize AI in ways that covertly praise its non-existent capabilities.

    • chemical_cutthroat@lemmy.world · +6/-2 · 19 hours ago

      Lol, I’m not sure what’s worse: using an LLM to summarize an article for you, or not even reading the article and assuming you know the contents from the title. Fucking people…

    • Sundray@lemmus.org · +5/-2 · 18 hours ago

      I did. As well written as it is, I don’t think the premise of “the REAL doomers were the CEOs!” is going to spread far enough to dethrone the present, much more popular understanding of what an AI doomer is. It didn’t seem worth addressing. We’ll see though; perhaps every time someone says “AI doomer” on Lemmy, some wag will reply with, “Um a-kually, I think you’ll find the tech CEOs are the real doomers, LOL.”

      As to the notion that the dangers these techbros have released are now coming home to roost: it’s overstated. In my opinion, the techbros will continue not to give the merest shit about the harms they’ve caused, and one misguided soul with a molly isn’t going to change that – or bring back all the dead people LLMs contributed to killing. Will it increase the CEOs’ feelings of paranoia? My dude, the wealthy are already maximally paranoid.

      • Aatube@lemmy.dbzer0.com · +6 · edited · 18 hours ago

        Interesting. I don’t think the article is saying “the real doomers are the CEOs”, though. What you’ve written in the second paragraph is fully compatible with agreeing that AI is doomish (and that paragraph alone is incredibly interesting, even if it doesn’t have the impact you’ve outlined - it’s incredibly Greek). I’ll also repeat my point that the article advises even more caution than before toward tech’s claims of great net benefits from AI.

    • terabyterex@lemmy.world · +20/-3 · edited · 17 hours ago

      You love Sam Altman so much that you have an emotional response to Gizmodo giving very valid criticism of him? I’m sorry, but Gizmodo is right and Altman is a tool. Please don’t worship a man.

      • atrielienz@lemmy.world · +8 · 17 hours ago

        I don’t think them saying this has much to do with liking Altman. Rather, I think they are raging at Gizmodo (because well, Gizmodo) and also at the headline of an article they didn’t read.

        • ripcord@lemmy.world · +3 · 5 hours ago

          I suspect the person you replied to was also calling them out for not reading the article but nevertheless having very strong opinions about it

          • atrielienz@lemmy.world · +1 · edited · 5 hours ago

            Certainly a possibility. Lots of people really dislike Gizmodo as a news outlet for past controversy.

  • pelespirit@sh.itjust.works · +7 · 18 hours ago

    But it’s hard to take that argument seriously after everything guys like Altman have been saying. It didn’t even start as late as 2022, either. Back in 2015, Altman said, “I think that AI will probably, most likely, sort of lead to the end of the world. But in the meantime, there will be great companies created with serious machine learning.”

  • Iconoclast@feddit.uk · +10/-19 · 13 hours ago

    The way I see it:

    • AGI is inevitable given enough time, assuming we don’t destroy ourselves some other way first.
    • It has the capacity to solve literally all our problems and make life on Earth as close to utopia as possible.
    • That same capacity, however, also enables it to end the human race - either intentionally or as a byproduct of misalignment.
    • If the “West” doesn’t build it first, then China will. There’s no second place in this race.
    • Even if all nation-states somehow agreed to stop its development, a rogue underground group would do it - or possibly some random dude in his mom’s basement.

    I genuinely see no solution to this. I can only hope things turn out well, or at the very least that it doesn’t happen during my lifetime. The genie isn’t going back into the bottle.

    • badgermurphy@lemmy.world · +1 · 7 minutes ago

      Even if you’re absolutely spot on, there is a second place, I believe. So far, all of these AI tools are software. Specialized hardware helps, but is not nearly as important as the software.

      Software is information, and secret information is only ever temporarily so. If that secret represents the distinction between existence and not, there is extreme pressure to learn that secret.

      “Two can keep a secret if one of them is dead.”

    • leftzero@lemmy.dbzer0.com · +3 · 4 hours ago

      AGI might or might not be inevitable, but LLMs are very evidently not a path leading to it.

      If someone really believes AGI is possible and will solve everything, they should be the first waging active war against this generation of “AI”, though at this point it’s almost certainly too late already.

      The future has been murdered for short term profit, and once the bubble pops it’ll take ages before anyone invests in anything remotely related to AI again, despite LLMs having absolutely nothing to do with AI.

      Not that investment would do any good during the dark ages that are to come while we sift through the remaining slop to try to find any remaining fragments of actual information, science, and culture.

    • Simulation6@sopuli.xyz · +18 · 10 hours ago

      AI is not something somebody is going to develop in their mom’s basement. AGI is NOT inevitable. The current models may grow sophisticated enough that it is hard to distinguish them from AGI, but they will still be LLMs.
      I see the current AI bubble as a bunch of guys digging a hole, realizing they can’t get out, and deciding the only way out is to keep digging.

      • Iconoclast@feddit.uk · +6/-5 · edited · 10 hours ago

        AI is not something somebody is going to develop in their moms basement. AGI is NOT inevitable.

        Plenty of AI systems have already been developed by private individuals on their personal computers. This is not hypothetical. And I’m not claiming that our first AGI will have anything to do with LLMs.

        I view AGI as inevitable because it’s the natural end goal of us incrementally improving our AI systems over a long enough period of time. As with all human-created technology, we will keep improving it. It doesn’t matter how slow the process is - as long as we keep heading in that direction, we will eventually reach the destination. The only things that could stop us, as far as I can see, are either destroying ourselves some other way before we get there or substrate independence - meaning general intelligence simply cannot be created without our biological wetware. I however see no reason to assume that, since human brains are made of matter just like computers are and I don’t think there’s anything supernatural about intelligence.

        • Simulation6@sopuli.xyz · +3 · 7 hours ago

          The term AI has been greatly diluted over time. I guess I should have said AGI instead.

          For your second point, I quote the Spartans: “If.” Current tech is hugely expensive.

    • Lydon_Feen@lemmy.world · +14/-1 · 11 hours ago

      “It has the capacity to solve literally all our problems and make life on Earth as close to utopia as possible.”

      Sure… If it wasn’t in the hands of people whose main purpose is to gather more money, resources, and power.

      It won’t solve all our problems. It will solve theirs.

    • IratePirate@feddit.org · +12/-5 · edited · 12 hours ago

      Good work, citizen! The tech bros need you to believe that their dumb digital parrots will eventually, magically metamorphose into AGI. It’s the only thing that keeps that sweet VC money flowing and the AI bubble from popping.

      • Iconoclast@feddit.uk · +7/-8 · edited · 12 hours ago

        I’m just going to ignore your completely uncalled-for smug and dismissive tone and note that at no point have I suggested LLMs will lead to AGI.

        Thank you for your contribution to making this platform a worse place for everyone.

        • DudeImMacGyver@kbin.earth · +2/-3 · 9 hours ago

          The irony of your response is strong. Also, you DID say that:

          I view AGI as inevitable because it’s the natural end goal of us incrementally improving our AI systems over a long enough period of time. As with all human-created technology, we will keep improving it. It doesn’t matter how slow the process is - as long as we keep heading in that direction, we will eventually reach the destination. The only things that could stop us, as far as I can see, are either destroying ourselves some other way before we get there or substrate independence - meaning general intelligence simply cannot be created without our biological wetware. I however see no reason to assume that, since human brains are made of matter just like computers are and I don’t think there’s anything supernatural about intelligence.

          It sounds like you’ve bought into techbro bullshit, but don’t realize it.

          • Iconoclast@feddit.uk · +4 · 9 hours ago

            Feel free to help me realize it then, because whatever irony or conflict you’re seeing there, I don’t see.

            • DudeImMacGyver@kbin.earth · +2/-3 · 9 hours ago

              Yes, I can see that.

              The “AI” that we have now is not actually AI, that’s just a marketing term. Actual experts (read: Not people like Sam Altman) point out that LLMs are severely flawed and will always return bad information. This problem is baked into the way these models function. Making what we’ve got into actual AI like you said isn’t going to happen, full stop.

              Don’t believe the horseshit you hear from people trying to sell something.

              • Iconoclast@feddit.uk · +5 · edited · 8 hours ago

                The “AI” that we have now is not actually AI

                This is simply false. We’ve had AI since 1956.

                AI isn’t any one thing. It’s a broad term used in computer science to refer to any system designed to perform a cognitive task that would normally require human intelligence. The chess opponent on an old Atari console is an AI. It’s an intelligent system - but only narrowly so. That’s called “narrow” or “weak” AI.

                It can still have superhuman abilities, but only within the specific task it was built for - like playing chess or generating language.

                A large language model like ChatGPT is also narrow AI. It’s exceptionally good at what it was designed to do: generate natural-sounding language. What people expect from it, though, isn’t narrow intelligence - it’s general intelligence: the ability to apply cognitive skills across a wide range of domains the way a human can. That’s something LLMs simply can’t do - at least not yet. Artificial General Intelligence is the end goal for many AI companies, but LLMs are not generally intelligent. However, they still fall under the umbrella of AI as a broad category of systems.

                Making what we’ve got into actual AI like you said isn’t going to happen, full stop.

                I’ve never claimed LLMs will lead to AGI as I stated in the comment you quoted above.

    • Tim@lemmy.snowgoons.ro · +2/-1 · 8 hours ago

      The thing is, all this can be true (and I don’t really understand why you’re being downvoted), but it’s also true that LLMs are no more evidence that we are close to AGI than ELIZA was.

      AGI is inevitable, but it won’t come from an LLM, and all the hype in that direction from Anthropic, OpenAI et al is just so much bullshit.

      The problem is, we don’t need AGI to experience the catastrophic consequences; as bad or worse will be idiotic human intelligences putting very-much-not-AGI in charge of things it has no right to be in charge of, because they drank their own Kool-Aid (or rather, the investors did). That, unfortunately, is the future we are speedrunning - Skynet never needed AGI, it just needs fucking idiots to put an LLM in charge of a weapons system.

      (As for AGI, my gut feeling is that it will come from the intersection of neural networks and quantum computing at scale - I’ll be filling my bunker with canned goods when the latter appears to be close on the horizon…)

      • Iconoclast@feddit.uk · +3/-2 · edited · 5 hours ago

        I’d say LLMs are not necessarily an indicator that we’re close to AGI, but they’re not a non-indicator either - certainly more of an indicator than the invention of the steam engine was. For narrowly intelligent systems, they’re getting quite advanced. We’re not there yet, but I worry that the moment we actually step into the zone of general intelligence might not be as obvious as one would think.

        However, I also don’t think there’s any basis to make the absolute claim that LLMs will never lead there, because nobody could possibly know that with that degree of certainty.

        And yeah, there are multiple ways to screw things up even with narrowly intelligent AI - we don’t need AGI for that.

        • Tim@lemmy.snowgoons.ro · +1 · 2 hours ago

          I mean, I’m not particularly bothered about convincing anyone else, but personally I am absolutely 100% sure that no technology that is cognizant of nothing but tokens of language (entirely arbitrary human language at that, far from any fundamental ground truth in itself), and entirely incapable of discerning any actual meaning from that language other than which tokens are likely to follow another, is ever, under any circumstances, going to lead to AGI.

          Yann LeCun is probably heading down a more realistic path to AGI with his world models - but for as long as my cat has a few orders of magnitude more synapses than Anthropic’s most world-beating model has parameters, I’m not going to get too stressed about that either.

      • Iconoclast@feddit.uk · +3/-2 · 11 hours ago

        Nobody could possibly know. That’s why I make no claims about the timeline.