• inari@piefed.zip · 58 points · 1 day ago

    Here’s one theory. According to critics, it benefits AI companies to keep you fixated on apocalypse because it distracts from the very real damage they’re already doing to the world.

    I don’t think that’s really it.

    I think they make these grandiose claims just to hype their product for investors, so people won’t focus on how unreliable and inaccurate these LLMs are.

    • kinsnik@lemmy.world · 15 points · 1 day ago

      Yeah, it’s so people think the AI companies are seeing the next, not-yet-public versions and are scared: those models must be so powerful, right?

      Altman has been claiming ChatGPT made him feel dumb since 4.5.

  • andallthat@lemmy.world · 10 points · edited 21 hours ago

    They want to create urgency and FOMO. That way:

    1. Investors throw all their money at the incredible, fast-growing new shiny tech before they can stop and think about trivial things like how much it costs or whether it actually does anything useful.

    2. AI companies can continuously flood the zone with announcements of incredible new feats of intelligence by their LLMs. By the time studies come out showing that those feats were not so impressive after all, they have already released two newer, more powerful models capable of even more impressive (real or invented) feats.

    3. AI companies can try positioning themselves as the “good, ethical guys” that you have to root for (and give all your money to), because the alternative is for the bad, unethical guys to create this AGI with no guardrails that will destroy the world. It’s “we can’t stop, because if we stop, someone else will do it.”

    4. This kind of pressure works on governments too. We can’t let China/the US/Iran/Russia (pick your specific adversary) control this potentially destructive technology first!

    5. Things that scare us regular humans make the rich and powerful salivate. We are scared of losing our jobs; they are happy to cut personnel costs (see… well, just about everyone in tech). We are scared AI can create a surveillance state; they want to sell surveillance tech to companies and governments (see Palantir). “This tech makes regular people afraid” is music to the ears of the 0.1%.

  • Grimy@lemmy.world · 8 points · edited 24 hours ago

    It’s regulatory capture. They scream about how it’s super dangerous for three years. The politicians get lobbied so the public is “protected”; then open-source models (especially the evil Chinese ones) get banned, and high-end models are only available through subscription services.

  • ChicoSuave@lemmy.world · 3 points · 1 day ago

    It’s part of the sales pitch: turn compute into a utility and rate-limit people’s access to the technology unless they’re subscription-paying members of the herd.

  • Iconoclast@feddit.uk · 3 up / 4 down · edited 21 hours ago

    I worry about AI itself, not the companies developing it. Back when I started worrying about it 12 years ago, influenced by Stuart Russell and Nick Bostrom, I was expecting it to take at least 50 years before we had AI resembling what we have now, so suffice it to say that the fact that we’re here already doesn’t exactly ease my worry.

    I’ve yet to hear a single convincing argument against the idea that even attempting to create something more intelligent than us is a really bad idea - very likely to be our last bad idea ever. Whether Mythos is actually as capable as Anthropic claims is beside the point for me. Even if it’s not, it’s only a matter of time until someone creates one that is.

    • XLE@piefed.social · 2 points · edited 18 hours ago

      Those worries are manufactured by the AI industry. You can’t just imagine a doomsday scenario and then shift the burden of proof onto people to disprove it.

      Nick Bostrom is involved in some creepy child indoctrination stuff, along with known sex abuser and Rationality cult creator Eli Yudkowsky:

      “To give an example of how swiftly teenagers are recruited and rewarded for their participation in EA: one 17-year-old recounts how in the past year since they became involved in EA, they have gained some work experience at Bostrom’s FHI; an internship at EA organization Charity Entrepreneurship; attended the EA summer program called the European Summer Program on Rationality (ESPR)…”

      IIRC, you were a Yudkowsky fan, weren’t you?