• SabinStargem@lemmy.today · 35 minutes ago

    AI isn’t the problem, it is just an excuse to abuse and gaslight people. If AI didn’t exist, some other card would be played.

    Instead of destroying the looms, we should take them over and make our own products. AI can be incredibly useful and might allow cottage industries and smaller communities to become strong enough to contest the powers above us. The big constraints are just the affordability of local hardware and the development of sufficiently powerful models.

    Things are moving quickly, especially in the local AI space. Two years ago, fitting a 70b on my hardware was difficult: it had 4k of context capacity, could take an hour to produce output, really sucked at calculating numbers, and was censored. Now a 122b can be uncensored, allows for 256k context, takes less than two minutes to output a lengthy response, and is much smarter.

    What I am saying is that we shouldn’t reject the power of AI. We should use it ourselves and become the equals of the elite. If we foolishly abandon power, the wealthy will just continue bullying us.

    • ImmersiveMatthew@sh.itjust.works · 3 minutes ago

      I agree, and would add for others reading here that the Luddites’ issue was not the looms they destroyed, but the out-of-control inequality that the government was not addressing. As a society, we need to stop blaming AI for job loss and instead get governments to help with the transition, which so far they have largely been inactive on.

  • homoludens@feddit.org · 55 minutes ago

    Yeah, well, this isn’t a democracy where people have a say in what happens in our society. Our feudal elite decides what will happen, so stop complaining.

  • anomnom@sh.itjust.works · 2 hours ago

    It’s not AI. It’s LLMs, which don’t actually think in any meaningful way. They just repeat whatever they have ingested and whatever was most mathematically likely.

    That’s why I’m a pessimist about LLMs doing anything truly revolutionary. They’re another productivity tool to solve problems that shouldn’t exist in the first place, and middle-management loves them for the same fucking reason.

  • switcheroo@lemmy.world · 8 hours ago

    AI isn’t being used to better society or to improve lives. It’s being used to drain us and make the Epstein class more undeserved money.

  • YoureHotCupCake@lemmy.world · edited · 7 hours ago

    It is outrageous what is happening with AI right now. I work for a large company that does contracts with the US government, specifically for the VA. Not only did they just lay off a bunch of people, but they also announced that we are being required to use AI in every step of our workflow, and they have decided AI is so great that they now have people who have never been coders a day in their life doing development work. The guy whose job it was to create and manage schedules is now being required to use AI to write code and ship it. These AIs are wrong so, so often that it’s crazy this is the direction we are going in. If you thought things were bad already, it’s about to get way worse.

    I am so deeply sorry to all the vets who will struggle to get the healthcare they need because of this. We don’t want to do this either, but it’s clear as day they will fire us and replace us with any warm body, regardless of whether that person has actual experience or not. I am looking to leave, but the market is complete dog shit and it’s been a struggle to get any kind of response to applications.

    • Mac@mander.xyz · 7 hours ago

      Hmmm…
      How about moderation of lemmy users based on suspected political affiliation according to an LLM?

      • Casterial@lemmy.world · 3 hours ago

        Do they use an LLM to moderate? Reddit does, and it doesn’t have context, and it’s how I got a permanent ban lol

        • Mac@mander.xyz · 1 hour ago

          One party claimed such, but the (AI-supporting) accused denied it.

        • Tollana1234567@lemmy.today · edited · 2 hours ago

          Reddit does use it; I suspect they are using Google’s version and/or OpenAI’s. That’s why there have been so many AI-generated messages after you get banned. Reddit (admin/spez) realized that this alerts people to its AI usage, so they use shadowbans instead now.

          The AI response from a sitewide ban usually goes like this: “Your account has been banned due to violation(s), please refer to the TOS.” It also doesn’t tell you what the ban is for, so they keep it nebulous enough that you can’t appeal it.

      • frongt@lemmy.zip · 11 hours ago

        I just tried it to see if it could implement a ping scanner in Python. It could, but only if it blocked the GUI while running. That kind of thing is an intermediate-level school assignment. It’s not even half bad; it’s maybe 15% not bad.
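
        For what it’s worth, keeping the scan off the UI thread is a small fix. Below is a hypothetical sketch (not the model’s output): the `scan` and `fake_probe` names and the host list are made up for the demo, and a real probe would shell out to something like `ping -c 1 -W 1 <host>`.

```python
# Hypothetical sketch of a non-blocking ping scan: probes run on a
# thread pool. In a real GUI you would submit() each host and poll
# future.done() from the event loop instead of waiting on results.
from concurrent.futures import ThreadPoolExecutor

def scan(hosts, probe, workers=32):
    """Probe every host concurrently; return {host: is_up}."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(hosts, pool.map(probe, hosts)))

def fake_probe(host):
    # Stand-in for a real ICMP/TCP check (made up for the demo).
    return host.endswith(".1")

hosts = [f"192.168.0.{i}" for i in range(1, 5)]
print(scan(hosts, fake_probe))
```

        The pool bounds concurrency, so even a /24 sweep stays at `workers` simultaneous probes.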

  • Svengarlic @lemmy.world · 7 hours ago

    If most are against something, how can twice as many feel something else? Isn’t most more than half?

      • Svengarlic @lemmy.world · 6 hours ago

        While not mutually exclusive, you are limited by the total population of respondents. If 60% of people say it’s too fast, wouldn’t it require 120% of that same population to double it?

        • Infinite@lemmy.zip · 3 hours ago

          (Most Americans say AI development is moving too fast) and (twice as many are AI pessimists as AI optimists)
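
          With made-up shares (not the survey’s actual numbers), it’s easy to check that the two findings answer different questions and can both hold at once:

```python
# Hypothetical shares, invented only to show the two stats don't conflict.
too_fast = 0.60                      # Q1: "is AI development moving too fast?"
pessimists, optimists = 0.40, 0.20   # Q2: overall outlook on AI

assert too_fast > 0.5                # "most" only needs a majority on Q1
assert pessimists == 2 * optimists   # the 2x claim compares subgroups of Q2
assert pessimists + optimists <= 1.0 # the rest of Q2's respondents are neutral
print("both findings can hold at once")
```
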

  • Snapdragon@lemmy.world · 11 hours ago

    Besides medical science, I see no use for AI. People make excuses about it being “more accessible” for disabled people, but you could replicate those features without AI.

    • bridgeenjoyer@sh.itjust.works · 11 hours ago

      It’s the equivalent of using an 80 lb sledgehammer on a penny nail: swinging wildly, missing 99% of the time, hitting your own shins, but 1% of the time it worked, so it’s definitely good and the right thing to do!

  • Imgonnatrythis@sh.itjust.works · 9 hours ago

    I don’t understand the question, and I’m guessing people in the survey may not have either. Moving too fast as in using too many physical resources without first focusing on optimization, or as in “OMG, the robots are coming for my job!”? These are very different views on the technology that could give the same answer.

        • frongt@lemmy.zip · 7 hours ago

          It’s an opinion question. They’re trying to gather opinions and feelings, not measure quantitative data about each person.

          • Imgonnatrythis@sh.itjust.works · 7 hours ago

            It’s just a survey-writing thing. A good survey can focus on these subjective issues but still produce potentially actionable results. This question is akin to asking, “Do you think food is too spicy?”

  • tal@lemmy.today · edited · 12 hours ago

    I’ve got some pessimistic views as to long-term AI concerns — I’m not sure that aligning advanced AI goals with human goals is a solvable problem in the long run. We may not be able to achieve Friendly AI. I could believe that.

    But I certainly don’t think that AI development is “moving too fast”. There’s not really anything to gain in slowing down development. I remember Elon Musk proposing a six-month moratorium on development; that doesn’t make any sense, and it would only be something you’d want to do if you had an immediate milestone you believed carried major risk. In general, either AI is something you should ban globally because it’s too much of an existential risk for humanity, with all development halted and that halt enforced, or it’s something you’d like to achieve as soon as possible. We are not at a point where there is a consensus that that level of unacceptable risk exists, nor a global commitment to enforcing such a prohibition.

    I can believe that there might be an excess of infrastructure development in particular, and that the research side might not be moving quickly enough to support it. Like, we might be misallocating by buying a lot of specific chips without establishing that those chips are going to provide a worthwhile return. But in terms of the technology advancing… no, can’t agree there.

    • tal@lemmy.today · edited · 11 hours ago

      And…let me make it even more concrete. I’d say that there are basically two scenarios:

      1. We establish that AI — for some definition of AI — is simply too dangerous for humanity to have. In that case, the right path is to ban AI globally. That means that nobody gets it. Some coalition of countries is going to have to be willing to attack anyone who tries developing it. In that case, what we have is effectively an arms control restriction baked into customary international law. It is not optional to participate. And, for all the future of humanity, we need to be willing to enforce that. It means that we need a viable verification protocol to ensure that nobody is developing it, as is normally the case for arms treaties. And everyone has to submit to that verification protocol.

      2. We don’t. In that case, we want to develop AI sooner rather than later.

      I am certainly not willing to say that #2 is the “right” scenario and #1 is the “wrong” one. But if we decide on #1, that comes with a lot of things that we need to be doing as a species. It’s not just going to be the pre-computer-era status quo persisting, where our limited state of technology was what maintained the situation.

      EDIT: I’d also add that, just as I’m not sure Friendly AI is a solvable problem, I’m also not sure it’s really viable to have a verification protocol that prevents the development of AI. Past arms control treaties where verification was, I think, much easier — it’s hard to hide the development of major warships under the Washington Naval Treaty, for example, yet there were still parties evading restrictions — were not always successful. #1 comes with its own set of hard problems too. Are parallel compute processors legal? What about their development and production? Under what restrictions are they used? Is it possible to achieve advanced AI using CPUs (my guess is that it likely is)? If so, what new restrictions will need to be placed on use of and access to CPUs? How will we identify entities building production facilities for CPUs and GPUs? Will we need to track all existing CPUs and GPUs to identify entities who might be stockpiling them? How will we monitor what the great stores of those out there now are being used for?

      If we go with #1, that also entails a different world from the one that we live in today.