• vikinghoarder@infosec.pub
    link
    fedilink
    arrow-up
    5
    arrow-down
    27
    ·
    14 hours ago

    I’m trying to figure out why everyone is so mad about AI?

    I’m still in the “wow” phase, marveled by the reasoning and information that it can give me, and just started testing some programming assistance which, with a few simple examples seems to be fine (using free models for testing). So I still can’t figure out why theres so much push back, is everyone using it extensively and reached a dead end in what it can do?

    Give me some red pills!

    • Anisette [any/all]@quokk.au
      link
      fedilink
      English
      arrow-up
      8
      ·
      8 hours ago

      I’m sure you must’ve heard about the disastrous effects on the environment and the electrical grid by now, as well as the crisis in computer parts (GPUs, RAM, SSDs and now also HDDs) caused by AI data centres. Besides this, AI output is polluting the internet. This can be used to very quickly spin up a lot of sites to support a narrative, or to fill a site with “content” that increases SEO to get advertiser money. This makes looking anything up these days almost impossible. This is especially a problem because AI is unreliable. AI works purely off of statistics and doesn’t have any conception of truth or falsehood, so it generates what is in philosophical terms called “bullshit” (real term!). Output can be true accidentally, but you are never getting “the truth”. This property has already been exploited by companies by generating data optimised for LLM consumption in order to advertise their products. Many chatbots are also built in a way that is very dangerous. They are optimised to keep you using them, which is often done by making them agree with pretty much everything you say. This has been shown multiple times to cause psychotic breakdowns in all kinds of people, even if they started out using it to, for example, write code. However, the group most at risk of this are the people using the bots as an alternative to a therapist. unfortunately, AI companies encourage this usage through initiatives like gpt health. It also turns out that AI dependence can harm your ability to learn certain things. This makes sense intuitively, a coder who relies on a chatbot to write parts of the code or to debug is less likely to develop those skills themselves. This is especially a problem with increased usage in schools. Yet more ethical problems arise with the image generation modes of AI, which, unfortunately(but unsurprisingly) turn out to be trained on like… A LOT of child porn. This has been one of the controversies with grok recently. Unfortunately, there is no real way to stop someone from asking for anything in the training data. Best you can do is either try to give negative incentive to the model or to hardecode in a bunch of phrases to automatically reject. This is a fundamental problem with the architecture. Generation of revenge porn, child porn and misinformation has run rampant. AI is also a privacy and security nightmare. Due to the fundamental architecture of AI models there is no way to distinguish between code and instructions. This means that “agents” can be injected with instructions to, for example, leak confidential data. This is a big problem with parties like hospitals attempting to integrate AI into their workflow. This is in addition to pretty much all models being run “in the cloud” due to the high costs associated with running a model. Speaking of costs, all of these models currently operate at a gigantic loss. They are currently essentially circulating an IOU between themselves and a few hardware companies(nvidia), but that cannot last forever. If any of these companies survive, they will be way more expensive to use than now. Many of the current companies are also pretty evil, being explicitly associated by figures like Peter Thiel, whose stated goal in life has been to end democracy. There are also some arguments surrounding copyright. While I do not want to strengthen copyright law and so will be careful around my comments on this topic, it is certainly true that ai often outputs essentially exactly someone elses work without crediting them.

      This is all I could think of off the top of my head, but there surely is more. hope this helps!

    • sobchak@programming.dev
      link
      fedilink
      arrow-up
      3
      ·
      7 hours ago

      I’m working with people that seem to try to offload a lot of their work to AI, and it’s shit, and making the project take longer and shittier. Then they do things like write documents in AI and expect people to read that nonsense, and even use AI to send long, useless Slack messages. In short, it’s been detrimental to the project.

    • Brainsploosh@lemmy.world
      link
      fedilink
      arrow-up
      18
      ·
      edit-2
      11 hours ago

      It doesn’t reason, and it doesn’t actually know any information.

      What it excels at is giving plausible sounding averages of texts, and if you think about how little the average person knows you should be abhorred.

      Also, where people typically can reason enough to make the answer internally consistent or even relevant within a domain, LLMs offer a polished version of the disjointed amalgamation of all the platitudes or otherwise commonly repeated phrases in the training data.

      Basically, you can’t trust the information to be right, insightful or even unpoisoned, while sabotaging your strategies and systems to sift information from noise.

      EtA: All for the low low cost of personal computing, power scarcity and drought.

      • undeffeined@lemmy.ml
        link
        fedilink
        arrow-up
        3
        ·
        8 hours ago

        The less you know how LLMs work, the more impressed you are by them. The clever use of the term AI seems like the culprit to me since it will most likely evoke subconscious associations with the AI we have seen portraid in entertainment.

        LLMs can be useful tools, when applied in restricted contexts and in the hands of specialists. This attempt to make it permeate every aspect of our lives is, in my honest opinion, insane

    • Passerby6497@lemmy.world
      link
      fedilink
      English
      arrow-up
      12
      ·
      11 hours ago

      I’m still in the “wow” phase, marveled by the reasoning and information that it can give me, and just started testing some programming assistance which, with a few simple examples seems to be fine (using free models for testing).

      AI is fine with simple programming tasks, and I use it regularly to do a lot of basic blocking out of functions when I’m trying to get something working quickly. But once I get into a specialty or niche it just shits the bed.

      For example, my job uses oracle OCI to host a lot of stuff, and I’ve been working on deployment automation. The AI will regularly invent shit out of whole cloth even knowing what framework I’m using, my normal style conventions, and a directive to validate all provided commands. I have literally had the stupid fuck invent a command out of thin air, then correct me after I tell it the command didn’t work about how that command didn’t exist and I needed to use some other command that doesn’t exist instead, or it gives me a wrong parameter list or something.

      Hell, even in much more common AD management tasks it still makes shit up. Like, basic MS admin work is still too much for the AI to do in its own.

      • Gibibit@lemmy.world
        link
        fedilink
        arrow-up
        1
        ·
        2 hours ago

        AI correcting the user is so insane. I’ve had Mistral’s agentic ai (Vibe) confidently tell me that you can’t, in Unity3D, implicitly cast a MonoBehaviour to a bool. Even though this has been a feature since before Unity 2018.

        Not to mention I have to delete 90% of the code it generates because it doesn’t serve any function. I’m sure you can wrangle it to write some boilerplate for you but rest assured I’m not impressed by its capibilities.

    • skarn@discuss.tchncs.de
      link
      fedilink
      arrow-up
      7
      ·
      10 hours ago

      There are many reasons. My biggest problem with it is that it enables the productions of a incredible deluge of cheap shitty content (aka slop), sufficient to drown out a lot of more interesting decent work.

      This is coumpunded by big tech having decided that slop is preferable to real content. This leads to the general feeling that I’m drowning in an ocean of shit, and thus I dislike AI.

      • TrooBloo@lemmy.dbzer0.com
        link
        fedilink
        arrow-up
        4
        ·
        10 hours ago

        Specifically regarding open source software development (what you might call “small tech”), this has led to a huge amount of slop pull requests that make it difficult to run these projects.

    • OhneHose@feddit.org
      link
      fedilink
      arrow-up
      10
      ·
      13 hours ago

      Thing is with at least the programming part is: It good at common issues, as in it re invents the wheel really good. But context is king, the better the model knows what the data and task looks like the better it can solve the problem at hand. It won’t fix any niche problems or actually spit out performant code. It uses what’s publicly available as Ressource and it’s inherently uncreactive at problem solving. All the chat assistants effectively did for me is replace stackoverflow.

      These models only know how to re-produce already solved problems. There’s certainly great applications, like on the fly translation, summerizing and data extraction.

      But it still is just a probability machine, trained on satisfying it’s customer. That’s also why it will confidently spit out complete garbage and be proud about it. And that’s also a reason why the early models are shit at math, they don’t to math, they just guess. Later models write python or other code to do the math, that’s then f.e. called “thinking”.

      It will stay around but many many ai companies will fail, barely any are turning out profit, must just burn absolutely insane amounts of money in a circle jerk ai pit.