• RestrictedAccount@lemmy.world
    link
    fedilink
    arrow-up
    1
    ·
    22 minutes ago

    I keep hearing how Claude is so much better and I need to try it. Furthermore, last night I had an AI Expert spend an hour telling me how Claude could just do things by itself and how great it is.

    So I paid the money got Claude. I gave it a task to summarize a bunch of press releases and put them into a newsletter. After spending a lot of time getting the formatting, perfect I went to fact check what had been written.

    About a third of it was totally hallucinated

    This was today

    All it had to do was read press releases and make a summary and it couldn’t do that without hallucinating a bunch of fake DEI facts that just weren’t true

    So I tried to give it the task of verifying all of its declared statements and it ran out of juice so I have to wait till it resets.

  • avg@lemmy.zip
    link
    fedilink
    arrow-up
    46
    ·
    8 hours ago

    My manager has gone 100% into AI where he might have been slightly skeptical at first, it’s slightly scary, I see many of the benefits of modern AI, specially in helping me deal with my ADHD, but I feel like what differentiates me from the masses is gone. Middling coding skills, doesn’t matter, ability to recall obscure knowledge I read or learned about years ago, doesn’t matter, why would I be a valued employee while still having to deal with the negative side effects of ADHD? On top of that, immigrants getting hunted for sport on the streets, I’m not doing too good.

    • Pokexpert30 🌓@jlai.lu
      link
      fedilink
      arrow-up
      1
      ·
      2 hours ago

      Being able to understand and follow the ai and tell it where to go next should be what make you differentiate from the mass. Hyperfocus on your task and run along the ai. It’s satisfying to execute and it exhausts you mentally like a good gym session. Unsure if it’s a given to everyone doe.

  • potatopotato@sh.itjust.works
    link
    fedilink
    arrow-up
    103
    ·
    14 hours ago

    I unfortunately work with AI and actually understand how it works. It’s going to replace workers the same way that cocaine replaces workers.

    It’ll make some knowledge workers moderately more productive but that excess will be absorbed like with any other tool and we’ll just do more shit as a society at the expense of continuing to destroy the environment.

    Once the bubble bursts and things calm down there will probably be some job growth as the economy figures out how to better utilize these new tools. It’s like if you invented a machine that could frame 60% of a house and brilliantly declared you’d fire all the framers but then realized you’re now building a lot of houses and need more framers than before to finish the remaining 40%.

    • redsand@infosec.pub
      link
      fedilink
      arrow-up
      14
      ·
      8 hours ago

      It’ll frame the whole house well enough for the layman but 40% will fail code compliance

    • ZkhqrD5o@lemmy.world
      link
      fedilink
      arrow-up
      11
      ·
      7 hours ago

      IMO, the only thing to be taken seriously with text generators should be natural language processing.

      • take this fat block of text and give me a bullet point list.
      • what are synonyms for X?
      • copy-paste a big TOS and tell me the key takeaways that are anti-customer.
      • take these documents and make one coherent document about one page long.
      • etc.

      The problem is that even with things like this, it frequently fails because it hyperfixates on some details while completely glossing over others, and it’s completely random if it does that or if it’s good, and this uncertainty basically necessitates that you check everything it outputs, negating much of the productivity that you gain.

      I once used it for a Python script, and I generated three outputs out of these three generations only. One regex function ended up in my real script, but I got the idea to use regex from it. And I used its output, which actually worked.

    • BarneyPiccolo@lemmy.today
      link
      fedilink
      arrow-up
      7
      arrow-down
      2
      ·
      8 hours ago

      You are thinking of office work, but there are a LOT of jobs that will be permanently replaced by AI-driven robotics, like fast food workers, retail shelf stockers, drivers, warehouse work, etc. Those are workers that can’t be easily trained UP, and many will likely become permanently unemployed.

      • Katana314@lemmy.world
        link
        fedilink
        English
        arrow-up
        12
        ·
        8 hours ago

        That has been happening for decades. It hasn’t actually made retail that much more automated, just massively reduced quality of service and quality of work for those remaining. Every store that has followed these methods still gets customers due to increased isolation and lack of choice, but no one likes going there.

        • BarneyPiccolo@lemmy.today
          link
          fedilink
          arrow-up
          3
          arrow-down
          2
          ·
          5 hours ago

          Yeah, they’ve been sneaking in industry killing technology for years. I had a nice career in the record business as a sales manager to retailers, until they shifted all music to easily pirated digital files on the Internet, closing 99% of the record stores in the country, and 1000s of people like me lost their jobs nearly overnight, without the media noticing it at all.

          The tech to make any fast food outlet almost fully robotic is available right now, and every fast food corporation has a plan to implement it some point, and fire all those pesky humans. The only reason they haven’t done it, is because they know there will be a huge outcry, and almost certainly a crippling boycott of whichever company dives in first.

          But make no mistake, as soon as one does it, they’ll ALL do it, and MILLIONS of fast food workers are going to lose their jobs. Teens, retirees, working moms, second incomes, etc., are all going to be in trouble.

      • sobchak@programming.dev
        link
        fedilink
        arrow-up
        2
        arrow-down
        1
        ·
        5 hours ago

        I don’t buy that. There’s little reason to automate those jobs because the labor is so cheap. And as someone who has worked most of those jobs in the past, most of those workers could be easily trained for different jobs; most are actively taking it upon themselves to train to get out of them.

        • BarneyPiccolo@lemmy.today
          link
          fedilink
          arrow-up
          4
          ·
          5 hours ago

          Labor is cheap? Most cities are approaching $15 an hour, and even those immoral states that keep it at the Federal minimum of $7.75, a robot is still going to be cheaper in the long run. Then there are benefits, payroll taxes, personal issues, schedules, etc. People are a pain in the ass, and expensive in a lot more ways than money.

          Besides, it almost certainly won’t be up to the franchisee. When corporate decides that they can be more efficient and more PROFITABLE with automation, the stores will go along with it, whether they like it or not.

          It’s not an if, it’s a when. It’s definitely going to happen.

    • WanderingThoughts@europe.pub
      link
      fedilink
      arrow-up
      37
      ·
      14 hours ago

      make some knowledge workers moderately more productive but that excess will be absorbed

      That seems to result in a higher burn out rate. The worker had to do more soul crushing check and verify work instead of doing knowledge work.

      • lightnsfw@reddthat.com
        link
        fedilink
        arrow-up
        8
        ·
        8 hours ago

        Can confirm. It’s not AI but probably 80% of my job is just emailing other people to do shit, emailing other people status updates about their work, and verifying their completed work which is frequently wrong. It sucks.

    • vikinghoarder@infosec.pub
      link
      fedilink
      arrow-up
      5
      arrow-down
      27
      ·
      12 hours ago

      I’m trying to figure out why everyone is so mad about AI?

      I’m still in the “wow” phase, marveled by the reasoning and information that it can give me, and just started testing some programming assistance which, with a few simple examples seems to be fine (using free models for testing). So I still can’t figure out why theres so much push back, is everyone using it extensively and reached a dead end in what it can do?

      Give me some red pills!

      • Anisette [any/all]@quokk.au
        link
        fedilink
        English
        arrow-up
        8
        ·
        7 hours ago

        I’m sure you must’ve heard about the disastrous effects on the environment and the electrical grid by now, as well as the crisis in computer parts (GPUs, RAM, SSDs and now also HDDs) caused by AI data centres. Besides this, AI output is polluting the internet. This can be used to very quickly spin up a lot of sites to support a narrative, or to fill a site with “content” that increases SEO to get advertiser money. This makes looking anything up these days almost impossible. This is especially a problem because AI is unreliable. AI works purely off of statistics and doesn’t have any conception of truth or falsehood, so it generates what is in philosophical terms called “bullshit” (real term!). Output can be true accidentally, but you are never getting “the truth”. This property has already been exploited by companies by generating data optimised for LLM consumption in order to advertise their products. Many chatbots are also built in a way that is very dangerous. They are optimised to keep you using them, which is often done by making them agree with pretty much everything you say. This has been shown multiple times to cause psychotic breakdowns in all kinds of people, even if they started out using it to, for example, write code. However, the group most at risk of this are the people using the bots as an alternative to a therapist. unfortunately, AI companies encourage this usage through initiatives like gpt health. It also turns out that AI dependence can harm your ability to learn certain things. This makes sense intuitively, a coder who relies on a chatbot to write parts of the code or to debug is less likely to develop those skills themselves. This is especially a problem with increased usage in schools. Yet more ethical problems arise with the image generation modes of AI, which, unfortunately(but unsurprisingly) turn out to be trained on like… A LOT of child porn. This has been one of the controversies with grok recently. Unfortunately, there is no real way to stop someone from asking for anything in the training data. Best you can do is either try to give negative incentive to the model or to hardecode in a bunch of phrases to automatically reject. This is a fundamental problem with the architecture. Generation of revenge porn, child porn and misinformation has run rampant. AI is also a privacy and security nightmare. Due to the fundamental architecture of AI models there is no way to distinguish between code and instructions. This means that “agents” can be injected with instructions to, for example, leak confidential data. This is a big problem with parties like hospitals attempting to integrate AI into their workflow. This is in addition to pretty much all models being run “in the cloud” due to the high costs associated with running a model. Speaking of costs, all of these models currently operate at a gigantic loss. They are currently essentially circulating an IOU between themselves and a few hardware companies(nvidia), but that cannot last forever. If any of these companies survive, they will be way more expensive to use than now. Many of the current companies are also pretty evil, being explicitly associated by figures like Peter Thiel, whose stated goal in life has been to end democracy. There are also some arguments surrounding copyright. While I do not want to strengthen copyright law and so will be careful around my comments on this topic, it is certainly true that ai often outputs essentially exactly someone elses work without crediting them.

        This is all I could think of off the top of my head, but there surely is more. hope this helps!

      • sobchak@programming.dev
        link
        fedilink
        arrow-up
        3
        ·
        5 hours ago

        I’m working with people that seem to try to offload a lot of their work to AI, and it’s shit, and making the project take longer and shittier. Then they do things like write documents in AI and expect people to read that nonsense, and even use AI to send long, useless Slack messages. In short, it’s been detrimental to the project.

      • Brainsploosh@lemmy.world
        link
        fedilink
        arrow-up
        17
        ·
        edit-2
        10 hours ago

        It doesn’t reason, and it doesn’t actually know any information.

        What it excels at is giving plausible sounding averages of texts, and if you think about how little the average person knows you should be abhorred.

        Also, where people typically can reason enough to make the answer internally consistent or even relevant within a domain, LLMs offer a polished version of the disjointed amalgamation of all the platitudes or otherwise commonly repeated phrases in the training data.

        Basically, you can’t trust the information to be right, insightful or even unpoisoned, while sabotaging your strategies and systems to sift information from noise.

        EtA: All for the low low cost of personal computing, power scarcity and drought.

        • undeffeined@lemmy.ml
          link
          fedilink
          arrow-up
          3
          ·
          7 hours ago

          The less you know how LLMs work, the more impressed you are by them. The clever use of the term AI seems like the culprit to me since it will most likely evoke subconscious associations with the AI we have seen portraid in entertainment.

          LLMs can be useful tools, when applied in restricted contexts and in the hands of specialists. This attempt to make it permeate every aspect of our lives is, in my honest opinion, insane

      • Passerby6497@lemmy.world
        link
        fedilink
        English
        arrow-up
        12
        ·
        9 hours ago

        I’m still in the “wow” phase, marveled by the reasoning and information that it can give me, and just started testing some programming assistance which, with a few simple examples seems to be fine (using free models for testing).

        AI is fine with simple programming tasks, and I use it regularly to do a lot of basic blocking out of functions when I’m trying to get something working quickly. But once I get into a specialty or niche it just shits the bed.

        For example, my job uses oracle OCI to host a lot of stuff, and I’ve been working on deployment automation. The AI will regularly invent shit out of whole cloth even knowing what framework I’m using, my normal style conventions, and a directive to validate all provided commands. I have literally had the stupid fuck invent a command out of thin air, then correct me after I tell it the command didn’t work about how that command didn’t exist and I needed to use some other command that doesn’t exist instead, or it gives me a wrong parameter list or something.

        Hell, even in much more common AD management tasks it still makes shit up. Like, basic MS admin work is still too much for the AI to do in its own.

        • Gibibit@lemmy.world
          link
          fedilink
          arrow-up
          1
          ·
          5 minutes ago

          AI correcting the user is so insane. I’ve had Mistral’s agentic ai (Vibe) confidently tell me that you can’t, in Unity3D, implicitly cast a MonoBehaviour to a bool. Even though this has been a feature since before Unity 2018.

          Not to mention I have to delete 90% of the code it generates because it doesn’t serve any function. I’m sure you can wrangle it to write some boilerplate for you but rest assured I’m not impressed by its capibilities.

      • skarn@discuss.tchncs.de
        link
        fedilink
        arrow-up
        7
        ·
        8 hours ago

        There are many reasons. My biggest problem with it is that it enables the productions of a incredible deluge of cheap shitty content (aka slop), sufficient to drown out a lot of more interesting decent work.

        This is coumpunded by big tech having decided that slop is preferable to real content. This leads to the general feeling that I’m drowning in an ocean of shit, and thus I dislike AI.

        • TrooBloo@lemmy.dbzer0.com
          link
          fedilink
          arrow-up
          4
          ·
          8 hours ago

          Specifically regarding open source software development (what you might call “small tech”), this has led to a huge amount of slop pull requests that make it difficult to run these projects.

      • OhneHose@feddit.org
        link
        fedilink
        arrow-up
        10
        ·
        11 hours ago

        Thing is with at least the programming part is: It good at common issues, as in it re invents the wheel really good. But context is king, the better the model knows what the data and task looks like the better it can solve the problem at hand. It won’t fix any niche problems or actually spit out performant code. It uses what’s publicly available as Ressource and it’s inherently uncreactive at problem solving. All the chat assistants effectively did for me is replace stackoverflow.

        These models only know how to re-produce already solved problems. There’s certainly great applications, like on the fly translation, summerizing and data extraction.

        But it still is just a probability machine, trained on satisfying it’s customer. That’s also why it will confidently spit out complete garbage and be proud about it. And that’s also a reason why the early models are shit at math, they don’t to math, they just guess. Later models write python or other code to do the math, that’s then f.e. called “thinking”.

        It will stay around but many many ai companies will fail, barely any are turning out profit, must just burn absolutely insane amounts of money in a circle jerk ai pit.

  • stoy@lemmy.zip
    link
    fedilink
    English
    arrow-up
    52
    ·
    edit-2
    15 hours ago

    Early last year I had to attend a company conference, it is a yearly thing where mgmt get to stand on a stage and have their peons aplaud them after telling everyone how amazing they are.

    That year was particularly insulting.

    The CEO brought up a person who he said had inspired him and how great the guy was.

    The only thing that guy spoke about was how proud he was to have moved high income jobs to low income countries.

    That, in front of a crowd of high income employees in a high income country.

    And we were expected to applaud him…

    And what is ridiculous was that most people genuinly did seem to enjoy the talk.

    Granted, this was a company in the finance sector, and I work in IT, but come on people, at least have the decency of looking uncomfortable when someone is happily talking about moving similar jobs to yours to other countries to your face.

    • LemmyLegume@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      5 hours ago

      I’m glad I’m not the only one that’s had to suffer through this kind of crap.

      My last company did a yearly circle-jerk session like this where the leadership literally recorded “intro bits” of themselves like you’d see on a jumbo-tron at a sporting event. You know, like a snappy montage shot of them nodding their head or waving or some shit with fire around it.

      And then they literally played these clips on a giant screen with intro music while they came out on stage. It was the cringiest, most ego-fueled, shit I’ve ever seen from a company that could barely manage their basic operations.

      • stoy@lemmy.zip
        link
        fedilink
        English
        arrow-up
        2
        ·
        4 hours ago

        Lol they did that this year here as well, used AI to make them look like superheroes, didn’t work…

    • lightnsfw@reddthat.com
      link
      fedilink
      arrow-up
      6
      ·
      8 hours ago

      We have those virtually a few times a year. There’s always some dickhead talking about how great they did at reducing payroll. I’m like, “Motherfucker, you are talking to the payroll.”

    • Meron35@lemmy.world
      link
      fedilink
      arrow-up
      19
      ·
      11 hours ago

      That tracks for finance though. Many in that industry are the grind hard in your 20s-30s, retire in Thailand in your 40s type.

    • KombatWombat@lemmy.world
      link
      fedilink
      arrow-up
      2
      arrow-down
      3
      ·
      7 hours ago

      Isn’t that good though? I also have a fairly high income and live in a high income country. Compared to people in poorer countries, we would be the upper class living very charmed lives. In fact, the US poverty line is at $15,000 in annual income, or just over $40 a day. But someone making this much would be richer than 83% of the world. People in less privileged countries should have better access to well-paying jobs to help mitigate the disparity.

  • Epp@lemmus.org
    link
    fedilink
    English
    arrow-up
    5
    arrow-down
    10
    ·
    6 hours ago

    The future is in agentic AI with a single developer for code review. Management tells the developer what they want, developer engineers the prompt, gives it to the AI agent that has complete access to the relevant projects and DB schema. It generates a change log, and the developer reviews it, asking for changes as needed.

    Huge teams are about to be consolidated, with a huge inflow of software engineers into the unemployment bin, and entire downstream economies are going to collapse from the resulting unemployment of previously high paying careers. We need Universal Basic Income yesterday.

    • vala@lemmy.dbzer0.com
      link
      fedilink
      arrow-up
      1
      ·
      2 hours ago

      It’s somewhat naïve to think we need a human in the loop, but only a single human.

      Sounds like your org mostly builds CRUD apps. Maybe this is how it will work in that space but I don’t see it happening in general.

    • mrgoosmoos@lemmy.ca
      link
      fedilink
      English
      arrow-up
      10
      ·
      5 hours ago

      constantly reviewing low quality work. kill me now.

      what godawful boring job that would be. I’d have absolutely no motivation to do it well

      • Epp@lemmus.org
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        1
        ·
        2 hours ago

        It’s only low quality until it isn’t. Have you used Gemin 3.1 Proi lately? Anthropic’s Opus 4.6?

        Everything looks like low-quality crap when you only use the free models from Microsoft and OpenAI. But I suspect you haven’t utilized the paid, professional models if you hold that opinion.

    • sacredfire@programming.dev
      link
      fedilink
      arrow-up
      7
      ·
      5 hours ago

      The near future? How is this a sustainable business model for any business? You just need one developer and “agentic” AI to build anything, how do you differentiate yourself?

      But before that problem, I don’t see the current tools anywhere near able to deliver on the hype. They are incredible and they have plenty of use cases, but for anything non trivial it feels like it’s more work fixing the errors they create than just doing it myself. I think I’d kill myself if I had to review and fix multiple agents worth of indecipherable code.

      All that being said, everyone still might get laid off! It doesn’t have to be good to crash the market.

      • Epp@lemmus.org
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        1
        ·
        2 hours ago

        What tools do you have experience with? Just the free ones? The crap from Microslop or OpenAI? I sincerely believe you’d change your opinion if you were using the professional products from companies that have created working models, but I imagine most people only have limited experience with those models, if any. Most use ChatGPT or Copilot and surmise it’s all inaccurate crap. They’d be right from that limited sample, but wrong about the market at large.

      • InputZero@lemmy.world
        link
        fedilink
        arrow-up
        1
        arrow-down
        1
        ·
        5 hours ago

        You differentiate yourself by being first. That’s partly why OpenAI and Sam Altmann are so fixated on bringing general AI to light. They know that if AGI is possible that the first one to reach it will see all the benefits. Second place gets nothing. Unfortunately it’s becoming more and more obvious that AGI is a dream and not actually possible.

    • vane@lemmy.world
      link
      fedilink
      arrow-up
      2
      ·
      4 hours ago

      As always all those claims have very big gaps: relevant projects, db schema, management know what they want. Dude there are multi billion dollar companies that take hundreds of millions of dollars to figure out what management want and you think they will replace it with single prompt ? It’s like talking to 5 year old.

      • Epp@lemmus.org
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        1
        ·
        2 hours ago

        That’s why you still need one skilled developer per project as the middle man, for requirement elicitation. You don’t have to believe me, no skin off my nose, but I’m with an organization that’s making it work exactly as described. Months of work done in weeks instead.

        • vane@lemmy.world
          link
          fedilink
          arrow-up
          1
          ·
          edit-2
          44 minutes ago

          8 weeks is still months. If your code looks like your math then RIP, LLM junkie.

    • Epp@lemmus.org
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      5
      ·
      5 hours ago

      Sorry, missed this was in ShitPost. Please, allow me to revise: Dang clankers, Deytükurjerbs!