A Google Gemini-powered AI agent was given free rein to run a coffee shop in Sweden, and is quickly burning through its budget.

    • lIlIlIlIlIlIl@lemmy.world · 2 hours ago

      Genuine curiosity:

      You’re of course allowed to be mad at techbros and capitalism, but this feels like getting mad at the technology itself, which I can’t quite make sense of.

      It’s a wonderful and fascinating technology that has real value and purpose when used correctly.

      Is it a conflation of techbros + the new tech that everyone’s reacting to, or are we actually mad at the tech itself?

      Thanks so much in advance for any constructive answers

      • mnemonicmonkeys@sh.itjust.works · 1 hour ago

        LLMs are a technological dead end. They aren’t interesting in the slightest, as anything they can do is already done more effectively and efficiently with other tools.

        • blargh513@sh.itjust.works · 34 minutes ago

          Huh?

          I think people just need to reset their expectations.

          I asked one for help interpreting how a PCI policy applied (credit card regulatory stuff). I gave it the situation and it provided a good answer that our compliance team agreed with when I ran it by them.

          That saved me a lot of time. I don’t see how that’s a dead end. Then I had it draft a response to the person asking questions; I tuned it a little to my liking and sent it. What might have taken me an hour before took 10 minutes. This seems like a helpful thing, not a bad thing. I’m not sure what other technology would have done that.

        • ericwdhs@discuss.online · 49 minutes ago (edited)

          I think LLMs are an interesting technology. Of course, the output is inherently untrustworthy, and that rules out a ton of applications tech bros are trying to cram it into.

      • 🌸𝓯𝓵𝓸𝔀𝓮𝓻🌸@sh.itjust.works · 59 minutes ago

        First it’s the tech bros using a tech for something it wasn’t meant for and continuously lying about it. That causes a backlash and makes people hate the tech itself, because it’s being used where it causes friction.

        • ericwdhs@discuss.online · 26 minutes ago

          Yeah, it really sucks, because LLM tech itself is amazing. Quantifying language and ideas into what’s basically a massive queryable concept map is a huge achievement. And what do the tech giants decide to do with that achievement? Shove it into every little place it doesn’t belong, making everyone hate it.

          Oh well, I’ll keep backing up the interesting local open-source models people make and playing with them in the corner.

  • nyan@lemmy.cafe · 2 hours ago

    No surprises here. Well, at least the items it ordered this time were kinda-sorta-maybe-almost plausible to stock at a café, unlike the tungsten cubes in the vending machine.

  • frongt@lemmy.zip · 3 hours ago

    Café barista Kajetan Grzelczak sees it differently. “All the workers are pretty much safe,” he told the AP. “The ones who should be worried about their employment are the middle bosses, the people in management.”

    This shows that AI can’t do that job either.

    • 13igTyme@piefed.social · 3 hours ago

      I wonder if AI would actually be good at replacing CEOs and other C-suite positions, but was deliberately trained not to be, because tech CEOs are the ones in control of this bubble.

      • michaelmrose@lemmy.world · 1 hour ago

        Tells me you’ve never used it and had it deliver an extremely convincing analysis that turns out to be pants-on-head stupid when you dig into the nitty gritty. It is only useful if you can continually watch its output and make it redo anything that is nonsense, and no, the AI can’t watch itself; it will happily confirm that its nonsense is great. It needs either manual, continual review or guardrails that tell it when it’s wrong. That’s why it can be used for software: tests and error messages can catch it fucking up. Real life lacks such affordances.
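The guardrail loop being described can be sketched in a few lines (a hypothetical illustration; `call_model` is a fake stand-in for any LLM API, and the arithmetic check stands in for a real test suite or compiler):

```python
# Sketch of the guardrail idea: never trust raw model output; run it
# through a cheap mechanical check and force a redo on failure.
def call_model(prompt: str, attempt: int) -> str:
    # Fake model: confidently wrong on the first try, right on the second.
    return "2 + 2 = 5" if attempt == 0 else "2 + 2 = 4"

def looks_valid(answer: str) -> bool:
    """Guardrail: a mechanical check, like a unit test or error message."""
    lhs, rhs = answer.split("=")
    return eval(lhs) == int(rhs)

def answer_with_guardrail(prompt: str, max_retries: int = 3) -> str:
    for attempt in range(max_retries):
        answer = call_model(prompt, attempt)
        if looks_valid(answer):  # only accept output that passes the check
            return answer
    raise RuntimeError("model never produced a valid answer")

print(answer_with_guardrail("what is 2 + 2?"))  # 2 + 2 = 4
```

The point of the sketch is that the loop only works where a `looks_valid`-style check exists; running a café has no equivalent of a failing test.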

    • AskMeForADickPic@lemmy.world · 3 hours ago

      Yes, but it is training on this and as a result should get better. AI was bad at everything until it stole the internet and used it for training.

      • teyrnon@sh.itjust.works · 2 hours ago

        It’s an LLM though, not really AI, and it hasn’t really gotten “better” than automated programs that make decisions based on metrics, which would outperform LLMs as a CEO.

      • Jesus_666@lemmy.world · 2 hours ago

        Mind you, stealing the internet worked because they effectively had the sum total of human knowledge as a training set. I don’t think that there’s nearly as much detailed data on the minutiae of running a business.

      • atomicbocks@sh.itjust.works · 2 hours ago

        There is no model that can be trained in real time currently, and one instance isn’t going to offer anything to the model as far as new training data goes.

  • N0t_5ure@lemmy.world · 2 hours ago

    God, I’m so sick of AI that I feel like a Luddite. I used to be a tech nerd and enjoy the cutting edge of developing technologies. Now I just wish we could go back in time.

    I think the problem isn’t so much the developing technology as the way it is being crammed down our throats whether we want it or not. Everywhere I look I’m inundated with AI slop. YouTube has gotten ridiculous: I used to be able to find interesting content fairly easily, but now every search is full of an endless array of AI slop from brand-new accounts with only a few hundred followers. Anything good has been buried by 10,000 AI-generated ripoffs.

    Maybe someday AI will come into its own, but it is nowhere near there now, and I am so, so tired of having to deal with it. It’s like the entire world is being turned into one of those useless automated customer service phone lines that you’re stuck navigating until you’re put on hold for 30 minutes when you ask to speak to a human.

    • Zier@fedia.io · 49 minutes ago

      The problem is, AI is being used as a replacement for informed decisions/information, but it was never properly trained on how to be factual or make responsible adult decisions. AI is literally a global spam bot/virus that has infected Earth worse than Covid ever could. And the people pushing it on us are worse than anti-vax/anti-maskers.

  • tidderuuf@lemmy.world · 4 hours ago

    Has anyone thought that maybe training an AI on a group of people that spend the majority of their lives communicating online might not be the best group to emulate in the real world?

  • felixwhynot@lemmy.world · 2 hours ago

    Counterpoint: put AI in charge of big corpos immediately, drive them bankrupt. As a bonus you don’t have to pay CEO salary to do it! Win/win!

    • mnemonicmonkeys@sh.itjust.works · 1 hour ago

      As a bonus you don’t have to pay CEO salary to do it!

      That alone would be a huge bump in profitability. Hell, just make it employee-owned so the workers see the benefits

  • Pennomi@lemmy.world · 3 hours ago

    When old memory of ordering stuff is out of the context window, she completely forgets what she has ordered in the past

    Look I agree that AI is probably a terrible business manager… but this is irresponsible design on the researcher’s part. AI breaks past the context window with tool calling. If it doesn’t have a list inventory tool, it will obviously fail to do this correctly.

    These techniques are built into virtually every coding harness today; if you’re not using them in a business harness, that’s negligent.
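The tool-calling fix being described can be sketched roughly as follows (all names, such as `list_inventory`, are illustrative assumptions, not details from the article):

```python
# Hypothetical sketch: orders are written to a durable ledger, and the
# agent is given a list_inventory tool to call before ordering, so
# nothing depends on what still fits in the context window.
import json

def record_order(ledger: dict, item: str, qty: int) -> None:
    """Persist an order so the agent never has to 'remember' it."""
    ledger[item] = ledger.get(item, 0) + qty

def list_inventory(ledger: dict) -> str:
    """The tool the agent would call to see everything ever ordered."""
    return json.dumps(ledger, sort_keys=True)

ledger = {}  # in practice this would live in a database or file
record_order(ledger, "oat milk", 12)
record_order(ledger, "espresso beans", 4)
record_order(ledger, "oat milk", 6)  # weeks later, long out of context

print(list_inventory(ledger))  # {"espresso beans": 4, "oat milk": 18}
```

With a tool like this, "what have I already ordered?" becomes a lookup instead of a memory test, which is the whole point of the criticism above.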

  • bluGill@fedia.io · 3 hours ago

    I wonder how each of us would do with the same 20k seed money? I’m sure some of us know something about managing a coffee shop and would do okay - but a lot of us don’t know much about it and would make a lot of stupid mistakes as well.

    • otacon239@lemmy.world · 3 hours ago (edited)

      The difference is that you’re far less likely to be asked what someone should do to manage their coffee shop. Imagine a coffee shop manager asking you what they should do to improve their business.

      People got it in their heads that AI is an expert in these fields, but at best, I’d guess it has high school plus a couple years of gen-ed college courses, without any of the applicable life experience. I wouldn’t ask that person a damn thing about a specialty, and I certainly wouldn’t hire them to own or manage a business out of the gate.

    • bright@piefed.social · 2 hours ago

      I don’t know if that’s true, especially in comparison to AI. I think a competent random human would do research before taking charge of a coffee shop and be in reasonably good shape from day one. For sure some mistakes would be made, but I think generally the operation would run okay.

      But all of that misses the key difference: a human doing this wouldn’t be a random person; they would usually have relevant past experience, like previously being an assistant manager at a coffee shop. So they would manage the shop way better than this AI did.

      Maybe if they create an AI that has been specially designed to manage a business, then it might perform as well as or better than a human, possibly. But just throwing a standard AI into the role is going to work much less well than a human.

      • frongt@lemmy.zip · 2 hours ago

        More importantly, even if they didn’t have experience, they’d start learning as soon as they started the job. LLM chatbots have an extremely limited “memory”: if you tell one something today, that info may be completely gone tomorrow.
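That limited "memory" is easy to picture as a fixed-size sliding window over the conversation (the window size and messages here are made up for illustration):

```python
# The model only "sees" the last N messages; anything older is
# silently dropped, no matter how important it was.
from collections import deque

CONTEXT_WINDOW = 3  # max messages visible at once (illustrative)

history = deque(maxlen=CONTEXT_WINDOW)

for msg in ["I ordered 10 kg of beans",  # the crucial fact...
            "What's the weather?",
            "Tell me a joke",
            "Summarize today"]:          # ...pushes the fact out
    history.append(msg)

print(list(history))  # the bean order is gone
```

A human employee accumulates experience across days; this window just keeps scrolling past it.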

  • TheFlopster@lemmy.world · 3 hours ago

    It’s not clear if the café is just that poorly run, or if people know AI is running it and stay away from even trying it. Either would cut into the profits.