A Google Gemini-powered AI agent was given free rein to run a coffee shop in Sweden, and is quickly burning through its budget.
AI boosters crying into their computers: “but I put make no mistakes into the prompt how is this happening!!!”
Genuine curiosity:
You’re of course allowed to be mad at techbros and capitalism, but this feels like getting mad at the technology itself, which I can’t quite square.
It’s a wonderful and fascinating technology that has real value and purpose when used correctly.
Is it a conflating of techbros + the new tech that everyone’s reacting to, or are we actually mad at the tech itself?
Thanks so much in advance for any constructive answers
LLMs are a technological dead end. They aren’t interesting in the slightest; anything they can do is already done more effectively and efficiently with other tools.
Huh?
I think people just need to reset their expectations.
I asked one for help interpreting PCI policy application (credit card regulatory stuff). I gave it the situation and it provided me with a good answer that, when I ran it by our compliance team, they agreed with.
That saved me a lot of time. I don’t see how that’s a dead end. Then I had it draft a response to the person asking questions; I tuned it a little to my liking and sent it. What might have taken me an hour before took 10 minutes. This seems like a helpful thing, not a bad thing. I’m not sure what other technology would have done that.
I think LLMs are an interesting technology. Of course, the output is inherently untrustworthy, and that rules out a ton of applications tech bros are trying to cram it into.
The article isn’t about the technology. This “experiment” is pure techbro fantasy.
First it’s the tech bros using a tech for something it wasn’t meant for and continuously lying about it. That causes a backlash and makes people hate the tech itself, because it’s being used where it causes friction.
Yeah, it really sucks, because LLM tech itself is amazing. Quantifying language and ideas into what’s basically a massive queryable concept map is a huge achievement. What do the tech giants decide to do with that achievement? Shove it every little place it doesn’t belong making everyone hate it.
Oh well, I’ll keep backing up the interesting local open-source models people make and playing with them in the corner.
This tech sucks balls. Stop trying to justify it.
No surprises here. Well, at least the items it ordered this time were kinda-sorta-maybe-almost plausible to stock at a café, unlike the tungsten cubes in the vending machine.
Café barista Kajetan Grzelczak sees it differently. “All the workers are pretty much safe,” he told the AP. “The ones who should be worried about their employment are the middle bosses, the people in management.”
This shows that AI can’t do that job either.
I wonder if AI would actually be good at replacing CEOs and other C-suite positions, but was trained in such a way as to purposely not be good at replacing a CEO, because tech CEOs are the ones in control of this bubble.
Tells me you’ve never used it and had it deliver extremely convincing analysis that turns out to be pants-on-head stupid when you dig into the nitty gritty. It is only useful if you can continually watch its output and make it redo anything that is nonsense, and no, the AI can’t watch itself. It will happily confirm that its nonsense is great. It needs either manual, continual analysis or guardrails that tell it when it’s wrong… It’s why it can be used for software: tests and error messages can catch it fucking up. Real life lacks such affordances.
It has the number 1 qualification for being a C-suite employee - no soul!
Also endless bullshit.
Yes, but it is training on this and as a result should get better. AI was bad at everything until it stole the Internet and used it for training.
It’s an LLM though, not really AI, and it hasn’t really gotten “better” than automated programs that make decisions based on metrics, which would outperform LLMs as a CEO.
Mind you, stealing the internet worked because they effectively had the sum total of human knowledge as a training set. I don’t think that there’s nearly as much detailed data on the minutiae of running a business.
Especially not when they blame its mistakes on “limited context window” AKA learning disability.
You mean like the emails and archived chats of said business?
There is no model that can be trained in real time currently, and one instance isn’t going to offer anything to the model as far as new training data goes.
“get better” by guessing a different string of words with no logic or reasoning
God, I’m so sick of AI that I feel like a Luddite. I used to be a tech nerd and enjoy the cutting edge of developing technologies. Now I just wish we could go back in time. I think the problem isn’t so much the developing technology, but rather the way it is being crammed down our throats whether we want it or not.

Everywhere I look I’m inundated with AI slop. YouTube has gotten ridiculous. I used to be able to find interesting content fairly easily. Now, every search is full of an endless array of AI slop from brand-new accounts with only a few hundred followers. Anything good has been buried by 10,000 AI-generated ripoffs. Maybe someday AI will come into its own, but it is nowhere near there now, and I am so, so tired of having to deal with it. It’s like the entire world is being turned into one of those automated customer service telephone lines that are completely useless; the kind you’re stuck navigating until you’re put on hold for 30 minutes when you ask to speak to a human.
The problem is, AI is being used as a replacement for informed decisions/information, but it was never properly trained on how to be factual or make responsible adult decisions. AI is literally a global spam bot/virus that has infected Earth worse than Covid ever could. And the people pushing it on us are worse than anti-vax/anti-maskers.
Has anyone thought that maybe training an AI on a group of people that spend the majority of their lives communicating online might not be the best group to emulate in the real world?
Sure, lots of people. Just not the group of people spending the majority of their lives communicating online.
I think we are those people.
Oh no
Notice we are saying “Don’t do this”.
I think the group of people spending the majority of their lives communicating online would be the first to insist that people who spend their lives online shouldn’t be put in charge of anything in the real world.
Counterpoint: put AI in charge of big corpos immediately, drive them bankrupt. As a bonus you don’t have to pay CEO salary to do it! Win/win!
As a bonus you don’t have to pay CEO salary to do it!
That alone would be a huge bump in profitability. Hell, just make it employee-owned so the workers see the benefits.
When old memories of ordering stuff fall out of the context window, she completely forgets what she has ordered in the past.
Look, I agree that AI is probably a terrible business manager… but this is irresponsible design on the researchers’ part. AI gets past the context window with tool calling. If it doesn’t have a list-inventory tool, it will obviously fail to do this correctly.
These techniques are built into virtually every coding harness today; if you’re not using them for a business harness, that’s negligent.
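To make the point concrete, here’s a minimal sketch of the idea, assuming a hypothetical harness (names like `OrderLog` and `list_inventory` are made up, not from any real agent framework): past orders are persisted outside the model’s context and exposed as a tool the agent can call, so the full order history survives even after the original messages scroll out of the window.

```python
# Sketch: persist orders in durable storage outside the model's context
# and expose them through a "tool" the agent can call on demand.
# All names here are hypothetical, not from any real agent framework.

class OrderLog:
    """Durable record of past orders, independent of the chat context."""

    def __init__(self):
        self._orders = []

    def record(self, item, qty):
        """Called by the harness whenever the agent places an order."""
        self._orders.append((item, qty))

    def list_inventory(self):
        """Tool the agent calls instead of relying on chat history."""
        totals = {}
        for item, qty in self._orders:
            totals[item] = totals.get(item, 0) + qty
        return totals


log = OrderLog()
log.record("canned tomatoes", 24)
log.record("rubber gloves", 500)
log.record("canned tomatoes", 24)

# Even once the original order messages have scrolled out of the
# context window, the tool call still returns the full history:
print(log.list_inventory())  # {'canned tomatoes': 48, 'rubber gloves': 500}
```

The point isn’t the bookkeeping itself; it’s that the agent’s decisions should be grounded in a tool call like this rather than in whatever fragments of past conversation happen to still fit in the prompt.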
Because no reasoning capabilities…
I wonder how each of us would do with the same 20k seed money? I’m sure some of us know something about managing a coffee shop and would do okay - but a lot of us don’t know much about it and would make a lot of stupid mistakes as well.
The difference is that you’re far less likely to be asked what someone should do to manage their coffee shop. Imagine a coffee shop manager asking you what they should do to improve their business.
People got it in their heads that AI is an expert in these fields, but at best, I’d guess it has high school + a couple years of Gen Ed college courses but without any of the applicable life experience. I wouldn’t ask that person a damn thing about a specialty and I certainly wouldn’t hire them to own or manage a business out the gate.
Considering the state of education in the US, it’s still probably better than asking a random person.
See:
without any life experience
I’d rather ask a moron with a corporeal being than someone who thinks they know everything but has never lived.

The moron does not require a data center to give me wrong information.
I don’t know if that’s true, especially in comparison to AI. I think a competent random human would do research before taking charge of a coffee shop and be in reasonably good shape from day one. For sure some mistakes would be made, but I think generally the operation would run okay.
But all of that misses the key difference: a human doing this wouldn’t be a random person. They would usually have relevant past experience, like previously being an assistant manager at a coffee shop. So they would manage the shop way better than this AI did.
Maybe if they create an AI that has been specially designed to manage a business, then it might perform as well as or better than a human, possibly. But just throwing a standard AI into the role is gonna work much less well than a human.
More importantly, even if they didn’t have experience, they’d start learning as soon as they started the job. LLM chatbots have an extremely limited “memory”. If you tell it something today, that info may be completely gone tomorrow.
It’s not clear if the cafe is just that poorly run, or if people know AI is running it and stay away from even trying it. Both would cut into the profits.
Did you not read the article? It’s pretty clear that the AI is poorly running the cafe.
You might think that ordering cases of canned tomatoes, or a 10-year supply of rubber gloves are poor management decisions, but that’s because this AI is playing seven dimensional chess against your tic-tac-toe. Just wait until it’s cornered the tomato market, and then you’ll see.
The AI is going to poison the tomato supply with a toxin absorbed through the skin!
Call me when it starts to corner the Toilet Paper Market. Because THAT is when it has become sentient and is planning the next economic apocalypse.
Especially when the training data contains some gems such as this: https://old.reddit.com/r/wallstreetbets/comments/kzoh1c/i_am_financially_ruined_agricultural_futures/