

I don’t remember, probably not last time, but I remember doing some patching in the past.


It’s not in the thread I’m replying to; to get to that, I would have had to read another reply, and all of the replies to that, to spot yours.
If the work you do can be fully specified in a Jira ticket, you’re a code monkey and not a software engineer. Of course you can use LLMs to do your job, since you can be replaced by an LLM.
And it’s not true that agents can’t help with edge cases, they can. If you know which points to look at, you task to analyze the specific interaction and watch which parts of the code would be mentioned.
You’re missing my point entirely. It’s not that it can’t help with them, it’s that the solution it writes will not take them into account unless you tell it to, and explaining every edge case in enough detail to be unambiguous about all of them is essentially the same as writing the code directly. Not to mention that you can’t possibly know all of the edge cases of the solution it will write without seeing it, so you can’t tell it up front which edge cases to watch for without knowing what code it will write.
I do write way less amount of symbols to LLM than I would when I write code.
Maybe, but then you have to review everything it wrote, so you waste more time. Give me one concrete example of something you can prompt an LLM for where the code it gives you is advanced enough to be worth it (i.e. writing the prompt and reviewing the code would be faster than writing the code myself) and not so generic that I could just find the answer on Stack Overflow.
Those symbols don’t have to be structured
If you don’t structure them, the LLM might misinterpret what you meant. Structure in a language is what makes things unambiguous. This reminds me of the silly joke about “go to the store and bring 1L of milk, and if they have eggs bring 6”, where the programmer comes back with 6L of milk because they had eggs. Of course that’s a contrived example, but anything complex enough to be worth using an LLM for is hard to describe unambiguously, covering all edge cases, in normal human speech.
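Taken literally, the joke plays out like the toy sketch below (just an illustration of the ambiguity; the function and names are made up):

```python
# "Go to the store and bring 1L of milk; if they have eggs, bring 6."
# Read literally, the "6" binds to the only quantity mentioned so far: the milk.
def go_shopping(store_has_eggs: bool) -> dict:
    litres_of_milk = 1
    eggs = 0
    if store_has_eggs:
        litres_of_milk = 6  # the ambiguous "bring 6" applied to the wrong item
    return {"milk_litres": litres_of_milk, "eggs": eggs}

print(go_shopping(store_has_eggs=True))  # {'milk_litres': 6, 'eggs': 0}
```

That’s the whole problem in miniature: the natural-language instruction only becomes unambiguous once it’s written down as code.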
and they can even have typos, so I can focus my brain activity on things that actually matter.
Typos are very easy to correct: most editors will highlight them for you, some can even autocorrect them, and you avoid most of them by using tab completion anyway. I don’t waste any brain activity on that. I’m thinking about the solution and structuring it in an unambiguous way, because that is what writing code is. It’s not some cryptic art of writing the proper runes to make the machine do your will, as you seem to be implying; it’s just structured thought.
Plus, copilot is shit.
Might be; I wouldn’t know, as that’s the one I have available to use, but I sincerely doubt the others are so much better that it makes a difference.
I rate your post as a skill issue.
Yup, I have absolutely no skill in using LLMs, nor will I waste my time with it. Don’t get me wrong, it’s a neat tool for auto-completing small snippets, like we used to do with an actual snippet library a couple of years ago. It’s also a decent tool for navigating unknown code bases, asking it where certain parts are or how to achieve something in them. I would say that 60% of the time it gives you some good pointers, but 90% of the time the code it writes is wrong; at least it points you in the right direction of where to start investigating.
I don’t expect you to understand this since, from what I’m reading here, you’ve probably never worked on anything big enough, but a software engineer’s job is not to write code; that’s just a side effect. Our job is to solve problems. So either you’re trying to get the LLM to solve the problem for you, or you’re wasting lots of time explaining your solution in English, reading the generated code, understanding it, analyzing it, fixing any issues and testing it, possibly multiple times, instead of explaining your solution once, in code, and testing it.


To be fair, the first part of the game is by far the best. The unofficial patch adds back in a heckton of content in the late game, but even then, it feels sparse.
Maybe; I don’t know how far into the game I got since I never finished it. But I don’t think it ever felt empty… although the damn zombie mission is one I hate, and it has made me quit the game on more than one occasion.
I’m running Linux now
I have been running only Linux for over a decade, so I can confidently say the game runs, and on Steam it’s just a matter of hitting play.


Not replying to you but to that statement: they’re absolutely wrong. I’ve never finished Bloodlines; life keeps getting in my way and I keep losing my save file (this is not unique to Bloodlines, several other games are in the same bag). My point is that every few years I start a new save of the OG Bloodlines, and that game still holds up great. Sure, the graphics are outdated, but other than that it’s a great game even by today’s standards, and while I haven’t played Bloodlines 2, I’m fairly confident from everything I’ve seen that it’s a worse game by every metric that matters. These people think that graphics can overcome anything, but graphics are one of the least important parts of a game.


Sorry, I won’t go through your post history just to reply to a comment; be clearer in the stuff you write.
I’m a software engineer, and if that’s how you code you’re either wasting time or producing garbage code. That might be acceptable wherever you work, but I guarantee you it would not pass code review where I do. I do use Copilot, and it’s good at suggesting small snippets, maybe an if, maybe a function header, but even then 60% of the time I need to change what it suggested. Reviewing code is harder than writing it yourself: even if I could trust the LLM to do exactly what I asked (which I can’t, not by a long shot), its code could still be open to bugs or special cases, so I would have to read the code, understand what it tried to do, figure out the edge cases of that solution and check whether it handled them. In short, it would take me much longer to do things via LLMs than to write them myself, because writing code is the easy part of programming; thinking through the solution and its limitations and edge cases is the hard part, and LLMs can’t do that. The moment you describe your solution in enough detail that an LLM can possibly generate the right code, you’ve essentially written the code yourself, just in a more complicated and ambiguous format. This is what most non-technical managers fail to understand: code is just structured English, and we’re already writing something better than prompts to an LLM.


This is what you said:
Tbf AI tag should be about AI-generated assets. Cause there is no problem in keeping code quality while using AI, and that’s what the whole dev industry do now.
At no point did you mention someone approving it.
Also, you should read what I said: I said most large stuff generated by AI needs to be completely redone. You can generate a small function or maybe a small piece of an image if you have a professional validating that small chunk, but if you think you can generate an entire program or image with LLMs, you’re delusional.


No, the issue with “AI” is thinking that it’s able to make anything production ready, be it art, code or dialog.
I do believe that LLMs have lots of great applications in a game pipeline; things like placeholders and Copilot for small snippets work great. But if you think that anything an LLM produces is production ready, and that you don’t need a professional to look at it and redo it (because that’s usually easier than fixing the mistakes), you’re simply out of touch with reality.


Again, I agree with the majority of what you’re saying, and yes, I think most of us might suffer from https://xkcd.com/2501/, but in this particular instance Linus only needed the terminal because of the bug; otherwise he would have been able to install it via the GUI, so the bug was even more disastrous for the UX.
I think that nanny features are okay in the GUI, which is exactly what happened here, but I should be allowed to do what I want with my system if I have the know-how. I’m okay with danger-style messages letting me know I’m about to do something potentially dangerous, but I’m against being forbidden from uninstalling X (which is the short version of what Linus did).
Flatpaks/Snaps/etc. are great, and I agree that there should be a push for user space to be mostly there. Also, I know it’s not for most users, but you might be interested in checking out NixOS, which lets you roll back almost anything. It’s not a solution for the majority of people, but if this is something you have problems with and you have the time and energy to learn the Nix language, it’s a great distro for having a system that’s almost impossible to break.


I agree with lots of what you’re saying: this was a serious bug, it wasn’t the user’s fault, and users can’t be expected to learn bash.
My point is that the message tried to be as scary as possible, because if that message shows up, something is about to uninstall critical components from the system; the bug here was that trying to install Steam triggered it. I agree that it wasn’t Linus’s fault, but I think most users would have stopped at that message. He didn’t, because he thinks he knows what he’s doing, but he doesn’t; he’s in that middle ground where he knows enough to be confidently wrong.
Let me ask you: how would you have worded that message in a way that would make people stop? Remember that the message itself is valid; the bug was that installing Steam triggered it.


You’re completely missing the point. People can buy Steam Machines and use them as PCs without ever opening Steam, or worse, use them as servers or parts of a cluster. If Steam Machines were sold at a loss, they would, by definition, be cheaper than equivalent hardware, so companies would buy 10k of them to put in a warehouse to run stuff, because it would be cheaper than buying the same thing elsewhere. This is what happened with the PS3: open systems can’t be sold at a loss because you can’t guarantee that whoever buys them will use them for your intended purpose.


Nope, the PS3 was just an example of why you can’t sell at a loss on an open platform. Selling at a loss was the central point of the discussion; if that flew over your head, that’s fine, but don’t try to make it my fault that you jumped into the middle of a discussion about why Valve can’t sell at a loss and said:
The Steam Machine is a standard x86 computer that can’t match the ubiquitous ThinkCentres in price/performance.
Which implies that even with the Steam Machine being sold at a loss, a ThinkCentre would still have better price/performance, which is just impossible.
This is going in circles and bringing nothing constructive.


It wasn’t a standard accept/continue/yes prompt; it wasn’t something where he could just press enter and carry on without noticing. He had to have read the message to know what to do. It was something akin to:
WARNING: The following essential packages will be removed. This should NOT be done unless you know exactly what you are doing! … You are about to do something potentially harmful. To continue type in the phrase “Yes, do as I say!”
The message couldn’t have been clearer. Plus, most users wouldn’t need to use the terminal at all; he just happened to use the distro during the brief window in which that bug existed.
As a Linux enthusiast I can definitely tell you I never encourage people to just type words into the magic box and get it over with; I always tell them to understand what they’re typing.


Regardless, this is a thread about whether Valve could still make money selling at a loss. You stepped into it claiming they couldn’t compete in price/performance, which implies they couldn’t compete even when selling at a loss (since that was the central point of the discussion).
You’re the one that brought up Valve selling at a loss
I wasn’t; it was the person I’m replying to, the one I mixed up with you. Sorry about that, I thought you were the same person.
you think anything under $800 would be selling at a loss
I never claimed that.


I never said $800 would be selling at a loss; in fact, I said there’s a good possibility they can sell it for less than $800 and still make a profit because they buy things in bulk. You were the first one who even mentioned it being profitable for them to sell at a loss:
They could totally make money selling it at a loss.
Which is completely false. If they sold at a loss, by definition they would lose money on each sale, and because it’s an open platform, people would just buy the cheap hardware to use for any project, which would make Valve bleed money like Sony did with the PS3 until they locked the system down.


And then we could make money having people ride her. If you’re going to start a hypothetical scenario in which Valve can still make money selling at a loss, you can’t be angry that people reply on the basis that your premise is true.


If they’re sold at a loss, by definition they have to be cheaper than anything sold at a profit with the same performance.


I don’t think so. I think a normal user would pause when the system asks them to type “Yes, do as I say!”, as that is clearly a sign that you’re about to shoot yourself in the foot.


If it were sold at a loss like a console, it would beat the price/performance of anything else x86 on the market, which is why they can’t sell it at a loss; ergo my point.


No, it won’t. $800 will get you a machine that’s around 50% faster. Controller included.
Care to share a PCPartPicker link for that? Here’s a link from the same thread of someone building a similarly specced machine for $800, https://lemmy.world/comment/20649777, and that’s without the controller. In case you haven’t noticed, RAM prices are a bit crazy at the moment.
It’s literally a laptop CPU with a laptop GPU.
It literally isn’t; they custom-developed it for this product, similar to the Steam Deck’s. It’s based on the architecture used in laptops, but so are the PlayStation’s and Xbox’s, AFAIK.
Also not true. A 1k prebuilt is around 70% faster. Controller not included, though.
Can you provide a link to such a prebuilt? Here’s the first prebuilt I could find with similar specs, and it’s $1k: https://periphio.com/gaming-pcs/firestorm-7600-prebuilt-amd-gaming-pc/
Sure, but that’s an argument in favour of it costing less.
Yes, that was my point: the most this should cost is the same as a prebuilt with similar specs, and since Valve buys parts in bulk, it should be cheaper than that.
Yeah, and the best selling console of the generation is $450 for the digital-only version.
And the other one is $700; your point is?
Stop this delusion. If this was an actual possibility, it would already be happening with the Steam Deck. Yes, I know you know someone who did it. I know someone who bought a Surface to put Linux on it. There’s dozens of us!
It didn’t happen with the Deck because it’s not sold at a loss, so it’s cheaper to assemble a similarly built PC yourself. But I’ve definitely seen several posts over the years recommending that people just buy a Steam Deck as their machine in certain situations. If the Steam Deck cost $300, I guarantee you people would be using it as a daily driver or building clusters of them.
So? PCs have other uses outside gaming, you know?