

FYI: That’s more Windows games than run in Windows!
WTF? Why? Because a lot of older games don’t run on versions of Windows newer than the ones they were made for! They still run great in Linux though 👍


If I broke into your home, why TF would I carefully take apart your robot vacuum in order to copy your wifi credentials‽
Also, WTF other “secrets” are you storing on your robot vacuum‽
This is not a realistic attack scenario.


NO! It’s your device, you should have root! The fact that the manufacturer gives their product owners root is a good thing, not a bad one!
I will die on this fucking hill.


Does anyone have data on the total number of data centers being built over time? I’m not convinced that AI is causing that many more data centers to be built. From everything I’ve read, it’s just that they’re putting more GPUs into them.


WTF? Have you ever been in a data center? They don’t release anything. They just… Sit. And blink lights while server fans blow and cooling systems whir, pumping water throughout.
The cooling systems they use aren’t that different from any office building’s. They’re just bigger, beefier versions. They don’t use anything super special. The PFAS they’re talking about in this article are the same old shit that’s used in any industrial air conditioner.
For the sake of argument, let’s assume that a data center uses 10 times more cooling than an equivalently sized office building. I don’t know about you, but everywhere I’ve seen data centers, there are loads and loads of office buildings nearby. Far more than ten for every data center.
My point is this: If you’re going to be bitching about PFAS and cooling systems, why focus on data centers (or AI, specifically) when there’s all these damned office buildings? Instead, why don’t we talk about work-from-home policies, which would be an actual way to reduce PFAS use.
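If you want to picture it, here’s that back-of-envelope argument as a quick sketch. Every number in it is an assumption for illustration, not a measurement:

```python
# Back-of-envelope: what fraction of local cooling (and thus PFAS-bearing
# refrigerant) comes from the data center vs. the surrounding offices?
cooling_multiplier = 10       # assumed: one DC uses 10x the cooling of one office building
offices_per_datacenter = 100  # assumed: office buildings in the same area per DC

dc_share = cooling_multiplier / (cooling_multiplier + offices_per_datacenter)
print(f"Data center share of local cooling: {dc_share:.0%}")  # => 9%
```

Even granting the data center a generous 10x multiplier, the surrounding office stock dominates.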
This article… Ugh. It’s like bitching that electric car batteries can catch fire while pretending that regular cars don’t have a much, much higher likelihood of catching fire, and that there aren’t several orders of magnitude more of them.
Are PFAS a problem? Yes. Are data centers anywhere near the top 1,000 targets for non-trivially reducing their use? No.
Aside: This is just like the articles bitching about data center water use… Data centers recycle their water! They have a great big intake when they’re first built, but after that they only need trivial amounts of water.


Google search: “scientific articles about (whatever)” Then you get tons of ads and irrelevant results.
LLM search: “Find me scientific articles about (whatever)” Then you get just the titles and links (with maybe a short summary).
It’s 100% better and you don’t have to worry about hallucinations, since it wasn’t actually trying to answer the question… just helping you perform a search.
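For the curious, the pattern looks something like this. A minimal sketch, assuming a local Ollama server on its default port; `web_search()` and the model name are stand-ins for whatever you actually run:

```python
import json
import urllib.request

def web_search(query: str) -> list[dict]:
    # Hypothetical stand-in: wire this up to whatever search backend you
    # actually have (SearXNG, a paid search API, etc.). Canned data so it runs.
    return [{"title": "Example paper", "url": "https://example.org/paper"}]

def ask_local_llm(prompt: str) -> str:
    # Ollama's /api/generate endpoint; "llama3" is an assumed model name.
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": "llama3", "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

results = web_search("scientific articles about perovskite solar cells")
print(ask_local_llm(
    "From these search results, list only the titles and links, with a "
    "one-line summary each. Add nothing else.\n\n" + json.dumps(results)
))
```

The design point: the model is only reformatting results it was handed, not answering from memory, which is why hallucinations mostly drop out of the picture.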


It’s ok: Google and all other ad-supported search is about to go the way of the dinosaur as soon as local AI search catches on. When your own PC runs a search for you, it basically googles on your behalf and you never see those ads.
It’s going to change everything.


Anthropic didn’t lose their lawsuit. They settled. Also, that was about their admission that they pirated zillions of books.
From a legal perspective, none of that has anything to do with AI.
Company pirates books -> gets sued for pirating books -> company settles with the plaintiffs.
It had no legal impact on training AI with copyrighted works or what happens if the output is somehow considered to be violating someone’s copyright.
What Anthropic did with this settlement is attack their Western competitor: OpenAI, specifically. Because Google already settled with the Authors Guild over their book-scanning project more than a decade ago.
Now OpenAI is likely going to have to pay the Authors Guild too, even though they haven’t come out and openly admitted that they pirated books.
Meta is also being sued for the same reason, but they appear to be ready to fight it out in court. That case is only just getting started though, so we’ll see.
The real, long-term impact of this settlement is that it just became a lot more expensive to train an AI in the US (well, the West). Competition in China will never have to pay these fees and will continue to offer their products to the West at a fraction of the cost.


You’ve obviously never tried to get any given .NET project working in Linux. There’s the full .NET Framework and then there’s .NET Core, which is a mere subset of it.
Only .NET Core runs on Linux, and nobody uses it. The list of .NET stuff that will actually run on .NET Core alone is a barren wasteland.


If it’s written in C#, that’s a huge turn-off though, because that means it’s likely to only run on Windows.
I mean, in theory, it could run on Linux, but that’s a very rare situation. Almost everything ever written in C# uses Windows-specific APIs, and basically no one installs the C# runtime on Linux anymore. It’s both enormous and a pain in the ass to get working properly for any given C# project.


As an information security professional and someone who works on tiny, embedded systems, knowing that a project is written in Rust is a huge enticement. I wish more projects written in Rust advertised this fact!
Benefits of Rust projects—from my perspective:


Also, stuff that gets mislabeled as AI can be just as dangerous. Especially when you consider that the AI detection might use such labels to train itself. So someone whose face is weirdly symmetrical might get marked as AI and then have a hard time applying for jobs, purchasing things, getting credit, etc.
I want to know what counts as AI. Using AI to remove the background in an image, or just to remove someone standing in the background, is technically generative AI, but it’s also something you can do in any photo editor with a bit of work.


Meh. Nothing in this article is strong evidence of anything. They’re only looking at a tiny sample of data and wildly speculating about which entry-level jobs are being supplanted by AI.
As a software engineer who uses AI, I fail to see how AI can replace any given entry-level software engineering position. There’s no way! Any company that does that is just asking for trouble.
What’s more likely is that AI is making senior software engineers more productive, so companies don’t need to hire as many developers to assist with the more trivial/time-consuming tasks.
This is a very temporary thing, though. As anyone in software can tell you: Software only gets more complex over time. Eventually these companies will have to start hiring new people again. This process usually takes about six months to a year.
If AI is causing a drop in entry-level hiring, my speculation (which isn’t as wild as the article’s, since I’m actually there on the ground using this stuff) is that it’s just a temporary blip while companies work out how to take advantage of the slightly enhanced productivity.
It’s inevitable: They’ll start new projects to build new stuff because now, suddenly, they have the budget. Then they’ll hire people to make up the difference.
This is how companies have worked since the invention of bullshit jobs. The need for bullshit grows with productivity.


I’m guessing this graph matches closely with anime viewing… The true amplifier of Japan’s population decline!
To solve this crisis, we must make catgirls real and unleash an army of bland protagonists: young men with almost no personality who possess some overpowered skill. Such as the ability to stay thin despite the ready availability of sugary/processed foods.


Incorrect. No court has ruled in favor of any plaintiff bringing a copyright infringement claim against an LLM maker. Here’s a breakdown of the current court cases and their rulings:
https://www.skadden.com/insights/publications/2025/07/fair-use-and-ai-training
In both cases, the courts have ruled that training an LLM with copyrighted works is highly transformative and thus, fair use.
The plaintiffs in one case couldn’t even come up with a single iota of evidence of copyright infringement in the LLM’s output. This, IMHO, is the single most important takeaway from the case, because the only point that really mattered was where the LLM generates output. That is, the point of distribution.
Until an LLM actually outputs something, copyright doesn’t even come into play. Therefore, the act of training an LLM is just like I said: a “Not Applicable” situation.


Training an AI is orthogonal to copyright since the process of training doesn’t involve distribution.
You can train an AI with whatever TF you want without anyone’s consent. That’s perfectly legal fair use. It’s no different than if you copy a song from your PC to your phone.
Copyright really only comes into play when someone uses an AI to distribute a derivative of someone’s copyrighted work. Even then, it’s really only the end user who’s capable of doing that, by uploading the AI’s output somewhere.
For images, it’s not even data collection, because all the images used by these AI image generation tools are out on the internet, free for anyone to download right now. That’s how they’re obtained: a huge database of (highly categorized) image URLs (e.g. ImageNet) is crawled and downloaded.
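That crawl step is about as mundane as it sounds. A rough sketch, where `urls.txt` is a hypothetical one-URL-per-line list in the ImageNet style:

```python
import pathlib
import urllib.request

out = pathlib.Path("images")
out.mkdir(exist_ok=True)

# Each line of urls.txt is a public image URL; downloading is a plain HTTP GET.
for i, url in enumerate(pathlib.Path("urls.txt").read_text().splitlines()):
    try:
        urllib.request.urlretrieve(url, out / f"{i:06d}.jpg")
    except OSError:
        pass  # dead links are common in old URL lists; just skip them
```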
That’s not even remotely the same thing as “data collection”. Data collection is when a company vacuums up everything they can from your private shit. Not that photo of an interesting building you uploaded to Flickr over a decade ago.


This is sad, actually, because this very technology is absolutely fantastic at identifying things in images. That’s how image generation works behind the scenes!

ChatGPT screwed this up so badly because it’s programmed to generate images instead of using reference images and then identifying the relevant parts, which is something even a tiny little microcontroller board can do.
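The identification side is off-the-shelf stuff these days. A minimal sketch, assuming torchvision is installed and `photo.jpg` is a hypothetical image you want identified:

```python
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

# Load a small pretrained classifier ("identify the thing", not "generate a thing").
weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()

# Preprocess exactly the way the model expects (resize, crop, normalize).
img = Image.open("photo.jpg").convert("RGB")  # hypothetical input file
batch = weights.transforms()(img).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top = probs.argmax().item()
print(weights.meta["categories"][top], f"{probs[0, top].item():.1%}")
```

Models like this can be quantized down to microcontroller-class hardware, which is exactly the point.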
If they just paid to license a data set of medical images… Oh wait! They already did that!
Sigh


Zawinski’s law: Every program attempts to expand until it can read mail. Those programs which cannot expand are replaced by ones which can.
This is just the modern equivalent: Intra-site messaging.


Listen, if someone gets physical access to a device in your home that’s connected to your wifi, all bets are off. Having a password gating access via adb is irrelevant. The attack scenario you describe is absurd: if someone’s in a celebrity’s home, they’re not going to go after the robot vacuum when the thermostat, tablets, computers, TV, router, access point, etc. are right there.
If an attacker is physically in the home, the owner has already been compromised. The fact that the owner of a device can open it up and gain root is irrelevant.
Furthermore, since owners have root, they can add a password themselves! That’s something they can’t do with a lot of the other devices in their home that they supposedly “own” but don’t have that power over (devices I’m 100% certain have vulnerabilities).