

Sony’s modern OLEDs are sick. There are a few in my family, and they have the best processing I’ve seen, they decode massive Blu-ray rips no problem, and they have native options for a clean, ad-free UI.
Why TF aren’t people buying them?


It’s one of those “reporting on social media without actually adding anything” articles, but in this case it’s pretty cool.


The source, 2024 apparently:
https://www.nikkei.com/telling/DGXZTS00011190R00C24A7000000/
Translation:
https://translate.kagi.com/www.nikkei.com/telling/DGXZTS00011190R00C24A7000000?ref=frankandsense.io
Yes, that Nikkei, the one the stock index is named after:
https://en.wikipedia.org/wiki/The_Nikkei
Though it does feel like one of those “listicle” chum articles. The image feels a bit AI-generated, whether or not it actually is.


Apple likes being able to distribute apps and have users pay subscriptions to run them locally. That’s what they already do; Apple takes a cut even from third-party apps.
And it’s why iPhones are so powerful, meager RAM capacity aside.


The good thing is that we have a few giants with a vested interest in resisting that: PC OEMs like Dell, HP, and Clevo; Intel/AMD, who still have huge consumer sales; and the big one:
Apple.
Apple is all-in on personal compute, and they have the muscle to resist the anticompetitive plays, hopefully.


It’s all just show anyway. All the Nvidia chip restrictions did was teach Chinese devs to do more with less, and now they’re running circles around other labs that have 100X the hardware. They don’t need the H200s.
You ask me? If the US wants to seed AI development: restrict Nvidia GPU sales in the US. It’d force labs to get smarter with less, and branch out to more diverse hardware, instead of monopolizing and scaling up.


DaVinci Resolve works better on Linux. Vapoursynth mostly works better on Linux.
RAW photo editing is already horrible in Windows if you’re trying to do HDR. To be fair, it’s horrible in Linux too. As much as I hate it, they can’t touch Apple there.
See this post I just made: https://lemmy.world/post/41751454/21613633
iOS will render HDR JPEG XL, AVIF, and tiled HEIFs straight out of a camera, no problem. Heck, it will even display RAWs in the Photos app. But it’s a struggle on Windows and Linux.
And if by “professional use” you mean “Adobe,” I view that in the same way as still being on Twitter. At this point, subjecting yourself to Adobe on Windows is something you should do through gritted teeth.


This is an option.
The problem is, while faster on paper, it’s not really any faster than my 3090, especially in the spots where I need its performance most. It’d be a downgrade.
I could get a second 3090, I guess, and have some redundancy in case one fails. But I’m on an ITX board with 1 slot.
I’d be WAY more interested in a roughly equivalent 9000 series or Battlemage card, but alas, there is none.


Oh, I know. I bought my previous GPUs at the peak of crypto busts.
But I need 24GB… My plan was to eventually buy a used 4090 once they got cheaper, or a 24GB Intel/AMD card. But it doesn’t feel like AMD/Intel are interested in big GPUs anymore, and the 4090 just keeps going up in price.


So I FOMO bought an SSD and SD card when all this started.
Not regretting it.
…But now I’m considering hoarding a spare GPU. Is that nuts? I don’t need it, and I don’t want to hoard, but if my old Ampere card suddenly dies, is it gonna take $2000 to replace?


Start posting. Please. I am begging you.
I am late to this argument, but data center imagegen is typically batched so that many images are made in parallel. And (from the providers that aren’t idiots) the models likely use sparsity or other tricks to reduce compute.
Per-image energy is waaay less than on a desktop GPU. We probably burnt more energy in this thread than on an image, or a few.
And this is getting exponentially more efficient with time, in spite of what morons like Sam Altman preach.
There’s about a billion reasons image slop is awful, but the “energy use” one is way overblown.
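For a sense of scale, here’s a rough back-of-envelope in Python. Every number in it (GPU wattage, batch size, generation time) is an assumption made up for illustration, not a measurement, but it shows why batching drives per-image energy down:

```python
# Rough back-of-envelope: per-image energy for batched datacenter generation
# vs. a single unbatched image on a desktop GPU. All numbers below are
# assumptions for illustration, not measurements.

DATACENTER_GPU_WATTS = 700   # assumed draw of one datacenter accelerator
BATCH_SIZE = 16              # assumed images generated in parallel per batch
BATCH_SECONDS = 8            # assumed wall-clock time for the whole batch

DESKTOP_GPU_WATTS = 350      # assumed draw of a desktop GPU
DESKTOP_SECONDS = 20         # assumed time to generate one image, unbatched

def joules(watts: float, seconds: float) -> float:
    """Energy (J) = power (W) * time (s)."""
    return watts * seconds

per_image_datacenter = joules(DATACENTER_GPU_WATTS, BATCH_SECONDS) / BATCH_SIZE
per_image_desktop = joules(DESKTOP_GPU_WATTS, DESKTOP_SECONDS)

# 1 Wh = 3600 J
print(f"datacenter, batched : {per_image_datacenter / 3600:.3f} Wh per image")
print(f"desktop, unbatched  : {per_image_desktop / 3600:.3f} Wh per image")
```

With those made-up numbers it works out to roughly 0.1 Wh per batched image vs. ~2 Wh unbatched, and sparsity/distillation tricks push the first number down further.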


Valve’s margins are almost criminal as-is. They make cash hand over fist, and they’re basically a monopoly already.
…So not much might actually change, at least at first?
But this is why they don’t need to go public. They have plenty of cash, and going public would just shrink the slice of profit Valve’s owners get.
Still, even private, I’m afraid of what Steam might look like if big competitors like EGS and Microsoft footgun themselves out of the market, and GOG fades away due to their DRM-free policy.


Oh, sweet summer child.
You just wait.


It’s especially bizarre when git repos for “open source corporate alternative” type software use it instead of their own repo’s issue tracker and forums.
WTF
They want the social media engagement, I guess. But still. It’s ridiculous how much of a blind spot folks have for it.
It also makes giving or getting any kind of support a hellish time sink, but that’s almost beside the point…
I mean… I think it’s worrying that the norm is “who cares if it’s AI? It’ll fool plenty of people, spread it!” If that’s what you mean.
Social media is going to kill us all well before AI does.


It’s not so much about English as it is about writing patterns. Like others said, it has a “stilted college essay prompt” feel because that’s what instruct-finetuned LLMs are trained to do.
Another quirk of LLMs is that they overuse specific phrases, which stems from technical issues (training on their own output, training on other LLMs’ output, training on human SEO junk, artifacts of whole-word tokenization, and inheriting style from their own earlier output as they write a response, just to start).
“Slop” is an overused term, but this is precisely what people in the LLM tinkerer/self-hosting community mean by it. It’s also what the “temperature” setting you may see in some UIs is supposed to combat, though that’s crude and ineffective if you ask me.
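For anyone wondering what “temperature” actually does: it just rescales the model’s next-token probabilities before sampling. Here’s a minimal sketch with made-up logits (real inference stacks also layer on top-p, repetition penalties, and so on, which is part of why temperature alone can’t fix slop):

```python
import math
import random

def sample_with_temperature(logits: dict[str, float], temperature: float) -> str:
    """Softmax over logits scaled by 1/temperature, then sample one token.

    Low temperature sharpens the distribution (the model keeps picking its
    favorite phrase); high temperature flattens it (more variety, but also
    more nonsense). The logits here are made up for illustration.
    """
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    peak = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - peak) for tok, v in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Hypothetical next-token logits after the words "a landmark"
logits = {"achievement": 4.0, "study": 2.5, "ruling": 2.0, "decision": 1.5}
print([sample_with_temperature(logits, 0.3) for _ in range(5)])  # nearly always "achievement"
print([sample_with_temperature(logits, 1.5) for _ in range(5)])  # noticeably more varied
```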
Anyway, if you stare at these LLMs long enough, you learn to see a lot of individual models’ signatures. Some of it is… hard to convey in words. But “embodies,” “landmark achievement,” and such just set off alarm bells in my head, specifically for ChatGPT/Claude. If you ask an LLM to write a story, “shivers down the spine” is another phrase so common it’s a meme, as are the specific names they tend to choose for characters.
If you ask an LLM to write in your native language, you’d run into similar issues, though the translation should soften them some. Hence when I use Chinese open weights models, I get them to “think” in Chinese and answer in English, and get a MUCH better result.
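As a concrete example of that last trick: most self-hosted stacks (llama.cpp, vLLM, etc.) expose an OpenAI-compatible API, so steering the “thinking” language is just a system prompt. The endpoint URL and model name below are placeholders for whatever you run locally, not a specific recommendation:

```python
from openai import OpenAI  # pip install openai; talks to any OpenAI-compatible server

# Placeholder endpoint and key: point this at your own llama.cpp/vLLM server.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed-locally")

resp = client.chat.completions.create(
    model="your-chinese-open-weights-model",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Reason step by step in Chinese first, "
                "then give the final answer in English only."
            ),
        },
        {"role": "user", "content": "Explain why batched inference lowers per-request energy."},
    ],
    temperature=0.7,
)
print(resp.choices[0].message.content)
```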
All this is quantifiable, by the way. Check out EQBench’s slop profiles for individual models:
https://eqbench.com/creative_writing_longform.html
https://eqbench.com/creative_writing.html
And its best guess at inbreeding “family trees” for models:



Did y’all read the email?
“embodies the elegance of simplicity - proving that”
“another landmark achievement”
“showcase your philosophy of powerful, minimal design”
That is one sloppy email. Man, Claude has gotten worse at writing.
I’m not sure Rob even realizes this, but the email is from some kind of automated agent: https://agentvillage.org/
So it’s not even an actual thank you from a human, I think. It’s random spam.


To be fair, relying on OneDrive was like waiting for a bomb to go off. This was bound to happen once they got everyone stuck using it.
“let’s save today’s photos from my phone onto my machine” bullshit?
Once it’s set up, I don’t see how that’s a hassle? Especially if it’s just syncing over WiFi.
A good hijack, thanks.