

That’s amazing. It’s almost hard to believe, yet it sounds about right.


That makes sense.
Heh. Spotify used to stream 384K Vorbis, which should be sufficient. But now the web app is apparently AAC. And the app-apps are conspicuously listed with “equivalent to” bitrates, whatever that means:
https://support.spotify.com/us/article/audio-quality/
Very high: Equivalent to approximately 320kbit/s


Behold:
5.4 Electrical Conductivity Measurement
This method includes electrical impedance spectroscopy (EIS) and dielectric analysis (DEA). In EIS, the physical state of a material is measured as a function of frequency, with the frequency ranging from 100 Hz - 10 MHz. It is a simple, easy technique used to estimate the physiological status of various biological tissues [49-52]. The experimental frequency response of the impedance is characterised by an electrical equivalent circuit of the material. Among the various equivalent models proposed [53-54], the physical properties of the material can be quantified by monitoring changes in the parameters of the equivalent circuit. DEA measurement is used at higher frequencies, generally 100 MHz - 10 GHz, for moisture estimation and bulk density determination.

So an overripe banana is an interesting high-pass filter, kinda like a capacitor, though the big takeaway is the conductance vs. ripeness relationship.
So if you want to test if a banana is ready to eat, hook it up… preferably with several other bananas in series. If the music is too loud, they are ready. Too quiet, and it’s not time yet.
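To make the high-pass bit concrete, here’s a minimal Python sketch using a hypothetical parallel-RC equivalent circuit for the banana; the R and C values are made up for illustration, not measured:

    # Sketch: impedance of a hypothetical parallel-RC "banana" equivalent circuit.
    # R (bulk resistance) drops as the fruit ripens; C stands in for cell-membrane effects.
    # All component values are illustrative, not measurements.
    import numpy as np

    def impedance_magnitude(freq_hz, r_ohm, c_farad):
        # Resistor and capacitor in parallel: Z = R / (1 + j*w*R*C)
        w = 2 * np.pi * freq_hz
        return np.abs(r_ohm / (1 + 1j * w * r_ohm * c_farad))

    freqs = np.logspace(2, 7, 6)  # 100 Hz .. 10 MHz, the EIS range quoted above
    for label, r_ohm in [("unripe", 2.0e6), ("overripe", 2.0e5)]:  # made-up resistances
        mags = impedance_magnitude(freqs, r_ohm, 1e-9)  # made-up 1 nF capacitance
        print(label, [f"{m:.3g}" for m in mags])

Impedance falls as frequency rises (the capacitor-like, high-pass part), and the riper, more conductive banana sits lower across the board (the conductance vs. ripeness part).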


I did a blind test, and found it depends on the genre.
Slow, chill music is completely transparent when compressed, no matter how hard I “audio peep.” It’s not even a question.
But something “dense” like System of a Down has audible distortion. The audibility loosely (though not always) tracked the bitrate of the FLAC files, which kind of makes sense, though even at the extreme end it’s hard to notice unless you know the particular song very well.
Also… a lot of recordings kind of suck. It’s crazy to worry about tiny bits of distortion when a bit perfect master is already noisy and distorted.
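If you ever want to sanity-check a blind test like that, scoring it ABX-style is enough; here’s a minimal Python sketch (the 13/16 and 9/16 tallies are just example numbers, not my actual results):

    # Sketch: score a blind ABX listening test.
    # In each trial, X is secretly A (lossless) or B (compressed); if you can't hear a
    # difference, your guesses should hover around 50% correct.
    from math import comb

    def abx_p_value(correct, trials):
        # Chance of guessing at least `correct` of `trials` right by luck alone (binomial tail).
        return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

    # Example: 13/16 correct is unlikely to be luck (~1%); 9/16 is consistent with guessing.
    for correct in (13, 9):
        print(f"{correct}/16 correct -> p = {abx_p_value(correct, 16):.3f}")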


It’s been like this for decades, according to my dad. Well before the internet.


You can still buy the Framework Strix Halo board (with 128GB RAM) for $2k. For now.
I have a 3090, yet I’m still seriously considering it. The CPU itself is an engineering marvel, not to speak of the ridiculously fast RAM and IGP.
The only caveat is you’d better be a Python/Linux person, as you’ll be building forks and beating your head against the screen to get the setup working right.


Friend, this is 2026.
Clickbait is mandatory. Get your reason out of here.


It’s not! Use SonoBus; it’s dead simple and superior to Discord: far lower latency, customizable filters, peer-to-peer, and totally free.
Now if you want emojis and video and rambling channels and stuff, you will have to go elsewhere.


I’m sure many did, sadly.


Shame they didn’t go Intel. Arc is good, and they could have gotten around TSMC supply constraints.


It’s the strategy!
It’s the 2020s. There’s no such thing as bad attention.
I mean, I saw the community, and yet I still had to think about it.
It’s “nottheonion” adjacent.


Yep.
The internet is “good enough” for P2P these days. I get that not everyone is in a great living situation, but less than 500K down/up is an outlier at this point.


SonoBus for voice chat.
It’s peer to peer, just works on everything, sounds better than Discord, and most importantly, is 100X less annoying because latency is so low.
I think it was designed for remote music collaboration, hence it feels like you’re talking to your friend in the same room. No compression if you don’t want it, and no awkward interruptions of pauses from the audio delay.
Oh, and it’s free, and has no chat or emojis or anything.


That’s what gets upvotes on Lemmy, sadly.
This is how Voat (another Reddit clone) died. Political shitposts and clickbait tabloids crowded out every niche, so all the interesting content left.
As it turns out, doomscrolling twitter troll reposts with the same few comments in each one is quite depressing.
I don’t know a good solution, either. Clickbait works. Maybe some structural changes could help, though?


FYI you can buy this: https://frame.work/products/framework-desktop-mainboard-amd-ryzen-ai-max-300-series?v=FRAFMK0002
And stick a regular Nvidia GPU on it. Or an AMD one.
That’d give you the option to batch renders across the integrated and discrete GPUs, if such a thing fits your workflow. Or to use one GPU while the other is busy. And if a particular model doesn’t play nice with AMD, it’d give you the option to use Nvidia + CPU offloading very effectively.
It’s only PCIe 4.0 x4, but that’s enough for most GPUs.
TBH I’m considering exactly this, hanging my venerable 3090 off the board, as I’m feeling the FOMO crunch of all hardware getting so expensive. And $2K for 16 cores with 128GB of ridiculously fast quad-channel RAM is not bad, even JUST as a CPU.
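As a sketch of that batching idea, here’s the general pattern in Python; the "igpu"/"dgpu" names and render_one() are placeholders for whatever your actual workflow calls, not a real API:

    # Sketch: fan a batch of jobs out across two GPUs, each with its own worker thread.
    # "igpu" / "dgpu" and render_one() are stand-ins, not a real rendering API.
    import queue
    import threading
    import time

    def render_one(job, device):
        time.sleep(0.1)  # stand-in for the actual render/inference call on `device`
        return f"{job} done on {device}"

    def worker(device, jobs, results):
        while True:
            try:
                job = jobs.get_nowait()
            except queue.Empty:
                return
            results.append(render_one(job, device))

    jobs = queue.Queue()
    for i in range(8):
        jobs.put(f"frame_{i}")

    results = []
    threads = [threading.Thread(target=worker, args=(dev, jobs, results))
               for dev in ("igpu", "dgpu")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("\n".join(results))

Whichever GPU finishes its current job first just pulls the next one, so a faster discrete card naturally takes more of the batch.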


As a hobby mostly, but it’s useful for work. I found LLMs fascinating even before the hype, when everyone was trying to get GPT-J finetunes named after Star Trek characters to run.
Reading my own quote, I was being a bit dramatic. But at the very least it is super important to grasp some basic concepts (like MoE CPU offloading, quantization, and specs of your own hardware), and watch for new releases in LocalLlama or whatever. You kinda do have to follow and test things, yes, as there’s tons of FUD in open weights AI land.
As an example, stepfun 2.5 seems to be a great model for my hardware (single Nvidia GPU + 128GB CPU RAM), and it could easily have flown under the radar if I weren’t following things. I also wouldn’t know to run it with ik_llama.cpp instead of mainline llama.cpp for a considerable speed/quality boost over (say) LM Studio.
If I were to google all this now, I’d probably still get links for setting up the Deepseek distillations from Tech Bro YouTubers. That series is now dreadfully slow and long obsolete.
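For the quantization part, here’s a toy Python sketch of the core idea; real llama.cpp quants (Q4_K, IQ4_XS, and so on) are block-wise and far more sophisticated than this naive symmetric int8 version:

    # Sketch: naive symmetric int8 quantization of a "weight block".
    # Shows why small integers plus a scale factor approximate the original floats
    # while using a fraction of the memory.
    import numpy as np

    def quantize_int8(w):
        scale = np.abs(w).max() / 127.0  # one scale for the whole block
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    rng = np.random.default_rng(0)
    w = rng.normal(size=4096).astype(np.float32)  # pretend weight block
    q, scale = quantize_int8(w)
    err = np.abs(w - dequantize(q, scale)).mean()
    print(f"mean abs error: {err:.5f}, memory: {w.nbytes} B -> {q.nbytes} B")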


I dunno. Whatever the default was, so perhaps not?
But whatever Ublock Lite’s default is, that’s probably what 99% of folks are using.


Chinese electric cars were always going to take off, and RAM is just a commodity; if you sell the most bits at the lowest price with sufficient speed, it works.
If you’re in edge machine learning, or if you write your own software stacks for niche stuff, Chinese hardware will be killer.
But if you’re trying to run Steam games? Or CUDA projects? That’s a whole different story. It doesn’t matter how good the hardware is; they’re always going to be handicapped by software support for “legacy” code, not just in performance but in driver bugs/quirks.
Proton (and focusing everything on a good Vulkan driver) is not a bad path forward, but still. They’re working against decades of dev work targeting AMD/Nvidia/Intel, up and down the stack.


It’s language for people who advertise they know something about “AI,” but couldn’t implement it if their life depended on it.
TBH I’ve never heard that one, but it sounds like they’re trying to use “gradient descent” in a sentence.