

HAProxy is not meant for complex routing or handling of endpoints. It’s a simple service for Load Balancing or proxying alone. All the others have better features otherwise.


I’ll be honest with you here, Nginx kind of ate httpd’s lunch 15 years ago, and with good reason.
It’s not that httpd is “bad”, or not useful, or anything like that. It’s that it’s not as efficient or fast.
Apache DID try to address this a while back, but it was too late. All the better features of nginx just kinda did httpd in, IMO.
Apache is fine, it’s easy to learn, and there’s a ton of docs around for it, but it has a massively diminished userbase, meaning less up-to-date information for new users to find in forums and the like.


It’s called a Reverse Proxy. The most popular options are going to be Nginx, Caddy, Traefik, Apache (kinda dated, but easy to manage), or HAProxy if you’re just doing containers.
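If it helps to see the idea concretely, here’s a bare-bones sketch in Python of what a reverse proxy actually does. The backend address and ports are made up, and you’d use one of the tools above rather than hand-rolling this; it’s only here to show the mechanics.

```python
# Minimal reverse-proxy sketch using only the Python standard library:
# accept requests on a public port and forward them to a backend service.
# UPSTREAM_HOST/UPSTREAM_PORT and the listen port are hypothetical.
import http.client
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

UPSTREAM_HOST, UPSTREAM_PORT = "127.0.0.1", 3000  # your actual app/service

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Relay the incoming path to the backend and pass its response back.
        conn = http.client.HTTPConnection(UPSTREAM_HOST, UPSTREAM_PORT)
        conn.request("GET", self.path, headers={"Host": self.headers.get("Host", "")})
        resp = conn.getresponse()
        body = resp.read()
        self.send_response(resp.status)
        for key, value in resp.getheaders():
            # Strip hop-by-hop headers; everything else passes through untouched.
            if key.lower() not in ("transfer-encoding", "connection"):
                self.send_header(key, value)
        self.end_headers()
        self.wfile.write(body)
        conn.close()

if __name__ == "__main__":
    # Clients only ever talk to :8080; they never see the backend directly.
    ThreadingHTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()
```

Nginx, Caddy, Traefik, and the rest do exactly this, plus TLS, caching, load balancing, and far better connection handling.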


That’s not really the point though. I’m not even talking about end users. Government agencies, corporate backend services, customer service agencies, and more are all abandoning Windows for Linux, partially because Win11 is a horrible product, but also because the requirements just keep growing, which is stupid.
Microsoft’s response to this is the above, which they were STAUNCHLY opposed to previously, because they need to try and force AI down users’ throats to justify the money they have pissed away on it. They’re shoehorning Copilot bullshit into every product line they have now, and it’s WILDLY unpopular and unnecessary. If this is the best they can do to address it, they’ll continue to hemorrhage users.
When more state agencies in the US start switching, they’ll release some “Windows Lite” bullshit, but it will be too late, because the commitments needed for these organizations to bother switching are massive. They’ll be losing licenses for an entire generation of Windows at the very least.


Looks like you’re hosting a mostly static frontend there. Could be hosting that for free in a number of places, and then you’d have no problem.


“Allow”
Fuck you, Microsoft. You and Apple have lost millions of users to Linux, and I’m here for it.


Unless they specify Solar, Wind, or Hydrogen, it’s just going to be these assholes building their own coal generators FFS.
Big NOPE


GEEEEEE, what a coincidence, eh? Almost like these companies may be coordinating some sort of market shift for some reason.
What do you call it when a bunch of companies responsible for large swathes of market share of a particular good or service use the guise of natural market pressure to create conditions unnaturally beneficial to themselves and not consumers?


First: there is no cheap way to back this amount of data up. AWS Glacier would be about $200/mo, PLUS retrieval and transfer charges of something like $500 if you ever need to pull it all back. R2 would be about $750/mo, with no transfer charges. Assume that most companies selling some sort of wacky competing backup product are themselves being billed by one of these providers, with you as the consumer, and you can see why this is the baseline of what you’ll be getting charged by them.
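Rough math behind those numbers, if you want to sanity-check it. The per-GB rates below are from memory and approximate, so treat them as assumptions and check the current pricing pages before planning around them.

```python
# Back-of-the-envelope cost math for backing up 50 TB.
# Assumed list prices (approximate, verify against current pricing):
#   AWS S3 Glacier Flexible Retrieval: ~$0.0036/GB-month storage,
#   plus ~$0.01/GB retrieval when you actually pull the data back.
#   Cloudflare R2: ~$0.015/GB-month storage, no egress charge.
data_gb = 50 * 1000  # 50 TB in GB, decimal, the way providers bill it

glacier_monthly = data_gb * 0.0036   # ≈ $180/mo, roughly the "$200/mo" figure
glacier_restore = data_gb * 0.01     # ≈ $500 if you ever restore everything
r2_monthly      = data_gb * 0.015    # ≈ $750/mo, nothing extra to restore

print(f"Glacier: ${glacier_monthly:.0f}/mo storage, ${glacier_restore:.0f} to restore it all")
print(f"R2:      ${r2_monthly:.0f}/mo storage, $0 to restore")
```

Any consumer backup product built on top of these has to recover at least that much from you, one way or another.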
50TB of what? If it’s just readily available stuff you can download again, skip backing that up. Only keep personal effects, and see how much you can reduce this number by.


It all is if you’re getting both. You’re sharing IPs with many different devices at the same time. That’s how it works.
Read up on it.


25% of what?
1/4 of 100% of what?
I’ve seen zero RISC devices in the wild, and the phrasing here makes it sound like I should have by now.


I think you’re missing the point or not understanding.
What you’re talking about is just running a model on consumer hardware with a GUI. We’ve been running models like that for a decade. Llama is just a simplified framework for end users running LLMs.
The article is essentially describing a map-reduce system spread over a number of machines for model workloads, meaning it’s batching the token work, distributing it across a cluster, then combining the results into a coherent response.
They aren’t talking about just running models as you’re describing.
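If it helps, here’s a toy sketch of that scatter/gather shape. Local processes stand in for cluster nodes, the function names are made up, and it has nothing to do with the real system’s internals; it just shows the split-distribute-combine pattern.

```python
# Toy map-reduce / scatter-gather sketch: split a batch of work into chunks,
# fan it out to workers, then merge the partial results back into one output.
# Local processes stand in for cluster nodes; a real setup would ship shards
# of model work to separate machines over the network.
from multiprocessing import Pool

def process_chunk(chunk):
    # Stand-in for the per-node model work done on one shard of the batch.
    return [token.upper() for token in chunk]

def scatter_gather(tokens, workers=4):
    # Scatter: carve the batch into contiguous chunks, one per worker.
    size = -(-len(tokens) // workers)  # ceiling division
    chunks = [tokens[i:i + size] for i in range(0, len(tokens), size)]
    with Pool(len(chunks)) as pool:
        partials = pool.map(process_chunk, chunks)
    # Gather: stitch the partial results back into one ordered response.
    return [tok for part in partials for tok in part]

if __name__ == "__main__":
    print(scatter_gather(["tokens", "get", "split", "across", "the", "cluster"]))
```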


I want to say Sonarr has a regex renaming feature, but I may just be making that up, as I’m not looking at my instance right now. Doing any pattern-based renaming during the download phase would be preferable, in order to keep each service’s metadata clean.
Failing that, if you have a predictable list of release-group strings you want removed from filenames, a one-liner with sed or similar would take care of it. You’d then break the known locations of these files for any service tracking them, of course, but they will eventually be reindexed.
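Same idea as the sed approach, sketched in Python so you get a dry run first. The directory, extension, and group names are placeholders; adjust them to your library before letting it actually rename anything.

```python
# Rough sketch: strip a known list of release-group tags out of filenames.
# Paths, extensions, and group names are placeholders. It prints a dry run
# by default; only pass apply=True once the output looks right.
import re
from pathlib import Path

RELEASE_GROUPS = ["GROUPA", "GROUPB"]  # the predictable strings you want gone
PATTERN = re.compile(
    r"[ ._-]*(?:" + "|".join(map(re.escape, RELEASE_GROUPS)) + r")",
    re.IGNORECASE,
)

def clean_names(root, apply=False):
    for path in Path(root).rglob("*.mkv"):
        new_name = PATTERN.sub("", path.name)
        if new_name != path.name:
            print(f"{path.name} -> {new_name}")
            if apply:
                path.rename(path.with_name(new_name))

clean_names("/media/tv")                 # dry run: just prints what would change
# clean_names("/media/tv", apply=True)   # uncomment to actually rename
```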


Seems kind of pricey for that specific unit, but it should work well for just hosting simple services.


It’s a dumbass AI-powered recommendation engine with an awful GUI. That’s about it.
As far as it being malicious, that’s really up to you.


For starters: Rails, PHP, and pass-through routing stacks like message handlers, or anything that expects the proxy to do socket handling. It’s just not built for that, OR for session management for those things if whatever it’s talking to isn’t doing it itself.
It seems like you think I’m talking smack about HAProxy, but you don’t understand its real origin or strengths and assume it can do anything.
It can’t. Neither can any of the other services I mentioned.
Chill out, kid.