A lot of the hype is because it’s Rockstar. There aren’t many studios with their level of attention to detail and the budget to make it come true.


All machine learning is and has always been part of the artificial intelligence field. They’re doing AI, whether that happens to be a trending term or not.


unthinkable? Coordinating an attack to kidnap the families of top executives?


year of the french linux desktop


yeah, well, that’s what Mr. Robot did, so that’s enough for me


my signal notification history is a lot of “Locked message”


any ideas how well Linux runs on Snapdragon X2?


“Make no mistakes” wasn’t enough?? 😮


I thought it was a given that, if it’s a phone app, you’re likely to want to carry the phone with you, not leave it on a desk with your computer.


that would still require the OS or user to spoof a location to actually prevent tracking. I hope they’d do that, but I wouldn’t expect them to.


HEL


you can’t compare it to the islands, because the GPS trace sits inside that green circle, which is drawn at a different scale. You can only go by the 300 m scale bar in the bottom right, which puts it in the ballpark of an aircraft carrier to me


so… some really basic shit that should have been expected in a pre-2010 update + AI
Well done, guys. I guess you gotta start somewhere.


you’re the one comparing it to Linux


You don’t need that assumption. Your assumption can just be “the person and the vessel (or a point on the vessel, like its center of mass) don’t diverge significantly over time”.
Then, if you treat velocity as a vector and compute the person’s average velocity vector over time, you’ll get a pretty close estimate of the vessel’s velocity vector.
After all, if those two average vectors (the vessel’s and the person’s) differed much, they would end up in different locations.
The average basically zeroes out the vector for each lap the person does, so the remainder must be the vessel’s.
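A minimal sketch of that argument in Python, with made-up numbers (the 2 m/s drift, 20 m lap radius, and 1 s sample interval are purely illustrative): the person’s lap motion cancels out of the average, and what’s left is roughly the vessel’s velocity.

```python
import math

# Minimal sketch: a person walks laps on deck while the vessel drifts.
# Averaging the person's velocity over a long window cancels the lap motion,
# leaving roughly the vessel's own velocity vector.

def average_velocity(positions, dt):
    """Average velocity vector = total displacement / total time."""
    (x0, y0), (xn, yn) = positions[0], positions[-1]
    total_time = dt * (len(positions) - 1)
    return ((xn - x0) / total_time, (yn - y0) / total_time)

# Hypothetical numbers: vessel drifts east at 2 m/s, person walks a 20 m-radius lap each minute.
dt = 1.0  # seconds between position fixes
positions = []
for t in range(601):  # 10 minutes of samples
    angle = 2 * math.pi * (t / 60.0)          # one lap per minute
    x = 2.0 * t + 20.0 * math.cos(angle)      # vessel drift + lap offset (east)
    y = 20.0 * math.sin(angle)                # lap offset (north)
    positions.append((x, y))

vx, vy = average_velocity(positions, dt)
print(f"estimated vessel velocity: ({vx:.2f}, {vy:.2f}) m/s")  # ~ (2.00, 0.00)
```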


yeah, I think the whole “water” argument really dilutes the case against data centers.
On a serious note, the argument works for areas that already struggle to supply enough water to consumers. Otherwise, we should be focusing more on the stress on the power grid, and on the domino effect of hardware cost increases rippling through supply chains across many industries. It started with GPUs; now it’s CPUs, storage, networking equipment, and other components.
If these prices stay this high for a couple of years, we’ll start seeing generalized price increases as companies pass the costs along to consumers.


It’s not, I read the code. It’s not merely asking the LLM for recommendations; it’s using embeddings to compute similarity scores.
It’s a lot closer to traditional natural language processing than to how my dad would use GPT to discuss philosophy.
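For illustration, a minimal sketch of that kind of embedding-based ranking (the vectors, names, and scoring function here are made up; the real code presumably gets its embeddings from the model’s embedding endpoint):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_candidates(query_embedding, candidates):
    """Score each (name, embedding) candidate against the query and sort best-first."""
    scored = [(name, cosine_similarity(query_embedding, emb)) for name, emb in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy 3-dimensional vectors standing in for embeddings returned by a model.
query = [0.9, 0.1, 0.0]
candidates = [("close match", [0.8, 0.2, 0.1]), ("unrelated", [0.0, 0.1, 0.9])]
print(rank_candidates(query, candidates))
```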
Ok, I’m not suggesting replacing humans with AI, and I despise companies trying to pull off that unsustainable practice.
With that out of the way, I’ll restate that LLMs follow some rules more reliably than humans do today. It’s also easier to give feedback when you don’t have to worry about coming across as a pedantic prick for pointing out the smaller things.
On your point that LLMs are not improving: well, agents and tooling are definitely improving. 6 months ago I would need to babysit an agent to implement a moderately complex feature that touches a handful of files. Nowadays, not as much. It might get some things wrong, but usually because it lacks context rather than ability. They can write tests, run them, and iterate until they pass; then I can just look at the diff to make sure the tests and solution make sense. Again, something that would have failed to yield decent results just last year.


No, it also doesn’t do that. It gets embeddings from an LLM and uses those to rank candidates.


there have been enough reasons to ditch Nova in the past years, in case someone is still using it