Debugging only teaches logic. Not structure. No amount of cut, paste, debug will teach you the factory pattern.
The AI bubble is currently grinding my gears on this. “XXX is an open source model”. No, it’s not. Do I have access to all of the information necessary to recreate it? No, I don’t, as nobody releases the training data.
Training data is the source of these models. Without it, they are just free use.
The trouble is that “core” is just that. The heart of the processor. There’s a lot of shared state in the caches and the TLBs which is all common to multiple cores.
At this point I think speculation attacks are almost being accepted as the price of having high performance processors. It’s almost impossible to rewind all non-architectural state when you hit a mis-speculated branch.
Probably because although there are fabs going up around the world (USA and Europe), TSMC Taiwan seems to hold the latest technology nodes, and they aren’t interested in growing capacity. They seem to like having the high-end, expensive, limited process. All the other fabs are coming up with processes 2 or 3 generations back (5nm or 7nm, not 2nm or 3nm).
All of which means that although there’s a market for the optics, it’s not the bleeding-edge stuff.
My comment is more about how we have this decentralised tool, but we’re unable to get our collective heads out of the centralised model. We ended up turning it back into a centralised VCS.
Ah… Git.
The decentralised version control system.
I’d want the floppy disc to be a standard size though. You can’t just chop a ¼" off like that. It won’t work.
Bets on which car company is going to be the first to EOL a server and brick a bunch of cars because some key feature is now “unsupported”?
…because something needs to check you’ve paid your subscription. A man in the middle.
To those suggesting mumble, are there any good guides out there? The website is shockingly bad for introductory information.
“Dozens”. Really?!?!
Late 90s was 350nm down to 180nm (known as 0.35um and 0.18um respectively). Things were still pretty honest around then.
2010s is probably where most of the shenanigans started.
The reason we went multicore was that the frequencies weren’t scaling, but the number of transistors was. We’ve been around the 200–300ps clock cycle for a long time now.
If you have tabs like that, they’re not “open”. They are crumbs left as you wandered the internet. You’re not going back to them. Do yourself a favour and close them.
It’s like having thousands of unread emails in your inbox. At some point you have to stop kidding yourself you’re going to read them.
Since ads began, there have been ad-blockers. You just didn’t know about them.
What do you mean “work”? What is it that needs to move?
You just fire up Firefox and start using it. It’ll even scrape your Chrome setup to move bookmarks and stuff over.
It’s not an OS. It’s an application.
This is what a lack of competition looks like.
However… Twice the price of 4nm? The gains are fairly marginal from what I gather. I don’t think many will bother.
They’ll just transition to the next fad that needs large amounts of parallel compute. It’s what they’ve just done moving from cryptocurrency to machine learning.
Here’s something that might blow your mind. Coverage is not the point of tests.
If your passing tests get 100% coverage, you can still have a bug. You might have a bunch of conditions you’re not handling and a shit test that doesn’t notice.
Write tests first to completely define what you want the code to do, and then write the code to pass the tests.
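A minimal sketch of the point about coverage (hypothetical `classify` function and test, in Python): every line executes, the test passes, yet an unhandled condition slips through.

```python
def classify(age):
    # Bug: a negative age is nonsense, but nothing rejects it.
    if age >= 18:
        return "adult"
    return "minor"

def test_classify():
    # Both branches run, so line coverage is 100%...
    assert classify(30) == "adult"
    assert classify(10) == "minor"
    # ...but classify(-5) silently returns "minor" instead of
    # raising an error, and no test ever asks about it.

test_classify()
```

Writing the tests first forces you to enumerate cases like the negative age before the implementation exists to hide them.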