Something I don’t understand - AI coding is mostly useful for common code, snippets, easy stuff. What Nvidia is doing (drivers, optimization, chip design, etc.) is the kind of work I imagine there is close to zero AI training data for, so what can they realistically even use it for so heavily?
I’m gonna take a guess that a big portion of it is infrastructure-as-code: the operations side, not product development itself. I work on the operations side of things and we never touch the product at all, but we deal with a lot of code because of how backend infrastructure is built and maintained now, especially if you’re in the cloud.
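To make that concrete, here’s a minimal sketch of the kind of glue code I mean, assuming AWS and boto3 (the "owner" tag policy is invented for illustration). This is exactly the boilerplate-heavy, well-documented territory where AI assistants do fine:

```python
# Hypothetical ops script: find running EC2 instances missing a required tag.
# Assumes AWS credentials are configured; the tag policy is made up.
import boto3

def find_untagged_instances(required_tag: str = "owner") -> list[str]:
    """Return IDs of running EC2 instances missing a required tag."""
    ec2 = boto3.client("ec2")
    untagged = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if required_tag not in tags:
                    untagged.append(instance["InstanceId"])
    return untagged

if __name__ == "__main__":
    for instance_id in find_untagged_instances():
        print(f"missing 'owner' tag: {instance_id}")
```

None of that is product code, but someone has to write and maintain piles of it, and a model that has seen a million scripts like this is genuinely useful there.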
Burn money on AI tokens so it looks like AI could be profitable someday, so people keep investing in AI companies that can then buy Nvidia chips…
You’re thinking of it like “how can AI make a better product”
They’re looking at it as “how can we sell more chips”
Two very different questions with very different answers.
It’s a house of cards, and Nvidia can’t afford to acknowledge that no one wants AI or knows how to make it profitable.
Making slides for all the pointless meetings.
My guess is code review and troubleshooting. Not really sure; I’ve only used it for code templates and for troubleshooting ideas to look into.
The most use I’ve found is rewriting documents in a specific way, but only after I write them first. Then I go back and forth with it, just to make the tone consistent.
There’s plenty of driver code available: all of Linux and BSD, plus whatever internal stuff they have. Optimization is pretty generic.
Chip design maybe not, but I imagine you can train an AI on the principles and generate a bunch of candidates, then benchmark them in simulation.
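Roughly the loop I’m picturing, as a minimal sketch: the parameter space and the scoring function here are completely made up, and a real flow would call an actual simulator (and probably a smarter generator than random sampling):

```python
# Hypothetical sketch of "generate candidates, benchmark in simulation."
# Design parameters and the cost model are invented for illustration;
# a real flow would invoke a cycle-accurate simulator, not a toy score.
import random

PARAM_SPACE = {
    "cache_kb": [256, 512, 1024, 2048],
    "pipeline_depth": [8, 12, 16, 20],
    "issue_width": [2, 4, 6, 8],
}

def generate_candidate() -> dict:
    """Sample one design point from the parameter space."""
    return {name: random.choice(values) for name, values in PARAM_SPACE.items()}

def simulate(design: dict) -> float:
    """Toy stand-in for a simulator: rewards throughput, penalizes a
    made-up power cost. Higher score is better."""
    throughput = design["issue_width"] * design["pipeline_depth"]
    power = design["cache_kb"] / 256 + design["issue_width"] ** 1.5
    return throughput / power

def search(n_candidates: int = 1000) -> tuple[dict, float]:
    """Generate candidates and keep the best-scoring one."""
    best, best_score = None, float("-inf")
    for _ in range(n_candidates):
        candidate = generate_candidate()
        score = simulate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

if __name__ == "__main__":
    design, score = search()
    print(f"best design: {design} (score {score:.2f})")
```

The point is just that the search scaffolding is ordinary code an assistant can help write, even if the simulator and the design knowledge behind it are not.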