

Companies are building entire workflows around AI, but they are building them under the assumption that they will never be charged per token.
Or worse: the AI models underpinning a workflow break or get degraded in some way to cut token usage, and then start behaving unexpectedly inside a process that assumes a particular kind of behavior.



The ones that power spacecraft generate less than 5000W of heat at max power (while producing 300W of usable electricity).
To power a single server rack of 72 Blackwell GPUs, which draws about 130,000 watts, you’d need about 430 of those RTGs, and you’d have to manage roughly 430 times as much waste heat (plus however much additional power the cooling system itself requires).
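The back-of-the-envelope math can be sketched as follows. The specific wattages here are the approximate figures from the paragraph above (300 W electric output, under 5000 W of waste heat per RTG, 130,000 W per rack); they are illustrative assumptions, not precise specs.

```python
import math

# Approximate per-RTG figures (assumptions taken from the text, not exact specs)
RTG_ELECTRIC_W = 300      # usable electrical output per RTG
RTG_THERMAL_W = 4400      # waste heat per RTG (under the "less than 5000W" figure)

RACK_POWER_W = 130_000    # one rack of 72 Blackwell GPUs

# How many RTGs to meet the rack's electrical demand
rtgs_needed = math.ceil(RACK_POWER_W / RTG_ELECTRIC_W)

# Total waste heat those RTGs dump, which the cooling system must reject
waste_heat_w = rtgs_needed * RTG_THERMAL_W

print(rtgs_needed)             # → 434, i.e. "about 430"
print(waste_heat_w / 1_000_000)  # → 1.9096, i.e. roughly 1.9 MW of heat
```

Note that the cooling load scales with the thermal output (~4–5 kW per RTG), not the 300 W electrical output, which is why the heat-rejection problem dominates.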