For now, the artificial intelligence tool named Neutron Enterprise is just meant to help workers at the plant navigate extensive technical reports and regulations — millions of pages of intricate documents from the Nuclear Regulatory Commission that go back decades — while they operate and maintain the facility. But Neutron Enterprise’s very existence opens the door to further use of AI at Diablo Canyon or other facilities — a possibility that has some lawmakers and AI experts calling for more guardrails.
It’s just a custom LLM for records management and regulatory compliance. Literally just for paperwork, one of the few things that LLMs are actually good at.
Does anyone read more than the headline? OP even said this in the summary.
It depends what purpose that paperwork is intended for.
If the regulatory paperwork it’s managing is designed to influence behaviour, perhaps having an LLM do the work will make it less effective in that regard.
Learning and understanding is hard work. An LLM can’t do that for you.
Sure, it can summarise instructions to show you what’s more pertinent in a given instance, but is that the same as someone who knows what to do because they’ve been wading around in the logs and regs for the last decade?
It seems like, whether you’re using an LLM to write a business report, a legal submission, or an SOP for running a nuclear reactor, it can be a great tool, but it requires high-level knowledge on the part of the user to review the output.
As always, there’s a risk that a user just won’t identify a problem in the information produced.
I don’t think this means LLMs should not be used in high risk roles, it just demonstrates the importance of robust policies surrounding their use.
I agree with you, but you can see the slippery slope, with the LLM returning incorrect/hallucinated data in the same way that’s happening in the public space. It might seem trivial for documentation, until you realize the documentation could be critical for some processes.
If you’ve never used a custom LLM or wrapper for regular ol’ ChatGPT, a lot of what it can hallucinate gets stripped out and the entire corpus of data it’s trained on is your data. Even then, the risk is pretty low here. Do you honestly think that a human has never made an error on paperwork?
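For the curious, here’s roughly what that wrapper pattern looks like; a minimal sketch, assuming a toy document list, a naive keyword retriever, and the OpenAI chat API (none of this is Diablo Canyon’s actual setup):

```python
# Minimal sketch of a grounded "custom LLM wrapper" (illustrative only).
# The docs, the keyword retrieval, and the model name are all assumptions.
from openai import OpenAI

DOCS = [
    "Procedure A-1: annual license renewal paperwork is filed by the licensing group.",
    "Procedure B-7: surveillance test records are archived for ten years.",
]

def retrieve(question: str, top_k: int = 2) -> list[str]:
    # Toy keyword-overlap ranking; a real system would use vector search.
    words = set(question.lower().split())
    return sorted(DOCS, key=lambda d: -len(words & set(d.lower().split())))[:top_k]

def grounded_answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    resp = OpenAI().chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer ONLY from the documents provided. "
                        "If they don't contain the answer, say you don't know."},
            {"role": "user",
             "content": f"Documents:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(grounded_answer("How long are surveillance test records kept?"))
```

The system prompt is the part doing the work: the model is told to answer only from retrieved in-house documents, which is what strips out most of the open-ended hallucination.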
I have, and even a contained one can return hallucinated or incorrect data. So it depends on the application you use it for. For a quick summary / data search, why not? But if it’s for some operational process, that might be problematic.
NOOOOOO ITS DOING NUCLEAR PHYSICS!!!111
It’s eating the rods, it’s eating the ions!
I unfortunately don’t get it. Can someone explain?
This
Oh shit, I had already forgotten about this amid so many other scandals. The guy who said this is running the whole of the US like a fucking medieval kingdom; another reality slap in the face. At the time I was like, “surely no one in their right mind would vote for this scammer”.
Don’t blame the people who just read the headline.
Blame the people who constantly write misleading headlines.
There is literally no “artificial intelligence” here either.
Lol, in SoCal these are a landmark that most call “the boobs” or “the titties”
I’m shocked!
Looks like it’s a bit nippy out there, brrrr.
Huh, it’s really Russian roulette with how we’re all gonna die: could be WW3, could be another pandemic, or could be a bunch of AIs hallucinating and causing multiple nuclear meltdowns.
It’s literally just a document search for their internal employees to use.
Those employees are fallible humans trying to navigate tens of thousands of byzantine technical and regulatory documents all published on various dinosaur platforms.
AI hallucination is a very popular thing to get outraged about right now but don’t forget about good old fashioned bureaucratic error.
My employer implemented AI search/summarization of our docs/wiki/intranet/JIRA systems over a year ago and it has been very effective in my experience. It always links to the source docs, but it permits natural language queries and can do some reasoning about the contents of the documents to pull together information across a sea of text.
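For anyone wondering how the search half of such a tool works, here’s a rough sketch using sentence-transformers embeddings; the two intranet pages and their URLs are made up for illustration, and the point is that every hit carries its source link:

```python
# Rough sketch of semantic doc search that always returns the source link.
# The embedding model is a real public one; the pages are made up.
from sentence_transformers import SentenceTransformer, util

PAGES = [
    ("https://intranet.example/wiki/outage-checklist",
     "Checklist and approvals needed before starting a maintenance outage."),
    ("https://intranet.example/jira/OPS-123",
     "Ticket: replace the gasket on feedwater pump 2B during the next outage."),
]

model = SentenceTransformer("all-MiniLM-L6-v2")
page_vecs = model.encode([text for _, text in PAGES])

def search(query: str) -> dict:
    # Cosine similarity between the query and every indexed page.
    scores = util.cos_sim(model.encode(query), page_vecs)[0]
    best = int(scores.argmax())
    url, text = PAGES[best]
    return {"source": url, "excerpt": text, "score": float(scores[best])}

print(search("what do we need before a pump repair outage?"))
```

Because the result is a link plus an excerpt rather than free-form generated text, a wrong answer fails loudly (the linked doc doesn’t say what you expected) instead of silently.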
Nothing that is mission critical enough to lead to a reactor meltdown should ever be blindly trusted to these tools.
But nothing like that should ever be trusted to the whims of one fallible human, either. This is why systems have protocols, checks and balances, quality controls, and failsafes.
Giving employees a more powerful document search doesn’t somehow sweep all that aside.
But hey, don’t let a rational, down-to-earth argument stand in the way of freaking out about a sci-fi dystopia.
Don’t forget the inevitable climate change.
I can only hope my bingo card somehow explodes & kills me.
The LLM told me that control rods were not necessary, so it must be true
The chatbot said 3.6 Roentgen is just fine and the core cannot have exploded, maybe we heard a truck driving by
The original article at the non-profit website: https://themarkup.org/artificial-intelligence/2025/04/08/for-the-first-time-artificial-intelligence-is-being-used-at-a-nuclear-power-plant-californias-diablo-canyon
Finally we get the sequel to “Chernobyl” … set in America…
Live action at that
They made the prequel already - wiki/Three_Mile_Island_accident.
Can we not have the lying bots teaching people how to run a nuclear plant?
Diablo Canyon
The nuclear power plant run by AI slop is located in a region called “Diablo Canyon”.
Right. We sure this isn’t an Onion article? …actually no, it couldn’t be, The Onion’s writers aren’t that lazy.
Fuckin whatever, I’m done for the night. Gonna head over to Mr. Sandman’s squishy rectangle. …bet you’ll never guess what I’m gonna do there!!
What could go wrong?
Using AI in a nuclear plant at Diablo Canyon… it’s so on the nose you’d say it’s lazy writing if it were part of the backstory of some sci-fi novel.
Well, considering it’s exclusively for paperwork and compliance, the worst that can happen is that someone might rely on it too much, file an incorrect, I dunno, license renewal with the DOE, and be asked to do it again.
Ah. The horror.
When it comes to compliance and regulations, anything with the literal blast radius of a nuclear reactor should not be trusted to an LLM unless double- or triple-checked by another party familiar with said regulations. Regulations were written in blood, and an LLM hallucinating a safety procedure or operating protocol is a disaster waiting to happen.
I have less qualms about using it for menial paperwork, but if the LLM adds an extra round-trip to a form, it’s not just wasting the submitter’s time, but other people’s as well.
All the errors you know about in the nuclear power industry are human-caused.
Is this an industry with a 100% successful operation rate? Not at all.
But have you ever heard of a piece of paperwork with an error submitted to regulatory officials and lawyers outside the plant causing a critical issue inside the plant? I sure haven’t. Please feel free to let me know if you are aware of such an incident.
I would encourage you to learn more about how LLM and SLM architectures work. This article is more nothingburger superlative clickbait, IMO. At the very least, it appears to be air-gapped if it’s running locally, which is nice.
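On the air-gapped point: loading a model strictly from local disk is straightforward in transformers. A sketch, assuming the weights were already copied to a hypothetical local directory (the path and prompt are made up):

```python
# Sketch of loading a local model strictly from disk, with no network calls.
# MODEL_DIR is an assumption: any pre-downloaded causal LM would do.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

MODEL_DIR = "/opt/models/local-slm"  # weights copied over beforehand

tok = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

generate = pipeline("text-generation", model=model, tokenizer=tok)
print(generate("Summarize the filing steps for an annual license renewal:",
               max_new_tokens=80)[0]["generated_text"])
```

With `local_files_only=True`, loading fails rather than phoning home, which is exactly the behavior you want on an isolated plant network.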
I would bet money that this will be entirely managed by the most junior compliance person who is not 120 years old, with more senior folks cross checking it with more suspicion than they would a new hire.
I’m not sure if that opening sentence is fatuous or not. What errors in any industrial enterprise are not human in origin?
to people who say it’s just paperwork or whatever and it doesn’t matter: this is how it begins. they’ll save a couple of cents here and there, and then they’ll want to expand this.
Also, it’s not like the paperwork isn’t important.
That’s textbook slippery slope logical fallacy.
Slippery slope arguments aren’t inherently fallacious.
it’s not actually. there’s barely an intermediate step between what’s happening now and what I’m suggesting it will lead to.
this is not “if we allow gay marriage people will start marrying goats”. it’s “if this company is allowed to cut corners here they’ll be cutting corners in other places”. that’s not a slope; it’s literally the next step.
slippery slope fallacy doesn’t mean you’re not allowed to connect A to B.
You may think it’s as plausible as you like. Obviously you do or you wouldn’t have said it. It’s still by definition absolutely a slippery slope logical fallacy. A little will always lead to more, therefore a little is a lot. This is textbook. It has nothing to do with companies, computers, or goats.
this is textbook fallacy fallacy
True, but if you change the argument from “this will happen” to “this will happen more frequently”, then it’s still a very reasonable observation.
All predictions in this vein are invalid.
If you want to say “even this little bit is unsettling and we should be on guard for more,” fine.
That’s different from “if you think this is only a small amount you are wrong because a small amount will become a large amount.”
What could possibly go wrong?
Fucking christ…
Skynet is fully operational, operating at 60 teraflops.
Dave, I don’t know what to tell you, but you can’t come in, alright?