“Broke containment” to me means two things:
The former is a big nothing. Obviously they just need to build stronger safeguards; that’s what they’ll do, and eventually they’ll release it, or other models, or whatever.
The latter is also a big nothing. People who know nothing about tech will say “OH SHIT, IT ESCAPED,” but the model needs large, specialized hardware to run, it can’t “get into the internet” the way those people imagine, and if it’s doing things you don’t want on the internet, you just cut off its internet access.
So in both cases, the “containment” issue is really not a big deal.
I agree with those who say this is basically an ad, trying to sell it as super-capable-oh-shit-amazing.
[x] Doubt
The company whose current safeguards amount to “please write secure code” will have to improve those safeguards? I’m shocked, absolutely shocked.
(2) can mean getting access to production credentials of something important and causing an incident for the ages.
AWS already had a few because they gave agents too much access.
Yeah, in that scenario they gave the agents access. Just because you ask it nicely not to destroy your workspace doesn’t guarantee the LLM won’t produce that output.
With Claude Code able to run the code it creates, it could be as simple as this: it’s in a sandbox, you ask it to work on security things, it finds an exploit in the sandbox, it tests the code, the sandbox breaks, and now it has permissions outside it.
I suppose that would be possible.