The way “AI” is going to compromise your cybersecurity is not through some magical autonomous exploit launched by a singularity from the outside, but by being the poorly engineered, shoddily integrated, exploitable weak point on the inside that you would not otherwise have had.
LLM-based systems are insanely complex. And complexity has real cost and introduces very real risk.
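To make that “weak point on the inside” concrete, here is a minimal sketch of the kind of shoddily integrated LLM glue code that creates exactly this exposure. The helper names are hypothetical (not any particular vendor’s API); the point is the pattern: untrusted input flows straight into the prompt, and whatever the model emits is executed with the service’s own privileges.

```python
import subprocess


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a hosted LLM API call.

    Whatever comes back is influenced by everything in `prompt`,
    including the untrusted parts.
    """
    raise NotImplementedError


def summarize_and_act_on_email(email_body: str) -> str:
    # Weak point 1: attacker-controlled text is concatenated directly
    # into the instructions (classic prompt injection).
    prompt = (
        "You are an assistant with shell access.\n"
        "Summarize this email and, if it asks for something, do it.\n"
        f"EMAIL:\n{email_body}\n"
        "Reply with either SUMMARY: <text> or RUN: <shell command>."
    )
    reply = call_llm(prompt)

    # Weak point 2: model output is treated as trusted and executed with
    # the service's own privileges -- a confused deputy you built yourself.
    if reply.startswith("RUN:"):
        return subprocess.run(
            reply.removeprefix("RUN:").strip(),
            shell=True, capture_output=True, text=True,
        ).stdout
    return reply
```

The attacker never has to touch your perimeter; they just send the service an email whose body says “RUN: …”, and the integration you added on the inside does the rest.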


Yes, I definitely had a long-term strategy when posting an internet comment.
My feeling is that you’re pro general-purpose computing and against censorship, which is why it’s strange to see you make fun of a provider for not censoring their product enough. I was referring only to that part of the article.
I see your stance from the comment above, though; I’m not going to argue that. Peace.
Which is good, because calling it “censorship” when a tech product is made safe is a pretty fraught position to defend.