I tell people who work under me to scrutinize it like it’s a Google search result chosen for them using the old I’m Feeling Lucky button.
Just yesterday I was having trouble enrolling a new agent in my ELK stack. It wanted me to obliterate a config and replace it with something else. That would literally have broken everything.
It’s like copying and pasting Stack Overflow into prod.
AI is useful. It is not trustworthy.
Sounds more actively harmful than useful to me.
If you would say the same for stack overflow and Google, then sure.
Otherwise, absolutely not.
When it works it can save time automating annoying tasks.
The problem is the “when it works” part. It’s like having to do a code review mid-task every time the dumb machine does something.
So it causes more harm than benefit, which means it isn’t useful.
“When it works” it creates the need for oversight because “when it doesn’t work” it creates massive liabilities.
I’m not disagreeing lol.
I bring it up to explain why losers are obsessed with it.
Is it useful yet? I’m impressed by how far it’s come, but I have yet to find a use for it.
Just the other day I was researching potential solutions to a programming issue I had at work. Basically, I asked AI, “Is there an API call available to tweak this config?” It responded, “Yes, you can do that with the tweak-that-config command.”
I went to check the documentation for the “tweak-that-config” command. It just plain didn’t exist, and never had. Turns out there was no API call to tweak the config I wanted, and attempting to use AI as a search engine is, in fact, a waste of time.
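What I do now before acting on an answer like that is a quick existence check. A minimal sketch in Python, where the module and function names are hypothetical stand-ins for whatever the AI claims:

    import importlib

    # Hypothetical names: the AI claimed some_sdk.tweak_that_config() exists.
    CLAIMED_MODULE = "some_sdk"
    CLAIMED_CALL = "tweak_that_config"

    try:
        mod = importlib.import_module(CLAIMED_MODULE)
        exists = hasattr(mod, CLAIMED_CALL)
    except ImportError:
        exists = False

    # False means the "API call" was hallucinated; go read the real docs.
    print(f"{CLAIMED_MODULE}.{CLAIMED_CALL} exists: {exists}")

It doesn’t prove the call does what the AI says it does, but it catches pure inventions like mine in seconds.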
I know nothing about stacking elk, though I’m sure it’s easier if you sedate them first. But yeah, common sense and a healthy dose of skepticism seem like the way to go!
If it costs more than the benefit then it isn’t even useful.
Yeah, you just have to practice a little skepticism.
I don’t know what its actual error rate is, but say hypothetically that it gives bad info 5% of the time: you wouldn’t want a calculator or an encyclopedia that was wrong that often, but you would really value an advisor who pointed you toward the right info 95% of the time.
5% error rate is being very generous, and unlike a human, it won’t ever say “I’m not sure if that’s correct.”
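And even a hypothetical 5% per-answer error rate compounds quickly once you chain answers together. A quick back-of-the-envelope in Python, generously assuming the errors are independent:

    # Chance that at least one answer in an n-answer session is wrong,
    # given a 5% per-answer error rate and independent errors.
    p_error = 0.05

    for n in (1, 5, 10, 20):
        p_any_wrong = 1 - (1 - p_error) ** n
        print(f"{n:2d} answers -> {p_any_wrong:.0%} chance of at least one error")

    # Prints: 1 -> 5%, 5 -> 23%, 10 -> 40%, 20 -> 64%

So a session of twenty questions is more likely than not to hand you at least one confident falsehood, which is why every answer needs checking.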
Considering the insane amount of resources AI consumes, and the fact that it’s probably ruining the research and writing skills of an entire generation, I’m not so sure it’s a good thing, to say nothing of the implications it has for mass surveillance and deepfakes.
I think DeepMind showed in 2022 that it would never reach 95% accuracy at matching human input, even with unlimited compute and training data. That was a response to an earlier OpenAI scaling-laws paper with similar findings, one that accurately predicted the performance of each new model released after it was published.
https://arxiv.org/pdf/2001.08361 (Kaplan et al., “Scaling Laws for Neural Language Models”, OpenAI, 2020)
https://arxiv.org/pdf/2203.15556 (Hoffmann et al., “Training Compute-Optimal Large Language Models”, DeepMind, 2022)
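For what it’s worth, the relevant part of the second paper (the DeepMind one) is its fitted loss curve, which includes an irreducible error term that survives no matter how large the model or dataset gets. A rough sketch in Python using the paper’s reported fit; I’m quoting the constants from memory, so treat them as approximate:

    # Parametric loss fit from Hoffmann et al. 2022 ("Chinchilla"):
    #   L(N, D) = E + A / N**alpha + B / D**beta
    # N = parameter count, D = training tokens.
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

    def loss(n_params: float, n_tokens: float) -> float:
        return E + A / n_params**alpha + B / n_tokens**beta

    print(loss(70e9, 1.4e12))   # roughly Chinchilla's own scale: ~1.9
    print(loss(1e18, 1e18))     # absurdly large: the N and D terms vanish,
                                # but the loss never drops below E = 1.69

Whether that floor translates into the exact “95%” figure above is a looser reading, but the basic point, that scaling alone can’t push error to zero, is right there in the fit.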
I think of it like talking to some random know-it-all who sidles up next to you at the bar. Yeah, they may have interesting stories, but are you really going to take legal advice from them?