I once asked AI if there were any documented cases of women murdering their husbands. It said NO. I challenged it multiple times. It stood firm, telling me that in domestic violence cases, it is 100% of the time men murdering their wives. I asked “What about Katherine Knight?” and it said, I shit you not, “You’re right, a woman was found guilty of killing her husband in Australia in 2001 by stabbing him, then skinning him and attempting to feed parts of his body to their children.”…
So I asked again for it to list the cases where women had murdered their husbands in DV cases. And it said… wait for it… “I can’t find any cases of women murdering their husbands in domestic violence cases…” and then told me about all the horrible shit that happens to women at the hands of assholes.
I’ve had this happen loads of times, over various subjects. Usually followed by “good catch!” or “You’re right!” or “I made an error”. This was the worst one though, by a lot.
That is so fucked. It is shit like this that makes me not trust AI at all. One thing is how it gets things wrong all the time and never learns from mistakes or corrections. Another is that I simply do not trust the faceless people behind these AIs to be altruistic and not have an agenda with their little chat bots. There is a lot of potential in AI, but it is also a tool that can and will be used to mis- and disinform people, and that is just too dangerous on top of all the mistakes AI still makes constantly.
It’s such weird behavior. I was troubleshooting something yesterday and asked an AI about it, and it gave me a solution it claimed it had used for the same issue for 15 years. I corrected it: “You’re not real and certainly weren’t around 15 years ago”, and it did the whole “you’re right!” thing, but then immediately went back to speaking the same way.