Oh, so my sceptical, uneducated guesses about AI are mostly spot on.
As a computer science experiment, making a program that can beat the Turing test is a monumental step forward.
However, as a productivity tool it is useless for practically everything it is applied to. It is incapable of performing the very basic "sanity check" that is important in programming.
The Turing test says more about the side administering the test than the side trying to pass it.
Just because something can mimic text well enough to trick someone doesn't mean it is capable of anything more than that.
We can argue about its nuances, same with the Chinese room thought experiment.
However, we can't deny that the Turing test is no longer a thought exercise but a real test that can be passed under parameters most people would consider fair.
I thought a computer passing the Turing test would come with more fanfare about the morality of that problem, because the usual conclusion of the thought experiment was "if you can't tell the difference, is there one?", but instead it has become "Shove it everywhere!!!".
Oh, I just realized that the whole AI bubble is just "everything is a dildo if you are brave enough."
Yeah, and "everything is a nail if all you've got is a hammer".
There are some uses for that kind of AI, but they're very limited: less robotic voice assistants, content moderation, data analysis, quantification of text. The closest thing to a generative use should be improving autocomplete and spell checking (maybe, I'm still not sure about those ones).
I was wondering how they could make autocomplete worse, and now I know.
In theory, I can imagine an LLM fine-tuned on whatever you type, which might be slightly better than the current ones.
Emphasis on the might.
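A minimal sketch of that idea, with a simple bigram counter standing in for the fine-tuned LLM (the class name, the training text, and the assumption that learning from your own history fixes the bias are all mine, not anything that exists):

```python
# Toy sketch: a completer "fine-tuned" on whatever you type.
# A bigram counter stands in for the LLM; the point is only that a
# predictor trained on YOUR history stops demoting your own acronyms.
from collections import Counter, defaultdict


class PersonalCompleter:
    def __init__(self):
        # For each word, count which words you typed after it.
        self.next_words = defaultdict(Counter)

    def train(self, text):
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.next_words[prev][nxt] += 1

    def complete(self, prev_word):
        # Suggest the continuation you personally use most often.
        counts = self.next_words[prev_word.lower()]
        return counts.most_common(1)[0][0] if counts else None


pc = PersonalCompleter()
pc.train("send the ACK packet then wait for the ACK response")
print(pc.complete("the"))  # "ack" — learned from this user's own history
```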
Well, right now I have autocorrect changing real words into jumbles of letters due to my years of working with acronyms, and autocomplete changing words like "both" to "bitch", "for" to "fuck", etc., because these systems replace less-used words with more-used ones (making the issue worse).
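That failure mode is easy to reproduce with a toy, Norvig-style corrector. The frequency table below is made up for illustration, but the ranking rule (prefer the most common known word within one edit) is the part doing the damage:

```python
# Toy sketch (not any real autocorrect engine) of frequency-weighted
# correction: candidates within one edit are ranked purely by corpus
# frequency, so a rare but correct token loses to a common neighbour.

# Hypothetical frequencies: "ack" (the acronym) is rare, "back" is common.
FREQ = {"back": 90_000, "ack": 50, "both": 40_000, "bitch": 40_500}


def edits1(word):
    """All strings one delete/insert/substitute away from `word` (no transposes)."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    inserts = [a + c + b for a, b in splits for c in letters]
    subs = [a + c + b[1:] for a, b in splits if b for c in letters]
    return set(deletes + inserts + subs)


def autocorrect(word):
    """Pick the most frequent known word within one edit, including the word itself."""
    candidates = {w for w in edits1(word) | {word} if w in FREQ}
    return max(candidates, key=FREQ.get) if candidates else word


print(autocorrect("ack"))  # "back" — the rare acronym gets "corrected" away
```

Even though "ack" is in the dictionary, the frequency prior steamrolls it, which is exactly the acronym problem described above.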
The Turing Test has shown its weakness.
Time for a Turing 2.0?
If you spent a lifetime with a bot wife and were unable to tell that she was an AI, is there a difference?