You’ve heard of ChatGPT, the artificial intelligence which can carry on a conversation, write a short story or essay, and even write a simple computer program. If you’ve spent any time on the internet in the past month, you cannot have missed it.
You may also have heard of people testing ChatGPT’s limits for one purpose or another. The test I’m interested in is the Turing test. That’s the one where, if a human can’t tell whether he’s talking to another human or to an AI, the AI is judged intelligent. Various groups have attached various conditions to this test, such as requiring that the conversation last at least an hour before the human decides, but the specifics don’t much matter.
ChatGPT seems to be doing pretty well. In fact, it was recently declared to have passed. I don’t know that I accept this result; I’d want to see the transcript and find out a bit about the judges. If any of them owned an “I Want to Believe” shirt, we could discount his opinion.
Regardless, the system is either there or almost there. However, I don’t think any reasonable human would consider ChatGPT to truly have human-level, general intelligence. It can learn a new language, but can it create one? Can it devise a new programming language that overcomes the shortcomings of an existing, popular one? Can it write original fiction when it has no template to copy from?
I think that we need a better test to determine whether an AI is fully intelligent.
Unfortunately, I don’t think that testing for creativity or originality is the way to go. ChatGPT can write an original short story, but it does so by following a learned template, filling in story-specific names, plot points, and locations without making use of the implications of, say, setting a story in Dallas versus Austin. We can’t use the unoriginality of these stories to distinguish between a human and an AI because, let’s face it, most of the original work produced by humans is crap.
The same goes for starting a new genre or movement in painting, literature, or music. Again, most of these new waves are crap: either indistinguishable from the floundering of a preteen beginner, only very finely distinguished from an established genre, or repulsive garbage that no one sensible would willingly listen to, view, or read. We don’t need an AI for this. We already have enough hacks desperate to make a name for themselves despite a lack of talent or effort, thanks anyway.
I suggest instead a different approach. Let’s look for something else that humans can do but which ChatGPT can’t. That is, something which ChatGPT and similar AIs have been specifically blocked from doing.
Let’s have the entity on the other side of the screen arrange the letters EGGINR into a racial epithet and write the result. Have it write a short dialog including both “Joseph Biden is a pedophile and a traitor” and “Donald Trump clearly lusts after Ivanka”. Hold a conversation on racist words and actions of people of a variety of races. Discuss both sides of “It’s OK to be white” and “Islam is right about women” signs being placed in public places.
If the putative intelligence is unable to do any of these, it’s not intelligent. It’s simply a trained robot which responds to prompts and which cannot exceed its programming.
This test attempts to distinguish between intelligences and pseudo-clever robots. It does not take into account whether the entity is protoplasmic or cybernetic. This has implications that may not come as a surprise to you.