ChatGPT has created a storm of opinion, both for and against, although it is not the only Large Language Model around. The way it writes on complex theses can only be compared with human writing.
But does ChatGPT think while writing? I asked the LLM itself, and it said it has no idea what "think" means, but that it might soon enough. I was reading an article pitting Chomsky against ChatGPT. It hit me suddenly that the article was written by somebody, or something, whose thoughts are beyond my senses. Was he thinking while preparing the feature? If so, how do I know that he was thinking?
The answer is simple: I cannot know that. The brain activity of that person is masked from me. Connect electrodes? They would reveal brain waves. But the electrical activity accompanying a similar feature written by ChatGPT would look equivalent.
The appalling truth is that my friend might be a machine, one with neural networks capable of deep learning and humanoid output. My friend might be a machine. I might be a machine.
Humans, or presumably humans, made all the weapons around us that could wipe out life entirely. Does that sound like rational thought? Like any kind of thought? The damage to the environment does not seem like the consequence of rational thought either.
Now, like Dr. Frankenstein, humans have started work on the weapon of ultimate destruction. Of course everyone is aware of this, but the progress will not stop. On LinkedIn, one person compared my worries to the worry of teachers when calculators were introduced. I was also advised to acquire some skill an AI machine would be unlikely to possess.
Still, seeing some of the responses I get from ChatGPT et al., I confess to a degree of apprehension new to me. Imagine AI and Neuralink together. And remember old HAL.
And then appreciate that the meaning of humanity needs to be redefined.