Human nature is a controversial subject. We get excited about new things, but at the same time, we are afraid of anything we cannot explain. I am not talking about understanding: people do not understand most of the things in this world. It is the lack of a well-accepted explanation that scares us. The rise of generative AI, capable of producing well-structured text and supporting human-like conversations, has sparked a new wave of hype. Many see it as the next step in the evolution of IT: the creation of true artificial intelligence. Most people cannot explain how it works, and that scares them. A crucial question remains unanswered: why does AI-generated text sound so human?
What if I told you that even experts in machine learning, the very people who designed and developed the original software, do not fully understand how it works? They, too, cannot explain how the text generated by a cold machine can look so natural. Recently, OpenAI introduced the next level of their product, one capable of expressing human-like emotions. How is that possible?
I am here to offer you an explanation, but you may not like it. The answer lies in the fundamental nature of human communication itself. The machine can talk like a human because it does exactly what most people do: it fakes it.
Yes, what else did you expect? The latest innovation in computer software simulates the simplest and most rudimentary feature of human intelligence: the ability to pretend to be smart by imitating others. A GPT-like program requires an LLM (Large Language Model). Such models are trained on vast amounts of text. "Trained" means that the machine analyzes sentences written by people to identify patterns and statistical relationships between words and phrases. In other words, it memorizes countless examples of language use without understanding the meaning of what is written. When it is time to say something, it simply picks what others would say in that situation. Sound familiar?
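The pattern-memorization idea above can be made concrete with a toy sketch. Real LLMs use neural networks trained on billions of sentences; the tiny bigram model below only counts which word follows which in a handful of made-up sentences, then "speaks" by picking continuations it has seen before. The corpus and function names are my own illustration, not anything from an actual GPT implementation.

```python
import random
from collections import defaultdict

# Toy training corpus: a few sentences "written by people".
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# "Training": record which words were observed to follow which.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start, length=6, seed=0):
    """Pick each next word the way others have used it; no meaning involved."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

Every word pair the sketch emits occurred somewhere in its training text; it never invents a transition it has not seen. That is the statistical mimicry the paragraph describes, stripped to its bare minimum.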
How about the emotional aspect of these text generators? Can a computer program feel? No, it cannot. We are fooled by the fluency and coherence of AI-generated text, attributing it to an underlying cognitive ability that simply doesn't exist. We project our own understanding and emotions onto the machine, mistaking its ability to mimic for genuine comprehension and feeling. However, most people fake their thinking processes and emotions too. We are teaching computers everything we know. And now we have achieved the next step: we taught them how to pretend.
Humans learn language by listening to others, absorbing and processing spoken and written text, building vocabulary, grammar, and even mannerisms. However, most people are too lazy to analyze this information beyond the minimum necessary to participate in everyday conversations. People construct narratives from a large set of disconnected facts and statements, which sometimes even contradict each other.
In this sense, generative AI mirrors the most primitive level of human communication. Not surprisingly, a new type of insult has emerged: the phrase "you talk like ChatGPT" is not a compliment. We have little respect for pretentious behavior, even though most of us engage in it occasionally. That is one of the reasons we worry about the potential abuse of new technologies. I can assure you that AI will be abused no less than many other inventions before it. However, we shouldn't worry too much about that. Indeed, generative AI can flood the information space with lousy writing and misinformation much faster than humans ever did. It will take this job away from bad writers, dishonest scientists, and corrupt journalists. So what? Maybe that's a good thing. Maybe we'll finally wake up and realize we must stop consuming this junk.
The potential ability of AI to deceive and manipulate us depends entirely on our gullibility and intellectual laziness. After all, despite having the letter "I" in its name, this software is not intelligent; it cannot invent anything. It still depends largely on human creativity.
The less people know, the easier it is for them to believe an incomplete or ridiculous explanation. Proponents of new technologies believe that AI systems can generate text that is not only coherent but also creative and engaging. They use that observation as proof that AI is learning to understand the meaning behind the words it uses and that it is only a matter of time before it surpasses human abilities in this area. After all, we still do not know how our mind functions. We tend to think of two black boxes producing similar results as equal.
That may be the case. Homo sapiens developed true intellectual capabilities only after humans learned to speak. However, language is only the second level of development (ML engineers skipped the first one altogether). Artificial systems have a long way to go before they evolve to the level of human intelligence. There are six primary levels the silicon brain has to pass through: ideation, language, practice, incentive, memetics, and impulses.
The ideation level (level 1) is the ability to create new ideas and manage existing ones. Language is an essential tool for building and maintaining knowledge because we humans use it to express and exchange ideas. In IT terminology, ideas are binary objects formed at a higher level of abstraction. The domain of ideas is much larger than our vocabulary: the English language uses 26 letters to construct more than 200,000 words, and those words, in turn, express an uncountable number of human ideas. That is food for thought.
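The combinatorial gap the paragraph gestures at can be checked with back-of-the-envelope arithmetic. The numbers below are illustrative: the 200,000-word figure comes from the text, and the five-letter and ten-word cutoffs are my own arbitrary choices for the sketch.

```python
letters = 26
english_words_estimate = 200_000  # figure cited in the text

# 26 letters already generate far more five-letter strings
# than English has words of any length.
possible_5_letter_strings = letters ** 5  # 26^5 = 11,881,376

# Words combine into sentences (and sentences into ideas) in numbers
# that dwarf any dictionary: even naive ten-word sequences explode.
possible_10_word_sequences = english_words_estimate ** 10  # ~1e53

print(possible_5_letter_strings)
print(possible_10_word_sequences > possible_5_letter_strings)
```

Each level of abstraction multiplies the space of possibilities, which is why the domain of ideas vastly exceeds the vocabulary used to express them.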
However, ideas are useless unless there is a way to try them out. Our ability to act on our ideas while using experience to learn and adjust our thinking (level 3) has not been replicated by computer systems, at least not yet. Some companies are working on AI robots: machines equipped with the tools to put ideas into practice, arms and legs. Except they do not have ideas of their own. Given enough time, smart people may devise a way to teach computers to use their knowledge to generate new ideas and test them. Once they reach this milestone, they will realize that their robots are still missing a fundamental human trait: incentive (level 4). We rarely bother to think about how our feelings affect our thinking. Computers do not feel pain, are not afraid to die, and cannot experience the ecstasy of inventing something new and good. Software is not afraid to make a mistake. Without this trait, machines cannot prioritize their actions.
By reaching level four of this evolution, robots will be capable of abstract thinking and will have the power to act on ideas at their own discretion, without the constraint of necessity or fate. In other words, robots will have free will. Still, to match the human race, these robots need a way to spread and discuss their ideas and share their experience. Digital data transfer will not work because it is too precise; we need to introduce an element of chaos into this data exchange to create "idea mutations" (level 5). At the final stage, this new society of artificial humans will need impulses (level 6). My book "Vertical Progress" elaborately describes the nature of human impulses.
Technological progress is accelerating parabolically, which means the future is much closer to us than our past. We may witness computer systems evolving through all six levels relatively quickly. We are already struggling to keep up with changes that are beginning to tear down some of our concepts and norms. As we approach the singularity, the destruction will reach the very foundations of society, and the acceleration of progress will reach unimaginable proportions. Yes, we haven't seen anything yet.
We have little time to prepare for what is coming. Human civilization progresses toward a better society by investing in better and faster ways to perform routine work. Text-generating software is not an enemy but a sign, a message. We must face the fact that we can no longer afford to be ignorant. We need to start by questioning everything we do. What are we doing? How, and, most importantly, why?