Can AI think like humans, or is it merely mimicking intelligence?

This blog post delves deeply into whether AI is truly a being capable of ‘thinking’ like humans, or if it is just a machine that imitates intelligence.


What is AI?

It’s easy to see that AI stands for Artificial Intelligence. AI is usually understood as a system that imitates intelligent human behavior and acts accordingly. For example, AlphaGo, which defeated Lee Sedol at Go, or the systems programmed into self-driving cars: any machine that can mimic human intelligence and translate it into action is called AI. However, I believe we must reinterpret AI based on its literal meaning. AI simply means artificially developed intelligence. ‘Artificial’ implies an object made by humanity, whether intentionally or not. ‘Intelligence’, however, is an extremely difficult ability to define. Scientists interpret intelligence in many different ways, which makes defining it for the general public even harder. I therefore want to draw on Alex Wissner-Gross’s work on intelligence.


Intelligence: A Capability Distinct from Thought

Alex Wissner-Gross suggests that if we could leave only a single sentence to help future descendants reconstruct or understand artificial intelligence, it should be: “Intelligence is a physical process that maximizes the freedom of future actions and avoids constraints on its own future.” He then expressed this as the following formula:

F = T∇Sτ

This is a formula for intelligent behavior. Here F is a force, the drive we read as intelligence; T is a constant that sets its strength; S denotes the entropy, or diversity, of achievable futures; and τ is the time horizon over which those futures are considered. In other words, the force pushes a system toward states that keep the most futures open up to time τ. At first glance the formula seems absurd, yet it drives behaviors we commonly associate with intelligence. Give a system placed in a specific situation this objective, and it will balance a rod without any instructions or play Pong on its own. It also enables systems to grow their assets in simulated stock trading or to build well-connected social networks. What humans consider intellectual actions, such as social cooperation, can be induced by this single formula.
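The flavor of this idea can be sketched in a few lines of code. The toy below is my own illustration, not Wissner-Gross’s actual causal-entropic-force computation: an agent on a grid stands in for the system, the count of distinct states reachable within a horizon stands in for the entropy term S, and the agent simply picks the move that keeps the most futures open.

```python
# Toy illustration of "maximize the diversity of achievable futures".
# NOT Wissner-Gross's actual computation: we replace the entropy S with
# a simple count of distinct states reachable within a horizon tau.

SIZE = 7          # 7x7 grid world
TAU = 3           # planning horizon (steps into the future)
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]

def step(pos, move):
    x, y = pos[0] + move[0], pos[1] + move[1]
    # Walls constrain the future: out-of-bounds moves are blocked.
    if 0 <= x < SIZE and 0 <= y < SIZE:
        return (x, y)
    return pos

def reachable(pos, horizon):
    """Set of distinct states reachable within `horizon` steps."""
    frontier = {pos}
    seen = set(frontier)
    for _ in range(horizon):
        frontier = {step(p, m) for p in frontier for m in MOVES}
        seen |= frontier
    return seen

def best_move(pos):
    # Pick the action whose successor keeps the most futures open.
    return max(MOVES, key=lambda m: len(reachable(step(pos, m), TAU)))

pos = (0, 0)      # start in a corner, where futures are most constrained
for _ in range(5):
    pos = step(pos, best_move(pos))
print(pos)        # the agent drifts away from the walls toward open space
```

Nothing in the code mentions “avoid corners”, yet the agent moves away from them, because corners cut off possible futures. This is the sense in which the formula induces behavior that looks purposeful.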
However, a machine possessing intelligence and the act of thinking are separate matters. As defined above, intelligence is merely goal-directed: it works to avoid future constraints. Thinking is a higher-order concept that encompasses it, involving the pursuit of goals and the desire to predict the future. For example, when we observe other animals using tools or hunting in groups, we say they hunt intelligently, but it is difficult to regard them as thinking beings. Conversely, some individuals with intellectual disabilities demonstrate remarkable creativity in various areas despite incomplete development of their intellectual abilities. This suggests that intelligence is merely a tool used to achieve a purpose; possessing intelligence does not equate to thinking. Therefore, the moment an AI demonstrates that it thinks, the term “AI” itself must change: it would have transcended merely possessing intelligence and begun actually to think.


Is there a way to prove thinking?

Throughout history, humanity has developed AI while observing only one side of the coin. That side is the computed output AI outwardly displays: a system where inputting data A produces output B, giving an exact answer to a question. Consider one example. In Ken Goldberg’s TED Talk, you can see a robot installation called the “Telegarden,” a system that let anyone go online and control a garden robot to water plants or plant seeds. It was installed in the lobby of a museum in Austria. One could pose this question to those controlling it remotely: “Is the robot REAL?” Even if no robot existed, photos could be disseminated online to make people believe one was there. This mirrors Descartes’ epistemological problem, and AI raises the same kind of problem: whether AI is merely a system that maps input data to output data cannot be settled by looking at the outputs alone. In other words, we cannot help but question whether AI thinks.
So, can we not see the other side of the coin? To this question I want to boldly answer YES. In a TED talk by Blaise Agüera y Arcas, he posed a question about creativity using the following equation:

Y = W(*)X

W represents the brain’s complex neural network, X is the data of objects perceived through the five senses, and (*) indicates how the neural network operates when the data X is fed in. Finally, Y is the data we ultimately perceive and output from X. Agüera y Arcas suggests that the neural map W can be approximated from known pairs of X and Y together with the operation (*); once W is known, we can derive the result Y for a new input X. Through this, we gain some insight into creativity and thought. Still, one wonders whether the resulting Y is truly complete. In the talk, when the input ‘dog’ was fed in as X, the system drew a picture of a dog as Y. But if we asked humans to draw a dog, could they produce a picture as detailed and unmistakably recognizable as that one? Could the system draw a dog differently from others if asked to? In other words, it feels like nothing more than a collage of patterns derived from big data. But what if humanity perfectly deciphered W, the neural network? The machine could then derive Y from X, (*), and W just as humans do. Rather than relying solely on big data, it could develop its own W, like humans, and express Y in its own unique way. That would let humanity flip the coin and reveal the reverse side: creativity and thought.
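The two directions of the coin can be sketched with a toy example. This is my own drastic simplification, not the model from the talk: here W is just a 2×2 weight matrix, (*) is plain matrix-vector multiplication, “perception” runs the equation forward from X to Y, and “generation” flips it, using gradient descent on X until W(*)X matches a desired Y.

```python
# Toy sketch of Y = W(*)X and its reversal. Here (*) is plain
# matrix-vector multiplication -- a drastic simplification of a real
# neural network, used only to show the two directions of the equation.

# Perception: given weights W and stimulus X, compute the percept Y.
W = [[2.0, 1.0],
     [0.5, 3.0]]

def forward(W, X):
    return [sum(w * x for w, x in zip(row, X)) for row in W]

# Generation ("flipping the coin"): given W and a desired percept Y,
# adjust the input X by gradient descent until forward(W, X) matches Y.
def generate(W, Y, steps=2000, lr=0.05):
    X = [0.0, 0.0]                      # start from a blank "canvas"
    for _ in range(steps):
        err = [o - t for o, t in zip(forward(W, X), Y)]
        # Gradient of the squared error with respect to X is W^T @ err.
        for j in range(len(X)):
            X[j] -= lr * sum(W[i][j] * err[i] for i in range(len(err)))
    return X

target_Y = [5.0, 10.0]
X = generate(W, target_Y)
print([round(v, 3) for v in forward(W, X)])  # -> [5.0, 10.0]
```

Running a network backward from a desired output to an input is, in spirit, how the dog pictures in the talk were produced, though real systems optimize millions of pixels through many nonlinear layers rather than two numbers through one matrix.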
So when will we perfectly understand the nervous system, advance neuroscience, and fully interpret the collection of neurons? On this, I’d like to quote Dijkstra: “The question of whether Machines can Think is about as relevant as the question of whether submarines can swim.” It took humanity thousands of years after building ships and sailing the seas to finally create submarines and begin exploring the previously unknown depths of the ocean. AI is currently in the process of building ships and navigating the seas. Therefore, I have no doubt that humanity will one day interpret the unknown realm of thought and create machines that think.


About the author

Writer

I'm a "Cat Detective": I help reunite lost cats with their families.
I recharge over a cup of café latte, enjoy walking and traveling, and expand my thoughts through writing. By observing the world closely and following my intellectual curiosity as a blog writer, I hope my words can offer help and comfort to others.