The Paradox of Artificial Intelligence: Smarter than a Grandmaster but More Ignorant than a Toddler

Bennie Mols

How do the learning processes of humans and machines differ? What impact will AI systems like ChatGPT have on future education? These were two of the core questions at the Hot Topic panel discussion, ‘The Paradox of Artificial Intelligence’, at the Heidelberg Laureate Forum (HLF).

From left to right: Bennie Mols (moderator), Celeste Kidd, Po-Shen Loh, Brigitte Röder, Eric Schulz. Image credits: HLFF / Buck

AI has surpassed top human players in board games like chess and Go, it can learn from far more data than any human on Earth, and it sometimes recognizes patterns that elude experts. AI generates and translates texts at lightning speed and is on the verge of matching top radiologists in diagnoses, operating much faster and without fatigue.

In other areas, however, even toddlers outperform AI. Think of common-sense knowledge, reasoning about cause and effect, making generalisations and learning from just a few examples. The paradox of today’s AI is that what is hard for humans seems to be easy for machines, and what is hard for machines seems to be easy for humans.

In the HLF Hot Topic panel discussion ‘The Paradox of Artificial Intelligence,’ organised on Tuesday, 24 September 2024, four leading scientists explored learning in humans and machines: AI researcher Eric Schulz, mathematician Po-Shen Loh and the psychologists Celeste Kidd and Brigitte Röder. How do the learning processes of humans and machines differ? What impact will AI systems like ChatGPT have on how we teach math and computer science? And how can we harness the advantages of intelligent machines without losing our own skills?

AI Rapidly Picks Up Abilities

The paradox of today’s AI stems from the fact that the human brain has an evolutionary biological origin and AI does not, said Celeste Kidd, Associate Professor of Psychology at the University of California, Berkeley: “It is likely that the type of intelligence that we have evolved [is] for taking care of helpless offspring. We need to be able to read the intentions of a child that is running towards a cliff [… or one] that’s not yet able to feed themselves [and] say that they are hungry.”

This ability to read intentions has enabled all sorts of other human abilities, like coordinating and sharing information, Kidd explained. “The way in which we do that is largely enabled by being able to read uncertainty off of one another. Humans know what they do not know and they are signalling that to each other […] I think that it is fair to say that the current generation of AI models are struggling with this type of metacognition and that is a major limitation.”

“The first time I was really surprised by one agent was actually GPT-2,” reacted Eric Schulz, director of the Institute of Human-Centered AI at Helmholtz Munich, Germany. “It could do psychology tasks that agents couldn’t do previously. And then GPT-3 came along and we did a battery of psychological tasks. It really surprised me that it can do some kind of model-based reinforcement learning. […] When we say that AI cannot do x, we should probably always put an asterisk and then say: yet. Machine learning moves at such a fast pace that when we point out something that an agent cannot yet do, it might soon be able to do it.”

Later on in the discussion, Schulz explained that the newest vision-based GPT models are even getting quite good at tasks like common-sense reasoning and intuitive physics, tasks at which previous GPT models were rather bad. Schulz: “We did some experiments with models that were asked to predict the stability of towers of various blocks. Will a tower fall down or not? These models were quite good at this, in fact almost as good as humans.”

Brigitte Röder and Eric Schulz. Image credits: HLFF / Buck

Humans Need Help

Professor of Biological Psychology and Neuropsychology Brigitte Röder from the University of Hamburg (Germany) explained some key differences between biological wetware and computer hardware. One of these is that human brain development is characterised by a number of anatomical and functional changes that do not occur in artificial neural networks.

Röder: “With a few exceptions, all neurons are generated by the end of the second trimester, but the number of connections between neurons rapidly increases after an infant is born. In the first one to two years, a large number of exuberant connections are formed […] Over the following years, the number of connections is dramatically reduced to about half of them.” This pruning process is controlled by experience – enter the important roles of parenting and education.

On the other hand, Röder pointed out, the communication between neurons involves biochemical processes, which are rather slow compared to the communication between neurons in artificial neural networks. However, “the human brain mitigates [its lack of speed and memory capacity] through abstractions and generalisation. In fact, [this] compels the brain to formulate rules and general principles applicable to countless instances.” Making abstractions and generalisations is still a huge challenge for AI.

Po-Shen Loh. Image credits: HLFF / Buck

Po-Shen Loh, Professor of Mathematics at Carnegie Mellon University (Pittsburgh, USA), started by saying: “We are in trouble. We are in big trouble […] The biggest thing that scares me is that GPT-o1, which was released this September, solves problems from the International Mathematical Olympiad better than I can.” Loh served as the national coach of the USA International Mathematical Olympiad team from 2013 to 2023, so he knows what he is talking about.

Then he referred back to the evolutionary history of the human brain: “Our brain is optimized for reproduction, but now we are competing against machines which are optimized for compute. […] We need something to help increase our human capabilities.”

Curiosity Drives Us

Unlike AI systems, children are naturally curious, exploring the world on their own while simultaneously learning within a social and cultural context. “Our curiosity is driven by knowing what we don’t know,” said Celeste Kidd. “Humans don’t just play chess and Go. We came up with the whole idea of games like chess and Go. No generative model that I know of is very good at generating games. If you give it instructions, it’ll generate a game. And if you play it, you might find it’s not very fun. That’s not AI’s fault. It just doesn’t have the same multimodal experiences as humans.”

Eric Schulz, however, is trying to inject curiosity into AI systems by rewarding them for exploring new directions. “Once you have a formal definition of curiosity that you can test experimentally, then you can run it on large language models. And then what we often see is that they show some of these forms of curiosity, at least in these tasks that psychologists have come up with.”
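Schulz did not spell out a formula in the discussion, but one common way to make “rewarding exploration” concrete in reinforcement learning research is an intrinsic novelty bonus: options the agent has tried less often earn a little extra reward on top of the ordinary payoff. The Python sketch below is only a toy illustration of that general idea, not the panel’s or Schulz’s actual method; the arm names, payoff probabilities and bonus weight are invented for the example.

```python
import random
from collections import defaultdict

# Toy illustration of "curiosity as an exploration bonus": an agent choosing
# among options gets an intrinsic reward for picking options it has tried
# less often, in addition to the ordinary (extrinsic) reward.
# All names and numbers below are made up for illustration.

ARMS = ["A", "B", "C", "D"]
TRUE_REWARD = {"A": 0.2, "B": 0.5, "C": 0.8, "D": 0.3}  # hidden payoff probabilities
BONUS_WEIGHT = 0.5  # how strongly novelty is rewarded

visit_counts = defaultdict(int)
value_estimates = defaultdict(float)

def choose_arm():
    # Pick the arm with the highest estimated value plus a novelty bonus:
    # rarely visited arms get a larger bonus, nudging the agent to explore.
    def score(arm):
        novelty = 1.0 / (1 + visit_counts[arm])
        return value_estimates[arm] + BONUS_WEIGHT * novelty
    return max(ARMS, key=score)

def update(arm, reward):
    # Running-average update of the chosen arm's estimated value.
    visit_counts[arm] += 1
    n = visit_counts[arm]
    value_estimates[arm] += (reward - value_estimates[arm]) / n

random.seed(0)
for step in range(200):
    arm = choose_arm()
    reward = 1.0 if random.random() < TRUE_REWARD[arm] else 0.0
    update(arm, reward)

print("visits:", dict(visit_counts))
print("estimates:", {a: round(v, 2) for a, v in value_estimates.items()})
```

In this sketch the novelty bonus shrinks as an option is visited more often, so early on the agent samples everything, and later it settles on the options that actually pay off.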

He went on to explain that GPT-3 would never ask you a question back, but the latest models, like GPT-4, do ask clarification questions: “Where are you currently? And then I have to say ‘Heidelberg.’ Oh, and what flavour of ice cream do you like? Then it tries to build a model of you, and it tries to recommend an ice cream shop nearby.”

“Curiosity is extremely important in order to detect something that you can’t predict,” Brigitte Röder remarked. “Contrary to AI, humans are active learners. Children seek information because they are curious.”

Celeste Kidd explained that the information children find is of a very different type from the data fed into AI systems. Kidd: “The single experience of a child with an apple is very different from Google Photos labeling an apple in an image. A child’s experience with an apple is sensory. They’re feeling the apple, they’re seeing the apple, it’s multi-dimensional. The data people are getting is much, much richer. And there are tons of correlations you can pick up on in order to leverage things like learning and generalization.”

AI Deeply Impacts Education

If AI already performs so well even on the very hard questions of the International Mathematical Olympiad, and if AI is already helping to write computer code, what are the implications of AI for teaching students mathematics and computer science?

Apart from being a professor of mathematics, Po-Shen Loh is also a social entrepreneur. He founded the math education websites Expii and LIVE. What is the impact of AI on education, according to him? “When you are trying to solve a problem, one of the most powerful things is to know that there exists a certain tool to help you,” Loh told the audience. “Even if you have to do a little fine-tuning to make sure that it is absolutely correct, it saves you so much time.”

He explained that nowadays everyone needs to learn “a critical skill called ‘how do I solve a problem I have never seen before?’” Loh: “This is a fundamentally new thing that needs to be taught in education. Unfortunately, a lot of people go through school where they are shown how to solve a problem, then they practice it, and when the exam comes, they are supposed to remember it. That is a good task for AI, right?”

Loh said that large language models have already fundamentally changed the way he hires software engineers. He will no longer hire software engineers who just know how to make stuff; they should also know how to use these new tools.

Röder added that because human memory is so limited compared to AI’s, students need to be taught where to get knowledge, and AI definitely plays a role there. “But in order to become an expert in a certain field you still need to acquire knowledge on your own. Also, for creativity you need a lot of knowledge.” In other words: if we as humans want to harness the advantages of intelligent machines without losing our own skills, we still need to have a lot of knowledge in our own brains.

Schulz said that he uses ChatGPT every day and it has changed his life a lot. “Basically I now have in my pocket an agent that can answer instantly the craziest questions that come to my mind. I can have an iterative process of knowledge construction. Even if it sometimes goes off the rails, I quite enjoy it and it has changed how I interact with information.” 

However, Schulz also noted that current large language models are optimised to tell users what they would like to hear, and that is not always desirable. Schulz: “For me true knowledge comes from people who disagree with me and tell me something new.”
