
The Illusion of Consciousness and Learning in AI: Why Educators Must Understand the Difference




Introduction

Artificial Intelligence (AI) has rapidly infiltrated our classrooms, our lecture theatres, and even our marking piles. It promises personalisation, speed, and efficiency; increasingly, some even claim that AI is not merely learning but becoming conscious. Yet beneath these bold assertions lies a profound misunderstanding of what it means to be conscious and what it means to learn. As educators, we must tread carefully. The misunderstanding is not harmless: it risks changing the very way we think about learning, teaching, and what it means to be human.

In this article, we unpack these ideas and offer some guidance for teachers and trainers navigating this rapidly evolving field.


What is Consciousness?

Consciousness, in its simplest form, is often defined as subjective experience: the feeling of what it is like to experience the world (Nagel, 1974). When we touch a hot stove, we don't just react mechanically; we feel pain. We are aware of our own mental states, our desires, and our intentions.

Current AI systems, however sophisticated they may seem, lack this phenomenal consciousness (Chalmers, 1995). They process information, produce responses, and even appear to reflect, but they do not feel anything. There is no internal experience inside ChatGPT or Google DeepMind's AlphaGo, no awareness behind the words or moves. As Searle (1980) argued in his famous "Chinese Room" thought experiment, the appearance of understanding is not the same as genuine understanding.

In short, AI can simulate conversation about consciousness, but it does not possess it.


What is Learning?

Learning, at its heart, involves more than just pattern recognition. As educators, we know that learning means changing the structure of the mind, changing what a person can do, think, understand, or feel (Illeris, 2007). It requires context, meaning-making, connection to prior knowledge, and often a social dimension (Vygotsky, 1978).

When a child learns to read, they do not merely repeat letter combinations; they link symbols to sounds, sounds to meaning, and meaning to emotions and social worlds. Learning transforms the learner.

AI systems, by contrast, are extraordinary at statistical pattern matching (Marcus, 2022). They process enormous datasets, adjusting internal weights to minimise errors in prediction. But there is no meaning attached to these adjustments. AI models like GPT-4 are often described as "stochastic parrots" (Bender et al., 2021), mimicking without understanding.

Thus, while AI may appear to "learn," what it actually does is optimise parameters based on feedback, a fundamentally different process from human learning.
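To make that contrast concrete, consider the sketch below. It is a toy example in Python, not the training code of GPT-4 or any real system, but the principle is the one described above: a loop that nudges numeric weights to shrink prediction error.

```python
# A minimal, illustrative sketch of machine "learning": gradient descent
# on a toy linear model. The toy data follow y = 2x + 1, so training
# should drive the weights toward w = 2, b = 1.

data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # (input, target) pairs

w, b = 0.0, 0.0        # the model's entire "knowledge": two numbers
learning_rate = 0.01

for epoch in range(2000):
    for x, y in data:
        prediction = w * x + b
        error = prediction - y
        # Nudge each weight in the direction that reduces squared error.
        # Nothing here is "about" anything; it is arithmetic all the way down.
        w -= learning_rate * 2 * error * x
        b -= learning_rate * 2 * error

print(f"w = {w:.2f}, b = {b:.2f}")  # approaches w = 2.00, b = 1.00
```

The system ends up with the "right" numbers, yet it has no concept of lines, quantities, or correctness. Scaled up to billions of weights, this same optimisation loop underlies the impressive outputs of modern AI.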



Why Does This Matter for Educators?

If we mistake AI's capabilities for genuine consciousness and learning, we risk profound consequences for the education system. Three dangers are particularly pressing:

  1. Over-reliance on AI feedback. If we believe AI systems understand student work, we may trust their feedback uncritically. Yet current AI tools often lack the nuance, empathy, and contextual awareness needed for effective educational feedback (Wiliam, 2011). A system that corrects grammar but misses the student's underlying confusion about meaning has failed as a teacher.

  2. Erosion of human-centred education. Education is not just about information transfer. It is about cultivating critical thinking, empathy, and resilience: qualities rooted in conscious, social beings. By overvaluing AI tools, we risk dehumanising the learning experience.

  3. Misconceptions about what it means to learn. If "learning" becomes synonymous with "optimising outputs," we may devalue slow, messy, but ultimately transformative human learning. Not everything that can be measured is worth measuring; not everything that cannot be measured is worthless.


Understanding these distinctions allows teachers to use AI wisely, as a tool rather than a replacement. AI can assist with tasks like drafting ideas, offering alternative explanations, or suggesting resources. But the core work of helping students become who they are, developing their unique voices, identities, and capabilities, remains irreplaceably human.



Conclusion

Artificial Intelligence is an extraordinary achievement of human ingenuity. But it is neither conscious nor truly learning. It gives the illusion of these things because we, as meaning-makers, instinctively project agency and understanding onto anything that behaves intelligently.

As teachers, our task is not simply to adopt every new technology but to question: What does this tool do? What does it not do? How might it help, or harm, the learning of my students?

By keeping these questions alive, we honour the true nature of education: a deeply human, deeply conscious, deeply transformative endeavour.



References

  • Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21), 610–623.

  • Chalmers, D. J. (1995). "Facing up to the Problem of Consciousness." Journal of Consciousness Studies, 2(3), 200–219.

  • Illeris, K. (2007). How We Learn: Learning and Non-Learning in School and Beyond. Routledge.

  • Marcus, G. (2022). "Deep Learning Is Hitting a Wall." Nautilus.

  • Nagel, T. (1974). "What Is It Like to Be a Bat?" The Philosophical Review, 83(4), 435–450.

  • Searle, J. R. (1980). "Minds, Brains, and Programs." Behavioral and Brain Sciences, 3(3), 417–424.

  • Vygotsky, L. S. (1978). Mind in Society: The Development of Higher Psychological Processes. Harvard University Press.

  • Wiliam, D. (2011). Embedded Formative Assessment. Solution Tree Press.


