The Use Of AI In Education
When To Use Or Not To Use AI - That Is The $64 Million Question.
The whole rollout and focus on generative AI in education has made me very depressed, mostly because I believe many can’t see the forest for the trees. See this LinkedIn discussion for more on this and what other teachers have to say. Also see my personal blog, which has many articles about AI.
Generative AI has been foisted upon teachers and students alike with little forethought about its ramifications, and with little thought about the costs and benefits involved. AI was simply hyped with cherry-picked examples designed to amaze, then marketed to billions of users. It was declared “inevitable,” as if ordained, manna from heaven. I question this, especially regarding its use in education.
What’s clear is that there is little consensus or knowledge about where these LLMs (Large Language Models) work and where they fail. There is no simple, progressive line when it comes to evaluating them. For example, ChatGPT can solve many complex algebraic equations, yet it still finds it difficult to count or do simple addition. It can write code for a website where you can play tic-tac-toe, yet it can’t tell you the right move to make in this simple game.
So the waters are very muddy when it comes to educators deciding when to use generative AI and when not to. There is simply little guidance, and there are few rules to follow. It’s a jagged edge.
Adding to the problem is the complete failure of the myriad companies producing these AI tools to inform educators about what their products do very well and where they fail. There is simply nothing on offer from them except “User Guides” or some simple advice about prompting.
So how is a teacher to know whether they should use AI or not? For lesson planning, resource creation, student activities, evaluation? There simply isn’t any guidance out there, and everyone is just mucking about, making it up as they merrily go along.
I took some time to look in depth at Teach AI’s much-vaunted “toolkit” and was terribly underwhelmed. It suffers from the same verbal, terminological diarrhea so much of higher education ails from, and it offers only vague use-case examples and principles for when and when not to use AI in lessons.
My own bearishness about generative AI in education rests primarily on the issue of human agency. I will focus on that here, both for students and for teachers.
Of course there are other issues with the use of AI, primarily how inaccurate and unreliable it is. Then there are the problems of accountability, privacy, security, and output design. Further, there are the issues of business and cost, even environmental cost, and last but not least, the whole legal and ethical fandango that is copyright and how AI can be seen as one giant, super hi-tech plagiarising machine, as Chomsky has alluded.
The Oxford English Dictionary defines human agency as:
Generative AI is not just a casual tool that can “help” a student along the learning pathway. Rather, it helps students avoid learning by wholesale producing text and images and delivering a final product. Learning is about thinking for oneself, about the process, not the product (the answer). This was drilled into my head during my own days in Teachers’ College. Somehow, we’ve lost that narrative.
Many teachers suggest using ChatGPT to generate ideas for projects, for writing, for debates, for discussion and thought in class. However, by just giving students the prompts, aren’t we hopping, skipping and jumping over some very important steps students need to take in order to “learn”? Shouldn’t we always be asking, before using AI: “Does using the AI stop students from thinking for themselves?”
There are existing reference tools. Google is one. Books are another. They offer enough friction to keep learning happening. ChatGPT doesn’t. It just dumps out the full answers in very bookish language. (And that is another problem, especially for second language learners: generative AI produces high-level text, “inhuman” in a word, since it has been trained not on oral language but on trillions of globs of text.)
The key word in the above definition of “agency” is sui generis. Learning and our ability to act is a social process. We are social creatures. I see so many educators turning towards AI and away from a humanistic approach to education. Why not learn with and from each other rather than from the artificial texts of AI? I wonder what the world will be like in 100 years if we forsake listening to each other, having conversations, interacting with each other, making art and literature “ourselves,” and instead just produce AI-generated communication, art, and signs. I wonder what McLuhan would say about this new Gutenberg 2 galaxy?
This article alludes to how “soul gutting” AI is, even if it produces superior or equal results. We’d much rather get our information from a human.
This article asks many of the same questions. Are we losing our soul? Is our classroom going to become a silent one of prompts and reams of text generated? What if more is less when it comes to education?
I see so many teachers making resources with AI for their lessons. But what is lost when a teacher isn’t truly thinking through all the myriad “human” factors that go into decisions about materials creation for their own students? One size fits all? Is that the future of education, ChatGPT? No human intervention, no personalization?
I see so many students rushing, rushing madly, to just “complete their work.” It is getting worse with ChatGPT, Bard, et al. Again, I ask: have we lost the narrative, with too much focus on the product (answers) and not the process (learning)?
Finally, look at this photo, used in an article glorifying the new world and possibilities of technology. Ask yourself: what if, instead of talking to and experiencing bytes and bites and glowing images of digital dreams, those learners took off their gross goggles and started talking with each other, experiencing each other? Learning socially? Learning language together, not in a new-world technological terror chamber?
I ask us all to pull back the curtain and look nakedly at what AI actually is good at and good for, and what it isn’t. I think if you do, if you are brave, you’ll see what Dorothy saw and what Toto was barking at: the genie, those Stanford silver-spoon-fed fellows, all have their pants down and wear no clothes.