ARTIFICIAL INTELLIGENCE IN SCHOOLS: WHERE’S THE HUMANITY?
SATURDAY 2 NOVEMBER, 10:00—11:30, FROBISHER
Chair: Harley Richardson
Several years ago the English Department at my school taught the Year 9s a short story by E. M. Forster called ‘The Machine Stops’. In this rather bleak vision of humanity, Forster created a world where all humans live underground in separate, isolated pods. They are entertained by fifteen-minute lectures drip-fed to them by the Machine. Then, as the title suggests, the Machine stops; the two main characters, a mother and her son, are forced to face up to what is left of them when there is no longer a machine to provide meaning. Suppose, in an education system that relies on AI, the machine stops. And even if it doesn’t stop, what do people really know without the aid of machines?
I thought of this short story during this Battle of Ideas debate. I am reading more and more about how AI will eventually take a prominent place in teaching and learning within schools. The role of the teacher will be reduced to standing over teenagers, reminding them to concentrate on a screen. And what will these machine resources be like? I read one description of a child struggling to conjugate verbs, for instance, completing quiz question after quiz question via a computer until he has mastered it. The idea, presumably, is that the machine would identify the gaps in the child’s learning and plug those weaknesses with pertinent tasks or questions. I am promised that this technology is coming – a matter of when rather than if.
Donald Clark described AI as an idiot savant: capable of efficiency in one particular area. AI is competence without comprehension. So what can it do competently in the classroom? According to Clark, it is ridiculous that schools spend twenty minutes a day on registers when face recognition technology could do the job instead. There is considerable potential for AI to detect learning conditions such as autism. Moreover, he argued, we are all biased, and computers can therefore be used to make decisions in a way that eradicates bias.
On the ‘dumb stuff’, Carla Aerts agreed: let AI do the things that are routine and simply take up teachers’ time. It also has a role in special educational needs, helping to identify an individual’s specific learning requirements. The operative word is help, and the impression from Aerts, more than from Clark, was that AI should be employed in a way that assists but does not direct educational decisions. In any case, she pointed out, where is the humanity in education now? This point resonated, since the pressure to get students through testing can make teachers rather robotic.
Highly sensitive data
Jen Persson, a campaigner on the fair use of data, opened with an anecdote straight out of a science fiction nightmare. Using a school computer, a seventeen-year-old girl had been writing a highly personal letter to her mother. The school’s safeguarding software detected the fact that the girl was telling her mother she had been raped. In the event, the school knew about the allegation before the girl’s mother did. The problem with technology that records student activity is that it will inevitably store highly sensitive data. What should it do with such data, and are schools aware of the implications? Aside from this ethical problem there is a pedagogical one: who gets to decide what students should know, courtesy of new learning technologies? Of course, this is a discussion that occurs even without AI, but AI, depending on who designs it, might take curriculum decisions out of the hands of educators. In the meantime, education’s preoccupation with mental well-being means that computers can be used to survey mental health quickly. Some audience members were unimpressed with the idea of any technology fostering mental well-being initiatives. Perhaps there is just too much focus on mental health.
In an informative opening, Gareth Sturdy said that what is called AI is actually machine learning – machine learning dressed up to make a buck. The Turing Test was an attempt to investigate whether a machine could pass itself off as human in a game. John Searle took this further, suggesting that a machine could imitate a Chinese speaker simply by following a set of rules, without actually understanding Chinese: it is possible for machines to manipulate symbols without understanding them. Machines, at least in the twentieth century, are not capable of strong AI. Sturdy’s argument challenged any belief that AI is coming for teachers’ jobs. Machines’ presence will be limited to supportive roles, and as for face recognition, Sturdy pointed out that teachers should take the registers themselves, because they can engage with the body language of arriving students in a way that face recognition technology cannot.
I have been suspicious of the idea of AI, or machine learning, as a successful substitute for a teacher. If I want to understand King Lear, what can a machine do to address my confusions or extend my understanding? In other words, how can a machine help a learner access higher forms of learning? It may transpire that children would prefer a human who can understand their problems on a level that goes beyond posing quiz questions. One audience member drew out the difference between competence and understanding: a learner might answer questions competently, but that does not mean the learner has a comprehensive understanding.
The distinction between machine learning and understanding is an important one. If educators succumb to a series of ‘AI’ machines, expecting them to make a huge difference to results, they might be surprised. That said, the technology could facilitate the processes of education in a way that saves time and labour. As the rugby final literally kicked off in Japan, and the white privilege debate metaphorically kicked off nearby, this was one of the more cerebral discussions at the Battle of Ideas.