What’s the most useful phrase that a lackluster high school student can learn?
“You want fries with that?”
The social myth is that when all else fails, you get a job taking orders at the drive-up window at McDonald’s. So, when AI fabricates bogus information, will it be forced to take a job dispensing French fries?
Not at McDonald’s.
After three years of testing, McDonald’s cancelled its AI contract with IBM. The order-taking AI developed unpredictable incompetence: it put bacon in the ice cream, and an order for twenty-four chicken nuggets ballooned to 240, then 250, and finally 260 nuggets. It garbled voice orders and perplexed customers with its hallucinations. (AI fabrications and errors are called hallucinations.)
New employees make mistakes, but they are quickly trained to dispense burgers and fries with skill and aplomb. And it doesn’t take long. But AI couldn’t learn the ropes in three years, even with the grandees of IBM hovering over it.
In May, Google released AI Overviews to provide accurate summaries to Google inquiries. Instead, the AI fabricated medical advice worthy of Fred Flintstone: eat at least one rock a day for vitamins and minerals. Cooking advice? Keep the pizza ingredients sticking to the dough by adding glue. Most of us stopped eating glue when we graduated from 1st grade.
From 1st grade through high school, AI would seem to be a dog chasing its tail: too risky to succeed but too tempting to reject. In spite of its errors, hallucinations, and privacy violations, Big Tech is relentlessly hyping AI for education.
The hyping hipster-in-chief could well be Salman Khan, founder of the internationally known Khan Academy. Khan recently released his book, Brave New Words, with blathering blurbs from Big Tech notables like Bill Gates, Sam Altman, and Satya Nadella.
Khan skimps on data, footnotes, and even anecdotes. But he piles on endless modal and auxiliary verbs that denote uncertainty or possibility. Here is a sample: five modal verbs in four sentences:
“… AI might help students better engage with online exercises or videos, it might also help them when they are browsing Wikipedia, YouTube, or the New York Times website. It might reformulate the news article they are reading closer to their grade level, potentially leaving out age-inappropriate details. While students are researching a paper, it might help zero in on material that actually addresses the issue they are investigating. It might also Socratically [sic] help a student engage with what they are reading or even provide context that the student needs to better understand the content.” (Emphasis added.)
In short, AI will deliver a paradise on earth, or at least in our schools, according to Khan, if only we approach AI “in the spirit of educated bravery.” He never defines that bravery. In fact, Khan prefers a blindfold to bravery.
Khan is a 21st century Pollyanna: he denies that there are any problems with AI. And if there are problems with AI, well, everything else has problems, too.
“Likewise, generative AI can produce incorrect facts, but is it better or worse than what is already out there?”
Leave aside the oxymoron “incorrect facts.” Khan uses the fatuous argument of children: they did it too. Khan continues the powder puff of hyperbole on the next page:
“Similar concerns exist around the problem of AI misinformation. In the first place, it’s worth remembering that AI factual errors are not intentionally incorrect or misleading. Instead, they are more akin to someone misremembering something.”
AI intention is irrelevant: AI has no intention. It is simply wrong. Nor are the hallucinations mere “misremembering.” AI creates information that sounds authentic, deceptively convincing. Khan asked Khan Academy’s proprietary AI, Khanmigo, to design a lesson on global warming. He was impressed by the AI lesson plan. But he added:
“Afterward, I did my own work to make this information as accurate and determine whether I needed to refine it all.”
If you need to proofread the AI, what are you gaining? Even students are advised to use Khanmigo for the first draft of a “five paragraph essay.” Steve Braule, writing in TechLearning, advises that the AI should write the first draft:
“Then the students can be asked to edit the generated results and justify their edits in a brief essay.”
Braule claims that learning to edit is a higher learning skill. It’s not a higher learning skill when compared to writing. He further suggests using AI to create an event timeline in a history class:
“Multiple generative AI tools could create such a timeline. By allowing students to use an AI tool to create such a timeline, they will have to learn to craft an effective prompt.”
Writing a prompt and editing an AI-generated effort does not compare to the thinking skills required to identify a subject, research the information, write your own first draft to define your own thinking, and then revise. Every step in this process demands critical thinking, creativity in the writing, and finally editing. The student is thinking for herself—not responding to a machine-generated essay.
Our Pollyanna Khan is so optimistic about AI that he even recommends that families use AI to facilitate interaction:
“… of course, generative AI can be a fun and entertaining way for families to spend time together. … generative AI works prophylactically to strengthen a family’s bonds. Whether playing games, telling jokes, or having silly conversations, a family that uses large language models in a positive and constructive way can help strengthen its relationships and create lasting memories. …[in] the future, we may even have a version of this artificial intelligence at our dinner tables or on car rides to facilitate family interactions with games and conversations.”
According to Khan, we should rely on AI to protect our family relationships. But Khan still has a soft spot for human beings:
“There will always be a space for parents, as well as for living, breathing tutors, motivators, mentors, and teachers.”
How nice that there will be space for humans. But most of that space will be dominated by AI. Khanmigo will report to parents on their children’s activities. It will be a self-appointed psychologist, recording the child’s actions on the computer. According to Khan, Khanmigo will track the child’s mental health. Khanmigo would say:
“You’ve been looking at your ex-girlfriends [sic] wedding pictures on Instagram for a while now. How is this making you feel? Maybe we can talk about it.”
***
The AI promulgated by Salman Khan drives a wedge between parent and child, between teacher and student. It acts prophylactically, like a condom, to prevent authentic human relationships. It teaches students that good family relationships are generated by a machine, not by the hard work and emotional discord that seeks harmony. That hard work is how we learn what it means to be human.
Back to the front line of AI replacing humans at McDonald’s. IBM couldn’t teach AI to be responsive to human beings requesting cheeseburgers. But humans learn quickly how to dispense burgers and fries. And we already know how to connect with fellow humans.
The friendly chatter at the drive-up window, the pointless discussion of the weather, the helpful neighborhood directions, all remind us of the little things that make us part of a community.
But McDonald’s wants profits, not community. Soon, IBM or another company will design a different AI for handling hamburger orders at McDonald’s.
Let’s hope that AI will learn to say:
Hey! You want fries with that or not?
Learning and Teaching Creativity, by Dan Hunter, is available from
https://itascabooks.com/products/learning-and-teaching-creativity-you-can-only-imagine
Teachers receive a 20% discount.
The audiobook is available at
https://www.audible.com/pd/Learning-and-Teaching-Creativity-Audiobook/B0CK4BQDJP