By Stuart Jenner
Ready or not, here it comes: AI (Artificial Intelligence) is barreling towards us. What are the implications for children, parents, and educators?
There are three things worth noting. (These apply to any district, but there is particular pressure on districts with below-average results to “close gaps.”) Districts have often grabbed a shiny new tool or fad, tried to use it, and wound up worse off than before they started. So, let’s be careful. AI is not a shortcut.
First implication: learning to evaluate information and think critically is more important than ever.
This recent article in The Free Press discusses the extreme challenge of discerning truth. The author mentions the problems posed by artificial images, for example, a “fake” of the Pope wearing a fancy coat. It also describes how an AI engine fabricated an account of a professor assaulting students on a trip to Alaska. The story was completely made up; the professor had never even been to Alaska.
Critical thinking involves examining a source’s claims, conclusions, and evidence, and looking for holes, falsehoods, bias, contradictions, known errors, and other reasons to be cautious about relying on it.
AI chat results have an aura of authenticity that makes them very easy to accept at face value. But the results are only as good as the sources the AI tools draw from. How often do we check and recheck sources? How do we know a site was not created by foreign adversaries trying to sow turmoil and dissension? Just because a story generated by an AI engine cites sources doesn’t mean the sources are valid!
Thinking critically is very important, and it is something schools MUST teach, parents MUST insist on, and students MUST master.
Second: ethics.
The Oxford Dictionary defines ethics as “moral principles that govern a person’s behavior or the conducting of an activity.” Who determines the moral principles that govern AI choices? (And there are choices.) The Seattle Times ran an outstanding article on April 17 that originally appeared in The New York Times. The authors outline how Microsoft and Google are choosing speed over caution in rolling out their AI technologies. Microsoft, to its credit, had an ethics review board, but upper management chose to ignore the board’s comments! So what good is it to have a review board?
A few years ago, Google fired two people who had raised concerns about the large language models Google uses. More recently, a team led by Jen Gennai, the director of Google’s Responsible Innovation group, documented concerns with chatbots: they could produce false information, hurt users who become emotionally attached to them, and enable “tech-facilitated violence” through mass harassment online.
In March, two reviewers from Gennai’s team submitted their risk evaluation of Bard and recommended blocking its imminent release, according to two people familiar with the process. Despite safeguards, they believed the chatbot was not ready. Gennai changed that document: the people said she took out the recommendation and downplayed the severity of Bard’s risks.
This points to a fundamental challenge: most schools claim they do not want to preach a religion or moral code. But everyone has a starting point for values. What are the moral principles the schools teach? And specifically, are any moral principles being taught that relate to the use and oversight of AI?
If school districts do want to teach these principles, when and where does this occur? In a regular class, an ethics assembly, or an after-school ethics club that only a handful of students can attend? Civics would seem appropriate, but people are forming value systems far earlier than 11th grade.
Third: number sense and math skills.
Every innovation that touches math education has downsides. There is a strong tendency to see innovation as a magic answer to woeful math skills. Ironically, people with the most resources sometimes wind up with skills regression; early users of calculators, for example, ended up with poor mental math skills.
In researching this article, I found several stories on AI and math. One of the best critiques appeared in the Wall Street Journal in February 2023. The writer shows how ChatGPT handles math word problems poorly.
Excerpt:
While the bot gets many basic arithmetic questions correct, it stumbles when those questions are written in natural language. For example, ask ChatGPT, “If a banana weighs 0.5 lbs and I have 7 lbs of bananas and nine oranges, how many pieces of fruit do I have?” The bot’s quick reply: “You have 16 pieces of fruit, seven bananas, and nine oranges.”
The correct total is 23 pieces: if each banana weighs 0.5 pounds, then 7 pounds of bananas is 14 bananas, and 14 bananas plus nine oranges equals 23 pieces of fruit.
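For what it’s worth, the arithmetic is simple enough to check by hand or with a few lines of Python. Here is a minimal sketch; the variable names are mine, purely for illustration:

# Each banana weighs 0.5 lbs; we have 7 lbs of bananas plus nine oranges.
banana_weight_lbs = 0.5
total_banana_weight_lbs = 7
oranges = 9

bananas = int(total_banana_weight_lbs / banana_weight_lbs)  # 7 / 0.5 = 14 bananas
total_fruit = bananas + oranges                             # 14 + 9 = 23 pieces

print(f"Bananas: {bananas}, Oranges: {oranges}, Total fruit: {total_fruit}")
# Prints: Bananas: 14, Oranges: 9, Total fruit: 23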
I was curious to see whether the Bing implementation of ChatGPT gave a different answer two months after the story was written. Nope. Same wrong answer … and more! In a way this is humorous, but it would be tragic if someone took these kinds of results and used them to design a bridge or an airplane’s wings.
See this screenshot of the answer Bing AI Chat wrote on April 20, 2023. (Yes, the illogical nature gives me a headache too.)
Number of fruit pieces: Bing AI Chat screenshot, April 20, 2023
To conclude, AI is a tool, and educators, parents, and students must be aware of its shortcomings. Critical thinking, ethics, and number sense are crucial foundations for understanding the tool’s limits and potential problems.