The integration of AI programs in the classroom: Benefits and challenges
Video production teacher Mark Middlebrook began integrating AI programs into his classroom last year, seeing them as an effective way to write and revise scripts. This year, he has chosen to continue using AI for other assignments and for technical support.
“I assigned everyone to put ChatGPT on their phone because I want them to be able to access any technical data,” Middlebrook said. “I have so many different places to be in the classroom so that’s one of the first starts. But actually, the true start was last year when we had to have a script. We had a whole joint shoot where everyone was going to shoot the same script and so we created one on ChatGPT.”
As advancements in AI continue, software like ChatGPT, Scite, and Scholarly is becoming increasingly popular. The generative AI programs on the rise are classified as large language models, or chatbots, which can rapidly process information and use it to create synthetic content. The immediate results these programs generate make them appealing to many. However, they can also spread misinformation. Large language models have been shown to make factual errors, demonstrating that the software is not completely reliable and that users cannot depend on AI to consistently provide accurate information.
The dangers of misinformation from AI: A cautionary tale
One example of misinformation from AI occurred in June 2023, when New York attorney Steven A. Schwartz was fined for citing fake court cases that he found using ChatGPT. Schwartz was representing his client Robert Mata, who was injured by a metal serving cart on a 2019 Avianca flight. When the airline sought to dismiss the case, Schwartz cited several cases, including Shaboon v. Egypt Air and Varghese v. China Southern Airlines, that had been fabricated by AI. After the fabricated citations were discovered, Schwartz admitted to using ChatGPT to conduct his legal research.
“I did not comprehend that ChatGPT could fabricate cases,” Schwartz told Judge Castel according to a New York Times article. “I continued to be duped by ChatGPT. It’s embarrassing.”
Consumer concerns about AI and the spread of misinformation
According to a survey conducted by Forbes Advisor, 76% of consumers are worried about misinformation from AI services like ChatGPT, Google Bard, and Bing Chat. The survey also showed that 54% of the people surveyed believe that they can tell whether content was generated authentically or by a chatbot.
The rise in AI usage among students: A survey at Daniel Pearl Magnet High School
The usage of AI among students at Daniel Pearl Magnet High School is on the rise. According to a survey of 15 DPMHS students, 73% of them use ChatGPT. Only 13% of students said they used AI for homework help and 66% of respondents believe that AI contributed to the spread of misinformation.
“I think AI contributed to misinformation because it lacks the emotional aspect,” one respondent said. “It can easily spit things out without really knowing what it’s doing.”
Students like sophomore Jordan Viviano use AI programs like Dezco and ChatGPT for personal projects such as artwork. However, Viviano also turns to ChatGPT when struggling with schoolwork. Despite the risks, students like Viviano prefer to use ChatGPT because of how fast it is.
“I just do quick fact-checks,” Viviano said. “I know ChatGPT isn’t 100% accurate all the time, so I don’t just take what it says immediately and run with it. What I do is I take it and I’ll just do some quick research to be like, is this really true?”
Combating misinformation from AI: The role of organizations like the News Literacy Project
To combat this rise of misinformation caused by AI, organizations like the News Literacy Project (NLP) are working to promote AI awareness among students and educators. A section of the organization's website is dedicated to news literacy in the age of AI, with links to stories on how AI affects education, government and society, as well as general information on AI and what to look out for to avoid misinformation.
“We think everyone just needs to understand that this technology is here and it’s already changing the information landscape,” said NLP’s Senior Director of Media Relations Christina Veiga. “There may be some benefits to it, and there may be some drawbacks. We need to understand what those are, and take the time to practice news literacy skills like checking multiple sources and pausing before we share just to make sure that we are consuming information that’s credible.”