Many people today use Artificial Intelligence (AI) tools like ChatGPT or Google Gemini daily. These tools can automate certain tasks to a degree, like pattern recognition or problem solving. AI is also commonly used for writing. AIs like ChatGPT are Large Language Models (LLMs) that predict text based on a prompt. This can be used, for example, to write an essay for your English class or do your math homework. This is the main concern for teachers, students, and parents alike – so, how is AI affecting the school environment?
When asked how AI can be an educational tool, Wantagh High School English teacher Mr. Garey responded, “I think that you can use ChatGPT to generate informative text that has mistakes in it. It helps students to correct its mistakes.” This got me thinking about how inaccurate AIs are with writing, which is a good thing for schools because it makes it easier to catch people who are using them to cheat. It would also be good for students, as catching the grammatical mistakes an AI makes can improve their own skills.
However, this practice of correcting AI’s mistakes may not last for long. It is well known that AI is improving in accuracy, but many may not realize just how fast it is improving. This September, OpenAI (the company behind ChatGPT) unveiled its new OpenAI o1 model, which is designed to spend more time reasoning through a problem than the typical ChatGPT response. According to OpenAI.com, the new model boasts a great improvement in accuracy across a whole range of fields, with the most staggering improvement being in math, where it reached a 94.8% accuracy rate compared to ChatGPT’s 60.3%.
If this trend continues as it has in the past, then it is entirely possible that AI can and will be used by students to cheat on their work. It may become impossible to distinguish AI-generated text from human-written text just by reading it. More advanced tools will have to be created to detect AI in schoolwork, if that is even possible. In fact, if things get bad enough, governments might have to step in and regulate these AI companies.