The Future: Incorporation of AI in classrooms will facilitate innovation
(Isabella Lee/Illustrations Director)
By Jeslyn Wang
May 4, 2023 9:09 p.m.
From the internet to the first portable laptop, each revolutionary technology seems almost trivial compared with its successors decades later. The latest emergence of artificial intelligence is undoubtedly a testament to this notion.
In mere seconds, AI-based language processing programs can construct graduate-level essays and code entire websites. OpenAI’s model, GPT-4, even scored in the 88th percentile on the LSAT and in the 90th percentile on the Uniform Bar Examination. These tools are programmed to receive any prompt and parse a vast range of information to generate an output with ease.
The budding field of AI research has only recently exploded in popularity following the release of ChatGPT in late November 2022. Reaching over 100 million users by January, the language processing tool has opened up an unprecedented level of both potential and controversy.
While proponents celebrate AI’s efficiency and aid, critics condemn its dangers and potential abuse – particularly in the academic realm.
Advancements in AI are unpredictable, but if the technology is used with clear intentionality and a genuine foundational understanding, its power outweighs the negative implications and holds revolutionary promise for the future of education.
“AI, and its advancements, has the ability to truly tailor education to the individual, unlike most pedagogical settings,” said Cole Hume, a third-year philosophy student who is president of UCLA’s AI Robotics Ethics Society.
At UCLA, which has a large undergraduate body, some classrooms can reach a ratio of one professor to upward of several hundred students. In the process, the quality of instruction suffers. However, with AI tools such as ChatGPT, students – especially those from underprivileged backgrounds who lack the opportunities and resources of private schooling – can receive a much more personalized learning experience to supplement their education.
Daniel Mendelevitch, a second-year data theory student who is a member of UCLA’s Data Science Union, said he thinks that more needs to be done to address misconceptions regarding ChatGPT’s uses.
“The technology is not brand new, it’s just that the societal reaction to it is,” said Mendelevitch.
He added that the true potential of AI tools lies in their ability to help students better understand material, not to copy essays or answer homework. When studying for a math final last quarter, Mendelevitch used ChatGPT to generate new practice problems by inputting questions from old midterms. With dense readings, he would ask the chatbot to identify the most important aspects, then focus on rereading those shortened summaries.
Mendelevitch’s use of AI underscores the possibility of embracing AI’s ability to distill complex concepts and give students a stronger academic starting point, rather than rejecting the technology out of fear of the unknown.
John Villasenor, the faculty co-director of the UCLA Institute for Technology, Law and Policy, said schools must facilitate and adopt new frameworks to integrate AI in the classroom.
Teachers and faculty must start by shifting the emphasis toward assignments that foster critical thinking and creativity, instead of work that forces students to regurgitate obsolete notes or memorized internet answers. After all, if a teacher is giving assignments that can be directly copied from ChatGPT, then these assignments reflect the teacher’s shortcomings, not the students’.
“The most effective professionals in whatever industry these students are entering will probably be the ones that can leverage them (AI) to just become more effective at their jobs,” said Hume. “When you just completely forfeit it from the classroom, you’re disabling that potential honing of a skill set for the student.”
One of the main arguments against AI integration concerns its ethical risks. Critics often argue that AI tools are susceptible to outputting false or harmful messages and are too primitive to have safeguards surrounding their data collection. For instance, in 2016, Microsoft developed a chatbot, “Tay,” which was programmed to tweet back to users in a human-friendly way. However, the chatbot was quickly taken offline after generating offensive tweets.
While Microsoft acknowledged the risks associated with a technology as new and advanced as an AI chatbot, the company also recognized its product’s faults and immediately corrected its misstep.
It’s important to recognize that AI tools still require a lot of fine-tuning and modification. However, companies such as OpenAI and Microsoft are actively seeking to increase transparency in terms of how they update and monitor their models.
These companies are working with experienced teams to refine their developments and are constantly updating their models’ built-in safeguards, such as filters that prevent algorithms from outputting racist or false messages. Whether it be through extensive lab testing, rigorous safety evaluations or direct expert feedback, these measures are only getting stronger with time.
All in all, the influx of AI technology does force the educational world to take a hard look in the mirror. Nevertheless, that is not to say these developments are negative.
To embrace technology is to embrace the beauty of change.
We must find a balance between tradition and innovation.