
Editorial: As AI advances, instructors should adjust teaching methods to combat misuse

By Editorial Board

April 6, 2023 11:06 p.m.

Editor’s note: Editorials do not represent the views of the Daily Bruin as a whole. The board encourages readers to respond to our editorials at dailybruin.com/submit.

Artificial intelligence can be a great source of information, but we still need to use it well.

ChatGPT and other large language models are evidently here to stay. These tools are only the first of their kind to become widely available, but they have already had an earth-shattering effect in academic spaces.

From an instructor’s perspective, it might be easy to accuse students of laziness for using AI tools to complete assignments. From a student standpoint, however, it’s a sign that the assignments themselves are failing to address the right skills.

A fact-finding language algorithm – which is incapable of critical thinking – should not be able to produce a competent essay or project. In light of ChatGPT, instructors must place an even greater priority on genuine synthesis.

Banning ChatGPT in any given classroom would not solve the problem – rather, it would deny students experience with a potent emerging industry tool. It would be like prohibiting digital photography because it was “too easy” relative to film, or banning Google searches because they made it too simple to “cheat” relative to painstaking library visits. In neither case would it be okay to copy others’ work, but the world adjusted to those fundamental technological improvements.

We must adapt to AI technology and find ways to use it to support our thinking. Presuming students won’t use ChatGPT will only facilitate cheating. Allowing students to use it within limits will help them succeed.

According to Tuesday’s BruinPost message, “Individual instructors have the authority to establish course policies for the use of ChatGPT and other AI tools.”

The board urges instructors to adapt their assignments so that an AI can’t simply solve them. To build experience and trust, students must also be allowed to use AI tools within limits. We did the same with search engines, which were arguably a greater leap from library research than AI is from Google.

In one sociology course, the instructor acknowledged ChatGPT’s presence by assigning an essay critiquing a ChatGPT response. A history capstone course allowed limited use of ChatGPT, with appropriate citations, for its final essay. Various other courses encourage students to use AI resources to create essay outlines or brainstorm concepts.

A similar educational phenomenon was the introduction of the graphing calculator. For many years, students learned equations and properties of mathematical functions through derivation and manual graphing. The introduction of these devices changed the process of learning algebra, trigonometry and calculus forever. The “old-school” way of learning math could not survive.

However, educators worked to incorporate the graphing calculator into their teaching. With the help of calculators like the TI-84, math students can now grapple with advanced concepts at younger ages. If instructors across disciplines situate AI appropriately within their curricula, a similar advancement in learning could take place.

With the growing prevalence of ChatGPT among undergraduate students, there is truly no running away from AI as an educational tool. UCLA is well positioned to take the lead in this effort.

There is no better time than now for professors to reevaluate whether their assessments truly test a student’s comprehension and synthesis of material. Likewise, the university should be prepared to educate students on responsible and ethical ways to use ChatGPT and other AI tools, such as proper attribution and citation practices.

If a student plugs an essay prompt into ChatGPT and receives an A, that assignment likely did not test human synthesis, analysis or contextualization skills to begin with. If instructors really value critical thinking abilities, the coursework they assign must reflect it.

Current AI programs can competently regurgitate basic information, articulate simple ideas and summarize written sources. But algorithms cannot fully replicate human creativity and ingenuity. Instructors must focus on skills and thinking that computers can’t replace.

Overall, the incorporation of ChatGPT and AI technologies into schooling will force educators to strive for a higher quality of knowledge assessment. ChatGPT and AI language models will only get better as more students use them.

Students will continue to use these technologies regardless of whether they are told not to. They are simply too accessible and advanced to be ignored. Thus, instructors must find ways to use ChatGPT and other large language models to their advantage.

ChatGPT does not need to be a threat to learning if instructors understand its capabilities and treat it as an ally. If they do not, it will certainly become an enemy.
