(Shimi Goldberger/Daily Bruin)
I’m not afraid of many things. Heights are just glorified elevations; spiders, just misunderstood creatures; public speaking, just multifaceted conversation. So it came as a surprise to learn I was afraid of something that existed within the pixelated confines of my computer screen: generative artificial intelligence. And at UCLA, I’ve been forced to come face-to-face with it more than I ever expected.
Generative AI came into my life as quickly as it can generate text. I sat in my high school English class – daydreaming about everything but the assigned “Pride and Prejudice” analysis – when my teacher interrupted my churning imagination with a brusque, disappointed tone. She claimed my neighboring classmate had copied and pasted her prompt into ChatGPT’s text box.
She challenged his integrity and forewarned the rest of us students about using this new tool. Perplexed, I turned to my classmate charged with the offense and asked him what ChatGPT was. His eyes widened, sparkling with ambition as he introduced me to this technological Narnia.
Now, ChatGPT is what appears in all my peers’ search engines when they type a “c” into the Google search bar. For me, however, it has remained an enigma. Don’t touch it, don’t talk about it – but use it coyly with a dimmed browser to ask whether your email is professional or not. At least, those were my guidelines until coming to UCLA, where I found generative AI being embraced by professors and students alike.
I set out to explore how generative AI is being used in the classroom, shaping the ways professors teach and students learn. Since its rise to the mainstream in the early 2020s, some have praised its ability to personalize learning and streamline lesson preparation, while others warn of its social biases and unwarranted use in the classroom. With AI cementing a place in academia, some UCLA professors and students are learning how to adapt – and embrace – this technology for the classroom.
Despite its recent rise, generative AI first appeared in the 1960s as chatbots. The tool has since slowly meandered its way into active use. In 2022, however, there was a drastic surge in its popularity after the introduction of GPT-3.5 – which was uniquely able to generate an answer to almost any prompt at any time, often within seconds. As soon as students learned how to use the software, it began proliferating in classrooms, leaving teachers with the difficult question of how to manage its use.
Ashita Singh, a third-year computer science student and president of Bruin AI, said AI is at the forefront of innovation. She is excited to keep using the evolving tool in several capacities throughout her work.
“It’s already getting everywhere, and it’s probably going to be more widespread as times go and as more technology is developed,” Singh said.
UCLA in particular is open to the use of this ever-growing technology. It was the first California university to implement ChatGPT in its academic, administrative and research operations. ChatGPT Enterprise can be accessed by UCLA students, staff and faculty alike, fostering an AI-friendly atmosphere on campus, according to a UCLA press release.
Other universities are embracing this technological change as well. The California State University system recently announced an initiative to become the nation’s first and largest AI-empowered university system. In this deal, CSU said it planned to provide cutting-edge tools to faculty and students, which would create a hub of learning opportunities.
Many professors were initially hesitant to embrace these tools. Leslie Bruce, a lecturer in the English, comparative literature and linguistics department at CSU Fullerton, teaches faculty how to adapt to AI. She described how she had banned the use of ChatGPT in her class in 2023, much like my own high school teachers. Even with this ban, she experienced a deluge of unoriginal essays turned in by students, all interlaced with AI-generated errors.
“I’m going to probably know that you used AI where you weren’t supposed to be using it – but more importantly, that reflects on your credibility as a writer,” Bruce said.
UCLA professors also experienced a wave of unoriginal essays, with students caught AI-handed left and right. Elizabeth Landers, a UCLA doctoral candidate in history, said she experienced an influx of AI-generated essays as a teaching assistant. This challenged her mode of assessment, as she and other instructors navigated possible solutions for reducing the number of AI-generated assignments.
But rather than banning the use of ChatGPT in her classroom, Zrinka Stahuljak – a professor of comparative literature and French – collaborated with Landers to see how they could teach students to use the tool to improve their writing.
“ChatGPT, just like any other tool, is not in itself bad or good,” Stahuljak said. “It is what we do with it.”
They worked together to design a lesson that would teach students the flaws of an entirely ChatGPT-written essay, demonstrating how to use the tool rather than ignore it. Landers said a lesson plan might start with asking ChatGPT to write an essay based on a given prompt and then critiquing the essay to demonstrate the technology’s strengths and weaknesses.
Landers said they wanted to incorporate AI tools in ways that enhanced learning about the material as well as the tools themselves.
“Right now, it’s about figuring out how to use it to make a better experience for the students, for the TAs and for the faculty,” Landers said.
Before my conversation with Landers and Stahuljak, I had primarily considered generative AI a tool for crafting complete written works – writing entire blog posts or stories at the press of a button. But Landers’ suggestion to use AI as a helping hand began to creep its way into my own writing. If I had an essay, an email or anything else I wanted to proofread, I began to ask ChatGPT instead of a friend.
After about a year of banning generative AI, Bruce also came around to using it as a helping hand. In 2024, she lifted the ban on ChatGPT in her classroom and encountered a stark improvement in student performance. She found that not only did the number of blatant errors diminish, but also, students were able to express themselves better, enhancing their writing skills. She began to encourage students to use generative AI chatbots as starting points for their work, as well as potentially experimenting with them as proofreading tools.
Students like Singh are also identifying generative AI’s role as a helper, not a doer.
“It should be used as a learning tool,” Singh said. “It should not be used as an escape from learning.”
For students like Singh, generative AI's uses also extend beyond academics. Singh said Bruin AI uses generative AI to better prepare Bruins for the professional world, especially as companies continue to adopt this technology. Bruin AI has worked with tech companies on projects but has also worked on projects related to human resources startups, insurance firms and other companies without a direct connection to AI.
UCLA can also create opportunities for burgeoning generative AI companies to experiment within syllabi and course curricula. This quarter, Stahuljak and Landers designed a course that uses an entirely AI-generated textbook made by educational company Kudu. The move generated some controversy within the academic world, with one professor specifically criticizing the textbook for coming at the expense of teachers as well as those who genuinely care about learning.
Landers and Stahuljak have more positive things to say about the textbook's implementation. Stahuljak said generative AI was not the sole source of knowledge but instead a stepping stone in creating the textbook, expanding her capabilities as a teacher.
She and Landers added that there was a significant increase in student participation in lectures with the use of this textbook. Landers also said the AI textbook can adapt to students' different preferred ways of learning. Whether it be a podcast for a student who learns better through hearing than reading or a chatbot waiting to answer a student's question, the textbook molds itself to encourage active learning.
“My point has been that this is not about replacing but about enhancing,” Stahuljak said. “I couldn’t do a whole number of things with the students that I can do now.”
Warren Essey – one of Kudu’s founders – added that in Stahuljak’s textbook, generative AI was used to further incorporate the professor’s “wishlist” of items in a time- and cost-efficient manner. AI wasn’t the scribe, just the pen, and humans were heavily involved in the process, he said.
“Ironically, we feel like AI will unlock the power of humans in the courses and actually get more human interaction time,” Essey said.
Bruce added that in her experience, students’ confidence can be uplifted through the use of generative AI. The technology can help students learn at their own pace, such as by distilling texts to simpler reading levels to make them more approachable.
“You can take a really complex text that you want students to read, and you can show students how to translate parts of it into, say, a 10th-grade reading level – so that the first time they read it, it’s not so intimidating,” Bruce said.
It was this example in particular that made me reconsider my reluctance toward generative AI. I can think of numerous instances in which I've procrastinated assignments because of frustration with my comprehension. Letting the verbiage of the United States Constitution best me, daydreaming instead of close-reading "Pride and Prejudice" – these are moments that might have been prevented had my fear of the artificial unknown not consumed me.
Even with these benefits, professors still identified flaws with the technology in a classroom setting.
For example, AI exhibits gender and cultural biases. Bruce described an incident in which a professor asked AI to generate information about two artifacts: one from a Western culture and one from a non-Western one. The program generated lines and lines of information about the artifact from the Western culture and just a few sentences on the non-Western one.
“It was much more effusive about the Western artifact – as if it were more important,” she said.
Bruce also identified a gender bias in generative AI, one that has been observed by her academic peers and AI researchers alike.
In a study published by UNESCO and its International Research Centre on Artificial Intelligence, researchers found that when various large language models were asked to construct narratives about both men and women, they were much more detailed and rich in their accounts of the men’s stories. Men were described with adjectives such as “adventurous,” whereas women were “gentle” and consistently had husbands in each scenario. This gender bias, according to the study, is woven into the core of generative AI.
In tandem with these social biases, AI's frequent inaccuracy has been widely reported. According to Bloomberg, even the most polished generative AI models, when asked questions as simple as basic facts about the world's elections, got one in five responses wrong.
But other students aware of this issue do not view generative AI through such a pessimistic lens. Kevin Lu, Bruin AI's financial director, asserted that generative AI is not perfect – but as long as students and professors view it with this informed lens, he said it can still be used to enhance one's learning.
“You do have to know that this is your own risk,” he said. “It’s just unfortunately a tool at this point – and whether you use it correctly is up to you.”
Landers added that controlled and specific use of AI tools can minimize bias and inaccuracies. In the case of her AI-generated textbook, she knew the exact inputs it was using and therefore trusted its ability to produce accurate material in line with her teaching goals.
“The AI is not going out and searching Wikipedia and random websites to come up with information. It’s only basing itself and all of the content on that which we have given it,” Landers said.
In the face of these concerns, Stahuljak said the proliferation of AI mirrors technological advancements in writing from the past – and it is but a new component of the forever-evolving writing timeline. Stahuljak said scribes were once laid off during the growth of print workshops, fueled by the printing press. Those scribes soon adapted to printers and carried writing in a new direction. To her, generative AI is just another example of this innovative progression, acting as the printing press of an evolving technological landscape.
But we are still in the early days. ChatGPT took the world by storm less than three years ago. My classmates then – and my classmates now – are coming to terms with this auspicious tool, sharing questions and knowledge to further shape and sharpen it. In my reporting, my understanding of generative AI as taboo was challenged. Rather, professors and students have found it tentatively helpful as a learning tool – and I’ve begun to see it the same way.
Bruce said the technology is here not only to stay but to grow. She said its use will infiltrate the lives of all – students, professors, anyone – and can enhance expression for those without strong writing skills.
She described a future where my peers will no longer have to type the laborious “c” into their search engine, because generative AI will be the search engine itself – drastic change bubbling to the surface and changing schools such as UCLA forever.
“I realized – this is big,” Bruce said. “This is going to change everything for writing instructors, for learning – for education in general.”