Opinion: Students must resist blindly trusting AI sources for research
Powell Library is pictured above. Alessandra Kahn argues students must look further than AI overviews when researching topics. (William Gauvin/Daily Bruin)
By Alessandra Kahn
May 7, 2026 11:42 a.m.
Can you cheat in a game with no rules?
Since the rollout of ChatGPT in November 2022, educators have been scrambling to determine how it might fit appropriately in their classrooms – if at all.
Outside the classroom, defining what constitutes an ethical use of AI is similarly challenging. Restricting it, however, is out of the question; a Google search now automatically produces an AI summary.
Even those students who avoid AI in school find themselves interacting with it in their personal lives, whether consciously or not.
To be responsible researchers, students must reintroduce reflection into research: Why this search engine over a different one? Why these keywords over others? Why the AI summary over Wikipedia?
Admittedly, this process feels painstaking and inefficient, but such is the nature of free will. It may be inconvenient to choose an outfit every morning, but that liberty is certainly preferable to a state-mandated uniform.
To make real use of the skills that UCLA students pay thousands in tuition to hone, we need to be agents of our own knowledge — even when it’s inconvenient. Whether that means exploring AI-free search engines or simply verifying the information that AI provides with a human-created source, we must take authority over our searches.
Christabelle Marbun, a fourth-year philosophy student, is a research assistant at the Livescu Initiative on Neuro, Narrative and AI – a research initiative that explores the intersection of human and non-human thought using philosophical frameworks. She said working in this initiative has caused her to question what it actually means to learn.
“I think it depends on the mental model, whether it’s like you’re using it as a machine to think for you or if you’re using it as a calculator, because I think that there are differences,” Marbun said. “A lot of people think they’re using it (AI) as a calculator when they’re using it as a machine that thinks for them.”
It’s one thing when overreliance on AI takes place in the classroom, where illicit use is generally discouraged and penalized. But outside of this context, reliance on AI isn’t just permitted — it’s encouraged. Google boasts that users find its AI overview more satisfying than traditional search.
When students execute a search, the goal should be to learn the truth, not to find satisfaction.
Pamela Hieronymi, a philosophy professor who researches moral responsibility and free will, said humans tend to conflate language that makes sense with ideas that are true.
“We have a way of interacting with language,” Hieronymi said. “We’re used to it coming from a mind and we’re used to (by) default taking it at face value and trusting it.”
In fairness, our tendencies don’t have to be our doom.
In fact, a study in Scientific Reports tested people on their ability to discern images of real faces from AI-generated images, with assistance from an AI chatbot that was correct only half of the time. Those who held less positive views of AI performed better at this task than those who held more positive views.
Andrew Lopez, the external vice president of the AI Robotics Ethics Society at UCLA, said he reads AI overviews but doesn’t stop there.
“I’m not just going to act on the first – certainly not the Gemini overview at the top of Google results,” said Lopez, a third-year business economics student. “I’m going to try to look at least another source or two before I act on something.”
Responsible learning takes time. In an age when we’re pushed to settle for what is immediate, this patience is more important than ever.
Tina Austin, a lecturer at UCLA Extension who teaches courses in biomedical research, said she reads beyond the first search output and reflects on what AI might’ve left out.
“Keep in mind every time what’s been chipped off, what’s been missing,” Austin said. “And that’s when you start building healthy habits with tools.”
Much like a bag of potato chips, the information contained in AI outputs is heavily processed.
Slicing and frying a potato certainly makes it delicious, but it also decreases its nutritional value.
This same logic applies to AI overviews: Large language models serve us appealing, digestible content, but there’s no guarantee of quality.
Just as a mindful eater might question a health claim on a bag of chips, students must resist the urge to blindly trust the claims of Gemini or ChatGPT.
Pessimism about the role of AI in our research and learning does not equal success.
Skepticism does.
