Campus Queries: What do ChatGPT and the rise of AI mean for the future?
(Kimi Jung/Daily Bruin staff)
Campus Queries is a series in which Daily Bruin readers and staff present science-related questions for UCLA professors and experts to answer.
Q: What is ChatGPT, and what are its benefits and limitations?
A: ChatGPT was released in November as an artificial intelligence platform, but many people know little about its benefits, risks, limitations and future.
ChatGPT was developed by OpenAI, a research company focused on artificial intelligence. Glenn Reinman, vice chair of undergraduate studies in the computer science department and faculty director of the Break Through Tech AI hub, said in an emailed statement that ChatGPT is a natural language interface where people and computers communicate using human language to create content.
“These tools are just these large language models where they use deep learning to synthesize and create new content based on these massive amounts of information,” Reinman said.
ChatGPT can increase the time available for creative work as more mundane tasks become automated, said Yuan Tian, a computer science assistant professor who specializes in privacy and security for emerging technologies.
ChatGPT is a tool that improves people’s efficiency at sorting large amounts of information, Reinman said. He added that ChatGPT has its limitations, as building its database takes a long time, meaning the platform often lacks knowledge of current events. Reinman said he is concerned about how AI can be used in dangerous ways if people are not vigilant.
“Giving a voice to certain agents who (do not have) the best interest of the community as a whole, with deepfakes and other things, can really create confusion about what is real and what is not real. … It’s a double-edged sword,” Reinman said.
Daniel Snelson, an assistant professor of English and faculty member of the UCLA Game Lab, said the primary disadvantage of the algorithms behind these platforms is the bias inherited from the information they learn from. He added that ChatGPT will not be able to recognize the bias it employs, as it is not a sentient AI.
Tian said little is known about how ChatGPT handles privacy. Sensitive information could be leaked, as the platform handles a wide range of complex information, she added.
ChatGPT can be used in technical fields such as coding but is less effective in other fields. For example, Reinman said ChatGPT can create and analyze music, but it lacks the innovation and emotion found in music created by humans.
“The emotional level of a human is still really better served for some of these other areas like music,” Reinman said.
AI is unable to fully replicate human behavior, said Snelson.
“The way that we (humans) say things, the way that we use our language is what shapes our world,” Snelson said.
If people use AI with caution and an open mind, there are benefits to platforms like ChatGPT. Some users apply AI in innovative and inventive ways, such as AI Dungeon, a platform in which the AI plays the role of a game master and guides players through an adventure, Snelson added.
Q: From your perspective, what are your predictions on the future of ChatGPT, and how does it impact us or put us at risk?
A: Tian said we cannot avoid the development of AI platforms such as ChatGPT as their technology will only improve and become integrated into people’s everyday lives.
“There might be societal issues in the future, but I think as the technology evolves, the social norms … will also evolve,” Tian said.
However, ChatGPT and other AI platforms still carry risks. Snelson said the rapid pace of their development is concerning, as these platforms should be built thoughtfully.
Tian said that in the future, ChatGPT could lead to new human-AI collaborations – they just need to be done properly. ChatGPT could also be improved to help writers shift between artistic mediums, such as images and text, and to assist web developers who use coding software in their work, she added.
ChatGPT can be used to automate or facilitate jobs that require synthesizing data and information, which is a risk for people in fields like journalism or radiology, Reinman said.
Human intervention is required to ensure ChatGPT handles this data accurately, he added.
“I hope we can use ChatGPT in ways that we actually know what we are doing, … and I think it would be bad to use ChatGPT without really knowing exactly what it can do,” Tian said.
Snelson said he encourages students to experiment with ChatGPT critically to improve their understanding of the platform.
“Experimenting with these platforms is going to be the best way to understand how they operate, what their strengths are and what their drawbacks might be,” Snelson said.