This post was updated on Dec. 5 at 11:32 a.m.
Tony, a robot equipped with humanlike mechanical arms, glides over to a nearby person holding an empty water cup. Anticipating the person’s thirst, it delicately takes the cup and refills it.
Scenes like this were once reserved for Hollywood movies, but thanks to recent advancements in the field of artificial intelligence, UCLA computer science professors have shown such technologies are within reach.
Song-Chun Zhu, a statistics and computer science professor who created Tony, is an expert in the field of artificial intelligence and specializes in computer vision.
Zhu creates robots that learn by collecting data from their surroundings, often in the form of images, and inputting this information into statistical models that allow them to correctly identify objects in their environment. Partly inspired by actual human neural networks, Zhu’s technique relies on digital image processing and borrows statistical techniques from another field of AI – natural language processing.
In Zhu’s method, AI systems are equipped with digital cameras that parse images into their fundamental components, similar to how one separates a sentence into subjects, verbs and objects.
For example, Zhu’s robots break down images of humans into individual body parts. They can then use information about each body part, such as its size or shape, to infer the person’s age and other general characteristics. Armed with this “visual grammar,” Zhu said his systems can draw relationships between various parts of an image to learn about their environment.
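A toy sketch can make the idea concrete. The code below is not Zhu’s actual system; it is a hypothetical illustration of part-based reasoning, in which a person is decomposed into labeled parts and a simple rule over part proportions suggests an attribute such as age group. The part names, measurements and threshold are all invented for illustration.

```python
# Toy illustration (not Zhu's system): decompose a detected person into
# parts, then reason over part attributes, analogous to parsing a
# sentence into grammatical roles.
from dataclasses import dataclass
from typing import List

@dataclass
class Part:
    name: str        # e.g., "head", "torso", "legs"
    height_cm: float

@dataclass
class Person:
    parts: List[Part]

    def total_height(self) -> float:
        return sum(p.height_cm for p in self.parts)

    def head_ratio(self) -> float:
        # Children have proportionally larger heads -- a crude cue a
        # part-based model might use when estimating age group.
        head = next(p for p in self.parts if p.name == "head")
        return head.height_cm / self.total_height()

    def age_group(self) -> str:
        # Hypothetical threshold, chosen only for illustration.
        return "child" if self.head_ratio() > 0.18 else "adult"

person = Person(parts=[
    Part("head", 22.0),
    Part("torso", 70.0),
    Part("legs", 88.0),
])
print(person.age_group())  # the head is ~12% of total height -> "adult"
```

The point of the sketch is the structure, not the numbers: once an image is parsed into parts, a small set of rules over those parts can be reused across many different scenes.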
Zhu said his method allows AI systems to use a small set of rules to understand a large variety of situations, which is analogous to how humans learn.
“Humans use small data,” Zhu said. “We only use a few examples and then we (get) it. It’s a mystery how we learn from (a small amount of) data or sometimes even zero data.”
Zhu has implemented these models into several of his projects, including Tony. They have enabled these systems to perform several high-level tasks and are being used to help robots understand complex concepts such as human intention. Zhu’s team is also using these models in other projects and is currently developing an AI system for automated security surveillance.
Other UCLA researchers have also made strides in developing robotic technology. For example, Dennis Hong, a professor of mechanical and aerospace engineering, has created robots that can aid in natural disaster relief. He also helped develop LARA, a robotic concierge installed in the Luskin Conference Center that is capable of answering guests’ questions.
However, Zhu said he thinks there are still significant hurdles that prevent AI systems from attaining more complex forms of intelligence.
“We are limited by the accuracy of our statistical models for human behavior,” Zhu said.
In order to get machines to perform humanlike tasks, Zhu said researchers first need to have a computational model for how humans think. Zhu said he routinely collaborates with other neuroscience and psychology researchers to better understand the human mind.
However, some have cautioned that sophisticated artificial intelligence may have serious consequences for humanity, especially if these machines are able to develop sentience.
SpaceX and Tesla Motors founder Elon Musk said he views AI as the biggest existential threat to humanity. Celebrated physicist Stephen Hawking has also said AI could spell the end of the human race.
Given the current scientific understanding of human consciousness, some UCLA researchers question whether AI systems can reach such levels of self-awareness.
Tyler Burge, a professor of philosophy, said he thinks conflating intelligence with consciousness is problematic since the two concepts are fundamentally different.
“Consciousness isn’t a matter of what one can do, or how one is psychologically programmed, even internally,” Burge said. “From what we know so far, it appears that consciousness depends on the underlying material – living neural tissue, probably.”
Megan Peters, a postdoctoral neuroscience researcher, said she thinks many neuroscientists believe consciousness emerges from neural networks.
“The basic idea is that if an external stimulus is strong enough, humans will become aware of it because it will initiate a cascade of activity in the brain that allows further processing,” Peters said.
Despite the buzz around hyperintelligent AI systems, most AI researchers also doubt anything resembling such systems will be developed in the foreseeable future.
“The idea that AI is close to human level has mainly been furthered by philosophers, journalists, and tech entrepreneurs, based on their perception from outside of the research field,” said Guy Van den Broeck, an assistant professor in computer science who specializes in AI development. “Most AI researchers believe this is unlikely to happen in the foreseeable future (cf. Oren Etzioni’s survey).”
However, Zhu and Van den Broeck said they believe current AI systems may flourish when used for specific tasks that require little advanced reasoning.
“Within the next 10 years, it’s very possible that AI systems restricted to specific environments, such as self-driving vehicles or robotic restaurant servers, will be developed,” Zhu said.