Three Google researchers recently published a paper on arXiv examining AI progress. The paper reports that the transformer, the deep neural network architecture underlying most modern AI, is not good at generalizing beyond its training data. Transformers are the basis of the large language models behind AI tools such as ChatGPT. In the new paper, the authors Steve Yadlowsky, Lyric Doshi, and Nilesh Tripuraneni write, “Transformers exhibit various failure modes when a task or function requires something beyond the scope of the pre-trained data.”
The report claims that AI can struggle to execute even simple tasks if they fall beyond its training. Deep neural network transformers are good at performing tasks closely related to their training data but poor at handling tasks outside that scope. For those hoping to achieve Artificial General Intelligence (AGI), this issue cannot be ignored. AGI is a term technologists use to describe a hypothetical AI that can do anything a human can do. As it stands, AI is very good at specific tasks but does not transfer skills across domains the way humans do.
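The gap the paper describes, strong performance in-distribution and weak performance out of it, is easy to illustrate with a toy example. The Python sketch below is not the paper’s experiment (the authors studied transformers on function classes; this uses a small off-the-shelf network), but it shows the same qualitative effect: a model fit on inputs from one range predicts poorly on inputs it never saw.

```python
# Toy illustration: a network fit on one input range extrapolates poorly
# outside it. This is NOT the Google paper's setup, just the general idea.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Training data: y = sin(x) sampled only on [-3, 3] ("in-distribution").
X_train = rng.uniform(-3, 3, size=(2000, 1))
y_train = np.sin(X_train).ravel()

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
model.fit(X_train, y_train)

# Evaluate inside the training range and well outside it.
X_in = rng.uniform(-3, 3, size=(500, 1))
X_out = rng.uniform(6, 9, size=(500, 1))   # region never seen in training

mse_in = np.mean((model.predict(X_in) - np.sin(X_in).ravel()) ** 2)
mse_out = np.mean((model.predict(X_out) - np.sin(X_out).ravel()) ** 2)

print(f"in-distribution MSE:     {mse_in:.4f}")   # typically small
print(f"out-of-distribution MSE: {mse_out:.4f}")  # typically much larger
```

The in-range error is typically tiny while the out-of-range error is large, the same qualitative failure the Google team reports for transformers asked to handle functions outside their pre-training mix.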
Pedro Domingos, professor emeritus of computer science and engineering at the University of Washington, said the new research means “at this point, we shouldn’t get too obsessed about the AI that’s coming.”
AGI is touted as the ultimate goal of AI. In theory, it represents the creation of something as smart as or smarter than ourselves, and many investors and technologists are pouring time, money, and energy into the pursuit.
On Monday, OpenAI CEO Sam Altman took the stage with Microsoft CEO Satya Nadella and reiterated his vision of “collaborating to build AGI.”
Still far from the goal
Achieving this goal means asking AI to perform many of the inductive tasks the human brain handles with ease: adapting to unfamiliar scenarios, drawing analogies, processing new data, and thinking abstractly. But, as the paper points out, if the technology struggles with even “simple tasks,” then we are clearly still far from that goal.
Princeton University computer science professor Arvind Narayanan said that many people have accepted the limitations of AI, but he added: “It’s time to wake up.”
Jim Fan, a senior AI scientist at NVIDIA, questioned why the paper’s findings would surprise anyone, since “transformers are not a panacea in the first place.”
Domingos said the study highlights how “a lot of people are very confused” about the potential of a technology that’s been touted as a path to AGI.
He added, “This is a paper that has just been published, and the interesting thing is who will be surprised and who won’t be surprised.”
While Domingos acknowledges that transformers are advanced technology, he believes many people think such deep neural networks are much more powerful than they actually are.
“The problem is that neural networks are very opaque. These big language models are trained on unimaginably large amounts of data. This leaves a lot of people very confused about what they can and can’t do,” he said. “They’re starting to think you can always do miracles.”
Quotas and Limits
According to Google, its cloud AI services are subject to quotas and limits. These quotas restrict how much of a particular shared Google Cloud resource your Google Cloud project can use. For example, a training dataset can contain at most 30,000 documents and 100,000 pages, and at a minimum every label must appear on at least 10 documents. These quotas apply to each Google Cloud console project and are shared across all applications and IP addresses using that project.
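As a concrete illustration, a pre-upload sanity check against the limits quoted above might look like the sketch below. The `Document` type and the `validate_dataset` helper are hypothetical, invented for this example and not part of any Google API; only the numeric limits come from the quotas described above.

```python
# Hypothetical pre-flight check against the dataset limits quoted above.
# The Document shape and function names are illustrative, not a Google API.
from collections import Counter
from dataclasses import dataclass, field

MAX_DOCUMENTS = 30_000   # training dataset maximum (documents)
MAX_PAGES = 100_000      # training dataset maximum (pages)
MIN_DOCS_PER_LABEL = 10  # every label must appear on at least 10 documents

@dataclass
class Document:
    pages: int
    labels: list[str] = field(default_factory=list)

def validate_dataset(docs: list[Document]) -> list[str]:
    """Return a list of quota violations (an empty list means OK)."""
    problems = []
    if len(docs) > MAX_DOCUMENTS:
        problems.append(f"{len(docs)} documents exceeds the {MAX_DOCUMENTS} maximum")
    total_pages = sum(d.pages for d in docs)
    if total_pages > MAX_PAGES:
        problems.append(f"{total_pages} pages exceeds the {MAX_PAGES} maximum")
    label_counts = Counter(label for d in docs for label in set(d.labels))
    for label, count in label_counts.items():
        if count < MIN_DOCS_PER_LABEL:
            problems.append(f"label '{label}' appears on only {count} documents "
                            f"(minimum {MIN_DOCS_PER_LABEL})")
    return problems

# Example: a tiny dataset that violates the per-label minimum.
docs = [Document(pages=3, labels=["invoice"]) for _ in range(5)]
for issue in validate_dataset(docs):
    print("quota issue:", issue)
```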
Google’s acceleration in AI development comes amid a cacophony of voices, including notable company alumni and industry veterans, calling for AI developers to slow down. They warn that the technology is developing faster than even its inventors anticipated. Geoffrey Hinton, one of the pioneers of AI who joined Google in 2013 and recently left the company, has since gone on a media blitz warning about the dangers of supersmart AI escaping human control. Google CEO Sundar Pichai and other executives have increasingly begun talking about the prospect of AI matching or exceeding human intelligence, a concept known as artificial general intelligence, or AGI. The once-fringe term, associated with the idea that AI poses an existential risk to humanity, is central to OpenAI’s mission and has been embraced by DeepMind, but it has been avoided by Google’s top brass.
Limitations of AI Governance
There are technical limits to what is currently feasible for complex AI systems. With enough time and expertise, it is usually possible to get some indication of how a complex system functions, but in practice doing so is seldom economically viable at scale. At the same time, overly tough transparency requirements may inadvertently block the adoption of AI systems. From a governance perspective, AI poses a unique challenge both in formulating and in enforcing regulations and norms. Finally, there is a growing sense that the time has come for a more cohesive approach to AI oversight. Given the field’s open research culture, its increasingly powerful functional building blocks, and the growing number of AI applications, a comprehensive approach to AI governance is needed.
Final Words
AI has made significant progress in recent years, but it is not yet ready to surpass humans. There are still many limitations, including quotas and resource limits, the difficulty of explaining how models reach their outputs, and the challenges of AI governance. As AI continues to develop, it is important to keep these limitations in mind.