Meta is one of the world's leading companies in AI development. However, the company's own chief AI scientist appears to lack confidence in current AI methods. According to Yann LeCun, chief AI scientist at Meta, today's most popular AI methods will never lead to true intelligence, and a fundamental improvement is needed. He is skeptical of many of the most successful deep learning research directions of today.
The Turing Award winner said that the pursuits of his peers are necessary, but not sufficient. These include research on large language models such as the Transformer-based GPT-3. As LeCun describes it, Transformer proponents believe: “We tokenize everything and train giant models to make discrete predictions, and somehow AI will emerge from this.”
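To make the paradigm LeCun is describing concrete, here is a minimal toy sketch of “tokenize everything and make discrete predictions” in PyTorch. All names and sizes are illustrative assumptions; a real LLM like GPT-3 uses a deep Transformer trained on vast corpora, but the objective, predicting the id of the next discrete token, is the same in spirit.

```python
# Toy sketch of the "tokenize everything, predict discrete tokens" paradigm.
# This is an illustration, not GPT-3: real LLMs use deep Transformers and
# subword tokenizers, but the discrete next-token objective is the same idea.
import torch
import torch.nn as nn

text = "the cat sat on the mat"
vocab = sorted(set(text.split()))
stoi = {w: i for i, w in enumerate(vocab)}          # "tokenize everything"
tokens = torch.tensor([stoi[w] for w in text.split()])

class TinyNextTokenModel(nn.Module):
    def __init__(self, vocab_size, dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)      # scores over the discrete vocabulary

    def forward(self, ids):
        return self.head(self.embed(ids))           # logits for the next token

model = TinyNextTokenModel(len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()                     # a discrete classification loss

for _ in range(200):
    logits = model(tokens[:-1])                     # predict token t+1 from token t
    loss = loss_fn(logits, tokens[1:])              # "discrete predictions"
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Scaled up by many orders of magnitude, this same discrete-prediction objective is what trains today's large language models; LeCun's point is that the objective itself, not the scale, is what he finds insufficient.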
“They’re not wrong. In that sense, this could be an important part of future intelligent systems, but I think it’s missing necessary parts,” LeCun explained. LeCun pioneered the use of convolutional neural networks, which have been incredibly productive in deep learning projects.
LeCun also sees flaws and limitations in many other highly successful areas of the discipline. Reinforcement learning will never be enough, he insists. Researchers like DeepMind’s David Silver, who developed the AlphaZero program that mastered chess and Go, are focused on “very action-oriented” programs, LeCun observed, whereas “most of our learning is not done by taking actual action, but by observation.”
AI methods should push toward human-level intelligence
LeCun, 62, feels a strong sense of urgency to confront the dead ends he believes many in the field may be heading toward, and to steer the field in the direction he thinks it should go. “We’ve seen a lot of claims about what we should be doing to push AI to human-level intelligence. I think some of those ideas are wrong,” LeCun said. “Our intelligent machines aren’t even at the level of cat intelligence. So why don’t we start there?”
LeCun believes that not only academia but also the AI industry needs profound reflection. Self-driving car groups, such as the startup Wayve, think they can learn just about anything by “throwing data” at large neural networks, which seems “a little too optimistic,” he said.
“You know, I think it’s entirely possible for us to have Level 5 autonomous vehicles without common sense, but you have to work on the design,” LeCun said. He believes such over-engineered self-driving technology will end up like the computer vision programs that deep learning made obsolete: fragile. “At the end of the day, there will be a more satisfying and possibly better solution that involves systems that better understand how the world works,” he said.
The concepts behind AI methods need to change
LeCun hopes to prompt a rethinking of the fundamental concepts of AI, saying: “You have to take a step back and say, ‘Okay, we built the ladder, but we want to go to the moon, and this ladder can’t possibly get us there.’ I would say it’s like making a rocket: I can’t tell you the details of how we make a rocket, but I can give the basics.”
According to LeCun, AI systems need to be able to reason, and the process he advocates is minimizing certain underlying (latent) variables, which enables the system to plan and reason. Furthermore, LeCun argues that the probabilistic framework should be abandoned, because it is difficult to work with when we want to capture dependencies between high-dimensional continuous variables. He also advocates forgoing generative models; otherwise, the system has to devote too many resources to predicting things that are inherently hard to predict, and ultimately consumes too many resources.
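To illustrate what “minimizing certain underlying variables” could look like in practice, here is a hypothetical sketch of inference in an energy-based model, the family LeCun has long championed: instead of computing a probability distribution over outcomes, the system searches for the prediction y and latent variable z that minimize a learned scalar energy. The network, dimensions, and energy form below are assumptions for illustration, not LeCun's actual architecture.

```python
# Hypothetical sketch of inference-by-minimization in an energy-based model.
# Rather than assigning probabilities, the system descends a learned energy
# E(x, y, z) over a candidate prediction y and a latent variable z.
# All sizes and the energy form here are illustrative assumptions.
import torch
import torch.nn as nn

class EnergyModel(nn.Module):
    def __init__(self, x_dim=8, y_dim=8, z_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + y_dim + z_dim, 32),
            nn.ReLU(),
            nn.Linear(32, 1),                    # scalar energy: low = compatible (x, y, z)
        )

    def forward(self, x, y, z):
        return self.net(torch.cat([x, y, z], dim=-1)).squeeze(-1)

model = EnergyModel()
for p in model.parameters():
    p.requires_grad_(False)                      # freeze weights; only y and z are inferred

x = torch.randn(1, 8)                            # observation (e.g. current world state)
y = torch.randn(1, 8, requires_grad=True)        # candidate prediction/plan
z = torch.zeros(1, 4, requires_grad=True)        # latent variable to be minimized over

opt = torch.optim.SGD([y, z], lr=0.1)
for _ in range(50):                              # inference = gradient descent on energy
    energy = model(x, y, z).sum()
    opt.zero_grad()
    energy.backward()
    opt.step()
# y now holds the lowest-energy prediction found for x; z absorbs what the
# model cannot explain, without assigning it an explicit probability.
```

The appeal of this framing, on LeCun's account, is that the system spends its compute searching for one compatible prediction rather than spreading probability mass over every hard-to-predict detail.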
LeCun: AI’s road is narrow now
In a recent interview with the business technology outlet ZDNet, LeCun revealed details from a paper he wrote exploring the future of AI, in which he laid out his research direction for the next ten years. Currently, advocates of Transformer models such as GPT-3 believe that as long as everything is tokenized and huge models are trained to make discrete predictions, AI will somehow emerge. But LeCun believes this is at best one component of future intelligent systems, and that necessary parts are still missing.
Not even reinforcement learning can solve this problem, he explained: systems like AlphaZero may be good chess players, but they are still programs that focus only on “actions.” LeCun also adds that many people claim their ideas will advance AI, but some of these ideas mislead us. He further believes that the common sense of current intelligent machines is not even as good as that of a cat, and he sees this as the root of AI's slow progress toward real intelligence: the methods themselves have serious flaws.
As a result, LeCun confessed that he has given up research on using generative networks to predict the next frame of a video from the current frame.
“It was a complete failure,” he added.
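For context, a minimal sketch of the kind of pixel-level next-frame predictor he is describing might look like the following; the convolutional architecture and mean-squared-error loss are assumptions for illustration, not the models LeCun actually trained. The sketch shows why the approach struggles: a pixel-reconstruction loss forces the network to spend capacity on every unpredictable detail.

```python
# Illustrative sketch of next-frame video prediction with a generative network.
# The architecture and MSE loss are assumptions, not LeCun's actual setup; the
# point is that a per-pixel reconstruction objective penalizes the model for
# every unpredictable texture and motion detail equally.
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),      # predicted next RGB frame
        )

    def forward(self, frame):
        return self.net(frame)

model = NextFramePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

frame_t = torch.rand(1, 3, 64, 64)               # current frame (random stand-in data)
frame_t1 = torch.rand(1, 3, 64, 64)              # true next frame

pred = model(frame_t)
loss = nn.functional.mse_loss(pred, frame_t1)    # penalizes *every* pixel equally
optimizer.zero_grad()
loss.backward()
optimizer.step()
```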
LeCun summed up the reason for the failure: the probability-theory-based models he was using limited him. At the same time, he denounced what he regards as an almost superstitious faith in probability theory among those who believe it is the only framework for explaining machine learning; in fact, a world model built entirely on probabilities is difficult to achieve. He has not yet been able to solve this underlying problem well, but he hopes a fundamental rethink, along the lines of his ladder-and-rocket analogy, will point the way.
It is worth mentioning that LeCun spoke bluntly about his critics in the interview. He specifically took a jab at Gary Marcus, a professor at New York University who, he claims, has “never made any contribution to AI.”