Will artificial general intelligence arrive by 2029? Gary Marcus challenges Musk: how about $100,000?

2022-06-06

On May 30, the world-famous Elon Musk tweeted: 2029 is a pivotal year. I'd be surprised if we haven't achieved artificial general intelligence (AGI) by then. The same goes for humans on Mars.

Shortly after Musk's tweet, Gary Marcus, the well-known AI scientist and professor of psychology at New York University, published a blog post walking Musk through five things he should know about AGI and laying out his reasons for believing AGI will not arrive by 2029. Musk has not replied to Marcus's challenge. Melanie Mitchell, an AI expert at the Santa Fe Institute, suggested formalizing the wager on longbets.org, and Marcus said that as long as Musk is willing to bet, he will gladly show up. Below are Gary Marcus's five arguments against Musk's prediction, which AI Technology Review has organized without changing their original meaning.

Musk is a prophet of "big talk"

First, Musk's timelines are habitually wrong. In 2015, Musk said truly autonomous vehicles were two years away; he has repeated the claim every year since, and truly autonomous vehicles still have not appeared.

Musk overlooks the challenge of edge cases

Second, Musk should pay more attention to the challenge of edge cases (outliers, or unusual situations) and think about what they may mean for his prediction. The long-tail problem makes it easy to believe AI problems are far simpler than they really are: we have enormous amounts of data about everyday events, which current techniques handle easily, and that gives us a misleading impression; for rare events we have very little data, and current techniques handle them poorly.

Humans have a large repertoire of skills for reasoning from incomplete information, which may be how we overcome the long tail in daily life. But for today's popular AI techniques, which lean on big data rather than reasoning, the long tail is a very serious problem. Marcus tried to sound this warning in a 2016 interview entitled "Is Big Data Taking Us Closer to the Deeper Questions in Artificial Intelligence?" He said at the time:

Although there is a great deal of hype about artificial intelligence and a great deal of money invested in it, I think the field is headed in the wrong direction. There is a lot of low-hanging fruit along the specific directions of deep learning and big data. People are very excited about big data and what it is giving them right now, but I'm not sure it is taking us closer to the deeper questions in artificial intelligence, such as how we understand language or how we reason about the world.

Think about driverless cars. Driverless cars are great in typical cases: put one in sunny Palo Alto and it will perform very well. But put the same car somewhere with snow or rain, or somewhere it has never seen before, and it will inevitably run into trouble. Steven Levy wrote a piece about Google's autonomous-car operation in which he described how, at the end of 2015, the team finally got the system to recognize leaves. The system can indeed recognize leaves, but it cannot gather that much data for every uncommon thing, as the sketch below illustrates.
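To make the long-tail point concrete, here is a minimal sketch. It is an illustration of the statistical problem, not Marcus's analysis, and every quantity in it is a made-up assumption: situations in the world are drawn from a Zipf-like distribution, so a data-driven system sees the common cases thousands of times and the rare ones almost never.

```python
# A minimal sketch of the long-tail problem: event types follow a
# Zipf-like distribution, so a model trained on sampled data sees the
# head constantly and the tail almost never. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_event_types = 1000            # distinct driving situations, say
n_training_samples = 100_000    # what the fleet happens to record

# Zipf-like frequencies: event type k occurs with probability ~ 1/k
freq = 1.0 / np.arange(1, n_event_types + 1)
freq /= freq.sum()

samples = rng.choice(n_event_types, size=n_training_samples, p=freq)
counts = np.bincount(samples, minlength=n_event_types)

head = counts[:10]      # the 10 most common situations
tail = counts[-100:]    # the 100 rarest situations

print(f"examples per head event: {head.mean():.0f}")
print(f"examples per tail event: {tail.mean():.1f}")
print(f"tail events never seen at all: {(counts == 0).sum()}")
```

A system that learns purely from data does fine on the head of the distribution and has essentially nothing to learn from on the tail, which is exactly where driving gets dangerous.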
Common sense is what lets humans handle such situations: we can work out what a thing is and how it got there. All the system can do is memorize things, and that is its real limit.

Unexpected situations have always been the scourge of contemporary AI, and they may remain so until a genuine revolution arrives. This is why Marcus is willing to guarantee that Musk will not launch an L5 (fully) autonomous vehicle this year or next. Outliers are not impossible to solve, but they remain a major problem, and so far there is no known robust solution. Marcus believes the field must shed its heavy reliance on existing techniques such as deep learning. There are still seven years until 2029, and seven years is a long time; but if AGI is to arrive before the end of this decade, the field needs to invest in other ideas. Otherwise, outliers alone are enough to sink the goal of AGI.

Artificial general intelligence covers a wide range

The third thing Musk needs to consider is that AGI is a broad problem, because intelligence itself covers a broad range. Here Marcus quotes Chaz Firestone and Brian Scholl:

There is no one way the mind works, because the mind is not one thing. Instead, the mind has parts, and the different parts operate in different ways: seeing a color works differently from planning a vacation, which works differently from understanding a sentence, moving a limb, remembering a fact, or feeling an emotion.

For example, deep learning does quite well at recognizing objects but much less well at planning, reading, or language understanding. The situation can be represented by the following figure: AI is currently doing well at some aspects of perception but still needs work elsewhere. Even within perception, 3D perception remains a challenge and scene understanding is unsolved. There is still no stable or reliable solution for common sense, reasoning, language, or analogy. The fact is that Marcus has been using this pie chart for five years, and the state of AI has hardly changed.

In his 2018 paper "Deep Learning: A Critical Appraisal," Marcus concluded: despite the problems I have raised, I don't think we should abandon deep learning. Rather, we need to reconceptualize it: deep learning is not a universal solvent, but one tool among many. Alongside this tool we also need hammers, wrenches, and pliers, not to mention chisels, drills, voltmeters, logic probes, and oscilloscopes.

Four years later, many people still hope deep learning will prove a panacea; to Marcus this remains unrealistic, and he still believes humanity needs more techniques. Realistically, seven years may not be enough to invent those tools (where they don't already exist) and move them from the laboratory into production. Marcus reminded Musk of the "production hell" of 2018 (Musk described the mass-production phase of the Model 3 electric car as hell, calling it "production hell"). Integrating, in under a decade, a set of techniques that have never been fully integrated before is a very tall order. Marcus said he does not know what Musk intends Optimus (Tesla's humanoid robot) to be, but he can guarantee that the AGI required for a general-purpose domestic robot is far more than what a car requires. After all, one car is much like another no matter which road it drives on.
Complex cognitive systems have yet to be built

The fourth thing Musk needs to recognize is that we still lack a sound methodology for building complex cognitive systems. Complex cognitive systems have too many moving parts, which usually means that the people building driverless cars and the like end up playing a giant game of whack-a-mole: solve one problem, and another pops up. Patch upon patch sometimes works and sometimes doesn't. Marcus thinks AGI is impossible without solving this methodology problem, and in his view nobody has yet made a good proposal.

Debugging deep learning is extremely difficult, because nobody really understands how it works, and nobody knows how to fix a given problem: collect more data? add more layers? something else? The debugging familiar from classical programming does not apply, because deep learning systems are so uninterpretable that you cannot reason about what the program is doing, nor follow the usual process of elimination. Instead, the deep learning paradigm involves a great deal of trial and error, retraining and retesting, to say nothing of endless data-cleaning and data-augmentation experiments. A recent Facebook report was frank about the many difficulties encountered in training the large language model OPT. Sometimes this looks more like alchemy than science, as in the following comic:

▲ "This is your machine learning system?" "Yes, you pour the data into this big pile of linear algebra, then collect the answers on the other side." "What if the answers are wrong?" "Then stir the pile until they start looking right."

Program verification might ultimately help, but again, there are no tools for writing verifiable code in deep learning. If Musk wants to win the bet, this problem will have to be solved too, and solved soon. A minimal sketch of that trial-and-error loop appears below.
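As an illustration only, here is a caricature of the workflow the comic mocks. This hypothetical loop is my sketch, not Facebook's or Marcus's code; the model, knobs, and metric are all stand-in assumptions. The point is that with no theory of why the system fails, "debugging" degenerates into perturbing hyperparameters, retraining, and retesting.

```python
# A caricature of "stir the pile until the answers look right": no
# principled fix exists, just knob-twiddling, retraining, and retesting.
# The model, data, and metric here are stand-in assumptions.
import random

def retrain_and_evaluate(learning_rate: float, n_layers: int) -> float:
    """Stand-in for a full training run; returns held-out accuracy.
    In reality this single step costs hours or days of compute."""
    random.seed(hash((learning_rate, n_layers)) % (2**32))
    return random.uniform(0.6, 0.95)  # opaque outcome, like a real run

def stir_the_pile(target_accuracy: float = 0.9, max_trials: int = 20):
    lr, layers = 1e-3, 4                      # initial guess
    best = retrain_and_evaluate(lr, layers)
    for trial in range(max_trials):
        if best >= target_accuracy:
            return best, trial                # answers "look right": ship it
        # No theory says which knob to turn, so turn one at random.
        if random.random() < 0.5:
            lr *= random.choice([0.3, 3.0])
        else:
            layers = max(1, layers + random.choice([-1, 1]))
        best = max(best, retrain_and_evaluate(lr, layers))
    return best, max_trials                   # give up; accept what we got

accuracy, trials = stir_the_pile()
print(f"reached accuracy {accuracy:.2f} after {trials} retraining runs")
```

Contrast this with classical debugging, where a failing test points at a specific line: here every iteration is a full retraining run, and nothing explains why the last stir helped.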
Criteria for the bet

The last thing Musk needs to consider is the terms of the wager. If you want to bet, you have to set ground rules, and the term AGI is rather vague. As Marcus said on Twitter a few days ago: I define AGI as "a flexible and general intelligence, with resourcefulness and reliability comparable to or beyond human intelligence."

Marcus proposed betting with Musk under specific ground rules. Together with Ernie Davis, and at the request of the people at Metaculus, he wrote the following five predictions:

By 2029, AI will still be unable to watch a film and tell you accurately what is happening (what Marcus called the "comprehension challenge" in The New Yorker in 2014), nor answer questions about who the characters are and what their conflicts and motivations are.

By 2029, AI will still be unable to read a novel and reliably answer questions about its plot, characters, conflicts, motivations, and so on.

By 2029, AI will still be unable to work as a competent cook in an arbitrary kitchen.

By 2029, AI will still be unable to reliably write more than 10,000 lines of bug-free code from a natural-language specification or through interaction with a non-expert user. (Gluing together code from existing libraries doesn't count.)

By 2029, AI will still be unable to take arbitrary proofs from the mathematical literature written in natural language and convert them into symbolic form suitable for symbolic verification.

If Musk (or anyone else) manages to break at least three of these predictions by 2029, he wins; if only one or two are broken, AGI cannot be said to have been achieved, and Marcus wins. Marcus is eager to take the bet, and asked Musk: "Want to bet? How about $100,000?"

What do you think? Who will win? Grab your popcorn.

Editor: Li Jialang | Executive editor: Mu Mu

Source: ithome.com
