Turing Award winner Yann LeCun: AI can never match human intelligence

2022-08-26

Some time ago, a Google engineer claimed that the company's AI chatbot LaMDA was conscious, causing an uproar. LaMDA is a large language model (LLM): given any text, it predicts the next word likely to follow. Because many conversations are predictable to some degree, such a system can carry a conversation and keep it flowing (a runnable sketch of next-word prediction appears at the end of this article). LaMDA performed so well in this respect that the engineer, Blake Lemoine, began to suspect it had human-like sentience.

As LLMs grow more popular and powerful, opinions about them diverge more and more. Today's systems have surpassed many benchmarks of "common sense" language reasoning, yet many still lack common sense and are prone to nonsense, illogical claims, and dangerous suggestions. This leads to a contradictory question: why are these systems at once so intelligent and so limited?

Recently, Turing Award winner Yann LeCun and Jacob Browning, a postdoctoral researcher at New York University, published an article in Noema magazine answering this question. The fundamental problem, they argue, is not AI itself but the limited nature of language:

1. Language carries only a small part of all human knowledge.
2. Most human knowledge, and all animal knowledge, is nonverbal (non-symbolic).
3. Therefore, large language models cannot approach human-level intelligence.

Clearly, LeCun believes we should discard the old hypothesis about the relationship between language and thought, namely that the two are identical. In his view, these language systems are inherently "superficial": even the most advanced AI on earth will never acquire all the thinking that humans possess.

This view in effect denies the validity of the Turing test. The Turing test rests on the premise that if a machine says everything it should say, it knows what it is saying, because knowing the right sentences and when to use them exhausts its knowledge. Starting from the philosophy of language, LeCun explains the intellectual basis of the Turing test and why it is unsound.

AI does not understand what it is saying

The philosophy of language of the 19th and 20th centuries held that "knowing something" means being able to think of a correct sentence and knowing how it relates to other sentences in the great web of truths we know. By this logic, the ideal form of language is the purely formal language of mathematical logic, made of arbitrary symbols connected by strict rules of inference; but if you are willing to make the effort to eliminate ambiguity and imprecision, natural language can serve as well. As the philosopher Wittgenstein put it: "The totality of true propositions is the whole of natural science." For a long time, people were convinced that logic, mathematics, and formalization are the necessary foundation of language.
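As a concrete illustration of the next-word prediction mechanism described above, here is a minimal sketch. LaMDA's code and weights are not public, so this uses the open GPT-2 model through the Hugging Face transformers library as a stand-in LLM; the prompt string is an arbitrary example.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load a small, publicly available LLM (GPT-2 stands in for LaMDA here).
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The best way to keep a conversation flowing is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# The model's prediction for the *next* token is the score distribution
# over the vocabulary at the last position of the input sequence.
next_token_logits = logits[0, -1]
top5 = torch.topk(next_token_logits, k=5)
for score, token_id in zip(top5.values, top5.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}  logit={score.item():.2f}")
```

Running this prints the five tokens the model considers most likely to come next. A chatbot produces whole replies by repeating this step: sample a token from the distribution, append it to the text, and predict again.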

Source: ithome.com
