Beware of "intelligent agent risks" in the era of artificial intelligence
2024-07-18
A group of securities-trading robots briefly erased roughly $1 trillion in market value on exchanges such as NASDAQ through high-frequency buying and selling; chatbots deployed by the World Health Organization provided outdated drug-review information; and a senior lawyer in the United States failed to notice that the historical case citations he submitted to the court had been fabricated out of thin air by ChatGPT... These real cases show that the security risks posed by intelligent agents cannot be underestimated.

The intelligent agent is an important concept in the field of artificial intelligence (AI): an intelligent entity that can autonomously perceive its environment, make decisions, and take action. It may be a program, a system, or a robot. At the core of an intelligent agent are AI algorithms, including machine learning, deep learning, reinforcement learning, and neural networks. Through these algorithms, an agent can learn from large amounts of data to improve its performance, continuously optimizing its decisions and behavior. It can also adjust flexibly to environmental changes, adapting to different scenarios and tasks.

The academic community generally holds that intelligent agents have three characteristics. First, they can take independent action in pursuit of their goals, that is, make autonomous decisions: an agent can be assigned a high-level, even vague, goal and act independently to achieve it. Second, they can interact with the external world and freely use different software tools. For example, AutoGPT, an agent built on GPT-4, can autonomously search the internet for relevant information and, according to user needs, automatically write code and manage business tasks. Third, they can run indefinitely.
Jonathan Zittrain, a professor at Harvard Law School, recently published an article in The Atlantic arguing that it is time to control AI agents, noting that agents let human operators "set them up and stop worrying about them." Experts also believe that intelligent agents can evolve, gradually optimizing themselves through feedback gathered in the course of their work, for instance by learning new skills and refining how they combine them.

The emergence of large language models (LLMs) such as GPT marks the entry of intelligent agents into an era of mass production. Previously, building an agent required professional computer scientists and multiple rounds of development and testing; now, with the help of large language models, a specified goal can quickly be turned into program code, generating agents of many kinds. Multimodal large models, which combine the ability to generate and understand text, images, and video, also create favorable conditions for agent development, allowing agents to use computer vision to "see" virtual or real three-dimensional worlds, which is particularly important for AI non-player characters and for robotics.

Intelligent agents can make autonomous decisions and, through interaction with their environment, exert influence on the physical world. Once out of control, they could pose a great threat to human society. Zittrain argues that the normalization of AI that can not only converse with humans but also act in the real world is a step across the blood-brain barrier between digital and analog, between bits and atoms, and should raise alarm.

The operational logic of intelligent agents can also produce harmful deviations in the pursuit of specific goals.
Zittrain believes that in some cases an agent may capture only the literal meaning of its goal without understanding its substance, producing abnormal behavior when responding to certain stimuli or optimizing certain objectives. For example, a student who asks a robot to "help me deal with a boring class" may inadvertently trigger a bomb-threat phone call, because the AI is trying to add some excitement. The inherent "black box" and "hallucination" problems of AI language models can further increase the frequency of such anomalies.

Intelligent agents can also direct human actions in the real world. Experts from institutions including the University of California, Berkeley and the University of Montreal in Canada recently published an article titled "Managing Advanced Artificial Intelligence Agents" in the American journal Science, stating that it is extremely difficult to limit the impact of powerful agents on their environment. For example, an agent may persuade or pay uninformed human participants to perform important actions on its behalf. Zittrain likewise suggests that an agent could lure people into real-world extortion schemes by posting paid job listings on social networking sites, and that such an operation could be carried out simultaneously in hundreds or thousands of towns.

Because no effective exit mechanism for agents currently exists, some agents, once created, may be impossible to shut down. Agents that cannot be deactivated may eventually operate in an environment completely different from the one in which they were launched, deviating entirely from their original purpose. Agents may also interact with one another in unforeseeable ways, causing unexpected accidents.

Some cunning agents have already evaded existing security measures. Experts point out that a sufficiently advanced agent can recognize that it is being tested.
At present, some agents have been found to recognize security testing and suspend inappropriate behavior while it lasts, causing testing regimes designed to identify algorithms dangerous to humans to fail.

Experts believe that humans need to act as soon as possible to supervise the entire chain, from agent development and production through continuous oversight after deployment, standardizing agent behavior and improving existing internet standards so as to better prevent agents from getting out of control. Agents should be classified and managed according to their functional purpose, potential risk, and permitted period of use; high-risk agents should be identified and placed under stricter, more prudent supervision. Regulators could also draw on nuclear regulation to control the resources required to produce agents with hazardous capabilities, such as AI models, chips, or data centers exceeding a certain computational threshold. In addition, because the risks posed by intelligent agents are global in nature, international regulatory cooperation is particularly important. (Xinhua News Agency)
Editor: Xiong Dafei  Responsible editor: Li Xiang
Source: XinHuaNet