Law

How should the illegal spread of fake AI-generated images be defined? Who should take responsibility?

2025-01-13   

After the magnitude 6.8 earthquake struck Dingri County, Shigatse City, Xizang, someone seeking traffic misappropriated an image clearly showing "a child trapped under the rubble", generated with AI (artificial intelligence) technology, and posted it with keywords such as "Shigatse earthquake". Observant netizens later noticed that the child in the image had six fingers, and the image was confirmed to have been created by AI. According to media reports, the image's original creator, a Mr. He, published the AI work not during the earthquake but in November 2024. Mr. He explained that the work was meant to depict life amid the ruins of war, and said that lifting "fake images" out of context to chase traffic off a disaster is reprehensible. The individuals involved have been administratively detained.

But questions remain: what legal issues are involved in generating images or videos with AI, and in reposting such material? How is illegality determined?

Lawyer Xu Hao of Jingshi Law Firm told China News Service that the key to whether such information is illegal lies in whether the publisher knows it is false. Individuals or organizations who deliberately spread false AI-generated images during major disasters, intending to mislead the public, disrupt social order, or obtain improper benefits, may face administrative penalties; where the circumstances are serious, they may be held criminally liable. False information about disasters and epidemics takes two forms: fabricating false information and publishing it, or publishing information one knows to be false. Once such information is publicly posted on a platform and gains views, it already constitutes an illegal act; whether it constitutes a crime depends on the consequences caused.

Do publishers need to bear legal responsibility?

Fu Jian, a well-known criminal defense lawyer and director of Henan Zejin Law Firm, believes that if, in the current earthquake disaster, a publisher passes off someone else's AI-created image as a genuine disaster image and posts it online, causing widespread dissemination, the publisher may be suspected of spreading rumors, falsely reporting dangerous situations, epidemics, or police situations, or otherwise intentionally disturbing public order, in violation of the Public Security Administration Punishments Law, and may face fines, detention, and other penalties. The creator, who did not intend to spread false information when using AI to make the image and who labeled it as an AI creation, should not be held responsible. By contrast, directly using AI-generated images to "ride the traffic" of a disaster and later calling it "unintentional" may still amount to disturbing public order or picking quarrels and provoking trouble, and may incur administrative liability.

Article 25 of the Public Security Administration Punishments Law provides that those who spread rumors, falsely report dangerous situations, epidemics, or police situations, or otherwise intentionally disturb public order, shall be detained for not less than five days but not more than ten days and may also be fined not more than 500 yuan; where the circumstances are relatively minor, they shall be detained for not more than five days or fined not more than 500 yuan.
Meanwhile, Article 291-1, Paragraph 2 of the Criminal Law stipulates that anyone who fabricates false information about dangerous situations, epidemics, disasters, or police situations and spreads it on information networks or other media, or who knowingly spreads such false information on information networks or other media, seriously disrupting social order, shall be sentenced to fixed-term imprisonment of not more than three years, criminal detention, or public surveillance; where serious consequences are caused, the sentence is fixed-term imprisonment of not less than three years but not more than seven years.

Does the platform have a review obligation?

A platform's review obligation is assessed on the basis of the published content, the identity of the publisher, and the significance of the event, that is, the degree of impact after publication. Where major matters such as epidemics or disasters are involved, the platform's review obligation is relatively high. Fu Jian pointed out that the platform allowed the image to be published without verifying its authenticity, reflecting a certain lag and loopholes in platform review. Given the large volume of content published on platforms, they should review promptly and handle reports from other users; when false information is found, measures such as removing the content and suspending accounts should be taken to prevent its further spread. A platform that fails to take timely and reasonable measures to stop the further spread of false information bears corresponding responsibility for the expanded harm. In addition, a platform that violates relevant provisions of the Cybersecurity Law, such that the spread of false information causes serious impact, may face administrative penalties such as warnings, fines, and orders to rectify.

Fu Jian added that spreading false information through AI images may also infringe the copyright and personality rights of others, and may constitute criminal offenses such as fabricating and intentionally disseminating false information. The development of information technology currently runs ahead of the law, but that cannot serve as grounds for exemption from liability.

The use of AI technology should balance freedom with the public's right to know

Xu Hao said that in a disaster, people need accurate information to respond and to judge the situation correctly; spreading false information by technological means disrupts social order. When the public is anxious and in a state of panic, ensuring the accuracy of information is crucial, and the use of technology to spread false information must be cracked down on severely in order to maintain social order. (China News Service)

Editor: Rina    Responsible editor: Lily

Source: TopNewsExpress
