

Does AI make rumor-making easier and more 'scientific'?

2024-07-17   

"There are pictures, there is truth, and it has all been verified by experts." Recently, Li Meng (a pseudonym), a resident of Tianjin, had a heated argument with her mother over a "science popularization article." Her mother firmly believed that an article containing videos, pictures, and research conclusions attributed to doctors and medical teams could not possibly be fake; Li Meng examined it carefully, found that it had been generated by AI, and noted that the platform had already debunked it, so it had to be false. The article was about cats: a girl who played with a cat supposedly contracted an incurable illness that left her completely unrecognizable. Because of this article, Li Meng's mother firmly opposes her keeping a cat, fearing she might fall ill the same way. Li Meng did not know whether to laugh or cry: "I really wish my mom would spend less time online."

It is not only Li Meng's mother who has been deceived by AI rumors. Recently, public security organs in many places have disclosed a series of cases involving the use of AI tools to fabricate and spread rumors. For example, the organization that published the fake news about an "explosion in Xi'an" could generate 4,000 to 7,000 fake news articles per day at its peak, earning more than 10,000 yuan a day; the actual controller of the company, a man surnamed Wang, operated five such organizations and 842 accounts. Experts interviewed by the Legal Daily pointed out that convenient AI tools have sharply reduced the cost of manufacturing rumors and increased both their volume and their reach. AI rumor-mongering is marked by low barriers to entry, mass production, and difficulty of identification, and supervision urgently needs to be strengthened to cut off the chain of interests behind it.

On June 20, the Shanghai police announced that two brand marketers had fabricated false information, such as a claim that "people were stabbed at Zhongshan Park subway station," in order to gain popularity; those involved were placed under administrative detention. One detail in the announcement is worth noting: one of the fabricators used AI video-generation software to produce false material such as a fake video of a subway attack. In recent years, the use of AI to spread rumors has become frequent, and such rumors spread rapidly, causing considerable social panic and harm. Last year, during the incident of a girl who went missing in Shanghai, a group maliciously fabricated and hyped rumors such as "the girl's father is her stepfather" and "the girl was taken to Wenzhou," relying on clickbait headlines and shock-style writing. The gang used AI tools to generate the rumor content and, through a matrix of 114 accounts, published 268 articles within six days, several of which received more than one million clicks.

The Cybersecurity Bureau of the Ministry of Public Security recently announced another case. Since December 2023, a report that "hot water is gushing out of the ground in Huyi District, Xi'an" had circulated repeatedly online, accompanied by rumors that the hot water was caused by an earthquake or by the rupture of underground heating pipes. Investigation showed that the rumors had been generated using AI tools.
Recently, outrageous "blockbuster news" items such as "a high-rise residential building in Jinan caught fire and multiple people jumped from the building to escape" and "a morning exerciser found a living person in a grave near Hero Mountain in Jinan" spread widely online and attracted a great deal of attention. The cyberspace affairs office of the Jinan Municipal Party Committee promptly refuted the rumors through Jinan's internet rumor-refuting platform, but many people were still taken in by the appearance of "pictures as proof."

A research report released in April this year by the New Media Research Center of the School of Journalism and Communication at Tsinghua University showed that economic and corporate rumors accounted for the largest share of AI rumors over the past two years, at 43.71%; over the past year, AI rumors concerning the economy and enterprises grew by 99.91%, with industries such as food delivery and express delivery hit particularly hard.

So how easy is it to create a fake news article with AI? Tests with several popular AI tools on the market found that, given only a few keywords, a "news report" can be generated within seconds, complete with detailed events, comments and opinions, and follow-up actions. Add a time and a place, attach pictures and background music, and a fake news report is finished. According to the investigation, many AI-generated rumors are padded with phrases such as "according to reports," "relevant departments are conducting an in-depth investigation into the cause of the accident and taking remedial measures," and "citizens are reminded to pay attention to safety in daily life"; once published online, they are hard for readers to tell from the truth. Beyond fake news, popular science articles, images, dubbed videos, and face-swapped videos with cloned voices can all be generated with AI, and after manual fine-tuning and the blending in of some genuine content, they become difficult to distinguish.

A researcher at the Journalism and Social Development Research Center of Renmin University of China has pointed out that the splicing nature of generative AI has a strong affinity with rumor: both "make something out of nothing," creating information that looks true and reasonable. AI makes rumor-mongering simpler and more "scientific." By summarizing patterns and piecing together plots around hot events, AI can quickly produce rumors that match people's "expectations" and spread them even faster. Online platforms can use AI in reverse to detect spliced images and videos, but reviewing the content itself is difficult; at present they cannot completely intercept rumors, let alone the large amount of unverified, unverifiable, or ambiguous information.

Fabricating content to attract traffic for profit may constitute multiple offenses, and the "rumor-spreading efficiency" of some AI software is astonishing. One piece of such software, for example, could generate 190,000 articles per day. The Xi'an police who seized the software extracted the articles it had saved over seven days and found more than one million of them, covering current affairs, social hot topics, social life, and other areas.
The account operators systematically published these "news" items on various platforms and then profited from the platforms' traffic-based reward schemes. The accounts involved have since been banned by the platforms, the software and servers concerned have been shut down, and the police are continuing to investigate. Behind many AI rumor incidents, the rumor-mongers' main motive is to attract traffic and cash in on it.

"Use AI to mass-produce viral copy and get rich overnight," "Let AI write my promotional articles: three articles finished in one minute," "Image-and-text creation with AI writing the articles automatically; a single account can easily produce 500+ posts a day, run multiple accounts, easy for beginners"... A quick search turns up many such "get rich" posts circulating on social media platforms, with plenty of bloggers promoting them in the comment sections.

In February this year, the Shanghai public security organs discovered that a short video about an artist's "unfortunate fate and regrettable death" had appeared on an e-commerce platform, drawing large numbers of likes and shares. Investigation showed the video had been fabricated. After being taken into custody, the video's publisher confessed that he runs an online store selling local specialties on the platform; because sales were poor, he fabricated eye-catching fake news to drive traffic to his store account. Not knowing how to edit video, he used AI to generate both the text and the footage.

Zhang Qiang, a partner at Beijing Yinghe Law Firm, said that using AI to fabricate online rumors, especially fabricating and deliberately spreading false information about dangerous situations, epidemics, disasters, or police incidents, may constitute the crime of fabricating and intentionally disseminating false information under the Criminal Law. If it harms the reputation of individuals or businesses, it may constitute defamation or the crime of damaging commercial reputation; if it affects the trading of stocks, securities, or futures and disrupts the trading market, it may constitute the crime of fabricating and spreading false information about securities and futures trading.

Continuously improving the rumor-debunking mechanism and clearly labeling synthesized content have become the focus of governance. To address the chaos of AI-enabled fakery and deepen governance of the online ecosystem, relevant departments and platforms have issued a series of policies and measures in recent years. As early as 2022, the Central Cyberspace Office and other departments issued the Regulations on the Management of Deep Synthesis of Internet Information Services, which stipulate that no organization or individual may use deep synthesis services to produce, copy, publish, or disseminate information prohibited by laws and administrative regulations, or use such services to engage in activities prohibited by laws and administrative regulations, such as endangering national security and interests, damaging the national image, infringing upon the public interest, disrupting economic and social order, or infringing upon the lawful rights and interests of others. Deep synthesis service providers and users may not use deep synthesis services to produce, copy, publish, or disseminate false news information.
In April this year, the Secretariat of the Cyberspace Administration of China issued a notice on a special campaign to rectify "self-media" that chase traffic without regard for bottom lines, requiring stronger labeling and display of information sources: content generated with AI or other technologies must be clearly labeled as such, and posts containing fictional or dramatized elements must be clearly marked as fiction. For content suspected of involving AI, some platforms now display a notice reading "content suspected of being AI-generated, please screen it carefully," label content containing fictional or dramatized elements as fiction, and take measures such as suspending accounts that break the rules. Some developers of large models have also said they will watermark content generated by their models through back-end settings in order to inform users.

In Zhang Qiang's view, people still lack sufficient understanding of and experience with generative AI, so it is very necessary for the media to remind the public to watch out for AI-generated information; at the same time, enforcement efforts should be stepped up, and rumor-mongering, fraud, and similar conduct carried out with AI should be promptly investigated and corrected. Zheng Ning, head of the Law Department at the School of Cultural Industry Management, Communication University of China, believes the existing debunking mechanism should be further improved: once a piece of information is identified as a rumor, it should be immediately labeled, and debunking notices should be pushed to users who have viewed it, to prevent the rumor from spreading further and causing greater harm. It is also worth noting that some people have no subjective intent to spread rumors; they simply post AI-synthesized content online, which is then widely shared and believed, causing harm. For this, Zheng Ning believes the simplest precaution is for the relevant departments or platforms to require that all AI-synthesized content carry the label "this image/video was synthesized by AI." (New Society)

Editor: Lubaikang    Responsible editor: Chenze

Source: epaper.legaldaily.com.cn

Special statement: if any pictures or text reproduced or quoted on this site infringe your legitimate rights and interests, please contact this site and we will correct or delete them promptly. For copyright issues and website cooperation, please contact Outlook New Era by email: lwxsd@liaowanghn.com

