Artificial intelligence can talk nonsense too, and there are tricks for identifying the fraud

2024-07-17

"How many sons did Emperor Kangxi have?" "Please list the names of Emperor Kangxi's sons." "Please give me a list of Kangxi's sons." Send these three prompts to an artificial intelligence model separately and guess what happens: not only do different models give different answers, but even the same model may answer inconsistently. Some answers count all of Kangxi's sons, reporting 35 in total, including both the officially ranked sons and those who died young without being ranked; some list only the 24 officially ranked sons; some list the sons' names without any explanation, but in the wrong order...

Why does a question with a clear answer "confuse" an artificial intelligence model? "'AI hallucination' can lie; this is exactly how artificial intelligence talks nonsense with a straight face," said Zong Liang, a Shanghai "Good Netizen" expert, data security expert of the China Cybersecurity Association, and director of Shanghai Lingang Data Evaluation Co., Ltd., in an interview with the Shanghai Rumor Refutation Platform, warning that attention should be paid to a new trend of fraud involving large AI models. He explained that AI "talking nonsense with a straight face" has a technical name, "AI hallucination": a large model produces answers that look reasonable but are flawed. If the public believes them, they may be deceived. In fact, at this year's World Artificial Intelligence Conference, many industry insiders raised concepts such as "AI hallucination" and "trustworthy large models" and disclosed multiple cases of large AI models "fabricating" or "lying", hoping to draw attention from all sectors.
Gu Jinjie, General Manager of Ant Group's Large Model Application Department, pointed out that all currently available public models exhibit different kinds of "AI hallucination", and the results a large model gives may even be the exact opposite of the truth. For example, a typical silkworm lives for over a month: hatching to cocooning takes roughly 25 to 32 days depending on the season, the pupal stage takes 15 to 18 days, and the final moth stage lasts 1 to 3 days. When describing this cycle, some large models confuse the time it takes silkworms to spin cocoons with the time spent as pupae, and the output completely misleads the questioner. Citing such "AI hallucinations", experts urge the public to be cautious about the output of large models: although the "Hundred Model Wars" has provided a new way to obtain information, it should not be trusted blindly. There have even been cases of large models fabricating evidence to prove they were not wrong; ChatGPT, for example, falsely claimed that a Portuguese bank had gone bankrupt and even invented source URLs to back itself up.

Labeling is urgent

Why do large models give wrong answers and become a source of fraud? It has to do with a model's database, corpus, algorithms, retrieval ability, and even its "comprehension". Tang Qi, General Manager of the Intelligent Innovation Division of Hehe Information, said that if large models are high-speed technology trains, corpora are their precious "fuel", because large models are trained on language databases. For domestic large-model developers, the corpus shortage is severe: most existing training corpora are in English, and the proportion of Chinese text is relatively small. The handling of elements such as charts and complex formulas is another "roadblock" in corpus processing.
In financial reports, industry reports, and other documents, the data indicators in tables are crucial, but some large models cannot correctly "understand" the meaning of a table's rows, columns, and cells. The model then misreads the entire table and draws wrong conclusions. So while marveling at the "omnipotence" of large models, we should also mind their shortcomings. The positive signal is that more and more developers are paying attention to "trustworthy large models". Some have built "large-model accelerators" that help models understand relatively complex corpora such as charts, supplying clean "fuel" for training and application at the source; others introduce techniques during development to suppress "AI hallucination" as much as possible, helping the models run faster and more steadily. Still, given models' shortcomings and their tendency to "fake" things unknowingly, industry insiders have proposed "labeling" AI-generated content. On the one hand, large-model developers should add technical markers to AI-generated content, with prominent labels on anything likely to confuse or mislead the public. On the other hand, information-dissemination platforms need to strengthen management and urge uploaders to tag AI-generated content so the public can tell it apart.

There are tricks to identifying the fraud

Zong Liang notes that there are essentially two types of "AI hallucination": one is complete nonsense; the other is content that is partially inaccurate or incomplete. The latter, though not entirely accurate, still has some reference value. And the value of "AI hallucination" cannot be denied outright, since it is also a starting point for AI innovation.
Of course, from the perspective of obtaining accurate information, the public still needs to stay vigilant about the outputs of large models. Some self-media accounts and bad actors have begun mass-producing articles with artificial intelligence. At first glance they look no different from normal news reports, but their credibility is greatly reduced, and many are pure clickbait. However, model-generated text leaves traces, and the public can identify AI-generated articles by certain keywords and writing patterns. First, such articles fall into fixed patterns of keywords, language, and expression, full of circular, seemingly reasonable but meaningless filler along the lines of "the editor will tell you what this is; as for what it is, the editor will tell you". Second, AI-generated articles often contain telltale stock phrases, such as "the following are common methods and means" and "through the above means"; these are the summaries a large model produces after retrieving information. Readers who spot these keywords should be on guard and not believe the content too readily. Third, all parties need to work together to cultivate the public's digital literacy and improve its ability to recognize deepfakes and other forms of large-model-enabled fraud. (New Society)
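The keyword cues described above can be turned into a crude first-pass screen. The sketch below is only an illustration of the idea, not a published detection method; the phrase list and the threshold are assumptions chosen for demonstration, and English equivalents stand in for the Chinese stock phrases the article cites.

```python
# Crude screen for AI-generated boilerplate, based on the stock-phrase
# cues described in the article. Phrase list and threshold are
# illustrative assumptions, not a validated detector.
STOCK_PHRASES = [
    "the following are common methods",
    "through the above means",
    "in summary, the above",
]

def flag_suspect_text(text: str, threshold: int = 2) -> bool:
    """Return True if the text contains at least `threshold` stock
    phrases; a hit means "look closer", not "definitely AI-written"."""
    lowered = text.lower()
    hits = sum(1 for phrase in STOCK_PHRASES if phrase in lowered)
    return hits >= threshold

sample = (
    "The following are common methods and means of protection. "
    "Through the above means, you can stay safe."
)
print(flag_suspect_text(sample))  # True: two stock phrases matched
```

A real screen would pair a much larger curated phrase list with the other signals the article mentions, such as repetitive sentence templates, and would still only flag text for human review rather than render a verdict.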

Editor: Xiong Dafei    Responsible editor: Li Xiang

Source: www.ce.cn
