Artificial intelligence fraud: can it be cured?
2024-10-12
Seeing is no longer necessarily believing, and an image is not necessarily the truth. With the continuous development of AI technology, AI synthesis is no longer limited to simple tasks such as swapping faces or generating audio. It can now deeply synthesize faces, voices, and gestures, and even support real-time video calls with synthetic characters. At the same time, the technical threshold for AI synthesis keeps falling, putting the tools within easy reach of ordinary people. How can new types of fraud that exploit AI deepfake technology be prevented? How can governance steer AI technology toward good? News 1+1 focuses on the question: can artificial intelligence fraud be cured?

How should responsibility for AI-enabled fraud be allocated?

Xu Ke, Director of the Digital Economy and Legal Innovation Research Center at the University of International Business and Economics: Many entities play a role in the development and use of deepfake technology: technology developers, distributors, and users. The person who uses the technology to commit fraud is certainly the first party responsible, but within the broader ecosystem, technology developers, software distribution platforms, and those who spread the relevant information may also bear a share of responsibility.

How can a governance synergy be formed? Two different types of responsibility should be distinguished here. One is the responsibility of the perpetrator, such as those who use artificial intelligence to commit fraud. The other is platform responsibility: one end of the platform connects to users and the other end connects to providers of AI technology, so the platform must fulfill its management responsibilities on both sides.
Once illegal or irregular behavior is discovered, the platform should take active measures against both the publishers of the content and the providers of the software, so as to keep the entire platform secure and well governed.

How does the law deal with AI fraud?

Xu Ke, Director of the Digital Economy and Legal Innovation Research Center at the University of International Business and Economics: The development of artificial intelligence has really only just begun, and it has not been used for illegal and criminal activity for long. China's laws have responded in a timely manner. Once a law is introduced, what is needed next is further implementation and enforcement; these new systems are still at the stage of establishing rules and regimes. It will take the joint efforts of all parties: regulators must continuously apply the detailed rules of laws and regulations to every actor, and through comprehensive social governance, individuals should come to know and assert their rights and interests, while enterprises should know and practice their responsibilities and obligations. This will be a gradual, evolving process.

In September, the Cyberspace Administration of China released the draft "Measures for Labeling Artificial Intelligence-Generated Synthetic Content (Draft for Comments)". Can this curb "face-swapping" fraud?

Xu Ke: The biggest problem in the many scams that use AI deepfake technology is that the victims believe the fake videos are real. The draft Measures require that videos, audio, and images generated by artificial intelligence be clearly labeled.
That way, anyone who sees such content knows it is AI-generated. From our current perspective, governance achieved through this kind of information disclosure is a very effective measure.

How can elderly people guard against "AI face-swapping" fraud?

Xu Ke, Director of the Digital Economy and Legal Innovation Research Center at the University of International Business and Economics: This is a matter of raising awareness and literacy. Because of information and technology gaps, many elderly people do not even know that a technology exists that can make the fake look real. Raising their awareness requires media, educational institutions, and research institutes to work together to popularize this knowledge and to tell them that the authenticity of information must be carefully screened. On the other hand, many elderly people get information not only from society but also from their families. Children should communicate more with their parents and elders, tell them about these fraud cases, and put them on alert. Through the joint efforts of society and family, awareness among the elderly can be raised. (China News Service)
Editor: Rina  Responsible editor: Lily