Preventing AI security risks

2022-09-27

By sticking a "magic sticker" on your face, you can trick a face-recognition access control system into misidentifying you and opening the door for a stranger; attach the same "sticker" to a pair of glasses and point them at a phone, and you can unlock its facial recognition and browse the owner's information unimpeded... Scenes that once appeared only in science fiction have now occurred in real life. Recently, the first AI security contest was held in Beijing, and these attack-and-defense demonstrations surprised many viewers.

In recent years, China's artificial intelligence industry has grown rapidly. According to data released by the Ministry of Industry and Information Technology, the AI core industry now exceeds 400 billion yuan in scale, more than six times the figure for the same period in 2019, and the number of enterprises exceeds 3,000, up 15% over the same period in 2019. Typical application scenarios have taken shape in key industries such as manufacturing, transportation, medical care, and finance.

"As application scenarios broaden and usage grows more frequent, the scope and likelihood of AI security risks keep increasing," said Zhang Bo, an academician of the Chinese Academy of Sciences and honorary president of the Institute for Artificial Intelligence at Tsinghua University. In his view, the breakthrough of algorithms represented by deep learning opened the current wave of artificial intelligence and brought great progress in computer vision, intelligent speech, natural language processing, and many other fields. However, this second generation of data-driven AI has gradually exposed shortcomings in interpretability and robustness, leading to frequent security incidents. How to achieve a benign interaction between high-quality development and high-level safety is the most important question facing the AI industry today.
The White Paper on Artificial Intelligence (2022) issued by the China Academy of Information and Communications Technology likewise points out that as the risks and challenges of AI applications come to light and public understanding of the technology deepens, AI governance has become a topic of concern worldwide, and calls for trustworthy and secure AI are growing louder.

"AI security risks can be analyzed from two perspectives: 'people' and 'systems.' From the human perspective, the main problem is technology abuse, even 'weaponization,'" said Tian Tian, CEO of Beijing Ruilai Intelligent Technology Co., Ltd. Taking deep synthesis technology as an example, he explained that it can greatly improve the efficiency and quality of content production, but its malicious use, deep forgery, poses a growing risk and has already caused real harm, for instance when fabricated speech videos made with "AI face swapping" are used to sway public opinion. In this regard, Zhu Jun, director of the Basic Theory Research Center at Tsinghua University's Institute for Artificial Intelligence, argued that the technology itself is neutral; supervision of application scenarios and users should be strengthened to prevent derivative risks such as abuse and to keep applications controllable.

The face-recognition cracking demonstrated at the contest illustrates risk at the level of the AI system itself, which stems from weaknesses and vulnerabilities inherent in deep learning models. By adding perturbations to the input data, an attacker can make the system render wrong judgments, undermining confidence in its reliability.
The same vulnerability exists in the perception systems of autonomous vehicles: under normal circumstances, an autonomous driving system will brake immediately after recognizing obstacles, signs, pedestrians, or other targets, but once interference patterns are added to those objects, the vehicle's perception system can err and create a collision risk.
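The perturbation attacks described above can be illustrated with a minimal sketch. This is not the method used at the contest, just a toy FGSM-style example on a hypothetical linear classifier: the attacker nudges the input a small step in the direction of the model's gradient, and the decision score shifts toward the wrong answer. All weights and inputs here are illustrative.

```python
import numpy as np

def predict(w, b, x):
    # Logistic score: above 0.5 means the toy model says "authorized".
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

rng = np.random.default_rng(0)
w = rng.normal(size=8)                 # illustrative model weights
b = 0.0
x = -w / np.linalg.norm(w) * 0.5       # an input the model rejects

# For a linear model, the gradient of the score w.r.t. the input is
# proportional to w, so stepping along sign(w) raises the score.
eps = 0.3                              # small perturbation budget
x_adv = x + eps * np.sign(w)

print(predict(w, b, x))                # original score (below 0.5)
print(predict(w, b, x_adv))            # higher score after perturbation
```

The point the demonstration makes is that `x_adv` differs from `x` by at most `eps` in every coordinate, yet the model's judgment moves toward the attacker's goal; on deep networks the same gradient-following idea is what makes "magic stickers" work in the physical world.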

Editor: Li Jialang    Responsible editor: Mu Mu

Source: Xinhuanet
