Preventing AI security risks
2022-09-30
By sticking a "magic patch" on your face, you can make a facial recognition access control system misjudge you and open the door to a stranger; put the same patch on a pair of glasses and point them at a phone, and you can unlock its facial recognition and browse the owner's information unimpeded... Scenes that once appeared only in science fiction films are now happening in real life. Recently, the first AI security contest was held in Beijing, and the attack-and-defense demonstrations described above surprised many spectators.

In recent years, China's artificial intelligence industry has grown rapidly. According to data released by the Ministry of Industry and Information Technology, the scale of the AI core industry has exceeded 400 billion yuan, more than six times the level of the same period in 2019, and the number of enterprises has surpassed 3,000, 15% more than in the same period of 2019. Typical application scenarios have taken shape in key industries such as manufacturing, transportation, medical care, and finance.

"As application scenarios become ever broader and usage frequency grows rapidly, the scope and likelihood of AI security risks keep increasing," said Zhang Bo, academician of the Chinese Academy of Sciences and honorary dean of the Institute for Artificial Intelligence at Tsinghua University. He believes that the breakthrough of algorithms represented by deep learning opened the current wave of artificial intelligence and has brought considerable progress in computer vision, intelligent speech, natural language processing, and many other fields. However, this second generation of data-driven AI has gradually exposed shortcomings in interpretability and robustness, leading to frequent security incidents. How to achieve a benign interaction between high-quality development and high-level safety is the most important question now facing the AI industry.
The White Paper on Artificial Intelligence (2022) issued by the China Academy of Information and Communications Technology also pointed out that, as the various risks and challenges in AI applications come to light and public understanding of artificial intelligence deepens, AI governance has become a topic of wide concern around the world, and calls for trustworthy and secure AI are growing louder.

"AI security risks are mainly analyzed from two perspectives: 'people' and 'systems'. From the perspective of people, there is the problem of technology abuse and even 'weaponization'," explained Tian Tian, CEO of Beijing Ruilai Intelligent Technology Co., Ltd. Taking deep synthesis technology as an example, he noted that while the technology can greatly improve the efficiency and quality of content production, the risk of its negative application, deep forgery, is rising and has already caused real harm, for instance when fabricated videos of false statements produced by "AI face swapping" are used to steer public opinion. In this regard, Zhu Jun, director of the Basic Theory Research Center of the Institute for Artificial Intelligence at Tsinghua University, believes that technology itself is neutral; supervision of application scenarios and users should be strengthened to prevent derivative risks such as technology abuse and to keep applications controllable.

The facial recognition cracking demonstrated at the contest illustrates risk at the AI system level, which stems from the weaknesses and vulnerabilities of the deep learning model itself: by adding small perturbations to the input data, an attacker can cause the system to render wrong judgments, making its reliability hard to trust.
The same vulnerability exists in autonomous driving perception systems. Under normal circumstances, an autonomous driving system brakes the vehicle immediately after recognizing obstacles, signs, pedestrians, and other targets; but once interference patterns are added to those objects, the perception system can make errors, creating a collision risk.
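The attack described above is what the research literature calls an adversarial example: a small, deliberately crafted perturbation is added to an input so that the model's decision flips even though the input looks essentially unchanged. The following is a minimal sketch of the idea using a hypothetical toy linear classifier and a fast-gradient-sign-style perturbation; the model, weights, and threshold are illustrative assumptions, not any real perception system.

```python
import numpy as np

# Hypothetical toy "perception" model: score = w.x + b,
# positive score means "obstacle detected", otherwise "clear road".
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Return 1 ("obstacle") if the linear score is positive, else 0."""
    return 1 if w @ x + b > 0 else 0

# A clean input that the model correctly classifies as an obstacle.
x = np.array([2.0, 0.5, 1.0])

# FGSM-style perturbation: for a linear model, the gradient of the
# score with respect to the input is simply w, so stepping each
# component against sign(w) pushes the score toward the other class.
eps = 1.2
x_adv = x - eps * np.sign(w)

print(predict(x))      # clean input -> 1 ("obstacle")
print(predict(x_adv))  # perturbed input -> 0 ("clear road")
```

The perturbation budget `eps` bounds how much each input component may change; on real image models the same principle applies pixel-wise with a budget small enough that the change is invisible to a human, which is why such flaws are hard to spot by inspection.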
Editor: Li Ling  Responsible editor: Chen Jie
Source: Economic Daily