When large models land, running fast requires running steadily
2024-07-15
Even hotter than the Shanghai summer is the 2024 World Artificial Intelligence Conference and High-Level Meeting on Global Governance of Artificial Intelligence (WAIC 2024). Offline attendance has exceeded 300,000 visitors, a record high. Notably, the debuts at WAIC 2024 involve not only model updates but also applications, platforms, and systems, and industry and audience attention is increasingly focused on areas closely tied to model deployment, such as interactive experiences and business models.

A widely shared concern is that as the capabilities of large models continue to grow, their security, reliability, and controllability face mounting challenges. In particular, given industry users' requirements for legality, compliance, precision, and controllability, the data-security and hallucination problems inherent in large models cannot be avoided.

Chang Yongbo, Director of the Artificial Intelligence Division of the East China Branch of the China Academy of Information and Communications Technology (CAICT), said that application value and application security are the two wings of large model development. Large models have entered a period of rapid iteration; while actively exploring deployment, model vendors should also pay close attention to the security requirements of industry application scenarios.

The inherent flaws of the technology cannot be ignored

Relying on huge parameter scales, massive training data, and powerful computing resources, large models, the most prominent branch of artificial intelligence today, have demonstrated abilities beyond humans in multiple fields.
Many fields, including finance, healthcare, education, government affairs, and manufacturing, are actively exploring secure application paradigms for large models to address their security risks. Chang Yongbo noted that with the deepening application of large models, industry, academia, and research institutions are all strengthening work on the security threats facing large models and on defense technology systems. Building on the existing framework for trustworthy AI governance, improving the robustness, interpretability, fairness, and truthfulness of large models has become a hot research topic, and the maturing of security evaluation and defense technologies is effectively safeguarding their development.

At WAIC 2024, the white paper Large Model Security Practice (2024), jointly written by Tsinghua University, Zhongguancun Laboratory, Ant Group, and other institutions, was officially released. The white paper shows that large model technology has inherent shortcomings, including unreliable generated content, uncontrollable capabilities, and exposure to external security risks.

"Hallucination is currently a problem that large models find hard to solve," Chang Yongbo said. While following grammatical rules, a model may generate false or meaningless information. The phenomenon stems from the probabilistic way large models produce output: the model can be overconfident about uncertain predictions, fabricating erroneous or non-existent facts and undermining the credibility of the generated content.

Emergence is another notable effect of large models. It can give models outstanding capabilities, but it is also sudden, unpredictable, and hard to control. In addition, the fragility and vulnerability of large models make external security risks difficult to eliminate.
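The probabilistic failure mode described above can be illustrated with a toy decoding step. The token set and scores below are invented purely for illustration and are not drawn from any real model: the point is only that greedy decoding confidently emits whichever continuation carries the most probability mass, regardless of factual truth.

```python
import math

# Hypothetical next-token scores for the prompt "The capital of Australia is".
# The numbers are made up: they model a system that has seen "Sydney" near
# "Australia" more often, so probability mass favors a plausible-but-wrong fact.
logits = {"Sydney": 2.0, "Canberra": 1.2, "Melbourne": 0.3}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    z = sum(math.exp(v) for v in scores.values())
    return {k: math.exp(v) / z for k, v in scores.items()}

probs = softmax(logits)

# Greedy decoding picks the single most probable token; the model reports
# high confidence even though the chosen continuation is factually wrong.
prediction = max(probs, key=probs.get)
confidence = probs[prediction]
```

In this sketch the model "hallucinates" with roughly 60% confidence, which is why probability-based generation alone cannot guarantee truthful output.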
Industry data show that as large model technology develops rapidly, related network attacks are also increasing. Confronting the risks involved in building secure, reliable, and controllable large models is a new and unavoidable issue for regulators, academia, and industry.

In recent years, policies and regulations such as the Administrative Provisions on Recommendation Algorithms for Internet Information Services, the Administrative Provisions on Deep Synthesis of Internet Information Services, the Interim Measures for the Management of Generative Artificial Intelligence Services, and the Measures for the Review of Scientific and Technological Ethics (for Trial Implementation) have established the basic framework for artificial intelligence governance in China. These policies and regulations adhere to the principle of balancing development and security, strengthen the prevention and control of technology-ethics risks, and set requirements for large model security covering technology development and governance, service standards, supervision and inspection, and legal responsibility.

The white paper proposes building a governance framework for large model security that integrates government supervision, ecosystem cultivation, enterprise self-discipline, talent development, and testing and verification. On the regulatory side, Chang Yongbo said that agile governance is becoming a new governance model: flexible, fluid, and adaptive, it advocates participation by multiple stakeholders and enables rapid response to environmental change.
When implementing governance strategies, flexible ethical norms must be combined with mandatory laws and regulations to establish a sound governance mechanism that balances innovation and safety while containing the risks of large models.

To maximize the effectiveness of large models in practical applications and guard against potential risks and abuse, their construction usually focuses on three dimensions: security, reliability, and controllability. Wang Weiqiang, Chief Scientist of Ant Group's Security Laboratory, explained that security means protecting the model at every stage, preventing any unauthorized access, modification, or contamination, and keeping the AI system free of vulnerabilities and resistant to induced misuse. Reliability requires large models to deliver accurate, consistent, and truthful results across contexts, which is especially important for decision-support systems. Controllability concerns whether humans can understand and intervene in the model's results and decisions, so that they can adjust and steer it as needed.

Wang Weiqiang singled out the currently much-watched agent. Agents, he said, are the key path for putting large models into practice, but complex agent systems further expand the models' risk exposure. At present, methods such as RAG (Retrieval-Augmented Generation), instruction following, and knowledge graph embedding can effectively improve the controllability and accuracy of model outputs.

Working together to promote the healthy development of artificial intelligence

"Currently it is almost impossible to make large models completely error-free, but it is possible to reduce the probability of errors and weaken their harm," Chang Yongbo said, adding that security governance requires the joint efforts of industry, academia, and research.
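The RAG idea mentioned above can be sketched in a few lines. This is a simplified assumption-laden illustration, not any vendor's implementation: the knowledge base is invented, word-overlap scoring stands in for the embedding similarity a production retriever would use, and the final generation call is omitted. The point is that constraining the model to retrieved reference text is what makes the output more controllable and verifiable.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG): ground an answer
# in retrieved reference text instead of the model's parametric memory.
# The documents below are illustrative placeholders, not a real corpus.
KNOWLEDGE_BASE = [
    "The white paper on large model security was released at WAIC 2024.",
    "Agents expand the risk exposure of large model systems.",
    "Knowledge graph embedding can improve output accuracy.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (a crude stand-in
    for embedding similarity in a real retriever)."""
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Prepend retrieved context and instruct the model to stay within it;
    this constraint is the source of the controllability gain."""
    refs = "\n".join(f"- {c}" for c in context)
    return f"Answer using ONLY these references:\n{refs}\n\nQuestion: {query}"

query = "What risks do agents bring to large models?"
context = retrieve(query, KNOWLEDGE_BASE)
prompt = build_prompt(query, context)  # this prompt would be sent to the model
```

In practice the retriever, prompt template, and generation step all vary by system, but the retrieve-then-constrain structure is the common core.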
CAICT has carried out a series of standards and evaluation studies, and leading vendors are accelerating the construction of their own security and governance systems. Zhao Zhiyuan, head of security content intelligence at Ant Group, shared relevant experience. On the one hand, before a large model product goes into application, an enterprise needs to run comprehensive evaluations, deploy targeted defenses against the security issues exposed, and enforce strict entry control; after the product enters the market, it should continuously monitor potential risks and hazards and carry out technical remediation and improvement. On the other hand, security technology usually lags behind model technology, so industry research needs to stay ahead of the curve.

"We spent a long time exploring how to build risk suppression for visually generated content on top of security knowledge. After the multimodal large model was released, we integrated this technology into the multimodal base to reduce the proportion of risky content generated," Zhao Zhiyuan said. Ant Group has built "Ant Tianjian" 2.0, an integrated large model security solution for industrial-grade applications, forming an evaluation-and-defense technology chain that includes large model infrastructure evaluation and large model "X-ray" evaluation; it has been applied across the full AI application process in professional scenarios such as finance, government affairs, and healthcare.

Chang Yongbo said that the threshold for deploying large models is falling significantly, while many small and medium-sized enterprises have weak model security governance capabilities, some not even meeting basic compliance requirements.
Solving these problems requires further regulatory guidance and the opening up of leading vendors' capabilities. "We have open-sourced the evaluation capability framework of Ant Tianjian, and in the future we will share more detection capabilities and risk intelligence on the platform, which can adapt to a variety of models. We hope the open capabilities we provide help the large model industry keep developing healthily," Wang Weiqiang said. Model vendors are closest to users and can detect security risks in a timely manner; by maintaining positive communication and interaction with regulators, they can help ensure the safe deployment of large models.

Li Qi, a tenured associate professor at Tsinghua University, believes that secure application of large models is an emerging field and that research and application are still in their infancy. As new practices accumulate, related technologies will continue to upgrade, building a high-value reference system for a paradigm of large model security practice.

Artificial intelligence governance is a global issue. The Shanghai Declaration on Global Governance of Artificial Intelligence, released at the WAIC 2024 opening ceremony, attaches great importance to AI security. The declaration stresses viewing problems from a development perspective, using AI technology to prevent AI risks under human decision-making and supervision, and improving the technological capabilities of AI governance. It calls for developing and adopting ethical guidelines and norms for AI with broad international consensus, guiding the healthy development of AI technology, and preventing its misuse, abuse, or malicious use. (Xinhua)
Editor: Xiong Dafei | Responsible editor: Li Xiang
Source: Xinhuanet