Artificial intelligence: an urgent need to strengthen international ethical norms

2022-11-11

At present, a new round of scientific and technological revolution and industrial transformation is in full swing, and artificial intelligence has become a key word among emerging technologies. In the military field, intelligentization is regarded as the next trend in equipment development after mechanization and informatization, and AI technologies such as facial recognition have gradually appeared in regional armed conflicts. At the same time, more and more people worry about the moral and ethical risks arising from the military application of artificial intelligence. How to regulate the military application of AI at the ethical level, and ensure that AI technology does not deviate from the principle of "intelligence for good", has become an important frontier issue for all countries.

The ethical problems raised by the military application of artificial intelligence have a long history. At their core is the question of how to view and handle the relationship between humans and machines. In recent years, autonomous weapon systems such as robotic dogs carrying automatic rifles and robotic sentries on border patrol have gradually come into public view, and some scholars have even begun to discuss military applications of the metaverse, yet the relevant legal and ethical rules are far from complete. How can such new weapons be prevented from killing civilians indiscriminately? Does an autonomous weapon system require human authorization before firing? Who should bear responsibility for the consequences of using such weapons? As the United States and other Western countries accelerate the development of unmanned aerial vehicles, deepfakes, social robots and other technologies, the international community has grown increasingly worried about the ethical risks of military AI applications.

In fact, although the current international law system does not impose rigid constraints specifically on AI technology, the application of AI in the military and other fields is not unrestricted. Under international humanitarian law, before employing new weapons, states must assess whether they conform to international law and whether they follow the principles of necessity, distinction and proportionality. Weapon systems equipped with artificial intelligence modules must also meet these requirements, but they face many difficulties in practice. At the current level of technology, the "black box" nature of AI means that its outputs are often difficult to explain; once an accident occurs, it is easy to fall into a dilemma of accountability. As weapons become significantly more intelligent and human-machine integration technologies such as brain-computer interfaces and mechanical exoskeletons continue to emerge, the relationship between humans and machines faces fundamental change. Future machines may even replace humans as the actual decision-makers and executors of military action, triggering a deeper ethical crisis. Some even predict that if the military application of artificial intelligence is not constrained in time, the killer robots of the science-fiction film Terminator may become a reality.

To address the problem of AI governance, China is taking action. As China joins the ranks of innovative countries, the international community generally expects China to contribute wisdom and solutions to the construction of a global AI governance system, promote exchanges and cooperation, share experience, and help AI technology better enable global sustainable development. China attaches great importance to AI governance, consistently adheres to the concept of being "people-oriented" and pursuing "intelligence for good", and has issued guidance documents such as the New Generation AI Governance Principles and the New Generation AI Ethics Code. At the beginning of this year, the Chinese government issued the Opinions on Strengthening Ethical Governance in Science and Technology, which put forward the five basic principles of "promoting human well-being, respecting the right to life, adhering to fairness and justice, reasonably controlling risks, and maintaining openness and transparency", and emphasized strengthening ethical governance in cutting-edge fields such as artificial intelligence.

Editor: wangwenting    Responsible editor: xiaomai

Source: china.cn
