Institutional norms and constraints need to be established for AI technology
2023-12-07
Generative artificial intelligence technology, represented by ChatGPT, has inevitably made its way into the field of writing on the strength of its powerful text-generation capabilities. While many are still marveling at the fluent structure and precise expression of AI-written articles, some have already become adept at using ChatGPT as a tool for writing academic papers. Recently, Gu Jun, Director of the Supervision and Evaluation Department of the Jiangsu Provincial Department of Science and Technology, said as a guest on the "Political Conduct Hotline" program of Jiangsu Radio and Television's news channel that the department will organize self-inspections and spot checks of papers publicly published by scientific and technological personnel, and conduct science and technology ethics reviews to guide researchers in strengthening their awareness of research integrity.

So, can generative artificial intelligence technologies such as ChatGPT be used to write academic papers? How can academic misconduct arising from the abuse of AI technology be prevented? A Science and Technology Daily reporter interviewed staff from the relevant regulatory departments and scholars in the field.

The risk of academic misconduct cannot be ignored

ChatGPT can generate coherent and logically structured articles and brings convenience to people's work and lives, but the resulting risk of academic misconduct cannot be ignored. Researchers abroad have found that ChatGPT's answers are drawn from the massive data its model was trained on, which can sometimes constrain people's thinking, and some answers may even be inaccurate, for example selecting only the data that supports one's own viewpoint as evidence while ignoring other data.

On December 1st, the Measures for Science and Technology Ethics Review (Trial) officially came into effect, but after searching the text the reporter found no clauses specifically addressing generative artificial intelligence. At present, relevant departments and research institutions in China have not yet issued detailed rules on the use of ChatGPT in paper writing. However, regardless of the method used, any academic misconduct such as plagiarism or falsification will be strictly investigated and punished.

On November 24th, the National Natural Science Foundation of China announced the results of its investigation and handling of the second batch of misconduct cases in 2023. The most common forms of misconduct were plagiarism and the forgery of various kinds of information, involving 15 people, or 62.5% of the total.

The reporter learned that some provinces in China have carried out targeted spot checks of scientific research papers, focusing on academic misconduct such as plagiarism, falsification, and duplicate publication. Although these checks do not specifically target ChatGPT, such problems may be related to it. The person in charge of the Supervision and Evaluation Department of the Jiangsu Provincial Department of Science and Technology said the department will further strengthen science and technology ethics review and, at the same time, require all units to conduct self-examination and clean-up of academic misconduct in their published academic papers. The science and technology department regularly organizes proactive spot checks and continually updates its procedures and standards for handling academic misconduct.
The person in charge stated that preventing academic misconduct arising from the abuse of AI technology will be one of the key regulatory priorities of science and technology administrations going forward. Domestic research institutions are currently developing plagiarism-detection software for AI-written papers, so it will become increasingly difficult to use ChatGPT to write papers and thereby obtain honors, awards, or research projects.

How to face AI technology that cannot be avoided

Professor Li Qianmu of Nanjing University of Science and Technology has long been engaged in research on artificial intelligence system security and big data mining. In his view, when people are talking about the emergence of AI technology
Editor: Hu Senming    Responsible editor: Li Xi
Source: XinhuaNet