Reviewers may struggle to identify paper abstracts written by AI

2023-01-17

According to research recently published on the preprint server bioRxiv, the artificial intelligence (AI) chatbot ChatGPT can write abstracts for fake research papers convincing enough that even reviewers may fail to flag them. ChatGPT generates realistic text in response to user prompts; it learned to perform such tasks by digesting vast amounts of existing human-written text. OpenAI, an American software company, released the tool last November and made it available to users free of charge. Since its release, researchers have been grappling with the ethical problems it raises, because much of its output can be difficult to distinguish from human-written text.

Recently, a research team at Northwestern University in the United States used ChatGPT to generate abstracts for fabricated research papers in order to test whether scientists could identify them. The researchers asked the chatbot to write abstracts based on 50 studies published in the Journal of the American Medical Association, the New England Journal of Medicine, the British Medical Journal, the Lancet, and Nature Medicine. They then ran the generated abstracts and the originals through a plagiarism detector and an AI-output detector, and asked a group of medical researchers to pick out the fabricated ones.

The abstracts generated by ChatGPT passed the plagiarism checker, with a median originality score of 100%, indicating that no plagiarism was detected. The AI-output detector flagged 66% of the generated abstracts. The human reviewers correctly identified only 68% of the generated abstracts and 86% of the genuine ones; they mistook 32% of the generated abstracts for real and 14% of the real abstracts for fabrications. The researchers concluded that ChatGPT can write scientific abstracts capable of deceiving humans, and warned that if scientists cannot determine whether research is genuine, there could be "dire consequences".
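The reviewer figures above are the two complementary error rates of a single binary classification task (treating "generated" as the positive class). A minimal sketch, using only the percentages reported in the article (the underlying sample sizes are not restated here), shows how the 32% and 14% figures follow from the 68% and 86% accuracies:

```python
# Illustrative only: treat the human reviewers as a binary classifier,
# with "AI-generated" as the positive class. Only the rates reported in
# the article are used; no sample counts are assumed.

sensitivity = 0.68  # generated abstracts correctly flagged as generated
specificity = 0.86  # real abstracts correctly recognized as real

false_negative_rate = 1 - sensitivity  # generated abstracts mistaken for real
false_positive_rate = 1 - specificity  # real abstracts mistaken for generated

# These complements match the article's figures of 32% and 14%.
assert round(false_negative_rate, 2) == 0.32
assert round(false_positive_rate, 2) == 0.14

# Balanced accuracy summarizes both error types in one number.
balanced_accuracy = (sensitivity + specificity) / 2
print(f"Balanced accuracy: {balanced_accuracy:.2f}")  # 0.77
```

A balanced accuracy of 0.77 means the reviewers performed well above chance (0.5) but still let roughly one in three AI-generated abstracts through.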
The researchers said that journals need to adopt more rigorous methods to verify the accuracy of information in fields such as medicine, where false information can endanger lives. The solution, they argued, should not focus on the chatbot itself but on "the perverse incentives that lead to such behavior, such as hiring and promotion reviews based on the number of papers, without regard for their quality or impact". (Reporter: Zhang Mengran)

[Editor's note] ChatGPT is not merely a chat tool; its spread into education has prompted widespread attention and discussion, along with no small amount of worry. Anyone who has used ChatGPT will have been struck by its ability to write reports: given a topic, it can produce an outline in seconds and flesh it out just as quickly, and the result looks plausible. This time, researchers found that the abstracts it wrote could even deceive reviewers. This is another classic ethical question about how to use a new technology: the tool itself is blameless, but the boundaries of its application should be clear, and users must take responsibility for their own actions. (Outlook New Era)

Editor: sishi    Responsible editor: xingyong

Source: http://digitalpaper.stdaily.com/

