
The academic use of generative AI urgently needs attention

2024-08-08   

The rapid popularization of generative artificial intelligence (AI) tools has driven explosive growth in their use for academic writing. Generative AI tools built on large language models (LLMs) can save time, reduce language barriers, and make papers clearer and more coherent. But these tools also make plagiarism more complicated. A recent report on the website of the UK journal Nature argued that the scientific community should develop clearer guidelines on whether the use of AI in academic writing constitutes plagiarism and on the circumstances in which AI may be used for writing.

A team led by data scientist Dmitry Kobak at the University of Tübingen in Germany analyzed 14 million abstracts published in the academic database PubMed between 2010 and June 2024. The team estimates that in the first half of 2024, at least 10% of biomedical paper abstracts (approximately 75,000) were written with the help of LLMs, and that the emergence of LLM-based writing assistants has had an unprecedented impact on the academic community. Meanwhile, some researchers regard AI tools as "great helpers" for academic writing: they can make text and concepts clearer, reduce language barriers, and free up more time for scientists to run experiments and think.

Plagiarism is difficult to pin down

A 2015 study estimated that 1.7% of scientists admitted to plagiarism and that 30% knew colleagues who had engaged in it. LLMs are trained to generate text by "digesting" large numbers of previously published articles, so using them can lead to plagiarism-like situations: for example, researchers passing off AI-generated papers as their own work, or machine-generated text that closely resembles someone's paper without indicating the source.

Peter Cotton, an ecologist at the University of Plymouth in the UK, pointed out that in the AI era it will become very difficult to draw the boundary between academic dishonesty or plagiarism on the one hand and the reasonable use of AI on the other. If an LLM slightly rewords human-written text, its plagiarism of that content is easily concealed, because people can prompt these AI tools to write papers in sophisticated ways, for example in the style of a particular academic journal. In a 2023 survey of 1,600 researchers, 68% of respondents said that AI will make plagiarism harder to detect.

Another core question is whether using unattributed content written entirely by a machine, rather than by a human, constitutes plagiarism. Debora Weber-Wulff, a plagiarism expert at the HTW Berlin University of Applied Sciences in Germany, said that although some generative AI texts may read much like human writing, they cannot be considered plagiarism. Soheil Feizi, an associate professor and director of the Reliable AI Lab at the University of Maryland, believes that using an LLM to rewrite the content of existing papers clearly constitutes plagiarism, but that using an LLM to help express one's own ideas, whether by generating text from detailed prompts or by editing drafts, should not be penalized if done transparently. The scientific community, he argues, should allow researchers to use LLMs to express their ideas easily and clearly.
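The scale estimate from the Kobak team cited earlier is a corpus-level inference rather than a per-paper verdict: it looks for vocabulary that appears far more often after LLMs became widespread than a pre-LLM baseline would predict. The sketch below illustrates the general idea only; the marker words and sample abstracts are hypothetical and are not the study's actual data or full method.

```python
# Minimal sketch of an excess-word-frequency estimate, loosely in the
# spirit of the analysis described above. Marker words and sample
# abstracts are hypothetical; the real study analyzed millions of
# PubMed abstracts.

LLM_MARKERS = {"delve", "pivotal", "showcasing", "underscores"}  # hypothetical

def marker_rate(abstracts):
    """Fraction of abstracts containing at least one marker word."""
    if not abstracts:
        return 0.0
    hits = sum(1 for a in abstracts if LLM_MARKERS & set(a.lower().split()))
    return hits / len(abstracts)

def excess_llm_share(baseline, recent):
    """Lower-bound share of recent abstracts with LLM-style vocabulary:
    how far their marker rate exceeds the pre-LLM baseline."""
    return max(0.0, marker_rate(recent) - marker_rate(baseline))

# Toy usage: a pre-LLM baseline (2010-2022) vs. recent abstracts (2024).
baseline = [
    "we measured gene expression in mouse liver tissue",
    "the trial compared two dosing schedules in adults",
]
recent = [
    "we delve into pivotal mechanisms of liver disease",
    "the trial compared two dosing schedules in adults",
]
print(f"estimated excess LLM share: {excess_llm_share(baseline, recent):.0%}")
```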
Many journals now have policies that allow contributors to use LLMs to some extent. The journal Science updated its policy in November 2023, stating that authors should fully disclose their use of AI technology when writing papers, including which AI systems and which prompts were used. The journal Nature likewise asks authors to document their use of LLMs. An analysis of 100 large academic publishers and 100 highly ranked journals found that, as of October 2023, 24% of the publishers and 87% of the journals had established guidelines for the use of generative AI, and almost all of these journals state that AI tools cannot be listed as authors. Weber-Wulff emphasized that scientists urgently need clearer guidelines on the use of AI in academic writing.

Detection tools urgently need improvement

While some scientists use LLMs to write academic papers, others are developing tools to detect LLM use. Although some tools achieve high accuracy, exceeding 90% in some cases, research shows that most of them are "not living up to their name." In a study published in December 2023, Weber-Wulff and colleagues evaluated 14 AI-detection tools widely used in academia. Only five achieved accuracy above 70%, and none scored above 80%. When the research team lightly edited AI-generated text by replacing words with synonyms and reordering sentences, the tools' average accuracy fell below 50%. Accuracy also drops sharply when researchers have AI rewrite human-written text multiple times. Detection tools face other problems as well: for example, English text written by non-native speakers is more likely to be misidentified as AI-generated. Feizi pointed out that detection tools cannot reliably distinguish text written entirely by AI from text that an author has merely polished with AI, and that a false accusation of AI misuse could do serious damage to a scholar's or student's reputation. (New Society)
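The synonym-substitution result reported above is easy to reproduce in miniature. In the toy sketch below, a naive marker-word detector and a trivial paraphraser (both hypothetical stand-ins, far simpler than the tools the study actually evaluated) show how lightly rewording AI-generated text collapses detection accuracy.

```python
# Toy sketch: score a naive AI-text detector before and after light
# synonym substitution. Detector, paraphraser, and samples are all
# hypothetical stand-ins for the real tools evaluated in the study.

SWAPS = {"utilize": "use", "novel": "new", "demonstrate": "show"}  # hypothetical

def naive_detector(text):
    """Flags text as AI-generated if it contains 'LLM-style' words."""
    return any(word in SWAPS for word in text.lower().split())

def light_paraphrase(text):
    """Replaces each marker word with a plainer synonym."""
    return " ".join(SWAPS.get(w, w) for w in text.lower().split())

def accuracy(samples):
    """Share of (text, is_ai) pairs the detector labels correctly."""
    return sum(naive_detector(t) == is_ai for t, is_ai in samples) / len(samples)

samples = [
    ("we utilize a novel framework to demonstrate robustness", True),
    ("we tested the method on three public datasets", False),
]
print(f"accuracy on raw AI text:     {accuracy(samples):.0%}")

paraphrased = [(light_paraphrase(t), is_ai) for t, is_ai in samples]
print(f"accuracy after paraphrasing: {accuracy(paraphrased):.0%}")
```

Run as written, the detector scores 100% on the raw samples and only 50% after the single-pass rewording, mirroring the direction of the drop the study observed.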

Editor: Xiong Dafei   Responsible editor: Li Xiang

Source: Stdaily.com
