Tai-Lin Lee, Julia Ding, Hari M Trivedi, Judy W Gichoya, John T Moon, Hanzhou Li
J Am Coll Radiol. 2024 Apr;21(4):678-682. doi: 10.1016/j.jacr.2023.08.001. Epub 2023 Aug 7.
Large language models (LLMs) have gained widespread use since the release of the LLM-based chatbot ChatGPT (OpenAI, San Francisco, California) on November 30, 2022. As in other industries, LLMs have been applied in academia for various use cases, such as creating outlines and abstracts, summarizing articles, reviewing and editing manuscripts, drafting cover letters, and even writing parts of or entire manuscripts de novo [1, 2]. However, the publication of the first ChatGPT co-authored article by Kung et al in 2022 [3] sparked vigorous discussion about whether an LLM can qualify for authorship in the scientific literature [4]. Publishers have taken differing stances on the use of LLMs in academic writing: Nature and Taylor & Francis, for example, accept the use of LLMs under specific documented circumstances, whereas others such as Science and arXiv consider it a form of plagiarism and disapprove of the practice [4]. This discrepancy highlights the concerns surrounding the ethics and appropriateness of artificial intelligence (AI)-generated content in academic literature.