By The European Business Review Editorial Team
With ChatGPT credited as one of 12 authors on a preprint about medical education on the medical repository MedRxiv, the artificial intelligence (AI) chatbot has taken the world by storm.
Artificial intelligence has come a long way in recent years, with breakthroughs arriving almost daily. One of the most notable advances in this field is the development of language models, chief among them ChatGPT, an AI language model created by OpenAI. This model can generate human-like responses to text inputs and has already made a significant impact on several fields, including science and academic journals.
The development of ChatGPT has been a significant milestone in AI, as it can understand and generate coherent and meaningful text based on the input provided. This makes it a highly versatile tool that can be used in a variety of applications, from customer service to creative writing.
The Application of ChatGPT in Published Journals
Using ChatGPT to summarise scientific articles reduces the processing time needed to deal with large amounts of data. This has been especially useful in information-driven industries such as medicine. New York Medical College released a statement saying, “ChatGPT has the potential to streamline processes, increase efficiency, and reduce costs in healthcare. Automating routine tasks with ChatGPT frees up medical staff to focus on complex tasks, improving overall efficiency and effectiveness.”
Alex Zhavoronkov, chief executive of Insilico Medicine, recently credited ChatGPT as a co-author of a perspective article in the journal Oncoscience. He says that his company has published more than 80 papers produced by generative AI tools, and argues that ChatGPT wrote a much better article than previous generations of generative AI tools had. Zhavoronkov elaborates, “[…] ChatGPT provided the pros and cons for the use of Rapamycin considering the preclinical evidence of potential life extension in animals. This article demonstrates the potential of ChatGPT to produce complex philosophical arguments.”
By automating the process of summarising articles, ChatGPT can help reviewers quickly identify relevant articles, saving them time and effort. It’s a powerful tool that has made a big impact on the field of artificial intelligence and has the potential to revolutionise many other fields as well. Its ability to understand and generate text has made it an invaluable tool for anyone who applies it properly and ethically.
A Word of Caution on the Misuse of ChatGPT
While useful, it’s worth noting that ChatGPT is also capable of propagating numerous errors in scientific studies. This prompts the need for open-source alternatives whose workings can be scrutinised more transparently. “First and foremost, ChatGPT lacks the ability to truly understand the complexity of human language and conversation,” writes Tyler Comrie in The Atlantic. Oxford University has cautioned students against its use in assessed work and exams. Cherwell, the university’s oldest student newspaper, released an issue addressing the use of the language tool. It notes that the use of AI tools is a “serious disciplinary offence which constitutes cheating and is covered under existing regulations”, adding that “further guidance to students will be issued soon.”
There is also an element of data bias that researchers should consider before crediting the LLM. ChatGPT was trained on more than 300 billion words, or roughly 570 GB of data, according to research reported by Insider. A well-functioning AI of this kind requires a massive amount of data, and a large portion of that information originates from the internet and was created by people who may be biased. Prejudice is infused into the AI system in this way, according to Boris Ruf, a research scientist specialising in algorithmic fairness.
Transparency on Authorship and Acknowledgement
At the time of writing, at least four articles credit the AI tool as a co-author as publishers scramble to regulate its use. The debate around the use of ChatGPT and its effect on the transparency of authorship will likely depend on how the technology is used and the policies and guidelines that are put in place to ensure that it is used in a responsible and ethical manner. It’s essential for organisations and institutions to be proactive in addressing the potential risks associated with this technology and to put measures in place to ensure that it is used in a way that enhances, rather than undermines, the transparency of authorship.
The European Business Review’s Editorial Policy on the Use of ChatGPT
The European Business Review requires authors to disclose the use of ChatGPT in their submissions and to be transparent about their use of other AI-powered tools that directly affect their writing. Only human individuals who have made significant contributions to the work may claim authorship. The corresponding author must have obtained approval from all authors for the submission of each version of the article and for any change in authorship. Software that uses artificial intelligence (AI), including but not limited to ChatGPT, must be acknowledged in the manuscript’s acknowledgements section and must not be listed as an author.