Title: Is ChatGPT a Silver Bullet for Scientific Manuscript Writing?
Author: Faaiz Ali Shah
Journal: Journal of Postgraduate Medical Institute (JCR Q4, Medicine)
Publication date: 2023-03-01
Publication type: Journal Article
DOI: https://doi.org/10.54079/jpmi.37.1.3219
Citations: 1
Abstract
ChatGPT cannot replace human medical writers because it lacks their expertise and depth of understanding. It can generate text that sounds convincing or plausible yet may be nonsensical or incorrect; this phenomenon, common among language models, is termed "hallucination". At present, ChatGPT cannot provide reliable citations or references, and it tends to overuse certain phrases. Moreover, errors and biases in its output cannot be ruled out. Articles generated with ChatGPT are not free of plagiarism and must be checked and corrected. Because ChatGPT relies on previously stored data, it may repeat existing text, which can result in a lack of innovation, creativity, and originality. Another shortcoming is that ChatGPT cannot distinguish fact from fiction or identify unreliable information. Consequently, its potential for misuse cannot be overlooked, as ChatGPT can be tricked and exploited.