What is ChatGPT?
ChatGPT (Chat Generative Pre-trained Transformer) is a large language model (LLM)-based artificial intelligence (AI) released by OpenAI in November 2022, trained on a massive dataset of articles, websites, books, and written conversations. ChatGPT is designed to understand, interpret, and generate human language, simulating human interaction and responding to prompts in a natural, conversational way.
The launch of ChatGPT was met with a flurry of excitement and the publication of several studies to determine its strengths and potential applications in healthcare and clinical research, as well as its limitations, risks, and the consequences of widespread, unchecked use. The prevailing tone is cautious optimism, as researchers raise ethical concerns, the risk of artificial hallucinations, and uncertainty about how ChatGPT will perform in real-world situations, among other issues. In addition, Arif et al. (2023) cite a lack of critical thinking and the inclusion of redundant information as reasons why scientific experts and journals are reluctant to accept ChatGPT-generated content.
Can ChatGPT or GPT-4 help with drug development? We’ll break it down in this article and discuss how large language models are changing drug discovery and development.
The Application of ChatGPT in Drug Development
Medical institutions and contract research organizations (CROs) can transform the landscape of clinical research by leveraging AI algorithms and machine learning. The application of AI models opens up opportunities to analyze massive amounts of unstructured data, expanding data-based research and simplifying the research process. To support drug development, ChatGPT can potentially predict drug-target interactions, speed up the identification of potential drug candidates, and help identify opportunities for drug repurposing.
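As a hedged illustration of how a team might put this to work, the sketch below builds a drug-repurposing prompt for a chat model and parses a numbered-list reply. The target, indication, and mock reply are hypothetical placeholders rather than real model output, and any genuine suggestions would still need expert validation before use.

```python
# Illustrative sketch only: prompt scaffolding for asking a chat model
# for drug-repurposing hypotheses. Names and the mock reply below are
# invented placeholders, not real model output.

def build_repurposing_prompt(target: str, indication: str, n: int = 5) -> list:
    """Compose a chat-style message list asking for repurposing candidates."""
    system = (
        "You are a cautious drug-discovery assistant. Only suggest approved "
        "drugs with published evidence of activity against the stated target, "
        "and state the mechanism for each suggestion."
    )
    user = (
        f"List {n} approved drugs that inhibit {target} and might be "
        f"repurposed for {indication}. Return a numbered list: "
        "drug name - mechanism."
    )
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]

def parse_numbered_list(reply: str) -> list:
    """Extract 'drug - mechanism' pairs from a numbered-list reply."""
    candidates = []
    for line in reply.splitlines():
        line = line.strip()
        if line and line[0].isdigit():
            body = line.split(".", 1)[-1].strip()
            name, _, mechanism = body.partition(" - ")
            candidates.append({"drug": name.strip(),
                               "mechanism": mechanism.strip()})
    return candidates

# Example with a mocked reply in the requested format (not real output).
mock_reply = "1. DrugA - JAK inhibition\n2. DrugB - TYK2 binding"
candidates = parse_numbered_list(mock_reply)
print(candidates[0]["drug"])  # → DrugA
```

The parsing step matters in practice: structuring the reply up front (a numbered "drug - mechanism" list) makes the model's output machine-checkable, so obviously malformed answers can be flagged before anyone acts on them.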
Generation of novel hypotheses
Dahmen et al. (2023) suggest that ChatGPT can assist in creating novel hypotheses and generating new ideas for research. ChatGPT can swiftly and (mostly) accurately analyze large amounts of data, which can provide novel insights into a therapeutic area. Studying this potential, Lahat et al. (2023) found that LLMs may help identify research priorities but still fall short at identifying novel research questions.
Xue et al. (2023) found that AI models may help in clinical decision support, clinical trial recruitment, clinical data management, research support, and other areas. AI has image recognition capabilities for drug discovery to identify, classify, and describe chemical formulas or molecular structures. The authors see a role for ChatGPT in providing a more objective and evidence‐based approach to decision‐making and reducing the risk of human error.
ChatGPT also appears helpful in scientific writing, though Salvagno et al. (2023) caution that it should not replace human judgment and that experts should review study findings before they contribute to any critical decision-making.
ChatGPT may expedite the dissemination of study findings by presenting information in understandable language for the general public.
Limitations and Risks of Using ChatGPT for Drug Discovery
By OpenAI’s own admission, ChatGPT has limitations:
- ChatGPT’s output can be incorrect or biased, e.g., citing non-existent references or perpetuating sexist stereotypes.
- Erroneous ChatGPT outputs used to train future iterations of the model will be amplified.
- Inaccuracies in ChatGPT outputs could fuel the spread of misinformation.
- Errors and plagiarized content introduced into publications may, in the long run, negatively impact research and health policy decisions.
- Users can circumvent the guardrails OpenAI has set up to minimize these risks.
Limitations and failure to disclose them
Cascella et al. (2023) found that in response to a prompt to write a structured journal abstract, ChatGPT was limited by its inability to perform statistical analyses. Of particular concern, after running different simulations, ChatGPT did not advise on its limitations unless expressly asked.
Other studies illustrated ChatGPT’s ability to generate a patient discharge summary and to simplify radiology reports; however, errors were found in both outputs.
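A minimal sketch of a safeguard this finding suggests: before a human expert reviews a "simplified" report, a crude automated screen can flag output that still reads like the original technical text. The thresholds below are illustrative, not clinically validated, and no heuristic like this can catch the factual errors those studies found; only an expert reader can.

```python
# Illustrative pre-screen for ChatGPT-simplified text. Thresholds are
# made-up examples; this does not detect factual errors, only prose
# that is still too dense to count as "simplified".
import re

def crude_readability(text: str) -> dict:
    """Very rough proxy: average sentence length and share of long words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    long_words = [w for w in words if len(w) >= 10]
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "long_word_ratio": len(long_words) / max(len(words), 1),
    }

def needs_rereview(text: str) -> bool:
    """Flag output that still reads like the original technical report."""
    stats = crude_readability(text)
    return stats["avg_sentence_len"] > 25 or stats["long_word_ratio"] > 0.2

simplified = "The scan shows no broken bones. Your lungs look clear."
print(needs_rereview(simplified))  # → False
```

A screen like this only routes work: flagged output goes back for another pass, but everything, flagged or not, still ends with human review.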
A recent article studied the feasibility of ChatGPT in clinical and research scenarios. The authors found that ChatGPT could plausibly be misused in the following ways:
- To fabricate research data to meet funding or publication requirements.
- To make diagnoses or treatment recommendations without proper validation or oversight.
- To generate misinformation.
- To plagiarize.
- To generate data analysis that is not aligned with the actual data collected or the user’s stated purpose.
Training bias is a critical technical issue, as ChatGPT’s output depends entirely on the data its underlying algorithms were trained on, and ChatGPT cannot update its training data in real time.
In addition to data privacy and security issues, incorrect or misleading information may cause harm to patients. In some conversations, ChatGPT can only offer general, vague answers.
Clinical reasoning and critical thinking
There is rising concern that ChatGPT can easily be used for scientific writing that lacks both clinical sense and critical thinking. Researchers stress the need for an intellectual human mind, along with policies to cross-check data generated by AI systems.
Issues in scientific writing
- Salvagno et al. (2023) highlight ethical issues, namely the risk of plagiarism and inaccuracies and potential imbalance if free access to the software is halted. The authors advise that ChatGPT should not replace human judgment and that experts continually review manuscripts before their findings are applied in real-world circumstances.
- Highlighting that scientific essays written by ChatGPT mix accurate and fabricated data, Alkaissi and McFarlane (2023) question the integrity and accuracy of using LLMs in academic writing.
- Further, the AI bot could underestimate the importance or novelty of articles, leading potentially essential research findings to be overlooked.
- Text generated by ChatGPT can lack context (ChatGPT may not have enough information about a specific case), be inaccurate, introduce bias, and suffer from a lack of understanding of the nuances related to medical science(s) and language.
The Role of GPT-4 in Drug Discovery
Andrew White, an expert in the fields of artificial intelligence (AI) and chemistry, dove into GPT-4, a generative AI model that responds to text and images, and its role in drug discovery. ChatGPT is a product based on GPT-3.5 and GPT-4. In the GPT-4 technical report, White found indications that the model could propose molecules with potential as new drugs.
After testing whether GPT-4 could propose new drugs for the treatment of psoriasis by targeting the known protein TYK2, he concluded that GPT-4 cannot yet do drug discovery on its own. However, it can assist in the process by proposing new compounds. In his view, GPT-4 and other AI models are currently most useful for reasoning, selecting tools, and identifying compound names. The potential of GPT-4 in the field of drug discovery is exciting, but it still has a way to go. Read more on Andrew White’s experiment and thoughts on GPT-4 here.
Moving Forward in Drug Development
The application of AI models like ChatGPT in drug discovery and development, clinical research, and scientific publishing must be proactively managed. By leveraging these technologies, research teams in different settings, including CROs, can make the drug discovery and development process quicker and more efficient. Research strategies and regulatory policies must be in place to mitigate risks and adverse outcomes.
In January 2023, the World Association of Medical Editors published recommendations on using ChatGPT and other chatbots in scholarly publications. As the technology evolves, stakeholders in the drug discovery process must understand ChatGPT’s limits and capabilities in order to utilize it effectively and avoid unintended consequences. So, can ChatGPT or GPT-4 help with drug development? It would seem the jury is still out, but most are saying no — or at least, not yet.
Run Faster, More Efficient, and Cost-Effective Clinical Trials
Whether you’re in drug discovery or in Phases I-IV of drug development, contact a Vial team member today to learn how we can help streamline your clinical trial processes. Vial is a next-generation, technology-first contract research organization (CRO) reimagining clinical trials to deliver faster, more efficient trial results at dramatically lower costs for biotech sponsors.
Our CRO’s mission is to empower scientists to discover groundbreaking scientific therapeutics that help people live happier, healthier lives. Our modern, intuitive technology platform integrates trial onboarding, patient enrollment, site communication, and data collection processes into one connected system.
Vial is building towards a more efficient future for clinical trials. By deploying technology at every step, we are driving efficiencies in speed and cost savings for innovative biotech companies of all sizes.