
On The Use of Large Language Models in Medicine

Large language models (LLMs) are artificial intelligence systems that can generate natural language texts based on a given input, such as a prompt, a keyword, or a query. LLMs have been applied to various domains, including medicine, where they can potentially assist doctors, researchers, and patients with tasks such as diagnosis, treatment, literature review, and health education.

However, the use of LLMs in medicine also poses significant challenges and risks, such as:
  • Data quality and privacy: LLMs are trained on massive amounts of text data, which may not be reliable, representative, or relevant for the medical domain. Moreover, the data may contain sensitive personal information that needs to be protected from unauthorized access or disclosure.
  • Ethical and legal implications: LLMs may generate texts that are inaccurate, misleading, biased, or harmful to users or recipients. For example, an LLM may suggest a wrong diagnosis or a harmful treatment, or violate the principles of informed consent, confidentiality, or autonomy. The use of LLMs also raises questions about the accountability and liability of the developers, providers, and users of these systems.
  • Human-AI interaction and collaboration: LLMs may affect communication and decision-making between humans and AI systems, as well as among humans. For example, an LLM may unduly influence the trust or confidence users place in its outputs. Furthermore, an LLM may be unable to explain its reasoning or justify its outputs, which limits its transparency and interpretability.
(See also: "Large language models in medicine," Nature Medicine.)

Therefore, the use of LLMs in medicine requires careful evaluation and regulation to ensure safety, quality, and fairness, and should be guided by the values and norms of the medical profession and society at large.

The available medical large language models: Med-LLMs

In recent years, there has been a surge of interest in applying natural language processing (NLP) techniques to various medical tasks, such as diagnosis, prognosis, treatment recommendation, and clinical documentation. However, most existing NLP models are trained on general-domain corpora, which may not capture the specific vocabulary, syntax, and semantics of the medical domain. To address this gap, several researchers have proposed and developed large language models specialized for the medical domain, using large-scale medical corpora such as PubMed, MIMIC-III, and CORD-19. These models aim to leverage the power of pre-training and fine-tuning to achieve better performance and generalization on downstream medical tasks.
In this blog post, we provide an overview of some of the available medical large language models (LLMs): BioBERT, ClinicalBERT, BlueBERT, and models built around the MedNLI dataset.
As a brief recap, LLMs are trained on massive amounts of text from sources such as books, articles, websites, and social media, and can perform tasks such as question answering, summarization, translation, and text generation with little task-specific fine-tuning. They have been applied in domains such as finance, law, education, and entertainment, but medicine is among the most promising and challenging: it demands specialized knowledge, terminology, and reasoning skills, and the stakes and ethical implications are high. Developing and evaluating LLMs for medical applications is therefore an important and active research area.

1. BioBERT

BioBERT is a biomedical LLM that is based on BERT, one of the most popular and influential LLMs. BioBERT is pre-trained on a large corpus of biomedical texts, such as PubMed abstracts and full-text articles. BioBERT can be fine-tuned on various biomedical natural language processing tasks, such as named entity recognition, relation extraction, question answering, and document classification.
BioBERT has shown superior performance compared to BERT and other baseline models on several biomedical benchmarks, such as BioASQ (question answering), BC5CDR (named entity recognition), and the BioNLP shared tasks. BioBERT can also be used as a feature extractor to improve the performance of other models, such as BiLSTM-CRF and SVM. BioBERT is publicly available and can be easily accessed and used through the Hugging Face Transformers library.
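As a concrete illustration, the sketch below loads a BioBERT checkpoint through the Transformers library and extracts contextual embeddings that could feed a downstream tagger such as a BiLSTM-CRF. The checkpoint ID "dmis-lab/biobert-base-cased-v1.1" is one published BioBERT release on the Hugging Face Hub; verify the exact name before relying on it.

```python
# A minimal sketch of using BioBERT as a feature extractor via
# Hugging Face Transformers. Requires `pip install transformers torch`.
from transformers import AutoTokenizer, AutoModel

name = "dmis-lab/biobert-base-cased-v1.1"  # one published BioBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

# Encode a biomedical sentence and extract contextual token embeddings,
# e.g. as input features for a downstream BiLSTM-CRF tagger.
inputs = tokenizer("Aspirin inhibits platelet aggregation.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, tokens, hidden_size)
```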

2. ClinicalBERT

ClinicalBERT is a clinical LLM that is based on BERT. ClinicalBERT is pre-trained on a large corpus of clinical notes from the MIMIC-III database, which contains de-identified electronic health records of intensive care unit patients. ClinicalBERT can be fine-tuned on various clinical natural language processing tasks, such as de-identification, concept extraction, natural language inference, and clinical outcome prediction (e.g., mortality or readmission).
ClinicalBERT has shown superior performance compared to BERT and other baseline models on several clinical benchmarks, such as the i2b2 and n2c2 shared tasks and prediction tasks derived from MIMIC-III. ClinicalBERT can also be used as a feature extractor to improve the performance of other models, such as BiLSTM-CRF and CNN. ClinicalBERT is publicly available and can be easily accessed and used through the Hugging Face Transformers library.
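To make the fine-tuning step concrete, here is a minimal sketch of adapting a ClinicalBERT checkpoint to a note-level prediction task. The checkpoint "emilyalsentzer/Bio_ClinicalBERT" is a publicly released model trained on MIMIC-III notes; the notes and labels below are invented placeholders, not real clinical data.

```python
# A minimal sketch of fine-tuning ClinicalBERT for binary note-level
# classification (e.g. a mortality-style outcome). Placeholder data only.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "emilyalsentzer/Bio_ClinicalBERT"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

notes = ["Patient admitted with sepsis and hypotension.",
         "Routine follow-up visit, patient stable."]
labels = torch.tensor([1, 0])  # hypothetical binary outcome labels

batch = tokenizer(notes, padding=True, truncation=True, return_tensors="pt")
loss = model(**batch, labels=labels).loss
loss.backward()  # one gradient step inside a normal training loop
```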

3. BlueBERT

BlueBERT is a hybrid LLM that combines both biomedical and clinical texts for pre-training. BlueBERT is based on BERT and is pre-trained on a large corpus of PubMed abstracts and MIMIC-III clinical notes. BlueBERT can be fine-tuned on various biomedical and clinical natural language processing tasks, such as named entity recognition, relation extraction, question answering, and document classification.
BlueBERT has shown competitive or superior performance compared to BioBERT, ClinicalBERT, and other baseline models on several biomedical and clinical benchmarks, notably the tasks of the BLUE (Biomedical Language Understanding Evaluation) benchmark, which include datasets such as BC5CDR, i2b2, and MedNLI. BlueBERT can also be used as a feature extractor to improve the performance of other models, such as BiLSTM-CRF and SVM. BlueBERT is publicly available and can be easily accessed and used through the Hugging Face Transformers library.
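The sketch below sets up a BlueBERT checkpoint for token-level named entity recognition with a fresh, untrained classification head. The checkpoint ID "bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12" is one of the NCBI releases mirrored on the Hugging Face Hub (confirm the exact ID before use), and the five-label BIO scheme is an assumption for illustration.

```python
# A minimal sketch of preparing BlueBERT for NER fine-tuning.
# The classification head is randomly initialized until trained.
from transformers import AutoTokenizer, AutoModelForTokenClassification

name = "bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12"
tokenizer = AutoTokenizer.from_pretrained(name)
# Hypothetical 5-label BIO scheme, e.g. O, B-Chemical, I-Chemical,
# B-Disease, I-Disease for a BC5CDR-style task.
model = AutoModelForTokenClassification.from_pretrained(name, num_labels=5)

tokens = tokenizer("Metformin is used to treat type 2 diabetes.",
                   return_tensors="pt")
logits = model(**tokens).logits  # (batch, tokens, num_labels)
```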

4. MedNLI

Strictly speaking, MedNLI is not a language model but a medical natural language inference dataset: it contains premise-hypothesis sentence pairs drawn from MIMIC-III clinical notes, each labeled as entailment, contradiction, or neutral. In practice, BERT-style models that are first pre-trained on MIMIC-III clinical notes and then fine-tuned on MedNLI are used for clinical inference and related downstream tasks.
Such MedNLI-fine-tuned models have shown superior performance compared to general-domain BERT and other baselines on clinical inference benchmarks, and they can also serve as feature extractors for downstream clinical classifiers. Unlike the models above, the MedNLI dataset itself is distributed through PhysioNet and requires credentialed access, although models fine-tuned on it can be loaded and used through the Hugging Face Transformers library like any other BERT variant.
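To make the MedNLI setup concrete, here is a minimal sketch of sentence-pair inference in the MedNLI format. The checkpoint and the example pair are placeholders (the real MedNLI data requires credentialed PhysioNet access), and the three-way classification head is untrained until fine-tuned on the dataset.

```python
# A minimal sketch of MedNLI-style natural language inference:
# classify a premise/hypothesis pair as entailment, contradiction, or neutral.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "emilyalsentzer/Bio_ClinicalBERT"  # a common starting point for MedNLI
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=3)

premise = "The patient denies chest pain."       # placeholder, not MedNLI data
hypothesis = "The patient has chest pain."       # expected label: contradiction
batch = tokenizer(premise, hypothesis, return_tensors="pt")
pred = torch.argmax(model(**batch).logits, dim=-1)  # meaningful after fine-tuning
```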

Conclusion

In this blog post, we have provided an overview of some of the available medical LLMs: BioBERT, ClinicalBERT, BlueBERT, and models fine-tuned on the MedNLI dataset. We have briefly described their main features, advantages, and limitations, and summarized their reported performance on common medical natural language processing tasks. We hope this post helps readers understand the current state of the art and the future directions of medical LLMs.

