Abstract
Text summarization is a fundamental natural language processing task that plays a crucial role in condensing large textual documents into concise, clear summaries for human comprehension. The volume of data now generated in the medical domain calls for the application of current deep learning approaches such as transformers. The main goal of this research is to produce relevant summaries from the abstracts of research articles on cancer, blood cancer, tinnitus, and Alzheimer’s disease. Because domain-specific data requires special attention, our approach uses a fine-tuned transformer model to ensure that the generated summaries are not only brief but also accurate. As part of this research, we collected articles from PubMed and prepared the data for analysis. This study carries out a comparative analysis of the Bidirectional and Auto-Regressive Transformers (BART), Text-to-Text Transfer Transformer (T5), TextRank, and LexRank models on the dataset to extract medical insights effectively. The fine-tuned transformer’s performance relative to the other models opens a new direction for future studies.