School of Engineering and Science (SEAS)

Publications

Department of Computer Science and Engineering


Total no. of publications (as of Feb 2026): 856
2025 Publications: 213
  • 1. Multi-stream CNN for Salient Object Detection

    Rafi M., Saikeerthan S., Sahithi A., Dutta S.R.

    Conference paper, Communications in Computer and Information Science, 2026, DOI Link

    Saliency detection is the task of finding the visually significant, attention-grabbing objects present in an image. The present work explores saliency detection using a Multi-Stream Convolutional Neural Network. The main aim is to train a CNN model that captures contextual information and multiscale features. Metrics such as F-measure, recall, precision, and MAE are used to measure how the model performs with respect to other models. Cross-dataset evaluation is also used to assess how the model performs on unseen data and thereby its generalization capabilities. Comparison with other well-known methods such as IT, MZ, and SR demonstrates the efficacy of this work.
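The evaluation metrics named in this abstract (precision, recall, F-measure, MAE) can be sketched as below for a predicted saliency map against a ground-truth binary mask. The threshold and the beta^2 = 0.3 weighting are conventional choices in the saliency literature, assumed here rather than taken from the paper itself.

```python
def saliency_metrics(pred, truth, thresh=0.5, beta2=0.3):
    """pred: list of floats in [0, 1]; truth: list of 0/1 labels."""
    binary = [1 if p >= thresh else 0 for p in pred]
    tp = sum(1 for b, t in zip(binary, truth) if b == 1 and t == 1)
    fp = sum(1 for b, t in zip(binary, truth) if b == 1 and t == 0)
    fn = sum(1 for b, t in zip(binary, truth) if b == 0 and t == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # Weighted F-measure, as commonly used in saliency benchmarks
    f_measure = ((1 + beta2) * precision * recall / (beta2 * precision + recall)
                 if precision + recall else 0.0)
    # MAE: mean absolute error between the raw map and the mask
    mae = sum(abs(p - t) for p, t in zip(pred, truth)) / len(pred)
    return precision, recall, f_measure, mae
```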
  • 2. Breast Cancer Detection Using Thresholded Wavelet Transformation and Transfer Learning

    Venkateswarlu I.B., Kakarla J.

    Conference paper, Communications in Computer and Information Science, 2026, DOI Link

    Breast cancer detection has received great attention in histopathology image classification. In this paper, a thresholded wavelet transformation with deep transfer learning is devised for breast cancer detection. The breast histopathology images are enhanced using thresholded wavelet transformation. Then, fusion-based deep transfer learning is employed to perform binary classification (benign/malignant) of breast histopathology images. The proposed fusion model is evaluated on the Breast Cancer Histopathological (BreakHis) dataset and achieves 97.09% accuracy on 40X magnified images of the dataset. Further, the proposed model outperforms existing state-of-the-art models and pre-trained models in vital metrics.
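The "thresholded wavelet transformation" idea can be sketched minimally as one level of a Haar wavelet transform with soft-thresholding of the detail coefficients, shown on a 1-D signal for clarity. The Haar basis and the threshold value are illustrative assumptions; the paper's exact wavelet and thresholding scheme are not specified here.

```python
import math

def haar_forward(signal):
    """One Haar decomposition level: returns (approximation, detail) coefficients."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal), 2)]
    return approx, detail

def soft_threshold(coeffs, t):
    """Shrink each coefficient toward zero by t; zero out anything smaller."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def haar_inverse(approx, detail):
    """Reconstruct the signal from one decomposition level."""
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / math.sqrt(2))
        out.append((a - d) / math.sqrt(2))
    return out
```

Thresholding the detail band suppresses small, noise-like variations while the approximation band preserves the overall image content.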
  • 3. A fully decentralized federated adversarial vision transformer with blockchain and secure aggregation for visual-based intrusion and malware forensics

    Shiva, Fazad

    Journal, International Journal of Data Science and Analytics, 2026, Quartile: Q2, DOI Link

    This paper presents a fully decentralized federated adversarial vision transformer (ViT) framework for secure, privacy-preserving, and robust image-based malware classification. Unlike conventional federated learning that relies on centralized aggregation and remains vulnerable to privacy breaches and adversarial attacks, the proposed system employs blockchain-based decentralized aggregation integrated with secure multi-party computation. Encrypted local model updates are securely aggregated without a central server, while the blockchain ledger ensures transparency, tamper resistance, and trust. To further enhance security, a zero-knowledge proof-based mechanism validates masked model updates, enabling verifiable aggregation without exposing raw parameters. Clients reconstruct the global model through decentralized consensus, preventing direct access to others’ updates. Adversarial robustness is improved via client-side adversarial ViT training, incorporating projected gradient descent-generated malware images with clean samples, thereby reducing false classifications. Computational efficiency is achieved by leveraging pre-trained ViT variants for resource-constrained environments. Extensive experiments on Malimg, Microsoft BIG 2015, and Malevis datasets demonstrate superior performance, achieving accuracies of 98.30%, 98.93%, and 95.72%, respectively. Compared to centralized and federated adversarial ViTs, as well as state-of-the-art methods (FASe-ViT, FASNet, DM-Mal, Fed-Mal), the proposed framework consistently achieves higher accuracy, precision, recall, and F1-scores, while ensuring privacy, resilience, and decentralized trust.
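The secure-aggregation idea the abstract relies on can be illustrated with pairwise additive masking: each pair of clients shares a random mask that one adds and the other subtracts, so individual updates stay hidden while the masks cancel in the sum. The modulus, mask derivation, and dropout handling below are simplified assumptions; the paper's actual protocol (with blockchain consensus and zero-knowledge proofs) is far richer.

```python
import random

P = 2**31 - 1  # prime modulus for arithmetic masking (illustrative choice)

def mask_updates(updates, seed=0):
    """updates: dict client_id -> int update. Returns masked updates."""
    rng = random.Random(seed)
    ids = sorted(updates)
    masked = dict(updates)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            m = rng.randrange(P)              # mask shared by clients a and b
            masked[a] = (masked[a] + m) % P   # client a adds the mask
            masked[b] = (masked[b] - m) % P   # client b subtracts it
    return masked

def aggregate(masked):
    """Masks cancel pairwise, so the modular sum equals the true sum."""
    return sum(masked.values()) % P
```

No party that sees only one masked update learns the underlying value, yet the aggregate is exact.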
  • 4. Lead Scoring Model Using Machine Learning

    Rafi M., Faiz Ahmad M., Venkata Sumanth S., Sarvan K.B.S.V.R., Harsha Vardhan K., Shabber S.

    Conference paper, Lecture Notes in Networks and Systems, 2026, DOI Link

    Lead scoring is an essential process in sales and marketing that prioritizes prospective customers based on their potential to convert. In this study, we present a robust machine learning framework for lead scoring using the publicly available X Education dataset, which comprises 9240 leads described by 37 diverse features including online behavior, engagement metrics, and demographic details. Our approach begins with thorough data preprocessing (removing irrelevant identifiers, handling missing values, and converting categorical variables), followed by normalization and dimensionality reduction using Principal Component Analysis (PCA). We evaluated several PCA configurations (with 3–30 components) to capture the intrinsic variance in the dataset. Four classifiers, namely K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Decision Tree, and Random Forest, were then trained with optimal hyperparameters determined through GridSearchCV and stratified cross-validation. Specifically, KNN achieved its best performance with 15 principal components and n_neighbors=9, while SVM attained an accuracy of 91.8% at 25 components with C=10, γ=0.01, and an RBF kernel. The Decision Tree and Random Forest models also demonstrated competitive results. Moreover, ensemble methods, namely a soft voting ensemble and a stacking ensemble using Logistic Regression as a meta-classifier, were implemented to integrate the strengths of individual models. The stacking ensemble delivered the highest performance, with an overall accuracy of 92% and an AUC of 0.967. This study underscores the potential of machine learning, particularly ensemble approaches, to significantly enhance the precision of lead scoring and thereby optimize resource allocation in marketing strategies.
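The soft-voting step described in this abstract can be sketched as averaging the class-probability vectors of several base classifiers and picking the class with the highest mean probability. The probability vectors below are stand-ins, not outputs of the paper's tuned KNN/SVM/tree models.

```python
def soft_vote(prob_lists):
    """prob_lists: one class-probability vector per base classifier.
    Returns (predicted class index, mean probability per class)."""
    n_classes = len(prob_lists[0])
    means = [sum(p[c] for p in prob_lists) / len(prob_lists)
             for c in range(n_classes)]
    return max(range(n_classes), key=means.__getitem__), means
```

A stacking ensemble differs in that, instead of averaging, it feeds the base models' predictions into a meta-classifier (Logistic Regression in the paper) that learns how to weight them.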
  • 5. Neuro-Symbolic Sentiment Analysis: Integrating Lexicon Features with Deep Learning Models

    Madala N.C., Tallapudi S.D.S.P.D., Malladi V.S., Senapati R.

    Conference paper, Lecture Notes in Networks and Systems, 2026, DOI Link

    Sentiment analysis is critical for extracting views and emotions from textual data, with applications including consumer feedback and social media insights. This paper presents a hybrid system that combines Neuro-Symbolic Sentiment Analysis (using deep learning models to incorporate symbolic lexicon features) and Topic-Driven Sentiment Analysis (using latent variable techniques). By combining symbolic reasoning with modern machine learning, we improve both interpretability and classification accuracy. The framework uses a number of methods for binary and multiclass sentiment categorization, including Logistic Regression, Naive Bayes, SVM, Random Forests, and Fully Connected Neural Networks (FCNN). To ensure reliability, we use k-fold cross-validation for model evaluation. The use of latent variable modeling reveals how underlying themes influence sentiment classification. Experimental validation on a real-world sentiment dataset demonstrates the usefulness of our strategy, which achieves high accuracy while remaining comprehensible. This study demonstrates the utility of neuro-symbolic and topic-driven modeling in enhancing understandable sentiment analysis.
  • 6. AI-Powered Waste Segregation: Enhancing Recycling Efficiency Through Machine Learning

    Nuthalapati S.H., Muddana S., Kommineni S., Vegesna S.V., Senapati R.

    Conference paper, Lecture Notes in Networks and Systems, 2026, DOI Link

    In the current world, growing waste has become a major problem, and waste recycling is the key solution. The crucial step in this process is segregating the waste into recyclable and organic waste. With growing concerns over improper waste disposal, automating the segregation process can significantly enhance the efficiency of recycling systems and reduce human intervention. The proposed method integrates Convolutional Neural Networks (CNNs), XGBoost, and Random Forest to segregate the waste into two classes: recyclable waste (such as plastic, glass, and metals) and organic waste (such as food and biodegradable materials). CNNs are highly effective in image classification tasks. We use XGBoost to model more complex relationships between features that may not be purely visual. To enhance robustness, Random Forest is employed to build multiple decision trees and aggregate their outputs for final classification. The automation of this process reduces the reliance on manual sorting, making waste management systems more efficient and environmentally sustainable.
  • 7. MedXPERT: A Novel ML and G-AI Based Framework for Disease Diagnosis

    Shaik H., Taathvika M., Peram S.T., Nidhi L.S., Senapati R.

    Conference paper, Lecture Notes in Networks and Systems, 2026, DOI Link

    Generative AI has become a booming technology in the present digital and modernized world. It has proven its worth by showing excellent and reliable results in multiple fields; one field that needs particular attention is the medical and healthcare industry, as it is very complex and people rely on it. Machine learning and deep learning architectures can meticulously diagnose diseases so that the public can evaluate results and practitioners can analyze them effortlessly. The model discussed in this paper advances existing models by integrating Generative AI and incorporating image pre-processing techniques such as image enhancement, together with disease classification using AI, machine learning, deep learning, CNNs, NLP, explainable AI (XAI), vision transformers, GANs, LLMs, web technologies, SVM, OpenCV, and G-AI API references, to deliver tailored diet, exercise, remedies, medication, diagnosis, and consultation advice based on comprehensive medical history and customer data. This paper also compares various state-of-the-art disease classification models for incorporation into our model.
  • 8. A Robust Model for Quantum-Resistant Cryptography to Tackle Quantum Risks

    Guha D., Lenka R., Sharma V., Mishra S.K., Alkhayyat A., Tripathy H.K.

    Conference paper, Lecture Notes in Networks and Systems, 2026, DOI Link

    As quantum computing advances, conventional cryptographic algorithms face growing threats, necessitating the development of quantum-resistant security mechanisms. The Winternitz One-Time Signature (WOTS) is a promising cryptographic scheme that offers robust resistance against quantum attacks. This paper explores the application of WOTS in enhancing the protection of digital communications and information integrity in the quantum computing era. By analysing the fundamental principles, practical implementations, and potential challenges of WOTS, this research aims to provide insights into its effectiveness as a quantum-resistant security solution.
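The hash-chain idea behind WOTS can be sketched as follows: each base-w digit of the message gets its own hash chain, the public key is the chain's end, and a signature reveals an intermediate point that the verifier advances to the end. Real WOTS also signs a checksum over the digits to prevent forgery by advancing chains; that step is omitted here, so this is an illustrative sketch, not a secure implementation.

```python
import hashlib

W = 16  # Winternitz parameter: each chain covers one base-16 digit

def H(x, n):
    """Apply SHA-256 n times."""
    for _ in range(n):
        x = hashlib.sha256(x).digest()
    return x

def keygen(num_digits, seed=b"demo"):
    """One secret chain start and one public chain end per message digit."""
    sk = [hashlib.sha256(seed + bytes([i])).digest() for i in range(num_digits)]
    pk = [H(s, W - 1) for s in sk]
    return sk, pk

def sign(digits, sk):
    """Reveal the d-th point of each chain for message digit d."""
    return [H(s, d) for s, d in zip(sk, digits)]

def verify(digits, sig, pk):
    """Advance each revealed point to the chain end and compare."""
    return all(H(s, W - 1 - d) == p for s, d, p in zip(sig, digits, pk))
```

Because each chain start may be used for only one message, the scheme is strictly one-time, which is why WOTS is typically embedded in tree structures such as XMSS.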
  • 9. Assessing the Performance of Energy Minimization Through Blended and Independent Optimization Algorithms in Video Synopsis Framework

    Chanda D., Sayyad S.M., Ghatak S., Behera A.

    Conference paper, Lecture Notes in Networks and Systems, 2026, DOI Link

    Surveillance videos are a ubiquitous and powerful tool in modern security and surveillance systems. Storing and analyzing these surveillance videos poses a challenge in terms of both cost and time. Surveillance videos contain many spatio-temporal redundancies. Owing to this, video synopsis aims to reduce the redundancy to produce a summary, preserving all object activities within a short span. Video synopsis has multiple steps, of which the optimization module is the main focus. Reducing activity loss, minimizing collision occurrences, and ensuring temporal consistency are some of the objectives that the energy minimization component within the video synopsis framework serves. This paper measures and studies the performance of various standalone (generic) algorithms and hybrid algorithms. Comprehensive experiments are conducted, and the outcomes are analyzed to assess their effectiveness, considering the reduction of activity loss and collision occurrences, and the preservation of temporal consistency. This paper highlights the practical application of optimization algorithms and emphasizes the significance of choosing the right optimization algorithm to minimize energy when creating the synopsis of an object-based surveillance video.
  • 10. Arithmetic Optimization Algorithm in Enhancing Video Synopsis Generation

    Abinash K., Bodapati B.S., Kamma J.S., Ghatak S., Behera A.

    Conference paper, Lecture Notes in Networks and Systems, 2026, DOI Link

    Effective security solutions for consumer applications require significant advances in the field of video surveillance. Video synopsis (VS) has been a crucial tool in streamlining consumer surveillance investigations, through rapid assessment of the video data and projecting multiple objects simultaneously. The framework of VS is significantly impacted by the operation of the optimization module. Techniques such as simulated annealing (SA) are utilized to achieve energy minimization; however, real-time implications lead to a prolonged convergence time. This work proposes an approach that integrates the arithmetic optimization algorithm (AOA) into the VS framework, achieving a globally optimal solution along with a faster convergence rate. The effectiveness of the proposal is assessed through various experimental evaluations and analyses. Thus, intelligent and effective reviewing of any surveillance video would be possible through the implementation of the proposed approach.
  • 11. Water Withdrawal Trends Across Multiple UN Member Nations Using Time Series Forecasting

    Ghosh M., Ray P., Mukherjee J., Chakrabarti A.

    Conference paper, Lecture Notes in Networks and Systems, 2026, DOI Link

    This paper comprehensively analyzes freshwater withdrawal patterns in six UN member countries: China, France, India, Russia, the United Kingdom, and the United States. By analyzing historical data, this study explores the time-related trends in water use, identifies the factors that drive these patterns, and forecasts future water demands through the application of sophisticated time series modeling techniques, specifically the ARIMA model. Upon analysis of the results, striking differences are observed among the countries with respect to water withdrawal patterns and their main drivers, including agricultural practices, industrial activities, demographic growth, and governmental policies, among others. The ARIMA model captures each country's specific water-usage pattern and provides reliable forecasts that reveal both challenges and opportunities for water resource management in the near future. This research emphasizes the need for proactive policy interventions to promote sustainable water use amid increasing demand and environmental variability.
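The ARIMA family used above reduces, in its simplest form, to an AR(1) model: x_t = c + phi * x_{t-1} + noise. Below is a least-squares fit of that special case on a toy series; it is an illustrative simplification, not the paper's full ARIMA methodology (no differencing or moving-average terms).

```python
def fit_ar1(series):
    """Ordinary least squares for x_t = c + phi * x_{t-1}."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(x, y))
           / sum((a - mx) ** 2 for a in x))
    c = my - phi * mx
    return c, phi

def forecast(series, steps, c, phi):
    """Iterate the fitted recurrence forward from the last observation."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out
```

Full ARIMA(p, d, q) adds differencing (d) to remove trends and moving-average terms (q) to model correlated noise, which is what makes it suitable for the country-level withdrawal series described above.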
  • 12. Dynamic RBFN with vector attention-guided feature selection for spam detection in social media

    Elakkiya E., Saleti S., Balakrishnan A.

    Article, Complex and Intelligent Systems, 2026, DOI Link

    Online social media platforms have emerged as primary engagement channels for internet users, leading to increased dependency on social network information. This growing reliance has attracted cybercriminals, resulting in a surge of malicious activities such as spam. Consequently, there is a pressing need for efficient spam detection mechanisms. Although several techniques have been proposed for social network spam detection, spammers continually evolve their strategies to bypass these systems. In response, researchers have focused on extracting additional features to better identify spammer patterns. However, this often introduces feature redundancy and complexity, which traditional machine learning-based feature selection methods struggle to manage in highly complex datasets. To address this, we propose a novel attention network-based feature selection method that assigns weights to features based on their importance, reducing redundancy while retaining relevant information. Additionally, an adaptive Radial Basis Function Neural Network (RBFN) is employed for spam classification, enabling dynamic weight updates to reflect evolving spam behaviors. The proposed method is evaluated against state-of-the-art feature selection, deep learning models, and existing spam detection techniques using accuracy, F-measure, and false-positive rate. Experimental results demonstrate that our approach outperforms existing methods, offering superior performance in detecting spam on social networks.
  • 13. FIGNNCF: Feature integrated graph neural network based collaborative filtering for sequential recommendation

    Vigrahala J.L., Pujahari A.

    Article, Neurocomputing, 2026, DOI Link

    Graph neural network-based recommender system models have gained popularity in the recent past due to their effective representation of user-item interactions in the latent feature space. Among them, item sequence-based collaborative filtering models are widely explored, which use the sequence of items consumed by different users and try to generate a set of items that may suit a new user. However, this item sequence generation only relies on other users’ past behaviors across different layers in a GNN framework and does not provide any intuitive reasoning behind the recommendation generation. Furthermore, the item-embedding information propagated across different layers may not provide sufficient user preference information towards items. To alleviate this, we propose a model, i.e., FIGNNCF, that uses the sequence-based recommendation technique but with a feature-based approach. The item features are integrated into the embeddings to propagate user preference information. Additionally, our proposed approach only uses a user-item bipartite graph and eliminates the item-item sequences graph, reducing the time required for training while maintaining the recommendation accuracy. The feature information is propagated using a one-hot encoding vector, which underscores the model's simplicity. The proposed model significantly improves performance when tested on three benchmark datasets using standard evaluation measures.
  • 14. Semi-total domination in unit disk graphs and general graphs

    Rout S., Das G.K.

    Article, Discrete Applied Mathematics, 2026, DOI Link

    Let G=(V,E) be a simple undirected graph with no isolated vertex. A set D⊆V is a dominating set if each vertex u∈V is either in D or is adjacent to a vertex v∈D. A set Dt2⊆V is called a semi-total dominating set if (i) Dt2 is a dominating set, and (ii) for every vertex u∈Dt2, there exists another vertex v∈Dt2 such that the distance between u and v in G is at most 2. Given a graph G, the semi-total domination problem finds a semi-total dominating set of minimum size. This problem is known to be NP-complete for general graphs and remains NP-complete for some special graph classes, such as planar, split, and chordal bipartite graphs. In this paper, we demonstrate that the problem is also NP-complete for unit disk graphs and propose a 6-factor approximation algorithm. The algorithm’s running time is O(n log n), where n is the number of vertices in the given unit disk graph. In addition, we show that the minimum semi-total domination problem in a graph with maximum degree Δ admits a (2+ln(Δ+1))-factor approximation algorithm, which is an improvement over the best-known result of 2+3ln(Δ+1).
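The definition above translates directly into a small verifier: D must dominate every vertex, and every vertex of D must have another D-vertex within distance 2. Graphs are adjacency dicts; the example graph in the test is ours, not from the paper.

```python
from collections import deque

def dist(graph, s, t):
    """Shortest-path distance between s and t via BFS."""
    seen, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return seen[u]
        for v in graph[u]:
            if v not in seen:
                seen[v] = seen[u] + 1
                q.append(v)
    return float("inf")

def is_semi_total_dominating(graph, D):
    """Check conditions (i) and (ii) of the semi-total domination definition."""
    dominating = all(u in D or any(v in D for v in graph[u]) for u in graph)
    paired = all(any(w != u and dist(graph, u, w) <= 2 for w in D) for u in D)
    return dominating and paired
```

Finding a *minimum* such set is the NP-complete problem the paper studies; the checker only validates a candidate.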
  • 15. A Dynamic Context-Aware and Role-Capability Based Access Control Mechanism for Internet of Things

    Krishnasrija R., Mandal A.K., Halder R., Cortesi A.

    Article, Journal of Network and Systems Management, 2026, DOI Link

    The Internet of Things (IoT) presents distinct challenges for access control due to its dynamic, heterogeneous, and evolving nature, which existing mechanisms often struggle to address. To overcome these challenges, this paper proposes a novel context-aware role-capability based access control (CRCBAC) system which effectively handles key issues such as dynamic adaptation, capability delegation, context awareness, scalability, and security. At its core, CRCBAC utilizes a structured role capability tree (RCT) to ensure secure capability propagation and management across roles, resolving conflicts through a priority system. Additionally, we design a set of protocols leveraging RCT-operations to securely evaluate access requests, as well as to create, transfer, and revoke capabilities. These protocols are validated through formal analysis using BAN logic and Scyther-based attack simulation, demonstrating CRCBAC’s robustness in ensuring both confidentiality and integrity. Experimental evaluation confirms CRCBAC’s superior scalability and efficiency, achieving up to lower response times and 4.6 times higher throughput compared to state-of-the-art approaches. The capability delegation mechanism consistently maintains response times below 3 ms, even as user capabilities scale, while also reducing energy consumption by compared to state-of-the-art approach, making CRCBAC particularly well-suited for energy-constrained IoT environments.
  • 16. Cutting-edge CNN-based skin cancer detection with batch normalization and advanced imbalance learning for superior medical image classification

    Govindu S., Devi O.R., Sitharam M., Koreddi V., Kumar M.K., Sunitha M.

    Article, Biomedical Signal Processing and Control, 2026, DOI Link

    This study presents an advanced system for detecting skin cancer using Convolutional Neural Networks (CNNs), enhanced by Batch Normalization to improve model stability during training. CNNs, widely recognized for their effectiveness in image analysis, form the foundation of this system, which is designed to address the global challenge of skin cancer detection. The model's capacity to manage a variety of datasets, providing enhanced adaptability, is one of its primary characteristics. It tackles the common issue of imbalanced skin cancer data by employing techniques such as SMOTE, undersampling, and oversampling, resulting in increased accuracy and sensitivity, particularly for less common cases. Comparative experiments demonstrate that this model surpasses previous benchmarks in identifying skin disorders. The integration of Group Normalization further boosts stability, and the combined methods for addressing data imbalances enhance the model's ability to generalize across varied data. This makes the system a highly valuable tool for healthcare professionals. Experimental evaluation on the HAM10000 dataset achieved a test accuracy of 96.4%, a training accuracy of 99.74%, and a validation accuracy of 96.35%, with a minimal loss of 0.0079. The adaptive data balancing strategy further enhanced classification, improving F1-scores by 12–15% for rare classes such as melanoma and dermatofibroma, while preserving 98.2% accuracy for majority classes. The study underscores the potential of modern deep learning techniques to transform the interpretation of medical images, setting a new standard in combating skin diseases in healthcare.
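The SMOTE-style oversampling mentioned above can be sketched as synthesizing minority samples by interpolating between a minority point and one of its minority neighbours. The neighbour choice and interpolation factor below are simplified assumptions; this is not the full SMOTE algorithm (no k-nearest-neighbour search, no per-feature handling).

```python
import random

def smote_like(minority, n_new, seed=0):
    """minority: list of feature vectors. Returns n_new synthetic vectors,
    each lying on a segment between two minority samples."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)   # a minority point and a "neighbour"
        lam = rng.random()               # interpolation factor in [0, 1]
        synthetic.append([ai + lam * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic
```

Interpolating rather than duplicating gives the classifier new, plausible minority examples, which is why SMOTE tends to improve recall on rare classes more than plain oversampling.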
  • 17. Blockchain-Based Authentication Protocol for Healthcare Security Using NTRUEncrypt

    Vasigani J.A., Vivekanandan M., Ghatak S.

    Conference paper, Lecture Notes in Networks and Systems, 2026, DOI Link

    Healthcare is experiencing a rapid increase in medical records, requiring confidentiality and data integrity. Privacy plays a significant role in healthcare security. In general, users share data through insecure channels, where attacks may occur. Therefore, the healthcare system should ensure a secure authentication process. The proposed model uses blockchain technology to share and store patient data securely. For secure data access, the authentication process is performed between users and the blockchain using cryptographic algorithms. The proposed protocol uses a zero-knowledge proof (ZKP) embedded with the post-quantum cryptography technique NTRUEncrypt for the authentication process. The security analysis of the proposed protocol is performed through formal security verification using the Scyther tool and informal security analysis. The security analysis proves that the proposed protocol is resistant to well-known attacks. In addition, the proposed protocol provides better performance than existing models.
  • 18. Recognizing Image Manipulations Utilizing CNN and ELA

    Nallamothu K., Rafi S., Kokkiligadda S., Jany S.M.

    Conference paper, Lecture Notes in Networks and Systems, 2025, DOI Link

    Tampering with digital photos or images is known as image forgery. Creating phony images or information has become easier due to the rapid advancement of technology. In order to detect image forgeries, this paper proposes a model that employs Error Level Analysis (ELA) with Convolutional Neural Networks (CNNs). ELA is used as a preprocessing step to highlight regions of an image that may have been tampered with. A CNN is then trained on this enhanced data to classify images based on their authenticity and detect digital modifications. This initiative’s main goals include image classification, attribute extraction, image authenticity verification, and digital image modification detection. Our suggested solution makes use of CNNs’ deep learning capabilities and the refinement provided by ELA.
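ELA compares an image with a recompressed copy of itself; edited regions tend to recompress differently and so stand out in the difference map. To keep the sketch dependency-free, the lossy JPEG re-save is mimicked below with coarse quantization, so this is only a conceptual illustration of the preprocessing step, with made-up pixel data.

```python
def recompress(pixels, step=16):
    """Stand-in for lossy JPEG re-saving: quantize each pixel value."""
    return [min(255, round(p / step) * step) for p in pixels]

def ela_map(pixels, step=16):
    """Absolute difference between the image and its recompressed copy.
    Large values flag pixels that do not survive recompression cleanly."""
    return [abs(p - q) for p, q in zip(pixels, recompress(pixels, step))]
```

In the actual pipeline the difference image (computed with a real JPEG encoder) is what the CNN consumes instead of the raw photo.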
  • 19. Multimodal Multi-objective Grey Wolf Optimisation with SVM and Random Forest as Classifier in Feature Selection

    Das R., Rafi S., Purwar H., Laskar R.H., Rajshekhar A., Chandrawanshi N.

    Conference paper, Lecture Notes in Networks and Systems, 2025, DOI Link

    A technique called feature selection, often referred to as attribute subset selection, selects the optimal subset of features for a given set of data by reducing the dimensionality and eliminating unnecessary characteristics. There can be 2^n feasible solutions for a dataset with “n” features, which is challenging to address using conventional attribute selection methods. Metaheuristic-based approaches perform better than traditional procedures in such situations. Numerous evolutionary computing techniques have been effectively used in feature selection challenges, and some research has been done on the distribution of solutions in the search space. However, many optimisation problems contain two or more competing goals, where improving one necessitates compromising the other. The multi-objective optimisation technique discussed in this research finds the most effective trade-off between numerous objectives; multi-objective programs require multiple non-dominated solutions rather than just one. In the initial stage, we applied Grey Wolf Optimisation (GWO) to acquire the optimised features. On the basis of the selected features, we trained the classifiers, Support Vector Machine (SVM) and Random Forest (RF), in the second phase. Experiments have been carried out on three benchmark datasets, namely the Glass, Wine, and Breast Cancer datasets retrieved from the UCI repository, and the effectiveness of the recommended feature selection approach has been evaluated. The testing results show that the suggested GWO with Random Forest performs better than GWO with SVM.
  • 20. Enhancing Disease Prediction with Correctness-Driven Ensemble Models

    Kapila R., Saleti S.

    Article, SN Computer Science, 2025, DOI Link

    Heart disease is among the most dangerous and hazardous diseases. Human lives can be spared if the disease is diagnosed early enough and treated properly. We propose an efficient ensemble model which classifies all records correctly on the benchmark datasets. The correctness is accomplished by using Anova-Principal Component Analysis (Anv-PCA) techniques with a Stacking Classifier (SC) to select and extract the best features. Recall is the most significant metric to evaluate in the medical domain. The findings show that the proposed Anv-PCA with SC meets all of the correctness requirements in terms of accuracy, precision, recall, and F1-score, with the highest results compared with the existing approaches. Anv-PCA, a method for selecting and extracting features, is paired with an ensemble classification algorithm in the approach we propose, which makes use of the Cleveland heart disease UCI dataset. All patient records are correctly categorized using this method, fulfilling the required criteria for correctness. The proposed model is also validated on six other publicly available benchmark datasets (diabetes, cardiovascular, Framingham, CBC, COVID-specific, and comprehensive HD) available in the UCI repository, and likewise meets the correctness requirements. The proposed approach exceeds all cutting-edge models.