Tversky Similarity based UnderSampling with Gaussian Kernelized Decision Stump Adaboost Algorithm for Imbalanced Medical Data Classification


  • M. Kamaladevi
  • V. Venkatraman



Data Imbalance, Undersampling, Tversky Similarity Indexive Regression, Gaussian Kernelized Decision Stump AdaBoosting


In recent years, imbalanced data classification has been applied in several domains, including fraud detection in the banking sector and disease prediction in healthcare. To address the imbalanced classification problem at the data level, strategies such as undersampling or oversampling are widely used; however, sampling techniques pose the challenge of significant information loss. The proposed method involves two processes, namely undersampling and classification. First, undersampling is performed by means of a Tversky Similarity Indexive Regression model, in which regression combined with the Tversky similarity index analyzes the relationship between two instances in the dataset. Next, Gaussian Kernelized Decision Stump AdaBoosting classifies the instances into two classes: the root node of the decision stump makes its decision on the basis of a Gaussian kernel function, considering the average of neighboring points, and the result is obtained at the leaf node. Weights are also adjusted to minimize the training errors occurring during classification and thereby find the best classifier. Experimental assessment is performed on two imbalanced datasets (the Pima Indian Diabetes and Hepatitis datasets). Performance metrics such as precision, recall, AUC (area under the ROC curve) and F1-score are compared against existing undersampling methods. Experimental results show that the prediction accuracy of the minority class improves, thereby reducing false positives and false negatives.
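Since the abstract does not give the computational details of the undersampling step, the following minimal sketch illustrates one plausible reading of Tversky-similarity-based undersampling: majority-class instances that are near-duplicates (under the Tversky index) of an already retained instance are dropped. The regression component of the proposed model is omitted, and the set-valued feature representation and redundancy threshold are assumptions, not the authors' exact procedure.

```python
def tversky_similarity(x, y, alpha=0.5, beta=0.5):
    """Tversky index between two feature sets X and Y:
    S(X, Y) = |X ∩ Y| / (|X ∩ Y| + alpha*|X - Y| + beta*|Y - X|).
    With alpha = beta = 0.5 this reduces to the Dice coefficient.
    """
    x, y = set(x), set(y)
    common = len(x & y)
    denom = common + alpha * len(x - y) + beta * len(y - x)
    return common / denom if denom else 0.0

def undersample_majority(majority, minority, threshold=0.9):
    """Keep a majority instance only if it is not highly similar
    (Tversky index >= threshold) to a previously kept one; the
    minority class is kept in full.  The threshold is illustrative.
    """
    kept = []
    for inst in majority:
        if all(tversky_similarity(inst, k) < threshold for k in kept):
            kept.append(inst)
    return kept + list(minority)
```

For example, with `alpha = beta = 0.5`, the sets `{'a','b'}` and `{'b','c'}` share one element and each has one unique element, giving a similarity of 1 / (1 + 0.5 + 0.5) = 0.5.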
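The classification stage can likewise be sketched in outline. Below, a one-level decision stump scores a sample by its Gaussian kernel response to each class's weighted mean (a stand-in for the abstract's "average of neighboring points"), and the standard AdaBoost weight update shifts attention toward misclassified samples. The class-center construction, `sigma`, and the round count are illustrative assumptions, not the authors' exact design.

```python
import numpy as np

def gaussian_kernel(x, c, sigma=1.0):
    # K(x, c) = exp(-||x - c||^2 / (2 * sigma^2))
    return np.exp(-np.sum((x - c) ** 2, axis=-1) / (2 * sigma ** 2))

class GaussianKernelStump:
    """One-level tree: the root assigns the label of the class whose
    sample-weighted mean yields the larger Gaussian kernel response."""
    def fit(self, X, y, w):
        self.centers = {c: np.average(X[y == c], axis=0, weights=w[y == c])
                        for c in (-1, 1)}
        return self

    def predict(self, X):
        k_pos = gaussian_kernel(X, self.centers[1])
        k_neg = gaussian_kernel(X, self.centers[-1])
        return np.where(k_pos >= k_neg, 1, -1)

def adaboost(X, y, rounds=10):
    """Standard AdaBoost loop: upweight the samples each stump misclassifies."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    stumps, alphas = [], []
    for _ in range(rounds):
        stump = GaussianKernelStump().fit(X, y, w)
        pred = stump.predict(X)
        err = min(max(np.sum(w[pred != y]), 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # stump weight
        w *= np.exp(-alpha * y * pred)          # reweight samples
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)

    def predict(Xq):
        agg = sum(a * s.predict(Xq) for a, s in zip(alphas, stumps))
        return np.where(agg >= 0, 1, -1)
    return predict
```

A usage example: fitting `adaboost` on a small separable dataset with labels in {-1, +1} and calling the returned `predict` function on new samples.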


