B.K. Tripathy

Workplace: School of Computing Science and Engineering, VIT University, Vellore, Tamilnadu, India

E-mail: tripathybk@vit.ac.in

Website: https://scholar.google.co.in/citations?user=TuqZg_0AAAAJ&hl=en

Research Interests: Social Networks, Multicriteria Decision Making, Clustering, Soft Computing, Big Data, Information Retrieval, Social Computing, Knowledge Management

Biography

B.K. Tripathy is a Senior Professor and Dean in the School of Information Technology and Engineering, VIT, Vellore, India. He has published around 530 technical papers in international journals, proceedings of international conferences and edited book chapters, as well as 7 books and a monograph. He has guided 29 PhD, 13 MPhil and 5 MS candidates so far. Dr. Tripathy is a senior/life member of ACM, IEEE, IRSS, CSI, OMS, OITS, IACSIT, IST and IMS. He is a reviewer or editorial board member for 80 international journals. His current research interests include Fuzzy Set Theory and Applications, Rough Set Theory and Knowledge Engineering, Multiset Theory and its Applications, Social Network Analysis, Granular Computing, Soft Computing, Data Clustering Techniques and Applications, Content Based Learning, Knowledge Representation and Reasoning, Soft Set Theory and Applications, Neighbourhood Systems, Information Retrieval, Big Data Analytics, Social Internet of Things and Multicriteria Decision Making.

Author Articles
A Comparative Analysis of Firefly and Fuzzy-Firefly based Kernelized Hybrid C-Means Algorithms

By B.K. Tripathy Anmol Agrawal A. Jayaram Reddy

DOI: https://doi.org/10.5815/ijisa.2019.06.05, Pub. Date: 8 Jun. 2019

In most clustering algorithms, the initial centroids are assigned randomly, which affects both the final outcome and the number of iterations required. Another limitation of many clustering algorithms is the use of Euclidean distance as the measure of similarity between data points, which works well only when the input data are linearly separable. The purpose of this paper is to combine suitable techniques so that both of the above problems are handled, leading to efficient algorithms. For the initial assignment of centroids we use the Firefly and Fuzzy Firefly algorithms. We replace the Euclidean distance with kernel-induced distances (Gaussian and hyper-tangent kernels), leading to hybridized versions. For experimental analysis we use five images from different domains as input. Two efficiency measures, the Davies-Bouldin index (DB) and the Dunn index (D), are used for comparison. Tabular values, their graphical representations and output images are generated to support the claims. The analysis establishes the superiority of the optimized algorithms over their existing counterparts. We also find that the hyper-tangent kernel with the Rough Intuitionistic Fuzzy C-Means algorithm using the Fuzzy Firefly algorithm produces the best results and has a much faster convergence rate. Medical, satellite or geographical images can be analysed more efficiently using the proposed optimized algorithms, which are expected to play an important role in image segmentation and analysis.
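
The following minimal Python sketch (not taken from the paper) illustrates the kernel trick referred to above: the Euclidean distance is replaced by the kernel-induced distance d^2(x, v) = K(x, x) + K(v, v) - 2K(x, v). The Gaussian form is standard; the hyper-tangent form and the value of sigma are assumptions and may differ from the parameterisation used by the authors.

import numpy as np

def gaussian_kernel(x, v, sigma=1.0):
    # Gaussian (RBF) kernel: K(x, v) = exp(-||x - v||^2 / sigma^2), so K(x, x) = 1.
    return np.exp(-np.sum((np.asarray(x) - np.asarray(v)) ** 2) / sigma ** 2)

def hypertangent_kernel(x, v, sigma=1.0):
    # One common hyper-tangent form (assumed): K(x, v) = 1 - tanh(||x - v||^2 / sigma^2).
    return 1.0 - np.tanh(np.sum((np.asarray(x) - np.asarray(v)) ** 2) / sigma ** 2)

def kernel_distance_sq(x, v, kernel=gaussian_kernel, **kw):
    # Kernel-induced squared distance in feature space:
    # d^2(x, v) = K(x, x) + K(v, v) - 2*K(x, v)  (= 2*(1 - K(x, v)) when K(x, x) = 1).
    return kernel(x, x, **kw) + kernel(v, v, **kw) - 2.0 * kernel(x, v, **kw)

# Compare the two induced distances for a pair of pixel feature vectors.
x, v = np.array([0.2, 0.4]), np.array([0.5, 0.1])
print(kernel_distance_sq(x, v, gaussian_kernel))
print(kernel_distance_sq(x, v, hypertangent_kernel))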

Fuzzy Clustering of Sequential Data

By B.K. Tripathy Rahul Dahiya

DOI: https://doi.org/10.5815/ijisa.2019.01.05, Pub. Date: 8 Jan. 2019

With the growing popularity of the Internet and the advancement of technology in fields such as bioinformatics and other scientific communities, the amount of sequential data is increasing at a tremendous rate. With this increase, it has become inevitable to mine useful information from this vast amount of data. The mined information can be used in various spheres, from day-to-day web activities, such as predicting the next web pages to be visited and serving better advertisements, to biological areas such as genomic data analysis. A rough set based clustering of sequential data was proposed recently by Kumar et al., who defined and used a measure called the Sequence and Set Similarity Measure to determine similarity in data. However, we have observed that this measure does not reflect some important characteristics of sequential data. Consequently, in this paper we use fuzzy set techniques to introduce a similarity measure, which we term the Kernel and Set Similarity Measure, to find the similarity of sequential data and generate overlapping clusters. For this purpose, we use exponential string kernels and Jaccard's similarity index. The new similarity measure takes into account the order of items in the sequence as well as the content of the sequential pattern. To compare our algorithm with that of Kumar et al., we used the MSNBC data set from the UCI repository, which was also used in their paper. To the best of our knowledge, this is the first fuzzy clustering algorithm for sequential data.
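
As a rough illustration of how order and content can be blended into one similarity, the Python sketch below combines an exponential kernel on the edit distance between two click sequences with Jaccard's index of their item sets. It is not the paper's Kernel and Set Similarity Measure: the use of edit distance in place of an exponential string kernel, and the weights lam and alpha, are assumptions made purely for illustration.

from math import exp

def jaccard(a, b):
    # Content similarity: Jaccard index of the item sets (order-insensitive).
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def levenshtein(a, b):
    # Order-sensitive part: edit distance between the two sequences.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def kernel_set_similarity(a, b, lam=0.5, alpha=0.5):
    # Illustrative blend: exponential kernel on edit distance plus Jaccard index.
    return alpha * exp(-lam * levenshtein(a, b)) + (1 - alpha) * jaccard(a, b)

print(kernel_set_similarity("frontpage news sports".split(),
                            "frontpage sports news".split()))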

Properties of Multigranular Rough Sets on Fuzzy Approximation Spaces and their Application to Rainfall Prediction

By B.K. Tripathy Urmi Bhambhani

DOI: https://doi.org/10.5815/ijisa.2018.11.08, Pub. Date: 8 Nov. 2018

The basic rough set model introduced by Pawlak in 1982 has been extended in many directions to enhance its modeling power. One such attempt is the notion of rough sets on fuzzy approximation spaces, introduced by De et al. in 1999. The basic model uses an equivalence relation for its definition, which decomposes the universal set into disjoint equivalence classes. These equivalence classes are called granules of knowledge. From the granular computing point of view, the basic rough set model is unigranular in character. So, in order to handle more than one granular structure simultaneously, two types of multigranular rough sets, called the optimistic and pessimistic multigranular rough sets, were introduced by Qian et al. in 2006 and 2010 respectively. In this paper, we introduce two types of multigranular rough sets on fuzzy approximation spaces (optimistic and pessimistic), study several of their properties and illustrate how this notion can be used for the prediction of rainfall. The introduced notions are explained through several examples.
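
For readers unfamiliar with multigranulation, the short Python sketch below implements the crisp optimistic and pessimistic lower approximations of Qian et al., which the paper extends to fuzzy approximation spaces; the toy universe and relations are invented for illustration.

def mg_lower(universe, rels, X, mode="optimistic"):
    # Multigranular lower approximation of X over several equivalence relations:
    #   optimistic  -> [x]_R is contained in X for AT LEAST ONE relation R,
    #   pessimistic -> [x]_R is contained in X for EVERY relation R.
    # Each relation is given as a dict mapping an element to its block label.
    X = set(X)
    result = set()
    for x in universe:
        inside = [{u for u in universe if rel[u] == rel[x]} <= X for rel in rels]
        ok = any(inside) if mode == "optimistic" else all(inside)
        if ok:
            result.add(x)
    return result

U = {1, 2, 3, 4, 5, 6}
R1 = {1: "a", 2: "a", 3: "b", 4: "b", 5: "c", 6: "c"}   # partition {1,2},{3,4},{5,6}
R2 = {1: "p", 2: "q", 3: "q", 4: "r", 5: "r", 6: "p"}   # partition {1,6},{2,3},{4,5}
X = {1, 2, 3}
print(mg_lower(U, [R1, R2], X, "optimistic"))    # {1, 2, 3}
print(mg_lower(U, [R1, R2], X, "pessimistic"))   # {2}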

MMeMeR: An Algorithm for Clustering Heterogeneous Data using Rough Set Theory

By B.K. Tripathy Akarsh Goyal Rahul Chowdhury Patra Anupam Sourav

DOI: https://doi.org/10.5815/ijisa.2017.08.03, Pub. Date: 8 Aug. 2017

In recent times a large number of clustering algorithms have been developed whose main function is to group objects having almost the same features. However, due to the presence of categorical data values, these algorithms face a challenge in their implementation. Also, some algorithms which are able to handle categorical data are not able to process uncertainty in the values and so have stability issues. Thus, handling categorical data along with uncertainty has become necessary owing to such difficulties. The MMR algorithm, developed in 2007, was based on basic rough set theory. MMeR, proposed in 2009, surpassed the results of MMR in handling categorical data and could handle heterogeneous values as well. SDR and SSDR, proposed in 2011, were able to handle hybrid data and showed more accuracy when compared to MMR and MMeR. In this paper, we make further improvements and conceptualize an algorithm, which we call MMeMeR or Min-Mean-Mean-Roughness, that takes care of uncertainty and also handles heterogeneous data. Standard data sets have been used to gauge its effectiveness relative to the other methods.
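
The roughness-based attribute selection that MMR introduced, and that MMeR, SDR, SSDR and MMeMeR successively refine, can be sketched as follows. This is a generic MMR-style mean-roughness computation over an invented toy table, not the MMeMeR algorithm itself.

def partition(records, attr):
    # Equivalence classes induced by one categorical attribute.
    blocks = {}
    for i, r in enumerate(records):
        blocks.setdefault(r[attr], set()).add(i)
    return list(blocks.values())

def roughness(X, blocks):
    # Pawlak roughness of X w.r.t. a partition: 1 - |lower| / |upper|.
    lower = sum(len(b) for b in blocks if b <= X)
    upper = sum(len(b) for b in blocks if b & X)
    return 1.0 - lower / upper

def mean_roughness(records, ai, aj):
    # Mean roughness of attribute ai with respect to attribute aj.
    blocks_j = partition(records, aj)
    classes_i = partition(records, ai)
    return sum(roughness(X, blocks_j) for X in classes_i) / len(classes_i)

data = [{"colour": "red",  "shape": "round"},
        {"colour": "red",  "shape": "round"},
        {"colour": "blue", "shape": "square"},
        {"colour": "blue", "shape": "round"}]
print(mean_roughness(data, "colour", "shape"))   # 0.875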

Covering Based Optimistic Multigranular Approximate Rough Equalities and their Properties

By B.K. Tripathy S.C. Parida

DOI: https://doi.org/10.5815/ijisa.2016.06.08, Pub. Date: 8 Jun. 2016

Since its inception, rough set theory has proved to be one of the most important models for capturing impreciseness in data. However, it was based upon the notion of equivalence relations, which are relatively rare as far as applicability is concerned. So the basic rough set model has been extended in many directions. One of these extensions is the covering based rough set notion, where a cover is an extension of the concept of a partition, the notion that corresponds to an equivalence relation. From the granular computing point of view, all these rough sets are unigranular in character; i.e. they consider only a single granular structure on the universe. So there arose the necessity to define multigranular rough sets, and as a consequence two types of multigranular rough sets, called optimistic multigranular rough sets and pessimistic multigranular rough sets, have been introduced. Four types of covering based optimistic multigranular rough sets have been introduced and their properties studied. The notion of equality of sets, which is too stringent for real life applications, was extended by Novotny and Pawlak to define rough equalities. This notion was further extended by Tripathy to define three more types of approximate equalities. The covering based optimistic versions of two of these four approximate equalities have been studied by Nagaraju et al. recently. In this article, we study the other two cases and provide a comparative analysis.
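
Covering based rough sets admit several non-equivalent approximation pairs (the paper studies four optimistic multigranular variants). The Python sketch below uses just one common single-granulation pair to show how a cover, unlike a partition, allows overlapping blocks; the cover and target set are invented.

def covering_lower(cover, X):
    # Union of covering blocks wholly contained in X (one standard choice).
    X = set(X)
    return set().union(*(set(B) for B in cover if set(B) <= X))

def covering_upper(cover, X):
    # Union of covering blocks that meet X (the dual choice).
    X = set(X)
    return set().union(*(set(B) for B in cover if set(B) & X))

C = [{1, 2}, {2, 3, 4}, {4, 5}]   # blocks may overlap: a cover, not a partition
X = {2, 3, 4}
print(covering_lower(C, X))       # {2, 3, 4}
print(covering_upper(C, X))       # {1, 2, 3, 4, 5}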

Covering Based Pessimistic Multigranular Rough Equalities and their Properties

By B.K. Tripathy S.C. Parida

DOI: https://doi.org/10.5815/ijitcs.2016.04.07, Pub. Date: 8 Apr. 2016

The basic rough set theory introduced by Pawlak as a model to capture imprecision in data has been extended in many directions, and covering based rough set models are among these extensions. Again, from the granular computing point of view, basic rough sets are unigranular by nature. Two extensions to the context of multigranular computing, called optimistic and pessimistic multigranulation, were introduced by Qian et al. in 2006 and 2010 respectively. Combining the two concepts of covering and multigranulation, covering based multigranular models were introduced by Liu et al. in 2012. Extending the stringent concept of mathematical equality of sets, rough equalities were introduced by Novotny and Pawlak in 1985. Three more types of such approximate equalities were introduced by Tripathy in 2011. In this paper we study the approximate equalities introduced by Novotny and Pawlak from the pessimistic multigranular computing point of view and establish several of their properties. These concepts and properties are shown to be useful in approximate reasoning.
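
The approximate equalities of Novotny and Pawlak can be stated very compactly: two sets are bottom rough equal if their lower approximations coincide, top rough equal if their upper approximations coincide, and rough equal if both hold. The sketch below checks them in the ordinary single-granulation Pawlak setting with an invented partition; the paper studies the pessimistic multigranular versions of these notions.

def approximations(partition, X):
    # Pawlak lower and upper approximations of X w.r.t. a partition of U.
    X = set(X)
    lower = set().union(*(b for b in partition if b <= X))
    upper = set().union(*(b for b in partition if b & X))
    return lower, upper

def rough_equalities(partition, X, Y):
    # Novotny-Pawlak approximate equalities: bottom, top and (total) rough equality.
    lx, ux = approximations(partition, X)
    ly, uy = approximations(partition, Y)
    return {"bottom": lx == ly, "top": ux == uy, "rough": lx == ly and ux == uy}

P = [{1, 2}, {3, 4}, {5, 6}]
print(rough_equalities(P, {1, 2, 3}, {1, 2, 4}))   # distinct sets, yet roughly equal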

A Bag Theoretic Approach towards the Count of an Intuitionistic Fuzzy Set

By B.K. Tripathy S. Khandelwal M.K. Satapathy

DOI: https://doi.org/10.5815/ijisa.2015.05.03, Pub. Date: 8 Apr. 2015

Approaches to the cardinality of fuzzy sets have been introduced by De Luca and Termini, by Zadeh and by Tripathy et al.: the first is a basic count, the second is based on fuzzy numbers and the third takes a bag theoretic approach. The only existing approach to finding the cardinality of an intuitionistic fuzzy set is due to Tripathy et al. In this paper, we introduce a bag theoretic approach to the cardinality of intuitionistic fuzzy sets, which extends the corresponding definition for fuzzy sets introduced by Tripathy et al. In fact, three types of intuitionistic fuzzy counts are introduced, and we also establish several properties of these count functions.
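
To make the idea concrete, the sketch below shows the classical sigma-count of a fuzzy set (the sum of membership values) and one simple way of extending it to an intuitionistic fuzzy set by counting memberships, non-memberships and hesitation margins separately. These counts are only illustrative; the bag theoretic counts defined in the paper are more refined.

def sigma_count(fuzzy_set):
    # De Luca-Termini style sigma-count of a fuzzy set: sum of memberships.
    return sum(fuzzy_set.values())

def ifs_counts(ifs):
    # Intuitionistic fuzzy set given as element -> (membership mu, non-membership nu),
    # with mu + nu <= 1; the remainder 1 - mu - nu is the hesitation margin.
    count_mu = sum(mu for mu, nu in ifs.values())
    count_nu = sum(nu for mu, nu in ifs.values())
    count_pi = sum(1.0 - mu - nu for mu, nu in ifs.values())
    return count_mu, count_nu, count_pi

A = {"x1": (0.7, 0.2), "x2": (0.4, 0.5), "x3": (1.0, 0.0)}
print(ifs_counts(A))   # roughly (2.1, 0.7, 0.2)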

Hierarchical Clustering Algorithm based on Attribute Dependency for Attention Deficit Hyperactive Disorder

By J Anuradha B.K. Tripathy

DOI: https://doi.org/10.5815/ijisa.2014.06.04, Pub. Date: 8 May 2014

Attention Deficit Hyperactive Disorder (ADHD) is a disruptive neurobehavioral disorder characterized by abnormal behavioral patterns involving inattention, hyperactivity, impulsivity, or a combination of these. It is predominant among school-going children, and it is tricky to differentiate between an active child and an ADHD child, so misdiagnosed and undiagnosed cases are very common. Behavior patterns are identified by mentors in the academic environment, who often lack the skills to screen such children. Hence an unsupervised learning algorithm can cluster the behavioral patterns of children at school for the diagnosis of ADHD. In this paper, we propose a hierarchical clustering algorithm that partitions the dataset based on attribute dependency (HCAD). HCAD forms clusters of data based on highly dependent attributes and their equivalence relations. It is capable of handling large volumes of data with reasonably faster clustering than most existing algorithms, and it can work on both labeled and unlabeled data sets. Experimental results reveal that this algorithm has higher accuracy than other algorithms; HCAD achieves 97% cluster purity in diagnosing ADHD. An empirical analysis of the application of HCAD to different data sets from the UCI repository is provided.
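
HCAD's splitting criterion is built on attribute dependency. The Python sketch below computes the standard rough-set dependency degree gamma_C(D), the kind of measure such a criterion rests on; the toy behavioural table and attribute names are invented, and the exact dependency measure used by HCAD may differ.

def blocks(rows, attrs):
    # Equivalence classes induced by a tuple of attribute values.
    out = {}
    for i, r in enumerate(rows):
        out.setdefault(tuple(r[a] for a in attrs), set()).add(i)
    return list(out.values())

def dependency_degree(rows, condition_attrs, decision_attrs):
    # gamma_C(D) = |POS_C(D)| / |U|: the fraction of objects whose C-class
    # falls entirely inside a single D-class.
    d_blocks = blocks(rows, decision_attrs)
    positive = set()
    for c_block in blocks(rows, condition_attrs):
        if any(c_block <= d for d in d_blocks):
            positive |= c_block
    return len(positive) / len(rows)

rows = [{"attention": "low",  "impulsive": "yes", "label": "adhd"},
        {"attention": "low",  "impulsive": "yes", "label": "adhd"},
        {"attention": "high", "impulsive": "no",  "label": "typical"},
        {"attention": "high", "impulsive": "yes", "label": "adhd"}]
print(dependency_degree(rows, ("attention", "impulsive"), ("label",)))   # 1.0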

On Some Comparison Properties of Rough Sets Based on Multigranulations and Types of Multigranular Approximations of Classifications

By R. Raghavan B.K. Tripathy

DOI: https://doi.org/10.5815/ijisa.2013.06.09, Pub. Date: 8 May 2013

In this paper we consider the inclusion properties for the upper and lower approximations of the union and intersection of sets for both pessimistic and optimistic multigranulations. We find that two of the inclusions for the pessimistic case are actually equalities. For the other six cases we provide examples to show that the inclusions are in fact proper. On the approximation of classifications, a theorem establishing a sufficient condition was proved by Tripathy et al.; we establish here that the result is in fact both necessary and sufficient. We also consider the types of elements in classifications with respect to both types of multigranulations and establish a general theorem on them.

A Comparative Analysis of Multigranular Approaches and on Topological Properties of Incomplete Pessimistic Multigranular Rough Fuzzy Sets

By B.K. Tripathy M. Nagaraju

DOI: https://doi.org/10.5815/ijisa.2012.11.12, Pub. Date: 8 Oct. 2012

Rough sets, introduced by Pawlak as a model to capture impreciseness in data, have been a very useful tool in several applications. These basic rough sets are defined by taking equivalence relations over a universe. In order to enhance the modeling power of rough sets, several extensions of the basic definition have been introduced over the past few years. Extending the single granular structure of classical rough set theory, two multigranular approaches, optimistic multigranulation and pessimistic multigranulation, have been introduced so far. Topological properties of rough sets, along with accuracy measures, are two important features of rough sets from the application point of view. Topological properties of optimistic multigranular rough sets, optimistic multigranular rough fuzzy sets and pessimistic multigranular rough sets have been studied. Incomplete information systems take care of missing values for items in data tables, and both optimistic and pessimistic MGRS have been extended to such incomplete information systems. In this paper we provide a comparative study of the two types of multigranular approaches along with other related notions. We also extend the study to the topological properties of incomplete pessimistic multigranular rough fuzzy sets (MGRFS). These results hold for both complete and incomplete information systems.

On Some Topological Properties of Pessimistic Multigranular Rough Sets

By B.K. Tripathy M. Nagaraju

DOI: https://doi.org/10.5815/ijisa.2012.08.02, Pub. Date: 8 Jul. 2012

Rough set theory was introduced by Pawlak as a model to capture impreciseness in data, and since then it has been established as a very efficient tool for this purpose. The definition of basic rough sets depends upon a single equivalence relation defined on the universe, or several equivalence relations taken one at a time. Several extensions of basic rough sets have since been introduced in the literature. From the granular computing point of view, research in classical rough set theory is done by taking a single granulation. This has been extended to the multigranular rough set (MGRS) model, where the set approximations are defined by taking multiple equivalence relations on the universe simultaneously. Multigranular rough sets are of two types, namely optimistic MGRS and pessimistic MGRS. The topological properties of rough sets introduced by Pawlak, expressed in terms of their types, were studied by Tripathy and Mitra in order to find the types of the union, intersection and complement of such sets. Tripathy and Raghavan extended these topological properties of basic single granular rough sets to the optimistic MGRS context. Incomplete information systems take care of missing values for items in data tables, and MGRS has also been extended to such incomplete information systems. In this paper we study the topological properties of pessimistic MGRS by finding the types of the union, intersection and complement of such sets. We also provide proofs and examples to illustrate that the multiple entries in the tables can actually occur in practice. Our results hold for both complete and incomplete information systems. The multiple entries in the tables arise from impreciseness and ambiguity in the information, which is very common in many real life situations and needs to be addressed in an efficient manner.
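
The four Pawlak types referred to above are determined entirely by whether the lower approximation is empty and whether the upper approximation covers the universe. The sketch below classifies a set in the ordinary single-granulation case with an invented partition; the paper carries out the corresponding analysis for pessimistic multigranular approximations.

def rough_set_type(universe, partition, X):
    # T1 roughly definable, T2 internally undefinable,
    # T3 externally undefinable, T4 totally undefinable.
    universe, X = set(universe), set(X)
    lower = set().union(*(b for b in partition if b <= X))
    upper = set().union(*(b for b in partition if b & X))
    if lower and upper != universe:
        return "T1 (roughly definable)"
    if not lower and upper != universe:
        return "T2 (internally undefinable)"
    if lower and upper == universe:
        return "T3 (externally undefinable)"
    return "T4 (totally undefinable)"

U = {1, 2, 3, 4, 5, 6}
P = [{1, 2}, {3, 4}, {5, 6}]
print(rough_set_type(U, P, {1, 2, 3}))   # T1: non-empty lower, upper is not U
print(rough_set_type(U, P, {1, 3, 5}))   # T4: empty lower, upper equals U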

Design and Implementation of Face Recognition System in Matlab Using the Features of Lips

By Sasikumar Gurumurthy B.K. Tripathy

DOI: https://doi.org/10.5815/ijisa.2012.08.04, Pub. Date: 8 Jul. 2012

Human face recognition systems are identification procedures in which a person is verified on the basis of human traits. This paper describes a fast face detection algorithm with accurate results. Lip tracking is one of the biometric cues on which a genuine system can be built: since the uttering characteristics of an individual are unique and difficult to imitate, lip tracking offers the advantage of making the system secure. Pre-recorded visual utterances of speakers are generated and stored in the database for future verification. The system proceeds in four stages: the first stage obtains the face region from the original image, the second extracts the mouth region by background subtraction, the third extracts key points taking the lip centroid as the origin of coordinates, and the fourth stores the obtained feature vector in the database. A user who wants to be identified by the system provides new live information, which is then compared with the existing template in the database, and the system reports either a match or a mismatch. This work aims to increase the accuracy level of biometric systems.
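
Of the four stages, the third (key points referred to the lip centroid) is the easiest to illustrate in isolation. The sketch below, written in Python rather than the MATLAB used in the paper, expresses mouth key points relative to their centroid and compares the resulting feature vector with an enrolled template; the key points and the matching threshold are invented.

import numpy as np

def centroid_features(keypoints):
    # Express lip key points relative to the lip centroid, so the feature
    # vector does not depend on where the mouth sits in the frame.
    pts = np.asarray(keypoints, dtype=float)
    return (pts - pts.mean(axis=0)).ravel()

def matches(template, live, threshold=5.0):
    # Accept if the live feature vector is close enough to the enrolled template.
    return float(np.linalg.norm(template - live)) <= threshold

enrolled = centroid_features([(10, 20), (30, 20), (20, 10), (20, 30)])
probe    = centroid_features([(110, 120), (130, 121), (120, 110), (120, 130)])
print("match" if matches(enrolled, probe) else "mismatch")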

A Centroid Model for the Depth Assessment of Images using Rough Fuzzy Set Techniques

By P. Swarnalatha B.K. Tripathy

DOI: https://doi.org/10.5815/ijisa.2012.03.03, Pub. Date: 8 Apr. 2012

Detection of affected areas in images is a crucial step in assessing the depth of the affected area for municipal operators. These affected areas in underground images, which are line images, are indicative of the condition of buried infrastructure such as sewers and water mains. The affected areas are identified and their properties, such as structure, are extracted from the images after contrast enhancement. This paper presents a three-step method, which is simple, robust and efficient, to detect affected areas in underground concrete images. The proposed methodology uses segmentation and feature extraction with structural elements. The main objective of using this model is to find the dimensions of the affected areas, such as the length, width and depth, and the type of the defects. Although the human eye is extremely effective at recognition and classification, it is not suitable for assessing defects in images that might be spread over thousands of miles of image lines, mainly because of fatigue, subjectivity and cost. Our objective is to reduce the effort and labour involved in detecting affected areas in underground images. A proposal to apply rough fuzzy set theory to compute the lower and upper approximations of the affected area of the image is made in this paper. In this connection we propose to use some concepts and techniques developed by Pal and Maji.
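
In the spirit of the rough fuzzy constructions of Pal and Maji cited above, the sketch below computes lower and upper approximations of a fuzzy "affected area" membership map by granulating the image into fixed pixel blocks and taking the minimum and maximum membership in each block. The block granulation and the toy membership map are assumptions for illustration only.

import numpy as np

def rough_fuzzy_approximations(membership, block_size=2):
    # For each block (granule) of pixels, the lower approximation takes the
    # minimum membership in the block and the upper approximation the maximum.
    m = np.asarray(membership, dtype=float)
    lower, upper = np.zeros_like(m), np.zeros_like(m)
    h, w = m.shape
    for i in range(0, h, block_size):
        for j in range(0, w, block_size):
            blk = m[i:i + block_size, j:j + block_size]
            lower[i:i + block_size, j:j + block_size] = blk.min()
            upper[i:i + block_size, j:j + block_size] = blk.max()
    return lower, upper

mu = [[0.1, 0.2, 0.8, 0.9],     # toy 4x4 membership map of an affected region
      [0.1, 0.3, 0.7, 1.0],
      [0.0, 0.0, 0.4, 0.5],
      [0.0, 0.1, 0.6, 0.6]]
lo, up = rough_fuzzy_approximations(mu)
print(lo, up, sep="\n")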

A Hybrid Data Mining Technique for Improving the Classification Accuracy of Microarray Data Set

By Sujata Dash Bichitrananda Patra B.K. Tripathy

DOI: https://doi.org/10.5815/ijieeb.2012.02.07, Pub. Date: 8 Apr. 2012

A major challenge in biomedical studies in recent years has been the classification of gene expression profiles into categories, such as cases and controls. This is done by first training a classifier on a labeled training set containing samples from the two populations, and then using that classifier to predict the labels of new samples. Such predictions have recently been shown to improve diagnosis and treatment selection for several diseases. This procedure is complicated, however, by the high dimensionality of the data. While microarrays can measure the levels of thousands of genes per sample, case-control microarray studies usually involve no more than several dozen samples. Standard classifiers do not work well in these situations, where the number of features (the gene expression levels measured in the microarrays) far exceeds the number of samples. Selecting only the features that are most relevant for discriminating between the two categories can help construct better classifiers, in terms of both accuracy and efficiency. This paper provides a comparison between a dimension reduction technique, namely the Partial Least Squares (PLS) method, and a hybrid feature selection scheme, and evaluates the relative performance of four different supervised classification procedures incorporating those methods: the Radial Basis Function Network (RBFN), the Multilayer Perceptron Network (MLP), the Support Vector Machine with a polynomial kernel (Polynomial-SVM) and the Support Vector Machine with an RBF kernel (RBF-SVM). Experimental results show that Partial Least Squares (PLS) regression is an appropriate feature selection method and that combining different classification and feature selection approaches makes it possible to construct high performance classification models for microarray data.
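
A minimal Python sketch of the PLS-then-classify pipeline described above is given below, using scikit-learn. The synthetic data, the number of PLS components and the SVM settings are placeholders, not the configuration evaluated in the paper.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2000))      # 60 samples, 2000 simulated gene expression levels
y = rng.integers(0, 2, size=60)      # case / control labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Supervised dimension reduction: project expression profiles onto a few
# PLS components computed against the class labels.
pls = PLSRegression(n_components=5)
pls.fit(X_tr, y_tr)
Z_tr, Z_te = pls.transform(X_tr), pls.transform(X_te)

# Classify in the reduced space with an RBF-kernel SVM, one of the four
# classifiers compared in the paper.
clf = SVC(kernel="rbf").fit(Z_tr, y_tr)
print("test accuracy:", clf.score(Z_te, y_te))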
