Mesut Polatgil

Workplace: Şarkışla Faculty of Applied Sciences / Computer Technologies, Sivas, 58070, Turkey



Research Interests: Numerical Analysis, Data Structures and Algorithms, Computational Learning Theory


Mesut Polatgil is a lecturer in computer science in the Faculty of Applied Sciences. He received his Ph.D. in numerical methods from Sivas Cumhuriyet University. He has more than eight years of teaching and research experience in information systems development and machine learning. His research interests include machine learning, numerical methods and data science.

Author Articles
Outlier Detection Algorithm Based on Fuzzy C-Means and Self-organizing Maps Clustering Methods

By Mesut Polatgil

DOI:, Pub. Date: 8 Aug. 2022

Data mining and machine learning are areas in which research has increased in recent years. Both focus on drawing meaningful conclusions from collected data, so careful data preparation is essential before any algorithm is applied. One of the most critical steps in data preparation is outlier detection, because observations whose characteristics differ from the rest of the data can distort the results of the applied algorithms and lead to erroneous conclusions. New outlier detection methods have been developed, and machine learning and data mining algorithms have achieved more successful results with them. Algorithms such as Fuzzy C-Means (FCM) and Self-Organizing Maps (SOM) have produced successful results for outlier detection in this area. However, no existing outlier detection method uses these two powerful clustering methods together. This study proposes a new outlier detection algorithm (FUSOMOUT) that combines SOM and FCM clustering, with the aim of increasing the success of both clustering and classification algorithms. The proposed algorithm was applied to four datasets with different characteristics (the Wisconsin breast cancer dataset (WDBC), Wine, Diabetes and Kddcup99) and was shown to significantly increase classification accuracy, with the Silhouette, Calinski-Harabasz and Davies-Bouldin indexes used as measures of clustering success.

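The abstract describes combining FCM and SOM, but the paper's FUSOMOUT implementation is not reproduced here. The following is a minimal NumPy sketch of the FCM half of the idea only: points whose strongest cluster membership is weak are flagged as outlier candidates. The function names (`fcm`, `fcm_outlier_scores`) and the scoring rule are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def fcm(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Minimal Fuzzy C-Means: returns cluster centers and the membership matrix U."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial memberships; each row sums to 1.
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        # Weighted cluster centers: c_i = sum_k u_ik^m x_k / sum_k u_ik^m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance from every point to every center (eps avoids division by zero).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

def fcm_outlier_scores(X, n_clusters=2):
    """Score each point by 1 - max membership: ambiguous points score high."""
    _, U = fcm(X, n_clusters)
    return 1.0 - U.max(axis=1)

# Two tight clusters plus one far-away point injected at index 40.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(5.0, 0.1, (20, 2)),
               [[20.0, 20.0]]])
scores = fcm_outlier_scores(X, n_clusters=2)
print(int(np.argmax(scores)))  # index of the highest-scoring (most outlying) point
```

A full FUSOMOUT-style method would additionally cluster with a SOM and combine both views; the membership-based score above is only the FCM side of that picture.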
Investigation of the Effect of Normalization Methods on ANFIS Success: Forestfire and Diabets Datasets

By Mesut Polatgil

DOI:, Pub. Date: 8 Feb. 2022

Machine learning and artificial intelligence techniques play an ever larger role in our lives, and studies in this field increase day by day. Data is vital for these studies, and new methods for drawing meaningful conclusions from available data continue to be proposed with successful results. Preparing the data is therefore very important, and the most critical stage of data preprocessing is scaling, or normalization, of the data. Machine learning libraries such as scikit-learn and programming languages such as R provide the tools needed to scale data, but it is not known in advance which normalization method will yield the most successful results. The success of these normalization methods has been investigated for many different algorithms, but no such study has been carried out on the adaptive neuro-fuzzy inference system (ANFIS). The aim of this study is to examine the effect of normalization methods on ANFIS for both classification and regression problems, so that studies using the ANFIS method have guidance on which normalization process gives better results during data preprocessing. Four different normalization methods from the scikit-learn library were applied to the Diabetes and Forestfire datasets from the UCI repository, and the results are presented separately for classification and regression. Min-max normalization was found to be more successful for classification problems, while working with the original data was more successful for regression problems.

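The abstract mentions four scikit-learn normalization methods without naming them. The sketch below assumes a typical set of four scalers from `sklearn.preprocessing` (min-max, z-score, max-abs, robust) purely for illustration of how such a comparison is set up; the toy matrix stands in for a dataset such as Diabetes or Forestfire.

```python
import numpy as np
from sklearn.preprocessing import (MinMaxScaler, StandardScaler,
                                   MaxAbsScaler, RobustScaler)

# Toy feature matrix: two features on very different scales.
X = np.array([[1.0,  200.0],
              [2.0,  300.0],
              [3.0, 1000.0]])

scalers = {
    "min-max": MinMaxScaler(),    # rescales each feature to [0, 1]
    "z-score": StandardScaler(),  # zero mean, unit variance per feature
    "max-abs": MaxAbsScaler(),    # divides by the max absolute value
    "robust":  RobustScaler(),    # centers on median, scales by IQR
}

# Fit and transform with each method; in a real study each scaled copy
# would then be fed to the downstream model (here, ANFIS) for comparison.
scaled = {name: s.fit_transform(X) for name, s in scalers.items()}
print(scaled["min-max"])
```

Note that scalers should be fitted on training data only and then applied to test data with `transform`, so that test-set statistics do not leak into preprocessing.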