Workplace: Department of Computer Science and Engineering, Aliah University, Kolkata, 700160, West Bengal, India
Research Interests: Software Engineering, Computational Science and Engineering
Biography
Ayatullah Faruk Mollah (Senior Member, IEEE, USA) is an Assistant Professor and former Head (Officiating) of the Department of Computer Science and Engineering, Aliah University, India. He completed his doctoral studies at Jadavpur University, India. He is also a Life Member of the Indian Unit of the International Association for Pattern Recognition (IAPR). He was a Senior Software Engineer at Atrenta (I) Pvt. Ltd., Noida, India, and has also worked with ScanBiz Mobile Solutions, New York, USA. He was awarded the prestigious European Union Erasmus Mundus cLink Fellowship, the University Grants Commission (UGC) Research Fellowship for Meritorious Students in Science, and a Postdoctoral Fellowship from the University of Warsaw, Poland. He is actively engaged in research. His research interests include deep learning, data science, image and video analysis, machine learning, and bioinformatics. He is currently guiding a number of PhD students in frontier areas of image analysis. So far, he has authored nearly one hundred articles in refereed journals and conference proceedings. Dr. Mollah is also a Co-Principal Investigator of the Multi-script project funded by the Department of Science and Technology, Govt. of India.
By Tauseef Khan, Ayatullah Faruk Mollah
DOI: https://doi.org/10.5815/ijigsp.2024.01.05, Pub. Date: 8 Feb. 2024
Scene text detection from natural images has been a prime focus for the last few decades. Classification of foreground object components is an essential task in many scene text detection approaches operating in uncontrolled environments. As it heavily relies upon robust and discriminating features, several features have been engineered for component-level text/non-text classification. The competency of such feature descriptors, particularly relative to deep features, needs to be examined. In this paper, we present prospective feature descriptors applicable to component-level text/non-text classification and examine their performance alongside convolutional neural network (CNN) based deep features. A series of experiments has been carried out on publicly available benchmark dataset(s) of multi-script document-type, scene-type, and combined text vs. non-text components. Interestingly, feature combination is found to put well-demonstrated deep features into tough competition on most datasets under consideration. For instance, on the combined text/non-text classification problem, CNN-based deep features yield 97.6% accuracy, whereas aggregated features produce 98.4%. Similar findings are obtained in the other experiments as well. Along with the quantitative figures, the results have been analyzed and an insightful discussion is provided to substantiate the conjectures drawn herein. This study may cater to the need of leveraging potentially strong handcrafted feature descriptors.
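The feature-aggregation idea the abstract describes — concatenating handcrafted descriptors with deep features before classification — can be sketched roughly as follows. This is a minimal illustration with synthetic stand-in vectors and a nearest-centroid classifier; the paper's actual descriptors, CNN architecture, and classifier are not reproduced here, and all names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-component feature vectors
# (in the paper these would come from engineered descriptors and a CNN).
n, d_hand, d_deep = 200, 16, 64
labels = rng.integers(0, 2, n)                 # 0 = non-text, 1 = text
hand = rng.normal(labels[:, None], 1.0, (n, d_hand))
deep = rng.normal(labels[:, None], 1.0, (n, d_deep))

def aggregate(hand_feats, deep_feats):
    """Concatenate the two feature groups after per-group normalisation,
    so neither group dominates purely by scale."""
    def norm(x):
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)
    return np.hstack([norm(hand_feats), norm(deep_feats)])

X = aggregate(hand, deep)                      # shape (200, 80)

# Nearest-centroid classification as a simple placeholder classifier
centroids = np.stack([X[labels == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X[:, None, :] - centroids) ** 2).sum(axis=-1), axis=1)
accuracy = (pred == labels).mean()
```

In practice, any classifier (SVM, MLP, etc.) can sit on top of the aggregated vector; the key step is only the normalised concatenation of the two feature families.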