David Raju Kolluri

Work place: St.Peter’s Engineering College, Department of CSE, Secunderabad, 500014, India

E-mail: kolluridavid@gmail.com

Research Interests: Data Structures and Algorithms, Computer Architecture and Organization, Computational Learning Theory, Computer systems and computational processes

Biography

David Raju Kolluri received his M.Tech degree in Computer Science & Engineering from JNTUH in 2010, B.Tech (CSE) from JNTUH in 2002, and Diploma in CSE from Kakatiya University in 1996. He is currently pursuing a PhD (CSE) at Rayalaseema University in the area of Data Mining. His research interests include Big Data and Machine Learning. He has a total of 15 years of teaching experience. He is currently working at St. Peter's Engineering College as an associate professor in the Department of Computer Science and Engineering, with a strong background in programming language subjects.

Author Articles
Efficient Modelling Technique based Speaker Recognition under Limited Speech Data

By Satyanand Singh, Abhay Kumar, David Raju Kolluri

DOI: https://doi.org/10.5815/ijigsp.2016.11.06, Pub. Date: 8 Nov. 2016

To date, speaker-specific feature extraction and modelling techniques in automatic speaker recognition (ASR) have been designed for a sufficient amount of speech data. When the speech data is limited, ASR performance degrades drastically. ASR with limited speech data is always a highly challenging task because of the short utterances. The main goal of ASR is to judge which member of the registered speakers an incoming speaker is. This paper presents a comparison of three different techniques for modelling speaker-specific extracted information: (i) Fuzzy C-Means (FCM), (ii) Fuzzy Vector Quantization 2 (FVQ2), and (iii) Novel Fuzzy Vector Quantization (NFVQ). Using these three modelling techniques, we developed a text-independent automatic speaker recognition system that is computationally modest and capable of recognizing a non-cooperative speaker. In this investigation, speaker recognition efficiency is compared on less than 2 s of text-independent test and training utterances from the Texas Instruments and Massachusetts Institute of Technology (TIMIT) database and a self-collected database. The efficiency of ASR is improved by 1% over the baseline by hiding the outliers and assigning them to their closest codebook vectors; the efficiency of the proposed modelling technique is 98.8% and 98.1% for the TIMIT and self-collected databases, respectively.

[...] Read more.
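
The fuzzy clustering codebooks compared in the abstract above share the same basic building block. As a rough illustration only, and not the authors' implementation, a minimal fuzzy c-means codebook trainer over per-frame speaker features (e.g. MFCCs) might look like the following sketch; the function and parameter names (`fcm_codebook`, `n_codewords`, `m`) are hypothetical.

```python
import numpy as np

def fcm_codebook(features, n_codewords=64, m=2.0, n_iter=50, tol=1e-5, seed=0):
    """Train a fuzzy c-means codebook over speaker feature vectors.

    features    : (N, D) array of per-frame features (e.g. MFCCs).
    n_codewords : number of codebook vectors (cluster centres).
    m           : fuzzifier (> 1); m = 2 is a common default.
    Returns (centres, memberships) with shapes (C, D) and (C, N).
    """
    rng = np.random.default_rng(seed)
    n_frames = features.shape[0]

    # Random initial membership matrix U; each column sums to 1.
    u = rng.random((n_codewords, n_frames))
    u /= u.sum(axis=0, keepdims=True)

    for _ in range(n_iter):
        um = u ** m

        # Update codebook vectors as membership-weighted means of the frames.
        centres = um @ features / um.sum(axis=1, keepdims=True)

        # Squared Euclidean distance from every frame to every centre.
        d2 = ((features[None, :, :] - centres[:, None, :]) ** 2).sum(axis=2)
        d2 = np.maximum(d2, 1e-12)  # avoid division by zero

        # Standard FCM membership update:
        # u_ik proportional to d_ik^{-2/(m-1)}, normalised over codewords.
        inv = d2 ** (-1.0 / (m - 1.0))
        u_new = inv / inv.sum(axis=0, keepdims=True)

        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new

    return centres, u
```

In such a setup, a test utterance could then be scored against each registered speaker's codebook by its average quantization distortion, with the lowest-distortion codebook identifying the speaker.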
Other Articles