D.T.V. Dharmajee Rao

Work place: Aditya Institute of Technology and Management, Tekkali-532201, Srikakulam, Andhra Pradesh, India

E-mail: dtvdrao@gmail.com

Research Interests: Neural Networks, Data Mining, Data Structures and Algorithms, Programming Language Theory

Biography

D.T.V. Dharmajee Rao is currently working as a Professor in the Department of Computer Science and Engineering at Aditya Institute of Technology and Management, Tekkali, Srikakulam, Andhra Pradesh, India. He received his B.Tech. degree in Computer Science and Engineering in 1993 and his M.Tech. degree in Computer Science and Technology in 2001, both from Andhra University, Visakhapatnam, Andhra Pradesh, India. He is pursuing a Ph.D. in the Department of Computer Science and Engineering, JNT University, Kakinada, Andhra Pradesh, India. He has published more than 12 papers in international and national conferences and journals. His current research interests include Data Mining, Neural Networks, Parallel Computing and Linear Algebra Techniques.

Author Articles
Accelerating Training of Deep Neural Networks on GPU using CUDA

By D.T.V. Dharmajee Rao, K.V. Ramana

DOI: https://doi.org/10.5815/ijisa.2019.05.03, Pub. Date: 8 May 2019

The development of fast and efficient training algorithms for Deep Neural Networks has attracted considerable interest in recent years, because the biggest drawback of Deep Neural Networks is their enormous computational cost and the long time required to train their parameters. This has motivated several researchers to focus on recent advances in hardware architectures and in parallel programming models and paradigms for accelerating the training of Deep Neural Networks. We revisited the concepts and mechanisms of typical Deep Neural Network training algorithms, such as the Backpropagation Algorithm and the Boltzmann Machine Algorithm, and observed that matrix multiplication constitutes the major portion of the workload of the training process, because it is carried out a huge number of times during training. With the advent of many-core GPU technologies, matrix multiplication can be performed very efficiently in parallel, so training a Deep Neural Network no longer takes as long as it did a few years ago. CUDA is one of the high-performance parallel programming models for exploiting the capabilities of modern many-core GPU systems. In this paper, we propose to modify the Backpropagation Algorithm and the Boltzmann Machine Algorithm with CUDA parallel matrix multiplication and test them on a many-core GPU system. Finally, we find that the proposed strategies train Deep Neural Networks much faster than the classic strategies.

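For illustration, the sketch below shows the kind of CUDA matrix-multiplication kernel the abstract refers to: a naive kernel with one thread per output element of C = A × B for square N × N matrices in row-major layout. This is a minimal, assumed example, not the authors' published implementation, and the matrix sizes and launch configuration are arbitrary choices.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Naive matrix multiplication: each thread computes one element of C.
__global__ void matmul(const float* A, const float* B, float* C, int N) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < N && col < N) {
        float sum = 0.0f;
        for (int k = 0; k < N; ++k)
            sum += A[row * N + k] * B[k * N + col];
        C[row * N + col] = sum;
    }
}

int main() {
    const int N = 512;                       // illustrative size
    size_t bytes = (size_t)N * N * sizeof(float);
    float *A, *B, *C;
    cudaMallocManaged(&A, bytes);            // unified memory for brevity
    cudaMallocManaged(&B, bytes);
    cudaMallocManaged(&C, bytes);
    for (int i = 0; i < N * N; ++i) { A[i] = 1.0f; B[i] = 2.0f; }

    dim3 block(16, 16);                      // 256 threads per block
    dim3 grid((N + block.x - 1) / block.x, (N + block.y - 1) / block.y);
    matmul<<<grid, block>>>(A, B, C, N);
    cudaDeviceSynchronize();

    printf("C[0] = %f (expected %f)\n", C[0], 2.0f * N);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

In a training loop, kernels like this (or the tuned equivalents in cuBLAS) replace the CPU matrix products in the forward and backward passes; the N^2 output elements are computed concurrently rather than sequentially, which is the source of the speed-up the abstract reports.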
Winograd’s Inequality: Effectiveness for Efficient Training of Deep Neural Networks

By D.T.V. Dharmajee Rao, K.V. Ramana

DOI: https://doi.org/10.5815/ijisa.2018.06.06, Pub. Date: 8 Jun. 2018

Matrix multiplication is widely used in a variety of applications and is often one of the core components of many scientific computations. This paper examines three algorithms for computing the product of two matrices: the naive algorithm, Strassen's algorithm and Winograd's algorithm. One of the main factors in determining the efficiency of an algorithm is its execution time, i.e., how long the algorithm takes to accomplish its work. All three algorithms are implemented and their execution times measured, and we find experimentally that Winograd's algorithm is the fastest method for matrix multiplication. Deep Neural Networks are used in many applications. Training a Deep Neural Network is a time-consuming process, especially when the number of hidden layers and nodes is large. We revisit the mechanism of the Backpropagation Algorithm and the Boltzmann Machine Algorithm for training a Deep Neural Network and consider how the weighted sum of inputs is computed. Computing the product of the weight and input matrices is carried out for several hundreds of thousands of epochs during the training of a Deep Neural Network. We propose to modify the Backpropagation Algorithm and the Boltzmann Machine Algorithm by using the fast Winograd's algorithm. Finally, we find that the proposed methods reduce the long training time of Deep Neural Networks compared with the existing direct methods.

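As a point of reference, the sketch below implements the classical Winograd inner-product formulation of matrix multiplication in plain C++ (it also compiles as CUDA host code under nvcc). The idea is to precompute, for each row of A and each column of B, the sum of products of adjacent element pairs; each entry of C then needs only about half as many multiplications in its inner loop, at the cost of extra additions. This is an assumed textbook version and may differ in detail from the paper's implementation.

```cuda
#include <cstdio>
#include <vector>

// Winograd's algorithm: C = A * B, with A of size m x n and B of size
// n x p, both stored row-major in std::vector<float>.
void winograd(const std::vector<float>& A, const std::vector<float>& B,
              std::vector<float>& C, int m, int n, int p) {
    int half = n / 2;

    // Row factors of A: sum of products of adjacent column pairs.
    std::vector<float> rowFactor(m, 0.0f);
    for (int i = 0; i < m; ++i)
        for (int j = 0; j < half; ++j)
            rowFactor[i] += A[i * n + 2 * j] * A[i * n + 2 * j + 1];

    // Column factors of B: sum of products of adjacent row pairs.
    std::vector<float> colFactor(p, 0.0f);
    for (int k = 0; k < p; ++k)
        for (int j = 0; j < half; ++j)
            colFactor[k] += B[(2 * j) * p + k] * B[(2 * j + 1) * p + k];

    // Main loop: roughly n/2 multiplications per output element.
    for (int i = 0; i < m; ++i)
        for (int k = 0; k < p; ++k) {
            float sum = -rowFactor[i] - colFactor[k];
            for (int j = 0; j < half; ++j)
                sum += (A[i * n + 2 * j]     + B[(2 * j + 1) * p + k]) *
                       (A[i * n + 2 * j + 1] + B[(2 * j) * p + k]);
            if (n % 2 == 1)  // odd inner dimension: add the last term directly
                sum += A[i * n + (n - 1)] * B[(n - 1) * p + k];
            C[i * p + k] = sum;
        }
}

int main() {
    int m = 2, n = 3, p = 2;
    std::vector<float> A = {1, 2, 3, 4, 5, 6};     // 2 x 3
    std::vector<float> B = {7, 8, 9, 10, 11, 12};  // 3 x 2
    std::vector<float> C(m * p, 0.0f);
    winograd(A, B, C, m, n, p);
    // Expected product: [58 64; 139 154]
    for (int i = 0; i < m; ++i) {
        for (int k = 0; k < p; ++k) printf("%6.1f ", C[i * p + k]);
        printf("\n");
    }
    return 0;
}
```

In the training setting the abstract describes, this routine would stand in for the direct (naive) product of the weight and input matrices; since that product is recomputed for hundreds of thousands of epochs, even a constant-factor reduction in multiplications accumulates into a substantial saving.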
Other Articles