Work place: Wireless Sensor Networks Lab, Department of Electronics and Communication Engineering, National Institute of Technology Patna, Patna, Bihar, 800005, India
E-mail: rajeev.arya@nitp.ac.in
Website:
Research Interests: Communications, Wireless Communication, Wireless Networks
Biography
Rajeev Arya received his Engineering degree in Electronics & Communication Engineering from Government Engineering College, Ujjain (RGPV University, Bhopal), India, in 2008, and the Master of Technology in Electronics & Communication Engineering from the Indian Institute of Technology (ISM), Dhanbad, India, in 2012. He received the Ph.D. in Communication Engineering from the Indian Institute of Technology Roorkee (IIT Roorkee), India, in 2016. He received the Ministry of Human Resource Development (MHRD, India) scholarship during his M.Tech. and Ph.D. studies. He is currently an Assistant Professor with the Department of Electronics & Communication Engineering at the National Institute of Technology Patna, India. His current research interests are communication systems and wireless communication.
By Chellarao Chowdary Mallipudi, Saurabh Chandra, Prateek Prateek, Rajeev Arya, Akhtar Husain, Shamimul Qamar
DOI: https://doi.org/10.5815/ijcnis.2023.04.02, Pub. Date: 8 Aug. 2023
Billions of devices are interconnected through the Internet of Things (IoT) and used in applications such as wearable devices, e-healthcare, agriculture, and transportation. In underlaid Device-to-Device (D2D) communication, interconnected devices establish a direct link and share information by reusing the spectrum of cellular users, which enhances spectral efficiency at low power consumption. However, the reuse of cellular spectrum by D2D users causes severe mutual interference, which can degrade network performance. Therefore, we propose a Q-Learning based low power selection scheme using multi-agent reinforcement learning to mitigate this interference and thereby increase the capacity of the D2D network. To maximize capacity, the reward function is reformulated under a stochastic policy environment. Using this stochastic approach, we derive the proposed optimal low power consumption technique, which ensures the quality of service (QoS) requirements of both cellular devices and D2D users for D2D communication in 5G networks and increases resource utilization. Numerical results confirm that the proposed scheme improves spectral efficiency and sum rate by 14% and 12.65%, respectively, compared with the standard Q-Learning approach.
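The core idea of the abstract — a D2D transmitter learning, via Q-Learning, to pick a low transmit power that balances its own capacity against the interference it causes to cellular users — can be sketched in a minimal single-state form. All constants below (candidate power levels, channel gains, noise, and the interference-penalty reward) are illustrative assumptions for the sketch, not the authors' actual system model or reward function:

```python
import math
import random

# Candidate discrete transmit powers (W) for one D2D agent -- assumed values.
POWER_LEVELS = [0.05, 0.1, 0.2, 0.4]
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, epsilon-greedy rate

def reward(p, g_d2d=1.0, g_penalty=6.0, noise=0.01, cell_intf=0.05):
    """Assumed reward: Shannon-like D2D capacity at power p, minus a stiff
    penalty proportional to p, standing in for the QoS constraint that
    protects the cellular user from D2D interference."""
    sinr = (g_d2d * p) / (noise + cell_intf)
    return math.log2(1.0 + sinr) - g_penalty * p

def train(episodes=2000, seed=0):
    """Single-state Q-learning over the power-level actions."""
    random.seed(seed)
    q = [0.0] * len(POWER_LEVELS)
    for _ in range(episodes):
        if random.random() < EPS:                     # explore
            a = random.randrange(len(POWER_LEVELS))
        else:                                         # exploit current best
            a = max(range(len(POWER_LEVELS)), key=q.__getitem__)
        r = reward(POWER_LEVELS[a])
        # Q-learning update; with one state, max_a' Q(s', a') is max(q).
        q[a] += ALPHA * (r + GAMMA * max(q) - q[a])
    return q

q = train()
best = POWER_LEVELS[max(range(len(POWER_LEVELS)), key=q.__getitem__)]
print("learned power level (W):", best)
```

Because the penalty term grows linearly in power while capacity grows only logarithmically, the agent settles on an intermediate rather than maximal power level, which mirrors the paper's low-power-selection goal; the multi-agent version in the paper would run one such learner per D2D pair with rewards coupled through mutual interference.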