Prakash Jadhav

Work place: K. S. School of Engineering & Management / ECE Department, Bangalore, 560109, India

E-mail: pcjadhav12@gmail.com


Research Interests: Engineering, Computational Engineering, Computational Science and Engineering

Biography

Prof. Prakash Jadhav is an Assistant Professor at K. S. School of Engineering and Management, Bangalore. He received his B.E. degree from S.T.J. Institute of Technology, Ranebennur, in 1999 and his M.Tech degree from A.M.C. Engineering College, Bangalore, in 2006, and is pursuing a Ph.D. at Visvesvaraya Technological University, Belgaum, Karnataka, India. He has two international journal publications, two international conference publications and six national conference publications to his credit. His research work is carried out in Multimedia Networking and Communication. He holds IEEE and MISTE memberships and received a best paper presentation award at "KNOWLEDGE UTSAV", conducted at S.B.M.J.C.E., Bangalore.

Author Articles
Codec with Neuro-Fuzzy Motion Compensation & Multi-Scale Wavelets for Quality Video Frames

By Prakash Jadhav, Siddesh G K

DOI: https://doi.org/10.5815/ijigsp.2017.09.02, Pub. Date: 8 Sep. 2017

Virtual Reality, or Immersive Multimedia as it is sometimes known, is the realization of a real-world environment in terms of video, audio and ambience such as smell, airflow, background noise and the various other ingredients that make up the real world. This environment gives us a sense of reality, as if we are living in a real world, although the implementation of Virtual Reality remains on a laboratory scale. Audio has attained remarkable clarity by splitting the spectrum into frequency bands appropriate for rendering on a number of speakers or acoustic wave-guides, and the combination and synchronization of audio and video with this clarity has produced renditions matched in quality only by 3D cinema. Virtual Reality itself, however, is still at the research and experimental stage. The objective of this research is to explore and innovate on the more esoteric aspects of Virtual Reality: stereo vision incorporating depth of scene, rendering of video on a spherical surface, depth-based audio rendering, and the application of self-modifying wavelets to compress the audio and video payload beyond levels achieved hitherto, so that the maximum reduction in the size of the transmitted payload is obtained. Given the finer aspects of Virtual Reality we propose to implement, such as stereo rendering of video and multi-channel rendering of audio with the associated back-channel activities, the bandwidth requirements increase considerably. Against this backdrop, greater compression becomes necessary so that multimedia content can be rendered effortlessly in real time.
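As a rough illustration of the multi-scale wavelet idea mentioned in the abstract (this is not the codec proposed in the paper), the following Python sketch decomposes a single frame into several scales with the PyWavelets library and soft-thresholds the detail sub-bands, which is the basic mechanism by which wavelet coders shrink the payload. The wavelet family, decomposition level, threshold value and the random placeholder frame are all illustrative assumptions.

```python
# Minimal multi-scale wavelet compression sketch (illustrative only,
# not the authors' neuro-fuzzy codec). Requires numpy and PyWavelets.
import numpy as np
import pywt

frame = np.random.rand(256, 256)  # placeholder for one luminance plane of a video frame

# Three-level 2D decomposition with a biorthogonal wavelet (assumed choice).
coeffs = pywt.wavedec2(frame, wavelet='bior4.4', level=3)

# Keep the coarse approximation; soft-threshold every detail sub-band,
# zeroing out small coefficients to reduce the data that must be coded.
thresh = 0.05
compressed = [coeffs[0]] + [
    tuple(pywt.threshold(band, thresh, mode='soft') for band in detail)
    for detail in coeffs[1:]
]

# Reconstruct the (lossy) frame from the thresholded coefficients.
reconstructed = pywt.waverec2(compressed, wavelet='bior4.4')
print("max abs reconstruction error:", np.abs(frame - reconstructed).max())
```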

Other Articles