Sabyasachi Chakraborty

Work place: School of Computer Engineering, KIIT University, Bhubaneswar, India

E-mail: c.sabyasachi99@gmail.com


Research Interests: Computer systems and computational processes, Natural Language Processing, Data Compression, Data Structures and Algorithms, Programming Language Theory

Biography

Sabyasachi Chakraborty was born on the 9th of September 1996 in Shillong. He is pursuing a B.Tech in the School of Computer Engineering at KIIT University. His research areas include Data Analytics and Big Data, Brain-Computer Interfaces, Natural Language Processing, and Machine Learning. He is currently interning at HighRadius Technologies, Hyderabad, India as a Machine Learning Engineer. His paper "A Proposal for Shelf Placement Optimization for Retail Industry using Big Data Analytics" was accepted at Data Science Congress 2017, and his article "Healthcare after the advent of Information Technology" was published in CSI Communications.
Sabyasachi Chakraborty is a member of the IET (The Institution of Engineering and Technology).

Author Articles
A Proposal for High Availability of HDFS Architecture based on Threshold Limit and Saturation Limit of the Namenode

By Sabyasachi Chakraborty, Kashyap Barua, Manjusha Pandey, Siddharth Rautaray

DOI: https://doi.org/10.5815/ijieeb.2017.06.04, Pub. Date: 8 Nov. 2017

Big Data, one of the newest technologies in the present field of science and technology, has driven a major shift toward data-centric architectures. Closely following Big Data is Hadoop, which has brought the entire Big Data environment under its purview and underpins the storage and analysis of big data. This paper discusses a hierarchical architecture of Hadoop nodes, namely Namenodes and Datanodes, for maintaining a High Availability Hadoop Distributed File System. The High Availability HDFS architecture builds on two fundamental principles of Hadoop: the master-slave architecture and the elimination of single points of node failure. The architecture is designed so that the load on the Datanodes remains optimal and no data is lost, regardless of the size of the data.
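The threshold-limit idea from the title can be illustrated with a minimal sketch. The class and function names, limit values, and spill-over routing below are hypothetical illustrations, not the paper's actual design: the point is simply that requests are directed to the active Namenode only up to a threshold limit (below its hard saturation limit), with the excess handed to a standby Namenode so no single node becomes a point of failure.

```python
# Hypothetical sketch of threshold-based load handover between Namenodes.
# "threshold_limit" (start spilling load) and "saturation_limit" (hard
# capacity) are illustrative names and values, not the paper's parameters.

class Namenode:
    def __init__(self, name, threshold_limit, saturation_limit):
        self.name = name
        self.threshold_limit = threshold_limit
        self.saturation_limit = saturation_limit
        self.load = 0  # e.g. metadata operations currently in flight

    def accept(self, requests):
        """Accept requests up to the saturation limit; return the overflow."""
        taken = min(requests, self.saturation_limit - self.load)
        self.load += taken
        return requests - taken

    def over_threshold(self):
        return self.load >= self.threshold_limit


def route(requests, active, standby):
    """Fill the active Namenode only up to its threshold limit,
    then spill the remainder to the standby Namenode."""
    to_active = max(0, min(requests, active.threshold_limit - active.load))
    overflow = active.accept(to_active) + (requests - to_active)
    return standby.accept(overflow)  # requests neither node could take


active = Namenode("nn-primary", threshold_limit=80, saturation_limit=100)
standby = Namenode("nn-standby", threshold_limit=80, saturation_limit=100)
dropped = route(120, active, standby)
# active takes 80 (its threshold), standby takes the remaining 40
```

Keeping the active Namenode at its threshold rather than its saturation limit leaves headroom for metadata operations already in flight, which is one way to read the abstract's claim of optimal load with no data loss.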

Other Articles