International Journal of Image, Graphics and Signal Processing (IJIGSP)

IJIGSP Vol. 14, No. 5, Oct. 2022

Cover page and Table of Contents: PDF (size: 664KB)

Table Of Contents

REGULAR PAPERS

Self-supervised Model Based on Masked Autoencoders Advance CT Scans Classification

By Jiashu Xu, Sergii Stirenko

DOI: https://doi.org/10.5815/ijigsp.2022.05.01, Pub. Date: 8 Oct. 2022

The coronavirus pandemic has been ongoing since 2019, and the trend is still not abating. Classifying medical CT scans to assist diagnosis is therefore particularly important. Supervised deep learning algorithms have achieved great success in medical CT scan classification, but medical image datasets require professional annotation, and many research datasets are not publicly available. To address this problem, this paper draws on the self-supervised learning algorithm MAE and uses an MAE model pre-trained on ImageNet for transfer learning on CT scan datasets. Through extensive experiments on the COVID-CT and SARS-CoV-2 datasets, we compare this SSL-based method with other state-of-the-art supervised pretraining methods. Experimental results show that our method improves the generalization performance of the model more effectively and avoids the risk of overfitting on small datasets, achieving almost the same accuracy as supervised learning on both test datasets. Finally, ablation experiments demonstrate the effectiveness of our method and how it works.
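As a rough illustration of the transfer-learning recipe this abstract describes (not the authors' exact code), the sketch below loads an ImageNet MAE-pretrained ViT backbone and fine-tunes it as a binary COVID/non-COVID CT classifier. The timm weight tag `vit_base_patch16_224.mae`, the dataset folder layout, and all hyperparameters are assumptions for illustration.

```python
# Hedged sketch: fine-tune an ImageNet MAE-pretrained ViT on a small CT dataset.
# Assumes timm exposes MAE weights under the tag "vit_base_patch16_224.mae"
# and that CT images are arranged in ImageFolder-style class subdirectories.
import timm
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

model = timm.create_model("vit_base_patch16_224.mae", pretrained=True, num_classes=2)

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder("covid_ct/train", transform=tfm)  # hypothetical path
loader = DataLoader(train_ds, batch_size=16, shuffle=True)

opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:           # one epoch shown; repeat as needed
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```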

Transformation of Classical to Quantum Image, Representation, Processing and Noise Mitigation

By Shyam Sihare

DOI: https://doi.org/10.5815/ijigsp.2022.05.02, Pub. Date: 8 Oct. 2022

Quantum and classical computers represent images in drastically different ways. A classical computer uses bits, whereas a quantum computer uses qubits. In this paper, the quantum image representation is analogous to the classical image representation: qubits and their associated properties are used to represent quantum images. Quantum imaging has previously relied on superposition, and the imaging here is likewise implemented using the superposition feature. Unitary matrices are then used to represent quantum circuits. A small image is chosen for the quantum representation. IBM's Qiskit software and Anaconda Python were used to create the quantum circuits. The quantum circuit was run with 10,000 shots on an IBM real-time computer and on the Aer simulator. Noise was reduced more on the IBM real-time computer than on the IBM Aer simulator; consequently, the Aer simulator's noise and qubit errors are higher than the IBM real-time computer's. Quantum circuit design and image processing are both done with Qiskit programming, which is provided as an appendix at the end of the paper. As the number of shots rises, the noise level decreases further; operating with a low number of shots increases noise and qubit errors. Quantum image processing, noise reduction, and error correction therefore improve as the number of circuit computation shots increases. Quantum image processing, representation, noise reduction, and error correction all make use of the quantum superposition concept.
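As a minimal, hedged illustration of the shot-based workflow described above (not the paper's exact encoding), the sketch below angle-encodes a tiny grayscale image onto qubits, runs the circuit on the Aer simulator with 10,000 shots, and recovers pixel intensities from the measurement counts. Package names follow the current qiskit/qiskit-aer split and may differ for older Qiskit releases.

```python
# Hedged sketch: angle-encode a 2x2 grayscale image (one qubit per pixel),
# sample it with 10,000 shots on the Aer simulator, and estimate intensities
# from the measurement statistics. A simplified stand-in for the paper's
# representation, not its exact circuit.
import numpy as np
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

pixels = np.array([0.0, 0.25, 0.5, 1.0])         # normalized intensities in [0, 1]
qc = QuantumCircuit(len(pixels))
for i, p in enumerate(pixels):
    qc.ry(p * np.pi, i)                           # intensity -> rotation angle
qc.measure_all()

sim = AerSimulator()
shots = 10_000
counts = sim.run(transpile(qc, sim), shots=shots).result().get_counts()

# P(qubit i = 1) = sin^2(theta_i / 2); invert it to estimate each intensity.
ones = np.zeros(len(pixels))
for bitstring, n in counts.items():
    for i, bit in enumerate(reversed(bitstring)):  # qubit 0 is the rightmost bit
        ones[i] += n * int(bit)
est = 2 / np.pi * np.arcsin(np.sqrt(ones / shots))
print("estimated intensities:", est)              # more shots -> lower statistical noise
```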

FeatureGAN: Combining GAN and Autoencoder for Pavement Crack Image Data Augmentations

By Xinkai Zhang, Bo Peng, Zaid Al-Huda, Donghai Zhai

DOI: https://doi.org/10.5815/ijigsp.2022.05.03, Pub. Date: 8 Oct. 2022

In the pavement crack segmentation task, the accurate pixel-level labeling required for fully supervised training of deep neural networks (DNNs) is challenging. Although cracks often exhibit low-level image characteristics such as edges, the backgrounds can carry varied high-level information depending on complex pavement conditions. In practice, crack samples covering diverse semantic backgrounds are scarce. To overcome these problems, we propose a novel method for augmenting the training data for the DNN-based crack segmentation task. It employs a generative adversarial network (GAN) that uses a crack-free image, a crack image, and a corresponding image mask to generate a new crack image. In combination with an autoencoder, the proposed GAN can be used to train crack segmentation networks. By creating a manual mask, no additional crack images need to be labeled, so data augmentation and annotation are achieved simultaneously. Our experiments are conducted on two public datasets using five segmentation models of different sizes to verify the effectiveness of the proposed method. Experimental results demonstrate that the proposed method is effective for crack segmentation.
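To make the augmentation idea concrete, here is a minimal, hedged sketch of a mask-conditioned generator in PyTorch: it takes a crack-free image plus a hand-drawn crack mask and synthesizes a cracked image, so the generated image and the mask form a ready-made training pair. The layer sizes and the `CrackGenerator` name are illustrative assumptions, not the paper's FeatureGAN architecture; the adversarial discriminator and autoencoder components are omitted for brevity.

```python
# Hedged sketch: toy mask-conditioned generator. The crack-free image and the
# binary crack mask (4 channels total) are mapped to a synthetic cracked image.
import torch
import torch.nn as nn

class CrackGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, clean_img, mask):
        return self.net(torch.cat([clean_img, mask], dim=1))

gen = CrackGenerator()
clean = torch.rand(1, 3, 256, 256)                    # crack-free pavement patch
mask = (torch.rand(1, 1, 256, 256) > 0.95).float()    # manually drawn crack mask
fake_crack_img = gen(clean, mask)                     # new training image
# The (fake_crack_img, mask) pair can feed a segmentation network directly,
# so augmentation and annotation happen at the same time.
```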

A Review on HEVC Video Forensic Investigation under Compressed Domain

By Neetu Singla, Sushama Nagpal, Jyotsna Singh

DOI: https://doi.org/10.5815/ijigsp.2022.05.04, Pub. Date: 8 Oct. 2022

In recent years, video forensic investigation has become a prominent research area due to the adverse effect of fake videos on networks, people, and society. This paper summarizes the existing methodologies for forgery detection in H.265/HEVC videos. HEVC video forgery is generally classified into two categories: video quality forgery and video content forgery. The occurrence of various forgeries such as transcoding, fake bitrate, inter-frame forgery, and intra-frame forgery is analyzed in depth based on features extracted from the HEVC compressed domain. The major findings of this review are (i) limited focus on transcoding detection, (ii) non-availability of an HEVC forged-video dataset, (iii) heavy reliance on double-compression detection for forgery detection, and (iv) non-consideration of adaptive GOP structures. Forgery detection in video is critically important because video is widely used as a primary source of information in criminal investigations and for proving the authenticity of content, so detection accuracy is a major concern at present. Although various forgery detection methods have been developed in the past, the findings of this review point to the need for more effective detection methods with higher accuracy.

MLSMBQS: Design of a Machine Learning Based Split & Merge Blockchain Model for QoS-Aware Secure IoT Deployments

By Shital Agrawal, Shailesh Kumar

DOI: https://doi.org/10.5815/ijigsp.2022.05.05, Pub. Date: 8 Oct. 2022

Internet of Things (IoT) networks are multitier deployments in which on-field data is sensed, processed, communicated, and used for taking control decisions. These deployments use hardware components for data sensing and actuation, while cloud components handle data processing and recommend control decisions. The process involves multiple low-security, low-computational-capacity, high-performance entities such as IoT devices, short-range communication interfaces, edge devices, routers, and cloud virtual machines. Among these, the IoT devices, routers, and short-range communication interfaces are highly vulnerable to a wide variety of attacks, including Distributed Denial of Service (DDoS), wormhole, Sybil, Man-in-the-Middle (MiTM), masquerading, and spoofing attacks. To counter these attacks, researchers have proposed a wide variety of encryption, key-exchange, and data modification models. Each of these models has its own level of complexity, which reduces the QoS of the underlying IoT deployment. To overcome this limitation, researchers proposed blockchain-based security models, which allow high-speed operation for small-scale networks; but as the network size increases, the delay needed for blockchain mining grows exponentially, which limits their applicability. To overcome this issue, this paper proposes a machine learning based blockchain model for QoS-aware secure IoT deployments. The proposed MLSMBQS model initially deploys a Proof-of-Work (PoW) based blockchain and then uses bioinspired computing to split the chain into multiple sub-chains. These sub-chains, termed shards, reduce mining delay via a periodic chain-splitting process. The significance of this research is the use of Elephant Herd Optimization (EHO), which manages the number of blockchain shards by splitting or merging them under different deployment conditions. The decision to split or merge depends on the blockchain's security and quality-of-service (QoS) performance. Owing to the integration of EHO for creating and managing sidechains, the findings show that the proposed model improves throughput by 8.5%, reduces communication delay by 15.3%, reduces energy consumption by 4.9%, and enhances security performance by 14.8% compared with existing blockchain and non-blockchain based security models. This is possible because EHO initiates dummy communication requests, which are arbitrarily segregated into malicious and non-malicious and used for continuous QoS and security performance improvement of the proposed model. Due to this continuous improvement, the proposed MLSMBQS model can be deployed for a wide variety of high-efficiency IoT network scenarios.
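As a hedged sketch of the split/merge control loop described above (the paper's EHO optimizer is replaced here by a simple fitness threshold for brevity, and all names and thresholds are illustrative assumptions), the snippet below scores each shard on mining delay and security and decides whether it should be split, merged, or left alone.

```python
# Hedged sketch: threshold-based stand-in for the EHO-driven shard management
# described in the abstract. Fitness blends QoS (mining delay) and security;
# excessive delay triggers a split, weak security triggers a merge.
from dataclasses import dataclass

@dataclass
class Shard:
    name: str
    mining_delay_ms: float   # observed average block-mining delay
    security_score: float    # 0..1, e.g. fraction of dummy attacks detected

def fitness(s: Shard, max_delay_ms: float = 500.0) -> float:
    qos = max(0.0, 1.0 - s.mining_delay_ms / max_delay_ms)
    return 0.5 * qos + 0.5 * s.security_score

def decide(s: Shard) -> str:
    if s.mining_delay_ms > 500.0:      # chain too slow -> split into sub-chains
        return "split"
    if s.security_score < 0.6:         # too weak -> merge with a stronger peer
        return "merge"
    return "keep"

shards = [
    Shard("A", mining_delay_ms=820.0, security_score=0.9),
    Shard("B", mining_delay_ms=120.0, security_score=0.4),
    Shard("C", mining_delay_ms=200.0, security_score=0.8),
]
for s in shards:
    print(s.name, round(fitness(s), 2), decide(s))   # A split, B merge, C keep
```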

Sliding Window Based High Utility Item-Sets Mining over Data Stream Using Extended Global Utility Item-Sets Tree

By P. Amaranatha Reddy, MHM Krishna Prasad

DOI: https://doi.org/10.5815/ijigsp.2022.05.06, Pub. Date: 8 Oct. 2022

High utility item-set mining (HUIM) is a special topic in frequent item-set mining (FIM). It gives better insights for business growth by focusing on the utility of items in a transaction. HUIM is evolving as a powerful research area due to its vast applications in many fields. Data stream processing, meanwhile, is an interesting and challenging problem, since processing data that is generated very fast and in huge volumes with limited resources strongly demands high-performance algorithms. This paper presents an innovative idea to extract high utility item-sets (HUIs) from a dynamic data stream by applying a sliding window. Although algorithms exist to solve the same problem, they allow redundant processing or reprocessing of data. To overcome this, the proposed algorithm uses a trie-like structure called the Extended Global Utility Item-sets tree (EGUI-tree), which can store and retrieve the mined information instead of reprocessing it. An experimental study on real-world datasets proved that the EGUI-tree algorithm is faster than the state-of-the-art algorithms.
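For intuition about the problem being solved (not the EGUI-tree algorithm itself), here is a hedged, brute-force sketch of sliding-window high-utility item-set mining: it keeps the latest W transactions, computes each candidate item-set's utility within the window, and reports those above a minimum-utility threshold. The transaction format and thresholds are illustrative assumptions.

```python
# Hedged sketch: naive sliding-window HUI mining over a stream.
# Each transaction maps item -> utility (e.g. quantity * unit profit).
# The EGUI-tree avoids this full re-enumeration; the brute-force version
# below only illustrates what "high utility within a window" means.
from collections import deque
from itertools import combinations

WINDOW, MIN_UTIL = 3, 10

def window_huis(window):
    items = sorted({i for t in window for i in t})
    result = {}
    for r in range(1, len(items) + 1):
        for itemset in combinations(items, r):
            # item-set utility = sum over transactions containing all its items
            util = sum(sum(t[i] for i in itemset)
                       for t in window if all(i in t for i in itemset))
            if util >= MIN_UTIL:
                result[itemset] = util
    return result

stream = [
    {"a": 5, "b": 2},
    {"b": 4, "c": 3},
    {"a": 6, "c": 1},
    {"c": 8},
]
window = deque(maxlen=WINDOW)          # oldest transaction drops out automatically
for t in stream:
    window.append(t)
    print(window_huis(list(window)))   # HUIs recomputed for the current window
```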

Covid-19 Automatic Detection from CT Images through Transfer Learning

By B. Premamayudu, Chavala Bhuvaneswari

DOI: https://doi.org/10.5815/ijigsp.2022.05.07, Pub. Date: 8 Oct. 2022

Identifying COVID-19 early can help the community and patients contain the disease and plan treatment at the right time. Deep neural network models are widely used to analyze COVID-19 medical images for automatic detection and to provide decision support for radiologists. This paper proposes deep transfer learning on chest CT scan images for the detection and diagnosis of COVID-19. VGG19, InceptionResNetV3, InceptionV3, and DenseNet201 networks were used for automatic detection of COVID-19 from CT scan images (SARS-CoV-2 CT scan dataset). Four deep transfer learning models were developed, tested, and compared. The main objective of this paper is to use pre-trained features and combine them with target features to improve classification accuracy. DenseNet201 showed the best performance, with a classification accuracy of 99.98% for 300 epochs. The experiments also show that deeper networks struggle to train adequately and are less consistent when data is limited. The DenseNet201 model adopted for COVID-19 identification from lung CT scans was intensively optimized with suitable hyperparameters and performs at noteworthy levels, with precision 99.2%, recall 100%, specificity 99.2%, and F1 score 99.2%.
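A minimal, hedged sketch of the DenseNet201 transfer-learning setup this abstract describes (not the authors' exact pipeline): load ImageNet weights from torchvision, replace the classifier head with a two-class output, and fine-tune on CT images. The dataset path and hyperparameters are assumptions.

```python
# Hedged sketch: fine-tune an ImageNet-pretrained DenseNet201 as a
# COVID / non-COVID CT classifier. Paths and hyperparameters are illustrative.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 2)  # two classes

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("sars_cov2_ct/train", transform=tfm)  # hypothetical path
loader = DataLoader(train_ds, batch_size=16, shuffle=True)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:        # one epoch shown; the paper trains far longer
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```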
