IJEM Vol. 12, No. 3, Jun. 2022
REGULAR PAPERS
Detection and analysis of software vulnerabilities is an important concern. For this reason, software security vulnerabilities identified over many years have been listed and classified. Today this process is performed manually by experts and is both time-consuming and costly. Many methods have been proposed for reporting and classifying software security vulnerabilities; the Common Vulnerability Scoring System (CVSS) is the official standard for this purpose. The scoring system is continually updated to cover new kinds of vulnerabilities, the changing security perception, and newly developed technologies, so different versions of the scoring system coexist across vulnerability reports. To apply a newly published version of the scoring system to old vulnerability reports, all analyses must be redone manually under the new security framework, which demands considerable resources, time, and expert skill. As a result, there are large gaps in the scoring-system values recorded in the database. The aim of this study is to estimate the missing security metrics of vulnerability reports using natural language processing and machine learning algorithms. For this purpose, a model combining term frequency-inverse document frequency (TF-IDF) and the K-Nearest Neighbors (KNN) algorithm is proposed. In addition, the resulting data are made available to researchers as a new database. The results obtained are quite promising. A publicly available database accepted as a reference by researchers was chosen as the data set, which facilitates the evaluation and analysis of our model. To the best of our knowledge, this study uses the largest dataset available from this database and is one of the few studies on the latest version of the official scoring system for classifying software security vulnerabilities. For these reasons, our study is a comprehensive and original contribution to the field.
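The TF-IDF plus KNN pipeline described in this abstract can be sketched in a few lines of scikit-learn. The snippet below is an illustrative reconstruction, not the authors' code: the sample reports, label values, and hyper-parameters such as `n_neighbors` are hypothetical placeholders standing in for the full vulnerability database.

```python
# A minimal sketch of a TF-IDF + K-Nearest Neighbors classifier for one
# CVSS metric (here, Attack Vector). Records and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

descriptions = [
    "Buffer overflow in the HTTP parser allows remote code execution.",
    "Improper input validation lets a local user escalate privileges.",
    "Cross-site scripting in the login form exposes session cookies.",
    "Race condition in the kernel driver allows local privilege escalation.",
]
attack_vector = ["NETWORK", "LOCAL", "NETWORK", "LOCAL"]  # CVSS v3 'AV' labels

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    KNeighborsClassifier(n_neighbors=3, metric="cosine"),
)
model.fit(descriptions, attack_vector)

# Estimate the missing metric for an old report that predates the new version.
new_report = ["SQL injection in the search endpoint discloses database contents."]
print(model.predict(new_report))  # e.g. ['NETWORK']
```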
Classically, the points where digital image brightness changes rapidly are organized into a set of curved line segments termed edges. Edge detection is an important tool in digital image processing for analyzing significant changes in gray-level image intensity. In this paper, an edge detection method is proposed. In the proposed method, a divergence operation is applied to the image gradient to compute the Laplacian of the image. The sample rate of the Laplacian is then reduced by downsampling, and a threshold value is obtained by computing the mean of the downsampled values. The Laplacian of the image is compared with the threshold, and pixel values are set accordingly. A morphological operation is then performed on the processed image to produce the final edge map. The significance of this research lies in reducing image noise through downsampling and extracting vital edge information through the divergence operation. The proposed method is compared with existing edge detection methods, i.e., Canny, Sobel, Roberts, zero-cross, and Frei-Chen. Quantitative evaluation is performed through various metrics, i.e., entropy, edge-based contrast measure (EBCM), F-measure, and performance ratio. Experimental results obtained with MATLAB 2018a show that the proposed method performs better than the other well-known edge detection methods.
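The pipeline described in this abstract translates naturally into Python with scipy.ndimage (the paper's reference implementation is MATLAB 2018a). The sketch below follows the stated steps; the downsampling factor and the 3x3 structuring element are assumptions, not values taken from the paper.

```python
# Illustrative re-creation: Laplacian -> downsampled mean threshold ->
# binarize -> morphological cleanup. Parameter choices are assumptions.
import numpy as np
from scipy import ndimage

def edge_detect(image, factor=2):
    img = image.astype(float)
    lap = ndimage.laplace(img)          # divergence of the gradient
    down = lap[::factor, ::factor]      # reduce the sample rate
    threshold = np.abs(down).mean()     # mean of the downsampled values
    edges = np.abs(lap) > threshold     # set pixels against the threshold
    # Morphological operation to close small gaps in the edge map.
    return ndimage.binary_closing(edges, structure=np.ones((3, 3)))

# Usage on a synthetic test image containing one bright square.
test = np.zeros((64, 64))
test[16:48, 16:48] = 255.0
print(edge_detect(test).sum(), "edge pixels found")
```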
With the increasingly broad adoption of Electronic Health Records (EHR) worldwide, there is a growing need to use EHR to support clinical decision making and research, particularly in radiology. Studies on the generation, analysis, and presentation of chest x-ray reports from digital images to detect abnormalities are well documented in the literature, but automatic analysis of the reports themselves is under-represented. There is a large amount of unstructured electronic chest x-ray text that needs to be organized and processed so that abnormal radiographs in clinical findings can automatically receive urgent attention, allowing quicker report analysis and decision making. This study developed a system to automate this analysis and prioritize findings from chest x-rays using a support vector machine, with Lagrange multipliers for the constrained optimization. The classification model was implemented in Python with the Django framework. The developed system was evaluated using precision, recall, F1-score, and negative predictive value (NPV); expert knowledge served as the gold standard and as the basis for comparison with an existing system. The results showed a precision of 96.04%, recall of 95.10%, F1-score of 95.57%, specificity of 86.21%, negative predictive value of 83.33%, and accuracy of 93.13%. The study revealed that a limited but important set of relevant attributes yields an effective and efficient model for detecting cardiomegaly in clinical chest x-ray reports. The evaluation results indicate that the system can help clinicians quickly prioritize findings from chest x-ray reports, reducing delays in attending to patients; hence, the developed system could be used to analyze chest x-ray reports for cardiomegaly diagnosis. Since chest x-ray reports are usually textual, further studies could add a spell checker to the system to achieve higher sensitivity.
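As an illustration of the classification step, the sketch below uses scikit-learn's SVC, whose training solves the Lagrange-multiplier dual problem mentioned in the abstract, and derives NPV from the confusion matrix since it is not a built-in metric. The sample notes, labels, and kernel choice are hypothetical placeholders, not the study's data.

```python
# Minimal sketch: SVM classification of chest x-ray report text for a
# cardiomegaly finding, with NPV computed from the confusion matrix.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import confusion_matrix

reports = [
    "Heart size is enlarged. Cardiothoracic ratio above normal limits.",
    "Lungs are clear. Heart size within normal limits.",
    "Marked cardiomegaly with pulmonary vascular congestion.",
    "No acute cardiopulmonary abnormality identified.",
]
labels = [1, 0, 1, 0]  # 1 = cardiomegaly finding, 0 = normal

clf = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
clf.fit(reports, labels)
pred = clf.predict(reports)

# NPV = TN / (TN + FN), taken from the binary confusion matrix.
tn, fp, fn, tp = confusion_matrix(labels, pred).ravel()
print("NPV:", tn / (tn + fn))
```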
Monitoring systems for electrical appliances have gained massive popularity. These frameworks can provide consumers with helpful information about energy consumption. Non-intrusive load monitoring (NILM) is the most common method for monitoring a household's energy profile. This research presents an optimized approach to identifying load demand and improving NILM occupancy surveillance. Our study implements a dimensionality reduction method, the genetic algorithm (GA), together with XGBoost for optimized occupancy monitoring. The resulting model can anticipate appliance usage from a significantly reduced number of voltage-current characteristics. The proposed NILM approach pre-processes the collected data and validates the predictive performance by comparing the outcomes with the raw dataset's performance metrics. While reducing the dimensionality from 480 to 238 features, our GA-based NILM approach matched the original dataset's performance in accuracy (73%), recall (81%), ROC-AUC score (0.81), and PR-AUC score (0.81). This study demonstrates that introducing a GA into NILM techniques can remarkably reduce computational complexity without compromising performance.
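A compact sketch of GA-driven feature selection wrapped around an XGBoost classifier, in the spirit of the approach above, follows. The dataset is synthetic, and the population size, number of generations, selection scheme, and mutation rate are assumptions rather than the paper's settings.

```python
# GA feature selection for XGBoost: each individual is a binary mask over
# the feature columns; fitness is cross-validated accuracy on the subset.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=40, n_informative=10,
                           random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = XGBClassifier(n_estimators=50, max_depth=3, verbosity=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.random((10, X.shape[1])) < 0.5       # random initial masks
for generation in range(5):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-4:]]      # keep the fittest masks
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(4, size=2)]
        cut = rng.integers(1, X.shape[1])       # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.02    # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print(f"selected {best.sum()} of {X.shape[1]} features")
```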
This paper studies the impact of a prey refuge on the dynamic behaviour of a stage-structured predator-prey model. The model consists of four ecological species: prey in the protected area, prey in the unprotected area, and immature and mature predators. It assumes the mature predator can feed only on prey in the unreserved area. Conditions that guarantee the existence of the possible fixed points are found, and local stability around all of the equilibria is considered. Then, using the Lyapunov direct method, sufficient conditions for the global stability of the equilibria are established. Numerical simulations are presented to confirm the results. It is concluded that the protected area has a positive effect on the system's co-existence.
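Since the paper's equations are not reproduced in the abstract, the simulation below uses a representative stage-structured refuge model of the same shape, purely for illustration: all equations and parameter values are placeholders, with x1 the protected prey, x2 the unprotected prey, and y1, y2 the immature and mature predators (only y2 feeds, and only on x2).

```python
# Illustrative simulation of a stage-structured predator-prey system with
# a prey refuge. Equations and parameters are placeholders, not the paper's.
import numpy as np
from scipy.integrate import solve_ivp

def model(t, state, r=1.0, K=10.0, s1=0.3, s2=0.2, a=0.4,
          b=0.5, m=0.25, d1=0.1, d2=0.15):
    x1, x2, y1, y2 = state
    dx1 = r * x1 * (1 - x1 / K) - s1 * x1 + s2 * x2  # refuge: no predation
    dx2 = s1 * x1 - s2 * x2 - a * x2 * y2            # predation on open prey
    dy1 = b * a * x2 * y2 - (d1 + m) * y1            # births, deaths, maturation
    dy2 = m * y1 - d2 * y2                           # matured minus mortality
    return [dx1, dx2, dy1, dy2]

sol = solve_ivp(model, (0, 200), [5.0, 2.0, 1.0, 1.0], dense_output=True)
print("final state:", sol.y[:, -1])  # inspect approach to a co-existence point
```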