IJITCS Vol. 13, No. 1, Feb. 2021
REGULAR PAPERS
For software organizations that rely on Open Source Software (OSS) to develop customer solutions and products, it is essential to accurately estimate how long it will take to deliver the expected functionalities. While OSS is supported by government policies around the world, most of the research on software project estimation has focused on conventional projects with commercial licenses. OSS effort estimation is challenging since OSS participants do not record effort data in OSS repositories. However, OSS data repositories contain the dates of participants' contributions, and these can be used for duration estimation. This study analyses historical data on the WordPress and Swift projects to estimate OSS project duration using either commits or lines of code (LOC) as the independent variable. The study first proposes an improved classification of contributors based on the number of active days of each contributor in the development period of a release. For the WordPress and Swift OSS project environments, the results indicate that duration estimation models using the number of commits as the independent variable perform better than those using LOC. The estimation model for full-time contributors gives an estimate of the total duration, while the models for part-time and occasional contributors lead to better estimates of project duration, for both the commits data and the LOC data.
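As a rough illustration of the kind of duration model described above, the sketch below fits a simple least-squares line of release duration on commit count; the per-release numbers are hypothetical placeholders, not data from the WordPress or Swift repositories, and the paper's actual model form may differ.

```python
# Minimal sketch (not the authors' model): least-squares fit of release
# duration on commit count, using hypothetical per-release data.
import numpy as np

# Hypothetical (commits, duration in days) pairs for past releases.
commits = np.array([120, 340, 560, 910, 1500], dtype=float)
duration_days = np.array([30, 55, 80, 130, 200], dtype=float)

# Fit duration = a * commits + b.
a, b = np.polyfit(commits, duration_days, deg=1)

def estimate_duration(n_commits: float) -> float:
    """Estimate release duration (days) from an expected commit count."""
    return a * n_commits + b

print(f"Estimated duration for 700 commits: {estimate_duration(700):.1f} days")
```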
A smart meter can process sensor data in a residential grid. Sensors transmit different parameters or measurement data (index, power, temperature, fluctuations of voltage and current, etc.) to the smart meter, and these measurements can reach the meter in different ways. Collecting this data into a central system is a significant concern for ensuring data integrity and protecting the privacy of residents. The complexity of managing these data also lies in their volume, frequency, and scheduling. This work presents a scheduling and collection mechanism for private power consumption data, both between sensors and smart meters and between smart meters and the central data collection system. Several approaches to smart meter data management exist in the scientific literature; we propose another approach to the scheduling and collection of measurement data from residential sensor networks connected to smart meters into a central system. This work is also an example of a link between data collection and data scheduling in intelligent information management, transmission, and protection. We also propose a model of the measurement objects of the smart grid and highlight the changes made to these objects throughout the data processing chain. This smart grid system consists of three main active systems, namely sensors, smart meters, and the central system; other systems also communicate with the smart meters and the central system. We identify three implementation models for the smart metering system and present an intelligent architecture based on multi-agent systems for the smart grid. Most current electricity management systems are not adapted to the new challenges imposed by social and economic development in Africa. The objective of this study is to initiate the design of a smart grid system for the management of electricity data.
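The toy sketch below illustrates one possible shape of the sensor-to-meter-to-central-system collection flow described above; the classes, field names, and batch-based schedule are illustrative assumptions, not the authors' multi-agent architecture.

```python
# Toy sketch (not the authors' architecture): sensors push readings to a
# smart meter, which forwards batches to a central collector on a simple
# size-based schedule. All class and field names are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Reading:
    sensor_id: str
    parameter: str   # e.g. "index", "power", "temperature"
    value: float

@dataclass
class CentralSystem:
    received: List[Reading] = field(default_factory=list)
    def collect(self, batch: List[Reading]) -> None:
        self.received.extend(batch)

@dataclass
class SmartMeter:
    central: CentralSystem
    batch_size: int = 3              # forward once this many readings are buffered
    buffer: List[Reading] = field(default_factory=list)
    def on_reading(self, reading: Reading) -> None:
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch_size:
            self.central.collect(self.buffer)
            self.buffer = []

central = CentralSystem()
meter = SmartMeter(central)
for i, (param, value) in enumerate([("power", 1.2), ("temperature", 21.5),
                                    ("index", 15234.0), ("power", 1.4)]):
    meter.on_reading(Reading(sensor_id=f"s{i % 2}", parameter=param, value=value))
print(len(central.received), "readings collected centrally,",
      len(meter.buffer), "still buffered at the meter")
```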
The paper considers the symmetric traveling salesman problem and applies it to the sixty-four (64) districts of Bangladesh (with geographic coordinates) as a new instance of the problem of finding an optimized route in an emergency. It applies three different algorithms, namely Integer Linear Programming, Nearest-Neighbor, and Metric TSP, as exact, heuristic, and approximate methods for solving this NP-hard problem to model emergency route planning. These algorithms have been implemented in computer code, run with IBM ILOG CPLEX parallel optimization, and visualized using Geographic Information System tools. Their performance has been evaluated in terms of computational complexity, run time, and resulting tour distance across the exact, approximate, and heuristic methods to find the best fit for route optimization in emergencies, thus contributing to the field of combinatorial optimization.
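The sketch below shows the nearest-neighbor heuristic, one of the three methods compared, applied to geographic coordinates with the haversine distance; the coordinates are illustrative placeholders, not the actual 64 district coordinates or the paper's implementation.

```python
# Minimal sketch of the nearest-neighbor TSP heuristic on (lat, lon)
# coordinates using the haversine distance. The coordinates below are
# illustrative placeholders, not the paper's 64 district data.
import math

def haversine_km(a, b):
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def nearest_neighbor_tour(coords, start=0):
    unvisited = set(range(len(coords))) - {start}
    tour, total = [start], 0.0
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: haversine_km(coords[last], coords[j]))
        total += haversine_km(coords[last], coords[nxt])
        tour.append(nxt)
        unvisited.remove(nxt)
    total += haversine_km(coords[tour[-1]], coords[start])  # close the tour
    return tour + [start], total

coords = [(23.81, 90.41), (22.36, 91.78), (24.37, 88.60), (22.82, 89.55)]  # hypothetical
tour, dist = nearest_neighbor_tour(coords)
print(tour, f"{dist:.1f} km")
```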
There is a massive amount of information and data on the World Wide Web, and the number of Arabic users and the volume of Arabic content are increasing rapidly. Information extraction is essential for accessing and sorting data on the web, and it becomes a challenge especially for languages with a complex morphology like Arabic. Consequently, the trend today is to build new corpora that make information extraction easier and more precise. This paper presents a linguistically analyzed Arabic corpus, including dependency relations. The collected data covers five domains: sport, religion, weather, news, and biomedical. The output is in the CoNLL universal lattice file format (CoNLL-UL). The corpus contains an index of the sentences and their linguistic metadata to enable quick mining and search across the corpus. It has seventeen morphological annotations and eight features based on the identification of textual structures, which help to recognize and understand the grammatical characteristics of the text and to derive the dependency relations. Parsing and dependency annotation were performed with the universal dependency model and corrected manually. The results illustrate the improvement in the dependency relation corpus. The designed Arabic corpus makes it possible to obtain linguistic annotations for a text quickly and makes information extraction techniques easier and clearer to learn.
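To illustrate how such dependency annotations are typically consumed, the sketch below parses a tab-separated CoNLL-U-style fragment; it assumes the CoNLL-UL output shares the basic token-per-line, tab-separated column layout, and the sample sentence is an invented English placeholder rather than material from the corpus.

```python
# Minimal sketch of reading CoNLL-U-style dependency annotations.
# Assumes a 10-column, tab-separated, token-per-line layout; the sample
# sentence is an illustrative placeholder, not taken from the corpus.
SAMPLE = """\
1\tThe\tthe\tDET\t_\t_\t2\tdet\t_\t_
2\tweather\tweather\tNOUN\t_\t_\t3\tnsubj\t_\t_
3\timproved\timprove\tVERB\t_\t_\t0\troot\t_\t_
"""

def parse_conllu(text):
    tokens = []
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        cols = line.split("\t")
        tokens.append({"id": int(cols[0]), "form": cols[1], "lemma": cols[2],
                       "upos": cols[3], "head": int(cols[6]), "deprel": cols[7]})
    return tokens

for tok in parse_conllu(SAMPLE):
    print(f'{tok["form"]:>9} --{tok["deprel"]}--> head {tok["head"]}')
```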
Fish species recognition is increasingly in demand in fish ecology, the fishing industry, fisheries survey applications, and other related areas. Traditionally, concept-based fish species identification procedures are used, but they have limitations that content-based classification can overcome. In this paper, a content-based fish recognition system based on the fusion of local features and a global feature is proposed. For local feature extraction from fish images, Local Binary Pattern (LBP), Speeded-Up Robust Features (SURF), and Scale Invariant Feature Transform (SIFT) are used; the global feature is extracted with a Color Coherence Vector (CCV). Five popular machine learning models, namely Decision Tree, k-Nearest Neighbor (k-NN), Support Vector Machine (SVM), Naïve Bayes, and Artificial Neural Network (ANN), are used for fish species prediction, and their prediction decisions are combined to select the final fish class by majority vote. The experiment is performed on a subset of the 'QUT_fish_data' dataset containing 256 fish images of 21 classes, and the result (98.46% accuracy) shows that, although the proposed method does not outperform all existing fish classification methods, it outperforms many of them and is therefore a competitive alternative in this field.
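A minimal sketch of the majority-vote fusion step is shown below; the feature extraction stage (LBP, SURF, SIFT, CCV) is omitted, the random vectors stand in for the real fused descriptors, and the classifier settings are assumptions rather than the paper's configuration.

```python
# Sketch of the majority-vote fusion step only: five classifiers combined
# with hard voting. The random features below are placeholders for the
# real fused descriptors, so the output is purely illustrative.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 64))          # placeholder fused feature vectors
y = rng.integers(0, 21, size=256)       # 21 hypothetical fish classes

ensemble = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier()),
                ("knn", KNeighborsClassifier()),
                ("svm", SVC()),
                ("nb", GaussianNB()),
                ("ann", MLPClassifier(max_iter=500))],
    voting="hard",                      # majority vote over class predictions
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```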