IJMECS Vol. 10, No. 11, Nov. 2018
REGULAR PAPERS
The article addresses the problem of placing mobile users' queries (tasks or software applications) on balanced virtual machines (VMs) hosted on cloudlets located near the base stations of a Wireless Metropolitan Area Network (WMAN), taking their technical capabilities into account. For this purpose, a hierarchically structured cloudlet-based architecture and algorithm are proposed for selecting virtual machines that satisfy the requirements of the user's task (solution time and cost). An approach to optimal VM selection is proposed that solves the bi-criteria selection over the set of candidate VMs using the Skyline operator.
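A minimal sketch of Skyline (Pareto-front) selection over candidate VMs under the two criteria named in the abstract, solution time and cost; the VM records, attribute names, and example values are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: bi-criteria (time, cost) Skyline selection of candidate VMs.
# The VM records and example values below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    time: float  # estimated solution time of the user's task on this VM
    cost: float  # estimated cost of running the task on this VM

def dominates(a: VM, b: VM) -> bool:
    """a dominates b if it is no worse on both criteria and strictly better on one."""
    return (a.time <= b.time and a.cost <= b.cost) and (a.time < b.time or a.cost < b.cost)

def skyline(vms: list[VM]) -> list[VM]:
    """Return the VMs not dominated by any other VM (the Skyline / Pareto front)."""
    return [v for v in vms if not any(dominates(u, v) for u in vms if u is not v)]

if __name__ == "__main__":
    candidates = [VM("vm1", 12.0, 0.8), VM("vm2", 9.0, 1.1),
                  VM("vm3", 9.0, 0.9), VM("vm4", 15.0, 0.7)]
    for v in skyline(candidates):
        print(v)
```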
Many approaches to sentiment analysis benefit from polarity lexicons. Existing methods for building such lexicons fall into two categories: (1) lexicon-based approaches, which use resources such as dictionaries and WordNet, and (2) corpus-based approaches, which use a large corpus to extract semantic relations among words. Adjectives play an important role in polarity lexicons because they are better polarity estimators than other parts of speech. Like other non-English languages, Turkish suffers from a shortage of polarity resources. In this work, a hybrid approach that combines lexicon-based and corpus-based methods is proposed for building an adjective polarity lexicon, and it is evaluated on Turkish. The obtained accuracies for classifying adjectives as positive, negative, or neutral range from 71% to 91%.
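A hedged sketch of the corpus-based side of such a hybrid approach: scoring unlabeled adjectives by their pointwise mutual information (PMI) with small positive and negative seed sets. The seed words, toy corpus, and neutrality threshold are illustrative assumptions and do not reproduce the paper's method.

```python
# Hedged sketch: seed-based PMI polarity scoring of adjectives over a corpus.
# The seed sets, toy corpus, and neutrality threshold are illustrative assumptions.
import math
from collections import Counter
from itertools import combinations

POS_SEEDS = {"good", "great"}
NEG_SEEDS = {"bad", "awful"}

corpus = [
    "the movie was good and the acting was great",
    "a bad plot and an awful ending",
    "the soundtrack was good but the pacing felt bad",
]

# Count word frequencies and within-sentence co-occurrences.
counts, cooc = Counter(), Counter()
for sentence in corpus:
    words = set(sentence.split())
    counts.update(words)
    cooc.update(frozenset(pair) for pair in combinations(sorted(words), 2))
total = sum(counts.values())

def pmi(w1: str, w2: str) -> float:
    joint = cooc[frozenset((w1, w2))] / total
    if joint == 0:
        return 0.0
    return math.log2(joint / ((counts[w1] / total) * (counts[w2] / total)))

def polarity(adjective: str, threshold: float = 0.5) -> str:
    score = (sum(pmi(adjective, s) for s in POS_SEEDS)
             - sum(pmi(adjective, s) for s in NEG_SEEDS))
    return "positive" if score > threshold else "negative" if score < -threshold else "neutral"

print(polarity("great"), polarity("awful"))
```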
Automated speaker verification recognizes a person from his or her voice data. Speaker identification is a form of biometric recognition used in applications such as government services, banking, building security, and intelligence services. The accuracy of such a system depends on the pre-processing techniques used to select features from the voice signal, as well as on the speech modeling methods and classifiers used to identify the speaker. Here, end points and continuous silence are eliminated in the normalization process. The Mel-scale Frequency Cepstral Coefficient (MFCC) method is used to extract features from wave files of spoken sentences. The Gaussian Mixture Model (GMM) technique is applied, and experiments are carried out on the MARF (Modular Audio Recognition Framework) to improve outcome estimation. We present end-point elimination in the Gaussian selection medium for MFCC.
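A hedged sketch of the MFCC-plus-GMM pipeline the abstract describes, using librosa for feature extraction and scikit-learn's GaussianMixture instead of MARF; the file paths, 13 coefficients, and 16 mixture components are illustrative assumptions.

```python
# Hedged sketch: MFCC feature extraction plus one GMM per speaker for identification.
# The wav paths, 13 MFCCs, and 16 mixture components are illustrative assumptions;
# the paper's experiments were run on MARF, not on this librosa/scikit-learn stack.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(wav_path: str, n_mfcc: int = 13) -> np.ndarray:
    signal, sr = librosa.load(wav_path, sr=None)
    # librosa returns frames as columns; transpose to (n_frames, n_mfcc).
    return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc).T

def train_speaker_models(training_wavs: dict) -> dict:
    """training_wavs maps speaker name -> list of wav file paths."""
    models = {}
    for speaker, paths in training_wavs.items():
        feats = np.vstack([mfcc_features(p) for p in paths])
        gmm = GaussianMixture(n_components=16, covariance_type="diag", max_iter=200)
        gmm.fit(feats)
        models[speaker] = gmm
    return models

def identify(wav_path: str, models: dict) -> str:
    feats = mfcc_features(wav_path)
    # Pick the speaker whose GMM gives the highest average log-likelihood.
    return max(models, key=lambda s: models[s].score(feats))
```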
Currently, online discussion forums have become a focal point for e-learning in many Higher Learning Institutions (HLIs), owing to the ubiquity of Information and Communication Technology (ICT) tools and the significant, fast-growing adoption and use of technology in many fields, including education. However, developing countries such as Tanzania experience adoption difficulties, including limited access to computers, unreliable Internet connections, and a technological reliance gap between tutors and learners; these affect the use of technology in Teaching and Learning (T/L). This study uses an Online Discussion Platform (onlineDP) to bridge the technological reliance gap between tutors and learners in HLIs in Tanzania. A literature review and qualitative research methods were used to develop a prototype of the platform. The UMBC semantic similarity service was used to build a content filter that reduces the number of duplicate discussion questions. The application was developed mainly with the Laravel PHP (Hypertext Preprocessor) framework and a MySQL database. The result is a web-based prototype that enhances the collaborative learning environment in HLIs in Tanzania. Technologies adopted for T/L should consider both tutors and learners, as well as the theoretical framework for their implementation.
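A hedged sketch of the duplicate-question filter described above: a new question is compared against already posted questions with a sentence-similarity function and flagged above a threshold. The prototype scores similarity with the UMBC semantic similarity service inside a Laravel/PHP application; here a plain token-overlap (Jaccard) measure stands in for that service, and the 0.6 threshold is an illustrative assumption.

```python
# Hedged sketch: flag a new discussion question as a duplicate when it is too similar
# to an existing one. The paper's prototype uses the UMBC semantic similarity service;
# a token-overlap (Jaccard) measure stands in here, and the threshold is an assumption.
def jaccard_similarity(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def is_duplicate(new_question: str, existing_questions: list[str],
                 similarity=jaccard_similarity, threshold: float = 0.6) -> bool:
    return any(similarity(new_question, q) >= threshold for q in existing_questions)

if __name__ == "__main__":
    posted = ["What is the difference between TCP and UDP?"]
    print(is_duplicate("What is the difference between TCP and UDP protocols?", posted))
```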
The demand for recommendation has risen sharply because a huge number of choices are available and end users want to extract information in the least time and with high accuracy. Traditional recommendation systems generate recommendations within a single domain, but cross-domain recommendation is gaining importance. Cross-domain recommendation addresses limitations of single-domain analysis such as data sparsity and the cold-start problem. In this work, a cross-domain recommendation model is designed based on a study of several supervised classification algorithms. Three domains are considered: music, movies, and books. The model generates one-to-many cross-domain recommendations, exploiting movie-domain knowledge to recommend books and music. Data were collected through a survey and pre-processed. K-Nearest Neighbor, Decision Tree, Gaussian Naïve Bayes, and Support Vector Machine classifiers were studied, along with majority-voting ensembling, cross-validation, and data sampling, to choose the best classifier as the base of the content-based recommendation. The recommendation model uses a hybrid approach combining content-based recommendation, user-to-user collaborative filtering, and personalized recommendation techniques. The model also performs Twitter sentiment analysis over the recommended entities to help the user in decision making by reporting the percentages of positive, negative, and neutral tweets. The designed model achieved good accuracy in testing.
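A hedged sketch of the classifier study the abstract describes, comparing the four named classifiers and a majority-voting ensemble under cross-validation in scikit-learn; the feature matrix, labels, hyperparameters, and 5-fold setting are placeholders, since the paper's survey data are not available here.

```python
# Hedged sketch: compare KNN, Decision Tree, Gaussian Naive Bayes, SVM, and a
# majority-voting ensemble with k-fold cross-validation. X and y are random
# placeholders for the survey-derived features and labels; 5 folds is an assumption.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))      # placeholder feature matrix
y = rng.integers(0, 3, size=200)    # placeholder class labels

classifiers = {
    "knn": KNeighborsClassifier(n_neighbors=5),
    "tree": DecisionTreeClassifier(random_state=0),
    "gnb": GaussianNB(),
    "svm": SVC(kernel="rbf", random_state=0),
}
classifiers["voting"] = VotingClassifier(estimators=list(classifiers.items()), voting="hard")

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```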
From a philosophical point of view, the words of a text or a speech are not used merely for informational purposes; they act and react, and they have the power to act on their counterparts. Each word evokes similar or different senses that can influence and interact with the following words; it has a vibratory property. It is not the words themselves that have the impact, but the semantic reaction behind the words. In this context, we propose a new textual data classification approach that tries to imitate human altruistic behavior in order to reveal the semantic altruistic stakes of natural language words through statistical, semantic, and distributional analysis. We present the results of a word extraction method that combines a distributional proximity index, a selection coefficient, and a co-occurrence index with respect to the neighborhood.
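A hedged sketch of extracting and ranking words from a co-occurrence neighborhood by a combined score. The window size, the three stand-in component scores, and the equal weighting are illustrative assumptions; the paper's proximity index, selection coefficient, and co-occurrence index are not defined in the abstract and are not reproduced here.

```python
# Hedged sketch: rank words by a combined score over a sliding co-occurrence window.
# The window size, stand-in components, and equal weights are illustrative assumptions.
from collections import Counter, defaultdict

def neighborhood_cooccurrence(tokens, window=2):
    cooc = defaultdict(Counter)
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                cooc[w][tokens[j]] += 1
    return cooc

def rank_words(tokens, window=2):
    counts = Counter(tokens)
    cooc = neighborhood_cooccurrence(tokens, window)
    scores = {}
    for w in counts:
        neighbors = cooc[w]
        diversity = len(neighbors) / len(counts)            # stand-in "proximity" component
        strength = sum(neighbors.values()) / len(tokens)    # stand-in "co-occurrence" component
        frequency = counts[w] / len(tokens)                 # stand-in "selection" component
        scores[w] = (diversity + strength + frequency) / 3  # equal weights: an assumption
    return sorted(scores, key=scores.get, reverse=True)

print(rank_words("words act and react and words interact with words".split()))
```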