IJMECS Vol. 13, No. 4, Aug. 2021
REGULAR PAPERS
This paper implements new pedagogy and assessment practices drawn from a rigorous literature survey of quality papers and articles. Appropriate pedagogy and relevant assessment always go hand in hand: effective teaching cannot be achieved by compromising either component. In engineering, pedagogy and assessment play extremely important roles, yet in recent years engineering education has lost track of the big picture of what the curriculum should be. In Computer Science Engineering, course content often changes with market demand for new technology, so courses must be designed to be practical-based or case-study-based; rote teaching-learning methods are not effective for curriculum design in the computing field. Cyber Security is one such course, in which students are expected to learn how to create a protective environment for computing and computing resources. The course must be designed so that students learn to identify vulnerabilities in computing resources and the methodologies to mitigate them, and study popular attacks so that they can recognise the possible ways an attack could happen. The paper applies strategic assessment tools to a Cyber Security course taught to post-graduate students and discusses the outcomes through course-outcome attainment analysis, measured against the attainment threshold set by the Computer Engineering Department to meet accreditation standards. The attainment analysis shows that the viva-voce assessment tool is not suitable for evaluating this course, since it does not probe the technical details and working of Cyber Security concepts. The overall attainment was 55.55%, which is 15% below the threshold set by the Computer Engineering Department.
The article is devoted to the formation of communicative competence, which is relevant to all spheres of professional application in higher education, and argues that the degree of its formation determines how a person behaves in different social situations. The study examines the essence and structure of communicative competence, as well as the system of its formation while teaching a foreign language to higher-education students of economics. The development of communicative competence is evaluated through work with economic texts as the main communicative unit, using specific material in speech. A methodology for forming communicative competence in future economists is theoretically grounded and experimentally tested through interaction with economic texts in English for professional purposes (49 students aged 17–20 participated in the research). Analysis of the linguistic, psychological, psycholinguistic and methodological bases of communicative competence formation, together with student questionnaires and survey results, provided the grounds for an experimental method of forming these competences in future economists in the process of studying a modern foreign language.
Interactive methods of learning from economic texts were developed under a new concept of teaching a foreign language for the formation of communicative competence and were introduced in experimental groups of learners. The data indicated a significant increase in the intermediate and high levels of communicative-competence formation among future economists in the groups with interactive classes.
In the modern digital world, online shopping has become essential to human lives. Online shopping stores such as Amazon show customers "Frequently Bought Together" suggestions in their portals to increase sales. Discovering frequent patterns, that is, the items frequently bought together, is a fundamental task in Data Mining. Large volumes of transactional data are collected every day, and finding frequent itemsets in these massive datasets with classical algorithms requires substantial processing time and I/O cost. This article introduces GNVDF, a GPU-accelerated Novel algorithm for finding frequent patterns using the Vertical Data Format. It uses a novel pattern-formation scheme in which the candidate i-itemsets are divided into two buckets: Bucket-1 contains all items that can form candidate (i+1)-itemsets, while Bucket-2 holds the items that cannot be included in the candidate (i+1)-itemsets. The algorithm compactly employs a jagged array to minimise the memory requirement and remove common transactions among the frequent 1-itemsets, and it utilises a vertical representation of the data to extract frequent itemsets efficiently by scanning the database only once. Further, it is GPU-accelerated to speed up execution. The proposed algorithm was implemented with and without GPU usage, and the comparison revealed that GNVDF with GPU acceleration is 90 to 135 times faster than the version without a GPU.
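The core idea of the vertical data format can be illustrated in a few lines: each item maps to the set of transaction ids (a tidset) in which it occurs, so the support of any larger itemset is just the size of an intersection and no further database scans are needed. The sketch below is a minimal single-machine illustration of that idea only; it does not reproduce GNVDF's bucket partitioning, jagged-array storage, or GPU acceleration, and all names are illustrative.

```python
from itertools import combinations

def vertical_frequent_itemsets(transactions, min_support):
    """Mine frequent itemsets from a vertical (itemset -> tidset) layout."""
    # One database scan builds the vertical representation:
    # each 1-itemset maps to the set of transaction ids containing it.
    tidsets = {}
    for tid, items in enumerate(transactions):
        for item in items:
            tidsets.setdefault(frozenset([item]), set()).add(tid)

    # Keep only the frequent 1-itemsets.
    level = {k: v for k, v in tidsets.items() if len(v) >= min_support}
    result = dict(level)

    # Grow itemsets level by level; a candidate's support is the size of
    # the intersection of its parents' tidsets -- no rescanning needed.
    while level:
        next_level = {}
        keys = list(level)
        for a, b in combinations(keys, 2):
            cand = a | b
            if len(cand) != len(a) + 1:  # parents must differ in one item
                continue
            tids = level[a] & level[b]
            if len(tids) >= min_support and cand not in next_level:
                next_level[cand] = tids
        result.update(next_level)
        level = next_level

    return {tuple(sorted(k)): len(v) for k, v in result.items()}
```

For example, with five transactions over items a, b, c and a minimum support of 3, the pairs (a,b), (a,c) and (b,c) can be frequent while the triple (a,b,c) is not.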
Inertial measurement units based on microelectromechanical systems are promising for motion-capture applications due to their numerous advantages. A motion trajectory is restored using a well-known navigation algorithm that integrates the signals from accelerometers and gyroscopes. The readings of both sensors contain errors, which accumulate quickly because of the integration. The applicability of an inertial measurement unit for motion capture therefore depends on the trajectory being tracked, and it can be predicted by simulating the signals of the inertial sensors. The first simulation step is prescribing a motion trajectory and the corresponding velocities, but existing simulation software provides no user-friendly graphical tools for completing this step. This work introduces an algorithm for simulating accelerometer signals along a two-dimensional trajectory drawn with a computer mouse and then vectorized. We propose a modification of the Potrace algorithm for tracing motion trajectories, so that a trajectory and its velocities can be set simultaneously. The obtained results form a basis for simulating three-dimensional motion trajectories, since the latter can be represented by three mutually orthogonal two-dimensional projections.
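Once a trajectory has been prescribed, ideal accelerometer signals follow from differentiating the sampled positions twice with respect to time. The sketch below shows only that numerical step under simplifying assumptions (fixed sampling step, world-frame axes, and no gravity, bias or noise terms); it is not the paper's algorithm, and the function name is illustrative.

```python
import numpy as np

def simulate_accelerometer(xy, dt):
    """Given sampled 2-D trajectory points xy (N x 2) at a fixed time
    step dt, return velocity and acceleration by finite differences.
    An ideal accelerometer on this trajectory would read these
    accelerations (in the world frame, ignoring gravity and noise)."""
    v = np.gradient(xy, dt, axis=0)   # first derivative: velocity
    a = np.gradient(v, dt, axis=0)    # second derivative: acceleration
    return v, a
```

For a uniformly accelerated test trajectory x(t) = 0.5 t², the interior samples recover v(t) = t and a(t) = 1 exactly, since central differences are exact for polynomials of this degree.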
Gender is one of the vital pieces of information for identifying someone: if we can determine with confidence whether an individual is male or female, the search list is narrowed and the search time shortened. Fingerprint identification is one of the most important collection methods; it is simple to perform, inexpensive, and a dactyloscopy expert verifies the final result. Image classification is a core problem in computer vision, in which a computer mimics a person's capacity to understand the information in an image. Image classification can be performed with deep learning, which imitates the working of the brain by reproducing some of its functions with connected units analogous to neurons; the convolutional neural network (CNN) is one type of deep learning. In this research, we classify gender from fingerprints using a Convolutional Neural Network. Three models were built to determine gender from a total of 49,270 images, comprising training and test data labelled with two categories, male and female. Of the three models, the one with the highest accuracy was selected for the application: Model2 was chosen as the CNN model, with an accuracy of 99.9667%.
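A CNN classifies a fingerprint image by repeatedly applying convolution, a non-linearity, and pooling before a final classifier layer. The fragment below is a bare-bones NumPy illustration of one such layer (single channel, "valid" convolution, 2x2 max pooling), not the paper's trained models or any deep-learning framework; all function names are illustrative.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 'valid' 2-D correlation: the core operation a CNN
    layer applies to an input image such as a fingerprint."""
    h, w = kernel.shape
    H, W = image.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

def relu(x):
    """Element-wise non-linearity applied after convolution."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Downsample a feature map by taking the max over size x size tiles."""
    H2, W2 = x.shape[0] // size, x.shape[1] // size
    return x[:H2 * size, :W2 * size].reshape(H2, size, W2, size).max(axis=(1, 3))
```

Stacking several such conv-relu-pool stages, followed by fully connected layers and a two-way softmax, yields the kind of male/female classifier the abstract describes.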
Parkinson's disease (PD) is an age-related neurodegenerative disorder affecting millions of elderly people worldwide. Early and accurate diagnosis of PD, combined with available treatment, might delay neurodegeneration and prevent disabilities. Existing diagnosis methods such as brain scans are expensive, whereas using speech recordings with machine-learning technologies to diagnose PD patients could cost far less. In this work, we use the voice-recording dataset from the UCI machine learning repository. Several studies have sought to distinguish PD patients from healthy individuals by applying machine-learning algorithms to voice-recording data. In this paper, we propose an optimised data pre-processing approach that enhances prediction accuracy for diagnosing PD. Using AdaBoost, we obtain 97.4% prediction accuracy with higher sensitivity, specificity, precision, F1 score and kappa value. These improved performance metrics indicate that voice recordings combined with our optimised machine-learning approach are highly reliable for predicting PD, which may have significant implications for cost-effective early-stage diagnosis.
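AdaBoost builds a strong classifier by reweighting training samples so each new weak learner focuses on previously misclassified cases. The sketch below is a minimal illustration of that mechanism with decision stumps on labels in {-1, +1}; it is not the paper's pipeline (which would use a library implementation on the UCI voice features), and all names are illustrative.

```python
import numpy as np

def adaboost_stumps(X, y, n_rounds=10):
    """Minimal AdaBoost: at each round, pick the weighted-error-minimising
    decision stump, weight it by alpha, and upweight mistakes."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # sample weights, initially uniform
    stumps = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):                      # feature
            for thr in np.unique(X[:, j]):      # threshold
                for sign in (1, -1):            # direction
                    pred = sign * np.where(X[:, j] <= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-12)                   # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)   # stump weight
        pred = sign * np.where(X[:, j] <= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)          # upweight misclassified
        w /= w.sum()
        stumps.append((alpha, j, thr, sign))
    return stumps

def adaboost_predict(stumps, X):
    """Sign of the alpha-weighted vote of all stumps."""
    score = sum(a * s * np.where(X[:, j] <= t, 1, -1)
                for a, j, t, s in stumps)
    return np.sign(score)
```

On any linearly separable one-dimensional toy set, a single well-chosen stump already classifies perfectly and the ensemble simply reinforces it.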