Hedieh Sajedi

Workplace: School of Mathematics, Statistics and Computer Science, College of Science, University of Tehran, Tehran, Iran

E-mail: hhsajedi@ut.ac.ir


Research Interests: Computational Learning Theory, Pattern Recognition

Biography

Hedieh Sajedi received a B.Sc. degree in Computer Engineering from Amirkabir University of Technology in 2003, and M.Sc. and Ph.D. degrees in Computer Engineering (Artificial Intelligence) from Sharif University of Technology, Tehran, Iran, in 2006 and 2010, respectively. She is currently an Assistant Professor at the Department of Computer Science, University of Tehran, Iran. Her research interests include Pattern Recognition, Machine Learning, and Signal Processing.

Author Articles
A Metaheuristic Algorithm for Job Scheduling in Grid Computing

By Hedieh Sajedi, Maryam Rabiee

DOI: https://doi.org/10.5815/ijmecs.2014.05.07, Pub. Date: 8 May 2014

These days, the number of problems that cannot be solved on time is increasing. Scientists try to simplify such problems and solve them with computers, yet complicated problems still demand heavy computation on advanced hardware. Grid computing integrates distributed resources to solve complex scientific, industrial, and commercial problems. To achieve this goal, an efficient scheduling system is required as a vital part of the grid. In this paper, we introduce the Cuckoo-Genetic Algorithm (CUGA), which combines the cuckoo optimization algorithm (COA) with a genetic algorithm (GA) for job scheduling in grids. CUGA minimizes the completion time of machines and effectively avoids becoming trapped in local minima. The results illustrate that, in comparison with GA, COA, and Particle Swarm Optimization (PSO), the proposed algorithm is more efficient and provides higher performance.
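
The abstract only outlines CUGA at a high level. As a rough illustration of the kind of hybrid it describes, the Python sketch below combines GA-style crossover with a cuckoo-style random perturbation to minimize the makespan of a job-to-machine assignment. Every function and parameter here (makespan, cuckoo_step, pop_size, and so on) is an illustrative assumption, not the authors' implementation.

import random

# Hypothetical sketch of a cuckoo/genetic hybrid for grid job scheduling.
# A schedule assigns each job to a machine; fitness is the makespan
# (completion time of the most loaded machine), which we minimize.

def makespan(schedule, job_times, n_machines):
    loads = [0.0] * n_machines
    for job, machine in enumerate(schedule):
        loads[machine] += job_times[job]
    return max(loads)

def random_schedule(n_jobs, n_machines):
    return [random.randrange(n_machines) for _ in range(n_jobs)]

def crossover(a, b):
    # One-point crossover, as in a standard GA.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def cuckoo_step(schedule, n_machines, rate=0.2):
    # Reassign a random fraction of jobs: a crude stand-in for a
    # cuckoo-style perturbation around the current solution.
    new = schedule[:]
    for job in range(len(new)):
        if random.random() < rate:
            new[job] = random.randrange(n_machines)
    return new

def cuga(job_times, n_machines, pop_size=30, generations=200):
    n_jobs = len(job_times)
    pop = [random_schedule(n_jobs, n_machines) for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the better half, refill with perturbed offspring.
        pop.sort(key=lambda s: makespan(s, job_times, n_machines))
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            children.append(cuckoo_step(crossover(a, b), n_machines))
        pop = elite + children
    best = min(pop, key=lambda s: makespan(s, job_times, n_machines))
    return best, makespan(best, job_times, n_machines)

if __name__ == "__main__":
    times = [random.uniform(1, 10) for _ in range(50)]  # 50 jobs with random durations
    schedule, best_makespan = cuga(times, n_machines=5)
    print("best makespan:", round(best_makespan, 2))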

Construction of High-accuracy Ensemble of Classifiers

By Hedieh Sajedi, Elham Masoumi

DOI: https://doi.org/10.5815/ijitcs.2014.05.01, Pub. Date: 8 Apr. 2014

Several methods have been developed to construct ensembles. Some of them, such as Bagging and Boosting, are meta-learners, i.e. they can be applied to any base classifier. The combined methods should be selected so that the classifiers cover each other's weaknesses. In an ensemble, combining the outputs of several classifiers is useful only if they disagree on some inputs; the degree of disagreement is called the diversity of the ensemble. Another factor that plays a significant role in the performance of an ensemble is the accuracy of the base classifiers. All ensemble-construction procedures seek a balance between these two factors, and successful methods achieve a better balance. The diversity of the members of an ensemble is known to be an important factor in determining its generalization error. In this paper, we present a new approach for generating ensembles. The proposed approach uses Bagging and Boosting as the generators of base classifiers. Subsequently, the classifiers are partitioned by means of a clustering algorithm. We introduce a selection phase for constructing the final ensemble and propose three selection methods for this phase. The first method selects a classifier randomly from each cluster, the second selects the most accurate classifier from each cluster, and the third selects the classifier nearest to the center of each cluster. The results of experiments on well-known datasets demonstrate the strength of our proposed approach, especially when the most accurate classifiers are selected from the clusters and the Bagging generator is employed.
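
To make the described pipeline concrete, here is a minimal Python sketch assuming scikit-learn: Bagging generates the pool of base classifiers, the classifiers are clustered by their prediction vectors on a validation split, and the most accurate classifier per cluster is selected for a majority-vote ensemble. The clustering representation, dataset, and all parameter values are assumptions for illustration and do not necessarily match the paper's setup.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import BaggingClassifier
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

# 1) Generate a pool of base classifiers with Bagging.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
X_fit, X_val, y_fit, y_val = train_test_split(X_train, y_train, random_state=0)

pool = BaggingClassifier(n_estimators=30, random_state=0).fit(X_fit, y_fit)
classifiers = pool.estimators_

# 2) Cluster the classifiers by their prediction vectors on validation data,
#    so classifiers in the same cluster behave similarly (one possible
#    stand-in for the paper's clustering step).
pred_matrix = np.array([clf.predict(X_val) for clf in classifiers])
n_clusters = 5
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(pred_matrix)

# 3) Selection phase: keep the most accurate classifier from each cluster
#    (the second of the three proposed selection methods).
selected = []
for c in range(n_clusters):
    members = [clf for clf, lab in zip(classifiers, labels) if lab == c]
    best = max(members, key=lambda clf: accuracy_score(y_val, clf.predict(X_val)))
    selected.append(best)

# 4) Final ensemble: majority vote over the selected classifiers.
votes = np.array([clf.predict(X_test) for clf in selected])
majority = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
print("ensemble accuracy:", accuracy_score(y_test, majority))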

Other Articles