Application of Hybrid Search Based Algorithms for Software Defect Prediction

Full Text (PDF, 1478KB), pp. 51-62


Author(s)

Wasiur Rhmann 1,*

1. Department of Computer Science and Information Technology, Babasaheb Bhimrao Ambedkar University (A Central University), Satellite Campus, Amethi, U.P., India

* Corresponding author.

DOI: https://doi.org/10.5815/ijmecs.2018.04.07

Received: 20 Nov. 2017 / Revised: 26 Dec. 2017 / Accepted: 15 Jan. 2018 / Published: 8 Apr. 2018

Index Terms

Defect, Static metrics, Cyclomatic complexity, Halstead metrics

Abstract

In software engineering, prediction of defect-prone classes can support decisions about the proper allocation of resources in the software testing phase: classes identified as highly defect prone receive more attention from testers as well as security experts. In recent years, researchers have applied various artificial intelligence techniques in different phases of the SDLC. The main objective of this study is to compare the performance of hybrid search-based algorithms in predicting the defect proneness of classes in software. Statistical tests are used to compare the performance of the developed prediction models, and the models are validated on different releases of the datasets.
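The cross-release prediction workflow summarised above can be illustrated with a small sketch. The Python snippet below is not the paper's actual hybrid algorithms; it assumes hypothetical PROMISE-style CSV files (release_1.csv, release_2.csv), a binary "bug" label column, and a simplified random-subset search standing in for a genetic or swarm search. It shows how a search-based feature selection step can be wrapped around a classifier trained on static metrics from one release and then validated on a later release.

import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_auc_score


def search_feature_subset(X, y, n_iter=50, seed=0):
    # Random-subset search used as a stand-in for a genetic/particle-swarm search:
    # sample feature subsets and keep the one with the best cross-validated AUC.
    rng = np.random.default_rng(seed)
    best_cols, best_auc = list(X.columns), -np.inf
    for _ in range(n_iter):
        mask = rng.random(X.shape[1]) < 0.5
        if not mask.any():
            continue
        cols = list(X.columns[mask])
        auc = cross_val_score(DecisionTreeClassifier(random_state=seed),
                              X[cols], y, cv=5, scoring="roc_auc").mean()
        if auc > best_auc:
            best_cols, best_auc = cols, auc
    return best_cols, best_auc


# Hypothetical CSVs: one row per class, static metric columns
# (e.g. cyclomatic complexity, Halstead measures) and a binary "bug" label.
train = pd.read_csv("release_1.csv")   # earlier release: model building
test = pd.read_csv("release_2.csv")    # later release: cross-release validation

X_tr, y_tr = train.drop(columns="bug"), train["bug"]
X_te, y_te = test.drop(columns="bug"), test["bug"]

cols, cv_auc = search_feature_subset(X_tr, y_tr)
model = DecisionTreeClassifier(random_state=0).fit(X_tr[cols], y_tr)

# Validate the fitted model on the later release.
test_auc = roc_auc_score(y_te, model.predict_proba(X_te[cols])[:, 1])
print("selected metrics:", cols)
print(f"cross-validated AUC (release 1): {cv_auc:.3f}, AUC on release 2: {test_auc:.3f}")

Replacing the random subset search with a genetic algorithm or particle swarm optimizer, and the decision tree with another base learner, gives the kind of hybrid search-plus-classifier combinations the study refers to.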

Cite This Paper

Wasiur Rhmann, " Application of Hybrid Search Based Algorithms for Software Defect Prediction", International Journal of Modern Education and Computer Science(IJMECS), Vol.10, No.4, pp. 51-62, 2018. DOI:10.5815/ijmecs.2018.04.07
