An Algorithm for Static Tracing of Message Passing Interface Programs Using Data Flow Analysis


Author(s)

Alaa I. Elnashar 1,2,*, Said F. El-Zoghdy 2,3

1. Department of Computer Science, Faculty of Science, Minia University, Egypt

2. College of Computers and Information Technology, Taif University, KSA

3. Department of Mathematics & Computer Science, Faculty of Science, Menoufia University, Egypt

* Corresponding author.

DOI: https://doi.org/10.5815/ijcnis.2015.01.01

Received: 13 May 2014 / Revised: 22 Aug. 2014 / Accepted: 2 Oct. 2014 / Published: 8 Dec. 2014

Index Terms

Parallel Programming, Message Passing Interface, Data and Control Flow Analysis

Abstract

Message Passing Interface (MPI) is a well-known paradigm that is widely used for coding explicit parallel programs. MPI programs exchange data among parallel processes using communication routines. A program's execution trace depends on the way its processes communicate with each other. For the same program, many process transition states may appear due to the nondeterministic nature of parallel execution. In this paper we present a new algorithm that statically generates the execution trace of a given MPI program using data flow analysis. The performance of the proposed algorithm is evaluated and compared with that of two heuristic techniques that use random and genetic algorithm approaches to generate trace sequences. The results show that the proposed algorithm scales well with program size and avoids the process state explosion problem from which the other techniques suffer.
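
As a purely illustrative sketch (not taken from the paper), the following minimal MPI program in C shows the kind of nondeterminism the abstract refers to: because rank 0 receives with MPI_ANY_SOURCE, the runtime decides the matching order of the incoming messages, so repeated runs of the same program can produce different process transition sequences, which is exactly what a static tracing algorithm must enumerate without executing the program.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        int value;
        MPI_Status status;
        /* Rank 0 receives one message from every other process.
         * MPI_ANY_SOURCE means the matching order is decided at run time,
         * so different executions of the same program can yield different
         * traces (receive from 1 then 2, or from 2 then 1, and so on). */
        for (int i = 1; i < size; i++) {
            MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, &status);
            printf("rank 0 received %d from rank %d\n",
                   value, status.MPI_SOURCE);
        }
    } else {
        /* Every other rank sends a single message to rank 0. */
        int value = rank * 10;
        MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

With a typical MPI implementation, compiling this with mpicc and launching it under mpiexec with, say, four processes can print the three received values in a different order from one run to the next, illustrating why a single source program corresponds to many possible execution traces.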

Cite This Paper

Alaa I. Elnashar, Said F. El-Zoghdy, "An Algorithm for Static Tracing of Message Passing Interface Programs Using Data Flow Analysis", International Journal of Computer Network and Information Security (IJCNIS), vol. 7, no. 1, pp. 1-8, 2015. DOI: 10.5815/ijcnis.2015.01.01
