Distributed simulation performance data mining

https://doi.org/10.1016/S0167-739X(01)00050-4

Abstract

The performance of logical-process-based distributed simulation (DS) protocols such as Time Warp and Chandy/Misra/Bryant is influenced by a variety of factors: the event structure underlying the simulation model, the partitioning into submodels, the performance characteristics of the execution platform, the implementation of the simulation engine, and protocol-specific optimizations. The mutual performance effects of these parameters are so intricately interwoven that analytical performance investigations are of only limited relevance. Nevertheless, performance analysis is of utmost practical interest for the simulationist who wants to decide on the suitability of a certain DS protocol for a specific simulation model before substantial effort is invested in developing sophisticated DS codes.

Since DS performance prediction based on analytical models appears doubtful with respect to adequacy and accuracy, this work presents a prediction method based on the simulated execution of skeletal implementations of DS protocols. Performance data mining methods based on statistical analysis and a simulation tool for DS protocols have been developed for DS performance prediction, supporting the simulationist in three types of decision problems: (i) given a simulation problem and a parallel execution platform, which DS protocol promises the best performance; (ii) given a simulation model and a DS strategy, which execution platform is appropriate from the performance viewpoint; and (iii) what class of simulation models is best executed on a given multiprocessor using a certain DS protocol. Methodologically, skeletons of the most important variants of DS protocols are developed and executed in the N-MAP performance prediction environment. As a mining technique, performance data is collected and analyzed based on a full factorial design. The design's predictor variables are used to explain DS performance.
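
To make the methodology concrete, the sketch below shows how such a full factorial experiment could be organized in Python. The factor names and levels, and the run_skeleton() stand-in, are illustrative assumptions only; they do not reflect the actual N-MAP interface or the exact design used in the paper.

```python
# Hypothetical sketch of a full factorial experiment over DS performance factors.
# Factor names/levels and run_skeleton() are illustrative assumptions, not the
# paper's actual N-MAP design.
from itertools import product
import random
import statistics

FACTORS = {
    "protocol":   ["TimeWarp", "CMB"],                # DS protocol skeleton
    "platform":   ["cluster", "shared_memory"],       # execution platform
    "model":      ["sparse_events", "dense_events"],  # event-structure class
    "partitions": [4, 8, 16],                         # number of LPs/submodels
}

def run_skeleton(protocol, platform, model, partitions):
    """Stand-in for executing a DS protocol skeleton and returning the
    observed event rate (events/second); here just a synthetic sample."""
    return random.gauss(1000.0, 50.0)  # placeholder measurement

def full_factorial(repetitions=30):
    """Run every factor-level combination `repetitions` times and
    return the mean event rate per design point."""
    results = {}
    for combo in product(*FACTORS.values()):
        rates = [run_skeleton(*combo) for _ in range(repetitions)]
        results[combo] = statistics.mean(rates)
    return results

if __name__ == "__main__":
    for design_point, mean_rate in full_factorial().items():
        print(design_point, round(mean_rate, 1))
```

In the paper's setting, each design point would correspond to one skeletal protocol execution whose event rate becomes the response variable of the subsequent statistical analysis.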

Section snippets

Motivation

Distributed and parallel discrete event simulation techniques [1] in their traditional sense aim at accelerating the execution of a self-contained simulation model by spatially decomposing that model and simulating the submodels concurrently by so-called logical processes (LPs). More than 15 years of research have been devoted to studying various issues related to this goal, establishing techniques along two lines: conservative Chandy/Misra/Bryant (CMB) and optimistic Time Warp (TW) …
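
As a rough illustration of the logical-process view (not of the protocol skeletons evaluated in this work), the following minimal Python sketch shows an LP that executes timestamped events only when a conservative lower bound on incoming timestamps makes them safe, in the spirit of CMB null-message reasoning; the class names and the simplified lookahead handling are assumptions for exposition.

```python
# Minimal illustrative sketch of a logical process (LP) in a conservative
# (CMB-style) simulation; queue handling and lookahead are simplified.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Event:
    timestamp: float
    action: str = field(compare=False)

class LogicalProcess:
    def __init__(self, name, lookahead=1.0):
        self.name = name
        self.lookahead = lookahead   # minimum delay on outgoing events
        self.clock = 0.0             # local virtual time
        self.queue = []              # pending timestamped events

    def schedule(self, event):
        heapq.heappush(self.queue, event)

    def safe_to_process(self, lower_bound):
        """An event is safe if no other LP can still send an earlier one,
        i.e. its timestamp lies below the incoming lower bound (the kind of
        guarantee null messages provide)."""
        return bool(self.queue) and self.queue[0].timestamp <= lower_bound

    def step(self, lower_bound):
        while self.safe_to_process(lower_bound):
            ev = heapq.heappop(self.queue)
            self.clock = ev.timestamp
            print(f"{self.name}: {ev.action} at t={ev.timestamp}")

lp = LogicalProcess("LP0")
lp.schedule(Event(2.0, "arrival"))
lp.schedule(Event(5.0, "departure"))
lp.step(lower_bound=4.0)   # only the t=2.0 event is safe to execute
```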

Early performance prediction of DS protocols

The main goal of an early performance analysis of DS applications is to provide the simulationist with predicted, yet sufficiently trustworthy, data for solving decision problems before substantial manpower and financial investments are undertaken. Typically, three types of question arise at the outset of every DS project: (i) given a simulation model and a DS strategy, which (parallel/distributed) platform should be used to gain maximum performance? (ii) given a simulation model and an execution …

Experiments

For the design setup in Fig. 6, a collection of experiments has been conducted in the framework in a fully automated way. The output of this analysis, statistically aggregating 240 case executions (where each case was repeated 30 times), is condensed in the tables in Fig. 7 and Fig. 8. Fig. 7 gives, for each possible combination of factor levels, the average event rate the respective protocol could achieve on the simulation model defined above. Fig. 8 summarizes the respective probability of the F-statistic …
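
As an illustration of the kind of analysis condensed in Fig. 7 and Fig. 8, the Python sketch below computes mean event rates per factor-level combination and the probability of the F-statistic from a full factorial ANOVA. The data are synthetic and the factor names are assumptions; the sketch uses pandas and statsmodels rather than the tooling described in the paper.

```python
# Synthetic stand-in for the Fig. 7 / Fig. 8 style analysis:
# mean event rate per design point and ANOVA F-test probabilities.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for protocol in ["TimeWarp", "CMB"]:
    for platform in ["cluster", "shared_memory"]:
        for rep in range(30):                       # 30 repetitions per case
            base = 1200 if protocol == "TimeWarp" else 1000
            rows.append({
                "protocol": protocol,
                "platform": platform,
                "event_rate": rng.normal(base, 80),  # synthetic measurement
            })
df = pd.DataFrame(rows)

# Fig. 7 analogue: average event rate per combination of factor levels.
print(df.groupby(["protocol", "platform"])["event_rate"].mean())

# Fig. 8 analogue: probability of the F-statistic per factor and interaction.
model = smf.ols("event_rate ~ C(protocol) * C(platform)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2)[["F", "PR(>F)"]])
```

A small p-value for a factor (or interaction) indicates that the corresponding design variable explains a significant share of the observed variation in event rate.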

Conclusions

Performance prediction methods and tools for DS protocols are without doubt critical for the future success and general acceptance of DS in practice. For a simulationist it is of utmost importance to be able to evaluate the suitability of a certain DS protocol for a specific simulation task, for a certain multiprocessor system and a certain operational environment, before substantial programming efforts are invested. A performance prediction method and a set of tools have been developed (i) …


References (20)

  • I.F. Akyildiz et al., The effect of memory capacity on Time Warp performance, J. Parallel Distr. Comput. (1993)
  • R. Ayani et al., Parallel discrete event simulation on SIMD computers, J. Parallel Distr. Comput. (1993)
  • R.M. Fujimoto, Parallel discrete event simulation, Commun. ACM (1990)
  • L. Lamport, Time, clocks, and the ordering of events in distributed systems, Commun. ACM (1978)
  • K.M. Chandy et al., Distributed simulation: a case study in design and verification of distributed programs, IEEE Trans. Software Eng. (1979)
  • D.R. Jefferson, Virtual time, ACM Trans. Progr. Lang. Sys. (1985)
  • R.M. Fujimoto, Parallel discrete event simulation: will the field survive?, ORSA J. Comput. (1993)
  • D.M. Nicol, P. Heidelberger, On extending parallelism to serial simulators, in: Proceedings of the Ninth Workshop on...
  • Y.-B. Lin, Will parallel simulation come to an end?, Simul. Digest (1996)
  • Y.-B. Lin, B. Preiss, W. Loucks, E. Lazowska, Selecting the checkpoint interval in time warp simulation, in: R....
There are more references available in the full text version of this article.


Alois Ferscha received the Mag. degree in 1984 and the PhD in business informatics in 1990, both from the University of Vienna, Austria. In 1986 he joined the Department of Applied Computer Science at the University of Vienna. In 2000 he joined the University of Linz as Full Professor, where he is now head of the Department for Practical Computer Science. He has been active in parallel and distributed computing and has published more than 50 technical papers on related topics, such as Computer Aided Parallel Software Engineering, Performance Oriented Distributed/Parallel Program Development, Parallel and Distributed Discrete Event Simulation, Performance Modeling/Analysis of Parallel Systems, and Parallel Visual Programming. His current focus is on Distributed Interactive Simulation, Distributed Interaction and Embedded Software Systems. He has been the project leader of several national and international research projects, including Network Computing, Performance Analysis of Parallel Systems and their Workload, Parallel Simulation of Very Large Office Workflow Models, Distributed Simulation on High Performance Parallel Computer Architectures, Modeling and Analysis of Time Constrained and Hierarchical Systems (MATCH, HCM), Broadband Integrated Satellite Network Traffic Evaluation (BISANTE, ESPRIT IV), Distributed Cooperative Environments (COOPERATE), and Virtual Enterprises. He has been a visiting researcher at the Dipartimento di Informatica, Universita di Torino, Italy, at the Dipartimento di Informatica, Universita di Genoa, Italy, at the Computer Science Department, University of Maryland at College Park, College Park, Maryland, USA, and at the Department of Computer and Information Sciences, University of Oregon, Eugene, USA. He has served on the committees of several conferences, including PADS, SIGMETRICS, MASCOTS, TOOLS, PNPM, and ICS.

1 http://www.soft.uni-linz.ac.at/
