Abstract
For applications involving large data sets that yield variable-cost computations, achieving both efficient I/O and load balancing becomes a particularly challenging yet performance-critical task. In this work, we introduce a data scheduling approach that integrates several optimization techniques, including dynamic allocation, prefetching, and asynchronous I/O and communications. We show that good scalability is obtained by both hiding the I/O latency and appropriately balancing the workloads. We use a statistical metric of data skewness to further improve performance by adequately selecting among data-scheduling schemes. We test our approach on sparse benchmark matrices for matrix-vector computations and show experimentally that our method can accurately predict the relative performance of different input/output schemes for a given data set and choose the best technique accordingly.
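The abstract does not give the skewness formula or the selection rule, so the following is a minimal sketch of the general idea: measure the skewness of per-row nonzero counts in a sparse matrix and use it to pick between static and dynamic row allocation before a matrix-vector product. The Fisher-Pearson skewness coefficient and the threshold value are assumptions for illustration, not the paper's actual metric or cutoff.

```python
def row_skewness(rows):
    """Fisher-Pearson skewness of per-row nonzero counts.

    rows: list of rows, each a list of (column, value) nonzero pairs.
    """
    counts = [len(r) for r in rows]
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    if var == 0.0:
        return 0.0
    return sum((c - mean) ** 3 for c in counts) / (n * var ** 1.5)

def choose_schedule(rows, threshold=1.0):
    """Pick a scheduling scheme from the skewness of the workload.

    A highly skewed row-density distribution suggests dynamic allocation;
    a near-uniform one lets cheaper static partitioning suffice.
    The threshold is a hypothetical tuning parameter.
    """
    return "dynamic" if abs(row_skewness(rows)) > threshold else "static"

def spmv(rows, x):
    """Sparse matrix-vector product y = A x, A stored row-wise as (col, val) pairs."""
    return [sum(v * x[j] for j, v in row) for row in rows]

if __name__ == "__main__":
    # Nearly uniform rows: 1, 2, and 1 nonzeros -> low skewness -> static.
    uniform = [[(0, 2.0)], [(1, 1.0), (2, 3.0)], [(0, 1.0)]]
    print(choose_schedule(uniform))            # static
    print(spmv(uniform, [1.0, 1.0, 1.0]))      # [2.0, 4.0, 1.0]

    # One dense row among sparse ones -> high skewness -> dynamic.
    skewed = [[(0, 1.0)] for _ in range(9)] + [[(j, 1.0) for j in range(10)]]
    print(choose_schedule(skewed))             # dynamic
```

The point of the selection step is that dynamic allocation pays a runtime overhead that is only worthwhile when per-row costs vary widely, which the skewness statistic captures in a single scan of the row lengths.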
Author currently on leave of absence from the Department of Control and Computers, the Polytechnic University of Bucharest, Romania.
Supported by CESDE/USRA.
This work is supported in part by the National Science Foundation under contract number IRI-9357785.
Author currently on leave from the Department of Computer Science, George Mason University, Fairfax, Virginia.
Copyright information
© 1997 Springer-Verlag Berlin Heidelberg
Cite this paper
Nastea, S.G., El-Ghazawi, T., Frieder, O. (1997). Performance optimization of combined variable-cost computations and I/O. In: Bilardi, G., Ferreira, A., Lüling, R., Rolim, J. (eds) Solving Irregularly Structured Problems in Parallel. IRREGULAR 1997. Lecture Notes in Computer Science, vol 1253. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-63138-0_18
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-63138-5
Online ISBN: 978-3-540-69157-0
eBook Packages: Springer Book Archive