Abstract:
We propose a hybrid parallelism-independent scheduling method, performed predominantly at compile time, which generates machine code that executes efficiently on any number of workstations or PCs in a cluster computing environment. Our scheduling algorithm, called the Dynamical Level Parallelism-Independent Scheduling (DLPIS) algorithm, is applicable to distributed computer systems because, in addition to task scheduling, we also schedule message communication. It provides an explicit task synchronization mechanism that guides task allocation and data dependency resolution at run time with reduced overhead. Furthermore, we provide a mechanism that allows the machine code to adapt itself to the degree of parallelism of the system at run time. Our scheduling method therefore supports a variable number of processors in the users' computing systems as well as adaptive parallelism, which may be required in distributed computing systems due to computer or link failure.
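The abstract does not detail how DLPIS computes task priorities, but its name suggests a dynamic-level list-scheduling heuristic, where each ready task's priority on a processor is its static level (longest remaining path of execution costs) minus its earliest possible start time on that processor. The sketch below illustrates that classical idea only; all names (Task, static_level, schedule_step) and the simplified earliest-start model are hypothetical and not taken from the paper.

```python
# Illustrative sketch of dynamic-level list scheduling, not the DLPIS algorithm itself.
# Assumption: earliest start on a processor is approximated by the time the
# processor becomes free (communication costs are omitted for brevity).

from dataclasses import dataclass, field


@dataclass
class Task:
    name: str
    cost: float                                   # estimated execution time
    succs: list = field(default_factory=list)     # names of successor tasks


def static_level(tasks: dict, name: str, memo: dict) -> float:
    """Longest path of execution costs from this task to any exit task."""
    if name in memo:
        return memo[name]
    t = tasks[name]
    level = t.cost + max((static_level(tasks, s, memo) for s in t.succs),
                         default=0.0)
    memo[name] = level
    return level


def schedule_step(tasks: dict, ready: list, proc_free: list) -> tuple:
    """Pick the (task, processor) pair with the highest dynamic level:
    DL(t, p) = static_level(t) - earliest_start(t, p)."""
    memo = {}
    best = None
    for name in ready:
        for p, free_at in enumerate(proc_free):
            dl = static_level(tasks, name, memo) - free_at
            if best is None or dl > best[0]:
                best = (dl, name, p)
    return best  # (dynamic level, task name, processor index)


if __name__ == "__main__":
    # Tiny task graph: A precedes B and C.
    tasks = {
        "A": Task("A", 2.0, ["B", "C"]),
        "B": Task("B", 3.0),
        "C": Task("C", 1.0),
    }
    print(schedule_step(tasks, ready=["A"], proc_free=[0.0, 0.0]))
```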
Published in: Proceedings 16th Annual International Symposium on High Performance Computing Systems and Applications
Date of Conference: 16-19 June 2002
Date Added to IEEE Xplore: 07 August 2002
Print ISBN: 0-7695-1626-2