
Accelerating Training Data Generation Using Optimal Parallelization and Thread Counts



Abstract:

This paper presents a method for accelerating training data generation by optimizing the thread allocation and the number of simulations run in parallel in commercially available numerical simulation software on consumer-level CPUs. The method accounts for hardware thread-management facilities and for disparate capabilities among CPU cores. It scales with the number of CPU cores, and a demonstrated speed-up in data generation throughput of approximately 550% over relevant previous work is reported in its support. In general, the proposed method adds a relatively minor pre-processing step that enables drastic throughput improvements in the subsequent dataset generation, with direct application to neural network development.
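
The abstract does not spell out the procedure, but the overall shape it describes, a short calibration pass followed by the full generation run, can be sketched. The sketch below is a hypothetical illustration, not the paper's implementation: the solver CLI, function names, pilot workload, and exhaustive configuration sweep are all assumptions. It times a small pilot batch under candidate (parallel instances, threads per instance) pairs whose total thread demand fits the machine, then reuses the throughput-maximizing pair for dataset generation.

```python
"""Illustrative sketch only: calibrate a (parallel-instances,
threads-per-instance) pair on a short pilot workload, then reuse it.
All names here are hypothetical stand-ins for the paper's method."""

import itertools
import os
import subprocess
import time
from concurrent.futures import ProcessPoolExecutor


def run_simulation(case_id: int, threads: int) -> None:
    # Hypothetical wrapper around a commercial solver's CLI; the real
    # invocation and its thread-count flag would replace this.
    subprocess.run(
        ["solver", "--case", str(case_id), "--threads", str(threads)],
        check=True,
    )


def throughput(n_parallel: int, threads: int, pilot_cases: range) -> float:
    # Cases completed per second when n_parallel solver instances run
    # concurrently, each restricted to `threads` threads.
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=n_parallel) as pool:
        list(pool.map(run_simulation, pilot_cases,
                      itertools.repeat(threads)))
    return len(pilot_cases) / (time.perf_counter() - start)


def calibrate(pilot_cases: range) -> tuple[int, int]:
    # The "relatively minor pre-processing step": sweep configurations
    # whose total thread demand fits the logical core count. A coarser
    # grid would be used in practice to keep calibration cheap.
    cores = os.cpu_count() or 1
    candidates = [(p, t)
                  for p in range(1, cores + 1)
                  for t in range(1, cores + 1)
                  if p * t <= cores]
    return max(candidates,
               key=lambda pt: throughput(pt[0], pt[1], pilot_cases))


if __name__ == "__main__":
    best_parallel, best_threads = calibrate(range(8))  # short pilot sweep
    print(f"best config: {best_parallel} instances x {best_threads} threads")
    # The full training-data generation run would then reuse this pair.
```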
Date of Conference: 25-29 September 2023
Date Added to IEEE Xplore: 25 December 2023
Conference Location: Boston, MA, USA
