Abstract
As supercomputers rapidly grow in scale, reliability increasingly dominates system availability. Existing fault tolerance mechanisms, such as periodic checkpointing and process redundancy, cannot solve this problem effectively. To address this issue, we present a new fault tolerance framework using process replication and prefetching (FTRP), which combines the benefits of proactive and reactive mechanisms. FTRP incorporates a novel cost model and a new proactive fault tolerance mechanism to improve application execution efficiency. The cost model, called the 'work-most' (WM) model, makes runtime decisions that adaptively choose an action from a set of fault tolerance mechanisms based on failure prediction results and application status. Analogous to program locality, we observe, for the first time, a failure locality phenomenon in supercomputers. Building on this locality, the new proactive mechanism combines process replication with process prefetching, largely avoiding the losses caused by failures regardless of whether they have been predicted. Simulations with real failure traces demonstrate that FTRP outperforms existing fault tolerance mechanisms, improving application efficiency by up to 10% under common failure prediction accuracies, and remains effective for petascale systems and beyond.
Additional information
Project supported by the National Natural Science Foundation of China (Nos. 61272141, 61120106005, and 61303068) and the National High-Tech R&D Program of China (No. 2012AA01A301)
Cite this article
Hu, W., Liu, GM. & Jiang, YH. FTRP: a new fault tolerance framework using process replication and prefetching for high-performance computing. Frontiers Inf Technol Electronic Eng 19, 1273–1290 (2018). https://doi.org/10.1631/FITEE.1601450
Key words
- High-performance computing
- Proactive fault tolerance
- Failure locality
- Process replication
- Process prefetching