Using Gordon to accelerate LHC science

ABSTRACT
The discovery of the Higgs boson at the Large Hadron Collider (LHC) garnered international attention. Beyond this singular result, the LHC may also uncover other fundamental physics, including evidence of dark matter. Much of this research relies on data from one of the LHC experiments, the Compact Muon Solenoid (CMS). During the 2012 LHC operational period, the CMS experiment captured data at a higher rate than originally planned, and the excess data was "parked" to await processing on CMS computing resources. Although CMS commands significant compute resources of its own, partnering with SDSC to incorporate the Gordon supercomputer into the CMS workflow allowed analysis of the parked data to be completed months ahead of schedule. This lets scientists review the results sooner and could help guide future plans for the LHC.