- Sponsor: SIGARCH
It is our great pleasure to welcome you to the first installment of the Workshop on the Science of Cyberinfrastructure: Research, Experience, Applications and Models (SCREAM '15).
There is a need for comprehensive, balanced, and flexible distributed cyberinfrastructure (DCI) in support of science and engineering applications. A fundamental technical challenge is to support a broad range of application usage scenarios and modalities on platforms with varying performance. The current generation of DCI has produced important scientific results as well as advances in the state of practice of delivering DCI as services to the broadly defined user community. However, a complete conceptual framework of DCI design principles remains conspicuously absent, and this missing framework prevents an objective assessment of important technical and policy considerations.
The SCREAM workshop aims to address this gap: specifically, to understand, through a combination of experience, application requirements, and conceptual models, how best to create a conceptual framework for the objective design and assessment of distributed cyberinfrastructure. In other words, it aims to build a science of cyberinfrastructure upon what has hitherto been a purely empirical approach to cyberinfrastructure design and practice. The SCREAM workshop is interested in all areas that further this objective, in particular the interaction of multiple cyberinfrastructure components and systems (distributed computing, broadly defined), including academic and commercial production systems and research testbeds.
Significant effort has been invested in the delivery and practice of DCI with different objectives and varying capabilities, and both existing offerings (Open Science Grid, XSEDE, GENI, EGI, PRACE, DAS-n) and previous ones have yielded valuable information. Enough experience now exists to reflect on what has worked and why, and why some approaches have failed. We therefore believe the time is right to build on these lessons towards a next generation of DCI that is designed and architected for well-defined usage modes, performance, and capabilities.
Although the workshop is primarily targeted at computing scientists, we believe it will have an impact beyond computing specialists: production cyberinfrastructure shapes the effectiveness of the broader science and engineering endeavors it supports, so understanding the principles and science of cyberinfrastructure matters well beyond its purely computational aspects. The workshop welcomed technical contributions in the form of research results, experience papers, and vision papers.
The call for papers attracted 12 submissions with authors from Asia, Canada, Australia, Europe, and the United States. The papers received an average of 4.9 reviews each. As a new workshop seeking to promote discussion, we were generous in our decisions, accepting 8 of the papers, for a 66.7% acceptance rate.
We also encourage attendees to join the keynote, the invited talk, and the closing panel. These valuable and insightful talks can guide us to a better understanding of the future:
Keynote: What can science cyberinfrastructure learn from commercial IT?, Ian Foster, University of Chicago & Argonne National Laboratory
Invited Talk: Revisiting the Anatomy and Physiology of the Grid, Chris Mattmann, NASA Jet Propulsion Laboratory
Panel: Designing Distributed Computing Infrastructure for Seamless Multi-site Execution
Proceedings Downloads
Lessons from Industry for Science Cyberinfrastructure: Simplicity, Scale, and Sustainability via SaaS/PaaS
Commercial information technology has changed dramatically over the past decade, with profound consequences for both software developers and software consumers. Software-as-a-service (SaaS) enables remote use of powerful capabilities, from accounting ...
Dynamic Provisioning of Data Intensive Computing Middleware Frameworks: A Case Study
Big data has become an important asset for industry, and academic disciplines now utilize large-scale data in their research. This fourth paradigm of scientific research has led to the inclusion of data management, processing, and analytic tools into ...
Achieving Formal Parallel Program Debugging by Incentivizing CS/HPC Collaborative Tool Development
Many disruptive changes are happening in the arena of parallel computing, including the use of multiple compute element types (CPUs and GPUs), memory and interconnect types, as well as multiple concurrency models. In the face of these changes, ...
Apache Airavata as a Laboratory: Architecture and Case Study for Component-Based Gateway Middleware
Science gateways are more than user interfaces to computational grids and clouds. Gateways are middleware in their own right, providing flexible, lightweight federations of heterogenous collections of computing resources (such as campus clusters, ...
A Revisiting of the Anatomy and Physiology of the Grid
The "Grid" as defined by Foster and Kesselman was a unifying architecture that engendered a new generation of distributed computation, data sharing, and science. Along the way many "grid" technologies were developed, but their mapping to the principal ...
Authentication and Authorization Considerations for a Multi-tenant Service
Distributed cyberinfrastructure requires users (and machines) to perform some sort of authentication and authorization (together simply known as *auth*). In the early days of computing, authentication was performed with just a username and password ...
Data Centric Discovery with a Data-Oriented Architecture
Increasingly, scientific discovery is driven by the analysis, manipulation, organization, annotation, sharing, and reuse of high-value scientific data. While great attention has been given to the specifics of analyzing and mining data, we find that ...
Science Gateway Canvas: A business reference model for Science Gateways
Science Gateways (SGs) have emerged as systems that facilitate access to cyberinfrastructures. There is a growing interest in the exploitation and development of SGs. However, it remains challenging to understand and design SG with the required ...
Jetstream: A Distributed Cloud Infrastructure for Underresourced Higher Education Communities
The US National Science Foundation (NSF) in 2015 awarded funding for a first-of-a-kind distributed cyberinfrastructure (DCI) system called Jetstream. Jetstream will be the NSF's first production cloud for general-purpose science and engineering research ...
Sustained Software for Cyberinfrastructure: Analyses of Successful Efforts with a Focus on NSF-funded Software
Reliable software that provides needed functionality is clearly essential for an effective distributed cyberinfrastructure (CI) that supports comprehensive, balanced, and flexible distributed CI. Effective distributed cyberinfrastructure, in turn, ...
Index Terms
- Proceedings of the 1st Workshop on The Science of Cyberinfrastructure: Research, Experience, Applications and Models
Acceptance Rates
| Year | Submitted | Accepted | Rate |
| --- | --- | --- | --- |
| SCREAM '15 | 12 | 8 | 67% |
| Overall | 12 | 8 | 67% |