ABSTRACT
Computing in space, e.g., in miniaturized satellites, requires dealing with special physical and boundary constraints, including a limited energy budget. These constraints impose strict operational conditions on the on-board data processing system and limit its capability to handle sophisticated workloads such as Machine Learning (ML). At the same time, the breakthroughs in ML based on Deep Neural Networks (DNNs) over the last decade promise innovative solutions that expand the functional capabilities of on-board data processing and drive the space industry forward. Due to these special requirements, performance- and power-efficient novel solutions and architectures for deploying ML via, e.g., FPGA-enabled SoCs, particularly Commercial-Off-The-Shelf (COTS) solutions, are gaining significant interest in the space industry. It is therefore essential to conduct extensive benchmarking as well as feasibility and efficiency analyses along several dimensions: such analyses require investigating options for programming and deployment as well as various real-world models and datasets. To this end, a research and development activity funded by the European Space Agency (ESA) General Support Technology Programme and led by Airbus Defence and Space GmbH aims to develop an ML Application Benchmark (MLAB) that covers the benchmarking aspects mentioned above.
In this invited paper, we provide an overview of the MLAB project and discuss development and progress in various directions, including framework analyses and model and dataset investigation. We elaborate on a benchmarking methodology developed in the context of this project to enable the analysis of various hardware platforms and options. Finally, we focus on a particular use case, aircraft detection, as a real-world example and provide an analysis of various performance and accuracy indicators, including accuracy, throughput, latency, and power consumption.
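The abstract does not describe how MLAB collects its indicators; as an illustration only, the sketch below shows one generic way to measure two of them, per-inference latency and throughput, for an arbitrary inference callable. The function name `benchmark_inference` and its parameters are hypothetical and not part of the MLAB project; power and accuracy measurement, which depend on the target platform and dataset, are out of scope here.

```python
import statistics
import time


def benchmark_inference(infer, inputs, warmup=10, runs=100):
    """Measure per-sample latency and aggregate throughput of an
    inference callable over a list of input samples.

    This is a platform-agnostic sketch; it does not reflect MLAB's
    actual methodology.
    """
    # Warm-up iterations exclude one-time costs (model loading,
    # JIT compilation, cache warm-up) from the measurement.
    for _ in range(warmup):
        infer(inputs[0])

    latencies = []
    for i in range(runs):
        sample = inputs[i % len(inputs)]
        start = time.perf_counter()
        infer(sample)
        latencies.append(time.perf_counter() - start)

    ordered = sorted(latencies)
    return {
        "mean_latency_s": statistics.mean(latencies),
        # Approximate 99th-percentile latency by rank in the sorted list.
        "p99_latency_s": ordered[max(0, int(0.99 * len(ordered)) - 1)],
        "throughput_ips": runs / sum(latencies),  # inferences per second
    }
```

A trivial usage example, substituting a dummy computation for a real model: `benchmark_inference(lambda x: x * 2, [1.0, 2.0, 3.0])` returns a dictionary with the three indicators above.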
Index Terms
- Benchmarking and feasibility aspects of machine learning in space systems