ABSTRACT
For next-generation workloads such as AI and ML to power the next evolutionary leap in business transformation and research innovation, they must be able to do more. AI can support enterprise-scale requirements, provided the data is there. Without a modern data infrastructure that can rapidly process massive volumes of data, AI cannot deliver on its full promise and potential.
Recent research has found that AI GPU accelerators can spend up to 70% of their time idle, waiting for data. These workloads require a new kind of data infrastructure, purpose-built to feed massive quantities of data at low latency. Speed is critically important to the AI-fueled enterprise: efficiently feeding the right data to the system is the difference between a project taking months and taking mere days.
In this workshop discussion, WEKA CTO Shimon Ben-David will explain why WEKA opted to architect a modern data plane to support next-generation workloads and discuss how the WEKA Data Platform is helping organizations achieve first-to-market results with their AI and ML deployments.
Index Terms
- Why a Data Plane Architecture is Critical for Optimizing Next-Generation Workloads