eScholarship
Open Access Publications from the University of California

Use It or Lose It: Cheap Compute Everywhere

Abstract

Moore’s Law is tapering off, but FLOPS per dollar continues to grow. Inexpensive CPUs are appearing everywhere from the network to storage, both as an effective way of managing and deploying hardware and firmware and as a means of providing services close to the data path. Examples include the ARM cores within Mellanox BlueField and Broadcom Stingray DPUs, programmable switches, and compute-in-storage devices. This additional processing power can be useful for (1) enabling higher throughput, (2) decreasing or hiding latency, (3) increasing power and cost efficiency, and (4) alleviating contention for oversubscribed resources. To make these resources available to a wide range of services and applications, we must first develop: (1) an understanding of the strengths and weaknesses of the hardware, (2) an understanding of how portions of a workload might be decomposed into tasks for offload, and (3) abstractions that allow code portability across heterogeneous components. We examine current hardware trends through a survey of existing and original work, showing where new compute-in-network devices show promise, where they fall short, and how HPC might evolve to take advantage of them.
