Enabling One-size-fits-all Compilation Optimization across Machine Learning Computers for Inference
Abstract
Machine Learning Computers (MLCs) with tensor functional units (e.g., NVIDIA's Tensor Core, Google's TPU, and Habana's Tensor Processor Core) have proliferated in recent years. The broad diversity of MLCs makes it hard to deploy machine learning workloads with optimized performance. Although deep learning compilers (e.g., TVM) are effective at producing optimized code for different hardware back-ends, when deploying to a new MLC, it is...
Paper Details
Published Date
Jan 1, 2021
Pages
1 - 1