# Title: Multi-device, Multi-tenant Model Selection with GP-EI

Abstract: Bayesian optimization is the core technique behind the emergence of AutoML, which holds the promise of automatically searching for models and hyperparameters to make machine learning techniques more accessible. As such services move to the cloud, we ask: *when multiple AutoML users share the same computational infrastructure, how should we allocate resources to maximize the "global happiness" of all users?*
We focus on GP-EI, one of the most popular algorithms for automatic model selection and hyperparameter tuning, and develop a novel multi-device, multi-tenant extension that is aware of *multiple* computation devices and *multiple* users sharing the same set of devices. Theoretically, given $N$ users and $M$ devices, we obtain a regret bound of $O((\mathrm{MIU}(T,K) + M)\frac{N^2}{M})$, where $\mathrm{MIU}(T,K)$ refers to the maximal incremental uncertainty up to time $T$ for the covariance matrix $K$. Empirically, we evaluate our algorithm on two applications of automatic model selection, and show that it significantly outperforms the strategy of serving users independently. Moreover, when multiple computation devices are available, we achieve near-linear speedup when the number of users is much larger than the number of devices.
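For context, GP-EI scores each candidate model or hyperparameter configuration by the expected improvement (EI) of the Gaussian-process posterior over the best observation so far. The sketch below shows the standard single-user EI acquisition function only; it is not the paper's multi-tenant extension, and the exploration parameter `xi` and the maximization convention are illustrative assumptions.

```python
import math

def expected_improvement(mu, sigma, best, xi=0.01):
    """Standard EI acquisition for maximization (illustrative sketch).

    mu, sigma -- GP posterior mean and std. dev. at a candidate point
    best      -- best objective value observed so far
    xi        -- exploration margin (assumed default, not from the paper)
    """
    if sigma <= 0.0:
        # A fully observed point offers no expected improvement.
        return 0.0
    z = (mu - best - xi) / sigma
    # Standard normal CDF and PDF via math.erf / math.exp.
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    # Closed-form EI: exploitation term + exploration term.
    return (mu - best - xi) * cdf + sigma * pdf
```

In the single-tenant setting, the next configuration to evaluate is simply the candidate maximizing this score; the multi-device, multi-tenant question studied here is how to schedule such evaluations across users and devices.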
Subjects: Learning (cs.LG); Distributed, Parallel, and Cluster Computing (cs.DC); Machine Learning (stat.ML)

Cite as: arXiv:1803.06561 [cs.LG] (or arXiv:1803.06561v1 [cs.LG] for this version)

## Submission history

From: Chen Yu
[v1] Sat, 17 Mar 2018 19:56:18 GMT (2085kb,D)