- title: 'A New Representation of Successor Features for Transfer across Dissimilar Environments' abstract: 'Transfer in reinforcement learning is usually achieved through generalisation across tasks. Whilst many studies have investigated transferring knowledge when the reward function changes, they have assumed that the dynamics of the environments remain consistent. Many real-world RL problems require transfer among environments with different dynamics. To address this problem, we propose an approach based on successor features in which we model successor feature functions with Gaussian Processes, permitting the source successor features to be treated as noisy measurements of the target successor feature function. Our theoretical analysis proves the convergence of this approach as well as the bounded error on modelling successor feature functions with Gaussian Processes in environments with both different dynamics and rewards. We demonstrate our method on benchmark datasets and show that it outperforms current baselines.' volume: 139 URL: https://proceedings.mlr.press/v139/abdolshah21a.html PDF: http://proceedings.mlr.press/v139/abdolshah21a/abdolshah21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-abdolshah21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Majid family: Abdolshah - given: Hung family: Le - given: Thommen Karimpanal family: George - given: Sunil family: Gupta - given: Santu family: Rana - given: Svetha family: Venkatesh editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1-9 id: abdolshah21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1 lastpage: 9 published: 2021-07-01 00:00:00 +0000 - title: 'Massively Parallel and Asynchronous Tsetlin Machine Architecture Supporting Almost Constant-Time Scaling' abstract: 'Using logical clauses to represent patterns, Tsetlin Machines (TMs) have recently obtained competitive performance in terms of accuracy, memory footprint, energy, and learning speed on several benchmarks. Each TM clause votes for or against a particular class, with classification resolved using a majority vote. While the evaluation of clauses is fast, being based on binary operators, the voting makes it necessary to synchronize the clause evaluation, impeding parallelization. In this paper, we propose a novel scheme for desynchronizing the evaluation of clauses, eliminating the voting bottleneck. In brief, every clause runs in its own thread for massive native parallelism. For each training example, we keep track of the class votes obtained from the clauses in local voting tallies. The local voting tallies allow us to detach the processing of each clause from the rest of the clauses, supporting decentralized learning. This means that, most of the time, the TM will operate on outdated voting tallies. We evaluated the proposed parallelization across diverse learning tasks and it turns out that our decentralized TM learning algorithm copes well with working on outdated data, resulting in no significant loss in learning accuracy. Furthermore, we show that the approach provides up to 50 times faster learning. Finally, learning time is almost constant for reasonable clause amounts (employing from 20 to 7,000 clauses on a Tesla V100 GPU). For sufficiently large clause numbers, computation time increases approximately proportionally.
Our parallel and asynchronous architecture thus allows processing of more massive datasets and operating with more clauses for higher accuracy.' volume: 139 URL: https://proceedings.mlr.press/v139/abeyrathna21a.html PDF: http://proceedings.mlr.press/v139/abeyrathna21a/abeyrathna21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-abeyrathna21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kuruge Darshana family: Abeyrathna - given: Bimal family: Bhattarai - given: Morten family: Goodwin - given: Saeed Rahimi family: Gorji - given: Ole-Christoffer family: Granmo - given: Lei family: Jiao - given: Rupsa family: Saha - given: Rohan K. family: Yadav editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10-20 id: abeyrathna21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10 lastpage: 20 published: 2021-07-01 00:00:00 +0000 - title: 'Debiasing Model Updates for Improving Personalized Federated Training' abstract: 'We propose a novel method for federated learning that is customized specifically to the objective of a given edge device. In our proposed method, a server trains a global meta-model by collaborating with devices without actually sharing data. The trained global meta-model is then personalized locally by each device to meet its specific objective. Different from the conventional federated learning setting, training customized models for each device is hindered by both the inherent data biases of the various devices, as well as the requirements imposed by the federated architecture. We propose gradient correction methods leveraging prior works, and explicitly de-bias the meta-model in the distributed heterogeneous data setting to learn personalized device models. We present convergence guarantees of our method for strongly convex, convex and nonconvex meta objectives. We empirically evaluate the performance of our method on benchmark datasets and demonstrate significant communication savings.' volume: 139 URL: https://proceedings.mlr.press/v139/acar21a.html PDF: http://proceedings.mlr.press/v139/acar21a/acar21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-acar21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Durmus Alp Emre family: Acar - given: Yue family: Zhao - given: Ruizhao family: Zhu - given: Ramon family: Matas - given: Matthew family: Mattina - given: Paul family: Whatmough - given: Venkatesh family: Saligrama editor: - given: Marina family: Meila - given: Tong family: Zhang page: 21-31 id: acar21a issued: date-parts: - 2021 - 7 - 1 firstpage: 21 lastpage: 31 published: 2021-07-01 00:00:00 +0000 - title: 'Memory Efficient Online Meta Learning' abstract: 'We propose a novel algorithm for online meta learning where task instances are sequentially revealed with limited supervision and a learner is expected to meta learn them in each round, so as to allow the learner to customize a task-specific model rapidly with little task-level supervision. A fundamental concern arising in online meta-learning is the scalability of memory as more tasks are viewed over time. Heretofore, prior works have allowed for perfect recall leading to linear increase in memory with time. 
Unlike prior works, our method allows prior task instances to be deleted. We propose to leverage prior task instances by means of a fixed-size state-vector, which is updated sequentially. Our theoretical analysis demonstrates that our proposed memory efficient online learning (MOML) method suffers sub-linear regret with convex loss functions and sub-linear local regret for nonconvex losses. On benchmark datasets we show that our method can outperform prior works even though they allow for perfect recall.' volume: 139 URL: https://proceedings.mlr.press/v139/acar21b.html PDF: http://proceedings.mlr.press/v139/acar21b/acar21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-acar21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Durmus Alp Emre family: Acar - given: Ruizhao family: Zhu - given: Venkatesh family: Saligrama editor: - given: Marina family: Meila - given: Tong family: Zhang page: 32-42 id: acar21b issued: date-parts: - 2021 - 7 - 1 firstpage: 32 lastpage: 42 published: 2021-07-01 00:00:00 +0000 - title: 'Robust Testing and Estimation under Manipulation Attacks' abstract: 'We study robust testing and estimation of discrete distributions in the strong contamination model. Our results cover both the centralized setting and the distributed setting with general local information constraints including communication and LDP constraints. Our technique relates the strength of manipulation attacks to the earth-mover distance using Hamming distance as the metric between messages (samples) from the users. In the centralized setting, we provide optimal error bounds for both learning and testing. Our lower bounds under local information constraints build on the recent lower bound methods in distributed inference. In the communication constrained setting, we develop novel algorithms based on random hashing and an L1-L1 isometry.' volume: 139 URL: https://proceedings.mlr.press/v139/acharya21a.html PDF: http://proceedings.mlr.press/v139/acharya21a/acharya21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-acharya21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jayadev family: Acharya - given: Ziteng family: Sun - given: Huanyu family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 43-53 id: acharya21a issued: date-parts: - 2021 - 7 - 1 firstpage: 43 lastpage: 53 published: 2021-07-01 00:00:00 +0000 - title: 'GP-Tree: A Gaussian Process Classifier for Few-Shot Incremental Learning' abstract: 'Gaussian processes (GPs) are non-parametric, flexible models that work well in many tasks. Combining GPs with deep learning methods via deep kernel learning (DKL) is especially compelling due to the strong representational power induced by the network. However, inference in GPs, whether with or without DKL, can be computationally challenging on large datasets. Here, we propose GP-Tree, a novel method for multi-class classification with Gaussian processes and DKL. We develop a tree-based hierarchical model in which each internal node of the tree fits a GP to the data using the Pólya-Gamma augmentation scheme. As a result, our method scales well with both the number of classes and data size.
We demonstrate the effectiveness of our method against other Gaussian process training baselines, and we show how our general GP approach achieves improved accuracy on standard incremental few-shot learning benchmarks.' volume: 139 URL: https://proceedings.mlr.press/v139/achituve21a.html PDF: http://proceedings.mlr.press/v139/achituve21a/achituve21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-achituve21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Idan family: Achituve - given: Aviv family: Navon - given: Yochai family: Yemini - given: Gal family: Chechik - given: Ethan family: Fetaya editor: - given: Marina family: Meila - given: Tong family: Zhang page: 54-65 id: achituve21a issued: date-parts: - 2021 - 7 - 1 firstpage: 54 lastpage: 65 published: 2021-07-01 00:00:00 +0000 - title: 'f-Domain Adversarial Learning: Theory and Algorithms' abstract: 'Unsupervised domain adaptation is used in many machine learning applications where, during training, a model has access to unlabeled data in the target domain, and a related labeled dataset. In this paper, we introduce a novel and general domain-adversarial framework. Specifically, we derive a novel generalization bound for domain adaptation that exploits a new measure of discrepancy between distributions based on a variational characterization of f-divergences. It recovers the theoretical results from Ben-David et al. (2010a) as a special case and supports divergences used in practice. Based on this bound, we derive a new algorithmic framework that introduces a key correction in the original adversarial training method of Ganin et al. (2016). We show that many regularizers and ad-hoc objectives introduced in recent years in this framework are then not required to achieve performance comparable to (if not better than) state-of-the-art domain-adversarial methods. Experimental analysis conducted on real-world natural language and computer vision datasets shows that our framework outperforms existing baselines, and obtains the best results for f-divergences that were not considered previously in domain-adversarial learning.' volume: 139 URL: https://proceedings.mlr.press/v139/acuna21a.html PDF: http://proceedings.mlr.press/v139/acuna21a/acuna21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-acuna21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: David family: Acuna - given: Guojun family: Zhang - given: Marc T. family: Law - given: Sanja family: Fidler editor: - given: Marina family: Meila - given: Tong family: Zhang page: 66-75 id: acuna21a issued: date-parts: - 2021 - 7 - 1 firstpage: 66 lastpage: 75 published: 2021-07-01 00:00:00 +0000 - title: 'Towards Rigorous Interpretations: a Formalisation of Feature Attribution' abstract: 'Feature attribution is often loosely presented as the process of selecting a subset of relevant features as a rationale of a prediction. Task-dependent by nature, precise definitions of "relevance" encountered in the literature are however not always consistent. This lack of clarity stems from the fact that we usually do not have access to any notion of ground-truth attribution and from a more general debate on what good interpretations are.
In this paper we propose to formalise feature selection/attribution based on the concept of relaxed functional dependence. In particular, we extend our notions to the instance-wise setting and derive necessary properties for candidate selection solutions, while leaving room for task-dependence. By computing ground-truth attributions on synthetic datasets, we evaluate many state-of-the-art attribution methods and show that, even when optimised, some fail to verify the proposed properties and provide wrong solutions.' volume: 139 URL: https://proceedings.mlr.press/v139/afchar21a.html PDF: http://proceedings.mlr.press/v139/afchar21a/afchar21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-afchar21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Darius family: Afchar - given: Vincent family: Guigue - given: Romain family: Hennequin editor: - given: Marina family: Meila - given: Tong family: Zhang page: 76-86 id: afchar21a issued: date-parts: - 2021 - 7 - 1 firstpage: 76 lastpage: 86 published: 2021-07-01 00:00:00 +0000 - title: 'Acceleration via Fractal Learning Rate Schedules' abstract: 'In practical applications of iterative first-order optimization, the learning rate schedule remains notoriously difficult to understand and expensive to tune. We demonstrate the presence of these subtleties even in the innocuous case when the objective is a convex quadratic. We reinterpret an iterative algorithm from the numerical analysis literature as what we call the Chebyshev learning rate schedule for accelerating vanilla gradient descent, and show that the problem of mitigating instability leads to a fractal ordering of step sizes. We provide some experiments to challenge conventional beliefs about stable learning rates in deep learning: the fractal schedule enables training to converge with locally unstable updates which make negative progress on the objective.' volume: 139 URL: https://proceedings.mlr.press/v139/agarwal21a.html PDF: http://proceedings.mlr.press/v139/agarwal21a/agarwal21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-agarwal21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Naman family: Agarwal - given: Surbhi family: Goel - given: Cyril family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 87-99 id: agarwal21a issued: date-parts: - 2021 - 7 - 1 firstpage: 87 lastpage: 99 published: 2021-07-01 00:00:00 +0000 - title: 'A Regret Minimization Approach to Iterative Learning Control' abstract: 'We consider the setting of iterative learning control, or model-based policy learning in the presence of uncertain, time-varying dynamics. In this setting, we propose a new performance metric, planning regret, which replaces the standard stochastic uncertainty assumptions with worst case regret. Based on recent advances in non-stochastic control, we design a new iterative algorithm for minimizing planning regret that is more robust to model mismatch and uncertainty. We provide theoretical and empirical evidence that the proposed algorithm outperforms existing methods on several benchmarks.' 
volume: 139 URL: https://proceedings.mlr.press/v139/agarwal21b.html PDF: http://proceedings.mlr.press/v139/agarwal21b/agarwal21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-agarwal21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Naman family: Agarwal - given: Elad family: Hazan - given: Anirudha family: Majumdar - given: Karan family: Singh editor: - given: Marina family: Meila - given: Tong family: Zhang page: 100-109 id: agarwal21b issued: date-parts: - 2021 - 7 - 1 firstpage: 100 lastpage: 109 published: 2021-07-01 00:00:00 +0000 - title: 'Towards the Unification and Robustness of Perturbation and Gradient Based Explanations' abstract: 'As machine learning black boxes are increasingly being deployed in critical domains such as healthcare and criminal justice, there has been a growing emphasis on developing techniques for explaining these black boxes in a post hoc manner. In this work, we analyze two popular post hoc interpretation techniques: SmoothGrad which is a gradient based method, and a variant of LIME which is a perturbation based method. More specifically, we derive explicit closed form expressions for the explanations output by these two methods and show that they both converge to the same explanation in expectation, i.e., when the number of perturbed samples used by these methods is large. We then leverage this connection to establish other desirable properties, such as robustness, for these techniques. We also derive finite sample complexity bounds for the number of perturbations required for these methods to converge to their expected explanation. Finally, we empirically validate our theory using extensive experimentation on both synthetic and real-world datasets.' volume: 139 URL: https://proceedings.mlr.press/v139/agarwal21c.html PDF: http://proceedings.mlr.press/v139/agarwal21c/agarwal21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-agarwal21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sushant family: Agarwal - given: Shahin family: Jabbari - given: Chirag family: Agarwal - given: Sohini family: Upadhyay - given: Steven family: Wu - given: Himabindu family: Lakkaraju editor: - given: Marina family: Meila - given: Tong family: Zhang page: 110-119 id: agarwal21c issued: date-parts: - 2021 - 7 - 1 firstpage: 110 lastpage: 119 published: 2021-07-01 00:00:00 +0000 - title: 'Label Inference Attacks from Log-loss Scores' abstract: 'Log-loss (also known as cross-entropy loss) metric is ubiquitously used across machine learning applications to assess the performance of classification algorithms. In this paper, we investigate the problem of inferring the labels of a dataset from single (or multiple) log-loss score(s), without any other access to the dataset. Surprisingly, we show that for any finite number of label classes, it is possible to accurately infer the labels of the dataset from the reported log-loss score of a single carefully constructed prediction vector if we allow arbitrary precision arithmetic. Additionally, we present label inference algorithms (attacks) that succeed even under addition of noise to the log-loss scores and under limited precision arithmetic. 
All our algorithms rely on ideas from number theory and combinatorics and require no model training. We run experimental simulations on some real datasets to demonstrate the ease of running these attacks in practice.' volume: 139 URL: https://proceedings.mlr.press/v139/aggarwal21a.html PDF: http://proceedings.mlr.press/v139/aggarwal21a/aggarwal21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-aggarwal21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Abhinav family: Aggarwal - given: Shiva family: Kasiviswanathan - given: Zekun family: Xu - given: Oluwaseyi family: Feyisetan - given: Nathanael family: Teissier editor: - given: Marina family: Meila - given: Tong family: Zhang page: 120-129 id: aggarwal21a issued: date-parts: - 2021 - 7 - 1 firstpage: 120 lastpage: 129 published: 2021-07-01 00:00:00 +0000 - title: 'Deep kernel processes' abstract: 'We define deep kernel processes in which positive definite Gram matrices are progressively transformed by nonlinear kernel functions and by sampling from (inverse) Wishart distributions. Remarkably, we find that deep Gaussian processes (DGPs), Bayesian neural networks (BNNs), infinite BNNs, and infinite BNNs with bottlenecks can all be written as deep kernel processes. For DGPs the equivalence arises because the Gram matrix formed by the inner product of features is Wishart distributed, and as we show, standard isotropic kernels can be written entirely in terms of this Gram matrix — we do not need knowledge of the underlying features. We define a tractable deep kernel process, the deep inverse Wishart process, and give a doubly-stochastic inducing-point variational inference scheme that operates on the Gram matrices, not on the features, as in DGPs. We show that the deep inverse Wishart process gives superior performance to DGPs and infinite BNNs on fully-connected baselines.' volume: 139 URL: https://proceedings.mlr.press/v139/aitchison21a.html PDF: http://proceedings.mlr.press/v139/aitchison21a/aitchison21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-aitchison21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Laurence family: Aitchison - given: Adam family: Yang - given: Sebastian W. family: Ober editor: - given: Marina family: Meila - given: Tong family: Zhang page: 130-140 id: aitchison21a issued: date-parts: - 2021 - 7 - 1 firstpage: 130 lastpage: 140 published: 2021-07-01 00:00:00 +0000 - title: 'How Does Loss Function Affect Generalization Performance of Deep Learning? Application to Human Age Estimation' abstract: 'Good generalization performance across a wide variety of domains caused by many external and internal factors is the fundamental goal of any machine learning algorithm. This paper theoretically proves that the choice of loss function matters for improving the generalization performance of deep learning-based systems. By deriving the generalization error bound for deep neural models trained by stochastic gradient descent, we pinpoint the characteristics of the loss function that is linked to the generalization error and can therefore be used for guiding the loss function selection process. In summary, our main statement in this paper is: choose a stable loss function, generalize better. 
Focusing on human age estimation from the face which is a challenging topic in computer vision, we then propose a novel loss function for this learning problem. We theoretically prove that the proposed loss function achieves stronger stability, and consequently a tighter generalization error bound, compared to the other common loss functions for this problem. We have supported our findings theoretically, and demonstrated the merits of the guidance process experimentally, achieving significant improvements.' volume: 139 URL: https://proceedings.mlr.press/v139/akbari21a.html PDF: http://proceedings.mlr.press/v139/akbari21a/akbari21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-akbari21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ali family: Akbari - given: Muhammad family: Awais - given: Manijeh family: Bashar - given: Josef family: Kittler editor: - given: Marina family: Meila - given: Tong family: Zhang page: 141-151 id: akbari21a issued: date-parts: - 2021 - 7 - 1 firstpage: 141 lastpage: 151 published: 2021-07-01 00:00:00 +0000 - title: 'On Learnability via Gradient Method for Two-Layer ReLU Neural Networks in Teacher-Student Setting' abstract: 'Deep learning empirically achieves high performance in many applications, but its training dynamics has not been fully understood theoretically. In this paper, we explore theoretical analysis on training two-layer ReLU neural networks in a teacher-student regression model, in which a student network learns an unknown teacher network through its outputs. We show that with a specific regularization and sufficient over-parameterization, the student network can identify the parameters of the teacher network with high probability via gradient descent with a norm dependent stepsize even though the objective function is highly non-convex. The key theoretical tool is the measure representation of the neural networks and a novel application of a dual certificate argument for sparse estimation on a measure space. We analyze the global minima and global convergence property in the measure space.' volume: 139 URL: https://proceedings.mlr.press/v139/akiyama21a.html PDF: http://proceedings.mlr.press/v139/akiyama21a/akiyama21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-akiyama21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shunta family: Akiyama - given: Taiji family: Suzuki editor: - given: Marina family: Meila - given: Tong family: Zhang page: 152-162 id: akiyama21a issued: date-parts: - 2021 - 7 - 1 firstpage: 152 lastpage: 162 published: 2021-07-01 00:00:00 +0000 - title: 'Slot Machines: Discovering Winning Combinations of Random Weights in Neural Networks' abstract: 'In contrast to traditional weight optimization in a continuous space, we demonstrate the existence of effective random networks whose weights are never updated. By selecting a weight among a fixed set of random values for each individual connection, our method uncovers combinations of random weights that match the performance of traditionally-trained networks of the same capacity. We refer to our networks as "slot machines" where each reel (connection) contains a fixed set of symbols (random values). 
Our backpropagation algorithm "spins" the reels to seek "winning" combinations, i.e., selections of random weight values that minimize the given loss. Quite surprisingly, we find that allocating just a few random values to each connection (e.g., 8 values per connection) yields highly competitive combinations despite being dramatically more constrained compared to traditionally learned weights. Moreover, finetuning these combinations often improves performance over the trained baselines. A randomly initialized VGG-19 with 8 values per connection contains a combination that achieves 91% test accuracy on CIFAR-10. Our method also achieves an impressive performance of 98.2% on MNIST for neural networks containing only random weights.' volume: 139 URL: https://proceedings.mlr.press/v139/aladago21a.html PDF: http://proceedings.mlr.press/v139/aladago21a/aladago21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-aladago21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Maxwell M family: Aladago - given: Lorenzo family: Torresani editor: - given: Marina family: Meila - given: Tong family: Zhang page: 163-174 id: aladago21a issued: date-parts: - 2021 - 7 - 1 firstpage: 163 lastpage: 174 published: 2021-07-01 00:00:00 +0000 - title: 'A large-scale benchmark for few-shot program induction and synthesis' abstract: 'A landmark challenge for AI is to learn flexible, powerful representations from small numbers of examples. On an important class of tasks, hypotheses in the form of programs provide extreme generalization capabilities from surprisingly few examples. However, whereas large natural few-shot learning image benchmarks have spurred progress in meta-learning for deep networks, there is no comparably big, natural program-synthesis dataset that can play a similar role. This is because, whereas images are relatively easy to label from internet meta-data or annotated by non-experts, generating meaningful input-output examples for program induction has proven hard to scale. In this work, we propose a new way of leveraging unit tests and natural inputs for small programs as meaningful input-output examples for each sub-program of the overall program. This allows us to create a large-scale naturalistic few-shot program-induction benchmark and propose new challenges in this domain. The evaluation of multiple program induction and synthesis algorithms points to shortcomings of current methods and suggests multiple avenues for future work.' 
volume: 139 URL: https://proceedings.mlr.press/v139/alet21a.html PDF: http://proceedings.mlr.press/v139/alet21a/alet21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-alet21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ferran family: Alet - given: Javier family: Lopez-Contreras - given: James family: Koppel - given: Maxwell family: Nye - given: Armando family: Solar-Lezama - given: Tomas family: Lozano-Perez - given: Leslie family: Kaelbling - given: Joshua family: Tenenbaum editor: - given: Marina family: Meila - given: Tong family: Zhang page: 175-186 id: alet21a issued: date-parts: - 2021 - 7 - 1 firstpage: 175 lastpage: 186 published: 2021-07-01 00:00:00 +0000 - title: 'Robust Pure Exploration in Linear Bandits with Limited Budget' abstract: 'We consider the pure exploration problem in the fixed-budget linear bandit setting. We provide a new algorithm that identifies the best arm with high probability while being robust to unknown levels of observation noise as well as to moderate levels of misspecification in the linear model. Our technique combines prior approaches to pure exploration in the multi-armed bandit problem with optimal experimental design algorithms to obtain both problem dependent and problem independent bounds. Our success probability is never worse than that of an algorithm that ignores the linear structure, but seamlessly takes advantage of such structure when possible. Furthermore, we only need the number of samples to scale with the dimension of the problem rather than the number of arms. We complement our theoretical results with empirical validation.' volume: 139 URL: https://proceedings.mlr.press/v139/alieva21a.html PDF: http://proceedings.mlr.press/v139/alieva21a/alieva21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-alieva21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ayya family: Alieva - given: Ashok family: Cutkosky - given: Abhimanyu family: Das editor: - given: Marina family: Meila - given: Tong family: Zhang page: 187-195 id: alieva21a issued: date-parts: - 2021 - 7 - 1 firstpage: 187 lastpage: 195 published: 2021-07-01 00:00:00 +0000 - title: 'Communication-Efficient Distributed Optimization with Quantized Preconditioners' abstract: 'We investigate fast and communication-efficient algorithms for the classic problem of minimizing a sum of strongly convex and smooth functions that are distributed among $n$ different nodes, which can communicate using a limited number of bits. Most previous communication-efficient approaches for this problem are limited to first-order optimization, and therefore have \emph{linear} dependence on the condition number in their communication complexity. We show that this dependence is not inherent: communication-efficient methods can in fact have sublinear dependence on the condition number. For this, we design and analyze the first communication-efficient distributed variants of preconditioned gradient descent for Generalized Linear Models, and for Newton’s method. Our results rely on a new technique for quantizing both the preconditioner and the descent direction at each step of the algorithms, while controlling their convergence rate. 
We also validate our findings experimentally, showing faster convergence and reduced communication relative to previous methods.' volume: 139 URL: https://proceedings.mlr.press/v139/alimisis21a.html PDF: http://proceedings.mlr.press/v139/alimisis21a/alimisis21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-alimisis21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Foivos family: Alimisis - given: Peter family: Davies - given: Dan family: Alistarh editor: - given: Marina family: Meila - given: Tong family: Zhang page: 196-206 id: alimisis21a issued: date-parts: - 2021 - 7 - 1 firstpage: 196 lastpage: 206 published: 2021-07-01 00:00:00 +0000 - title: 'Non-Exponentially Weighted Aggregation: Regret Bounds for Unbounded Loss Functions' abstract: 'We tackle the problem of online optimization with a general, possibly unbounded, loss function. It is well known that when the loss is bounded, the exponentially weighted aggregation strategy (EWA) leads to a regret in $\sqrt{T}$ after $T$ steps. In this paper, we study a generalized aggregation strategy, where the weights no longer depend exponentially on the losses. Our strategy is based on Follow The Regularized Leader (FTRL): we minimize the expected losses plus a regularizer, which here is a $\phi$-divergence. When the regularizer is the Kullback-Leibler divergence, we obtain EWA as a special case. Using alternative divergences enables unbounded losses, at the cost of a worse regret bound in some cases.' volume: 139 URL: https://proceedings.mlr.press/v139/alquier21a.html PDF: http://proceedings.mlr.press/v139/alquier21a/alquier21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-alquier21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Pierre family: Alquier editor: - given: Marina family: Meila - given: Tong family: Zhang page: 207-218 id: alquier21a issued: date-parts: - 2021 - 7 - 1 firstpage: 207 lastpage: 218 published: 2021-07-01 00:00:00 +0000 - title: 'Dataset Dynamics via Gradient Flows in Probability Space' abstract: 'Various machine learning tasks, from generative modeling to domain adaptation, revolve around the concept of dataset transformation and manipulation. While various methods exist for transforming unlabeled datasets, principled methods to do so for labeled (e.g., classification) datasets are missing. In this work, we propose a novel framework for dataset transformation, which we cast as optimization over data-generating joint probability distributions. We approach this class of problems through Wasserstein gradient flows in probability space, and derive practical and efficient particle-based methods for a flexible but well-behaved class of objective functions. Through various experiments, we show that this framework can be used to impose constraints on classification datasets, adapt them for transfer learning, or to re-purpose fixed or black-box models to classify, with high accuracy, previously unseen datasets.'
volume: 139 URL: https://proceedings.mlr.press/v139/alvarez-melis21a.html PDF: http://proceedings.mlr.press/v139/alvarez-melis21a/alvarez-melis21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-alvarez-melis21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: David family: Alvarez-Melis - given: Nicolò family: Fusi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 219-230 id: alvarez-melis21a issued: date-parts: - 2021 - 7 - 1 firstpage: 219 lastpage: 230 published: 2021-07-01 00:00:00 +0000 - title: 'Submodular Maximization subject to a Knapsack Constraint: Combinatorial Algorithms with Near-optimal Adaptive Complexity' abstract: 'The growing need to deal with massive instances motivates the design of algorithms balancing the quality of the solution with applicability. For the latter, an important measure is the \emph{adaptive complexity}, capturing the number of sequential rounds of parallel computation needed. In this work we obtain the first \emph{constant factor} approximation algorithm for non-monotone submodular maximization subject to a knapsack constraint with \emph{near-optimal} $O(\log n)$ adaptive complexity. Low adaptivity by itself, however, is not enough: one needs to account for the total number of function evaluations (or value queries) as well. Our algorithm asks $\tilde{O}(n^2)$ value queries, but can be modified to run with only $\tilde{O}(n)$ instead, while retaining a low adaptive complexity of $O(\log^2n)$. Besides the above improvement in adaptivity, this is also the first \emph{combinatorial} approach with sublinear adaptive complexity for the problem and yields algorithms comparable to the state-of-the-art even for the special cases of cardinality constraints or monotone objectives. Finally, we showcase our algorithms’ applicability on real-world datasets.' volume: 139 URL: https://proceedings.mlr.press/v139/amanatidis21a.html PDF: http://proceedings.mlr.press/v139/amanatidis21a/amanatidis21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-amanatidis21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Georgios family: Amanatidis - given: Federico family: Fusco - given: Philip family: Lazos - given: Stefano family: Leonardi - given: Alberto family: Marchetti-Spaccamela - given: Rebecca family: Reiffenhäuser editor: - given: Marina family: Meila - given: Tong family: Zhang page: 231-242 id: amanatidis21a issued: date-parts: - 2021 - 7 - 1 firstpage: 231 lastpage: 242 published: 2021-07-01 00:00:00 +0000 - title: 'Safe Reinforcement Learning with Linear Function Approximation' abstract: 'Safety in reinforcement learning has become increasingly important in recent years. Yet, existing solutions either fail to strictly avoid choosing unsafe actions, which may lead to catastrophic results in safety-critical systems, or fail to provide regret guarantees for settings where safety constraints need to be learned. In this paper, we address both problems by first modeling safety as an unknown linear cost function of states and actions, which must always fall below a certain threshold. We then present algorithms, termed SLUCB-QVI and RSLUCB-QVI, for episodic Markov decision processes (MDPs) with linear function approximation. 
We show that SLUCB-QVI and RSLUCB-QVI, while with \emph{no safety violation}, achieve a $\tilde{\mathcal{O}}\left(\kappa\sqrt{d^3H^3T}\right)$ regret, nearly matching that of state-of-the-art unsafe algorithms, where $H$ is the duration of each episode, $d$ is the dimension of the feature mapping, $\kappa$ is a constant characterizing the safety constraints, and $T$ is the total number of action plays. We further present numerical simulations that corroborate our theoretical findings.' volume: 139 URL: https://proceedings.mlr.press/v139/amani21a.html PDF: http://proceedings.mlr.press/v139/amani21a/amani21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-amani21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sanae family: Amani - given: Christos family: Thrampoulidis - given: Lin family: Yang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 243-253 id: amani21a issued: date-parts: - 2021 - 7 - 1 firstpage: 243 lastpage: 253 published: 2021-07-01 00:00:00 +0000 - title: 'Automatic variational inference with cascading flows' abstract: 'The automation of probabilistic reasoning is one of the primary aims of machine learning. Recently, the confluence of variational inference and deep learning has led to powerful and flexible automatic inference methods that can be trained by stochastic gradient descent. In particular, normalizing flows are highly parameterized deep models that can fit arbitrarily complex posterior densities. However, normalizing flows struggle in highly structured probabilistic programs as they need to relearn the forward-pass of the program. Automatic structured variational inference (ASVI) remedies this problem by constructing variational programs that embed the forward-pass. Here, we combine the flexibility of normalizing flows and the prior-embedding property of ASVI in a new family of variational programs, which we named cascading flows. A cascading flows program interposes a newly designed highway flow architecture in between the conditional distributions of the prior program such as to steer it toward the observed data. These programs can be constructed automatically from an input probabilistic program and can also be amortized automatically. We evaluate the performance of the new variational programs in a series of structured inference problems. We find that cascading flows have much higher performance than both normalizing flows and ASVI in a large set of structured inference problems.' volume: 139 URL: https://proceedings.mlr.press/v139/ambrogioni21a.html PDF: http://proceedings.mlr.press/v139/ambrogioni21a/ambrogioni21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ambrogioni21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Luca family: Ambrogioni - given: Gianluigi family: Silvestri - given: Marcel prefix: van family: Gerven editor: - given: Marina family: Meila - given: Tong family: Zhang page: 254-263 id: ambrogioni21a issued: date-parts: - 2021 - 7 - 1 firstpage: 254 lastpage: 263 published: 2021-07-01 00:00:00 +0000 - title: 'Sparse Bayesian Learning via Stepwise Regression' abstract: 'Sparse Bayesian Learning (SBL) is a powerful framework for attaining sparsity in probabilistic models. 
Herein, we propose a coordinate ascent algorithm for SBL termed Relevance Matching Pursuit (RMP) and show that, as its noise variance parameter goes to zero, RMP exhibits a surprising connection to Stepwise Regression. Further, we derive novel guarantees for Stepwise Regression algorithms, which also shed light on RMP. Our guarantees for Forward Regression improve on deterministic and probabilistic results for Orthogonal Matching Pursuit with noise. Our analysis of Backward Regression culminates in a bound on the residual of the optimal solution to the subset selection problem that, if satisfied, guarantees the optimality of the result. To our knowledge, this bound is the first that can be computed in polynomial time and depends chiefly on the smallest singular value of the matrix. We report numerical experiments using a variety of feature selection algorithms. Notably, RMP and its limiting variant are both efficient and maintain strong performance with correlated features.' volume: 139 URL: https://proceedings.mlr.press/v139/ament21a.html PDF: http://proceedings.mlr.press/v139/ament21a/ament21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ament21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sebastian E. family: Ament - given: Carla P. family: Gomes editor: - given: Marina family: Meila - given: Tong family: Zhang page: 264-274 id: ament21a issued: date-parts: - 2021 - 7 - 1 firstpage: 264 lastpage: 274 published: 2021-07-01 00:00:00 +0000 - title: 'Locally Persistent Exploration in Continuous Control Tasks with Sparse Rewards' abstract: 'A major challenge in reinforcement learning is the design of exploration strategies, especially for environments with sparse reward structures and continuous state and action spaces. Intuitively, if the reinforcement signal is very scarce, the agent should rely on some form of short-term memory in order to cover its environment efficiently. We propose a new exploration method, based on two intuitions: (1) the choice of the next exploratory action should depend not only on the (Markovian) state of the environment, but also on the agent’s trajectory so far, and (2) the agent should utilize a measure of spread in the state space to avoid getting stuck in a small region. Our method leverages concepts often used in statistical physics to provide explanations for the behavior of simplified (polymer) chains in order to generate persistent (locally self-avoiding) trajectories in state space. We discuss the theoretical properties of locally self-avoiding walks and their ability to provide a kind of short-term memory through a decaying temporal correlation within the trajectory. We provide empirical evaluations of our approach in a simulated 2D navigation task, as well as higher-dimensional MuJoCo continuous control locomotion tasks with sparse rewards.' 
volume: 139 URL: https://proceedings.mlr.press/v139/amin21a.html PDF: http://proceedings.mlr.press/v139/amin21a/amin21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-amin21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Susan family: Amin - given: Maziar family: Gomrokchi - given: Hossein family: Aboutalebi - given: Harsh family: Satija - given: Doina family: Precup editor: - given: Marina family: Meila - given: Tong family: Zhang page: 275-285 id: amin21a issued: date-parts: - 2021 - 7 - 1 firstpage: 275 lastpage: 285 published: 2021-07-01 00:00:00 +0000 - title: 'Preferential Temporal Difference Learning' abstract: 'Temporal-Difference (TD) learning is a general and very useful tool for estimating the value function of a given policy, which in turn is required to find good policies. Generally speaking, TD learning updates states whenever they are visited. When the agent lands in a state, its value can be used to compute the TD-error, which is then propagated to other states. However, it may be interesting, when computing updates, to take into account information other than whether a state is visited. For example, some states might be more important than others (such as states which are frequently seen in a successful trajectory). Or, some states might have unreliable value estimates (for example, due to partial observability or lack of data), making their values less desirable as targets. We propose an approach to re-weighting states used in TD updates, both when they are the input and when they provide the target for the update. We prove that our approach converges with linear function approximation and illustrate its desirable empirical behaviour compared to other TD-style methods.' volume: 139 URL: https://proceedings.mlr.press/v139/anand21a.html PDF: http://proceedings.mlr.press/v139/anand21a/anand21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-anand21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Nishanth family: Anand - given: Doina family: Precup editor: - given: Marina family: Meila - given: Tong family: Zhang page: 286-296 id: anand21a issued: date-parts: - 2021 - 7 - 1 firstpage: 286 lastpage: 296 published: 2021-07-01 00:00:00 +0000 - title: 'Unitary Branching Programs: Learnability and Lower Bounds' abstract: 'Bounded width branching programs are a formalism that can be used to capture the notion of non-uniform constant-space computation. In this work, we study a generalized version of bounded width branching programs where instructions are defined by unitary matrices of bounded dimension. We introduce a new learning framework for these branching programs that leverages a combination of local search techniques and gradient descent over Riemannian manifolds. We also show that gapped, read-once branching programs of bounded dimension can be learned with a polynomial number of queries in the presence of a teacher. Finally, we provide explicit near-quadratic size lower-bounds for bounded-dimension unitary branching programs, and exponential size lower-bounds for bounded-dimension read-once gapped unitary branching programs.
The first lower bound is proven using a combination of Neciporuk’s lower bound technique with classic results from algebraic geometry. The second lower bound is proven within the framework of communication complexity theory.' volume: 139 URL: https://proceedings.mlr.press/v139/andino21a.html PDF: http://proceedings.mlr.press/v139/andino21a/andino21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-andino21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Fidel Ernesto Diaz family: Andino - given: Maria family: Kokkou - given: Mateus family: De Oliveira Oliveira - given: Farhad family: Vadiee editor: - given: Marina family: Meila - given: Tong family: Zhang page: 297-306 id: andino21a issued: date-parts: - 2021 - 7 - 1 firstpage: 297 lastpage: 306 published: 2021-07-01 00:00:00 +0000 - title: 'The Logical Options Framework' abstract: 'Learning composable policies for environments with complex rules and tasks is a challenging problem. We introduce a hierarchical reinforcement learning framework called the Logical Options Framework (LOF) that learns policies that are satisfying, optimal, and composable. LOF efficiently learns policies that satisfy tasks by representing the task as an automaton and integrating it into learning and planning. We provide and prove conditions under which LOF will learn satisfying, optimal policies. And lastly, we show how LOF’s learned policies can be composed to satisfy unseen tasks with only 10-50 retraining steps on our benchmarks. We evaluate LOF on four tasks in discrete and continuous domains, including a 3D pick-and-place environment.' volume: 139 URL: https://proceedings.mlr.press/v139/araki21a.html PDF: http://proceedings.mlr.press/v139/araki21a/araki21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-araki21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Brandon family: Araki - given: Xiao family: Li - given: Kiran family: Vodrahalli - given: Jonathan family: Decastro - given: Micah family: Fry - given: Daniela family: Rus editor: - given: Marina family: Meila - given: Tong family: Zhang page: 307-317 id: araki21a issued: date-parts: - 2021 - 7 - 1 firstpage: 307 lastpage: 317 published: 2021-07-01 00:00:00 +0000 - title: 'Annealed Flow Transport Monte Carlo' abstract: 'Annealed Importance Sampling (AIS) and its Sequential Monte Carlo (SMC) extensions are state-of-the-art methods for estimating normalizing constants of probability distributions. We propose here a novel Monte Carlo algorithm, Annealed Flow Transport (AFT), that builds upon AIS and SMC and combines them with normalizing flows (NFs) for improved performance. This method transports a set of particles using not only importance sampling (IS), Markov chain Monte Carlo (MCMC) and resampling steps - as in SMC, but also relies on NFs which are learned sequentially to push particles towards the successive annealed targets. We provide limit theorems for the resulting Monte Carlo estimates of the normalizing constant and expectations with respect to the target distribution. Additionally, we show that a continuous-time scaling limit of the population version of AFT is given by a Feynman–Kac measure which simplifies to the law of a controlled diffusion for expressive NFs. 
We demonstrate experimentally the benefits and limitations of our methodology on a variety of applications.' volume: 139 URL: https://proceedings.mlr.press/v139/arbel21a.html PDF: http://proceedings.mlr.press/v139/arbel21a/arbel21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-arbel21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Michael family: Arbel - given: Alex family: Matthews - given: Arnaud family: Doucet editor: - given: Marina family: Meila - given: Tong family: Zhang page: 318-330 id: arbel21a issued: date-parts: - 2021 - 7 - 1 firstpage: 318 lastpage: 330 published: 2021-07-01 00:00:00 +0000 - title: 'Permutation Weighting' abstract: 'A commonly applied approach for estimating causal effects from observational data is to apply weights which render treatments independent of observed pre-treatment covariates. Recently, emphasis has been placed on deriving balancing weights which explicitly target this independence condition. In this work we introduce permutation weighting, a method for estimating balancing weights using a standard binary classifier (regardless of cardinality of treatment). A large class of probabilistic classifiers may be used in this method; the choice of loss for the classifier implies the particular definition of balance. We bound bias and variance in terms of the excess risk of the classifier, show that these disappear asymptotically, and demonstrate that our classification problem directly minimizes imbalance. Additionally, hyper-parameter tuning and model selection can be performed with standard cross-validation methods. Empirical evaluations indicate that permutation weighting provides favorable performance in comparison to existing methods.' volume: 139 URL: https://proceedings.mlr.press/v139/arbour21a.html PDF: http://proceedings.mlr.press/v139/arbour21a/arbour21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-arbour21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: David family: Arbour - given: Drew family: Dimmery - given: Arjun family: Sondhi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 331-341 id: arbour21a issued: date-parts: - 2021 - 7 - 1 firstpage: 331 lastpage: 341 published: 2021-07-01 00:00:00 +0000 - title: 'Analyzing the tree-layer structure of Deep Forests' abstract: 'Random forests on the one hand, and neural networks on the other hand, have met great success in the machine learning community for their predictive performance. Combinations of both have been proposed in the literature, notably leading to the so-called deep forests (DF) (Zhou & Feng, 2019). In this paper, our aim is not to benchmark DF performances but instead to investigate their underlying mechanisms. Additionally, the DF architecture can generally be simplified into simpler, computationally efficient shallow forest networks. Despite some instability, the latter may outperform standard predictive tree-based methods. We exhibit a theoretical framework in which a shallow tree network is shown to enhance the performance of classical decision trees. In such a setting, we provide tight theoretical lower and upper bounds on its excess risk.
These theoretical results show the interest of tree-network architectures for well-structured data provided that the first layer, acting as a data encoder, is rich enough.' volume: 139 URL: https://proceedings.mlr.press/v139/arnould21a.html PDF: http://proceedings.mlr.press/v139/arnould21a/arnould21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-arnould21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ludovic family: Arnould - given: Claire family: Boyer - given: Erwan family: Scornet editor: - given: Marina family: Meila - given: Tong family: Zhang page: 342-350 id: arnould21a issued: date-parts: - 2021 - 7 - 1 firstpage: 342 lastpage: 350 published: 2021-07-01 00:00:00 +0000 - title: 'Dropout: Explicit Forms and Capacity Control' abstract: 'We investigate the capacity control provided by dropout in various machine learning problems. First, we study dropout for matrix completion, where it induces a distribution-dependent regularizer that equals the weighted trace-norm of the product of the factors. In deep learning, we show that the distribution-dependent regularizer due to dropout directly controls the Rademacher complexity of the underlying class of deep neural networks. These developments enable us to give concrete generalization error bounds for the dropout algorithm in both matrix completion as well as training deep neural networks.' volume: 139 URL: https://proceedings.mlr.press/v139/arora21a.html PDF: http://proceedings.mlr.press/v139/arora21a/arora21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-arora21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Raman family: Arora - given: Peter family: Bartlett - given: Poorya family: Mianjy - given: Nathan family: Srebro editor: - given: Marina family: Meila - given: Tong family: Zhang page: 351-361 id: arora21a issued: date-parts: - 2021 - 7 - 1 firstpage: 351 lastpage: 361 published: 2021-07-01 00:00:00 +0000 - title: 'Tighter Bounds on the Log Marginal Likelihood of Gaussian Process Regression Using Conjugate Gradients' abstract: 'We propose a lower bound on the log marginal likelihood of Gaussian process regression models that can be computed without matrix factorisation of the full kernel matrix. We show that approximate maximum likelihood learning of model parameters by maximising our lower bound retains many benefits of the sparse variational approach while reducing the bias introduced into hyperparameter learning. The basis of our bound is a more careful analysis of the log-determinant term appearing in the log marginal likelihood, as well as using the method of conjugate gradients to derive tight lower bounds on the term involving a quadratic form. Our approach is a step forward in unifying methods relying on lower bound maximisation (e.g. variational methods) and iterative approaches based on conjugate gradients for training Gaussian processes. In experiments, we show improved predictive performance with our model for a comparable amount of training time compared to other conjugate gradient based approaches.' 
volume: 139 URL: https://proceedings.mlr.press/v139/artemev21a.html PDF: http://proceedings.mlr.press/v139/artemev21a/artemev21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-artemev21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Artem family: Artemev - given: David R. family: Burt - given: Mark prefix: van der family: Wilk editor: - given: Marina family: Meila - given: Tong family: Zhang page: 362-372 id: artemev21a issued: date-parts: - 2021 - 7 - 1 firstpage: 362 lastpage: 372 published: 2021-07-01 00:00:00 +0000 - title: 'Deciding What to Learn: A Rate-Distortion Approach' abstract: 'Agents that learn to select optimal actions represent a prominent focus of the sequential decision-making literature. In the face of a complex environment or constraints on time and resources, however, aiming to synthesize such an optimal policy can become infeasible. These scenarios give rise to an important trade-off between the information an agent must acquire to learn and the sub-optimality of the resulting policy. While an agent designer has a preference for how this trade-off is resolved, existing approaches further require that the designer translate these preferences into a fixed learning target for the agent. In this work, leveraging rate-distortion theory, we automate this process such that the designer need only express their preferences via a single hyperparameter and the agent is endowed with the ability to compute its own learning targets that best achieve the desired trade-off. We establish a general bound on expected discounted regret for an agent that decides what to learn in this manner along with computational experiments that illustrate the expressiveness of designer preferences and even show improvements over Thompson sampling in identifying an optimal policy.' volume: 139 URL: https://proceedings.mlr.press/v139/arumugam21a.html PDF: http://proceedings.mlr.press/v139/arumugam21a/arumugam21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-arumugam21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dilip family: Arumugam - given: Benjamin family: Van Roy editor: - given: Marina family: Meila - given: Tong family: Zhang page: 373-382 id: arumugam21a issued: date-parts: - 2021 - 7 - 1 firstpage: 373 lastpage: 382 published: 2021-07-01 00:00:00 +0000 - title: 'Private Adaptive Gradient Methods for Convex Optimization' abstract: 'We study adaptive methods for differentially private convex optimization, proposing and analyzing differentially private variants of a Stochastic Gradient Descent (SGD) algorithm with adaptive stepsizes, as well as the AdaGrad algorithm. We provide upper bounds on the regret of both algorithms and show that the bounds are (worst-case) optimal. As a consequence of our development, we show that our private versions of AdaGrad outperform adaptive SGD, which in turn outperforms traditional SGD in scenarios with non-isotropic gradients where (non-private) Adagrad provably outperforms SGD. 
The major challenge is that the isotropic noise typically added for privacy dominates the signal in gradient geometry for high-dimensional problems; approaches to this that effectively optimize over lower-dimensional subspaces simply ignore the actual problems that varying gradient geometries introduce. In contrast, we study non-isotropic clipping and noise addition, developing a principled theoretical approach; the consequent procedures also enjoy significantly stronger empirical performance than prior approaches.' volume: 139 URL: https://proceedings.mlr.press/v139/asi21a.html PDF: http://proceedings.mlr.press/v139/asi21a/asi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-asi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hilal family: Asi - given: John family: Duchi - given: Alireza family: Fallah - given: Omid family: Javidbakht - given: Kunal family: Talwar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 383-392 id: asi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 383 lastpage: 392 published: 2021-07-01 00:00:00 +0000 - title: 'Private Stochastic Convex Optimization: Optimal Rates in L1 Geometry' abstract: 'Stochastic convex optimization over an $\ell_1$-bounded domain is ubiquitous in machine learning applications such as LASSO but remains poorly understood when learning with differential privacy. We show that, up to logarithmic factors the optimal excess population loss of any $(\epsilon,\delta)$-differentially private optimizer is $\sqrt{\log(d)/n} + \sqrt{d}/\epsilon n.$ The upper bound is based on a new algorithm that combines the iterative localization approach of Feldman et al. (2020) with a new analysis of private regularized mirror descent. It applies to $\ell_p$ bounded domains for $p\in [1,2]$ and queries at most $n^{3/2}$ gradients improving over the best previously known algorithm for the $\ell_2$ case which needs $n^2$ gradients. Further, we show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/\epsilon n)^{2/3}.$ This bound is achieved by a new variance-reduced version of the Frank-Wolfe algorithm that requires just a single pass over the data. We also show that the lower bound in this case is the minimum of the two rates mentioned above.' volume: 139 URL: https://proceedings.mlr.press/v139/asi21b.html PDF: http://proceedings.mlr.press/v139/asi21b/asi21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-asi21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hilal family: Asi - given: Vitaly family: Feldman - given: Tomer family: Koren - given: Kunal family: Talwar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 393-403 id: asi21b issued: date-parts: - 2021 - 7 - 1 firstpage: 393 lastpage: 403 published: 2021-07-01 00:00:00 +0000 - title: 'Combinatorial Blocking Bandits with Stochastic Delays' abstract: 'Recent work has considered natural variations of the {\em multi-armed bandit} problem, where the reward distribution of each arm is a special function of the time passed since its last pulling. 
In this direction, a simple (yet widely applicable) model is that of {\em blocking bandits}, where an arm becomes unavailable for a deterministic number of rounds after each play. In this work, we extend the above model in two directions: (i) We consider the general combinatorial setting where more than one arm can be played at each round, subject to feasibility constraints. (ii) We allow the blocking time of each arm to be stochastic. We first study the computational/unconditional hardness of the above setting and identify the necessary conditions for the problem to become tractable (even in an approximate sense). Based on these conditions, we provide a tight analysis of the approximation guarantee of a natural greedy heuristic that always plays the maximum expected reward feasible subset among the available (non-blocked) arms. When the arms’ expected rewards are unknown, we adapt the above heuristic into a bandit algorithm, based on UCB, for which we provide sublinear (approximate) regret guarantees, matching the theoretical lower bounds in the limiting case of absence of delays.' volume: 139 URL: https://proceedings.mlr.press/v139/atsidakou21a.html PDF: http://proceedings.mlr.press/v139/atsidakou21a/atsidakou21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-atsidakou21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Alexia family: Atsidakou - given: Orestis family: Papadigenopoulos - given: Soumya family: Basu - given: Constantine family: Caramanis - given: Sanjay family: Shakkottai editor: - given: Marina family: Meila - given: Tong family: Zhang page: 404-413 id: atsidakou21a issued: date-parts: - 2021 - 7 - 1 firstpage: 404 lastpage: 413 published: 2021-07-01 00:00:00 +0000 - title: 'Dichotomous Optimistic Search to Quantify Human Perception' abstract: 'In this paper we address a variant of the continuous multi-armed bandits problem, called the threshold estimation problem, which is at the heart of many psychometric experiments. Here, the objective is to estimate the sensitivity threshold for an unknown psychometric function Psi, which is assumed to be non-decreasing and continuous. Our algorithm, Dichotomous Optimistic Search (DOS), efficiently solves this task by taking inspiration from hierarchical multi-armed bandits and Black-box optimization. Compared to previous approaches, DOS is model-free and only makes minimal assumptions on Psi smoothness, while having strong theoretical guarantees that compare favorably to recent methods from both Psychophysics and Global Optimization. We also empirically evaluate DOS and show that it significantly outperforms these methods, both in experiments that mimic the conduct of a psychometric experiment, and in tests with large pull budgets that illustrate the faster convergence rate.'
volume: 139 URL: https://proceedings.mlr.press/v139/audiffren21a.html PDF: http://proceedings.mlr.press/v139/audiffren21a/audiffren21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-audiffren21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Julien family: Audiffren editor: - given: Marina family: Meila - given: Tong family: Zhang page: 414-424 id: audiffren21a issued: date-parts: - 2021 - 7 - 1 firstpage: 414 lastpage: 424 published: 2021-07-01 00:00:00 +0000 - title: 'Federated Learning under Arbitrary Communication Patterns' abstract: 'Federated Learning is a distributed learning setting where the goal is to train a centralized model with training data distributed over a large number of heterogeneous clients, each with unreliable and relatively slow network connections. A common optimization approach used in federated learning is based on the idea of local SGD: each client runs some number of SGD steps locally and then the updated local models are averaged to form the updated global model on the coordinating server. In this paper, we investigate the performance of an asynchronous version of local SGD wherein the clients can communicate with the server at arbitrary time intervals. Our main result shows that for smooth strongly convex and smooth nonconvex functions we achieve convergence rates that match the synchronous version that requires all clients to communicate simultaneously.' volume: 139 URL: https://proceedings.mlr.press/v139/avdiukhin21a.html PDF: http://proceedings.mlr.press/v139/avdiukhin21a/avdiukhin21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-avdiukhin21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dmitrii family: Avdiukhin - given: Shiva family: Kasiviswanathan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 425-435 id: avdiukhin21a issued: date-parts: - 2021 - 7 - 1 firstpage: 425 lastpage: 435 published: 2021-07-01 00:00:00 +0000 - title: 'Asynchronous Distributed Learning : Adapting to Gradient Delays without Prior Knowledge' abstract: 'We consider stochastic convex optimization problems, where several machines act asynchronously in parallel while sharing a common memory. We propose a robust training method for the constrained setting and derive non asymptotic convergence guarantees that do not depend on prior knowledge of update delays, objective smoothness, and gradient variance. Conversely, existing methods for this setting crucially rely on this prior knowledge, which render them unsuitable for essentially all shared-resources computational environments, such as clouds and data centers. Concretely, existing approaches are unable to accommodate changes in the delays which result from dynamic allocation of the machines, while our method implicitly adapts to such changes.' 
volume: 139 URL: https://proceedings.mlr.press/v139/aviv21a.html PDF: http://proceedings.mlr.press/v139/aviv21a/aviv21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-aviv21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Rotem Zamir family: Aviv - given: Ido family: Hakimi - given: Assaf family: Schuster - given: Kfir Yehuda family: Levy editor: - given: Marina family: Meila - given: Tong family: Zhang page: 436-445 id: aviv21a issued: date-parts: - 2021 - 7 - 1 firstpage: 436 lastpage: 445 published: 2021-07-01 00:00:00 +0000 - title: 'Decomposable Submodular Function Minimization via Maximum Flow' abstract: 'This paper bridges discrete and continuous optimization approaches for decomposable submodular function minimization, in both the standard and parametric settings. We provide improved running times for this problem by reducing it to a number of calls to a maximum flow oracle. When each function in the decomposition acts on O(1) elements of the ground set V and is polynomially bounded, our running time is up to polylogarithmic factors equal to that of solving maximum flow in a sparse graph with O(|V|) vertices and polynomial integral capacities. We achieve this by providing a simple iterative method which can optimize to high precision any convex function defined on the submodular base polytope, provided we can efficiently minimize it on the base polytope corresponding to the cut function of a certain graph that we construct. We solve this minimization problem by lifting the solutions of a parametric cut problem, which we obtain via a new efficient combinatorial reduction to maximum flow. This reduction is of independent interest and implies some previously unknown bounds for the parametric minimum s,t-cut problem in multiple settings.' volume: 139 URL: https://proceedings.mlr.press/v139/axiotis21a.html PDF: http://proceedings.mlr.press/v139/axiotis21a/axiotis21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-axiotis21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kyriakos family: Axiotis - given: Adam family: Karczmarz - given: Anish family: Mukherjee - given: Piotr family: Sankowski - given: Adrian family: Vladu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 446-456 id: axiotis21a issued: date-parts: - 2021 - 7 - 1 firstpage: 446 lastpage: 456 published: 2021-07-01 00:00:00 +0000 - title: 'Differentially Private Query Release Through Adaptive Projection' abstract: 'We propose, implement, and evaluate a new algorithm for releasing answers to very large numbers of statistical queries like k-way marginals, subject to differential privacy. Our algorithm makes adaptive use of a continuous relaxation of the Projection Mechanism, which answers queries on the private dataset using simple perturbation, and then attempts to find the synthetic dataset that most closely matches the noisy answers. We use a continuous relaxation of the synthetic dataset domain which makes the projection loss differentiable, and allows us to use efficient ML optimization techniques and tooling. 
Rather than answering all queries up front, we make judicious use of our privacy budget by iteratively finding queries for which our (relaxed) synthetic data has high error, and then repeating the projection. Randomized rounding allows us to obtain synthetic data in the original schema. We perform experimental evaluations across a range of parameters and datasets, and find that our method outperforms existing algorithms on large query classes.' volume: 139 URL: https://proceedings.mlr.press/v139/aydore21a.html PDF: http://proceedings.mlr.press/v139/aydore21a/aydore21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-aydore21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sergul family: Aydore - given: William family: Brown - given: Michael family: Kearns - given: Krishnaram family: Kenthapadi - given: Luca family: Melis - given: Aaron family: Roth - given: Ankit A. family: Siva editor: - given: Marina family: Meila - given: Tong family: Zhang page: 457-467 id: aydore21a issued: date-parts: - 2021 - 7 - 1 firstpage: 457 lastpage: 467 published: 2021-07-01 00:00:00 +0000 - title: 'On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent' abstract: 'Recent work has highlighted the role of initialization scale in determining the structure of the solutions that gradient methods converge to. In particular, it was shown that large initialization leads to the neural tangent kernel regime solution, whereas small initialization leads to so called “rich regimes”. However, the initialization structure is richer than the overall scale alone and involves relative magnitudes of different weights and layers in the network. Here we show that these relative scales, which we refer to as initialization shape, play an important role in determining the learned model. We develop a novel technique for deriving the inductive bias of gradient-flow and use it to obtain closed-form implicit regularizers for multiple cases of interest.' volume: 139 URL: https://proceedings.mlr.press/v139/azulay21a.html PDF: http://proceedings.mlr.press/v139/azulay21a/azulay21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-azulay21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shahar family: Azulay - given: Edward family: Moroshko - given: Mor Shpigel family: Nacson - given: Blake E family: Woodworth - given: Nathan family: Srebro - given: Amir family: Globerson - given: Daniel family: Soudry editor: - given: Marina family: Meila - given: Tong family: Zhang page: 468-477 id: azulay21a issued: date-parts: - 2021 - 7 - 1 firstpage: 468 lastpage: 477 published: 2021-07-01 00:00:00 +0000 - title: 'On-Off Center-Surround Receptive Fields for Accurate and Robust Image Classification' abstract: 'Robustness to variations in lighting conditions is a key objective for any deep vision system. To this end, our paper extends the receptive field of convolutional neural networks with two residual components, ubiquitous in the visual processing system of vertebrates: On-center and off-center pathways, with an excitatory center and inhibitory surround; OOCS for short. 
The On-center pathway is excited by the presence of a light stimulus in its center, but not in its surround, whereas the Off-center pathway is excited by the absence of a light stimulus in its center, but not in its surround. We design OOCS pathways via a difference of Gaussians, with their variance computed analytically from the size of the receptive fields. OOCS pathways complement each other in their response to light stimuli, ensuring this way a strong edge-detection capability, and as a result an accurate and robust inference under challenging lighting conditions. We provide extensive empirical evidence showing that networks supplied with OOCS pathways gain accuracy and illumination-robustness from the novel edge representation, compared to other baselines.' volume: 139 URL: https://proceedings.mlr.press/v139/babaiee21a.html PDF: http://proceedings.mlr.press/v139/babaiee21a/babaiee21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-babaiee21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zahra family: Babaiee - given: Ramin family: Hasani - given: Mathias family: Lechner - given: Daniela family: Rus - given: Radu family: Grosu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 478-489 id: babaiee21a issued: date-parts: - 2021 - 7 - 1 firstpage: 478 lastpage: 489 published: 2021-07-01 00:00:00 +0000 - title: 'Uniform Convergence, Adversarial Spheres and a Simple Remedy' abstract: 'Previous work has cast doubt on the general framework of uniform convergence and its ability to explain generalization in neural networks. By considering a specific dataset, it was observed that a neural network completely misclassifies a projection of the training data (adversarial set), rendering any existing generalization bound based on uniform convergence vacuous. We provide an extensive theoretical investigation of the previously studied data setting through the lens of infinitely-wide models. We prove that the Neural Tangent Kernel (NTK) also suffers from the same phenomenon and we uncover its origin. We highlight the important role of the output bias and show theoretically as well as empirically how a sensible choice completely mitigates the problem. We identify sharp phase transitions in the accuracy on the adversarial set and study its dependency on the training sample size. As a result, we are able to characterize critical sample sizes beyond which the effect disappears. Moreover, we study decompositions of a neural network into a clean and noisy part by considering its canonical decomposition into its different eigenfunctions and show empirically that for too small bias the adversarial phenomenon still persists.' 
volume: 139 URL: https://proceedings.mlr.press/v139/bachmann21a.html PDF: http://proceedings.mlr.press/v139/bachmann21a/bachmann21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bachmann21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Gregor family: Bachmann - given: Seyed-Mohsen family: Moosavi-Dezfooli - given: Thomas family: Hofmann editor: - given: Marina family: Meila - given: Tong family: Zhang page: 490-499 id: bachmann21a issued: date-parts: - 2021 - 7 - 1 firstpage: 490 lastpage: 499 published: 2021-07-01 00:00:00 +0000 - title: 'Faster Kernel Matrix Algebra via Density Estimation' abstract: 'We study fast algorithms for computing basic properties of an n x n positive semidefinite kernel matrix K corresponding to n points x_1,...,x_n in R^d. In particular, we consider estimating the sum of kernel matrix entries, along with its top eigenvalue and eigenvector. These are some of the most basic problems defined over kernel matrices. We show that the sum of matrix entries can be estimated up to a multiplicative factor of 1+\epsilon in time sublinear in n and linear in d for many popular kernel functions, including the Gaussian, exponential, and rational quadratic kernels. For these kernels, we also show that the top eigenvalue (and a witnessing approximate eigenvector) can be approximated to a multiplicative factor of 1+\epsilon in time sub-quadratic in n and linear in d. Our algorithms represent significant advances in the best known runtimes for these problems. They leverage the positive definiteness of the kernel matrix, along with a recent line of work on efficient kernel density estimation.' volume: 139 URL: https://proceedings.mlr.press/v139/backurs21a.html PDF: http://proceedings.mlr.press/v139/backurs21a/backurs21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-backurs21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Arturs family: Backurs - given: Piotr family: Indyk - given: Cameron family: Musco - given: Tal family: Wagner editor: - given: Marina family: Meila - given: Tong family: Zhang page: 500-510 id: backurs21a issued: date-parts: - 2021 - 7 - 1 firstpage: 500 lastpage: 510 published: 2021-07-01 00:00:00 +0000 - title: 'Robust Reinforcement Learning using Least Squares Policy Iteration with Provable Performance Guarantees' abstract: 'This paper addresses the problem of model-free reinforcement learning for Robust Markov Decision Process (RMDP) with large state spaces. The goal of the RMDPs framework is to find a policy that is robust against the parameter uncertainties due to the mismatch between the simulator model and real-world settings. We first propose the Robust Least Squares Policy Evaluation algorithm, which is a multi-step online model-free learning algorithm for policy evaluation. We prove the convergence of this algorithm using stochastic approximation techniques. We then propose the Robust Least Squares Policy Iteration (RLSPI) algorithm for learning the optimal robust policy. We also give a general weighted Euclidean norm bound on the error (closeness to optimality) of the resulting policy. Finally, we demonstrate the performance of our RLSPI algorithm on some benchmark problems from OpenAI Gym.' 
volume: 139 URL: https://proceedings.mlr.press/v139/badrinath21a.html PDF: http://proceedings.mlr.press/v139/badrinath21a/badrinath21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-badrinath21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kishan Panaganti family: Badrinath - given: Dileep family: Kalathil editor: - given: Marina family: Meila - given: Tong family: Zhang page: 511-520 id: badrinath21a issued: date-parts: - 2021 - 7 - 1 firstpage: 511 lastpage: 520 published: 2021-07-01 00:00:00 +0000 - title: 'Skill Discovery for Exploration and Planning using Deep Skill Graphs' abstract: 'We introduce a new skill-discovery algorithm that builds a discrete graph representation of large continuous MDPs, where nodes correspond to skill subgoals and the edges to skill policies. The agent constructs this graph during an unsupervised training phase where it interleaves discovering skills and planning using them to gain coverage over ever-increasing portions of the state-space. Given a novel goal at test time, the agent plans with the acquired skill graph to reach a nearby state, then switches to learning to reach the goal. We show that the resulting algorithm, Deep Skill Graphs, outperforms both flat and existing hierarchical reinforcement learning methods on four difficult continuous control tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/bagaria21a.html PDF: http://proceedings.mlr.press/v139/bagaria21a/bagaria21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bagaria21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Akhil family: Bagaria - given: Jason K family: Senthil - given: George family: Konidaris editor: - given: Marina family: Meila - given: Tong family: Zhang page: 521-531 id: bagaria21a issued: date-parts: - 2021 - 7 - 1 firstpage: 521 lastpage: 531 published: 2021-07-01 00:00:00 +0000 - title: 'Locally Adaptive Label Smoothing Improves Predictive Churn' abstract: 'Training modern neural networks is an inherently noisy process that can lead to high \emph{prediction churn}– disagreements between re-trainings of the same model due to factors such as randomization in the parameter initialization and mini-batches– even when the trained models all attain similar accuracies. Such prediction churn can be very undesirable in practice. In this paper, we present several baselines for reducing churn and show that training on soft labels obtained by adaptively smoothing each example’s label based on the example’s neighboring labels often outperforms the baselines on churn while improving accuracy on a variety of benchmark classification tasks and model architectures.' 
volume: 139 URL: https://proceedings.mlr.press/v139/bahri21a.html PDF: http://proceedings.mlr.press/v139/bahri21a/bahri21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bahri21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dara family: Bahri - given: Heinrich family: Jiang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 532-542 id: bahri21a issued: date-parts: - 2021 - 7 - 1 firstpage: 532 lastpage: 542 published: 2021-07-01 00:00:00 +0000 - title: 'How Important is the Train-Validation Split in Meta-Learning?' abstract: 'Meta-learning aims to perform fast adaptation on a new task through learning a “prior” from multiple existing tasks. A common practice in meta-learning is to perform a train-validation split (\emph{train-val method}) where the prior adapts to the task on one split of the data, and the resulting predictor is evaluated on another split. Despite its prevalence, the importance of the train-validation split is not well understood either in theory or in practice, particularly in comparison to the more direct \emph{train-train method}, which uses all the per-task data for both training and evaluation. We provide a detailed theoretical study on whether and when the train-validation split is helpful in the linear centroid meta-learning problem. In the agnostic case, we show that the expected loss of the train-val method is minimized at the optimal prior for meta testing, and this is not the case for the train-train method in general without structural assumptions on the data. In contrast, in the realizable case where the data are generated from linear models, we show that both the train-val and train-train losses are minimized at the optimal prior in expectation. Further, perhaps surprisingly, our main result shows that the train-train method achieves a \emph{strictly better} excess loss in this realizable case, even when the regularization parameter and split ratio are optimally tuned for both methods. Our results highlight that sample splitting may not always be preferable, especially when the data is realizable by the model. We validate our theories by experimentally showing that the train-train method can indeed outperform the train-val method, on both simulations and real meta-learning tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/bai21a.html PDF: http://proceedings.mlr.press/v139/bai21a/bai21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bai21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yu family: Bai - given: Minshuo family: Chen - given: Pan family: Zhou - given: Tuo family: Zhao - given: Jason family: Lee - given: Sham family: Kakade - given: Huan family: Wang - given: Caiming family: Xiong editor: - given: Marina family: Meila - given: Tong family: Zhang page: 543-553 id: bai21a issued: date-parts: - 2021 - 7 - 1 firstpage: 543 lastpage: 553 published: 2021-07-01 00:00:00 +0000 - title: 'Stabilizing Equilibrium Models by Jacobian Regularization' abstract: 'Deep equilibrium networks (DEQs) are a new class of models that eschews traditional depth in favor of finding the fixed point of a single non-linear layer. 
These models have been shown to achieve performance competitive with the state-of-the-art deep networks while using significantly less memory. Yet they are also slower, brittle to architectural choices, and introduce potential instability to the model. In this paper, we propose a regularization scheme for DEQ models that explicitly regularizes the Jacobian of the fixed-point update equations to stabilize the learning of equilibrium models. We show that this regularization adds only minimal computational cost, significantly stabilizes the fixed-point convergence in both forward and backward passes, and scales well to high-dimensional, realistic domains (e.g., WikiText-103 language modeling and ImageNet classification). Using this method, we demonstrate, for the first time, an implicit-depth model that runs with approximately the same speed and level of performance as popular conventional deep networks such as ResNet-101, while still maintaining the constant memory footprint and architectural simplicity of DEQs. Code is available https://github.com/locuslab/deq.' volume: 139 URL: https://proceedings.mlr.press/v139/bai21b.html PDF: http://proceedings.mlr.press/v139/bai21b/bai21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bai21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shaojie family: Bai - given: Vladlen family: Koltun - given: Zico family: Kolter editor: - given: Marina family: Meila - given: Tong family: Zhang page: 554-565 id: bai21b issued: date-parts: - 2021 - 7 - 1 firstpage: 554 lastpage: 565 published: 2021-07-01 00:00:00 +0000 - title: 'Don’t Just Blame Over-parametrization for Over-confidence: Theoretical Analysis of Calibration in Binary Classification' abstract: 'Modern machine learning models with high accuracy are often miscalibrated—the predicted top probability does not reflect the actual accuracy, and tends to be \emph{over-confident}. It is commonly believed that such over-confidence is mainly due to \emph{over-parametrization}, in particular when the model is large enough to memorize the training data and maximize the confidence. In this paper, we show theoretically that over-parametrization is not the only reason for over-confidence. We prove that \emph{logistic regression is inherently over-confident}, in the realizable, under-parametrized setting where the data is generated from the logistic model, and the sample size is much larger than the number of parameters. Further, this over-confidence happens for general well-specified binary classification problems as long as the activation is symmetric and concave on the positive part. Perhaps surprisingly, we also show that over-confidence is not always the case—there exists another activation function (and a suitable loss function) under which the learned classifier is \emph{under-confident} at some probability values. Overall, our theory provides a precise characterization of calibration in realizable binary classification, which we verify on simulations and real data experiments.' 
volume: 139 URL: https://proceedings.mlr.press/v139/bai21c.html PDF: http://proceedings.mlr.press/v139/bai21c/bai21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bai21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yu family: Bai - given: Song family: Mei - given: Huan family: Wang - given: Caiming family: Xiong editor: - given: Marina family: Meila - given: Tong family: Zhang page: 566-576 id: bai21c issued: date-parts: - 2021 - 7 - 1 firstpage: 566 lastpage: 576 published: 2021-07-01 00:00:00 +0000 - title: 'Principled Exploration via Optimistic Bootstrapping and Backward Induction' abstract: 'One principled approach for provably efficient exploration is incorporating the upper confidence bound (UCB) into the value function as a bonus. However, UCB is specified to deal with linear and tabular settings and is incompatible with Deep Reinforcement Learning (DRL). In this paper, we propose a principled exploration method for DRL through Optimistic Bootstrapping and Backward Induction (OB2I). OB2I constructs a general-purpose UCB-bonus through non-parametric bootstrap in DRL. The UCB-bonus estimates the epistemic uncertainty of state-action pairs for optimistic exploration. We build theoretical connections between the proposed UCB-bonus and the LSVI-UCB in the linear setting. We propagate future uncertainty in a time-consistent manner through episodic backward update, which exploits the theoretical advantage and empirically improves the sample-efficiency. Our experiments in MNIST maze and Atari suite suggest that OB2I outperforms several state-of-the-art exploration approaches.' volume: 139 URL: https://proceedings.mlr.press/v139/bai21d.html PDF: http://proceedings.mlr.press/v139/bai21d/bai21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bai21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chenjia family: Bai - given: Lingxiao family: Wang - given: Lei family: Han - given: Jianye family: Hao - given: Animesh family: Garg - given: Peng family: Liu - given: Zhaoran family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 577-587 id: bai21d issued: date-parts: - 2021 - 7 - 1 firstpage: 577 lastpage: 587 published: 2021-07-01 00:00:00 +0000 - title: 'GLSearch: Maximum Common Subgraph Detection via Learning to Search' abstract: 'Detecting the Maximum Common Subgraph (MCS) between two input graphs is fundamental for applications in drug synthesis, malware detection, cloud computing, etc. However, MCS computation is NP-hard, and state-of-the-art MCS solvers rely on heuristic search algorithms which in practice cannot find a good solution for large graph pairs given a limited computation budget. We propose GLSearch, a Graph Neural Network (GNN) based learning to search model. Our model is built upon the branch and bound algorithm, which selects one pair of nodes from the two input graphs to expand at a time. We propose a novel GNN-based Deep Q-Network (DQN) to select the node pair, making the search process much faster. Experiments on synthetic and real-world graph pairs demonstrate that our model learns a search strategy that is able to detect significantly larger common subgraphs than existing MCS solvers given the same computation budget. 
GLSearch can be potentially extended to solve many other combinatorial problems with constraints on graphs.' volume: 139 URL: https://proceedings.mlr.press/v139/bai21e.html PDF: http://proceedings.mlr.press/v139/bai21e/bai21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bai21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yunsheng family: Bai - given: Derek family: Xu - given: Yizhou family: Sun - given: Wei family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 588-598 id: bai21e issued: date-parts: - 2021 - 7 - 1 firstpage: 588 lastpage: 598 published: 2021-07-01 00:00:00 +0000 - title: 'Breaking the Limits of Message Passing Graph Neural Networks' abstract: 'Since Message Passing (Graph) Neural Networks (MPNNs) have linear complexity with respect to the number of nodes when applied to sparse graphs, they have been widely implemented and still raise a lot of interest even though their theoretical expressive power is limited to the first-order Weisfeiler-Lehman test (1-WL). In this paper, we show that if the graph convolution supports are designed in the spectral domain by a non-linear custom function of eigenvalues and masked with an arbitrarily large receptive field, the MPNN is theoretically more powerful than the 1-WL test and experimentally as powerful as existing 3-WL models, while remaining spatially localized. Moreover, by designing custom filter functions, outputs can have various frequency components that allow the convolution process to learn different relationships between a given input graph signal and its associated properties. So far, the best 3-WL equivalent graph neural networks have a computational complexity in $\mathcal{O}(n^3)$ with memory usage in $\mathcal{O}(n^2)$, consider a non-local update mechanism and do not provide the spectral richness of the output profile. The proposed method overcomes all these aforementioned problems and reaches state-of-the-art results in many downstream tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/balcilar21a.html PDF: http://proceedings.mlr.press/v139/balcilar21a/balcilar21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-balcilar21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Muhammet family: Balcilar - given: Pierre family: Heroux - given: Benoit family: Gauzere - given: Pascal family: Vasseur - given: Sebastien family: Adam - given: Paul family: Honeine editor: - given: Marina family: Meila - given: Tong family: Zhang page: 599-608 id: balcilar21a issued: date-parts: - 2021 - 7 - 1 firstpage: 599 lastpage: 608 published: 2021-07-01 00:00:00 +0000 - title: 'Instance Specific Approximations for Submodular Maximization' abstract: 'The predominant measure for the performance of an algorithm is its worst-case approximation guarantee. While worst-case approximations give desirable robustness guarantees, they can differ significantly from the performance of an algorithm in practice. For the problem of monotone submodular maximization under a cardinality constraint, the greedy algorithm is known to obtain a 1-1/e approximation guarantee, which is optimal for a polynomial-time algorithm. 
However, very little is known about the approximation achieved by greedy and other submodular maximization algorithms on real instances. We develop an algorithm that gives an instance-specific approximation for any solution of an instance of monotone submodular maximization under a cardinality constraint. This algorithm uses a novel dual approach to submodular maximization. In particular, it relies on the construction of a lower bound to the dual objective that can also be exactly minimized. We use this algorithm to show that on a wide variety of real-world datasets and objectives, greedy and other algorithms find solutions that approximate the optimal solution significantly better than the 1-1/e ≈ 0.63 worst-case approximation guarantee, often exceeding 0.9.' volume: 139 URL: https://proceedings.mlr.press/v139/balkanski21a.html PDF: http://proceedings.mlr.press/v139/balkanski21a/balkanski21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-balkanski21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Eric family: Balkanski - given: Sharon family: Qian - given: Yaron family: Singer editor: - given: Marina family: Meila - given: Tong family: Zhang page: 609-618 id: balkanski21a issued: date-parts: - 2021 - 7 - 1 firstpage: 609 lastpage: 618 published: 2021-07-01 00:00:00 +0000 - title: 'Augmented World Models Facilitate Zero-Shot Dynamics Generalization From a Single Offline Environment' abstract: 'Reinforcement learning from large-scale offline datasets provides us with the ability to learn policies without potentially unsafe or impractical exploration. Significant progress has been made in the past few years in dealing with the challenge of correcting for differing behavior between the data collection and learned policies. However, little attention has been paid to potentially changing dynamics when transferring a policy to the online setting, where performance can be up to 90% reduced for existing methods. In this paper we address this problem with Augmented World Models (AugWM). We augment a learned dynamics model with simple transformations that seek to capture potential changes in physical properties of the robot, leading to more robust policies. We not only train our policy in this new setting, but also provide it with the sampled augmentation as a context, allowing it to adapt to changes in the environment. At test time we learn the context in a self-supervised fashion by approximating the augmentation which corresponds to the new environment. We rigorously evaluate our approach on over 100 different changed dynamics settings, and show that this simple approach can significantly improve the zero-shot generalization of a recent state-of-the-art baseline, often achieving successful policies where the baseline fails.' 
volume: 139 URL: https://proceedings.mlr.press/v139/ball21a.html PDF: http://proceedings.mlr.press/v139/ball21a/ball21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ball21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Philip J family: Ball - given: Cong family: Lu - given: Jack family: Parker-Holder - given: Stephen family: Roberts editor: - given: Marina family: Meila - given: Tong family: Zhang page: 619-629 id: ball21a issued: date-parts: - 2021 - 7 - 1 firstpage: 619 lastpage: 629 published: 2021-07-01 00:00:00 +0000 - title: 'Regularized Online Allocation Problems: Fairness and Beyond' abstract: 'Online allocation problems with resource constraints have a rich history in computer science and operations research. In this paper, we introduce the regularized online allocation problem, a variant that includes a non-linear regularizer acting on the total resource consumption. In this problem, requests repeatedly arrive over time and, for each request, a decision maker needs to take an action that generates a reward and consumes resources. The objective is to simultaneously maximize total rewards and the value of the regularizer subject to the resource constraints. Our primary motivation is the online allocation of internet advertisements wherein firms seek to maximize additive objectives such as the revenue or efficiency of the allocation. By introducing a regularizer, firms can account for the fairness of the allocation or, alternatively, punish under-delivery of advertisements—two common desiderata in internet advertising markets. We design an algorithm when arrivals are drawn independently from a distribution that is unknown to the decision maker. Our algorithm is simple, fast, and attains the optimal order of sub-linear regret compared to the optimal allocation with the benefit of hindsight. Numerical experiments confirm the effectiveness of the proposed algorithm and of the regularizers in an internet advertising application.' volume: 139 URL: https://proceedings.mlr.press/v139/balseiro21a.html PDF: http://proceedings.mlr.press/v139/balseiro21a/balseiro21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-balseiro21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Santiago family: Balseiro - given: Haihao family: Lu - given: Vahab family: Mirrokni editor: - given: Marina family: Meila - given: Tong family: Zhang page: 630-639 id: balseiro21a issued: date-parts: - 2021 - 7 - 1 firstpage: 630 lastpage: 639 published: 2021-07-01 00:00:00 +0000 - title: 'Predict then Interpolate: A Simple Algorithm to Learn Stable Classifiers' abstract: 'We propose Predict then Interpolate (PI), a simple algorithm for learning correlations that are stable across environments. The algorithm follows from the intuition that when using a classifier trained on one environment to make predictions on examples from another environment, its mistakes are informative as to which correlations are unstable. In this work, we prove that by interpolating the distributions of the correct predictions and the wrong predictions, we can uncover an oracle distribution where the unstable correlation vanishes. 
Since the oracle interpolation coefficients are not accessible, we use group distributionally robust optimization to minimize the worst-case risk across all such interpolations. We evaluate our method on both text classification and image classification. Empirical results demonstrate that our algorithm is able to learn robust classifiers (outperforms IRM by 23.85% on synthetic environments and 12.41% on natural environments). Our code and data are available at https://github.com/YujiaBao/Predict-then-Interpolate.' volume: 139 URL: https://proceedings.mlr.press/v139/bao21a.html PDF: http://proceedings.mlr.press/v139/bao21a/bao21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bao21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yujia family: Bao - given: Shiyu family: Chang - given: Regina family: Barzilay editor: - given: Marina family: Meila - given: Tong family: Zhang page: 640-650 id: bao21a issued: date-parts: - 2021 - 7 - 1 firstpage: 640 lastpage: 650 published: 2021-07-01 00:00:00 +0000 - title: 'Variational (Gradient) Estimate of the Score Function in Energy-based Latent Variable Models' abstract: 'This paper presents new estimates of the score function and its gradient with respect to the model parameters in a general energy-based latent variable model (EBLVM). The score function and its gradient can be expressed as combinations of expectation and covariance terms over the (generally intractable) posterior of the latent variables. New estimates are obtained by introducing a variational posterior to approximate the true posterior in these terms. The variational posterior is trained to minimize a certain divergence (e.g., the KL divergence) between itself and the true posterior. Theoretically, the divergence characterizes upper bounds of the bias of the estimates. In principle, our estimates can be applied to a wide range of objectives, including kernelized Stein discrepancy (KSD), score matching (SM)-based methods and exact Fisher divergence with a minimal model assumption. In particular, these estimates applied to SM-based methods outperform existing methods in learning EBLVMs on several image datasets.' volume: 139 URL: https://proceedings.mlr.press/v139/bao21b.html PDF: http://proceedings.mlr.press/v139/bao21b/bao21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bao21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Fan family: Bao - given: Kun family: Xu - given: Chongxuan family: Li - given: Lanqing family: Hong - given: Jun family: Zhu - given: Bo family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 651-661 id: bao21b issued: date-parts: - 2021 - 7 - 1 firstpage: 651 lastpage: 661 published: 2021-07-01 00:00:00 +0000 - title: 'Compositional Video Synthesis with Action Graphs' abstract: 'Videos of actions are complex signals containing rich compositional structure in space and time. Current video generation methods lack the ability to condition the generation on multiple coordinated and potentially simultaneous timed actions. To address this challenge, we propose to represent the actions in a graph structure called Action Graph and present the new "Action Graph To Video" synthesis task. 
Our generative model for this task (AG2Vid) disentangles motion and appearance features, and by incorporating a scheduling mechanism for actions facilitates a timely and coordinated video generation. We train and evaluate AG2Vid on CATER and Something-Something V2 datasets, which results in videos that have better visual quality and semantic consistency compared to baselines. Finally, our model demonstrates zero-shot abilities by synthesizing novel compositions of the learned actions.' volume: 139 URL: https://proceedings.mlr.press/v139/bar21a.html PDF: http://proceedings.mlr.press/v139/bar21a/bar21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bar21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Amir family: Bar - given: Roei family: Herzig - given: Xiaolong family: Wang - given: Anna family: Rohrbach - given: Gal family: Chechik - given: Trevor family: Darrell - given: Amir family: Globerson editor: - given: Marina family: Meila - given: Tong family: Zhang page: 662-673 id: bar21a issued: date-parts: - 2021 - 7 - 1 firstpage: 662 lastpage: 673 published: 2021-07-01 00:00:00 +0000 - title: 'Approximating a Distribution Using Weight Queries' abstract: 'We consider a novel challenge: approximating a distribution without the ability to randomly sample from that distribution. We study how such an approximation can be obtained using *weight queries*. Given some data set of examples, a weight query presents one of the examples to an oracle, which returns the probability, according to the target distribution, of observing examples similar to the presented example. This oracle can represent, for instance, counting queries to a database of the target population, or an interface to a search engine which returns the number of results that match a given search. We propose an interactive algorithm that iteratively selects data set examples and performs corresponding weight queries. The algorithm finds a reweighting of the data set that approximates the weights according to the target distribution, using a limited number of weight queries. We derive an approximation bound on the total variation distance between the reweighting found by the algorithm and the best achievable reweighting. Our algorithm takes inspiration from the UCB approach common in multi-armed bandits problems, and combines it with a new discrepancy estimator and a greedy iterative procedure. In addition to our theoretical guarantees, we demonstrate in experiments the advantages of the proposed algorithm over several baselines. A python implementation of the proposed algorithm and of all the experiments can be found at https://github.com/Nadav-Barak/AWP.' 
volume: 139 URL: https://proceedings.mlr.press/v139/barak21a.html PDF: http://proceedings.mlr.press/v139/barak21a/barak21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-barak21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Nadav family: Barak - given: Sivan family: Sabato editor: - given: Marina family: Meila - given: Tong family: Zhang page: 674-683 id: barak21a issued: date-parts: - 2021 - 7 - 1 firstpage: 674 lastpage: 683 published: 2021-07-01 00:00:00 +0000 - title: 'Graph Convolution for Semi-Supervised Classification: Improved Linear Separability and Out-of-Distribution Generalization' abstract: 'Recently there has been increased interest in semi-supervised classification in the presence of graphical information. A new class of learning models has emerged that relies, at its most basic level, on classifying the data after first applying a graph convolution. To understand the merits of this approach, we study the classification of a mixture of Gaussians, where the data corresponds to the node attributes of a stochastic block model. We show that graph convolution extends the regime in which the data is linearly separable by a factor of roughly $1/\sqrt{D}$, where $D$ is the expected degree of a node, as compared to the mixture model data on its own. Furthermore, we find that the linear classifier obtained by minimizing the cross-entropy loss after the graph convolution generalizes to out-of-distribution data where the unseen data can have different intra- and inter-class edge probabilities from the training data.' volume: 139 URL: https://proceedings.mlr.press/v139/baranwal21a.html PDF: http://proceedings.mlr.press/v139/baranwal21a/baranwal21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-baranwal21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Aseem family: Baranwal - given: Kimon family: Fountoulakis - given: Aukosh family: Jagannath editor: - given: Marina family: Meila - given: Tong family: Zhang page: 684-693 id: baranwal21a issued: date-parts: - 2021 - 7 - 1 firstpage: 684 lastpage: 693 published: 2021-07-01 00:00:00 +0000 - title: 'Training Quantized Neural Networks to Global Optimality via Semidefinite Programming' abstract: 'Neural networks (NNs) have been extremely successful across many tasks in machine learning. Quantization of NN weights has become an important topic due to its impact on their energy efficiency, inference time and deployment on hardware. Although post-training quantization is well-studied, training optimal quantized NNs involves combinatorial non-convex optimization problems which appear intractable. In this work, we introduce a convex optimization strategy to train quantized NNs with polynomial activations. Our method leverages hidden convexity in two-layer neural networks from the recent literature, semidefinite lifting, and Grothendieck’s identity. Surprisingly, we show that certain quantized NN problems can be solved to global optimality provably in polynomial time in all relevant parameters via tight semidefinite relaxations. We present numerical examples to illustrate the effectiveness of our method.' 
volume: 139 URL: https://proceedings.mlr.press/v139/bartan21a.html PDF: http://proceedings.mlr.press/v139/bartan21a/bartan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bartan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Burak family: Bartan - given: Mert family: Pilanci editor: - given: Marina family: Meila - given: Tong family: Zhang page: 694-704 id: bartan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 694 lastpage: 704 published: 2021-07-01 00:00:00 +0000 - title: 'Beyond $log^2(T)$ regret for decentralized bandits in matching markets' abstract: 'We design decentralized algorithms for regret minimization in the two-sided matching market with one-sided bandit feedback that significantly improve upon the prior works (Liu et al.\,2020a, Sankararaman et al.\,2020, Liu et al.\,2020b). First, for general markets, for any $\varepsilon > 0$, we design an algorithm that achieves a $O(\log^{1+\varepsilon}(T))$ regret to the agent-optimal stable matching, with unknown time horizon $T$, improving upon the $O(\log^{2}(T))$ regret achieved in (Liu et al.\,2020b). Second, we provide the optimal $\Theta(\log(T))$ agent-optimal regret for markets satisfying {\em uniqueness consistency} – markets where leaving participants don’t alter the original stable matching. Previously, $\Theta(\log(T))$ regret was achievable (Sankararaman et al.\,2020, Liu et al.\,2020b) in the much restricted {\em serial dictatorship} setting, when all arms have the same preference over the agents. We propose a phase-based algorithm, where in each phase, besides deleting the globally communicated dominated arms, the agents locally delete arms with which they collide often. This \emph{local deletion} is pivotal in breaking deadlocks arising from rank heterogeneity of agents across arms. We further demonstrate superiority of our algorithm over existing works through simulations.' volume: 139 URL: https://proceedings.mlr.press/v139/basu21a.html PDF: http://proceedings.mlr.press/v139/basu21a/basu21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-basu21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Soumya family: Basu - given: Karthik Abinav family: Sankararaman - given: Abishek family: Sankararaman editor: - given: Marina family: Meila - given: Tong family: Zhang page: 705-715 id: basu21a issued: date-parts: - 2021 - 7 - 1 firstpage: 705 lastpage: 715 published: 2021-07-01 00:00:00 +0000 - title: 'Optimal Thompson Sampling strategies for support-aware CVaR bandits' abstract: 'In this paper we study a multi-arm bandit problem in which the quality of each arm is measured by the Conditional Value at Risk (CVaR) at some level alpha of the reward distribution. While existing works in this setting mainly focus on Upper Confidence Bound algorithms, we introduce a new Thompson Sampling approach for CVaR bandits on bounded rewards that is flexible enough to solve a variety of problems grounded on physical resources. Building on a recent work by Riou & Honda (2020), we introduce B-CVTS for continuous bounded rewards and M-CVTS for multinomial distributions. On the theoretical side, we provide a non-trivial extension of their analysis that enables us to theoretically bound their CVaR regret minimization performance.
Strikingly, our results show that these strategies are the first to provably achieve asymptotic optimality in CVaR bandits, matching the corresponding asymptotic lower bounds for this setting. Further, we illustrate empirically the benefit of Thompson Sampling approaches both in a realistic environment simulating a use-case in agriculture and on various synthetic examples.' volume: 139 URL: https://proceedings.mlr.press/v139/baudry21a.html PDF: http://proceedings.mlr.press/v139/baudry21a/baudry21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-baudry21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dorian family: Baudry - given: Romain family: Gautron - given: Emilie family: Kaufmann - given: Odalric family: Maillard editor: - given: Marina family: Meila - given: Tong family: Zhang page: 716-726 id: baudry21a issued: date-parts: - 2021 - 7 - 1 firstpage: 716 lastpage: 726 published: 2021-07-01 00:00:00 +0000 - title: 'On Limited-Memory Subsampling Strategies for Bandits' abstract: 'There has been a recent surge of interest in non-parametric bandit algorithms based on subsampling. One drawback however of these approaches is the additional complexity required by random subsampling and the storage of the full history of rewards. Our first contribution is to show that a simple deterministic subsampling rule, proposed in the recent work of \citet{baudry2020sub} under the name of “last-block subsampling”, is asymptotically optimal in one-parameter exponential families. In addition, we prove that these guarantees also hold when limiting the algorithm memory to a polylogarithmic function of the time horizon. These findings open up new perspectives, in particular for non-stationary scenarios in which the arm distributions evolve over time. We propose a variant of the algorithm in which only the most recent observations are used for subsampling, achieving optimal regret guarantees under the assumption of a known number of abrupt changes. Extensive numerical simulations highlight the merits of this approach, particularly when the changes are not only affecting the means of the rewards.' volume: 139 URL: https://proceedings.mlr.press/v139/baudry21b.html PDF: http://proceedings.mlr.press/v139/baudry21b/baudry21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-baudry21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dorian family: Baudry - given: Yoan family: Russac - given: Olivier family: Cappé editor: - given: Marina family: Meila - given: Tong family: Zhang page: 727-737 id: baudry21b issued: date-parts: - 2021 - 7 - 1 firstpage: 727 lastpage: 737 published: 2021-07-01 00:00:00 +0000 - title: 'Generalized Doubly Reparameterized Gradient Estimators' abstract: 'Efficient low-variance gradient estimation enabled by the reparameterization trick (RT) has been essential to the success of variational autoencoders. Doubly-reparameterized gradients (DReGs) improve on the RT for multi-sample variational bounds by applying reparameterization a second time for an additional reduction in variance. Here, we develop two generalizations of the DReGs estimator and show that they can be used to train conditional and hierarchical VAEs on image modelling tasks more effectively. 
We first extend the estimator to hierarchical models with several stochastic layers by showing how to treat additional score function terms due to the hierarchical variational posterior. We then generalize DReGs to score functions of arbitrary distributions instead of just those of the sampling distribution, which makes the estimator applicable to the parameters of the prior in addition to those of the posterior.' volume: 139 URL: https://proceedings.mlr.press/v139/bauer21a.html PDF: http://proceedings.mlr.press/v139/bauer21a/bauer21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bauer21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Matthias family: Bauer - given: Andriy family: Mnih editor: - given: Marina family: Meila - given: Tong family: Zhang page: 738-747 id: bauer21a issued: date-parts: - 2021 - 7 - 1 firstpage: 738 lastpage: 747 published: 2021-07-01 00:00:00 +0000 - title: 'Directional Graph Networks' abstract: 'The lack of anisotropic kernels in graph neural networks (GNNs) strongly limits their expressiveness, contributing to well-known issues such as over-smoothing. To overcome this limitation, we propose the first globally consistent anisotropic kernels for GNNs, allowing for graph convolutions that are defined according to topologically-derived directional flows. First, by defining a vector field in the graph, we develop a method of applying directional derivatives and smoothing by projecting node-specific messages into the field. Then, we propose the use of the Laplacian eigenvectors as such a vector field. We show that the method generalizes CNNs on an $n$-dimensional grid and is provably more discriminative than standard GNNs regarding the Weisfeiler-Lehman 1-WL test. We evaluate our method on different standard benchmarks and see a relative error reduction of 8% on the CIFAR10 graph dataset and 11% to 32% on the molecular ZINC dataset, and a relative increase in precision of 1.6% on the MolPCBA dataset. An important outcome of this work is that it enables graph networks to embed directions in an unsupervised way, thus allowing a better representation of the anisotropic features in different physical or biological problems.' volume: 139 URL: https://proceedings.mlr.press/v139/beaini21a.html PDF: http://proceedings.mlr.press/v139/beaini21a/beaini21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-beaini21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dominique family: Beaini - given: Saro family: Passaro - given: Vincent family: Létourneau - given: Will family: Hamilton - given: Gabriele family: Corso - given: Pietro family: Lió editor: - given: Marina family: Meila - given: Tong family: Zhang page: 748-758 id: beaini21a issued: date-parts: - 2021 - 7 - 1 firstpage: 748 lastpage: 758 published: 2021-07-01 00:00:00 +0000 - title: 'Policy Analysis using Synthetic Controls in Continuous-Time' abstract: 'Counterfactual estimation using synthetic controls is one of the most successful recent methodological developments in causal inference. Despite its popularity, the current description only considers time series aligned across units and synthetic controls expressed as linear combinations of observed control units.
We propose a continuous-time alternative that models the latent counterfactual path explicitly using the formalism of controlled differential equations. This model is directly applicable to the general setting of irregularly-aligned multivariate time series and may be optimized in rich function spaces – thereby improving on some limitations of existing approaches.' volume: 139 URL: https://proceedings.mlr.press/v139/bellot21a.html PDF: http://proceedings.mlr.press/v139/bellot21a/bellot21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bellot21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Alexis family: Bellot - given: Mihaela prefix: van der family: Schaar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 759-768 id: bellot21a issued: date-parts: - 2021 - 7 - 1 firstpage: 759 lastpage: 768 published: 2021-07-01 00:00:00 +0000 - title: 'Loss Surface Simplexes for Mode Connecting Volumes and Fast Ensembling' abstract: 'With a better understanding of the loss surfaces for multilayer networks, we can build more robust and accurate training procedures. Recently it was discovered that independently trained SGD solutions can be connected along one-dimensional paths of near-constant training loss. In this paper, we in fact demonstrate the existence of mode-connecting simplicial complexes that form multi-dimensional manifolds of low loss, connecting many independently trained models. Building on this discovery, we show how to efficiently construct simplicial complexes for fast ensembling, outperforming independently trained deep ensembles in accuracy, calibration, and robustness to dataset shift. Notably, our approach is easy to apply and only requires a few training epochs to discover a low-loss simplex.' volume: 139 URL: https://proceedings.mlr.press/v139/benton21a.html PDF: http://proceedings.mlr.press/v139/benton21a/benton21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-benton21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Gregory family: Benton - given: Wesley family: Maddox - given: Sanae family: Lotfi - given: Andrew Gordon Gordon family: Wilson editor: - given: Marina family: Meila - given: Tong family: Zhang page: 769-779 id: benton21a issued: date-parts: - 2021 - 7 - 1 firstpage: 769 lastpage: 779 published: 2021-07-01 00:00:00 +0000 - title: 'TFix: Learning to Fix Coding Errors with a Text-to-Text Transformer' abstract: 'The problem of fixing errors in programs has attracted substantial interest over the years. The key challenge for building an effective code fixing tool is to capture a wide range of errors and meanwhile maintain high accuracy. In this paper, we address this challenge and present a new learning-based system, called TFix. TFix works directly on program text and phrases the problem of code fixing as a text-to-text task. In turn, this enables it to leverage a powerful Transformer based model pre-trained on natural language and fine-tuned to generate code fixes (via a large, high-quality dataset obtained from GitHub commits). 
TFix is not specific to a particular programming language or class of defects and, in fact, improves its precision by simultaneously fine-tuning on 52 different error types reported by a popular static analyzer. Our evaluation on a massive dataset of JavaScript programs shows that TFix is practically effective: it is able to synthesize code that fixes the error in 67 percent of cases and significantly outperforms existing learning-based approaches.' volume: 139 URL: https://proceedings.mlr.press/v139/berabi21a.html PDF: http://proceedings.mlr.press/v139/berabi21a/berabi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-berabi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Berkay family: Berabi - given: Jingxuan family: He - given: Veselin family: Raychev - given: Martin family: Vechev editor: - given: Marina family: Meila - given: Tong family: Zhang page: 780-791 id: berabi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 780 lastpage: 791 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Queueing Policies for Organ Transplantation Allocation using Interpretable Counterfactual Survival Analysis' abstract: 'Organ transplantation is often the last resort for treating end-stage illnesses, but managing transplant wait-lists is challenging because of organ scarcity and the complexity of assessing donor-recipient compatibility. In this paper, we develop a data-driven model for (real-time) organ allocation using observational data for transplant outcomes. Our model integrates a queuing-theoretic framework with unsupervised learning to cluster the organs into “organ types”, and then constructs priority queues (associated with each organ type) wherein incoming patients are assigned. To reason about organ allocations, the model uses synthetic controls to infer a patient’s survival outcomes under counterfactual allocations to the different organ types{–} the model is trained end-to-end to optimise the trade-off between patient waiting time and expected survival time. The usage of synthetic controls enables patient-level interpretations of allocation decisions that can be presented and understood by clinicians. We test our model on multiple data sets, and show that it outperforms other organ-allocation policies in terms of added life-years and death count. Furthermore, we introduce a novel organ-allocation simulator to accurately test new policies.' volume: 139 URL: https://proceedings.mlr.press/v139/berrevoets21a.html PDF: http://proceedings.mlr.press/v139/berrevoets21a/berrevoets21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-berrevoets21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jeroen family: Berrevoets - given: Ahmed family: Alaa - given: Zhaozhi family: Qian - given: James family: Jordon - given: Alexander E. S.
family: Gimson - given: Mihaela prefix: van der family: Schaar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 792-802 id: berrevoets21a issued: date-parts: - 2021 - 7 - 1 firstpage: 792 lastpage: 802 published: 2021-07-01 00:00:00 +0000 - title: 'Learning from Biased Data: A Semi-Parametric Approach' abstract: 'We consider risk minimization problems where the (source) distribution $P_S$ of the training observations $Z_1, \ldots, Z_n$ differs from the (target) distribution $P_T$ involved in the risk that one seeks to minimize. Under the natural assumption that $P_S$ dominates $P_T$, \textit{i.e.} $P_T< \! \! 1$ (including $p = \infty$).' volume: 139 URL: https://proceedings.mlr.press/v139/bhattacharjee21a.html PDF: http://proceedings.mlr.press/v139/bhattacharjee21a/bhattacharjee21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bhattacharjee21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Robi family: Bhattacharjee - given: Somesh family: Jha - given: Kamalika family: Chaudhuri editor: - given: Marina family: Meila - given: Tong family: Zhang page: 884-893 id: bhattacharjee21a issued: date-parts: - 2021 - 7 - 1 firstpage: 884 lastpage: 893 published: 2021-07-01 00:00:00 +0000 - title: 'Finding k in Latent $k-$ polytope' abstract: 'The recently introduced Latent $k-$ Polytope($\LkP$) encompasses several stochastic Mixed Membership models including Topic Models. The problem of finding $k$, the number of extreme points of $\LkP$, is a fundamental challenge and includes several important open problems such as determination of number of components in Ad-mixtures. This paper addresses this challenge by introducing Interpolative Convex Rank(\INR) of a matrix defined as the minimum number of its columns whose convex hull is within Hausdorff distance $\varepsilon$ of the convex hull of all columns. The first important contribution of this paper is to show that under \emph{standard assumptions} $k$ equals the \INR of a \emph{subset smoothed data matrix} defined from Data generated from an $\LkP$. The second important contribution of the paper is a polynomial time algorithm for finding $k$ under standard assumptions. An immediate corollary is the first polynomial time algorithm for finding the \emph{inner dimension} in Non-negative matrix factorisation(NMF) with assumptions which are qualitatively different than existing ones such as \emph{Separability}.'
volume: 139 URL: https://proceedings.mlr.press/v139/bhattacharyya21a.html PDF: http://proceedings.mlr.press/v139/bhattacharyya21a/bhattacharyya21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bhattacharyya21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chiranjib family: Bhattacharyya - given: Ravindran family: Kannan - given: Amit family: Kumar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 894-903 id: bhattacharyya21a issued: date-parts: - 2021 - 7 - 1 firstpage: 894 lastpage: 903 published: 2021-07-01 00:00:00 +0000 - title: 'Non-Autoregressive Electron Redistribution Modeling for Reaction Prediction' abstract: 'Reliably predicting the products of chemical reactions presents a fundamental challenge in synthetic chemistry. Existing machine learning approaches typically produce a reaction product by sequentially forming its subparts or intermediate molecules. Such autoregressive methods, however, not only require a pre-defined order for the incremental construction but preclude the use of parallel decoding for efficient computation. To address these issues, we devise a non-autoregressive learning paradigm that predicts reaction in one shot. Leveraging the fact that chemical reactions can be described as a redistribution of electrons in molecules, we formulate a reaction as an arbitrary electron flow and predict it with a novel multi-pointer decoding network. Experiments on the USPTO-MIT dataset show that our approach has established a new state-of-the-art top-1 accuracy and achieves at least 27 times inference speedup over the state-of-the-art methods. Also, our predictions are easier for chemists to interpret owing to predicting the electron flows.' volume: 139 URL: https://proceedings.mlr.press/v139/bi21a.html PDF: http://proceedings.mlr.press/v139/bi21a/bi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hangrui family: Bi - given: Hengyi family: Wang - given: Chence family: Shi - given: Connor family: Coley - given: Jian family: Tang - given: Hongyu family: Guo editor: - given: Marina family: Meila - given: Tong family: Zhang page: 904-913 id: bi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 904 lastpage: 913 published: 2021-07-01 00:00:00 +0000 - title: 'TempoRL: Learning When to Act' abstract: 'Reinforcement learning is a powerful approach to learn behaviour through interactions with an environment. However, behaviours are usually learned in a purely reactive fashion, where an appropriate action is selected based on an observation. In this form, it is challenging to learn when it is necessary to execute new decisions. This makes learning inefficient especially in environments that need various degrees of fine and coarse control. To address this, we propose a proactive setting in which the agent not only selects an action in a state but also for how long to commit to that action. Our TempoRL approach introduces skip connections between states and learns a skip-policy for repeating the same action along these skips. 
We demonstrate the effectiveness of TempoRL on a variety of traditional and deep RL environments, showing that our approach is capable of learning successful policies up to an order of magnitude faster than vanilla Q-learning.' volume: 139 URL: https://proceedings.mlr.press/v139/biedenkapp21a.html PDF: http://proceedings.mlr.press/v139/biedenkapp21a/biedenkapp21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-biedenkapp21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: André family: Biedenkapp - given: Raghu family: Rajan - given: Frank family: Hutter - given: Marius family: Lindauer editor: - given: Marina family: Meila - given: Tong family: Zhang page: 914-924 id: biedenkapp21a issued: date-parts: - 2021 - 7 - 1 firstpage: 914 lastpage: 924 published: 2021-07-01 00:00:00 +0000 - title: 'Follow-the-Regularized-Leader Routes to Chaos in Routing Games' abstract: 'We study the emergence of chaotic behavior of Follow-the-Regularized Leader (FoReL) dynamics in games. We focus on the effects of increasing the population size or the scale of costs in congestion games, and generalize recent results on unstable, chaotic behaviors in the Multiplicative Weights Update dynamics to a much larger class of FoReL dynamics. We establish that, even in simple linear non-atomic congestion games with two parallel links and \emph{any} fixed learning rate, unless the game is fully symmetric, increasing the population size or the scale of costs causes learning dynamics to become unstable and eventually chaotic, in the sense of Li-Yorke and positive topological entropy. Furthermore, we prove the existence of novel non-standard phenomena such as the coexistence of stable Nash equilibria and chaos in the same game. We also observe the simultaneous creation of a chaotic attractor as another chaotic attractor gets destroyed. Lastly, although FoReL dynamics can be strange and non-equilibrating, we prove that the time average still converges to an \emph{exact} equilibrium for any choice of learning rate and any scale of costs.' volume: 139 URL: https://proceedings.mlr.press/v139/bielawski21a.html PDF: http://proceedings.mlr.press/v139/bielawski21a/bielawski21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bielawski21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jakub family: Bielawski - given: Thiparat family: Chotibut - given: Fryderyk family: Falniowski - given: Grzegorz family: Kosiorowski - given: Michał family: Misiurewicz - given: Georgios family: Piliouras editor: - given: Marina family: Meila - given: Tong family: Zhang page: 925-935 id: bielawski21a issued: date-parts: - 2021 - 7 - 1 firstpage: 925 lastpage: 935 published: 2021-07-01 00:00:00 +0000 - title: 'Neural Symbolic Regression that scales' abstract: 'Symbolic equations are at the core of scientific discovery. The task of discovering the underlying equation from a set of input-output pairs is called symbolic regression. Traditionally, symbolic regression methods use hand-designed strategies that do not improve with experience. In this paper, we introduce the first symbolic regression method that leverages large scale pre-training.
We procedurally generate an unbounded set of equations, and simultaneously pre-train a Transformer to predict the symbolic equation from a corresponding set of input-output-pairs. At test time, we query the model on a new set of points and use its output to guide the search for the equation. We show empirically that this approach can re-discover a set of well-known physical equations, and that it improves over time with more data and compute.' volume: 139 URL: https://proceedings.mlr.press/v139/biggio21a.html PDF: http://proceedings.mlr.press/v139/biggio21a/biggio21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-biggio21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Luca family: Biggio - given: Tommaso family: Bendinelli - given: Alexander family: Neitz - given: Aurelien family: Lucchi - given: Giambattista family: Parascandolo editor: - given: Marina family: Meila - given: Tong family: Zhang page: 936-945 id: biggio21a issued: date-parts: - 2021 - 7 - 1 firstpage: 936 lastpage: 945 published: 2021-07-01 00:00:00 +0000 - title: 'Model Distillation for Revenue Optimization: Interpretable Personalized Pricing' abstract: 'Data-driven pricing strategies are becoming increasingly common, where customers are offered a personalized price based on features that are predictive of their valuation of a product. It is desirable for this pricing policy to be simple and interpretable, so it can be verified, checked for fairness, and easily implemented. However, efforts to incorporate machine learning into a pricing framework often lead to complex pricing policies that are not interpretable, resulting in slow adoption in practice. We present a novel, customized, prescriptive tree-based algorithm that distills knowledge from a complex black-box machine learning algorithm, segments customers with similar valuations and prescribes prices in such a way that maximizes revenue while maintaining interpretability. We quantify the regret of a resulting policy and demonstrate its efficacy in applications with both synthetic and real-world datasets.' volume: 139 URL: https://proceedings.mlr.press/v139/biggs21a.html PDF: http://proceedings.mlr.press/v139/biggs21a/biggs21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-biggs21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Max family: Biggs - given: Wei family: Sun - given: Markus family: Ettl editor: - given: Marina family: Meila - given: Tong family: Zhang page: 946-956 id: biggs21a issued: date-parts: - 2021 - 7 - 1 firstpage: 946 lastpage: 956 published: 2021-07-01 00:00:00 +0000 - title: 'Scalable Normalizing Flows for Permutation Invariant Densities' abstract: 'Modeling sets is an important problem in machine learning since this type of data can be found in many domains. A promising approach defines a family of permutation invariant densities with continuous normalizing flows. This allows us to maximize the likelihood directly and sample new realizations with ease. In this work, we demonstrate how calculating the trace, a crucial step in this method, raises issues that occur both during training and inference, limiting its practicality. 
We propose an alternative way of defining permutation equivariant transformations that give closed form trace. This leads not only to improvements while training, but also to better final performance. We demonstrate the benefits of our approach on point processes and general set modeling.' volume: 139 URL: https://proceedings.mlr.press/v139/bilos21a.html PDF: http://proceedings.mlr.press/v139/bilos21a/bilos21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bilos21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Marin family: Biloš - given: Stephan family: Günnemann editor: - given: Marina family: Meila - given: Tong family: Zhang page: 957-967 id: bilos21a issued: date-parts: - 2021 - 7 - 1 firstpage: 957 lastpage: 967 published: 2021-07-01 00:00:00 +0000 - title: 'Online Learning for Load Balancing of Unknown Monotone Resource Allocation Games' abstract: 'Consider N players that each uses a mixture of K resources. Each of the players’ reward functions includes a linear pricing term for each resource that is controlled by the game manager. We assume that the game is strongly monotone, so if each player runs gradient descent, the dynamics converge to a unique Nash equilibrium (NE). Unfortunately, this NE can be inefficient since the total load on a given resource can be very high. In principle, we can control the total loads by tuning the coefficients of the pricing terms. However, finding pricing coefficients that balance the loads requires knowing the players’ reward functions and their action sets. Obtaining this game structure information is infeasible in a large-scale network and violates the users’ privacy. To overcome this, we propose a simple algorithm that learns to shift the NE of the game to meet the total load constraints by adjusting the pricing coefficients in an online manner. Our algorithm only requires the total load per resource as feedback and does not need to know the reward functions or the action sets. We prove that our algorithm guarantees convergence in L2 to a NE that meets target total load constraints. Simulations show the effectiveness of our approach when applied to smart grid demand-side management or power control in wireless networks.' volume: 139 URL: https://proceedings.mlr.press/v139/bistritz21a.html PDF: http://proceedings.mlr.press/v139/bistritz21a/bistritz21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bistritz21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ilai family: Bistritz - given: Nicholas family: Bambos editor: - given: Marina family: Meila - given: Tong family: Zhang page: 968-979 id: bistritz21a issued: date-parts: - 2021 - 7 - 1 firstpage: 968 lastpage: 979 published: 2021-07-01 00:00:00 +0000 - title: 'Low-Precision Reinforcement Learning: Running Soft Actor-Critic in Half Precision' abstract: 'Low-precision training has become a popular approach to reduce compute requirements, memory footprint, and energy consumption in supervised learning. In contrast, this promising approach has not yet enjoyed similarly widespread adoption within the reinforcement learning (RL) community, partly because RL agents can be notoriously hard to train even in full precision. 
In this paper we consider continuous control with the state-of-the-art SAC agent and demonstrate that a naïve adaptation of low-precision methods from supervised learning fails. We propose a set of six modifications, all straightforward to implement, that leaves the underlying agent and its hyperparameters unchanged but improves the numerical stability dramatically. The resulting modified SAC agent has lower memory and compute requirements while matching full-precision rewards, demonstrating that low-precision training can substantially accelerate state-of-the-art RL without parameter tuning.' volume: 139 URL: https://proceedings.mlr.press/v139/bjorck21a.html PDF: http://proceedings.mlr.press/v139/bjorck21a/bjorck21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bjorck21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Johan family: Björck - given: Xiangyu family: Chen - given: Christopher family: De Sa - given: Carla P family: Gomes - given: Kilian family: Weinberger editor: - given: Marina family: Meila - given: Tong family: Zhang page: 980-991 id: bjorck21a issued: date-parts: - 2021 - 7 - 1 firstpage: 980 lastpage: 991 published: 2021-07-01 00:00:00 +0000 - title: 'Multiplying Matrices Without Multiplying' abstract: 'Multiplying matrices is among the most fundamental and most computationally demanding operations in machine learning and scientific computing. Consequently, the task of efficiently approximating matrix products has received significant attention. We introduce a learning-based algorithm for this task that greatly outperforms existing methods. Experiments using hundreds of matrices from diverse domains show that it often runs 10x faster than alternatives at a given level of error, as well as 100x faster than exact matrix multiplication. In the common case that one matrix is known ahead of time, our method also has the interesting property that it requires zero multiply-adds. These results suggest that a mixture of hashing, averaging, and byte shuffling{—}the core operations of our method{—}could be a more promising building block for machine learning than the sparsified, factorized, and/or scalar quantized matrix products that have recently been the focus of substantial research and hardware investment.' volume: 139 URL: https://proceedings.mlr.press/v139/blalock21a.html PDF: http://proceedings.mlr.press/v139/blalock21a/blalock21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-blalock21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Davis family: Blalock - given: John family: Guttag editor: - given: Marina family: Meila - given: Tong family: Zhang page: 992-1004 id: blalock21a issued: date-parts: - 2021 - 7 - 1 firstpage: 992 lastpage: 1004 published: 2021-07-01 00:00:00 +0000 - title: 'One for One, or All for All: Equilibria and Optimality of Collaboration in Federated Learning' abstract: 'In recent years, federated learning has been embraced as an approach for bringing about collaboration across large populations of learning agents. However, little is known about how collaboration protocols should take agents’ incentives into account when allocating individual resources for communal learning in order to maintain such collaborations. 
Inspired by game theoretic notions, this paper introduces a framework for incentive-aware learning and data sharing in federated learning. Our stable and envy-free equilibria capture notions of collaboration in the presence of agents interested in meeting their learning objectives while keeping their own sample collection burden low. For example, in an envy-free equilibrium, no agent would wish to swap their sampling burden with any other agent and in a stable equilibrium, no agent would wish to unilaterally reduce their sampling burden. In addition to formalizing this framework, our contributions include characterizing the structural properties of such equilibria, proving when they exist, and showing how they can be computed. Furthermore, we compare the sample complexity of incentive-aware collaboration with that of optimal collaboration when one ignores agents’ incentives.' volume: 139 URL: https://proceedings.mlr.press/v139/blum21a.html PDF: http://proceedings.mlr.press/v139/blum21a/blum21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-blum21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Avrim family: Blum - given: Nika family: Haghtalab - given: Richard Lanas family: Phillips - given: Han family: Shao editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1005-1014 id: blum21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1005 lastpage: 1014 published: 2021-07-01 00:00:00 +0000 - title: 'Black-box density function estimation using recursive partitioning' abstract: 'We present a novel approach to Bayesian inference and general Bayesian computation that is defined through a sequential decision loop. Our method defines a recursive partitioning of the sample space. It neither relies on gradients nor requires any problem-specific tuning, and is asymptotically exact for any density function with a bounded domain. The output is an approximation to the whole density function including the normalisation constant, via partitions organised in efficient data structures. Such approximations may be used for evidence estimation or fast posterior sampling, but also as building blocks to treat a larger class of estimation problems. The algorithm shows competitive performance to recent state-of-the-art methods on synthetic and real-world problems including parameter inference for gravitational-wave physics.' volume: 139 URL: https://proceedings.mlr.press/v139/bodin21a.html PDF: http://proceedings.mlr.press/v139/bodin21a/bodin21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bodin21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Erik family: Bodin - given: Zhenwen family: Dai - given: Neill family: Campbell - given: Carl Henrik family: Ek editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1015-1025 id: bodin21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1015 lastpage: 1025 published: 2021-07-01 00:00:00 +0000 - title: 'Weisfeiler and Lehman Go Topological: Message Passing Simplicial Networks' abstract: 'The pairwise interaction paradigm of graph machine learning has predominantly governed the modelling of relational systems. 
However, graphs alone cannot capture the multi-level interactions present in many complex systems and the expressive power of such schemes was proven to be limited. To overcome these limitations, we propose Message Passing Simplicial Networks (MPSNs), a class of models that perform message passing on simplicial complexes (SCs). To theoretically analyse the expressivity of our model we introduce a Simplicial Weisfeiler-Lehman (SWL) colouring procedure for distinguishing non-isomorphic SCs. We relate the power of SWL to the problem of distinguishing non-isomorphic graphs and show that SWL and MPSNs are strictly more powerful than the WL test and not less powerful than the 3-WL test. We deepen the analysis by comparing our model with traditional graph neural networks (GNNs) with ReLU activations in terms of the number of linear regions of the functions they can represent. We empirically support our theoretical claims by showing that MPSNs can distinguish challenging strongly regular graphs for which GNNs fail and, when equipped with orientation equivariant layers, they can improve classification accuracy in oriented SCs compared to a GNN baseline.' volume: 139 URL: https://proceedings.mlr.press/v139/bodnar21a.html PDF: http://proceedings.mlr.press/v139/bodnar21a/bodnar21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bodnar21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Cristian family: Bodnar - given: Fabrizio family: Frasca - given: Yuguang family: Wang - given: Nina family: Otter - given: Guido F family: Montufar - given: Pietro family: Lió - given: Michael family: Bronstein editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1026-1037 id: bodnar21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1026 lastpage: 1037 published: 2021-07-01 00:00:00 +0000 - title: 'The Hintons in your Neural Network: a Quantum Field Theory View of Deep Learning' abstract: 'In this work we develop a quantum field theory formalism for deep learning, where input signals are encoded in Gaussian states, a generalization of Gaussian processes which encode the agent’s uncertainty about the input signal. We show how to represent linear and non-linear layers as unitary quantum gates, and interpret the fundamental excitations of the quantum model as particles, dubbed “Hintons”. On top of opening a new perspective and techniques for studying neural networks, the quantum formulation is well suited for optical quantum computing, and provides quantum deformations of neural networks that can be run efficiently on those devices. Finally, we discuss a semi-classical limit of the quantum deformed models which is amenable to classical simulation.' 
volume: 139 URL: https://proceedings.mlr.press/v139/bondesan21a.html PDF: http://proceedings.mlr.press/v139/bondesan21a/bondesan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bondesan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Roberto family: Bondesan - given: Max family: Welling editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1038-1048 id: bondesan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1038 lastpage: 1048 published: 2021-07-01 00:00:00 +0000 - title: 'Offline Contextual Bandits with Overparameterized Models' abstract: 'Recent results in supervised learning suggest that while overparameterized models have the capacity to overfit, they in fact generalize quite well. We ask whether the same phenomenon occurs for offline contextual bandits. Our results are mixed. Value-based algorithms benefit from the same generalization behavior as overparameterized supervised learning, but policy-based algorithms do not. We show that this discrepancy is due to the \emph{action-stability} of their objectives. An objective is action-stable if there exists a prediction (action-value vector or action distribution) which is optimal no matter which action is observed. While value-based objectives are action-stable, policy-based objectives are unstable. We formally prove upper bounds on the regret of overparameterized value-based learning and lower bounds on the regret for policy-based algorithms. In our experiments with large neural networks, this gap between action-stable value-based objectives and unstable policy-based objectives leads to significant performance differences.' volume: 139 URL: https://proceedings.mlr.press/v139/brandfonbrener21a.html PDF: http://proceedings.mlr.press/v139/brandfonbrener21a/brandfonbrener21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-brandfonbrener21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: David family: Brandfonbrener - given: William family: Whitney - given: Rajesh family: Ranganath - given: Joan family: Bruna editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1049-1058 id: brandfonbrener21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1049 lastpage: 1058 published: 2021-07-01 00:00:00 +0000 - title: 'High-Performance Large-Scale Image Recognition Without Normalization' abstract: 'Batch normalization is a key component of most image classification models, but it has many undesirable properties stemming from its dependence on the batch size and interactions between examples. Although recent work has succeeded in training deep ResNets without normalization layers, these models do not match the test accuracies of the best batch-normalized networks, and are often unstable for large learning rates or strong data augmentations. In this work, we develop an adaptive gradient clipping technique which overcomes these instabilities, and design a significantly improved class of Normalizer-Free ResNets. Our smaller models match the test accuracy of an EfficientNet-B7 on ImageNet while being up to 8.7x faster to train, and our largest models attain a new state-of-the-art top-1 accuracy of 86.5%. 
In addition, Normalizer-Free models attain significantly better performance than their batch-normalized counterparts when fine-tuning on ImageNet after large-scale pre-training on a dataset of 300 million labeled images, with our best models obtaining an accuracy of 89.2%.' volume: 139 URL: https://proceedings.mlr.press/v139/brock21a.html PDF: http://proceedings.mlr.press/v139/brock21a/brock21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-brock21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Andy family: Brock - given: Soham family: De - given: Samuel L family: Smith - given: Karen family: Simonyan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1059-1071 id: brock21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1059 lastpage: 1071 published: 2021-07-01 00:00:00 +0000 - title: 'Evaluating the Implicit Midpoint Integrator for Riemannian Hamiltonian Monte Carlo' abstract: 'Riemannian manifold Hamiltonian Monte Carlo is traditionally carried out using the generalized leapfrog integrator. However, this integrator is not the only choice and other integrators yielding valid Markov chain transition operators may be considered. In this work, we examine the implicit midpoint integrator as an alternative to the generalized leapfrog integrator. We discuss advantages and disadvantages of the implicit midpoint integrator for Hamiltonian Monte Carlo, its theoretical properties, and an empirical assessment of the critical attributes of such an integrator for Hamiltonian Monte Carlo: energy conservation, volume preservation, and reversibility. Empirically, we find that while leapfrog iterations are faster, the implicit midpoint integrator has better energy conservation, leading to higher acceptance rates, as well as better conservation of volume and better reversibility, arguably yielding a more accurate sampling procedure.' volume: 139 URL: https://proceedings.mlr.press/v139/brofos21a.html PDF: http://proceedings.mlr.press/v139/brofos21a/brofos21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-brofos21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: James family: Brofos - given: Roy R family: Lederman editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1072-1081 id: brofos21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1072 lastpage: 1081 published: 2021-07-01 00:00:00 +0000 - title: 'Reinforcement Learning of Implicit and Explicit Control Flow Instructions' abstract: 'Learning to flexibly follow task instructions in dynamic environments poses interesting challenges for reinforcement learning agents. We focus here on the problem of learning control flow that deviates from a strict step-by-step execution of instructions{—}that is, control flow that may skip forward over parts of the instructions or return backward to previously completed or skipped steps. Demand for such flexible control arises in two fundamental ways: explicitly when control is specified in the instructions themselves (such as conditional branching and looping) and implicitly when stochastic environment dynamics require re-completion of instructions whose effects have been perturbed, or opportunistic skipping of instructions whose effects are already present. 
We formulate an attention-based architecture that meets these challenges by learning, from task reward only, to flexibly attend to and condition behavior on an internal encoding of the instructions. We test the architecture’s ability to learn both explicit and implicit control in two illustrative domains—one inspired by Minecraft and the other by StarCraft—and show that the architecture exhibits zero-shot generalization to novel instructions of length greater than those in a training set, at a performance level unmatched by three baseline recurrent architectures and one ablation architecture.' volume: 139 URL: https://proceedings.mlr.press/v139/brooks21a.html PDF: http://proceedings.mlr.press/v139/brooks21a/brooks21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-brooks21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ethan family: Brooks - given: Janarthanan family: Rajendran - given: Richard L family: Lewis - given: Satinder family: Singh editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1082-1091 id: brooks21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1082 lastpage: 1091 published: 2021-07-01 00:00:00 +0000 - title: 'Machine Unlearning for Random Forests' abstract: 'Responding to user data deletion requests, removing noisy examples, or deleting corrupted training data are just a few reasons for wanting to delete instances from a machine learning (ML) model. However, efficiently removing this data from an ML model is generally difficult. In this paper, we introduce data removal-enabled (DaRE) forests, a variant of random forests that enables the removal of training data with minimal retraining. Model updates for each DaRE tree in the forest are exact, meaning that removing instances from a DaRE model yields exactly the same model as retraining from scratch on updated data. DaRE trees use randomness and caching to make data deletion efficient. The upper levels of DaRE trees use random nodes, which choose split attributes and thresholds uniformly at random. These nodes rarely require updates because they only minimally depend on the data. At the lower levels, splits are chosen to greedily optimize a split criterion such as Gini index or mutual information. DaRE trees cache statistics at each node and training data at each leaf, so that only the necessary subtrees are updated as data is removed. For numerical attributes, greedy nodes optimize over a random subset of thresholds, so that they can maintain statistics while approximating the optimal threshold. By adjusting the number of thresholds considered for greedy nodes, and the number of random nodes, DaRE trees can trade off between more accurate predictions and more efficient updates. In experiments on 13 real-world datasets and one synthetic dataset, we find DaRE forests delete data orders of magnitude faster than retraining from scratch while sacrificing little to no predictive power.' 
volume: 139 URL: https://proceedings.mlr.press/v139/brophy21a.html PDF: http://proceedings.mlr.press/v139/brophy21a/brophy21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-brophy21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jonathan family: Brophy - given: Daniel family: Lowd editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1092-1104 id: brophy21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1092 lastpage: 1104 published: 2021-07-01 00:00:00 +0000 - title: 'Value Alignment Verification' abstract: 'As humans interact with autonomous agents to perform increasingly complicated, potentially risky tasks, it is important to be able to efficiently evaluate an agent’s performance and correctness. In this paper we formalize and theoretically analyze the problem of efficient value alignment verification: how to efficiently test whether the behavior of another agent is aligned with a human’s values? The goal is to construct a kind of "driver’s test" that a human can give to any agent which will verify value alignment via a minimal number of queries. We study alignment verification problems with both idealized humans that have an explicit reward function as well as problems where they have implicit values. We analyze verification of exact value alignment for rational agents, propose and test heuristics for value alignment verification in gridworlds and a continuous autonomous driving domain, and prove that there exist sufficient conditions such that we can verify epsilon-alignment in any environment via a constant-query-complexity alignment test.' volume: 139 URL: https://proceedings.mlr.press/v139/brown21a.html PDF: http://proceedings.mlr.press/v139/brown21a/brown21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-brown21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Daniel S family: Brown - given: Jordan family: Schneider - given: Anca family: Dragan - given: Scott family: Niekum editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1105-1115 id: brown21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1105 lastpage: 1115 published: 2021-07-01 00:00:00 +0000 - title: 'Model-Free and Model-Based Policy Evaluation when Causality is Uncertain' abstract: 'When decision-makers can directly intervene, policy evaluation algorithms give valid causal estimates. In off-policy evaluation (OPE), there may exist unobserved variables that both impact the dynamics and are used by the unknown behavior policy. These “confounders” will introduce spurious correlations and naive estimates for a new policy will be biased. We develop worst-case bounds to assess sensitivity to these unobserved confounders in finite horizons when confounders are drawn iid each period. We demonstrate that a model-based approach with robust MDPs gives sharper lower bounds by exploiting domain knowledge about the dynamics. Finally, we show that when unobserved confounders are persistent over time, OPE is far more difficult and existing techniques produce extremely conservative bounds.' 
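To give a concrete feel for worst-case bounds of the kind described in the off-policy evaluation abstract above, here is a hypothetical sketch in a one-step (bandit) setting: each nominal importance weight is allowed to be off by a multiplicative factor in [1/gamma, gamma] because of unobserved confounding, and the extrema of the resulting self-normalised estimate are scanned over threshold assignments. This is a generic marginal-sensitivity-style construction, not the paper's finite-horizon or robust-MDP bounds, and `worst_case_value_bounds` is a name introduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def worst_case_value_bounds(rewards, weights, gamma):
    """Worst-case bounds on a self-normalised importance-sampling estimate when
    each nominal weight may be mis-specified by a factor in [1/gamma, gamma].

    The estimate sum(l*w*r)/sum(l*w) is coordinate-wise monotone in each
    multiplier l_i (sign given by r_i minus the estimate), so its extrema over
    the box are threshold assignments in sorted-reward order, scanned here.
    """
    order = np.argsort(rewards)
    r, w = rewards[order], weights[order]
    n = len(r)
    lows, highs = [], []
    for k in range(n + 1):
        lam_lo = np.r_[np.full(k, gamma), np.full(n - k, 1.0 / gamma)]  # up-weight low rewards
        lam_hi = np.r_[np.full(k, 1.0 / gamma), np.full(n - k, gamma)]  # up-weight high rewards
        lows.append(np.sum(lam_lo * w * r) / np.sum(lam_lo * w))
        highs.append(np.sum(lam_hi * w * r) / np.sum(lam_hi * w))
    return min(lows), max(highs)

# toy logged data: rewards in [0, 1] and nominal importance weights
rewards = rng.uniform(0.0, 1.0, size=500)
weights = rng.lognormal(mean=0.0, sigma=0.5, size=500)
naive = np.sum(weights * rewards) / np.sum(weights)
lo, hi = worst_case_value_bounds(rewards, weights, gamma=1.5)
print(f"naive estimate {naive:.3f}, worst-case interval [{lo:.3f}, {hi:.3f}]")
```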
volume: 139 URL: https://proceedings.mlr.press/v139/bruns-smith21a.html PDF: http://proceedings.mlr.press/v139/bruns-smith21a/bruns-smith21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bruns-smith21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: David A family: Bruns-Smith editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1116-1126 id: bruns-smith21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1116 lastpage: 1126 published: 2021-07-01 00:00:00 +0000 - title: 'Narrow Margins: Classification, Margins and Fat Tails' abstract: 'It is well-known that, for separable data, the regularised two-class logistic regression or support vector machine re-normalised estimate converges to the maximal margin classifier as the regularisation hyper-parameter $\lambda$ goes to 0. The fact that different loss functions may lead to the same solution is of theoretical and practical relevance as margin maximisation allows more straightforward considerations in terms of generalisation and geometric interpretation. We investigate the case where this convergence property is not guaranteed to hold and show that it can be fully characterised by the distribution of error terms in the latent variable interpretation of linear classifiers. In particular, if errors follow a regularly varying distribution, then the regularised and re-normalised estimate does not converge to the maximal margin classifier. This shows that classification with fat tails has a qualitatively different behaviour, which should be taken into account when considering real-life data.' volume: 139 URL: https://proceedings.mlr.press/v139/buet-golfouse21a.html PDF: http://proceedings.mlr.press/v139/buet-golfouse21a/buet-golfouse21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-buet-golfouse21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Francois family: Buet-Golfouse editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1127-1135 id: buet-golfouse21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1127 lastpage: 1135 published: 2021-07-01 00:00:00 +0000 - title: 'Differentially Private Correlation Clustering' abstract: 'Correlation clustering is a widely used technique in unsupervised machine learning. Motivated by applications where individual privacy is a concern, we initiate the study of differentially private correlation clustering. We propose an algorithm that achieves subquadratic additive error compared to the optimal cost. In contrast, straightforward adaptations of existing non-private algorithms all lead to a trivial quadratic error. Finally, we give a lower bound showing that any pure differentially private algorithm for correlation clustering requires additive error $\Omega$(n).' 
volume: 139 URL: https://proceedings.mlr.press/v139/bun21a.html PDF: http://proceedings.mlr.press/v139/bun21a/bun21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-bun21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mark family: Bun - given: Marek family: Elias - given: Janardhan family: Kulkarni editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1136-1146 id: bun21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1136 lastpage: 1146 published: 2021-07-01 00:00:00 +0000 - title: 'Disambiguation of Weak Supervision leading to Exponential Convergence rates' abstract: 'Machine learning approached through supervised learning requires expensive annotation of data. This motivates weakly supervised learning, where data are annotated with incomplete yet discriminative information. In this paper, we focus on partial labelling, an instance of weak supervision where, from a given input, we are given a set of potential targets. We review a disambiguation principle to recover full supervision from weak supervision, and propose an empirical disambiguation algorithm. We prove exponential convergence rates of our algorithm under classical learnability assumptions, and we illustrate the usefulness of our method on practical examples.' volume: 139 URL: https://proceedings.mlr.press/v139/cabannnes21a.html PDF: http://proceedings.mlr.press/v139/cabannnes21a/cabannnes21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cabannnes21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Vivien A family: Cabannnes - given: Francis family: Bach - given: Alessandro family: Rudi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1147-1157 id: cabannnes21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1147 lastpage: 1157 published: 2021-07-01 00:00:00 +0000 - title: 'Finite mixture models do not reliably learn the number of components' abstract: 'Scientists and engineers are often interested in learning the number of subpopulations (or components) present in a data set. A common suggestion is to use a finite mixture model (FMM) with a prior on the number of components. Past work has shown the resulting FMM component-count posterior is consistent; that is, the posterior concentrates on the true, generating number of components. But consistency requires the assumption that the component likelihoods are perfectly specified, which is unrealistic in practice. In this paper, we add rigor to data-analysis folk wisdom by proving that under even the slightest model misspecification, the FMM component-count posterior diverges: the posterior probability of any particular finite number of components converges to 0 in the limit of infinite data. Contrary to intuition, posterior-density consistency is not sufficient to establish this result. We develop novel sufficient conditions that are more realistic and easily checkable than those common in the asymptotics literature. We illustrate practical consequences of our theory on simulated and real data.' 
volume: 139 URL: https://proceedings.mlr.press/v139/cai21a.html PDF: http://proceedings.mlr.press/v139/cai21a/cai21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cai21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Diana family: Cai - given: Trevor family: Campbell - given: Tamara family: Broderick editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1158-1169 id: cai21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1158 lastpage: 1169 published: 2021-07-01 00:00:00 +0000 - title: 'A Theory of Label Propagation for Subpopulation Shift' abstract: 'One of the central problems in machine learning is domain adaptation. Different from past theoretical works, we consider a new model of subpopulation shift in the input or representation space. In this work, we propose a provably effective framework based on label propagation by using an input consistency loss. In our analysis we used a simple but realistic “expansion” assumption, which has been proposed in \citet{wei2021theoretical}. It turns out that based on a teacher classifier on the source domain, the learned classifier can not only propagate to the target domain but also improve upon the teacher. By leveraging existing generalization bounds, we also obtain end-to-end finite-sample guarantees on deep neural networks. In addition, we extend our theoretical framework to a more general setting of source-to-target transfer based on an additional unlabeled dataset, which can be easily applied to various learning scenarios. Inspired by our theory, we adapt consistency-based semi-supervised learning methods to domain adaptation settings and gain significant improvements.' volume: 139 URL: https://proceedings.mlr.press/v139/cai21b.html PDF: http://proceedings.mlr.press/v139/cai21b/cai21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cai21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tianle family: Cai - given: Ruiqi family: Gao - given: Jason family: Lee - given: Qi family: Lei editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1170-1182 id: cai21b issued: date-parts: - 2021 - 7 - 1 firstpage: 1170 lastpage: 1182 published: 2021-07-01 00:00:00 +0000 - title: 'Lenient Regret and Good-Action Identification in Gaussian Process Bandits' abstract: 'In this paper, we study the problem of Gaussian process (GP) bandits under relaxed optimization criteria stating that any function value above a certain threshold is “good enough”. On the theoretical side, we study various {\em lenient regret} notions in which all near-optimal actions incur zero penalty, and provide upper bounds on the lenient regret for GP-UCB and an elimination algorithm, circumventing the usual $O(\sqrt{T})$ term (with time horizon $T$) resulting from zooming extremely close towards the function maximum. In addition, we complement these upper bounds with algorithm-independent lower bounds. On the practical side, we consider the problem of finding a single “good action” according to a known pre-specified threshold, and introduce several good-action identification algorithms that exploit knowledge of the threshold. 
We experimentally find that such algorithms can typically find a good action faster than standard optimization-based approaches.' volume: 139 URL: https://proceedings.mlr.press/v139/cai21c.html PDF: http://proceedings.mlr.press/v139/cai21c/cai21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cai21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xu family: Cai - given: Selwyn family: Gomes - given: Jonathan family: Scarlett editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1183-1192 id: cai21c issued: date-parts: - 2021 - 7 - 1 firstpage: 1183 lastpage: 1192 published: 2021-07-01 00:00:00 +0000 - title: 'A Zeroth-Order Block Coordinate Descent Algorithm for Huge-Scale Black-Box Optimization' abstract: 'We consider the zeroth-order optimization problem in the huge-scale setting, where the dimension of the problem is so large that performing even basic vector operations on the decision variables is infeasible. In this paper, we propose a novel algorithm, coined ZO-BCD, that exhibits favorable overall query complexity and has a much smaller per-iteration computational complexity. In addition, we discuss how the memory footprint of ZO-BCD can be reduced even further by the clever use of circulant measurement matrices. As an application of our new method, we propose the idea of crafting adversarial attacks on neural network based classifiers in a wavelet domain, which can result in problem dimensions of over one million. In particular, we show that crafting adversarial examples to audio classifiers in a wavelet domain can achieve the state-of-the-art attack success rate of 97.9% with significantly less distortion.' volume: 139 URL: https://proceedings.mlr.press/v139/cai21d.html PDF: http://proceedings.mlr.press/v139/cai21d/cai21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cai21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hanqin family: Cai - given: Yuchen family: Lou - given: Daniel family: Mckenzie - given: Wotao family: Yin editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1193-1203 id: cai21d issued: date-parts: - 2021 - 7 - 1 firstpage: 1193 lastpage: 1203 published: 2021-07-01 00:00:00 +0000 - title: 'GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training' abstract: 'Normalization is known to help the optimization of deep neural networks. Curiously, different architectures require specialized normalization methods. In this paper, we study what normalization is effective for Graph Neural Networks (GNNs). First, we adapt and evaluate the existing methods from other domains to GNNs. Faster convergence is achieved with InstanceNorm compared to BatchNorm and LayerNorm. We provide an explanation by showing that InstanceNorm serves as a preconditioner for GNNs, but such preconditioning effect is weaker with BatchNorm due to the heavy batch noise in graph datasets. Second, we show that the shift operation in InstanceNorm results in an expressiveness degradation of GNNs for highly regular graphs. We address this issue by proposing GraphNorm with a learnable shift. Empirically, GNNs with GraphNorm converge faster compared to GNNs using other normalization. 
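(The GraphNorm entry continues below.) As a rough, illustrative reading of normalisation with a learnable shift of the kind described here, the following sketch normalises the node features of a single graph; `graphnorm_like` and its fixed `gamma`, `beta`, and `alpha` values are simplifications introduced for illustration, not the authors' implementation.

```python
import numpy as np

def graphnorm_like(h, gamma, beta, alpha, eps=1e-5):
    """Normalise node features of one graph with a learnable mean shift.

    h: (num_nodes, num_features) activations of a single graph.
    gamma, beta, alpha: (num_features,) parameters that would be learned;
    alpha controls how much of the per-graph mean is subtracted
    (alpha = 1 behaves like InstanceNorm over the graph's nodes).
    """
    mu = h.mean(axis=0, keepdims=True)                 # per-graph, per-feature mean
    shifted = h - alpha * mu                           # partially remove the mean
    sigma = np.sqrt((shifted ** 2).mean(axis=0, keepdims=True) + eps)
    return gamma * shifted / sigma + beta

rng = np.random.default_rng(2)
h = rng.normal(size=(7, 4))                            # one graph: 7 nodes, 4 features
out = graphnorm_like(h, gamma=np.ones(4), beta=np.zeros(4), alpha=np.full(4, 0.8))
print(out.shape)
```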
GraphNorm also improves the generalization of GNNs, achieving better performance on graph classification benchmarks.' volume: 139 URL: https://proceedings.mlr.press/v139/cai21e.html PDF: http://proceedings.mlr.press/v139/cai21e/cai21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cai21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tianle family: Cai - given: Shengjie family: Luo - given: Keyulu family: Xu - given: Di family: He - given: Tie-Yan family: Liu - given: Liwei family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1204-1215 id: cai21e issued: date-parts: - 2021 - 7 - 1 firstpage: 1204 lastpage: 1215 published: 2021-07-01 00:00:00 +0000 - title: 'On Lower Bounds for Standard and Robust Gaussian Process Bandit Optimization' abstract: 'In this paper, we consider algorithm independent lower bounds for the problem of black-box optimization of functions having a bounded norm in some Reproducing Kernel Hilbert Space (RKHS), which can be viewed as a non-Bayesian Gaussian process bandit problem. In the standard noisy setting, we provide a novel proof technique for deriving lower bounds on the regret, with benefits including simplicity, versatility, and an improved dependence on the error probability. In a robust setting in which the final point is perturbed by an adversary, we strengthen an existing lower bound that only holds for target success probabilities very close to one, by allowing for arbitrary target success probabilities in (0, 1). Furthermore, in a distinct robust setting in which every sampled point may be perturbed by a constrained adversary, we provide a novel lower bound for deterministic strategies, demonstrating an inevitable joint dependence of the cumulative regret on the corruption level and the time horizon, in contrast with existing lower bounds that only characterize the individual dependencies.' volume: 139 URL: https://proceedings.mlr.press/v139/cai21f.html PDF: http://proceedings.mlr.press/v139/cai21f/cai21f.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cai21f.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xu family: Cai - given: Jonathan family: Scarlett editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1216-1226 id: cai21f issued: date-parts: - 2021 - 7 - 1 firstpage: 1216 lastpage: 1226 published: 2021-07-01 00:00:00 +0000 - title: 'High-dimensional Experimental Design and Kernel Bandits' abstract: 'In recent years methods from optimal linear experimental design have been leveraged to obtain state of the art results for linear bandits. A design returned from an objective such as G-optimal design is actually a probability distribution over a pool of potential measurement vectors. Consequently, one nuisance of the approach is the task of converting this continuous probability distribution into a discrete assignment of N measurements. While sophisticated rounding techniques have been proposed, in d dimensions they require N to be at least d, d log(log(d)), or d^2 based on the sub-optimality of the solution. In this paper we are interested in settings where N may be much less than d, such as in experimental design in an RKHS where d may be effectively infinite. 
In this work, we propose a rounding procedure that frees N of any dependence on the dimension d, while achieving nearly the same performance guarantees of existing rounding procedures. We evaluate the procedure against a baseline that projects the problem to a lower dimensional space and performs rounding there, which requires N to just be at least a notion of the effective dimension. We also leverage our new approach in a new algorithm for kernelized bandits to obtain state of the art results for regret minimization and pure exploration. An advantage of our approach over existing UCB-like approaches is that our kernel bandit algorithms are provably robust to model misspecification.' volume: 139 URL: https://proceedings.mlr.press/v139/camilleri21a.html PDF: http://proceedings.mlr.press/v139/camilleri21a/camilleri21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-camilleri21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Romain family: Camilleri - given: Kevin family: Jamieson - given: Julian family: Katz-Samuels editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1227-1237 id: camilleri21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1227 lastpage: 1237 published: 2021-07-01 00:00:00 +0000 - title: 'A Gradient Based Strategy for Hamiltonian Monte Carlo Hyperparameter Optimization' abstract: 'Hamiltonian Monte Carlo (HMC) is one of the most successful sampling methods in machine learning. However, its performance is significantly affected by the choice of hyperparameter values. Existing approaches for optimizing the HMC hyperparameters either optimize a proxy for mixing speed or consider the HMC chain as an implicit variational distribution and optimize a tractable lower bound that can be very loose in practice. Instead, we propose to optimize an objective that quantifies directly the speed of convergence to the target distribution. Our objective can be easily optimized using stochastic gradient descent. We evaluate our proposed method and compare to baselines on a variety of problems including sampling from synthetic 2D distributions, reconstructing sparse signals, learning deep latent variable models and sampling molecular configurations from the Boltzmann distribution of a 22 atom molecule. We find that our method is competitive with or improves upon alternative baselines in all these experiments.' 
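The gradient-based tuning idea in the HMC abstract above can be illustrated on a toy problem. The sketch below adapts the leapfrog step size of HMC on a one-dimensional standard-normal target by stochastic gradient ascent, using a common-random-numbers finite-difference gradient of expected squared jump distance, a generic mixing-speed proxy; this is not the paper's convergence-speed objective or implementation, and `leapfrog` and `expected_sq_jump` are names introduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def grad_logp(x):
    """Score of a standard-normal target, logp(x) = -x**2 / 2."""
    return -x

def leapfrog(x, p, eps, n_steps=10):
    """Leapfrog integration of the Hamiltonian dynamics (unit mass)."""
    p = p + 0.5 * eps * grad_logp(x)
    for _ in range(n_steps - 1):
        x = x + eps * p
        p = p + eps * grad_logp(x)
    x = x + eps * p
    p = p + 0.5 * eps * grad_logp(x)
    return x, p

def expected_sq_jump(eps, x0, p0):
    """Acceptance-weighted squared jump distance of one HMC proposal."""
    x1, p1 = leapfrog(x0, p0, eps)
    log_accept = (0.5 * x0**2 + 0.5 * p0**2) - (0.5 * x1**2 + 0.5 * p1**2)
    accept = np.minimum(1.0, np.exp(log_accept))
    return np.mean(accept * (x1 - x0) ** 2)

# Stochastic gradient ascent on the step size, reusing the same initial states
# for both finite-difference evaluations to keep the gradient estimate stable.
eps, lr, h = 0.05, 0.004, 1e-3
for _ in range(300):
    x0 = rng.normal(size=2000)
    p0 = rng.normal(size=2000)
    g = (expected_sq_jump(eps + h, x0, p0) - expected_sq_jump(eps - h, x0, p0)) / (2 * h)
    eps = float(np.clip(eps + lr * g, 1e-3, 1.0))
print("tuned leapfrog step size:", round(eps, 3))
```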
volume: 139 URL: https://proceedings.mlr.press/v139/campbell21a.html PDF: http://proceedings.mlr.press/v139/campbell21a/campbell21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-campbell21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Andrew family: Campbell - given: Wenlong family: Chen - given: Vincent family: Stimper - given: Jose Miguel family: Hernandez-Lobato - given: Yichuan family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1238-1248 id: campbell21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1238 lastpage: 1248 published: 2021-07-01 00:00:00 +0000 - title: 'Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections' abstract: 'Gaussian noise injections (GNIs) are a family of simple and widely-used regularisation methods for training neural networks, where one injects additive or multiplicative Gaussian noise to the network activations at every iteration of the optimisation algorithm, which is typically chosen as stochastic gradient descent (SGD). In this paper, we focus on the so-called ‘implicit effect’ of GNIs, which is the effect of the injected noise on the dynamics of SGD. We show that this effect induces an \emph{asymmetric heavy-tailed noise} on SGD gradient updates. In order to model this modified dynamics, we first develop a Langevin-like stochastic differential equation that is driven by a general family of \emph{asymmetric} heavy-tailed noise. Using this model we then formally prove that GNIs induce an ‘implicit bias’, which varies depending on the heaviness of the tails and the level of asymmetry. Our empirical results confirm that different types of neural networks trained with GNIs are well-modelled by the proposed dynamics and that the implicit effect of these injections induces a bias that degrades the performance of networks.' volume: 139 URL: https://proceedings.mlr.press/v139/camuto21a.html PDF: http://proceedings.mlr.press/v139/camuto21a/camuto21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-camuto21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Alexander family: Camuto - given: Xiaoyu family: Wang - given: Lingjiong family: Zhu - given: Chris family: Holmes - given: Mert family: Gurbuzbalaban - given: Umut family: Simsekli editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1249-1260 id: camuto21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1249 lastpage: 1260 published: 2021-07-01 00:00:00 +0000 - title: 'Fold2Seq: A Joint Sequence(1D)-Fold(3D) Embedding-based Generative Model for Protein Design' abstract: 'Designing novel protein sequences for a desired 3D topological fold is a fundamental yet non-trivial task in protein engineering. Challenges exist due to the complex sequence–fold relationship, as well as the difficulties to capture the diversity of the sequences (therefore structures and functions) within a fold. To overcome these challenges, we propose Fold2Seq, a novel transformer-based generative framework for designing protein sequences conditioned on a specific target fold. 
To model the complex sequence–structure relationship, Fold2Seq jointly learns a sequence embedding using a transformer and a fold embedding from the density of secondary structural elements in 3D voxels. On test sets with single, high-resolution and complete structure inputs for individual folds, our experiments demonstrate improved or comparable performance of Fold2Seq in terms of speed, coverage, and reliability for sequence design, when compared to existing state-of-the-art methods that include data-driven deep generative models and physics-based RosettaDesign. The unique advantages of fold-based Fold2Seq, in comparison to a structure-based deep model and RosettaDesign, become more evident on three additional real-world challenges originating from low-quality, incomplete, or ambiguous input structures. Source code and data are available at https://github.com/IBM/fold2seq.' volume: 139 URL: https://proceedings.mlr.press/v139/cao21a.html PDF: http://proceedings.mlr.press/v139/cao21a/cao21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cao21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yue family: Cao - given: Payel family: Das - given: Vijil family: Chenthamarakshan - given: Pin-Yu family: Chen - given: Igor family: Melnyk - given: Yang family: Shen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1261-1271 id: cao21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1261 lastpage: 1271 published: 2021-07-01 00:00:00 +0000 - title: 'Learning from Similarity-Confidence Data' abstract: 'Weakly supervised learning has drawn considerable attention recently to reduce the expensive time and labor consumption of labeling massive data. In this paper, we investigate a novel weakly supervised learning problem of learning from similarity-confidence (Sconf) data, where only unlabeled data pairs equipped with confidence that illustrates their degree of similarity (two examples are similar if they belong to the same class) are needed for training a discriminative binary classifier. We propose an unbiased estimator of the classification risk that can be calculated from only Sconf data and show that the estimation error bound achieves the optimal convergence rate. To alleviate potential overfitting when flexible models are used, we further employ a risk correction scheme on the proposed risk estimator. Experimental results demonstrate the effectiveness of the proposed methods.' 
volume: 139 URL: https://proceedings.mlr.press/v139/cao21b.html PDF: http://proceedings.mlr.press/v139/cao21b/cao21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cao21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yuzhou family: Cao - given: Lei family: Feng - given: Yitian family: Xu - given: Bo family: An - given: Gang family: Niu - given: Masashi family: Sugiyama editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1272-1282 id: cao21b issued: date-parts: - 2021 - 7 - 1 firstpage: 1272 lastpage: 1282 published: 2021-07-01 00:00:00 +0000 - title: 'Parameter-free Locally Accelerated Conditional Gradients' abstract: 'Projection-free conditional gradient (CG) methods are the algorithms of choice for constrained optimization setups in which projections are often computationally prohibitive but linear optimization over the constraint set remains computationally feasible. Unlike in projection-based methods, globally accelerated convergence rates are in general unattainable for CG. However, a very recent work on Locally accelerated CG (LaCG) has demonstrated that local acceleration for CG is possible for many settings of interest. The main downside of LaCG is that it requires knowledge of the smoothness and strong convexity parameters of the objective function. We remove this limitation by introducing a novel, Parameter-Free Locally accelerated CG (PF-LaCG) algorithm, for which we provide rigorous convergence guarantees. Our theoretical results are complemented by numerical experiments, which demonstrate local acceleration and showcase the practical improvements of PF-LaCG over non-accelerated algorithms, both in terms of iteration count and wall-clock time.' volume: 139 URL: https://proceedings.mlr.press/v139/carderera21a.html PDF: http://proceedings.mlr.press/v139/carderera21a/carderera21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-carderera21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Alejandro family: Carderera - given: Jelena family: Diakonikolas - given: Cheuk Yin family: Lin - given: Sebastian family: Pokutta editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1283-1293 id: carderera21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1283 lastpage: 1293 published: 2021-07-01 00:00:00 +0000 - title: 'Optimizing persistent homology based functions' abstract: 'Solving optimization tasks based on functions and losses with a topological flavor is a very active and growing field of research in data science and Topological Data Analysis, with applications in non-convex optimization, statistics and machine learning. However, the approaches proposed in the literature are usually anchored to a specific application and/or topological construction, and do not come with theoretical guarantees. To address this issue, we study the differentiability of a general map associated with the most common topological construction, that is, the persistence map. Building on real analytic geometry arguments, we propose a general framework that allows us to define and compute gradients for persistence-based functions in a very simple way. 
We also provide a simple, explicit and sufficient condition for convergence of stochastic subgradient methods for such functions. This result encompasses all the constructions and applications of topological optimization in the literature. Finally, we provide associated code that is easy to handle and to mix with other non-topological methods and constraints, as well as some experiments showcasing the versatility of our approach.' volume: 139 URL: https://proceedings.mlr.press/v139/carriere21a.html PDF: http://proceedings.mlr.press/v139/carriere21a/carriere21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-carriere21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mathieu family: Carriere - given: Frederic family: Chazal - given: Marc family: Glisse - given: Yuichi family: Ike - given: Hariprasad family: Kannan - given: Yuhei family: Umeda editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1294-1303 id: carriere21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1294 lastpage: 1303 published: 2021-07-01 00:00:00 +0000 - title: 'Online Policy Gradient for Model Free Learning of Linear Quadratic Regulators with $\sqrt{T}$ Regret' abstract: 'We consider the task of learning to control a linear dynamical system under fixed quadratic costs, known as the Linear Quadratic Regulator (LQR) problem. While model-free approaches are often favorable in practice, thus far only model-based methods, which rely on costly system identification, have been shown to achieve regret that scales with the optimal dependence on the time horizon T. We present the first model-free algorithm that achieves similar regret guarantees. Our method relies on an efficient policy gradient scheme, and a novel and tighter analysis of the cost of exploration in policy space in this setting.' volume: 139 URL: https://proceedings.mlr.press/v139/cassel21a.html PDF: http://proceedings.mlr.press/v139/cassel21a/cassel21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cassel21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Asaf B family: Cassel - given: Tomer family: Koren editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1304-1313 id: cassel21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1304 lastpage: 1313 published: 2021-07-01 00:00:00 +0000 - title: 'Multi-Receiver Online Bayesian Persuasion' abstract: 'Bayesian persuasion studies how an informed sender should partially disclose information to influence the behavior of a self-interested receiver. Classical models make the stringent assumption that the sender knows the receiver’s utility. This can be relaxed by considering an online learning framework in which the sender repeatedly faces a receiver of an unknown, adversarially selected type. We study, for the first time, an online Bayesian persuasion setting with multiple receivers. We focus on the case with no externalities and binary actions, as customary in offline models. Our goal is to design no-regret algorithms for the sender with polynomial per-iteration running time. First, we prove a negative result: for any 0 < $\alpha$ $\leq$ 1, there is no polynomial-time no-$\alpha$-regret algorithm when the sender’s utility function is supermodular or anonymous. 
Then, we focus on the setting of submodular sender’s utility functions and we show that, in this case, it is possible to design a polynomial-time no-(1-1/e)-regret algorithm. To do so, we introduce a general online gradient descent framework to handle online learning problems with a finite number of possible loss functions. This requires the existence of an approximate projection oracle. We show that, in our setting, there exists one such projection oracle which can be implemented in polynomial time.' volume: 139 URL: https://proceedings.mlr.press/v139/castiglioni21a.html PDF: http://proceedings.mlr.press/v139/castiglioni21a/castiglioni21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-castiglioni21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Matteo family: Castiglioni - given: Alberto family: Marchesi - given: Andrea family: Celli - given: Nicola family: Gatti editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1314-1323 id: castiglioni21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1314 lastpage: 1323 published: 2021-07-01 00:00:00 +0000 - title: 'Marginal Contribution Feature Importance - an Axiomatic Approach for Explaining Data' abstract: 'In recent years, methods were proposed for assigning feature importance scores to measure the contribution of individual features. While in some cases the goal is to understand a specific model, in many cases the goal is to understand the contribution of certain properties (features) to a real-world phenomenon. Thus, a distinction has been made between feature importance scores that explain a model and scores that explain the data. When explaining the data, machine learning models are used as proxies in settings where conducting many real-world experiments is expensive or prohibited. While existing feature importance scores show great success in explaining models, we demonstrate their limitations when explaining the data, especially in the presence of correlations between features. Therefore, we develop a set of axioms to capture properties expected from a feature importance score when explaining data and prove that there exists only one score that satisfies all of them, the Marginal Contribution Feature Importance (MCI). We analyze the theoretical properties of this score function and demonstrate its merits empirically.' volume: 139 URL: https://proceedings.mlr.press/v139/catav21a.html PDF: http://proceedings.mlr.press/v139/catav21a/catav21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-catav21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Amnon family: Catav - given: Boyang family: Fu - given: Yazeed family: Zoabi - given: Ahuva Libi Weiss family: Meilik - given: Noam family: Shomron - given: Jason family: Ernst - given: Sriram family: Sankararaman - given: Ran family: Gilad-Bachrach editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1324-1335 id: catav21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1324 lastpage: 1335 published: 2021-07-01 00:00:00 +0000 - title: 'Disentangling syntax and semantics in the brain with deep networks' abstract: 'The activations of language transformers like GPT-2 have been shown to linearly map onto brain activity during speech comprehension. 
However, the nature of these activations remains largely unknown and presumably conflate distinct linguistic classes. Here, we propose a taxonomy to factorize the high-dimensional activations of language models into four combinatorial classes: lexical, compositional, syntactic, and semantic representations. We then introduce a statistical method to decompose, through the lens of GPT-2’s activations, the brain activity of 345 subjects recorded with functional magnetic resonance imaging (fMRI) during the listening of  4.6 hours of narrated text. The results highlight two findings. First, compositional representations recruit a more widespread cortical network than lexical ones, and encompass the bilateral temporal, parietal and prefrontal cortices. Second, contrary to previous claims, syntax and semantics are not associated with separated modules, but, instead, appear to share a common and distributed neural substrate. Overall, this study introduces a versatile framework to isolate, in the brain activity, the distributed representations of linguistic constructs.' volume: 139 URL: https://proceedings.mlr.press/v139/caucheteux21a.html PDF: http://proceedings.mlr.press/v139/caucheteux21a/caucheteux21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-caucheteux21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Charlotte family: Caucheteux - given: Alexandre family: Gramfort - given: Jean-Remi family: King editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1336-1348 id: caucheteux21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1336 lastpage: 1348 published: 2021-07-01 00:00:00 +0000 - title: 'Fair Classification with Noisy Protected Attributes: A Framework with Provable Guarantees' abstract: 'We present an optimization framework for learning a fair classifier in the presence of noisy perturbations in the protected attributes. Compared to prior work, our framework can be employed with a very general class of linear and linear-fractional fairness constraints, can handle multiple, non-binary protected attributes, and outputs a classifier that comes with provable guarantees on both accuracy and fairness. Empirically, we show that our framework can be used to attain either statistical rate or false positive rate fairness guarantees with a minimal loss in accuracy, even when the noise is large, in two real-world datasets.' volume: 139 URL: https://proceedings.mlr.press/v139/celis21a.html PDF: http://proceedings.mlr.press/v139/celis21a/celis21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-celis21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: L. Elisa family: Celis - given: Lingxiao family: Huang - given: Vijay family: Keswani - given: Nisheeth K. family: Vishnoi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1349-1361 id: celis21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1349 lastpage: 1361 published: 2021-07-01 00:00:00 +0000 - title: 'Best Model Identification: A Rested Bandit Formulation' abstract: 'We introduce and analyze a best arm identification problem in the rested bandit setting, wherein arms are themselves learning algorithms whose expected losses decrease with the number of times the arm has been played. 
The shape of the expected loss functions is similar across arms, and is assumed to be available up to unknown parameters that have to be learned on the fly. We define a novel notion of regret for this problem, where we compare to the policy that always plays the arm having the smallest expected loss at the end of the game. We analyze an arm elimination algorithm whose regret vanishes as the time horizon increases. The actual rate of convergence depends in a detailed way on the postulated functional form of the expected losses. We complement our analysis with lower bounds, indicating strengths and limitations of the proposed solution.' volume: 139 URL: https://proceedings.mlr.press/v139/cella21a.html PDF: http://proceedings.mlr.press/v139/cella21a/cella21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cella21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Leonardo family: Cella - given: Massimiliano family: Pontil - given: Claudio family: Gentile editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1362-1372 id: cella21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1362 lastpage: 1372 published: 2021-07-01 00:00:00 +0000 - title: 'Revisiting Rainbow: Promoting more insightful and inclusive deep reinforcement learning research' abstract: 'Since the introduction of DQN, a vast majority of reinforcement learning research has focused on reinforcement learning with deep neural networks as function approximators. New methods are typically evaluated on a set of environments that have now become standard, such as Atari 2600 games. While these benchmarks help standardize evaluation, their computational cost has the unfortunate side effect of widening the gap between those with ample access to computational resources, and those without. In this work we argue that, despite the community’s emphasis on large-scale environments, the traditional small-scale environments can still yield valuable scientific insights and can help reduce the barriers to entry for underprivileged communities. To substantiate our claims, we empirically revisit the paper which introduced the Rainbow algorithm [Hessel et al., 2018] and present some new insights into the algorithms used by Rainbow.' volume: 139 URL: https://proceedings.mlr.press/v139/ceron21a.html PDF: http://proceedings.mlr.press/v139/ceron21a/ceron21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ceron21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Johan Samir Obando family: Ceron - given: Pablo Samuel family: Castro editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1373-1383 id: ceron21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1373 lastpage: 1383 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Routines for Effective Off-Policy Reinforcement Learning' abstract: 'The performance of reinforcement learning depends upon designing an appropriate action space, where the effect of each action is measurable, yet, granular enough to permit flexible behavior. So far, this process involved non-trivial user choices in terms of the available actions and their execution frequency. We propose a novel framework for reinforcement learning that effectively lifts such constraints. 
Within our framework, agents learn effective behavior over a routine space: a new, higher-level action space, where each routine represents a set of ’equivalent’ sequences of granular actions with arbitrary length. Our routine space is learned end-to-end to facilitate the accomplishment of underlying off-policy reinforcement learning objectives. We apply our framework to two state-of-the-art off-policy algorithms and show that the resulting agents obtain relevant performance improvements while requiring fewer interactions with the environment per episode, improving computational efficiency.' volume: 139 URL: https://proceedings.mlr.press/v139/cetin21a.html PDF: http://proceedings.mlr.press/v139/cetin21a/cetin21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cetin21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Edoardo family: Cetin - given: Oya family: Celiktutan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1384-1394 id: cetin21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1384 lastpage: 1394 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Node Representations Using Stationary Flow Prediction on Large Payment and Cash Transaction Networks' abstract: 'Banks are required to analyse large transaction datasets as a part of the fight against financial crime. Today, this analysis is either performed manually by domain experts or using expensive feature engineering. Gradient flow analysis allows for basic representation learning as node potentials can be inferred directly from network transaction data. However, the gradient model has a fundamental limitation: it cannot represent all types of network flows. Furthermore, standard methods for learning the gradient flow are not appropriate for flow signals that span multiple orders of magnitude and contain outliers, i.e. transaction data. In this work, the gradient model is extended to a gated version and we prove that it, unlike the gradient model, is a universal approximator for flows on graphs. To tackle the mentioned challenges of transaction data, we propose a multi-scale and outlier robust loss function based on the Student-t log-likelihood. Ethereum transaction data is used for evaluation and the gradient models outperform MLP models using hand-engineered and node2vec features in terms of relative error. These results extend to 60 synthetic datasets, with experiments also showing that the gated gradient model learns qualitative information about the underlying synthetic generative flow distributions.' volume: 139 URL: https://proceedings.mlr.press/v139/ceylan21a.html PDF: http://proceedings.mlr.press/v139/ceylan21a/ceylan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ceylan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ciwan family: Ceylan - given: Salla family: Franzén - given: Florian T. 
family: Pokorny editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1395-1406 id: ceylan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1395 lastpage: 1406 published: 2021-07-01 00:00:00 +0000 - title: 'GRAND: Graph Neural Diffusion' abstract: 'We present Graph Neural Diffusion (GRAND) that approaches deep learning on graphs as a continuous diffusion process and treats Graph Neural Networks (GNNs) as discretisations of an underlying PDE. In our model, the layer structure and topology correspond to the discretisation choices of temporal and spatial operators. Our approach allows a principled development of a broad new class of GNNs that are able to address the common plights of graph learning models such as depth, oversmoothing, and bottlenecks. Key to the success of our models is stability with respect to perturbations in the data, and this is addressed for both implicit and explicit discretisation schemes. We develop linear and nonlinear versions of GRAND, which achieve competitive results on many standard graph benchmarks.' volume: 139 URL: https://proceedings.mlr.press/v139/chamberlain21a.html PDF: http://proceedings.mlr.press/v139/chamberlain21a/chamberlain21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chamberlain21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ben family: Chamberlain - given: James family: Rowbottom - given: Maria I family: Gorinova - given: Michael family: Bronstein - given: Stefan family: Webb - given: Emanuele family: Rossi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1407-1418 id: chamberlain21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1407 lastpage: 1418 published: 2021-07-01 00:00:00 +0000 - title: 'HoroPCA: Hyperbolic Dimensionality Reduction via Horospherical Projections' abstract: 'This paper studies Principal Component Analysis (PCA) for data lying in hyperbolic spaces. Given directions, PCA relies on: (1) a parameterization of subspaces spanned by these directions, (2) a method of projection onto subspaces that preserves information in these directions, and (3) an objective to optimize, namely the variance explained by projections. We generalize each of these concepts to the hyperbolic space and propose HoroPCA, a method for hyperbolic dimensionality reduction. By focusing on the core problem of extracting principal directions, HoroPCA theoretically better preserves information in the original data such as distances, compared to previous generalizations of PCA. Empirically, we validate that HoroPCA outperforms existing dimensionality reduction methods, significantly reducing error in distance preservation. As a data whitening method, it improves downstream classification by up to 3.9% compared to methods that don’t use whitening. Finally, we show that HoroPCA can be used to visualize hyperbolic data in two dimensions.' 
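Referring back to the GRAND entry above: its view of a GNN layer as one step of a diffusion process can be sketched as an explicit-Euler update. The sketch below uses a fixed row-normalised coupling matrix as a stand-in for the learned attention; it is an illustration under that simplifying assumption, not the authors' model.

```python
import numpy as np

def diffusion_step(x, A, tau):
    """One explicit-Euler step of graph diffusion dX/dt = (A - I) X,
    with A a row-stochastic coupling matrix (a stand-in for learned attention)."""
    return x + tau * (A @ x - x)

# tiny 4-node path graph with self-loops, row-normalised
adj = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [0, 0, 1, 1]], dtype=float)
A = adj / adj.sum(axis=1, keepdims=True)

x = np.array([[1.0], [0.0], [0.0], [0.0]])   # one feature per node
for _ in range(5):
    x = diffusion_step(x, A, tau=0.5)
print(x.ravel())                              # the initial mass spreads along the path
```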
volume: 139 URL: https://proceedings.mlr.press/v139/chami21a.html PDF: http://proceedings.mlr.press/v139/chami21a/chami21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chami21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ines family: Chami - given: Albert family: Gu - given: Dat P family: Nguyen - given: Christopher family: Re editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1419-1429 id: chami21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1419 lastpage: 1429 published: 2021-07-01 00:00:00 +0000 - title: 'Goal-Conditioned Reinforcement Learning with Imagined Subgoals' abstract: 'Goal-conditioned reinforcement learning endows an agent with a large variety of skills, but it often struggles to solve tasks that require more temporally extended reasoning. In this work, we propose to incorporate imagined subgoals into policy learning to facilitate learning of complex tasks. Imagined subgoals are predicted by a separate high-level policy, which is trained simultaneously with the policy and its critic. This high-level policy predicts intermediate states halfway to the goal using the value function as a reachability metric. We don’t require the policy to reach these subgoals explicitly. Instead, we use them to define a prior policy, and incorporate this prior into a KL-constrained policy iteration scheme to speed up and regularize learning. Imagined subgoals are used during policy learning, but not during test time, where we only apply the learned policy. We evaluate our approach on complex robotic navigation and manipulation tasks and show that it outperforms existing methods by a large margin.' volume: 139 URL: https://proceedings.mlr.press/v139/chane-sane21a.html PDF: http://proceedings.mlr.press/v139/chane-sane21a/chane-sane21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chane-sane21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Elliot family: Chane-Sane - given: Cordelia family: Schmid - given: Ivan family: Laptev editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1430-1440 id: chane-sane21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1430 lastpage: 1440 published: 2021-07-01 00:00:00 +0000 - title: 'Locally Private k-Means in One Round' abstract: 'We provide an approximation algorithm for k-means clustering in the \emph{one-round} (aka \emph{non-interactive}) local model of differential privacy (DP). Our algorithm achieves an approximation ratio arbitrarily close to the best \emph{non private} approximation algorithm, improving upon previously known algorithms that only guarantee large (constant) approximation ratios. Furthermore, ours is the first constant-factor approximation algorithm for k-means that requires only \emph{one} round of communication in the local DP model, positively resolving an open question of Stemmer (SODA 2020). Our algorithmic framework is quite flexible; we demonstrate this by showing that it also yields a similar near-optimal approximation algorithm in the (one-round) shuffle DP model.' 
volume: 139 URL: https://proceedings.mlr.press/v139/chang21a.html PDF: http://proceedings.mlr.press/v139/chang21a/chang21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chang21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Alisa family: Chang - given: Badih family: Ghazi - given: Ravi family: Kumar - given: Pasin family: Manurangsi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1441-1451 id: chang21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1441 lastpage: 1451 published: 2021-07-01 00:00:00 +0000 - title: 'Modularity in Reinforcement Learning via Algorithmic Independence in Credit Assignment' abstract: 'Many transfer problems require re-using previously optimal decisions for solving new tasks, which suggests the need for learning algorithms that can modify the mechanisms for choosing certain actions independently of those for choosing others. However, there is currently no formalism nor theory for how to achieve this kind of modular credit assignment. To answer this question, we define modular credit assignment as a constraint on minimizing the algorithmic mutual information among feedback signals for different decisions. We introduce what we call the modularity criterion for testing whether a learning algorithm satisfies this constraint by performing causal analysis on the algorithm itself. We generalize the recently proposed societal decision-making framework as a more granular formalism than the Markov decision process to prove that for decision sequences that do not contain cycles, certain single-step temporal difference action-value methods meet this criterion while all policy-gradient methods do not. Empirical evidence suggests that such action-value methods are more sample efficient than policy-gradient methods on transfer problems that require only sparse changes to a sequence of previously optimal decisions.' volume: 139 URL: https://proceedings.mlr.press/v139/chang21b.html PDF: http://proceedings.mlr.press/v139/chang21b/chang21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chang21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Michael family: Chang - given: Sid family: Kaushik - given: Sergey family: Levine - given: Tom family: Griffiths editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1452-1462 id: chang21b issued: date-parts: - 2021 - 7 - 1 firstpage: 1452 lastpage: 1462 published: 2021-07-01 00:00:00 +0000 - title: 'Image-Level or Object-Level? A Tale of Two Resampling Strategies for Long-Tailed Detection' abstract: 'Training on datasets with long-tailed distributions has been challenging for major recognition tasks such as classification and detection. To deal with this challenge, image resampling is typically introduced as a simple but effective approach. However, we observe that long-tailed detection differs from classification since multiple classes may be present in one image. As a result, image resampling alone is not enough to yield a sufficiently balanced distribution at the object-level. We address object-level resampling by introducing an object-centric sampling strategy based on a dynamic, episodic memory bank. 
Our proposed strategy has two benefits: 1) convenient object-level resampling without significant extra computation, and 2) implicit feature-level augmentation from model updates. We show that image-level and object-level resamplings are both important, and thus unify them with a joint resampling strategy. Our method achieves state-of-the-art performance on the rare categories of LVIS, with 1.89% and 3.13% relative improvements over Forest R-CNN on detection and instance segmentation.' volume: 139 URL: https://proceedings.mlr.press/v139/chang21c.html PDF: http://proceedings.mlr.press/v139/chang21c/chang21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chang21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Nadine family: Chang - given: Zhiding family: Yu - given: Yu-Xiong family: Wang - given: Animashree family: Anandkumar - given: Sanja family: Fidler - given: Jose M family: Alvarez editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1463-1472 id: chang21c issued: date-parts: - 2021 - 7 - 1 firstpage: 1463 lastpage: 1472 published: 2021-07-01 00:00:00 +0000 - title: 'DeepWalking Backwards: From Embeddings Back to Graphs' abstract: 'Low-dimensional node embeddings play a key role in analyzing graph datasets. However, little work studies exactly what information is encoded by popular embedding methods, and how this information correlates with performance in downstream learning tasks. We tackle this question by studying whether embeddings can be inverted to (approximately) recover the graph used to generate them. Focusing on a variant of the popular DeepWalk method \cite{PerozziAl-RfouSkiena:2014, QiuDongMa:2018}, we present algorithms for accurate embedding inversion – i.e., from the low-dimensional embedding of a graph $G$, we can find a graph $\tilde G$ with a very similar embedding. We perform numerous experiments on real-world networks, observing that significant information about $G$, such as specific edges and bulk properties like triangle density, is often lost in $\tilde G$. However, community structure is often preserved or even enhanced. Our findings are a step towards a more rigorous understanding of exactly what information embeddings encode about the input graph, and why this information is useful for learning tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/chanpuriya21a.html PDF: http://proceedings.mlr.press/v139/chanpuriya21a/chanpuriya21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chanpuriya21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sudhanshu family: Chanpuriya - given: Cameron family: Musco - given: Konstantinos family: Sotiropoulos - given: Charalampos family: Tsourakakis editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1473-1483 id: chanpuriya21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1473 lastpage: 1483 published: 2021-07-01 00:00:00 +0000 - title: 'Differentiable Spatial Planning using Transformers' abstract: 'We consider the problem of spatial path planning. 
In contrast to the classical solutions which optimize a new plan from scratch and assume access to the full map with ground truth obstacle locations, we learn a planner from the data in a differentiable manner that allows us to leverage statistical regularities from past data. We propose Spatial Planning Transformers (SPT), which given an obstacle map learns to generate actions by planning over long-range spatial dependencies, unlike prior data-driven planners that propagate information locally via convolutional structure in an iterative manner. In the setting where the ground truth map is not known to the agent, we leverage pre-trained SPTs in an end-to-end framework that has the structure of mapper and planner built into it which allows seamless generalization to out-of-distribution maps and goals. SPTs outperform prior state-of-the-art differentiable planners across all the setups for both manipulation and navigation tasks, leading to an absolute improvement of 7-19%.' volume: 139 URL: https://proceedings.mlr.press/v139/chaplot21a.html PDF: http://proceedings.mlr.press/v139/chaplot21a/chaplot21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chaplot21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Devendra Singh family: Chaplot - given: Deepak family: Pathak - given: Jitendra family: Malik editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1484-1495 id: chaplot21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1484 lastpage: 1495 published: 2021-07-01 00:00:00 +0000 - title: 'Solving Challenging Dexterous Manipulation Tasks With Trajectory Optimisation and Reinforcement Learning' abstract: 'Training agents to autonomously control anthropomorphic robotic hands has the potential to lead to systems capable of performing a multitude of complex manipulation tasks in unstructured and uncertain environments. In this work, we first introduce a suite of challenging simulated manipulation tasks where current reinforcement learning and trajectory optimisation techniques perform poorly. These include environments where two simulated hands have to pass or throw objects between each other, as well as an environment where the agent must learn to spin a long pen between its fingers. We then introduce a simple trajectory optimisation algorithm that performs significantly better than existing methods on these environments. Finally, on the most challenging “PenSpin" task, we combine sub-optimal demonstrations generated through trajectory optimisation with off-policy reinforcement learning, obtaining performance that far exceeds either of these approaches individually. 
Videos of all of our results are available at: https://dexterous-manipulation.github.io' volume: 139 URL: https://proceedings.mlr.press/v139/charlesworth21a.html PDF: http://proceedings.mlr.press/v139/charlesworth21a/charlesworth21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-charlesworth21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Henry J family: Charlesworth - given: Giovanni family: Montana editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1496-1506 id: charlesworth21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1496 lastpage: 1506 published: 2021-07-01 00:00:00 +0000 - title: 'Classification with Rejection Based on Cost-sensitive Classification' abstract: 'The goal of classification with rejection is to avoid risky misclassification in error-critical applications such as medical diagnosis and product inspection. In this paper, based on the relationship between classification with rejection and cost-sensitive classification, we propose a novel method of classification with rejection by learning an ensemble of cost-sensitive classifiers, which satisfies all the following properties: (i) it can avoid estimating class-posterior probabilities, resulting in improved classification accuracy. (ii) it allows a flexible choice of losses including non-convex ones, (iii) it does not require complicated modifications when using different losses, (iv) it is applicable to both binary and multiclass cases, and (v) it is theoretically justifiable for any classification-calibrated loss. Experimental results demonstrate the usefulness of our proposed approach in clean-labeled, noisy-labeled, and positive-unlabeled classification.' volume: 139 URL: https://proceedings.mlr.press/v139/charoenphakdee21a.html PDF: http://proceedings.mlr.press/v139/charoenphakdee21a/charoenphakdee21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-charoenphakdee21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Nontawat family: Charoenphakdee - given: Zhenghang family: Cui - given: Yivan family: Zhang - given: Masashi family: Sugiyama editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1507-1517 id: charoenphakdee21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1507 lastpage: 1517 published: 2021-07-01 00:00:00 +0000 - title: 'Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills' abstract: 'We consider the problem of learning useful robotic skills from previously collected offline data without access to manually specified rewards or additional online exploration, a setting that is becoming increasingly important for scaling robot learning by reusing past robotic data. In particular, we propose the objective of learning a functional understanding of the environment by learning to reach any goal state in a given dataset. We employ goal-conditioned Q-learning with hindsight relabeling and develop several techniques that enable training in a particularly challenging offline setting. We find that our method can operate on high-dimensional camera images and learn a variety of skills on real robots that generalize to previously unseen scenes and objects. 
We also show that our method can learn to reach long-horizon goals across multiple episodes through goal chaining, and learn rich representations that can help with downstream tasks through pre-training or auxiliary objectives.' volume: 139 URL: https://proceedings.mlr.press/v139/chebotar21a.html PDF: http://proceedings.mlr.press/v139/chebotar21a/chebotar21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chebotar21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yevgen family: Chebotar - given: Karol family: Hausman - given: Yao family: Lu - given: Ted family: Xiao - given: Dmitry family: Kalashnikov - given: Jacob family: Varley - given: Alex family: Irpan - given: Benjamin family: Eysenbach - given: Ryan C family: Julian - given: Chelsea family: Finn - given: Sergey family: Levine editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1518-1528 id: chebotar21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1518 lastpage: 1528 published: 2021-07-01 00:00:00 +0000 - title: 'Unified Robust Semi-Supervised Variational Autoencoder' abstract: 'In this paper, we propose a novel noise-robust semi-supervised deep generative model by jointly tackling noisy labels and outliers in a unified robust semi-supervised variational autoencoder (URSVAE). Typically, the uncertainty of the input data is characterized by placing an uncertainty prior on the parameters of the probability density distributions in order to ensure the robustness of the variational encoder towards outliers. Subsequently, a noise transition model is integrated naturally into our model to alleviate the detrimental effects of noisy labels. Moreover, a robust divergence measure is employed to further enhance the robustness, where a novel variational lower bound is derived and optimized to infer the network parameters. By proving that the influence function on the proposed evidence lower bound is bounded, the enormous potential of the proposed model for classification in the presence of compound noise is demonstrated. The experimental results highlight the superiority of the proposed framework by evaluating it on image classification tasks and comparing it with state-of-the-art approaches.' volume: 139 URL: https://proceedings.mlr.press/v139/chen21a.html PDF: http://proceedings.mlr.press/v139/chen21a/chen21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xu family: Chen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1529-1538 id: chen21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1529 lastpage: 1538 published: 2021-07-01 00:00:00 +0000 - title: 'Unsupervised Learning of Visual 3D Keypoints for Control' abstract: 'Learning sensorimotor control policies from high-dimensional images crucially relies on the quality of the underlying visual representations. Prior works show that a structured latent space such as visual keypoints often outperforms unstructured representations for robotic control. However, most of these representations, whether structured or unstructured, are learned in a 2D space even though the control tasks are usually performed in a 3D environment.
In this work, we propose a framework to learn such a 3D geometric structure directly from images in an end-to-end unsupervised manner. The input images are embedded into latent 3D keypoints via a differentiable encoder which is trained to optimize both a multi-view consistency loss and a downstream task objective. These discovered 3D keypoints tend to meaningfully capture robot joints as well as object movements in a consistent manner across both time and 3D space. The proposed approach outperforms prior state-of-the-art methods across a variety of reinforcement learning benchmarks. Code and videos at https://buoyancy99.github.io/unsup-3d-keypoints/.' volume: 139 URL: https://proceedings.mlr.press/v139/chen21b.html PDF: http://proceedings.mlr.press/v139/chen21b/chen21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Boyuan family: Chen - given: Pieter family: Abbeel - given: Deepak family: Pathak editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1539-1549 id: chen21b issued: date-parts: - 2021 - 7 - 1 firstpage: 1539 lastpage: 1549 published: 2021-07-01 00:00:00 +0000 - title: 'Integer Programming for Causal Structure Learning in the Presence of Latent Variables' abstract: 'The problem of finding an ancestral acyclic directed mixed graph (ADMG) that represents the causal relationships between a set of variables is an important area of research on causal inference. Most existing score-based structure learning methods focus on learning directed acyclic graph (DAG) models without latent variables. A number of score-based methods have recently been proposed for ADMG learning, yet they are heuristic in nature and do not guarantee an optimal solution. We propose a novel exact score-based method that solves an integer programming (IP) formulation and returns a score-maximizing ancestral ADMG for a set of continuous variables that follow a multivariate Gaussian distribution. We generalize the state-of-the-art IP model for DAG learning problems and derive new classes of valid inequalities to formulate an IP model for ADMG learning. Empirically, our model can be solved efficiently for medium-sized problems and achieves better accuracy than state-of-the-art score-based methods as well as benchmark constraint-based methods.' volume: 139 URL: https://proceedings.mlr.press/v139/chen21c.html PDF: http://proceedings.mlr.press/v139/chen21c/chen21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Rui family: Chen - given: Sanjeeb family: Dash - given: Tian family: Gao editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1550-1560 id: chen21c issued: date-parts: - 2021 - 7 - 1 firstpage: 1550 lastpage: 1560 published: 2021-07-01 00:00:00 +0000 - title: 'Improved Corruption Robust Algorithms for Episodic Reinforcement Learning' abstract: 'We study episodic reinforcement learning under unknown adversarial corruptions in both the rewards and the transition probabilities of the underlying system.
We propose new algorithms which, compared to the existing results in \cite{lykouris2020corruption}, achieve strictly better regret bounds in terms of total corruptions for the tabular setting. To be specific, firstly, our regret bounds depend on more precise numerical values of the total reward corruptions and transition corruptions, instead of only on the total number of corrupted episodes. Secondly, our regret bounds are the first of their kind in the reinforcement learning setting to have the number of corruptions show up additively with respect to $\min\{ \sqrt{T},\text{PolicyGapComplexity} \}$ rather than multiplicatively. Our results follow from a general algorithmic framework that combines corruption-robust policy elimination meta-algorithms and plug-in reward-free exploration sub-algorithms. Replacing the meta-algorithm or sub-algorithm may extend the framework to address other corrupted settings with potentially more structure.' volume: 139 URL: https://proceedings.mlr.press/v139/chen21d.html PDF: http://proceedings.mlr.press/v139/chen21d/chen21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yifang family: Chen - given: Simon family: Du - given: Kevin family: Jamieson editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1561-1570 id: chen21d issued: date-parts: - 2021 - 7 - 1 firstpage: 1561 lastpage: 1570 published: 2021-07-01 00:00:00 +0000 - title: 'Scalable Computations of Wasserstein Barycenter via Input Convex Neural Networks' abstract: 'Wasserstein Barycenter is a principled approach to represent the weighted mean of a given set of probability distributions, utilizing the geometry induced by optimal transport. In this work, we present a novel scalable algorithm to approximate the Wasserstein Barycenters aiming at high-dimensional applications in machine learning. Our proposed algorithm is based on the Kantorovich dual formulation of the Wasserstein-2 distance as well as a recent neural network architecture, input convex neural network, that is known to parametrize convex functions. The distinguishing features of our method are: i) it only requires samples from the marginal distributions; ii) unlike the existing approaches, it represents the Barycenter with a generative model and can thus generate infinite samples from the barycenter without querying the marginal distributions; iii) it works similarly to a Generative Adversarial Model in the one-marginal case. We demonstrate the efficacy of our algorithm by comparing it with state-of-the-art methods in multiple experiments.' 
volume: 139 URL: https://proceedings.mlr.press/v139/fan21d.html PDF: http://proceedings.mlr.press/v139/fan21d/fan21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fan21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jiaojiao family: Fan - given: Amirhossein family: Taghvaei - given: Yongxin family: Chen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1571-1581 id: fan21d issued: date-parts: - 2021 - 7 - 1 firstpage: 1571 lastpage: 1581 published: 2021-07-01 00:00:00 +0000 - title: 'Neural Feature Matching in Implicit 3D Representations' abstract: 'Recently, neural implicit functions have achieved impressive results for encoding 3D shapes. Conditioning on low-dimensional latent codes generalises a single implicit function to learn a shared representation space for a variety of shapes, with the advantage of smooth interpolation. While the benefits from the global latent space do not correspond to explicit points at the local level, we propose to track the continuous point trajectory by matching implicit features with the latent code interpolating between shapes, from which we corroborate the hierarchical functionality of the deep implicit functions, where early layers map the latent code to fitting the coarse shape structure, and deeper layers further refine the shape details. Furthermore, the structured representation space of implicit functions enables applying feature matching for shape deformation, with the benefit of handling topology and semantics inconsistency, such as from an armchair to a chair with no arms, without explicit flow functions or manual annotations.' volume: 139 URL: https://proceedings.mlr.press/v139/chen21f.html PDF: http://proceedings.mlr.press/v139/chen21f/chen21f.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21f.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yunlu family: Chen - given: Basura family: Fernando - given: Hakan family: Bilen - given: Thomas family: Mensink - given: Efstratios family: Gavves editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1582-1593 id: chen21f issued: date-parts: - 2021 - 7 - 1 firstpage: 1582 lastpage: 1593 published: 2021-07-01 00:00:00 +0000 - title: 'Decentralized Riemannian Gradient Descent on the Stiefel Manifold' abstract: 'We consider a distributed non-convex optimization problem where a network of agents aims at minimizing a global function over the Stiefel manifold. The global function is represented as a finite sum of smooth local functions, where each local function is associated with one agent and agents communicate with each other over an undirected connected graph. The problem is non-convex as local functions are possibly non-convex (but smooth) and the Stiefel manifold is a non-convex set. We present a decentralized Riemannian stochastic gradient method (DRSGD) with the convergence rate of $\mathcal{O}(1/\sqrt{K})$ to a stationary point. To have exact convergence with constant stepsize, we also propose a decentralized Riemannian gradient tracking algorithm (DRGTA) with the convergence rate of $\mathcal{O}(1/K)$ to a stationary point. We use multi-step consensus to preserve the iteration in the local (consensus) region. 
DRGTA is the first decentralized algorithm with exact convergence for distributed optimization on the Stiefel manifold.' volume: 139 URL: https://proceedings.mlr.press/v139/chen21g.html PDF: http://proceedings.mlr.press/v139/chen21g/chen21g.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21g.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shixiang family: Chen - given: Alfredo family: Garcia - given: Mingyi family: Hong - given: Shahin family: Shahrampour editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1594-1605 id: chen21g issued: date-parts: - 2021 - 7 - 1 firstpage: 1594 lastpage: 1605 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Self-Modulating Attention in Continuous Time Space with Applications to Sequential Recommendation' abstract: 'User interests are usually dynamic in the real world, which poses both theoretical and practical challenges for learning accurate preferences from rich behavior data. Among existing user behavior modeling solutions, attention networks are widely adopted for their effectiveness and relative simplicity. Despite being extensively studied, existing attentions still suffer from two limitations: i) conventional attentions mainly take into account the spatial correlation between user behaviors, regardless of the distance between those behaviors in the continuous time space; and ii) these attentions mostly provide a dense and undistinguished distribution over all past behaviors and then attentively encode them into the output latent representations. This is, however, not suitable in practical scenarios where a user’s future actions are relevant to a small subset of her/his historical behaviors. In this paper, we propose a novel attention network, named \textit{self-modulating attention}, that models the complex and non-linearly evolving dynamic user preferences. We empirically demonstrate the effectiveness of our method on top-N sequential recommendation tasks, and the results on three large-scale real-world datasets show that our model can achieve state-of-the-art performance.' volume: 139 URL: https://proceedings.mlr.press/v139/chen21h.html PDF: http://proceedings.mlr.press/v139/chen21h/chen21h.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21h.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chao family: Chen - given: Haoyu family: Geng - given: Nianzu family: Yang - given: Junchi family: Yan - given: Daiyue family: Xue - given: Jianping family: Yu - given: Xiaokang family: Yang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1606-1616 id: chen21h issued: date-parts: - 2021 - 7 - 1 firstpage: 1606 lastpage: 1616 published: 2021-07-01 00:00:00 +0000 - title: 'Mandoline: Model Evaluation under Distribution Shift' abstract: 'Machine learning models are often deployed in different settings than they were trained and validated on, posing a challenge to practitioners who wish to predict how well the deployed model will perform on a target distribution. If an unlabeled sample from the target distribution is available, along with a labeled sample from a possibly different source distribution, standard approaches such as importance weighting can be applied to estimate performance on the target. 
However, importance weighting struggles when the source and target distributions have non-overlapping support or are high-dimensional. Taking inspiration from fields such as epidemiology and polling, we develop Mandoline, a new evaluation framework that mitigates these issues. Our key insight is that practitioners may have prior knowledge about the ways in which the distribution shifts, which we can use to better guide the importance weighting procedure. Specifically, users write simple "slicing functions" – noisy, potentially correlated binary functions intended to capture possible axes of distribution shift – to compute reweighted performance estimates. We further describe a density ratio estimation framework for the slices and show how its estimation error scales with slice quality and dataset size. Empirical validation on NLP and vision tasks shows that Mandoline can estimate performance on the target distribution up to 3x more accurately compared to standard baselines.' volume: 139 URL: https://proceedings.mlr.press/v139/chen21i.html PDF: http://proceedings.mlr.press/v139/chen21i/chen21i.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21i.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mayee family: Chen - given: Karan family: Goel - given: Nimit S family: Sohoni - given: Fait family: Poms - given: Kayvon family: Fatahalian - given: Christopher family: Re editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1617-1629 id: chen21i issued: date-parts: - 2021 - 7 - 1 firstpage: 1617 lastpage: 1629 published: 2021-07-01 00:00:00 +0000 - title: 'Order Matters: Probabilistic Modeling of Node Sequence for Graph Generation' abstract: 'A graph generative model defines a distribution over graphs. Typically, the model consists of a sequential process that creates and adds nodes and edges. Such a sequential process defines an ordering of the nodes in the graph. The computation of the model’s likelihood requires marginalizing over the node orderings; this makes maximum likelihood estimation (MLE) challenging due to the (factorial) number of possible permutations. In this work, we provide an expression for the likelihood of a graph generative model and show that its calculation is closely related to the problem of graph automorphism. In addition, we derive a variational inference (VI) algorithm for fitting a graph generative model that is based on the maximization of a variational bound of the log-likelihood. This allows the model to be trained with node orderings from the approximate posterior instead of ad-hoc orderings. Our experiments show that our log-likelihood bound is significantly tighter than the bound of previous schemes. The models fitted with the VI algorithm are able to generate high-quality graphs that match the structures of target graphs not seen during training.' 
volume: 139 URL: https://proceedings.mlr.press/v139/chen21j.html PDF: http://proceedings.mlr.press/v139/chen21j/chen21j.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21j.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xiaohui family: Chen - given: Xu family: Han - given: Jiajing family: Hu - given: Francisco family: Ruiz - given: Liping family: Liu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1630-1639 id: chen21j issued: date-parts: - 2021 - 7 - 1 firstpage: 1630 lastpage: 1639 published: 2021-07-01 00:00:00 +0000 - title: 'CARTL: Cooperative Adversarially-Robust Transfer Learning' abstract: 'Transfer learning eases the burden of training a well-performing model from scratch, especially when training data is scarce and computation power is limited. In deep learning, a typical strategy for transfer learning is to freeze the early layers of a pre-trained model and fine-tune the rest of its layers on the target domain. Previous work focuses on the accuracy of the transferred model but neglects the transfer of adversarial robustness. In this work, we first show that transfer learning improves the accuracy on the target domain but degrades the inherited robustness of the target model. To address such a problem, we propose a novel cooperative adversarially-robust transfer learning (CARTL) by pre-training the model via feature distance minimization and fine-tuning the pre-trained model with non-expansive fine-tuning for target domain tasks. Empirical results show that CARTL improves the inherited robustness by up to about 28% compared with the baseline at the same degree of accuracy. Furthermore, we study the relationship between the batch normalization (BN) layers and the robustness in the context of transfer learning, and we reveal that freezing BN layers can further boost the robustness transfer.' volume: 139 URL: https://proceedings.mlr.press/v139/chen21k.html PDF: http://proceedings.mlr.press/v139/chen21k/chen21k.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21k.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dian family: Chen - given: Hongxin family: Hu - given: Qian family: Wang - given: Li family: Yinli - given: Cong family: Wang - given: Chao family: Shen - given: Qi family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1640-1650 id: chen21k issued: date-parts: - 2021 - 7 - 1 firstpage: 1640 lastpage: 1650 published: 2021-07-01 00:00:00 +0000 - title: 'Finding the Stochastic Shortest Path with Low Regret: the Adversarial Cost and Unknown Transition Case' abstract: 'We make significant progress toward the stochastic shortest path problem with adversarial costs and unknown transition. Specifically, we develop algorithms that achieve $O(\sqrt{S^2ADT_\star K})$ regret for the full-information setting and $O(\sqrt{S^3A^2DT_\star K})$ regret for the bandit feedback setting, where $D$ is the diameter, $T_\star$ is the expected hitting time of the optimal policy, $S$ is the number of states, $A$ is the number of actions, and $K$ is the number of episodes. 
Our work strictly improves (Rosenberg and Mansour, 2020) in the full information setting, extends (Chen et al., 2020) from known transition to unknown transition, and is also the first to consider the most challenging combination: bandit feedback with adversarial costs and unknown transition. To remedy the gap between our upper bounds and the current best lower bounds constructed via a stochastically oblivious adversary, we also propose algorithms with near-optimal regret for this special case.' volume: 139 URL: https://proceedings.mlr.press/v139/chen21l.html PDF: http://proceedings.mlr.press/v139/chen21l/chen21l.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21l.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Liyu family: Chen - given: Haipeng family: Luo editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1651-1660 id: chen21l issued: date-parts: - 2021 - 7 - 1 firstpage: 1651 lastpage: 1660 published: 2021-07-01 00:00:00 +0000 - title: 'SpreadsheetCoder: Formula Prediction from Semi-structured Context' abstract: 'Spreadsheet formula prediction has been an important program synthesis problem with many real-world applications. Previous works typically utilize input-output examples as the specification for spreadsheet formula synthesis, where each input-output pair simulates a separate row in the spreadsheet. However, this formulation does not fully capture the rich context in real-world spreadsheets. First, spreadsheet data entries are organized as tables, thus rows and columns are not necessarily independent from each other. In addition, many spreadsheet tables include headers, which provide high-level descriptions of the cell data. However, previous synthesis approaches do not consider headers as part of the specification. In this work, we present the first approach for synthesizing spreadsheet formulas from tabular context, which includes both headers and semi-structured tabular data. In particular, we propose SpreadsheetCoder, a BERT-based model architecture to represent the tabular context in both row-based and column-based formats. We train our model on a large dataset of spreadsheets, and demonstrate that SpreadsheetCoder achieves top-1 prediction accuracy of 42.51%, which is a considerable improvement over baselines that do not employ rich tabular context. Compared to the rule-based system, SpreadsheetCoder assists 82% more users in composing formulas on Google Sheets.' 
volume: 139 URL: https://proceedings.mlr.press/v139/chen21m.html PDF: http://proceedings.mlr.press/v139/chen21m/chen21m.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21m.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xinyun family: Chen - given: Petros family: Maniatis - given: Rishabh family: Singh - given: Charles family: Sutton - given: Hanjun family: Dai - given: Max family: Lin - given: Denny family: Zhou editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1661-1672 id: chen21m issued: date-parts: - 2021 - 7 - 1 firstpage: 1661 lastpage: 1672 published: 2021-07-01 00:00:00 +0000 - title: 'Large-Margin Contrastive Learning with Distance Polarization Regularizer' abstract: '\emph{Contrastive learning} (CL) pretrains models in a pairwise manner, where given a data point, other data points are all regarded as dissimilar, including some that are \emph{semantically} similar. The issue has been addressed by properly weighting similar and dissimilar pairs as in \emph{positive-unlabeled learning}, so that the objective of CL is \emph{unbiased} and CL is \emph{consistent}. However, in this paper, we argue that this great solution is still not enough: its weighted objective \emph{hides} the issue where the semantically similar pairs are still pushed away; as CL is pretraining, this phenomenon is not our desideratum and might affect downstream tasks. To this end, we propose \emph{large-margin contrastive learning} (LMCL) with \emph{distance polarization regularizer}, motivated by the distribution characteristic of pairwise distances in \emph{metric learning}. In LMCL, we can distinguish between \emph{intra-cluster} and \emph{inter-cluster} pairs, and then only push away inter-cluster pairs, which \emph{solves} the above issue explicitly. Theoretically, we prove a tighter error bound for LMCL; empirically, the superiority of LMCL is demonstrated across multiple domains, \emph{i.e.}, image classification, sentence representation, and reinforcement learning.' volume: 139 URL: https://proceedings.mlr.press/v139/chen21n.html PDF: http://proceedings.mlr.press/v139/chen21n/chen21n.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21n.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shuo family: Chen - given: Gang family: Niu - given: Chen family: Gong - given: Jun family: Li - given: Jian family: Yang - given: Masashi family: Sugiyama editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1673-1683 id: chen21n issued: date-parts: - 2021 - 7 - 1 firstpage: 1673 lastpage: 1683 published: 2021-07-01 00:00:00 +0000 - title: 'Z-GCNETs: Time Zigzags at Graph Convolutional Networks for Time Series Forecasting' abstract: 'There recently has been a surge of interest in developing a new class of deep learning (DL) architectures that integrate an explicit time dimension as a fundamental building block of learning and representation mechanisms. In turn, many recent results show that topological descriptors of the observed data, encoding information on the shape of the dataset in a topological space at different scales, that is, persistent homology of the data, may contain important complementary information, improving both performance and robustness of DL. 
As a convergence of these two emerging ideas, we propose to enhance DL architectures with the most salient time-conditioned topological information of the data and introduce the concept of zigzag persistence into time-aware graph convolutional networks (GCNs). Zigzag persistence provides a systematic and mathematically rigorous framework to track the most important topological features of the observed data that tend to manifest themselves over time. To integrate the extracted time-conditioned topological descriptors into DL, we develop a new topological summary, zigzag persistence image, and derive its theoretical stability guarantees. We validate the new GCNs with a time-aware zigzag topological layer (Z-GCNETs), in application to traffic forecasting and Ethereum blockchain price prediction. Our results indicate that Z-GCNET outperforms 13 state-of-the-art methods on 4 time series datasets.' volume: 139 URL: https://proceedings.mlr.press/v139/chen21o.html PDF: http://proceedings.mlr.press/v139/chen21o/chen21o.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21o.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yuzhou family: Chen - given: Ignacio family: Segovia - given: Yulia R. family: Gel editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1684-1694 id: chen21o issued: date-parts: - 2021 - 7 - 1 firstpage: 1684 lastpage: 1694 published: 2021-07-01 00:00:00 +0000 - title: 'A Unified Lottery Ticket Hypothesis for Graph Neural Networks' abstract: 'With graphs rapidly growing in size and deeper graph neural networks (GNNs) emerging, the training and inference of GNNs become increasingly expensive. Existing network weight pruning algorithms cannot address the main space and computational bottleneck in GNNs, caused by the size and connectivity of the graph. To this end, this paper first presents a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights, for effectively accelerating GNN inference on large-scale graphs. Leveraging this new tool, we further generalize the recently popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network, which can be jointly identified from the original GNN and the full dense graph by iteratively applying UGS. Like its counterpart in convolutional neural networks, GLT can be trained in isolation to match the performance of training with the full model and graph, and can be drawn from both randomly initialized and self-supervised pre-trained GNNs. Our proposal has been experimentally verified across various GNN architectures and diverse tasks, on both small-scale graph datasets (Cora, Citeseer and PubMed), and large-scale datasets from the challenging Open Graph Benchmark (OGB). Specifically, for node classification, the GLTs we find achieve the same accuracies with 20%–98% MACs saving on small graphs and 25%–85% MACs saving on large ones. For link prediction, GLTs lead to 48%–97% and 70% MACs saving on small and large graph datasets, respectively, without compromising predictive performance. Codes are at https://github.com/VITA-Group/Unified-LTH-GNN.' 
volume: 139 URL: https://proceedings.mlr.press/v139/chen21p.html PDF: http://proceedings.mlr.press/v139/chen21p/chen21p.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21p.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tianlong family: Chen - given: Yongduo family: Sui - given: Xuxi family: Chen - given: Aston family: Zhang - given: Zhangyang family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1695-1706 id: chen21p issued: date-parts: - 2021 - 7 - 1 firstpage: 1695 lastpage: 1706 published: 2021-07-01 00:00:00 +0000 - title: 'Network Inference and Influence Maximization from Samples' abstract: 'Influence maximization is the task of selecting a small number of seed nodes in a social network to maximize the spread of the influence from these seeds, and it has been widely investigated in the past two decades. In the canonical setting, the whole social network as well as its diffusion parameters is given as input. In this paper, we consider the more realistic sampling setting where the network is unknown and we only have a set of passively observed cascades that record the set of activated nodes at each diffusion step. We study the task of influence maximization from these cascade samples (IMS), and present constant approximation algorithms for this task under mild conditions on the seed set distribution. To achieve the optimization goal, we also provide a novel solution to the network inference problem, that is, learning diffusion parameters and the network structure from the cascade data. Compared with prior solutions, our network inference algorithm requires weaker assumptions and does not rely on maximum-likelihood estimation or convex programming. Our IMS algorithms enhance the learning-and-then-optimization approach by allowing a constant approximation ratio even when the diffusion parameters are hard to learn, and we do not need any assumption related to the network structure or diffusion parameters.' volume: 139 URL: https://proceedings.mlr.press/v139/chen21q.html PDF: http://proceedings.mlr.press/v139/chen21q/chen21q.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21q.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Wei family: Chen - given: Xiaoming family: Sun - given: Jialin family: Zhang - given: Zhijie family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1707-1716 id: chen21q issued: date-parts: - 2021 - 7 - 1 firstpage: 1707 lastpage: 1716 published: 2021-07-01 00:00:00 +0000 - title: 'Data-driven Prediction of General Hamiltonian Dynamics via Learning Exactly-Symplectic Maps' abstract: 'We consider the learning and prediction of nonlinear time series generated by a latent symplectic map. A special case is (not necessarily separable) Hamiltonian systems, whose solution flows give such symplectic maps. For this special case, both generic approaches based on learning the vector field of the latent ODE and specialized approaches based on learning the Hamiltonian that generates the vector field exist. Our method, however, is different as it does not rely on the vector field nor assume its existence; instead, it directly learns the symplectic evolution map in discrete time. 
Moreover, we do so by representing the symplectic map via a generating function, which we approximate by a neural network (hence the name GFNN). This way, our approximation of the evolution map is always \emph{exactly} symplectic. This additional geometric structure allows the local prediction error at each step to accumulate in a controlled fashion, and we will prove, under reasonable assumptions, that the global prediction error grows at most \emph{linearly} with long prediction time, which significantly improves an otherwise exponential growth. In addition, as a map-based and thus purely data-driven method, GFNN avoids two additional sources of inaccuracies common in vector-field based approaches, namely the error in approximating the vector field by finite difference of the data, and the error in numerical integration of the vector field for making predictions. Numerical experiments further demonstrate our claims.' volume: 139 URL: https://proceedings.mlr.press/v139/chen21r.html PDF: http://proceedings.mlr.press/v139/chen21r/chen21r.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21r.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Renyi family: Chen - given: Molei family: Tao editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1717-1727 id: chen21r issued: date-parts: - 2021 - 7 - 1 firstpage: 1717 lastpage: 1727 published: 2021-07-01 00:00:00 +0000 - title: 'Analysis of stochastic Lanczos quadrature for spectrum approximation' abstract: 'The cumulative empirical spectral measure (CESM) $\Phi[\mathbf{A}] : \mathbb{R} \to [0,1]$ of a $n\times n$ symmetric matrix $\mathbf{A}$ is defined as the fraction of eigenvalues of $\mathbf{A}$ less than a given threshold, i.e., $\Phi[\mathbf{A}](x) := \sum_{i=1}^{n} \frac{1}{n} {\large\unicode{x1D7D9}}[ \lambda_i[\mathbf{A}]\leq x]$. Spectral sums $\operatorname{tr}(f[\mathbf{A}])$ can be computed as the Riemann–Stieltjes integral of $f$ against $\Phi[\mathbf{A}]$, so the task of estimating CESM arises frequently in a number of applications, including machine learning. We present an error analysis for stochastic Lanczos quadrature (SLQ). We show that SLQ obtains an approximation to the CESM within a Wasserstein distance of $t \: | \lambda_{\text{max}}[\mathbf{A}] - \lambda_{\text{min}}[\mathbf{A}] |$ with probability at least $1-\eta$, by applying the Lanczos algorithm for $\lceil 12 t^{-1} + \frac{1}{2} \rceil$ iterations to $\lceil 4 ( n+2 )^{-1}t^{-2} \ln(2n\eta^{-1}) \rceil$ vectors sampled independently and uniformly from the unit sphere. We additionally provide (matrix-dependent) a posteriori error bounds for the Wasserstein and Kolmogorov–Smirnov distances between the output of this algorithm and the true CESM. The quality of our bounds is demonstrated using numerical experiments.' 
volume: 139 URL: https://proceedings.mlr.press/v139/chen21s.html PDF: http://proceedings.mlr.press/v139/chen21s/chen21s.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21s.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tyler family: Chen - given: Thomas family: Trogdon - given: Shashanka family: Ubaru editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1728-1739 id: chen21s issued: date-parts: - 2021 - 7 - 1 firstpage: 1728 lastpage: 1739 published: 2021-07-01 00:00:00 +0000 - title: 'Large-Scale Multi-Agent Deep FBSDEs' abstract: 'In this paper we present a scalable deep learning framework for finding Markovian Nash Equilibria in multi-agent stochastic games using fictitious play. The motivation is inspired by theoretical analysis of Forward Backward Stochastic Differential Equations and their implementation in a deep learning setting, which is the source of our algorithm’s sample efficiency improvement. By taking advantage of the permutation-invariant property of agents in symmetric games, the scalability and performance are further enhanced significantly. We showcase superior performance of our framework over the state-of-the-art deep fictitious play algorithm on an inter-bank lending/borrowing problem in terms of multiple metrics. More importantly, our approach scales up to 3000 agents in simulation, a scale which, to the best of our knowledge, represents a new state-of-the-art. We also demonstrate the applicability of our framework in robotics on a belief space autonomous racing problem.' volume: 139 URL: https://proceedings.mlr.press/v139/chen21t.html PDF: http://proceedings.mlr.press/v139/chen21t/chen21t.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21t.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tianrong family: Chen - given: Ziyi O family: Wang - given: Ioannis family: Exarchos - given: Evangelos family: Theodorou editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1740-1748 id: chen21t issued: date-parts: - 2021 - 7 - 1 firstpage: 1740 lastpage: 1748 published: 2021-07-01 00:00:00 +0000 - title: 'Representation Subspace Distance for Domain Adaptation Regression' abstract: 'Regression, as a counterpart to classification, is a major paradigm with a wide range of applications. Domain adaptation regression extends it by generalizing a regressor from a labeled source domain to an unlabeled target domain. Existing domain adaptation regression methods have achieved positive results limited only to the shallow regime. A question arises: Why is the benefit of learning invariant representations less pronounced in the deep regime? A key finding of this paper is that classification is robust to feature scaling but regression is not, and aligning the distributions of deep representations will alter feature scale and impede domain adaptation regression. Based on this finding, we propose to close the domain gap through orthogonal bases of the representation spaces, which are free from feature scaling. Inspired by the Riemannian geometry of the Grassmann manifold, we define a geometrical distance over representation subspaces and learn deep transferable representations by minimizing it. 
To avoid breaking the geometrical properties of deep representations, we further introduce the bases mismatch penalization to match the ordering of orthogonal bases across representation subspaces. Our method is evaluated on three domain adaptation regression benchmarks, two of which are introduced in this paper. Our method outperforms the state-of-the-art methods significantly, forming early positive results in the deep regime.' volume: 139 URL: https://proceedings.mlr.press/v139/chen21u.html PDF: http://proceedings.mlr.press/v139/chen21u/chen21u.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21u.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xinyang family: Chen - given: Sinan family: Wang - given: Jianmin family: Wang - given: Mingsheng family: Long editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1749-1759 id: chen21u issued: date-parts: - 2021 - 7 - 1 firstpage: 1749 lastpage: 1759 published: 2021-07-01 00:00:00 +0000 - title: 'Overcoming Catastrophic Forgetting by Bayesian Generative Regularization' abstract: 'In this paper, we propose a new method to overcome catastrophic forgetting by adding generative regularization to the Bayesian inference framework. The Bayesian method provides a general framework for continual learning. We could further construct a generative regularization term for all given classification models by leveraging energy-based models and Langevin dynamic sampling to enrich the features learned in each task. By combining the discriminative and generative losses, we empirically show that the proposed method outperforms state-of-the-art methods on a variety of tasks, avoiding catastrophic forgetting in continual learning. In particular, the proposed method outperforms baseline methods by over 15% on the Fashion-MNIST dataset and 10% on the CUB dataset.' volume: 139 URL: https://proceedings.mlr.press/v139/chen21v.html PDF: http://proceedings.mlr.press/v139/chen21v/chen21v.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21v.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Pei-Hung family: Chen - given: Wei family: Wei - given: Cho-Jui family: Hsieh - given: Bo family: Dai editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1760-1770 id: chen21v issued: date-parts: - 2021 - 7 - 1 firstpage: 1760 lastpage: 1770 published: 2021-07-01 00:00:00 +0000 - title: 'Cyclically Equivariant Neural Decoders for Cyclic Codes' abstract: 'Neural decoders were introduced as a generalization of the classic Belief Propagation (BP) decoding algorithms, where the Trellis graph in the BP algorithm is viewed as a neural network, and the weights in the Trellis graph are optimized by training the neural network. In this work, we propose a novel neural decoder for cyclic codes by exploiting their cyclically invariant property. More precisely, we impose a shift invariant structure on the weights of our neural decoder so that any cyclic shift of inputs results in the same cyclic shift of outputs. Extensive simulations with BCH codes and punctured Reed-Muller (RM) codes show that our new decoder consistently outperforms previous neural decoders when decoding cyclic codes. 
Finally, we propose a list decoding procedure that can significantly reduce the decoding error probability for BCH codes and punctured RM codes. For certain high-rate codes, the gap between our list decoder and the Maximum Likelihood decoder is less than $0.1$ dB. Code available at github.com/cyclicallyneuraldecoder' volume: 139 URL: https://proceedings.mlr.press/v139/chen21w.html PDF: http://proceedings.mlr.press/v139/chen21w/chen21w.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21w.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xiangyu family: Chen - given: Min family: Ye editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1771-1780 id: chen21w issued: date-parts: - 2021 - 7 - 1 firstpage: 1771 lastpage: 1780 published: 2021-07-01 00:00:00 +0000 - title: 'A Receptor Skeleton for Capsule Neural Networks' abstract: 'In previous Capsule Neural Networks (CapsNets), routing algorithms often performed clustering processes to assemble the child capsules’ representations into parent capsules. Such routing algorithms were typically implemented with iterative processes and incurred high computing complexity. This paper presents a new capsule structure, which contains a set of optimizable receptors, and a transmitter is devised on the capsule’s representation. Specifically, child capsules’ representations are sent to the parent capsules whose receptors match well the transmitters of the child capsules’ representations, avoiding applying computationally complex routing algorithms. To ensure the receptors in a CapsNet work cooperatively, we build a skeleton to organize the receptors in different capsule layers in a CapsNet. The receptor skeleton assigns a share-out objective for each receptor, making the CapsNet perform as a hierarchical agglomerative clustering process. Comprehensive experiments verify that our approach facilitates efficient clustering processes, and CapsNets with our approach significantly outperform CapsNets with previous routing algorithms on image classification, affine transformation generalization, overlapped object recognition, and representation semantic decoupling.' volume: 139 URL: https://proceedings.mlr.press/v139/chen21x.html PDF: http://proceedings.mlr.press/v139/chen21x/chen21x.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21x.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jintai family: Chen - given: Hongyun family: Yu - given: Chengde family: Qian - given: Danny Z family: Chen - given: Jian family: Wu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1781-1790 id: chen21x issued: date-parts: - 2021 - 7 - 1 firstpage: 1781 lastpage: 1790 published: 2021-07-01 00:00:00 +0000 - title: 'Accelerating Gossip SGD with Periodic Global Averaging' abstract: 'Communication overhead hinders the scalability of large-scale distributed training. Gossip SGD, where each node averages only with its neighbors, is more communication-efficient than the prevalent parallel SGD. However, its convergence rate is inversely proportional to the quantity $1-\beta$, which measures the network connectivity. 
On large and sparse networks where $1-\beta \to 0$, Gossip SGD requires more iterations to converge, which offsets its communication benefit. This paper introduces Gossip-PGA, which adds Periodic Global Averaging to accelerate Gossip SGD. Its transient stage, i.e., the iterations required to reach the asymptotic linear speedup stage, improves from $\Omega(\beta^4 n^3/(1-\beta)^4)$ to $\Omega(\beta^4 n^3 H^4)$ for non-convex problems. The influence of network topology in Gossip-PGA can be controlled by the averaging period $H$. Its transient-stage complexity is also superior to local SGD, which has order $\Omega(n^3 H^4)$. Empirical results of large-scale training on image classification (ResNet50) and language modeling (BERT) validate our theoretical findings.' volume: 139 URL: https://proceedings.mlr.press/v139/chen21y.html PDF: http://proceedings.mlr.press/v139/chen21y/chen21y.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21y.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yiming family: Chen - given: Kun family: Yuan - given: Yingya family: Zhang - given: Pan family: Pan - given: Yinghui family: Xu - given: Wotao family: Yin editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1791-1802 id: chen21y issued: date-parts: - 2021 - 7 - 1 firstpage: 1791 lastpage: 1802 published: 2021-07-01 00:00:00 +0000 - title: 'ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training' abstract: 'The increasing size of neural network models has been critical for improvements in their accuracy, but device memory is not growing at the same rate. This creates fundamental challenges for training neural networks within limited memory environments. In this work, we propose ActNN, a memory-efficient training framework that stores randomly quantized activations for back propagation. We prove the convergence of ActNN for general network architectures, and we characterize the impact of quantization on the convergence via an exact expression for the gradient variance. Using our theory, we propose novel mixed-precision quantization strategies that exploit the activation’s heterogeneity across feature dimensions, samples, and layers. These techniques can be readily applied to existing dynamic graph frameworks, such as PyTorch, simply by substituting the layers. We evaluate ActNN on mainstream computer vision models for classification, detection, and segmentation tasks. On all these tasks, ActNN compresses the activation to 2 bits on average, with negligible accuracy loss. ActNN reduces the memory footprint of the activation by 12x, and it enables training with a 6.6x to 14x larger batch size.' 
volume: 139 URL: https://proceedings.mlr.press/v139/chen21z.html PDF: http://proceedings.mlr.press/v139/chen21z/chen21z.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chen21z.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jianfei family: Chen - given: Lianmin family: Zheng - given: Zhewei family: Yao - given: Dequan family: Wang - given: Ion family: Stoica - given: Michael family: Mahoney - given: Joseph family: Gonzalez editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1803-1813 id: chen21z issued: date-parts: - 2021 - 7 - 1 firstpage: 1803 lastpage: 1813 published: 2021-07-01 00:00:00 +0000 - title: 'SPADE: A Spectral Method for Black-Box Adversarial Robustness Evaluation' abstract: 'A black-box spectral method is introduced for evaluating the adversarial robustness of a given machine learning (ML) model. Our approach, named SPADE, exploits bijective distance mapping between the input/output graphs constructed for approximating the manifolds corresponding to the input/output data. By leveraging the generalized Courant-Fischer theorem, we propose a SPADE score for evaluating the adversarial robustness of a given model, which is proved to be an upper bound of the best Lipschitz constant under the manifold setting. To reveal the most non-robust data samples highly vulnerable to adversarial attacks, we develop a spectral graph embedding procedure leveraging dominant generalized eigenvectors. This embedding step allows assigning each data point a robustness score that can be further harnessed for more effective adversarial training of ML models. Our experiments show promising empirical results for neural networks trained with the MNIST and CIFAR-10 data sets.' volume: 139 URL: https://proceedings.mlr.press/v139/cheng21a.html PDF: http://proceedings.mlr.press/v139/cheng21a/cheng21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cheng21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Wuxinlin family: Cheng - given: Chenhui family: Deng - given: Zhiqiang family: Zhao - given: Yaohui family: Cai - given: Zhiru family: Zhang - given: Zhuo family: Feng editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1814-1824 id: cheng21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1814 lastpage: 1824 published: 2021-07-01 00:00:00 +0000 - title: 'Self-supervised and Supervised Joint Training for Resource-rich Machine Translation' abstract: 'Self-supervised pre-training of text representations has been successfully applied to low-resource Neural Machine Translation (NMT). However, it usually fails to achieve notable gains on resource-rich NMT. In this paper, we propose a joint training approach, F2-XEnDec, to combine self-supervised and supervised learning to optimize NMT models. To exploit complementary self-supervised signals for supervised learning, NMT models are trained on examples that are interbred from monolingual and parallel sentences through a new process called crossover encoder-decoder. 
Experiments on two resource-rich translation benchmarks, WMT’14 English-German and WMT’14 English-French, demonstrate that our approach achieves substantial improvements over several strong baseline methods and obtains a new state of the art of 46.19 BLEU on English-French when incorporating back translation. Results also show that our approach is capable of improving model robustness to input perturbations such as code-switching noise, which frequently appears on social media.' volume: 139 URL: https://proceedings.mlr.press/v139/cheng21b.html PDF: http://proceedings.mlr.press/v139/cheng21b/cheng21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cheng21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yong family: Cheng - given: Wei family: Wang - given: Lu family: Jiang - given: Wolfgang family: Macherey editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1825-1835 id: cheng21b issued: date-parts: - 2021 - 7 - 1 firstpage: 1825 lastpage: 1835 published: 2021-07-01 00:00:00 +0000 - title: 'Exact Optimization of Conformal Predictors via Incremental and Decremental Learning' abstract: 'Conformal Predictors (CP) are wrappers around ML models, providing error guarantees under weak assumptions on the data distribution. They are suitable for a wide range of problems, from classification and regression to anomaly detection. Unfortunately, their very high computational complexity limits their applicability to large datasets. In this work, we show that it is possible to speed up a CP classifier considerably, by studying it in conjunction with the underlying ML method, and by exploiting incremental and decremental learning. For methods such as k-NN, KDE, and kernel LS-SVM, our approach reduces the running time by one order of magnitude, whilst producing exact solutions. With similar ideas, we also achieve a linear speed up for the harder case of bootstrapping. Finally, we extend these techniques to improve upon an optimization of k-NN CP for regression. We evaluate our findings empirically, and discuss when methods are suitable for CP optimization.' volume: 139 URL: https://proceedings.mlr.press/v139/cherubin21a.html PDF: http://proceedings.mlr.press/v139/cherubin21a/cherubin21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cherubin21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Giovanni family: Cherubin - given: Konstantinos family: Chatzikokolakis - given: Martin family: Jaggi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1836-1845 id: cherubin21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1836 lastpage: 1845 published: 2021-07-01 00:00:00 +0000 - title: 'Problem Dependent View on Structured Thresholding Bandit Problems' abstract: 'We investigate the \textit{problem dependent regime} in the stochastic \emph{Thresholding Bandit problem} (\tbp) under several \emph{shape constraints}. In the \tbp the objective of the learner is to output, after interacting with the environment, the set of arms whose means are above a given threshold. The vanilla, unstructured, case is already well studied in the literature. 
Taking $K$ as the number of arms, we consider the case where (i) the sequence of arms’ means $(\mu_k)_{k=1}^K$ is monotonically increasing (\textit{MTBP}) and (ii) the case where $(\mu_k)_{k=1}^K$ is concave (\textit{CTBP}). We consider both cases in the \emph{problem dependent} regime and study the probability of error, i.e. the probability of mis-classifying at least one arm. In the fixed budget setting, we provide nearly matching upper and lower bounds for the probability of error in both the concave and monotone settings, as well as associated algorithms. Of interest is that, for both the monotone and concave cases, optimal bounds on the probability of error are of the same order as those for the two-armed bandit problem.' volume: 139 URL: https://proceedings.mlr.press/v139/cheshire21a.html PDF: http://proceedings.mlr.press/v139/cheshire21a/cheshire21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cheshire21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: James family: Cheshire - given: Pierre family: Menard - given: Alexandra family: Carpentier editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1846-1854 id: cheshire21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1846 lastpage: 1854 published: 2021-07-01 00:00:00 +0000 - title: 'Online Optimization in Games via Control Theory: Connecting Regret, Passivity and Poincaré Recurrence' abstract: 'We present a novel control-theoretic understanding of online optimization and learning in games, via the notion of passivity. Passivity is a fundamental concept in control theory, which abstracts energy conservation and dissipation in physical systems. It has become a standard tool in the analysis of general feedback systems, to which game dynamics belong. Our starting point is to show that all continuous-time Follow-the-Regularized-Leader (FTRL) dynamics, which include the well-known Replicator Dynamic, are lossless, i.e. they are passive with no energy dissipation. Interestingly, we prove that passivity implies bounded regret, connecting two fundamental primitives of control theory and online optimization. The observation of energy conservation in FTRL inspires us to present a family of lossless learning dynamics, each of which has an underlying energy function with a simple gradient structure. This family is closed under convex combination; as an immediate corollary, any convex combination of FTRL dynamics is lossless and thus has bounded regret. This allows us to extend the framework of Fox & Shamma [Games 2013] to prove not just global asymptotic stability results for game dynamics, but Poincaré recurrence results as well. Intuitively, when a lossless game (e.g. a graphical constant-sum game) is coupled with a lossless learning dynamic, their interconnection is also lossless, which results in a pendulum-like energy-preserving recurrent behavior, generalizing Piliouras & Shamma [SODA 2014] and Mertikopoulos et al. [SODA 2018].' 
volume: 139 URL: https://proceedings.mlr.press/v139/cheung21a.html PDF: http://proceedings.mlr.press/v139/cheung21a/cheung21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cheung21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yun Kuen family: Cheung - given: Georgios family: Piliouras editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1855-1865 id: cheung21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1855 lastpage: 1865 published: 2021-07-01 00:00:00 +0000 - title: 'Understanding and Mitigating Accuracy Disparity in Regression' abstract: 'With the widespread deployment of large-scale prediction systems in high-stakes domains, e.g., face recognition, criminal justice, etc., disparity on prediction accuracy between different demographic subgroups has called for fundamental understanding on the source of such disparity and algorithmic intervention to mitigate it. In this paper, we study the accuracy disparity problem in regression. To begin with, we first propose an error decomposition theorem, which decomposes the accuracy disparity into the distance between marginal label distributions and the distance between conditional representations, to help explain why such accuracy disparity appears in practice. Motivated by this error decomposition and the general idea of distribution alignment with statistical distances, we then propose an algorithm to reduce this disparity, and analyze its game-theoretic optima of the proposed objective functions. To corroborate our theoretical findings, we also conduct experiments on five benchmark datasets. The experimental results suggest that our proposed algorithms can effectively mitigate accuracy disparity while maintaining the predictive power of the regression models.' volume: 139 URL: https://proceedings.mlr.press/v139/chi21a.html PDF: http://proceedings.mlr.press/v139/chi21a/chi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jianfeng family: Chi - given: Yuan family: Tian - given: Geoffrey J. family: Gordon - given: Han family: Zhao editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1866-1876 id: chi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1866 lastpage: 1876 published: 2021-07-01 00:00:00 +0000 - title: 'Private Alternating Least Squares: Practical Private Matrix Completion with Tighter Rates' abstract: 'We study the problem of differentially private (DP) matrix completion under user-level privacy. We design a joint differentially private variant of the popular Alternating-Least-Squares (ALS) method that achieves: i) (nearly) optimal sample complexity for matrix completion (in terms of number of items, users), and ii) the best known privacy/utility trade-off both theoretically, as well as on benchmark data sets. In particular, we provide the first global convergence analysis of ALS with noise introduced to ensure DP, and show that, in comparison to the best known alternative (the Private Frank-Wolfe algorithm by Jain et al. (2018)), our error bounds scale significantly better with respect to the number of items and users, which is critical in practical problems. 
Extensive validation on standard benchmarks demonstrates that the algorithm, in combination with carefully designed sampling procedures, is significantly more accurate than existing techniques, thus promising to be the first practical DP embedding model.' volume: 139 URL: https://proceedings.mlr.press/v139/chien21a.html PDF: http://proceedings.mlr.press/v139/chien21a/chien21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chien21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Steve family: Chien - given: Prateek family: Jain - given: Walid family: Krichene - given: Steffen family: Rendle - given: Shuang family: Song - given: Abhradeep family: Thakurta - given: Li family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1877-1887 id: chien21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1877 lastpage: 1887 published: 2021-07-01 00:00:00 +0000 - title: 'Light RUMs' abstract: 'A Random Utility Model (RUM) is a distribution on permutations over a universe of items. For each subset of the universe, a RUM induces a natural distribution of the winner in the subset: choose a permutation according to the RUM distribution and pick the maximum item in the subset according to the chosen permutation. RUMs are widely used in the theory of discrete choice. In this paper we consider the question of the (lossy) compressibility of RUMs on a universe of size $n$, i.e., the minimum number of bits required to approximate the winning probabilities of each slate. Our main result is that RUMs can be approximated using $\tilde{O}(n^2)$ bits, an exponential improvement over the standard representation; furthermore, we show that this bound is optimal. En route, we sharpen the classical existential result of McFadden and Train (2000) by showing that the minimum size of a mixture of multinomial logits required to approximate a general RUM is $\tilde{\Theta}(n)$.' volume: 139 URL: https://proceedings.mlr.press/v139/chierichetti21a.html PDF: http://proceedings.mlr.press/v139/chierichetti21a/chierichetti21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chierichetti21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Flavio family: Chierichetti - given: Ravi family: Kumar - given: Andrew family: Tomkins editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1888-1897 id: chierichetti21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1888 lastpage: 1897 published: 2021-07-01 00:00:00 +0000 - title: 'Parallelizing Legendre Memory Unit Training' abstract: 'Recently, a new recurrent neural network (RNN) named the Legendre Memory Unit (LMU) was proposed and shown to achieve state-of-the-art performance on several benchmark datasets. Here we leverage the linear time-invariant (LTI) memory component of the LMU to construct a simplified variant that can be parallelized during training (and yet executed as an RNN during inference), resulting in up to 200 times faster training. We note that our efficient parallelizing scheme is general and is applicable to any deep network whose recurrent components are linear dynamical systems. 
We demonstrate the improved accuracy of our new architecture compared to the original LMU and a variety of published LSTM and transformer networks across seven benchmarks. For instance, our LMU sets a new state-of-the-art result on psMNIST, and uses half the parameters while outperforming DistilBERT and LSTM models on IMDB sentiment analysis.' volume: 139 URL: https://proceedings.mlr.press/v139/chilkuri21a.html PDF: http://proceedings.mlr.press/v139/chilkuri21a/chilkuri21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chilkuri21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Narsimha Reddy family: Chilkuri - given: Chris family: Eliasmith editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1898-1907 id: chilkuri21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1898 lastpage: 1907 published: 2021-07-01 00:00:00 +0000 - title: 'Quantifying and Reducing Bias in Maximum Likelihood Estimation of Structured Anomalies' abstract: 'Anomaly estimation, or the problem of finding a subset of a dataset that differs from the rest of the dataset, is a classic problem in machine learning and data mining. In both theoretical work and in applications, the anomaly is assumed to have a specific structure defined by membership in an anomaly family. For example, in temporal data the anomaly family may be time intervals, while in network data the anomaly family may be connected subgraphs. The most prominent approach for anomaly estimation is to compute the Maximum Likelihood Estimator (MLE) of the anomaly; however, it was recently observed that for normally distributed data, the MLE is a biased estimator for some anomaly families. In this work, we demonstrate that in the normal means setting, the bias of the MLE depends on the size of the anomaly family. We prove that if the number of sets in the anomaly family that contain the anomaly is sub-exponential, then the MLE is asymptotically unbiased. We also provide empirical evidence that the converse is true: if the number of such sets is exponential, then the MLE is asymptotically biased. Our analysis unifies a number of earlier results on the bias of the MLE for specific anomaly families. Next, we derive a new anomaly estimator using a mixture model, and we prove that our anomaly estimator is asymptotically unbiased regardless of the size of the anomaly family. We illustrate the advantages of our estimator versus the MLE on disease outbreak data and highway traffic data.' volume: 139 URL: https://proceedings.mlr.press/v139/chitra21a.html PDF: http://proceedings.mlr.press/v139/chitra21a/chitra21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chitra21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Uthsav family: Chitra - given: Kimberly family: Ding - given: Jasper C.H. family: Lee - given: Benjamin J family: Raphael editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1908-1919 id: chitra21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1908 lastpage: 1919 published: 2021-07-01 00:00:00 +0000 - title: 'Robust Learning-Augmented Caching: An Experimental Study' abstract: 'Effective caching is crucial for performance of modern-day computing systems. 
A key optimization problem arising in caching – which item to evict to make room for a new item – cannot be optimally solved without knowing the future. There are many classical approximation algorithms for this problem, but more recently researchers started to successfully apply machine learning to decide what to evict by discovering implicit input patterns and predicting the future. While machine learning typically does not provide any worst-case guarantees, the new field of learning-augmented algorithms proposes solutions which leverage classical online caching algorithms to make the machine-learned predictors robust. We are the first to comprehensively evaluate these learning-augmented algorithms on real-world caching datasets and state-of-the-art machine-learned predictors. We show that a straightforward method – blindly following either a predictor or a classical robust algorithm, and switching whenever one becomes worse than the other – has only a low overhead over a well-performing predictor, while competing with classical methods when the coupled predictor fails, thus providing a cheap worst-case insurance.' volume: 139 URL: https://proceedings.mlr.press/v139/chledowski21a.html PDF: http://proceedings.mlr.press/v139/chledowski21a/chledowski21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chledowski21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jakub family: Chłędowski - given: Adam family: Polak - given: Bartosz family: Szabucki - given: Konrad Tomasz family: Żołna editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1920-1930 id: chledowski21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1920 lastpage: 1930 published: 2021-07-01 00:00:00 +0000 - title: 'Unifying Vision-and-Language Tasks via Text Generation' abstract: 'Existing methods for vision-and-language learning typically require designing task-specific architectures and objectives for each task. For example, a multi-label answer classifier for visual question answering, a region scorer for referring expression comprehension, and a language decoder for image captioning, etc. To alleviate these hassles, in this work, we propose a unified framework that learns different tasks in a single architecture with the same language modeling objective, i.e., multimodal conditional text generation, where our models learn to generate labels in text based on the visual and textual inputs. On 7 popular vision-and-language benchmarks, including visual question answering, referring expression comprehension, visual commonsense reasoning, most of which have been previously modeled as discriminative tasks, our generative approach (with a single unified architecture) reaches comparable performance to recent task-specific state-of-the-art vision-and-language models. Moreover, our generative approach shows better generalization ability on questions that have rare answers. Also, we show that our framework allows multi-task learning in a single architecture with a single set of parameters, achieving similar performance to separately optimized single-task models. 
Our code is publicly available at: https://github.com/j-min/VL-T5' volume: 139 URL: https://proceedings.mlr.press/v139/cho21a.html PDF: http://proceedings.mlr.press/v139/cho21a/cho21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cho21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jaemin family: Cho - given: Jie family: Lei - given: Hao family: Tan - given: Mohit family: Bansal editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1931-1942 id: cho21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1931 lastpage: 1942 published: 2021-07-01 00:00:00 +0000 - title: 'Learning from Nested Data with Ornstein Auto-Encoders' abstract: 'Many real-world datasets, e.g., the VGGFace2 dataset, which is a collection of multiple portraits of individuals, come with nested structures due to grouped observation. The Ornstein auto-encoder (OAE) is an emerging framework for representation learning from nested data, based on an optimal transport distance between random processes. An attractive feature of OAE is its ability to generate new variations nested within an observational unit, whether or not the unit is known to the model. A previously proposed algorithm for OAE, termed the random-intercept OAE (RIOAE), showed an impressive performance in learning nested representations, yet lacks theoretical justification. In this work, we show that RIOAE minimizes a loose upper bound of the employed optimal transport distance. After identifying several issues with RIOAE, we present the product-space OAE (PSOAE) that minimizes a tighter upper bound of the distance and achieves orthogonality in the representation space. PSOAE alleviates the instability of RIOAE and provides a more flexible representation of nested data. We demonstrate the high performance of PSOAE in the three key tasks of generative models: exemplar generation, style transfer, and new concept generation.' volume: 139 URL: https://proceedings.mlr.press/v139/choi21a.html PDF: http://proceedings.mlr.press/v139/choi21a/choi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-choi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Youngwon family: Choi - given: Sungdong family: Lee - given: Joong-Ho family: Won editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1943-1952 id: choi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1943 lastpage: 1952 published: 2021-07-01 00:00:00 +0000 - title: 'Variational Empowerment as Representation Learning for Goal-Conditioned Reinforcement Learning' abstract: 'Learning to reach goal states and learning diverse skills through mutual information maximization have been proposed as principled frameworks for unsupervised reinforcement learning, allowing agents to acquire broadly applicable multi-task policies with minimal reward engineering. 
In this paper, we discuss how these two approaches — goal-conditioned RL (GCRL) and MI-based RL — can be generalized into a single family of methods, interpreting mutual information maximization and variational empowerment as representation learning methods that acquire functionally aware state representations for goal reaching. Starting from a simple observation that the standard GCRL is encapsulated by the optimization objective of variational empowerment, we can derive novel variants of GCRL and variational empowerment under a single, unified optimization objective, such as adaptive-variance GCRL and linear-mapping GCRL, and study the characteristics of representation learning each variant provides. Furthermore, through the lens of GCRL, we show that adapting powerful techniques from GCRL, such as goal relabeling, into the variational MI context, as well as proper regularization on the variational posterior, provides substantial gains in algorithm performance, and propose a novel evaluation metric named latent goal reaching (LGR) as an objective measure for evaluating empowerment algorithms akin to goal-based RL. Through principled mathematical derivations and careful experimental validations, our work lays a novel foundation from which representation learning can be evaluated and analyzed in goal-based RL.' volume: 139 URL: https://proceedings.mlr.press/v139/choi21b.html PDF: http://proceedings.mlr.press/v139/choi21b/choi21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-choi21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jongwook family: Choi - given: Archit family: Sharma - given: Honglak family: Lee - given: Sergey family: Levine - given: Shixiang Shane family: Gu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1953-1963 id: choi21b issued: date-parts: - 2021 - 7 - 1 firstpage: 1953 lastpage: 1963 published: 2021-07-01 00:00:00 +0000 - title: 'Label-Only Membership Inference Attacks' abstract: 'Membership inference is one of the simplest privacy threats faced by machine learning models that are trained on private sensitive data. In this attack, an adversary infers whether a particular point was used to train the model, or not, by observing the model’s predictions. Whereas current attack methods all require access to the model’s predicted confidence score, we introduce a label-only attack that instead evaluates the robustness of the model’s predicted (hard) labels under perturbations of the input, to infer membership. Our label-only attack is not only as effective as attacks requiring access to confidence scores, it also demonstrates that a class of defenses against membership inference, which we call “confidence masking” because they obfuscate the confidence scores to thwart attacks, are insufficient to prevent the leakage of private information. Our experiments show that training with differential privacy or strong L2 regularization are the only current defenses that meaningfully decrease leakage of private information, even for points that are outliers of the training distribution.' 
volume: 139 URL: https://proceedings.mlr.press/v139/choquette-choo21a.html PDF: http://proceedings.mlr.press/v139/choquette-choo21a/choquette-choo21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-choquette-choo21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Christopher A. family: Choquette-Choo - given: Florian family: Tramer - given: Nicholas family: Carlini - given: Nicolas family: Papernot editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1964-1974 id: choquette-choo21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1964 lastpage: 1974 published: 2021-07-01 00:00:00 +0000 - title: 'Modeling Hierarchical Structures with Continuous Recursive Neural Networks' abstract: 'Recursive Neural Networks (RvNNs), which compose sequences according to their underlying hierarchical syntactic structure, have performed well in several natural language processing tasks compared to similar models without structural biases. However, traditional RvNNs are incapable of inducing the latent structure in a plain text sequence on their own. Several extensions have been proposed to overcome this limitation. Nevertheless, these extensions tend to rely on surrogate gradients or reinforcement learning at the cost of higher bias or variance. In this work, we propose Continuous Recursive Neural Network (CRvNN) as a backpropagation-friendly alternative to address the aforementioned limitations. This is done by incorporating a continuous relaxation to the induced structure. We demonstrate that CRvNN achieves strong performance in challenging synthetic tasks such as logical inference (Bowman et al., 2015b) and ListOps (Nangia & Bowman, 2018). We also show that CRvNN performs comparably or better than prior latent structure models on real-world tasks such as sentiment analysis and natural language inference.' volume: 139 URL: https://proceedings.mlr.press/v139/chowdhury21a.html PDF: http://proceedings.mlr.press/v139/chowdhury21a/chowdhury21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chowdhury21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jishnu Ray family: Chowdhury - given: Cornelia family: Caragea editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1975-1988 id: chowdhury21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1975 lastpage: 1988 published: 2021-07-01 00:00:00 +0000 - title: 'Scaling Multi-Agent Reinforcement Learning with Selective Parameter Sharing' abstract: 'Sharing parameters in multi-agent deep reinforcement learning has played an essential role in allowing algorithms to scale to a large number of agents. Parameter sharing between agents significantly decreases the number of trainable parameters, shortening training times to tractable levels, and has been linked to more efficient learning. However, having all agents share the same parameters can also have a detrimental effect on learning. We demonstrate the impact of parameter sharing methods on training speed and converged returns, establishing that when applied indiscriminately, their effectiveness is highly dependent on the environment. 
We propose a novel method to automatically identify agents which may benefit from sharing parameters by partitioning them based on their abilities and goals. Our approach combines the increased sample efficiency of parameter sharing with the representational capacity of multiple independent networks to reduce training time and increase final returns.' volume: 139 URL: https://proceedings.mlr.press/v139/christianos21a.html PDF: http://proceedings.mlr.press/v139/christianos21a/christianos21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-christianos21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Filippos family: Christianos - given: Georgios family: Papoudakis - given: Muhammad A family: Rahman - given: Stefano V family: Albrecht editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1989-1998 id: christianos21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1989 lastpage: 1998 published: 2021-07-01 00:00:00 +0000 - title: 'Beyond Variance Reduction: Understanding the True Impact of Baselines on Policy Optimization' abstract: 'Bandit and reinforcement learning (RL) problems can often be framed as optimization problems where the goal is to maximize average performance while having access only to stochastic estimates of the true gradient. Traditionally, stochastic optimization theory predicts that learning dynamics are governed by the curvature of the loss function and the noise of the gradient estimates. In this paper we demonstrate that the standard view is too limited for bandit and RL problems. To allow our analysis to be interpreted in light of multi-step MDPs, we focus on techniques derived from stochastic optimization principles (e.g., natural policy gradient and EXP3) and we show that some standard assumptions from optimization theory are violated in these problems. We present theoretical results showing that, at least for bandit problems, curvature and noise are not sufficient to explain the learning dynamics and that seemingly innocuous choices like the baseline can determine whether an algorithm converges. These theoretical findings match our empirical evaluation, which we extend to multi-state MDPs.' volume: 139 URL: https://proceedings.mlr.press/v139/chung21a.html PDF: http://proceedings.mlr.press/v139/chung21a/chung21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-chung21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Wesley family: Chung - given: Valentin family: Thomas - given: Marlos C. family: Machado - given: Nicolas Le family: Roux editor: - given: Marina family: Meila - given: Tong family: Zhang page: 1999-2009 id: chung21a issued: date-parts: - 2021 - 7 - 1 firstpage: 1999 lastpage: 2009 published: 2021-07-01 00:00:00 +0000 - title: 'First-Order Methods for Wasserstein Distributionally Robust MDP' abstract: 'Markov decision processes (MDPs) are known to be sensitive to parameter specification. Distributionally robust MDPs alleviate this issue by allowing for \textit{ambiguity sets} which give a set of possible distributions over parameter sets. The goal is to find an optimal policy with respect to the worst-case parameter distribution. 
We propose a framework for solving Distributionally robust MDPs via first-order methods, and instantiate it for several types of Wasserstein ambiguity sets. By developing efficient proximal updates, our algorithms achieve a convergence rate of $O\left(NA^{2.5}S^{3.5}\log(S)\log(\epsilon^{-1})\epsilon^{-1.5} \right)$ for the number of kernels $N$ in the support of the nominal distribution, states $S$, and actions $A$; this rate varies slightly based on the Wasserstein setup. Our dependence on $N,A$ and $S$ is significantly better than existing methods, which have a complexity of $O\left(N^{3.5}A^{3.5}S^{4.5}\log^{2}(\epsilon^{-1}) \right)$. Numerical experiments show that our algorithm is significantly more scalable than state-of-the-art approaches across several domains.' volume: 139 URL: https://proceedings.mlr.press/v139/clement21a.html PDF: http://proceedings.mlr.press/v139/clement21a/clement21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-clement21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Julien Grand family: Clement - given: Christian family: Kroer editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2010-2019 id: clement21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2010 lastpage: 2019 published: 2021-07-01 00:00:00 +0000 - title: 'Phasic Policy Gradient' abstract: 'We introduce Phasic Policy Gradient (PPG), a reinforcement learning framework which modifies traditional on-policy actor-critic methods by separating policy and value function training into distinct phases. In prior methods, one must choose between using a shared network or separate networks to represent the policy and value function. Using separate networks avoids interference between objectives, while using a shared network allows useful features to be shared. PPG is able to achieve the best of both worlds by splitting optimization into two phases, one that advances training and one that distills features. PPG also enables the value function to be more aggressively optimized with a higher level of sample reuse. Compared to PPO, we find that PPG significantly improves sample efficiency on the challenging Procgen Benchmark.' volume: 139 URL: https://proceedings.mlr.press/v139/cobbe21a.html PDF: http://proceedings.mlr.press/v139/cobbe21a/cobbe21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cobbe21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Karl W family: Cobbe - given: Jacob family: Hilton - given: Oleg family: Klimov - given: John family: Schulman editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2020-2027 id: cobbe21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2020 lastpage: 2027 published: 2021-07-01 00:00:00 +0000 - title: 'Riemannian Convex Potential Maps' abstract: 'Modeling distributions on Riemannian manifolds is a crucial component in understanding non-Euclidean data that arises, e.g., in physics and geology. The budding approaches in this space are limited by representational and computational tradeoffs. We propose and study a class of flows that uses convex potentials from Riemannian optimal transport. 
These are universal and can model distributions on any compact Riemannian manifold without requiring domain knowledge of the manifold to be integrated into the architecture. We demonstrate that these flows can model standard distributions on spheres and tori, and on synthetic and geological data.' volume: 139 URL: https://proceedings.mlr.press/v139/cohen21a.html PDF: http://proceedings.mlr.press/v139/cohen21a/cohen21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cohen21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Samuel family: Cohen - given: Brandon family: Amos - given: Yaron family: Lipman editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2028-2038 id: cohen21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2028 lastpage: 2038 published: 2021-07-01 00:00:00 +0000 - title: 'Scaling Properties of Deep Residual Networks' abstract: 'Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in the neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation or neither of these. These findings cast doubt on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.' volume: 139 URL: https://proceedings.mlr.press/v139/cohen21b.html PDF: http://proceedings.mlr.press/v139/cohen21b/cohen21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cohen21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Alain-Sam family: Cohen - given: Rama family: Cont - given: Alain family: Rossier - given: Renyuan family: Xu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2039-2048 id: cohen21b issued: date-parts: - 2021 - 7 - 1 firstpage: 2039 lastpage: 2048 published: 2021-07-01 00:00:00 +0000 - title: 'Differentially-Private Clustering of Easy Instances' abstract: 'Clustering is a fundamental problem in data analysis. In differentially private clustering, the goal is to identify k cluster centers without disclosing information on individual data points. Despite significant research progress, the problem has so far resisted practical solutions. In this work we aim at providing simple, implementable differentially private clustering algorithms when the data is "easy," e.g., when there exists a significant separation between the clusters. For the easy instances we consider, we have a simple implementation based on utilizing non-private clustering algorithms, and combining them privately. 
We are able to get improved sample complexity bounds in some cases of Gaussian mixtures and k-means. We complement our theoretical algorithms with experiments on simulated data.' volume: 139 URL: https://proceedings.mlr.press/v139/cohen21c.html PDF: http://proceedings.mlr.press/v139/cohen21c/cohen21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cohen21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Edith family: Cohen - given: Haim family: Kaplan - given: Yishay family: Mansour - given: Uri family: Stemmer - given: Eliad family: Tsfadia editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2049-2059 id: cohen21c issued: date-parts: - 2021 - 7 - 1 firstpage: 2049 lastpage: 2059 published: 2021-07-01 00:00:00 +0000 - title: 'Improving Ultrametrics Embeddings Through Coresets' abstract: 'To tackle the curse of dimensionality in data analysis and unsupervised learning, it is critical to be able to efficiently compute “simple” faithful representations of the data that help extract information and improve understanding and visualization of the structure. When the dataset consists of $d$-dimensional vectors, simple representations of the data may consist of trees or ultrametrics, and the goal is to best preserve the distances (i.e.: dissimilarity values) between data elements. To circumvent the quadratic running times of the most popular methods for fitting ultrametrics, such as average, single, or complete linkage, \citet{CKL20} recently presented a new algorithm that for any $c \ge 1$, outputs in time $n^{1+O(1/c^2)}$ an ultrametric $\Delta$ such that for any two points $u, v$, $\Delta(u, v)$ is within a multiplicative factor of $5c$ of the distance between $u$ and $v$ in the “best” ultrametric representation. We improve this result and show how to strengthen the above guarantee from $5c$ to $\sqrt{2}c + \varepsilon$ while achieving the same asymptotic running time. To complement the improved theoretical bound, we additionally show that the performance of our algorithm is significantly better for various real-world datasets.' volume: 139 URL: https://proceedings.mlr.press/v139/cohen-addad21a.html PDF: http://proceedings.mlr.press/v139/cohen-addad21a/cohen-addad21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cohen-addad21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Vincent family: Cohen-Addad - given: Rémi family: De Joannis De Verclos - given: Guillaume family: Lagarde editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2060-2068 id: cohen-addad21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2060 lastpage: 2068 published: 2021-07-01 00:00:00 +0000 - title: 'Correlation Clustering in Constant Many Parallel Rounds' abstract: 'Correlation clustering is a central topic in unsupervised learning, with many applications in ML and data mining. In correlation clustering, one receives as input a signed graph and the goal is to partition it to minimize the number of disagreements. In this work we propose a massively parallel computation (MPC) algorithm for this problem that is considerably faster than prior work. 
In particular, our algorithm uses machines with memory sublinear in the number of nodes in the graph and returns a constant approximation while running only for a constant number of rounds. To the best of our knowledge, our algorithm is the first that can provably approximate a clustering problem using only a constant number of MPC rounds in the sublinear memory regime. We complement our analysis with an experimental scalability evaluation of our techniques.' volume: 139 URL: https://proceedings.mlr.press/v139/cohen-addad21b.html PDF: http://proceedings.mlr.press/v139/cohen-addad21b/cohen-addad21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cohen-addad21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Vincent family: Cohen-Addad - given: Silvio family: Lattanzi - given: Slobodan family: Mitrović - given: Ashkan family: Norouzi-Fard - given: Nikos family: Parotsidis - given: Jakub family: Tarnawski editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2069-2078 id: cohen-addad21b issued: date-parts: - 2021 - 7 - 1 firstpage: 2069 lastpage: 2078 published: 2021-07-01 00:00:00 +0000 - title: 'Concentric mixtures of Mallows models for top-$k$ rankings: sampling and identifiability' abstract: 'In this paper, we study mixtures of two Mallows models for top-$k$ rankings with equal location parameters but with different scale parameters (a mixture of concentric Mallows models). These models arise when we have a heterogeneous population of voters formed by two populations, one of which is a subpopulation of expert voters. We show the identifiability of both components and the learnability of their respective parameters. These results are based upon, first, bounding the sample complexity for the Borda algorithm with top-$k$ rankings. Second, we characterize the distances between rankings, showing that an off-the-shelf clustering algorithm separates the rankings by components with high probability, provided the scales are well-separated. As a by-product, we include an efficient sampling algorithm for Mallows top-$k$ rankings. Finally, since the rank aggregation will suffer from a large amount of noise introduced by the non-expert voters, we adapt the Borda algorithm to recover the ground truth consensus ranking, which is especially consistent with the expert rankings.' volume: 139 URL: https://proceedings.mlr.press/v139/collas21a.html PDF: http://proceedings.mlr.press/v139/collas21a/collas21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-collas21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Fabien family: Collas - given: Ekhine family: Irurozki editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2079-2088 id: collas21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2079 lastpage: 2088 published: 2021-07-01 00:00:00 +0000 - title: 'Exploiting Shared Representations for Personalized Federated Learning' abstract: 'Deep neural networks have shown the ability to extract universal feature representations from data such as images and text that have been useful for a variety of learning tasks. However, the fruits of representation learning have yet to be fully realized in federated settings. 
Although data in federated settings is often non-i.i.d. across clients, the success of centralized deep learning suggests that data often shares a global {\em feature representation}, while the statistical heterogeneity across clients or tasks is concentrated in the {\em labels}. Based on this intuition, we propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client. Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation. We prove that this method obtains linear convergence to the ground-truth representation with near-optimal sample complexity in a linear setting, demonstrating that it can efficiently reduce the problem dimension for each client. Further, we provide extensive experimental results demonstrating the improvement of our method over alternative personalized federated learning approaches in heterogeneous settings.' volume: 139 URL: https://proceedings.mlr.press/v139/collins21a.html PDF: http://proceedings.mlr.press/v139/collins21a/collins21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-collins21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Liam family: Collins - given: Hamed family: Hassani - given: Aryan family: Mokhtari - given: Sanjay family: Shakkottai editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2089-2099 id: collins21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2089 lastpage: 2099 published: 2021-07-01 00:00:00 +0000 - title: 'Differentiable Particle Filtering via Entropy-Regularized Optimal Transport' abstract: 'Particle Filtering (PF) methods are an established class of procedures for performing inference in non-linear state-space models. Resampling is a key ingredient of PF necessary to obtain low variance likelihood and states estimates. However, traditional resampling methods result in PF-based loss functions being non-differentiable with respect to model and PF parameters. In a variational inference context, resampling also yields high variance gradient estimates of the PF-based evidence lower bound. By leveraging optimal transport ideas, we introduce a principled differentiable particle filter and provide convergence results. We demonstrate this novel method on a variety of applications.' volume: 139 URL: https://proceedings.mlr.press/v139/corenflos21a.html PDF: http://proceedings.mlr.press/v139/corenflos21a/corenflos21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-corenflos21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Adrien family: Corenflos - given: James family: Thornton - given: George family: Deligiannidis - given: Arnaud family: Doucet editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2100-2111 id: corenflos21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2100 lastpage: 2111 published: 2021-07-01 00:00:00 +0000 - title: 'Fairness and Bias in Online Selection' abstract: 'There is growing awareness and concern about fairness in machine learning and algorithm design. 
This is particularly true in online selection problems where decisions are often biased, for example, when assessing credit risks or hiring staff. We address the issues of fairness and bias in online selection by introducing multi-color versions of the classic secretary and prophet problems. Interestingly, existing algorithms for these problems are either very unfair or very inefficient, so we develop optimal fair algorithms for these new problems and provide tight bounds on their competitiveness. We validate our theoretical findings on real-world data.' volume: 139 URL: https://proceedings.mlr.press/v139/correa21a.html PDF: http://proceedings.mlr.press/v139/correa21a/correa21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-correa21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jose family: Correa - given: Andres family: Cristi - given: Paul family: Duetting - given: Ashkan family: Norouzi-Fard editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2112-2121 id: correa21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2112 lastpage: 2121 published: 2021-07-01 00:00:00 +0000 - title: 'Relative Deviation Margin Bounds' abstract: 'We present a series of new and more favorable margin-based learning guarantees that depend on the empirical margin loss of a predictor. We give two types of learning bounds, in terms of either the Rademacher complexity or the empirical $\ell_\infty$-covering number of the hypothesis set used, both distribution-dependent and valid for general families. Furthermore, using our relative deviation margin bounds, we derive distribution-dependent generalization bounds for unbounded loss functions under the assumption of a finite moment. We also briefly highlight several applications of these bounds and discuss their connection with existing results.' volume: 139 URL: https://proceedings.mlr.press/v139/cortes21a.html PDF: http://proceedings.mlr.press/v139/cortes21a/cortes21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cortes21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Corinna family: Cortes - given: Mehryar family: Mohri - given: Ananda Theertha family: Suresh editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2122-2131 id: cortes21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2122 lastpage: 2131 published: 2021-07-01 00:00:00 +0000 - title: 'A Discriminative Technique for Multiple-Source Adaptation' abstract: 'We present a new discriminative technique for the multiple-source adaptation (MSA) problem. Unlike previous work, which relies on density estimation for each source domain, our solution only requires conditional probabilities that can be straightforwardly and accurately estimated from unlabeled data from the source domains. We give a detailed analysis of our new technique, including general guarantees based on Rényi divergences, and learning bounds when conditional Maxent is used for estimating conditional probabilities for a point to belong to a source domain. We show that these guarantees compare favorably to those that can be derived for the generative solution, using kernel density estimation. 
Our experiments with real-world applications further demonstrate that our new discriminative MSA algorithm outperforms the previous generative solution as well as other domain adaptation baselines.' volume: 139 URL: https://proceedings.mlr.press/v139/cortes21b.html PDF: http://proceedings.mlr.press/v139/cortes21b/cortes21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cortes21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Corinna family: Cortes - given: Mehryar family: Mohri - given: Ananda Theertha family: Suresh - given: Ningshan family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2132-2143 id: cortes21b issued: date-parts: - 2021 - 7 - 1 firstpage: 2132 lastpage: 2143 published: 2021-07-01 00:00:00 +0000 - title: 'Characterizing Fairness Over the Set of Good Models Under Selective Labels' abstract: 'Algorithmic risk assessments are used to inform decisions in a wide variety of high-stakes settings. Often multiple predictive models deliver similar overall performance but differ markedly in their predictions for individual cases, an empirical phenomenon known as the “Rashomon Effect.” These models may have different properties over various groups, and therefore have different predictive fairness properties. We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance, or “the set of good models.” Our framework addresses the empirically relevant challenge of selectively labelled data in the setting where the selection decision and outcome are unconfounded given the observed data features. Our framework can be used to 1) audit for predictive bias; or 2) replace an existing model with one that has better fairness properties. We illustrate these use cases on a recidivism prediction task and a real-world credit-scoring task.' volume: 139 URL: https://proceedings.mlr.press/v139/coston21a.html PDF: http://proceedings.mlr.press/v139/coston21a/coston21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-coston21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Amanda family: Coston - given: Ashesh family: Rambachan - given: Alexandra family: Chouldechova editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2144-2155 id: coston21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2144 lastpage: 2155 published: 2021-07-01 00:00:00 +0000 - title: 'Two-way kernel matrix puncturing: towards resource-efficient PCA and spectral clustering' abstract: 'The article introduces an elementary cost and storage reduction method for spectral clustering and principal component analysis. The method consists in randomly “puncturing” both the data matrix $X\in\mathbb{C}^{p\times n}$ (or $\mathbb{R}^{p\times n}$) and its corresponding kernel (Gram) matrix $K$ through Bernoulli masks: $S\in\{0,1\}^{p\times n}$ for $X$ and $B\in\{0,1\}^{n\times n}$ for $K$. The resulting “two-way punctured” kernel is thus given by $K=\frac1p[(X\odot S)^{\mathsf{H}} (X\odot S)]\odot B$. 
We demonstrate that, for $X$ composed of independent columns drawn from a Gaussian mixture model, as $n,p\to\infty$ with $p/n\to c_0\in(0,\infty)$, the spectral behavior of $K$ – its limiting eigenvalue distribution, as well as its isolated eigenvalues and eigenvectors – is fully tractable and exhibits a series of counter-intuitive phenomena. We notably prove, and empirically confirm on various image databases, that it is possible to drastically puncture the data, thereby providing possibly huge computational and storage gains, for a virtually constant (clustering or PCA) performance. This preliminary study thus opens the path towards rethinking, from a large dimensional standpoint, computational and storage costs in elementary machine learning models.' volume: 139 URL: https://proceedings.mlr.press/v139/couillet21a.html PDF: http://proceedings.mlr.press/v139/couillet21a/couillet21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-couillet21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Romain family: Couillet - given: Florent family: Chatelain - given: Nicolas Le family: Bihan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2156-2165 id: couillet21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2156 lastpage: 2165 published: 2021-07-01 00:00:00 +0000 - title: 'Explaining Time Series Predictions with Dynamic Masks' abstract: 'How can we explain the predictions of a machine learning model? When the data is structured as a multivariate time series, this question induces additional difficulties such as the necessity for the explanation to embody the time dependency and the large number of inputs. To address these challenges, we propose dynamic masks (Dynamask). This method produces instance-wise importance scores for each feature at each time step by fitting a perturbation mask to the input sequence. In order to incorporate the time dependency of the data, Dynamask studies the effects of dynamic perturbation operators. In order to tackle the large number of inputs, we propose a scheme to make the feature selection parsimonious (to select no more features than necessary) and legible (a notion that we detail by making a parallel with information theory). With synthetic and real-world data, we demonstrate that the dynamic underpinning of Dynamask, together with its parsimony, offers a neat improvement in the identification of feature importance over time. The modularity of Dynamask makes it ideal as a plug-in to increase the transparency of a wide range of machine learning models in areas such as medicine and finance, where time series are abundant.' 
volume: 139 URL: https://proceedings.mlr.press/v139/crabbe21a.html PDF: http://proceedings.mlr.press/v139/crabbe21a/crabbe21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-crabbe21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jonathan family: Crabbé - given: Mihaela family: Van Der Schaar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2166-2177 id: crabbe21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2166 lastpage: 2177 published: 2021-07-01 00:00:00 +0000 - title: 'Generalised Lipschitz Regularisation Equals Distributional Robustness' abstract: 'The problem of adversarial examples has highlighted the need for a theory of regularisation that is general enough to apply to exotic function classes, such as universal approximators. In response, we have been able to significantly sharpen existing results regarding the relationship between distributional robustness and regularisation, when defined with a transportation cost uncertainty set. The theory allows us to characterise the conditions under which the distributional robustness equals a Lipschitz-regularised model, and to tightly quantify, for the first time, the slackness under very mild assumptions. As a theoretical application we show a new result explicating the connection between adversarial learning and distributional robustness. We then give new results for how to achieve Lipschitz regularisation of kernel classifiers, which are demonstrated experimentally.' volume: 139 URL: https://proceedings.mlr.press/v139/cranko21a.html PDF: http://proceedings.mlr.press/v139/cranko21a/cranko21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cranko21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zac family: Cranko - given: Zhan family: Shi - given: Xinhua family: Zhang - given: Richard family: Nock - given: Simon family: Kornblith editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2178-2188 id: cranko21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2178 lastpage: 2188 published: 2021-07-01 00:00:00 +0000 - title: 'Environment Inference for Invariant Learning' abstract: 'Learning models that gracefully handle distribution shifts is central to research on domain generalization, robust optimization, and fairness. A promising formulation is domain-invariant learning, which identifies the key issue of learning which features are domain-specific versus domain-invariant. An important assumption in this area is that the training examples are partitioned into “domains” or “environments”. Our focus is on the more common setting where such partitions are not provided. We propose EIIL, a general framework for domain-invariant learning that incorporates Environment Inference to directly infer partitions that are maximally informative for downstream Invariant Learning. We show that EIIL outperforms invariant learning methods on the CMNIST benchmark without using environment labels, and significantly outperforms ERM on worst-group performance in the Waterbirds dataset. Finally, we establish connections between EIIL and algorithmic fairness, which enables EIIL to improve accuracy and calibration in a fair prediction problem.' 
volume: 139 URL: https://proceedings.mlr.press/v139/creager21a.html PDF: http://proceedings.mlr.press/v139/creager21a/creager21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-creager21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Elliot family: Creager - given: Joern-Henrik family: Jacobsen - given: Richard family: Zemel editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2189-2200 id: creager21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2189 lastpage: 2200 published: 2021-07-01 00:00:00 +0000 - title: 'Mind the Box: $l_1$-APGD for Sparse Adversarial Attacks on Image Classifiers' abstract: 'We show that when the image domain $[0,1]^d$ is also taken into account, established $l_1$-projected gradient descent (PGD) attacks are suboptimal as they do not consider that the effective threat model is the intersection of the $l_1$-ball and $[0,1]^d$. We study the expected sparsity of the steepest descent step for this effective threat model and show that the exact projection onto this set is computationally feasible and yields better performance. Moreover, we propose an adaptive form of PGD which is highly effective even with a small budget of iterations. Our resulting $l_1$-APGD is a strong white-box attack showing that prior works overestimated their $l_1$-robustness. Using $l_1$-APGD for adversarial training, we get a robust classifier with SOTA $l_1$-robustness. Finally, we combine $l_1$-APGD and an adaptation of the Square Attack to $l_1$ into $l_1$-AutoAttack, an ensemble of attacks which reliably assesses adversarial robustness for the threat model of $l_1$-ball intersected with $[0,1]^d$.' volume: 139 URL: https://proceedings.mlr.press/v139/croce21a.html PDF: http://proceedings.mlr.press/v139/croce21a/croce21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-croce21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Francesco family: Croce - given: Matthias family: Hein editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2201-2211 id: croce21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2201 lastpage: 2211 published: 2021-07-01 00:00:00 +0000 - title: 'Parameterless Transductive Feature Re-representation for Few-Shot Learning' abstract: 'Recent literature in few-shot learning (FSL) has shown that transductive methods often outperform their inductive counterparts. However, most transductive solutions, particularly the meta-learning based ones, require inserting trainable parameters on top of some inductive baselines to facilitate transduction. In this paper, we propose a parameterless transductive feature re-representation framework that differs from all existing solutions from the following perspectives. (1) It is widely compatible with existing FSL methods, including meta-learning and fine-tuning based models. (2) The framework is simple and introduces no extra training parameters when applied to any architecture. We conduct experiments on three benchmark datasets by applying the framework to both representative meta-learning baselines and state-of-the-art FSL methods. Our framework consistently improves performance in all experiments and refreshes the state-of-the-art FSL results.' 
volume: 139 URL: https://proceedings.mlr.press/v139/cui21a.html PDF: http://proceedings.mlr.press/v139/cui21a/cui21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cui21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Wentao family: Cui - given: Yuhong family: Guo editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2212-2221 id: cui21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2212 lastpage: 2221 published: 2021-07-01 00:00:00 +0000 - title: 'Randomized Algorithms for Submodular Function Maximization with a $k$-System Constraint' abstract: 'Submodular optimization has numerous applications such as crowdsourcing and viral marketing. In this paper, we study the problem of non-negative submodular function maximization subject to a $k$-system constraint, which generalizes many other important constraints in submodular optimization such as cardinality constraint, matroid constraint, and $k$-extendible system constraint. The existing approaches for this problem are all based on deterministic algorithmic frameworks, and the best approximation ratio achieved by these algorithms (for a general submodular function) is $k+2\sqrt{k+2}+3$. We propose a randomized algorithm with an improved approximation ratio of $(1+\sqrt{k})^2$, while achieving nearly-linear time complexity significantly lower than that of the state-of-the-art algorithm. We also show that our algorithm can be further generalized to address a stochastic case where the elements can be adaptively selected, and propose an approximation ratio of $(1+\sqrt{k+1})^2$ for the adaptive optimization case. The empirical performance of our algorithms is extensively evaluated in several applications related to data mining and social computing, and the experimental results demonstrate the superiorities of our algorithms in terms of both utility and efficiency.' volume: 139 URL: https://proceedings.mlr.press/v139/cui21b.html PDF: http://proceedings.mlr.press/v139/cui21b/cui21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cui21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shuang family: Cui - given: Kai family: Han - given: Tianshuai family: Zhu - given: Jing family: Tang - given: Benwei family: Wu - given: He family: Huang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2222-2232 id: cui21b issued: date-parts: - 2021 - 7 - 1 firstpage: 2222 lastpage: 2232 published: 2021-07-01 00:00:00 +0000 - title: 'GBHT: Gradient Boosting Histogram Transform for Density Estimation' abstract: 'In this paper, we propose a density estimation algorithm called \textit{Gradient Boosting Histogram Transform} (GBHT), where we adopt the \textit{Negative Log Likelihood} as the loss function to make the boosting procedure available for the unsupervised tasks. From a learning theory viewpoint, we first prove fast convergence rates for GBHT with the smoothness assumption that the underlying density function lies in the space $C^{0,\alpha}$. Then when the target density function lies in spaces $C^{1,\alpha}$, we present an upper bound for GBHT which is smaller than the lower bound of its corresponding base learner, in the sense of convergence rates. 
To the best of our knowledge, we make the first attempt to theoretically explain why boosting can enhance the performance of its base learners for density estimation problems. In experiments, we not only conduct performance comparisons with the widely used KDE, but also apply GBHT to anomaly detection to showcase a further application of GBHT.' volume: 139 URL: https://proceedings.mlr.press/v139/cui21c.html PDF: http://proceedings.mlr.press/v139/cui21c/cui21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cui21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jingyi family: Cui - given: Hanyuan family: Hang - given: Yisen family: Wang - given: Zhouchen family: Lin editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2233-2243 id: cui21c issued: date-parts: - 2021 - 7 - 1 firstpage: 2233 lastpage: 2243 published: 2021-07-01 00:00:00 +0000 - title: 'ProGraML: A Graph-based Program Representation for Data Flow Analysis and Compiler Optimizations' abstract: 'Machine learning (ML) is increasingly seen as a viable approach for building compiler optimization heuristics, but many ML methods cannot replicate even the simplest of the data flow analyses that are critical to making good optimization decisions. We posit that if ML cannot do that, then it is insufficiently able to reason about programs. We formulate data flow analyses as supervised learning tasks and introduce a large open dataset of programs and their corresponding labels from several analyses. We use this dataset to benchmark ML methods and show that they struggle on these fundamental program reasoning tasks. We propose ProGraML - Program Graphs for Machine Learning - a language-independent, portable representation of program semantics. ProGraML overcomes the limitations of prior works and yields improved performance on downstream optimization tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/cummins21a.html PDF: http://proceedings.mlr.press/v139/cummins21a/cummins21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cummins21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chris family: Cummins - given: Zacharias V. family: Fisches - given: Tal family: Ben-Nun - given: Torsten family: Hoefler - given: Michael F P family: O’Boyle - given: Hugh family: Leather editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2244-2253 id: cummins21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2244 lastpage: 2253 published: 2021-07-01 00:00:00 +0000 - title: 'Combining Pessimism with Optimism for Robust and Efficient Model-Based Deep Reinforcement Learning' abstract: 'In real-world tasks, reinforcement learning (RL) agents frequently encounter situations that are not present during training time. To ensure reliable performance, the RL agents need to exhibit robustness to such worst-case situations. The robust-RL framework addresses this challenge via a minimax optimization between an agent and an adversary. Previous robust RL algorithms are either sample inefficient, lack robustness guarantees, or do not scale to large problems. 
We propose the Robust Hallucinated Upper-Confidence RL (RH-UCRL) algorithm to provably solve this problem while attaining near-optimal sample complexity guarantees. RH-UCRL is a model-based reinforcement learning (MBRL) algorithm that effectively distinguishes between epistemic and aleatoric uncertainty and efficiently explores both the agent and the adversary decision spaces during policy learning. We scale RH-UCRL to complex tasks via neural network ensemble models as well as neural network policies. Experimentally, we demonstrate that RH-UCRL outperforms other robust deep RL algorithms in a variety of adversarial environments.' volume: 139 URL: https://proceedings.mlr.press/v139/curi21a.html PDF: http://proceedings.mlr.press/v139/curi21a/curi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-curi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sebastian family: Curi - given: Ilija family: Bogunovic - given: Andreas family: Krause editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2254-2264 id: curi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2254 lastpage: 2264 published: 2021-07-01 00:00:00 +0000 - title: 'Quantifying Availability and Discovery in Recommender Systems via Stochastic Reachability' abstract: 'In this work, we consider how preference models in interactive recommendation systems determine the availability of content and users’ opportunities for discovery. We propose an evaluation procedure based on stochastic reachability to quantify the maximum probability of recommending a target piece of content to a user for a set of allowable strategic modifications. This framework allows us to compute an upper bound on the likelihood of recommendation with minimal assumptions about user behavior. Stochastic reachability can be used to detect biases in the availability of content and diagnose limitations in the opportunities for discovery granted to users. We show that this metric can be computed efficiently as a convex program for a variety of practical settings, and further argue that reachability is not inherently at odds with accuracy. We demonstrate evaluations of recommendation algorithms trained on large datasets of explicit and implicit ratings. Our results illustrate how preference models, selection rules, and user interventions impact reachability and how these effects can be distributed unevenly.' volume: 139 URL: https://proceedings.mlr.press/v139/curmei21a.html PDF: http://proceedings.mlr.press/v139/curmei21a/curmei21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-curmei21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mihaela family: Curmei - given: Sarah family: Dean - given: Benjamin family: Recht editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2265-2275 id: curmei21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2265 lastpage: 2275 published: 2021-07-01 00:00:00 +0000 - title: 'Dynamic Balancing for Model Selection in Bandits and RL' abstract: 'We propose a framework for model selection by combining base algorithms in stochastic bandits and reinforcement learning. We require a candidate regret bound for each base algorithm that may or may not hold. 
We select base algorithms to play in each round using a “balancing condition” on the candidate regret bounds. Our approach simultaneously recovers previous worst-case regret bounds, while also obtaining much smaller regret in natural scenarios when some base learners significantly exceed their candidate bounds. Our framework is relevant in many settings, including linear bandits and MDPs with nested function classes, linear bandits with unknown misspecification, and tuning confidence parameters of algorithms such as LinUCB. Moreover, unlike recent efforts in model selection for linear stochastic bandits, our approach can be extended to consider adversarial rather than stochastic contexts.' volume: 139 URL: https://proceedings.mlr.press/v139/cutkosky21a.html PDF: http://proceedings.mlr.press/v139/cutkosky21a/cutkosky21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-cutkosky21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ashok family: Cutkosky - given: Christoph family: Dann - given: Abhimanyu family: Das - given: Claudio family: Gentile - given: Aldo family: Pacchiano - given: Manish family: Purohit editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2276-2285 id: cutkosky21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2276 lastpage: 2285 published: 2021-07-01 00:00:00 +0000 - title: 'ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases' abstract: 'Convolutional architectures have proven extremely successful for vision tasks. Their hard inductive biases enable sample-efficient learning, but come at the cost of a potentially lower performance ceiling. Vision Transformers (ViTs) rely on more flexible self-attention layers, and have recently outperformed CNNs for image classification. However, they require costly pre-training on large external datasets or distillation from pre-trained convolutional networks. In this paper, we ask the following question: is it possible to combine the strengths of these two architectures while avoiding their respective limitations? To this end, we introduce gated positional self-attention (GPSA), a form of positional self-attention which can be equipped with a “soft" convolutional inductive bias. We initialise the GPSA layers to mimic the locality of convolutional layers, then give each attention head the freedom to escape locality by adjusting a gating parameter regulating the attention paid to position versus content information. The resulting convolutional-like ViT architecture, ConViT, outperforms the DeiT on ImageNet, while offering a much improved sample efficiency. We further investigate the role of locality in learning by first quantifying how it is encouraged in vanilla self-attention layers, then analysing how it is escaped in GPSA layers. We conclude by presenting various ablations to better understand the success of the ConViT. Our code and models are released publicly at https://github.com/facebookresearch/convit.' 
volume: 139 URL: https://proceedings.mlr.press/v139/d-ascoli21a.html PDF: http://proceedings.mlr.press/v139/d-ascoli21a/d-ascoli21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-d-ascoli21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Stéphane family: D’Ascoli - given: Hugo family: Touvron - given: Matthew L family: Leavitt - given: Ari S family: Morcos - given: Giulio family: Biroli - given: Levent family: Sagun editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2286-2296 id: d-ascoli21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2286 lastpage: 2296 published: 2021-07-01 00:00:00 +0000 - title: 'Consistent regression when oblivious outliers overwhelm' abstract: 'We consider a robust linear regression model $y=X\beta^* + \eta$, where an adversary oblivious to the design $X\in \mathbb{R}^{n\times d}$ may choose $\eta$ to corrupt all but an $\alpha$ fraction of the observations $y$ in an arbitrary way. Prior to our work, even for Gaussian $X$, no estimator for $\beta^*$ was known to be consistent in this model except for quadratic sample size $n \gtrsim (d/\alpha)^2$ or for logarithmic inlier fraction $\alpha\ge 1/\log n$. We show that consistent estimation is possible with nearly linear sample size and inverse-polynomial inlier fraction. Concretely, we show that the Huber loss estimator is consistent for every sample size $n= \omega(d/\alpha^2)$ and achieves an error rate of $O(d/\alpha^2n)^{1/2}$ (both bounds are optimal up to constant factors). Our results extend to designs far beyond the Gaussian case and only require the column span of $X$ to not contain approximately sparse vectors (similar to the kind of assumption commonly made about the kernel space for compressed sensing). We provide two technically similar proofs. One proof is phrased in terms of strong convexity, extending work of [Tsakonas et al. ’14], and particularly short. The other proof highlights a connection between the Huber loss estimator and high-dimensional median computations. In the special case of Gaussian designs, this connection leads us to a strikingly simple algorithm based on computing coordinate-wise medians that achieves nearly optimal guarantees in linear time, and that can exploit sparsity of $\beta^*$. The model studied here also captures heavy-tailed noise distributions that may not even have a first moment.' volume: 139 URL: https://proceedings.mlr.press/v139/d-orsi21a.html PDF: http://proceedings.mlr.press/v139/d-orsi21a/d-orsi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-d-orsi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tommaso family: D’Orsi - given: Gleb family: Novikov - given: David family: Steurer editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2297-2306 id: d-orsi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2297 lastpage: 2306 published: 2021-07-01 00:00:00 +0000 - title: 'Offline Reinforcement Learning with Pseudometric Learning' abstract: 'Offline Reinforcement Learning methods seek to learn a policy from logged transitions of an environment, without any interaction. 
In the presence of function approximation, and under the assumption of limited coverage of the state-action space of the environment, it is necessary to enforce the policy to visit state-action pairs close to the support of logged transitions. In this work, we propose an iterative procedure to learn a pseudometric (closely related to bisimulation metrics) from logged transitions, and use it to define this notion of closeness. We show its convergence and extend it to the function approximation setting. We then use this pseudometric to define a new lookup based bonus in an actor-critic algorithm: PLOFF. This bonus encourages the actor to stay close, in terms of the defined pseudometric, to the support of logged transitions. Finally, we evaluate the method on hand manipulation and locomotion tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/dadashi21a.html PDF: http://proceedings.mlr.press/v139/dadashi21a/dadashi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-dadashi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Robert family: Dadashi - given: Shideh family: Rezaeifar - given: Nino family: Vieillard - given: Léonard family: Hussenot - given: Olivier family: Pietquin - given: Matthieu family: Geist editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2307-2318 id: dadashi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2307 lastpage: 2318 published: 2021-07-01 00:00:00 +0000 - title: 'A Tale of Two Efficient and Informative Negative Sampling Distributions' abstract: 'Softmax classifiers with a very large number of classes naturally occur in many applications such as natural language processing and information retrieval. The calculation of full softmax is costly from the computational and energy perspective. There have been various sampling approaches to overcome this challenge, popularly known as negative sampling (NS). Ideally, NS should sample negative classes from a distribution that is dependent on the input data, the current parameters, and the correct positive class. Unfortunately, due to the dynamically updated parameters and data samples, there is no sampling scheme that is provably adaptive and samples the negative classes efficiently. Therefore, alternative heuristics like random sampling, static frequency-based sampling, or learning-based biased sampling, which primarily trade either the sampling cost or the adaptivity of samples per iteration are adopted. In this paper, we show two classes of distributions where the sampling scheme is truly adaptive and provably generates negative samples in near-constant time. Our implementation in C++ on CPU is significantly superior, both in terms of wall-clock time and accuracy, compared to the most optimized TensorFlow implementations of other popular negative sampling approaches on powerful NVIDIA V100 GPU.' 
volume: 139 URL: https://proceedings.mlr.press/v139/daghaghi21a.html PDF: http://proceedings.mlr.press/v139/daghaghi21a/daghaghi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-daghaghi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shabnam family: Daghaghi - given: Tharun family: Medini - given: Nicholas family: Meisburger - given: Beidi family: Chen - given: Mengnan family: Zhao - given: Anshumali family: Shrivastava editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2319-2329 id: daghaghi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2319 lastpage: 2329 published: 2021-07-01 00:00:00 +0000 - title: 'SiameseXML: Siamese Networks meet Extreme Classifiers with 100M Labels' abstract: 'Deep extreme multi-label learning (XML) requires training deep architectures that can tag a data point with its most relevant subset of labels from an extremely large label set. XML applications such as ad and product recommendation involve labels rarely seen during training but which nevertheless hold the key to recommendations that delight users. Effective utilization of label metadata and high quality predictions for rare labels at the scale of millions of labels are thus key challenges in contemporary XML research. To address these, this paper develops the SiameseXML framework based on a novel probabilistic model that naturally motivates a modular approach melding Siamese architectures with high-capacity extreme classifiers, and a training pipeline that effortlessly scales to tasks with 100 million labels. SiameseXML offers predictions 2–13% more accurate than leading XML methods on public benchmark datasets; in live A/B tests on the Bing search engine, it offers significant gains in click-through-rates, coverage, revenue and other online metrics over state-of-the-art techniques currently in production. Code for SiameseXML is available at https://github.com/Extreme-classification/siamesexml' volume: 139 URL: https://proceedings.mlr.press/v139/dahiya21a.html PDF: http://proceedings.mlr.press/v139/dahiya21a/dahiya21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-dahiya21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kunal family: Dahiya - given: Ananye family: Agarwal - given: Deepak family: Saini - given: Gururaj family: K - given: Jian family: Jiao - given: Amit family: Singh - given: Sumeet family: Agarwal - given: Purushottam family: Kar - given: Manik family: Varma editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2330-2340 id: dahiya21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2330 lastpage: 2340 published: 2021-07-01 00:00:00 +0000 - title: 'Fixed-Parameter and Approximation Algorithms for PCA with Outliers' abstract: 'PCA with Outliers is the fundamental problem of identifying an underlying low-dimensional subspace in a data set corrupted with outliers. A large body of work is devoted to the information-theoretic aspects of this problem. However, from the computational perspective, its complexity is still not well-understood. 
We study this problem from the perspective of parameterized complexity by investigating how parameters like the dimension of the data, the subspace dimension, the number of outliers and their structure, and approximation error, influence the computational complexity of the problem. Our algorithmic methods are based on techniques of randomized linear algebra and algebraic geometry.' volume: 139 URL: https://proceedings.mlr.press/v139/dahiya21b.html PDF: http://proceedings.mlr.press/v139/dahiya21b/dahiya21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-dahiya21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yogesh family: Dahiya - given: Fedor family: Fomin - given: Fahad family: Panolan - given: Kirill family: Simonov editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2341-2351 id: dahiya21b issued: date-parts: - 2021 - 7 - 1 firstpage: 2341 lastpage: 2351 published: 2021-07-01 00:00:00 +0000 - title: 'Sliced Iterative Normalizing Flows' abstract: 'We develop an iterative (greedy) deep learning (DL) algorithm which is able to transform an arbitrary probability distribution function (PDF) into the target PDF. The model is based on iterative Optimal Transport of a series of 1D slices, matching on each slice the marginal PDF to the target. The axes of the orthogonal slices are chosen to maximize the PDF difference using Wasserstein distance at each iteration, which enables the algorithm to scale well to high dimensions. As special cases of this algorithm, we introduce two sliced iterative Normalizing Flow (SINF) models, which map from the data to the latent space (GIS) and vice versa (SIG). We show that SIG is able to generate high quality samples of image datasets, which match the GAN benchmarks, while GIS obtains competitive results on density estimation tasks compared to the density trained NFs, and is more stable, faster, and achieves higher p(x) when trained on small training sets. SINF approach deviates significantly from the current DL paradigm, as it is greedy and does not use concepts such as mini-batching, stochastic gradient descent and gradient back-propagation through deep layers.' volume: 139 URL: https://proceedings.mlr.press/v139/dai21a.html PDF: http://proceedings.mlr.press/v139/dai21a/dai21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-dai21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Biwei family: Dai - given: Uros family: Seljak editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2352-2364 id: dai21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2352 lastpage: 2364 published: 2021-07-01 00:00:00 +0000 - title: 'Convex Regularization in Monte-Carlo Tree Search' abstract: 'Monte-Carlo planning and Reinforcement Learning (RL) are essential to sequential decision making. The recent AlphaGo and AlphaZero algorithms have shown how to successfully combine these two paradigms to solve large-scale sequential decision problems. These methodologies exploit a variant of the well-known UCT algorithm to trade off the exploitation of good actions and the exploration of unvisited states, but their empirical success comes at the cost of poor sample-efficiency and high computation time. 
In this paper, we overcome these limitations by introducing the use of convex regularization in Monte-Carlo Tree Search (MCTS) to drive exploration efficiently and to improve policy updates. First, we introduce a unifying theory on the use of generic convex regularizers in MCTS, deriving the first regret analysis of regularized MCTS and showing that it guarantees an exponential convergence rate. Second, we exploit our theoretical framework to introduce novel regularized backup operators for MCTS, based on the relative entropy of the policy update and, more importantly, on the Tsallis entropy of the policy, for which we prove superior theoretical guarantees. We empirically verify the consequence of our theoretical results on a toy problem. Finally, we show how our framework can easily be incorporated in AlphaGo and we empirically show the superiority of convex regularization, w.r.t. representative baselines, on well-known RL problems across several Atari games.' volume: 139 URL: https://proceedings.mlr.press/v139/dam21a.html PDF: http://proceedings.mlr.press/v139/dam21a/dam21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-dam21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tuan Q family: Dam - given: Carlo family: D’Eramo - given: Jan family: Peters - given: Joni family: Pajarinen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2365-2375 id: dam21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2365 lastpage: 2375 published: 2021-07-01 00:00:00 +0000 - title: 'Demonstration-Conditioned Reinforcement Learning for Few-Shot Imitation' abstract: 'In few-shot imitation, an agent is given a few demonstrations of a previously unseen task, and must then successfully perform that task. We propose a novel approach to learning few-shot-imitation agents that we call demonstration-conditioned reinforcement learning (DCRL). Given a training set consisting of demonstrations, reward functions and transition distributions for multiple tasks, the idea is to work with a policy that takes demonstrations as input, and to train this policy to maximize the average of the cumulative reward over the set of training tasks. Relative to previously proposed few-shot imitation methods that use behaviour cloning or infer reward functions from demonstrations, our method has the disadvantage that it requires reward functions at training time. However, DCRL also has several advantages, such as the ability to improve upon suboptimal demonstrations, to operate given state-only demonstrations, and to cope with a domain shift between the demonstrator and the agent. Moreover, we show that DCRL outperforms methods based on behaviour cloning by a large margin, on navigation tasks and on robotic manipulation tasks from the Meta-World benchmark.' volume: 139 URL: https://proceedings.mlr.press/v139/dance21a.html PDF: http://proceedings.mlr.press/v139/dance21a/dance21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-dance21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Christopher R. 
family: Dance - given: Julien family: Perez - given: Théo family: Cachet editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2376-2387 id: dance21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2376 lastpage: 2387 published: 2021-07-01 00:00:00 +0000 - title: 'Re-understanding Finite-State Representations of Recurrent Policy Networks' abstract: 'We introduce an approach for understanding control policies represented as recurrent neural networks. Recent work has approached this problem by transforming such recurrent policy networks into finite-state machines (FSM) and then analyzing the equivalent minimized FSM. While this led to interesting insights, the minimization process can obscure a deeper understanding of a machine’s operation by merging states that are semantically distinct. To address this issue, we introduce an analysis approach that starts with an unminimized FSM and applies more-interpretable reductions that preserve the key decision points of the policy. We also contribute an attention tool to attain a deeper understanding of the role of observations in the decisions. Our case studies on 7 Atari games and 3 control benchmarks demonstrate that the approach can reveal insights that have not been previously noticed.' volume: 139 URL: https://proceedings.mlr.press/v139/danesh21a.html PDF: http://proceedings.mlr.press/v139/danesh21a/danesh21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-danesh21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mohamad H family: Danesh - given: Anurag family: Koul - given: Alan family: Fern - given: Saeed family: Khorram editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2388-2397 id: danesh21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2388 lastpage: 2397 published: 2021-07-01 00:00:00 +0000 - title: 'Newton Method over Networks is Fast up to the Statistical Precision' abstract: 'We propose a distributed cubic regularization of the Newton method for solving (constrained) empirical risk minimization problems over a network of agents, modeled as undirected graph. The algorithm employs an inexact, preconditioned Newton step at each agent’s side: the gradient of the centralized loss is iteratively estimated via a gradient-tracking consensus mechanism and the Hessian is subsampled over the local data sets. No Hessian matrices are exchanged over the network. We derive global complexity bounds for convex and strongly convex losses. Our analysis reveals an interesting interplay between sample and iteration/communication complexity: statistically accurate solutions are achievable in roughly the same number of iterations of the centralized cubic Newton, with a communication cost per iteration of the order of $\widetilde{\mathcal{O}}\big(1/\sqrt{1-\rho}\big)$, where $\rho$ characterizes the connectivity of the network. This represents a significant improvement with respect to existing, statistically oblivious, distributed Newton-based methods over networks.' 
volume: 139 URL: https://proceedings.mlr.press/v139/daneshmand21a.html PDF: http://proceedings.mlr.press/v139/daneshmand21a/daneshmand21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-daneshmand21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Amir family: Daneshmand - given: Gesualdo family: Scutari - given: Pavel family: Dvurechensky - given: Alexander family: Gasnikov editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2398-2409 id: daneshmand21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2398 lastpage: 2409 published: 2021-07-01 00:00:00 +0000 - title: 'BasisDeVAE: Interpretable Simultaneous Dimensionality Reduction and Feature-Level Clustering with Derivative-Based Variational Autoencoders' abstract: 'The Variational Autoencoder (VAE) performs effective nonlinear dimensionality reduction in a variety of problem settings. However, the black-box neural network decoder function typically employed limits the ability of the decoder function to be constrained and interpreted, making the use of VAEs problematic in settings where prior knowledge should be embedded within the decoder. We present DeVAE, a novel VAE-based model with a derivative-based forward mapping, allowing for greater control over decoder behaviour via specification of the decoder function in derivative space. Additionally, we show how DeVAE can be paired with a sparse clustering prior to create BasisDeVAE and perform interpretable simultaneous dimensionality reduction and feature-level clustering. We demonstrate the performance and scalability of the DeVAE and BasisDeVAE models on synthetic and real-world data and present how the derivative-based approach allows for expressive yet interpretable forward models which respect prior knowledge.' volume: 139 URL: https://proceedings.mlr.press/v139/danks21a.html PDF: http://proceedings.mlr.press/v139/danks21a/danks21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-danks21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dominic family: Danks - given: Christopher family: Yau editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2410-2420 id: danks21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2410 lastpage: 2420 published: 2021-07-01 00:00:00 +0000 - title: 'Intermediate Layer Optimization for Inverse Problems using Deep Generative Models' abstract: 'We propose Intermediate Layer Optimization (ILO), a novel optimization algorithm for solving inverse problems with deep generative models. Instead of optimizing only over the initial latent code, we progressively change the input layer obtaining successively more expressive generators. To explore the higher dimensional spaces, our method searches for latent codes that lie within a small l1 ball around the manifold induced by the previous layer. Our theoretical analysis shows that by keeping the radius of the ball relatively small, we can improve the established error bound for compressed sensing with deep generative models. We empirically show that our approach outperforms state-of-the-art methods introduced in StyleGAN2 and PULSE for a wide range of inverse problems including inpainting, denoising, super-resolution and compressed sensing.' 
volume: 139 URL: https://proceedings.mlr.press/v139/daras21a.html PDF: http://proceedings.mlr.press/v139/daras21a/daras21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-daras21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Giannis family: Daras - given: Joseph family: Dean - given: Ajil family: Jalal - given: Alex family: Dimakis editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2421-2432 id: daras21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2421 lastpage: 2432 published: 2021-07-01 00:00:00 +0000 - title: 'Measuring Robustness in Deep Learning Based Compressive Sensing' abstract: 'Deep neural networks give state-of-the-art accuracy for reconstructing images from few and noisy measurements, a problem arising for example in accelerated magnetic resonance imaging (MRI). However, recent works have raised concerns that deep-learning-based image reconstruction methods are sensitive to perturbations and are less robust than traditional methods: Neural networks (i) may be sensitive to small, yet adversarially-selected perturbations, (ii) may perform poorly under distribution shifts, and (iii) may fail to recover small but important features in an image. In order to understand the sensitivity to such perturbations, in this work, we measure the robustness of different approaches for image reconstruction including trained and un-trained neural networks as well as traditional sparsity-based methods. We find, contrary to prior works, that both trained and un-trained methods are vulnerable to adversarial perturbations. Moreover, both trained and un-trained methods tuned for a particular dataset suffer very similarly from distribution shifts. Finally, we demonstrate that an image reconstruction method that achieves higher reconstruction quality also performs better in terms of accurately recovering fine details. Our results indicate that the state-of-the-art deep-learning-based image reconstruction methods provide improved performance over traditional methods without compromising robustness.' volume: 139 URL: https://proceedings.mlr.press/v139/darestani21a.html PDF: http://proceedings.mlr.press/v139/darestani21a/darestani21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-darestani21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mohammad Zalbagi family: Darestani - given: Akshay S family: Chaudhari - given: Reinhard family: Heckel editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2433-2444 id: darestani21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2433 lastpage: 2444 published: 2021-07-01 00:00:00 +0000 - title: 'SAINT-ACC: Safety-Aware Intelligent Adaptive Cruise Control for Autonomous Vehicles Using Deep Reinforcement Learning' abstract: 'We present a novel adaptive cruise control (ACC) system, SAINT-ACC (Safety-Aware Intelligent ACC), that is designed to achieve simultaneous optimization of traffic efficiency, driving safety, and driving comfort through dynamic adaptation of the inter-vehicle gap based on deep reinforcement learning (RL). 
A novel dual RL agent-based approach is developed to seek and adapt the optimal balance between traffic efficiency and driving safety/comfort by effectively controlling the driving safety model parameters and inter-vehicle gap based on macroscopic and microscopic traffic information collected from dynamically changing and complex traffic environments. Results obtained through over 12,000 simulation runs with varying traffic scenarios and penetration rates demonstrate that SAINT-ACC significantly enhances traffic flow, driving safety and comfort compared with a state-of-the-art approach.' volume: 139 URL: https://proceedings.mlr.press/v139/das21a.html PDF: http://proceedings.mlr.press/v139/das21a/das21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-das21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Lokesh Chandra family: Das - given: Myounggyu family: Won editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2445-2455 id: das21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2445 lastpage: 2455 published: 2021-07-01 00:00:00 +0000 - title: 'Lipschitz normalization for self-attention layers with application to graph neural networks' abstract: 'Attention based neural networks are state of the art in a large range of applications. However, their performance tends to degrade when the number of layers increases. In this work, we show that enforcing Lipschitz continuity by normalizing the attention scores can significantly improve the performance of deep attention models. First, we show that, for deep graph attention networks (GAT), gradient explosion appears during training, leading to poor performance of gradient-based training algorithms. To address this issue, we derive a theoretical analysis of the Lipschitz continuity of attention modules and introduce LipschitzNorm, a simple and parameter-free normalization for self-attention mechanisms that enforces the model to be Lipschitz continuous. We then apply LipschitzNorm to GAT and Graph Transformers and show that their performance is substantially improved in the deep setting (10 to 30 layers). More specifically, we show that a deep GAT model with LipschitzNorm achieves state of the art results for node label prediction tasks that exhibit long-range dependencies, while showing consistent improvements over their unnormalized counterparts in benchmark node classification tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/dasoulas21a.html PDF: http://proceedings.mlr.press/v139/dasoulas21a/dasoulas21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-dasoulas21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: George family: Dasoulas - given: Kevin family: Scaman - given: Aladin family: Virmaux editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2456-2466 id: dasoulas21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2456 lastpage: 2466 published: 2021-07-01 00:00:00 +0000 - title: 'Householder Sketch for Accurate and Accelerated Least-Mean-Squares Solvers' abstract: 'Least-Mean-Squares (\textsc{LMS}) solvers comprise a class of fundamental optimization problems such as linear regression, and regularized regressions such as Ridge, LASSO, and Elastic-Net. 
Data summarization techniques for big data generate summaries called coresets and sketches to speed up model learning under streaming and distributed settings. For example, \citep{nips2019} design a fast and accurate Caratheodory set on input data to boost the performance of existing \textsc{LMS} solvers. In retrospect, we explore the classical Householder transformation as a candidate for sketching and accurately solving LMS problems. We find it to be a simpler, more memory-efficient, and faster alternative to the above strong baseline that has always been available. We also present a scalable algorithm based on the construction of distributed Householder sketches to solve the \textsc{LMS} problem across multiple worker nodes. We perform thorough empirical analysis with large synthetic and real datasets to evaluate the performance of the Householder sketch and compare with \citep{nips2019}. Our results show that the Householder sketch speeds up existing \textsc{LMS} solvers in the scikit-learn library up to $100$x-$400$x. Also, it is $10$x-$100$x faster than the above baseline with similar numerical stability. The distributed algorithm demonstrates linear scalability with a near-negligible communication overhead.' volume: 139 URL: https://proceedings.mlr.press/v139/dass21a.html PDF: http://proceedings.mlr.press/v139/dass21a/dass21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-dass21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jyotikrishna family: Dass - given: Rabi family: Mahapatra editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2467-2477 id: dass21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2467 lastpage: 2477 published: 2021-07-01 00:00:00 +0000 - title: 'Byzantine-Resilient High-Dimensional SGD with Local Iterations on Heterogeneous Data' abstract: 'We study stochastic gradient descent (SGD) with local iterations in the presence of Byzantine clients, motivated by federated learning. The clients, instead of communicating with the server in every iteration, maintain their local models, which they update by taking several SGD iterations based on their own datasets and then communicate the net update with the server, thereby achieving communication-efficiency. Furthermore, only a subset of clients communicates with the server at synchronization times. The Byzantine clients may collude and send arbitrary vectors to the server to disrupt the learning process. To combat the adversary, we employ an efficient high-dimensional robust mean estimation algorithm at the server to filter out corrupt vectors; and to analyze the outlier-filtering procedure, we develop a novel matrix concentration result that may be of independent interest. We provide convergence analyses for both strongly-convex and non-convex smooth objectives in the heterogeneous data setting. We believe that ours is the first Byzantine-resilient local SGD algorithm and analysis with non-trivial guarantees. We corroborate our theoretical results with preliminary experiments for neural network training.' 
volume: 139 URL: https://proceedings.mlr.press/v139/data21a.html PDF: http://proceedings.mlr.press/v139/data21a/data21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-data21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Deepesh family: Data - given: Suhas family: Diggavi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2478-2488 id: data21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2478 lastpage: 2488 published: 2021-07-01 00:00:00 +0000 - title: 'Catformer: Designing Stable Transformers via Sensitivity Analysis' abstract: 'Transformer architectures are widely used, but training them is non-trivial, requiring custom learning rate schedules, scaling terms, residual connections, careful placement of submodules such as normalization, and so on. In this paper, we improve upon recent analysis of Transformers and formalize a notion of sensitivity to capture the difficulty of training. Sensitivity characterizes how the variance of activation and gradient norms change in expectation when parameters are randomly perturbed. We analyze the sensitivity of previous Transformer architectures and design a new architecture, the Catformer, which replaces residual connections or RNN-based gating mechanisms with concatenation. We prove that Catformers are less sensitive than other Transformer variants and demonstrate that this leads to more stable training. On DMLab30, a suite of high-dimension reinforcement tasks, Catformer outperforms other transformers, including Gated Transformer-XL—the state-of-the-art architecture designed to address stability—by 13%.' volume: 139 URL: https://proceedings.mlr.press/v139/davis21a.html PDF: http://proceedings.mlr.press/v139/davis21a/davis21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-davis21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jared Q family: Davis - given: Albert family: Gu - given: Krzysztof family: Choromanski - given: Tri family: Dao - given: Christopher family: Re - given: Chelsea family: Finn - given: Percy family: Liang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2489-2499 id: davis21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2489 lastpage: 2499 published: 2021-07-01 00:00:00 +0000 - title: 'Diffusion Source Identification on Networks with Statistical Confidence' abstract: 'Diffusion source identification on networks is a problem of fundamental importance in a broad class of applications, including controlling the spreading of rumors on social media, identifying a computer virus over cyber networks, or identifying the disease center during epidemiology. Though this problem has received significant recent attention, most known approaches are well-studied in only very restrictive settings and lack theoretical guarantees for more realistic networks. We introduce a statistical framework for the study of this problem and develop a confidence set inference approach inspired by hypothesis testing. Our method efficiently produces a small subset of nodes, which provably covers the source node with any pre-specified confidence level without restrictive assumptions on network structures. 
To our knowledge, this is the first diffusion source identification method with a practically useful theoretical guarantee on general networks. We demonstrate our approach via extensive synthetic experiments on well-known random network models, a large data set of real-world networks as well as a mobility network between cities concerning the COVID-19 spreading in January 2020.' volume: 139 URL: https://proceedings.mlr.press/v139/dawkins21a.html PDF: http://proceedings.mlr.press/v139/dawkins21a/dawkins21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-dawkins21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Quinlan E family: Dawkins - given: Tianxi family: Li - given: Haifeng family: Xu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2500-2509 id: dawkins21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2500 lastpage: 2509 published: 2021-07-01 00:00:00 +0000 - title: 'Bayesian Deep Learning via Subnetwork Inference' abstract: 'The Bayesian paradigm has the potential to solve core issues of deep neural networks such as poor calibration and data inefficiency. Alas, scaling Bayesian inference to large weight spaces often requires restrictive approximations. In this work, we show that it suffices to perform inference over a small subset of model weights in order to obtain accurate predictive posteriors. The other weights are kept as point estimates. This subnetwork inference framework enables us to use expressive, otherwise intractable, posterior approximations over such subsets. In particular, we implement subnetwork linearized Laplace as a simple, scalable Bayesian deep learning method: We first obtain a MAP estimate of all weights and then infer a full-covariance Gaussian posterior over a subnetwork using the linearized Laplace approximation. We propose a subnetwork selection strategy that aims to maximally preserve the model’s predictive uncertainty. Empirically, our approach compares favorably to ensembles and less expressive posterior approximations over full networks.' volume: 139 URL: https://proceedings.mlr.press/v139/daxberger21a.html PDF: http://proceedings.mlr.press/v139/daxberger21a/daxberger21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-daxberger21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Erik family: Daxberger - given: Eric family: Nalisnick - given: James U family: Allingham - given: Javier family: Antoran - given: Jose Miguel family: Hernandez-Lobato editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2510-2521 id: daxberger21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2510 lastpage: 2521 published: 2021-07-01 00:00:00 +0000 - title: 'Adversarial Robustness Guarantees for Random Deep Neural Networks' abstract: 'The reliability of deep learning algorithms is fundamentally challenged by the existence of adversarial examples, which are incorrectly classified inputs that are extremely close to a correctly classified input. 
We explore the properties of adversarial examples for deep neural networks with random weights and biases, and prove that for any $p \geq 1$, the $\ell^p$ distance of any given input from the classification boundary scales as one over the square root of the dimension of the input times the $\ell^p$ norm of the input. The results are based on the recently proved equivalence between Gaussian processes and deep neural networks in the limit of infinite width of the hidden layers, and are validated with experiments on both random deep neural networks and deep neural networks trained on the MNIST and CIFAR10 datasets. The results constitute a fundamental advance in the theoretical understanding of adversarial examples, and open the way to a thorough theoretical characterization of the relation between network architecture and robustness to adversarial perturbations.' volume: 139 URL: https://proceedings.mlr.press/v139/de-palma21a.html PDF: http://proceedings.mlr.press/v139/de-palma21a/de-palma21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-de-palma21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Giacomo family: De Palma - given: Bobak family: Kiani - given: Seth family: Lloyd editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2522-2534 id: de-palma21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2522 lastpage: 2534 published: 2021-07-01 00:00:00 +0000 - title: 'High-Dimensional Gaussian Process Inference with Derivatives' abstract: 'Although it is widely known that Gaussian processes can be conditioned on observations of the gradient, this functionality is of limited use due to the prohibitive computational cost of $\mathcal{O}(N^3 D^3)$ in data points $N$ and dimension $D$. The dilemma of gradient observations is that a single one of them comes at the same cost as $D$ independent function evaluations, so the latter are often preferred. Careful scrutiny reveals, however, that derivative observations give rise to highly structured kernel Gram matrices for very general classes of kernels (inter alia, stationary kernels). We show that in the \emph{low-data} regime $N < D$, the Gram matrix can be decomposed in a manner that reduces the cost of inference to $\mathcal{O}(N^2D + (N^2)^3)$ (i.e., linear in the number of dimensions) and, in special cases, to $\mathcal{O}(N^2D + N^3)$. This reduction in complexity opens up new use-cases for inference with gradients especially in the high-dimensional regime, where the information-to-cost ratio of gradient observations significantly increases. We demonstrate this potential in a variety of tasks relevant for machine learning, such as optimization and Hamiltonian Monte Carlo with predictive gradients.' 
volume: 139 URL: https://proceedings.mlr.press/v139/de-roos21a.html PDF: http://proceedings.mlr.press/v139/de-roos21a/de-roos21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-de-roos21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Filip prefix: de family: Roos - given: Alexandra family: Gessner - given: Philipp family: Hennig editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2535-2545 id: de-roos21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2535 lastpage: 2545 published: 2021-07-01 00:00:00 +0000 - title: 'Transfer-Based Semantic Anomaly Detection' abstract: 'Detecting semantic anomalies is challenging due to the countless ways in which they may appear in real-world data. While enhancing the robustness of networks may be sufficient for modeling simplistic anomalies, there is no good known way of preparing models for all potential and unseen anomalies that can potentially occur, such as the appearance of new object classes. In this paper, we show that a previously overlooked strategy for anomaly detection (AD) is to introduce an explicit inductive bias toward representations transferred over from some large and varied semantic task. We rigorously verify our hypothesis in controlled trials that utilize intervention, and show that it gives rise to surprisingly effective auxiliary objectives that outperform previous AD paradigms.' volume: 139 URL: https://proceedings.mlr.press/v139/deecke21a.html PDF: http://proceedings.mlr.press/v139/deecke21a/deecke21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-deecke21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Lucas family: Deecke - given: Lukas family: Ruff - given: Robert A. family: Vandermeulen - given: Hakan family: Bilen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2546-2558 id: deecke21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2546 lastpage: 2558 published: 2021-07-01 00:00:00 +0000 - title: 'Grid-Functioned Neural Networks' abstract: 'We introduce a new neural network architecture that we call "grid-functioned" neural networks. It utilises a grid structure of network parameterisations that can be specialised for different subdomains of the problem, while maintaining smooth, continuous behaviour. The grid gives the user flexibility to prevent gross features from overshadowing important minor ones. We present a full characterisation of its computational and spatial complexity, and demonstrate its potential, compared to a traditional architecture, over a set of synthetic regression problems. We further illustrate the benefits through a real-world 3D skeletal animation case study, where it offers the same visual quality as a state-of-the-art model, but with lower computational complexity and better control accuracy.' 
volume: 139 URL: https://proceedings.mlr.press/v139/dehesa21a.html PDF: http://proceedings.mlr.press/v139/dehesa21a/dehesa21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-dehesa21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Javier family: Dehesa - given: Andrew family: Vidler - given: Julian family: Padget - given: Christof family: Lutteroth editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2559-2567 id: dehesa21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2559 lastpage: 2567 published: 2021-07-01 00:00:00 +0000 - title: 'Multidimensional Scaling: Approximation and Complexity' abstract: 'Metric Multidimensional scaling (MDS) is a classical method for generating meaningful (non-linear) low-dimensional embeddings of high-dimensional data. MDS has a long history in the statistics, machine learning, and graph drawing communities. In particular, the Kamada-Kawai force-directed graph drawing method is equivalent to MDS and is one of the most popular ways in practice to embed graphs into low dimensions. Despite its ubiquity, our theoretical understanding of MDS remains limited as its objective function is highly non-convex. In this paper, we prove that minimizing the Kamada-Kawai objective is NP-hard and give a provable approximation algorithm for optimizing it, which in particular is a PTAS on low-diameter graphs. We supplement this result with experiments suggesting possible connections between our greedy approximation algorithm and gradient-based methods.' volume: 139 URL: https://proceedings.mlr.press/v139/demaine21a.html PDF: http://proceedings.mlr.press/v139/demaine21a/demaine21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-demaine21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Erik family: Demaine - given: Adam family: Hesterberg - given: Frederic family: Koehler - given: Jayson family: Lynch - given: John family: Urschel editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2568-2578 id: demaine21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2568 lastpage: 2578 published: 2021-07-01 00:00:00 +0000 - title: 'What Does Rotation Prediction Tell Us about Classifier Accuracy under Varying Testing Environments?' abstract: 'Understanding classifier decision under novel environments is central to the community, and a common practice is evaluating it on labeled test sets. However, in real-world testing, image annotations are difficult and expensive to obtain, especially when the test environment is changing. A natural question then arises: given a trained classifier, can we evaluate its accuracy on varying unlabeled test sets? In this work, we train semantic classification and rotation prediction in a multi-task way. On a series of datasets, we report an interesting finding, i.e., the semantic classification accuracy exhibits a strong linear relationship with the accuracy of the rotation prediction task (Pearson’s Correlation r > 0.88). This finding allows us to utilize linear regression to estimate classifier performance from the accuracy of rotation prediction which can be obtained on the test set through the freely generated rotation labels.' 
volume: 139 URL: https://proceedings.mlr.press/v139/deng21a.html PDF: http://proceedings.mlr.press/v139/deng21a/deng21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-deng21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Weijian family: Deng - given: Stephen family: Gould - given: Liang family: Zheng editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2579-2589 id: deng21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2579 lastpage: 2589 published: 2021-07-01 00:00:00 +0000 - title: 'Toward Better Generalization Bounds with Locally Elastic Stability' abstract: 'Algorithmic stability is a key characteristic to ensure the generalization ability of a learning algorithm. Among different notions of stability, \emph{uniform stability} is arguably the most popular one, which yields exponential generalization bounds. However, uniform stability only considers the worst-case loss change (or so-called sensitivity) by removing a single data point, which is distribution-independent and therefore undesirable. There are many cases in which the worst-case sensitivity of the loss is much larger than the average sensitivity taken over the single data point that is removed, especially in some advanced models such as random feature models or neural networks. Many previous works try to mitigate the distribution-independence issue by proposing weaker notions of stability; however, they either only yield polynomial bounds or the bounds derived do not vanish as the sample size goes to infinity. Given that, we propose \emph{locally elastic stability} as a weaker and distribution-dependent stability notion, which still yields exponential generalization bounds. We further demonstrate that locally elastic stability implies tighter generalization bounds than those derived based on uniform stability in many situations by revisiting the examples of bounded support vector machines, regularized least square regressions, and stochastic gradient descent.' volume: 139 URL: https://proceedings.mlr.press/v139/deng21b.html PDF: http://proceedings.mlr.press/v139/deng21b/deng21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-deng21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zhun family: Deng - given: Hangfeng family: He - given: Weijie family: Su editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2590-2600 id: deng21b issued: date-parts: - 2021 - 7 - 1 firstpage: 2590 lastpage: 2600 published: 2021-07-01 00:00:00 +0000 - title: 'Revenue-Incentive Tradeoffs in Dynamic Reserve Pricing' abstract: 'Online advertisements are primarily sold via repeated auctions with reserve prices. In this paper, we study how to set reserves to boost revenue based on the historical bids of strategic buyers, while controlling the impact of such a policy on the incentive compatibility of the repeated auctions. Adopting an incentive compatibility metric which quantifies the incentives to shade bids, we propose a novel class of reserve pricing policies and provide analytical tradeoffs between their revenue performance and bid-shading incentives. 
The policies are inspired by the exponential mechanism from the literature on differential privacy, but our study uncovers mechanisms with significantly better revenue-incentive tradeoffs than the exponential mechanism in practice. We further empirically evaluate the tradeoffs on synthetic data as well as real ad auction data from a major ad exchange to verify and support our theoretical findings.' volume: 139 URL: https://proceedings.mlr.press/v139/deng21c.html PDF: http://proceedings.mlr.press/v139/deng21c/deng21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-deng21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yuan family: Deng - given: Sebastien family: Lahaie - given: Vahab family: Mirrokni - given: Song family: Zuo editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2601-2610 id: deng21c issued: date-parts: - 2021 - 7 - 1 firstpage: 2601 lastpage: 2610 published: 2021-07-01 00:00:00 +0000 - title: 'Heterogeneity for the Win: One-Shot Federated Clustering' abstract: 'In this work, we explore the unique challenges—and opportunities—of unsupervised federated learning (FL). We develop and analyze a one-shot federated clustering scheme, kfed, based on the widely-used Lloyd’s method for $k$-means clustering. In contrast to many supervised problems, we show that the issue of statistical heterogeneity in federated networks can in fact benefit our analysis. We analyze kfed under a center separation assumption and compare it to the best known requirements of its centralized counterpart. Our analysis shows that in heterogeneous regimes where the number of clusters per device $(k’)$ is smaller than the total number of clusters over the network $k$, $(k’\le \sqrt{k})$, we can use heterogeneity to our advantage—significantly weakening the cluster separation requirements for kfed. From a practical viewpoint, kfed also has many desirable properties: it requires only one round of communication, can run asynchronously, and can handle partial participation or node/network failures. We motivate our analysis with experiments on common FL benchmarks, and highlight the practical utility of one-shot clustering through use-cases in personalized FL and device sampling.' volume: 139 URL: https://proceedings.mlr.press/v139/dennis21a.html PDF: http://proceedings.mlr.press/v139/dennis21a/dennis21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-dennis21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Don Kurian family: Dennis - given: Tian family: Li - given: Virginia family: Smith editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2611-2620 id: dennis21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2611 lastpage: 2620 published: 2021-07-01 00:00:00 +0000 - title: 'Kernel Continual Learning' abstract: 'This paper introduces kernel continual learning, a simple but effective variant of continual learning that leverages the non-parametric nature of kernel methods to tackle catastrophic forgetting. We deploy an episodic memory unit that stores a subset of samples for each task to learn task-specific classifiers based on kernel ridge regression. This does not require memory replay and systematically avoids task interference in the classifiers. 
We further introduce variational random features to learn a data-driven kernel for each task. To do so, we formulate kernel continual learning as a variational inference problem, where a random Fourier basis is incorporated as the latent variable. The variational posterior distribution over the random Fourier basis is inferred from the coreset of each task. In this way, we are able to generate more informative kernels specific to each task, and, more importantly, the coreset size can be reduced to achieve more compact memory, resulting in more efficient continual learning based on episodic memory. Extensive evaluation on four benchmarks demonstrates the effectiveness and promise of kernels for continual learning.' volume: 139 URL: https://proceedings.mlr.press/v139/derakhshani21a.html PDF: http://proceedings.mlr.press/v139/derakhshani21a/derakhshani21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-derakhshani21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mohammad Mahdi family: Derakhshani - given: Xiantong family: Zhen - given: Ling family: Shao - given: Cees family: Snoek editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2621-2631 id: derakhshani21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2621 lastpage: 2631 published: 2021-07-01 00:00:00 +0000 - title: 'Bayesian Optimization over Hybrid Spaces' abstract: 'We consider the problem of optimizing hybrid structures (mixture of discrete and continuous input variables) via expensive black-box function evaluations. This problem arises in many real-world applications. For example, in materials design optimization via lab experiments, discrete and continuous variables correspond to the presence/absence of primitive elements and their relative concentrations, respectively. The key challenge is to accurately model the complex interactions between discrete and continuous variables. In this paper, we propose a novel approach referred to as Hybrid Bayesian Optimization (HyBO) by utilizing diffusion kernels, which are naturally defined over continuous and discrete variables. We develop a principled approach for constructing diffusion kernels over hybrid spaces by utilizing the additive kernel formulation, which allows additive interactions of all orders in a tractable manner. We theoretically analyze the modeling strength of additive hybrid kernels and prove that it has the universal approximation property. Our experiments on synthetic and six diverse real-world benchmarks show that HyBO significantly outperforms the state-of-the-art methods.' 
volume: 139 URL: https://proceedings.mlr.press/v139/deshwal21a.html PDF: http://proceedings.mlr.press/v139/deshwal21a/deshwal21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-deshwal21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Aryan family: Deshwal - given: Syrine family: Belakaria - given: Janardhan Rao family: Doppa editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2632-2643 id: deshwal21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2632 lastpage: 2643 published: 2021-07-01 00:00:00 +0000 - title: 'Navigation Turing Test (NTT): Learning to Evaluate Human-Like Navigation' abstract: 'A key challenge on the path to developing agents that learn complex human-like behavior is the need to quickly and accurately quantify human-likeness. While human assessments of such behavior can be highly accurate, speed and scalability are limited. We address these limitations through a novel automated Navigation Turing Test (ANTT) that learns to predict human judgments of human-likeness. We demonstrate the effectiveness of our automated NTT on a navigation task in a complex 3D environment. We investigate six classification models to shed light on the types of architectures best suited to this task, and validate them against data collected through a human NTT. Our best models achieve high accuracy when distinguishing true human and agent behavior. At the same time, we show that predicting finer-grained human assessment of agents’ progress towards human-like behavior remains unsolved. Our work takes an important step towards agents that more effectively learn complex human-like behavior.' volume: 139 URL: https://proceedings.mlr.press/v139/devlin21a.html PDF: http://proceedings.mlr.press/v139/devlin21a/devlin21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-devlin21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sam family: Devlin - given: Raluca family: Georgescu - given: Ida family: Momennejad - given: Jaroslaw family: Rzepecki - given: Evelyn family: Zuniga - given: Gavin family: Costello - given: Guy family: Leroy - given: Ali family: Shaw - given: Katja family: Hofmann editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2644-2653 id: devlin21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2644 lastpage: 2653 published: 2021-07-01 00:00:00 +0000 - title: 'Versatile Verification of Tree Ensembles' abstract: 'Machine learned models often must abide by certain requirements (e.g., fairness or legal). This has spurred interest in developing approaches that can provably verify whether a model satisfies certain properties. This paper introduces a generic algorithm called Veritas that enables tackling multiple different verification tasks for tree ensemble models like random forests (RFs) and gradient boosted decision trees (GBDTs). This generality contrasts with previous work, which has focused exclusively on either adversarial example generation or robustness checking. Veritas formulates the verification task as a generic optimization problem and introduces a novel search space representation. Veritas offers two key advantages. First, it provides anytime lower and upper bounds when the optimization problem cannot be solved exactly. 
In contrast, many existing methods have focused on exact solutions and are thus limited by the verification problem being NP-complete. Second, Veritas produces full (bounded suboptimal) solutions that can be used to generate concrete examples. We experimentally show that our method produces state-of-the-art robustness estimates, especially when executed with strict time constraints. This is exceedingly important when checking the robustness of large datasets. Additionally, we show that Veritas enables tackling more real-world verification scenarios.' volume: 139 URL: https://proceedings.mlr.press/v139/devos21a.html PDF: http://proceedings.mlr.press/v139/devos21a/devos21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-devos21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Laurens family: Devos - given: Wannes family: Meert - given: Jesse family: Davis editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2654-2664 id: devos21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2654 lastpage: 2664 published: 2021-07-01 00:00:00 +0000 - title: 'On the Inherent Regularization Effects of Noise Injection During Training' abstract: 'Randomly perturbing networks during the training process is a commonly used approach to improving generalization performance. In this paper, we present a theoretical study of one particular way of random perturbation, which corresponds to injecting artificial noise to the training data. We provide a precise asymptotic characterization of the training and generalization errors of such randomly perturbed learning problems on a random feature model. Our analysis shows that Gaussian noise injection in the training process is equivalent to introducing a weighted ridge regularization, when the number of noise injections tends to infinity. The explicit form of the regularization is also given. Numerical results corroborate our asymptotic predictions, showing that they are accurate even in moderate problem dimensions. Our theoretical predictions are based on a new correlated Gaussian equivalence conjecture that generalizes recent results in the study of random feature models.' volume: 139 URL: https://proceedings.mlr.press/v139/dhifallah21a.html PDF: http://proceedings.mlr.press/v139/dhifallah21a/dhifallah21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-dhifallah21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Oussama family: Dhifallah - given: Yue family: Lu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2665-2675 id: dhifallah21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2665 lastpage: 2675 published: 2021-07-01 00:00:00 +0000 - title: 'Hierarchical Agglomerative Graph Clustering in Nearly-Linear Time' abstract: 'We study the widely-used hierarchical agglomerative clustering (HAC) algorithm on edge-weighted graphs. We define an algorithmic framework for hierarchical agglomerative graph clustering that provides the first efficient $\tilde{O}(m)$ time exact algorithms for classic linkage measures, such as complete- and WPGMA-linkage, as well as other measures. Furthermore, for average-linkage, arguably the most popular variant of HAC, we provide an algorithm that runs in $\tilde{O}(n\sqrt{m})$ time. 
For this variant, this is the first exact algorithm that runs in subquadratic time, as long as $m=n^{2-\epsilon}$ for some constant $\epsilon > 0$. We complement this result with a simple $\epsilon$-close approximation algorithm for average-linkage in our framework that runs in $\tilde{O}(m)$ time. As an application of our algorithms, we consider clustering points in a metric space by first using $k$-NN to generate a graph from the point set, and then running our algorithms on the resulting weighted graph. We validate the performance of our algorithms on publicly available datasets, and show that our approach can speed up clustering of point datasets by a factor of 20.7–76.5x.' volume: 139 URL: https://proceedings.mlr.press/v139/dhulipala21a.html PDF: http://proceedings.mlr.press/v139/dhulipala21a/dhulipala21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-dhulipala21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Laxman family: Dhulipala - given: David family: Eisenstat - given: Jakub family: Łącki - given: Vahab family: Mirrokni - given: Jessica family: Shi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2676-2686 id: dhulipala21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2676 lastpage: 2686 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Online Algorithms with Distributional Advice' abstract: 'We study the problem of designing online algorithms given advice about the input. While prior work had focused on deterministic advice, we only assume distributional access to the instances of interest, and the goal is to learn a competitive algorithm given access to i.i.d. samples. We aim to be competitive against an adversary with prior knowledge of the distribution, while also performing well against worst-case inputs. We focus on the classical online problems of ski-rental and prophet-inequalities, and provide sample complexity bounds for the underlying learning tasks. First, we point out that for general distributions it is information-theoretically impossible to beat the worst-case competitive-ratio with any finite sample size. As our main contribution, we establish strong positive results for well-behaved distributions. Specifically, for the broad class of log-concave distributions, we show that $\mathrm{poly}(1/\epsilon)$ samples suffice to obtain $(1+\epsilon)$-competitive ratio. Finally, we show that this sample upper bound is close to best possible, even for very simple classes of distributions.' 
volume: 139 URL: https://proceedings.mlr.press/v139/diakonikolas21a.html PDF: http://proceedings.mlr.press/v139/diakonikolas21a/diakonikolas21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-diakonikolas21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ilias family: Diakonikolas - given: Vasilis family: Kontonis - given: Christos family: Tzamos - given: Ali family: Vakilian - given: Nikos family: Zarifis editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2687-2696 id: diakonikolas21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2687 lastpage: 2696 published: 2021-07-01 00:00:00 +0000 - title: 'A Wasserstein Minimax Framework for Mixed Linear Regression' abstract: 'Multi-modal distributions are commonly used to model clustered data in statistical learning tasks. In this paper, we consider the Mixed Linear Regression (MLR) problem. We propose an optimal transport-based framework for MLR problems, Wasserstein Mixed Linear Regression (WMLR), which minimizes the Wasserstein distance between the learned and target mixture regression models. Through a model-based duality analysis, WMLR reduces the underlying MLR task to a nonconvex-concave minimax optimization problem, which can be provably solved to find a minimax stationary point by the Gradient Descent Ascent (GDA) algorithm. In the special case of mixtures of two linear regression models, we show that WMLR enjoys global convergence and generalization guarantees. We prove that WMLR’s sample complexity grows linearly with the dimension of data. Finally, we discuss the application of WMLR to the federated learning task where the training samples are collected by multiple agents in a network. Unlike the Expectation-Maximization algorithm, WMLR directly extends to the distributed, federated learning setting. We support our theoretical results through several numerical experiments, which highlight our framework’s ability to handle the federated learning setting with mixture models.' volume: 139 URL: https://proceedings.mlr.press/v139/diamandis21a.html PDF: http://proceedings.mlr.press/v139/diamandis21a/diamandis21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-diamandis21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Theo family: Diamandis - given: Yonina family: Eldar - given: Alireza family: Fallah - given: Farzan family: Farnia - given: Asuman family: Ozdaglar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2697-2706 id: diamandis21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2697 lastpage: 2706 published: 2021-07-01 00:00:00 +0000 - title: 'Context-Aware Online Collective Inference for Templated Graphical Models' abstract: 'In this work, we examine online collective inference, the problem of maintaining and performing inference over a sequence of evolving graphical models. We utilize templated graphical models (TGM), a general class of graphical models expressed via templates and instantiated with data. A key challenge is minimizing the cost of instantiating the updated model. To address this, we define a class of exact and approximate context-aware methods for updating an existing TGM. 
These methods avoid a full re-instantiation by using the context of the updates to only add relevant components to the graphical model. Further, we provide stability bounds for the general online inference problem and regret bounds for a proposed approximation. Finally, we implement our approach in probabilistic soft logic, and test it on several online collective inference tasks. Through these experiments we verify the bounds on regret and stability, and show that our approximate online approach consistently runs two to five times faster than the offline alternative while, surprisingly, maintaining the quality of the predictions.' volume: 139 URL: https://proceedings.mlr.press/v139/dickens21a.html PDF: http://proceedings.mlr.press/v139/dickens21a/dickens21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-dickens21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Charles family: Dickens - given: Connor family: Pryor - given: Eriq family: Augustine - given: Alexander family: Miller - given: Lise family: Getoor editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2707-2716 id: dickens21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2707 lastpage: 2716 published: 2021-07-01 00:00:00 +0000 - title: 'ARMS: Antithetic-REINFORCE-Multi-Sample Gradient for Binary Variables' abstract: 'Estimating the gradients for binary variables is a task that arises frequently in various domains, such as training discrete latent variable models. What has been commonly used is a REINFORCE based Monte Carlo estimation method that uses either independent samples or pairs of negatively correlated samples. To better utilize more than two samples, we propose ARMS, an Antithetic REINFORCE-based Multi-Sample gradient estimator. ARMS uses a copula to generate any number of mutually antithetic samples. It is unbiased, has low variance, and generalizes both DisARM, which we show to be ARMS with two samples, and the leave-one-out REINFORCE (LOORF) estimator, which is ARMS with uncorrelated samples. We evaluate ARMS on several datasets for training generative models, and our experimental results show that it outperforms competing methods. We also develop a version of ARMS for optimizing the multi-sample variational bound, and show that it outperforms both VIMCO and DisARM. The code is publicly available.' volume: 139 URL: https://proceedings.mlr.press/v139/dimitriev21a.html PDF: http://proceedings.mlr.press/v139/dimitriev21a/dimitriev21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-dimitriev21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Aleksandar family: Dimitriev - given: Mingyuan family: Zhou editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2717-2727 id: dimitriev21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2717 lastpage: 2727 published: 2021-07-01 00:00:00 +0000 - title: 'XOR-CD: Linearly Convergent Constrained Structure Generation' abstract: 'We propose XOR-Contrastive Divergence learning (XOR-CD), a provable approach for constrained structure generation, which remains difficult for state-of-the-art neural network and constraint reasoning approaches. 
XOR-CD harnesses XOR-Sampling to generate samples from the model distribution in CD learning and is guaranteed to generate valid structures. In addition, XOR-CD has a linear convergence rate towards the global maximum of the likelihood function within a vanishing constant in learning exponential family models. Constraint satisfaction enabled by XOR-CD also boosts its learning performance. Our real-world experiments on data-driven experimental design, dispatching route generation, and sequence-based protein homology detection demonstrate the superior performance of XOR-CD compared to baseline approaches in generating valid structures as well as capturing the inductive bias in the training set.' volume: 139 URL: https://proceedings.mlr.press/v139/ding21a.html PDF: http://proceedings.mlr.press/v139/ding21a/ding21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ding21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Fan family: Ding - given: Jianzhu family: Ma - given: Jinbo family: Xu - given: Yexiang family: Xue editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2728-2738 id: ding21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2728 lastpage: 2738 published: 2021-07-01 00:00:00 +0000 - title: 'Dual Principal Component Pursuit for Robust Subspace Learning: Theory and Algorithms for a Holistic Approach' abstract: 'The Dual Principal Component Pursuit (DPCP) method has been proposed to robustly recover a subspace of high-relative dimension from corrupted data. Existing analyses and algorithms of DPCP, however, mainly focus on finding a normal to a single hyperplane that contains the inliers. Although these algorithms can be extended to a subspace of higher co-dimension through a recursive approach that sequentially finds a new basis element of the space orthogonal to the subspace, this procedure is computationally expensive and lacks convergence guarantees. In this paper, we consider a DPCP approach for simultaneously computing the entire basis of the orthogonal complement subspace (we call this a holistic approach) by solving a non-convex non-smooth optimization problem over the Grassmannian. We provide geometric and statistical analyses for the global optimality and prove that it can tolerate as many outliers as the square of the number of inliers, under both noiseless and noisy settings. We then present a Riemannian regularity condition for the problem, which is then used to prove that a Riemannian subgradient method converges linearly to a neighborhood of the orthogonal subspace with error proportional to the noise level.' 
volume: 139 URL: https://proceedings.mlr.press/v139/ding21b.html PDF: http://proceedings.mlr.press/v139/ding21b/ding21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ding21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tianyu family: Ding - given: Zhihui family: Zhu - given: Rene family: Vidal - given: Daniel P family: Robinson editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2739-2748 id: ding21b issued: date-parts: - 2021 - 7 - 1 firstpage: 2739 lastpage: 2748 published: 2021-07-01 00:00:00 +0000 - title: 'Coded-InvNet for Resilient Prediction Serving Systems' abstract: 'Inspired by a new coded computation algorithm for invertible functions, we propose Coded-InvNet, a new approach to design resilient prediction serving systems that can gracefully handle stragglers or node failures. Coded-InvNet leverages recent findings in the deep learning literature such as invertible neural networks, Manifold Mixup, and domain translation algorithms, identifying interesting research directions that span across machine learning and systems. Our experimental results show that Coded-InvNet can outperform existing approaches, especially when the compute resource overhead is as low as 10%. For instance, without knowing which of the ten workers is going to fail, our algorithm can design a backup task so that it can correctly recover the missing prediction result with an accuracy of 85.9%, significantly outperforming the previous SOTA by 32.5%.' volume: 139 URL: https://proceedings.mlr.press/v139/dinh21a.html PDF: http://proceedings.mlr.press/v139/dinh21a/dinh21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-dinh21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tuan family: Dinh - given: Kangwook family: Lee editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2749-2759 id: dinh21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2749 lastpage: 2759 published: 2021-07-01 00:00:00 +0000 - title: 'Estimation and Quantization of Expected Persistence Diagrams' abstract: 'Persistence diagrams (PDs) are the most common descriptors used to encode the topology of structured data appearing in challenging learning tasks; think e.g. of graphs, time series or point clouds sampled close to a manifold. Given random objects and the corresponding distribution of PDs, one may want to build a statistical summary—such as a mean—of these random PDs, which is however not a trivial task as the natural geometry of the space of PDs is not linear. In this article, we study two such summaries, the Expected Persistence Diagram (EPD), and its quantization. The EPD is a measure supported on $\mathbb{R}^2$, which may be approximated by its empirical counterpart. We prove that this estimator is optimal from a minimax standpoint on a large class of models with a parametric rate of convergence. The empirical EPD is simple and efficient to compute, but possibly has a very large support, hindering its use in practice. To overcome this issue, we propose an algorithm to compute a quantization of the empirical EPD, a measure with small support which is shown to approximate with near-optimal rates a quantization of the theoretical EPD.' 
volume: 139 URL: https://proceedings.mlr.press/v139/divol21a.html PDF: http://proceedings.mlr.press/v139/divol21a/divol21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-divol21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Vincent family: Divol - given: Theo family: Lacombe editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2760-2770 id: divol21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2760 lastpage: 2770 published: 2021-07-01 00:00:00 +0000 - title: 'On Energy-Based Models with Overparametrized Shallow Neural Networks' abstract: 'Energy-based models (EBMs) are a simple yet powerful framework for generative modeling. They are based on a trainable energy function which defines an associated Gibbs measure, and they can be trained and sampled from via well-established statistical tools, such as MCMC. Neural networks may be used as energy function approximators, providing both a rich class of expressive models as well as a flexible device to incorporate data structure. In this work we focus on shallow neural networks. Building from the incipient theory of overparametrized neural networks, we show that models trained in the so-called ’active’ regime provide a statistical advantage over their associated ’lazy’ or kernel regime, leading to improved adaptivity to hidden low-dimensional structure in the data distribution, as already observed in supervised learning. Our study covers both the maximum likelihood and Stein Discrepancy estimators, and we validate our theoretical results with numerical experiments on synthetic data.' volume: 139 URL: https://proceedings.mlr.press/v139/domingo-enrich21a.html PDF: http://proceedings.mlr.press/v139/domingo-enrich21a/domingo-enrich21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-domingo-enrich21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Carles family: Domingo-Enrich - given: Alberto family: Bietti - given: Eric family: Vanden-Eijnden - given: Joan family: Bruna editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2771-2782 id: domingo-enrich21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2771 lastpage: 2782 published: 2021-07-01 00:00:00 +0000 - title: 'Kernel-Based Reinforcement Learning: A Finite-Time Analysis' abstract: 'We consider the exploration-exploitation dilemma in finite-horizon reinforcement learning problems whose state-action space is endowed with a metric. We introduce Kernel-UCBVI, a model-based optimistic algorithm that leverages the smoothness of the MDP and a non-parametric kernel estimator of the rewards and transitions to efficiently balance exploration and exploitation. For problems with $K$ episodes and horizon $H$, we provide a regret bound of $\widetilde{O}\left( H^3 K^{\frac{2d}{2d+1}}\right)$, where $d$ is the covering dimension of the joint state-action space. This is the first regret bound for kernel-based RL using smoothing kernels, which requires very weak assumptions on the MDP and applies to a wide range of tasks. We empirically validate our approach in continuous MDPs with sparse rewards.' 
volume: 139 URL: https://proceedings.mlr.press/v139/domingues21a.html PDF: http://proceedings.mlr.press/v139/domingues21a/domingues21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-domingues21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Omar Darwiche family: Domingues - given: Pierre family: Menard - given: Matteo family: Pirotta - given: Emilie family: Kaufmann - given: Michal family: Valko editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2783-2792 id: domingues21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2783 lastpage: 2792 published: 2021-07-01 00:00:00 +0000 - title: 'Attention is not all you need: pure attention loses rank doubly exponentially with depth' abstract: 'Attention-based architectures have become ubiquitous in machine learning. Yet, our understanding of the reasons for their effectiveness remains limited. This work proposes a new way to understand self-attention networks: we show that their output can be decomposed into a sum of smaller terms—or paths—each involving the operation of a sequence of attention heads across layers. Using this path decomposition, we prove that self-attention possesses a strong inductive bias towards "token uniformity". Specifically, without skip connections or multi-layer perceptrons (MLPs), the output converges doubly exponentially to a rank-1 matrix. On the other hand, skip connections and MLPs stop the output from degeneration. Our experiments verify the convergence results on standard transformer architectures.' volume: 139 URL: https://proceedings.mlr.press/v139/dong21a.html PDF: http://proceedings.mlr.press/v139/dong21a/dong21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-dong21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yihe family: Dong - given: Jean-Baptiste family: Cordonnier - given: Andreas family: Loukas editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2793-2803 id: dong21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2793 lastpage: 2803 published: 2021-07-01 00:00:00 +0000 - title: 'How rotational invariance of common kernels prevents generalization in high dimensions' abstract: 'Kernel ridge regression is well-known to achieve minimax optimal rates in low-dimensional settings. However, its behavior in high dimensions is much less understood. Recent work establishes consistency for high-dimensional kernel regression for a number of specific assumptions on the data distribution. In this paper, we show that in high dimensions, the rotational invariance property of commonly studied kernels (such as RBF, inner product kernels and fully-connected NTK of any depth) leads to inconsistent estimation unless the ground truth is a low-degree polynomial. Our lower bound on the generalization error holds for a wide range of distributions and kernels with different eigenvalue decays. This lower bound suggests that consistency results for kernel ridge regression in high dimensions generally require a more refined analysis that depends on the structure of the kernel beyond its eigenvalue decay.' 
volume: 139 URL: https://proceedings.mlr.press/v139/donhauser21a.html PDF: http://proceedings.mlr.press/v139/donhauser21a/donhauser21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-donhauser21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Konstantin family: Donhauser - given: Mingqi family: Wu - given: Fanny family: Yang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2804-2814 id: donhauser21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2804 lastpage: 2814 published: 2021-07-01 00:00:00 +0000 - title: 'Fast Stochastic Bregman Gradient Methods: Sharp Analysis and Variance Reduction' abstract: 'We study the problem of minimizing a relatively-smooth convex function using stochastic Bregman gradient methods. We first prove the convergence of Bregman Stochastic Gradient Descent (BSGD) to a region that depends on the noise (magnitude of the gradients) at the optimum. In particular, BSGD quickly converges to the exact minimizer when this noise is zero (interpolation setting, in which the data is fit perfectly). Otherwise, when the objective has a finite sum structure, we show that variance reduction can be used to counter the effect of noise. In particular, fast convergence to the exact minimizer can be obtained under additional regularity assumptions on the Bregman reference function. We illustrate the effectiveness of our approach on two key applications of relative smoothness: tomographic reconstruction with Poisson noise and statistical preconditioning for distributed optimization.' volume: 139 URL: https://proceedings.mlr.press/v139/dragomir21a.html PDF: http://proceedings.mlr.press/v139/dragomir21a/dragomir21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-dragomir21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Radu Alexandru family: Dragomir - given: Mathieu family: Even - given: Hadrien family: Hendrikx editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2815-2825 id: dragomir21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2815 lastpage: 2825 published: 2021-07-01 00:00:00 +0000 - title: 'Bilinear Classes: A Structural Framework for Provable Generalization in RL' abstract: 'This work introduces Bilinear Classes, a new structural framework, which permit generalization in reinforcement learning in a wide variety of settings through the use of function approximation. The framework incorporates nearly all existing models in which a polynomial sample complexity is achievable, and, notably, also includes new models, such as the Linear Q*/V* model in which both the optimal Q-function and the optimal V-function are linear in some known feature space. Our main result provides an RL algorithm which has polynomial sample complexity for Bilinear Classes; notably, this sample complexity is stated in terms of a reduction to the generalization error of an underlying supervised learning sub-problem. These bounds nearly match the best known sample complexity bounds for existing models. 
Furthermore, this framework also extends to the infinite dimensional (RKHS) setting: for the Linear Q*/V* model, linear MDPs, and linear mixture MDPs, we provide sample complexities that have no explicit dependence on the feature dimension (which could be infinite), but instead depend only on information-theoretic quantities.' volume: 139 URL: https://proceedings.mlr.press/v139/du21a.html PDF: http://proceedings.mlr.press/v139/du21a/du21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-du21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Simon family: Du - given: Sham family: Kakade - given: Jason family: Lee - given: Shachar family: Lovett - given: Gaurav family: Mahajan - given: Wen family: Sun - given: Ruosong family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2826-2836 id: du21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2826 lastpage: 2836 published: 2021-07-01 00:00:00 +0000 - title: 'Improved Contrastive Divergence Training of Energy-Based Models' abstract: 'Contrastive divergence is a popular method of training energy-based models, but is known to have difficulties with training stability. We propose an adaptation to improve contrastive divergence training by scrutinizing a gradient term that is difficult to calculate and is often left out for convenience. We show that this gradient term is numerically significant and in practice is important to avoid training instabilities, while being tractable to estimate. We further highlight how data augmentation and multi-scale processing can be used to improve model robustness and generation quality. Finally, we empirically evaluate the stability of model architectures and show improved performance on a host of benchmarks and use cases, such as image generation, OOD detection, and compositional generation.' volume: 139 URL: https://proceedings.mlr.press/v139/du21b.html PDF: http://proceedings.mlr.press/v139/du21b/du21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-du21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yilun family: Du - given: Shuang family: Li - given: Joshua family: Tenenbaum - given: Igor family: Mordatch editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2837-2848 id: du21b issued: date-parts: - 2021 - 7 - 1 firstpage: 2837 lastpage: 2848 published: 2021-07-01 00:00:00 +0000 - title: 'Order-Agnostic Cross Entropy for Non-Autoregressive Machine Translation' abstract: 'We propose a new training objective named order-agnostic cross entropy (OaXE) for fully non-autoregressive translation (NAT) models. OaXE improves the standard cross-entropy loss to ameliorate the effect of word reordering, which is a common source of the critical multimodality problem in NAT. Concretely, OaXE removes the penalty for word order errors, and computes the cross entropy loss based on the best possible alignment between model predictions and target tokens. Since the log loss is very sensitive to invalid references, we leverage cross entropy initialization and loss truncation to ensure the model focuses on a good part of the search space.
Extensive experiments on major WMT benchmarks demonstrate that OaXE substantially improves translation performance, setting new state of the art for fully NAT models. Further analyses show that OaXE indeed alleviates the multimodality problem by reducing token repetitions and increasing prediction confidence. Our code, data, and trained models are available at https://github.com/tencent-ailab/ICML21_OAXE.' volume: 139 URL: https://proceedings.mlr.press/v139/du21c.html PDF: http://proceedings.mlr.press/v139/du21c/du21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-du21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Cunxiao family: Du - given: Zhaopeng family: Tu - given: Jing family: Jiang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2849-2859 id: du21c issued: date-parts: - 2021 - 7 - 1 firstpage: 2849 lastpage: 2859 published: 2021-07-01 00:00:00 +0000 - title: 'Putting the “Learning" into Learning-Augmented Algorithms for Frequency Estimation' abstract: 'In learning-augmented algorithms, algorithms are enhanced using information from a machine learning algorithm. In turn, this suggests that we should tailor our machine-learning approach for the target algorithm. We here consider this synergy in the context of the learned count-min sketch from (Hsu et al., 2019). Learning here is used to predict heavy hitters from a data stream, which are counted explicitly outside the sketch. We show that an approximately sufficient statistic for the performance of the underlying count-min sketch is given by the coverage of the predictor, or the normalized $L^1$ norm of keys that are filtered by the predictor to be explicitly counted. We show that machine learning models which are trained to optimize for coverage lead to large improvements in performance over prior approaches according to the average absolute frequency error. Our source code can be found at https://github.com/franklynwang/putting-the-learning-in-LAA.' volume: 139 URL: https://proceedings.mlr.press/v139/du21d.html PDF: http://proceedings.mlr.press/v139/du21d/du21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-du21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Elbert family: Du - given: Franklyn family: Wang - given: Michael family: Mitzenmacher editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2860-2869 id: du21d issued: date-parts: - 2021 - 7 - 1 firstpage: 2860 lastpage: 2869 published: 2021-07-01 00:00:00 +0000 - title: 'Estimating $α$-Rank from A Few Entries with Low Rank Matrix Completion' abstract: 'Multi-agent evaluation aims at the assessment of an agent’s strategy on the basis of interaction with others. Typically, existing methods such as $\alpha$-rank and its approximation still require to exhaustively compare all pairs of joint strategies for an accurate ranking, which in practice is computationally expensive. In this paper, we aim to reduce the number of pairwise comparisons in recovering a satisfying ranking for $n$ strategies in two-player meta-games, by exploring the fact that agents with similar skills may achieve similar payoffs against others. 
Two situations are considered: the first one is when we can obtain the true payoffs; the other one is when we can only access noisy payoffs. Based on these formulations, we leverage low-rank matrix completion and design two novel algorithms for noise-free and noisy evaluations, respectively. For both of these settings, we theorize that $O(nr \log n)$ ($n$ is the number of agents and $r$ is the rank of the payoff matrix) payoff entries are required to achieve sufficiently good strategy evaluation performance. Empirical results on evaluating the strategies in three synthetic games and twelve real-world games demonstrate that strategy evaluation from a few entries can lead to comparable performance to algorithms with full knowledge of the payoff matrix.' volume: 139 URL: https://proceedings.mlr.press/v139/du21e.html PDF: http://proceedings.mlr.press/v139/du21e/du21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-du21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yali family: Du - given: Xue family: Yan - given: Xu family: Chen - given: Jun family: Wang - given: Haifeng family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2870-2879 id: du21e issued: date-parts: - 2021 - 7 - 1 firstpage: 2870 lastpage: 2879 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Diverse-Structured Networks for Adversarial Robustness' abstract: 'In adversarial training (AT), the main focus has been the objective and optimizer while the model has been less studied, so that the models being used are still those classic ones in standard training (ST). Classic network architectures (NAs) are generally worse than searched NAs in ST, which should be the same in AT. In this paper, we argue that NA and AT cannot be handled independently, since given a dataset, the optimal NA in ST would no longer be optimal in AT. That being said, AT is time-consuming itself; if we directly search NAs in AT over large search spaces, the computation will be practically infeasible. Thus, we propose the diverse-structured network (DS-Net) to significantly reduce the size of the search space: instead of low-level operations, we only consider predefined atomic blocks, where an atomic block is a time-tested building block like the residual block. There are only a few atomic blocks and thus we can weight all atomic blocks rather than find the best one in a searched block of DS-Net, which is an essential tradeoff between exploring diverse structures and exploiting the best structures. Empirical results demonstrate the advantages of DS-Net, i.e., weighting the atomic blocks.'
volume: 139 URL: https://proceedings.mlr.press/v139/du21f.html PDF: http://proceedings.mlr.press/v139/du21f/du21f.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-du21f.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xuefeng family: Du - given: Jingfeng family: Zhang - given: Bo family: Han - given: Tongliang family: Liu - given: Yu family: Rong - given: Gang family: Niu - given: Junzhou family: Huang - given: Masashi family: Sugiyama editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2880-2891 id: du21f issued: date-parts: - 2021 - 7 - 1 firstpage: 2880 lastpage: 2891 published: 2021-07-01 00:00:00 +0000 - title: 'Risk Bounds and Rademacher Complexity in Batch Reinforcement Learning' abstract: 'This paper considers batch Reinforcement Learning (RL) with general value function approximation. Our study investigates the minimal assumptions to reliably estimate/minimize Bellman error, and characterizes the generalization performance by (local) Rademacher complexities of general function classes, which makes initial steps in bridging the gap between statistical learning theory and batch RL. Concretely, we view the Bellman error as a surrogate loss for the optimality gap, and prove the followings: (1) In double sampling regime, the excess risk of Empirical Risk Minimizer (ERM) is bounded by the Rademacher complexity of the function class. (2) In the single sampling regime, sample-efficient risk minimization is not possible without further assumptions, regardless of algorithms. However, with completeness assumptions, the excess risk of FQI and a minimax style algorithm can be again bounded by the Rademacher complexity of the corresponding function classes. (3) Fast statistical rates can be achieved by using tools of local Rademacher complexity. Our analysis covers a wide range of function classes, including finite classes, linear spaces, kernel spaces, sparse linear features, etc.' volume: 139 URL: https://proceedings.mlr.press/v139/duan21a.html PDF: http://proceedings.mlr.press/v139/duan21a/duan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-duan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yaqi family: Duan - given: Chi family: Jin - given: Zhiyuan family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2892-2902 id: duan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2892 lastpage: 2902 published: 2021-07-01 00:00:00 +0000 - title: 'Sawtooth Factorial Topic Embeddings Guided Gamma Belief Network' abstract: 'Hierarchical topic models such as the gamma belief network (GBN) have delivered promising results in mining multi-layer document representations and discovering interpretable topic taxonomies. However, they often assume in the prior that the topics at each layer are independently drawn from the Dirichlet distribution, ignoring the dependencies between the topics both at the same layer and across different layers. To relax this assumption, we propose sawtooth factorial topic embedding guided GBN, a deep generative model of documents that captures the dependencies and semantic similarities between the topics in the embedding space. 
Specifically, both the words and topics are represented as embedding vectors of the same dimension. The topic matrix at a layer is factorized into the product of a factor loading matrix and a topic embedding matrix, the transpose of which is set as the factor loading matrix of the layer above. Repeating this particular type of factorization, which shares components between adjacent layers, leads to a structure referred to as sawtooth factorization. An auto-encoding variational inference network is constructed to optimize the model parameter via stochastic gradient descent. Experiments on big corpora show that our models outperform other neural topic models on extracting deeper interpretable topics and deriving better document representations.' volume: 139 URL: https://proceedings.mlr.press/v139/duan21b.html PDF: http://proceedings.mlr.press/v139/duan21b/duan21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-duan21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zhibin family: Duan - given: Dongsheng family: Wang - given: Bo family: Chen - given: Chaojie family: Wang - given: Wenchao family: Chen - given: Yewen family: Li - given: Jie family: Ren - given: Mingyuan family: Zhou editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2903-2913 id: duan21b issued: date-parts: - 2021 - 7 - 1 firstpage: 2903 lastpage: 2913 published: 2021-07-01 00:00:00 +0000 - title: 'Exponential Reduction in Sample Complexity with Learning of Ising Model Dynamics' abstract: 'The usual setting for learning the structure and parameters of a graphical model assumes the availability of independent samples produced from the corresponding multivariate probability distribution. However, for many models the mixing time of the respective Markov chain can be very large and i.i.d. samples may not be obtained. We study the problem of reconstructing binary graphical models from correlated samples produced by a dynamical process, which is natural in many applications. We analyze the sample complexity of two estimators that are based on the interaction screening objective and the conditional likelihood loss. We observe that for samples coming from a dynamical process far from equilibrium, the sample complexity reduces exponentially compared to a dynamical process that mixes quickly.' volume: 139 URL: https://proceedings.mlr.press/v139/dutt21a.html PDF: http://proceedings.mlr.press/v139/dutt21a/dutt21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-dutt21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Arkopal family: Dutt - given: Andrey family: Lokhov - given: Marc D family: Vuffray - given: Sidhant family: Misra editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2914-2925 id: dutt21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2914 lastpage: 2925 published: 2021-07-01 00:00:00 +0000 - title: 'Reinforcement Learning Under Moral Uncertainty' abstract: 'An ambitious goal for machine learning is to create agents that behave ethically: The capacity to abide by human moral norms would greatly expand the context in which autonomous agents could be practically and safely deployed, e.g. 
fully autonomous vehicles will encounter charged moral decisions that complicate their deployment. While ethical agents could be trained by rewarding correct behavior under a specific moral theory (e.g. utilitarianism), there remains widespread disagreement about the nature of morality. Acknowledging such disagreement, recent work in moral philosophy proposes that ethical behavior requires acting under moral uncertainty, i.e. to take into account when acting that one’s credence is split across several plausible ethical theories. This paper translates such insights to the field of reinforcement learning, proposes two training methods that realize different points among competing desiderata, and trains agents in simple environments to act under moral uncertainty. The results illustrate (1) how such uncertainty can help curb extreme behavior from commitment to single theories and (2) several technical complications arising from attempting to ground moral philosophy in RL (e.g. how can a principled trade-off between two competing but incomparable reward functions be reached). The aim is to catalyze progress towards morally-competent agents and highlight the potential of RL to contribute towards the computational grounding of moral philosophy.' volume: 139 URL: https://proceedings.mlr.press/v139/ecoffet21a.html PDF: http://proceedings.mlr.press/v139/ecoffet21a/ecoffet21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ecoffet21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Adrien family: Ecoffet - given: Joel family: Lehman editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2926-2936 id: ecoffet21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2926 lastpage: 2936 published: 2021-07-01 00:00:00 +0000 - title: 'Confidence-Budget Matching for Sequential Budgeted Learning' abstract: 'A core element in decision-making under uncertainty is the feedback on the quality of the performed actions. However, in many applications, such feedback is restricted. For example, in recommendation systems, repeatedly asking the user to provide feedback on the quality of recommendations will annoy them. In this work, we formalize decision-making problems with querying budget, where there is a (possibly time-dependent) hard limit on the number of reward queries allowed. Specifically, we focus on multi-armed bandits, linear contextual bandits, and reinforcement learning problems. We start by analyzing the performance of ‘greedy’ algorithms that query a reward whenever they can. We show that in fully stochastic settings, doing so performs surprisingly well, but in the presence of any adversity, this might lead to linear regret. To overcome this issue, we propose the Confidence-Budget Matching (CBM) principle that queries rewards when the confidence intervals are wider than the inverse square root of the available budget. We analyze the performance of CBM based algorithms in different settings and show that it performs well in the presence of adversity in the contexts, initial states, and budgets.' 
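To illustrate the Confidence-Budget Matching rule described in the abstract above, the sketch below applies it to a Bernoulli multi-armed bandit: a reward is queried only when the pulled arm's confidence width exceeds the inverse square root of the remaining query budget. This is a toy sketch under our own assumptions (Hoeffding-style intervals; names such as `run_cbm_bandit` are ours), not the paper's implementation.

```python
import numpy as np

def ucb_width(counts, t, delta=0.05):
    """Hoeffding-style confidence half-width for each arm."""
    return np.sqrt(np.log(2 * max(t, 1) / delta) / (2 * np.maximum(counts, 1)))

def run_cbm_bandit(means, horizon=5000, budget=500, seed=0):
    """Confidence-Budget Matching sketch on a Bernoulli bandit.

    A reward is queried only when the pulled arm's confidence width exceeds
    1 / sqrt(remaining budget); otherwise the pull goes unobserved.
    """
    rng = np.random.default_rng(seed)
    k = len(means)
    counts, sums = np.zeros(k), np.zeros(k)
    queries_left = budget
    for t in range(1, horizon + 1):
        width = ucb_width(counts, t)
        ucb = np.where(counts > 0, sums / np.maximum(counts, 1), 1.0) + width
        arm = int(np.argmax(ucb))
        # CBM rule: spend a query only if uncertainty exceeds 1/sqrt(budget left).
        if queries_left > 0 and width[arm] > 1.0 / np.sqrt(queries_left):
            reward = float(rng.random() < means[arm])
            counts[arm] += 1
            sums[arm] += reward
            queries_left -= 1
    return counts, queries_left

if __name__ == "__main__":
    counts, left = run_cbm_bandit([0.2, 0.5, 0.8])
    print("pull counts per arm:", counts, "unused budget:", left)
```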
volume: 139 URL: https://proceedings.mlr.press/v139/efroni21a.html PDF: http://proceedings.mlr.press/v139/efroni21a/efroni21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-efroni21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yonathan family: Efroni - given: Nadav family: Merlis - given: Aadirupa family: Saha - given: Shie family: Mannor editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2937-2947 id: efroni21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2937 lastpage: 2947 published: 2021-07-01 00:00:00 +0000 - title: 'Self-Paced Context Evaluation for Contextual Reinforcement Learning' abstract: 'Reinforcement learning (RL) has made significant advances in solving a single problem in a given environment, but learning policies that generalize to unseen variations of a problem remains challenging. To improve sample efficiency for learning on such instances of a problem domain, we present Self-Paced Context Evaluation (SPaCE). Based on self-paced learning, SPaCE automatically generates instance curricula online with little computational overhead. To this end, SPaCE leverages information contained in state values during training to accelerate and improve training performance as well as generalization capabilities to new tasks from the same problem domain. Nevertheless, SPaCE is independent of the problem domain at hand and can be applied on top of any RL agent with state-value function approximation. We demonstrate SPaCE’s ability to speed up learning of different value-based RL agents on two environments, showing better generalization capabilities and up to 10x faster learning compared to naive approaches such as round robin, and to SPDRL, the closest state-of-the-art approach.' volume: 139 URL: https://proceedings.mlr.press/v139/eimer21a.html PDF: http://proceedings.mlr.press/v139/eimer21a/eimer21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-eimer21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Theresa family: Eimer - given: André family: Biedenkapp - given: Frank family: Hutter - given: Marius family: Lindauer editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2948-2958 id: eimer21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2948 lastpage: 2958 published: 2021-07-01 00:00:00 +0000 - title: 'Provably Strict Generalisation Benefit for Equivariant Models' abstract: 'It is widely believed that engineering a model to be invariant/equivariant improves generalisation. Despite the growing popularity of this approach, a precise characterisation of the generalisation benefit is lacking. By considering the simplest case of linear models, this paper provides the first provably non-zero improvement in generalisation for invariant/equivariant models when the target distribution is invariant/equivariant with respect to a compact group. Moreover, our work reveals an interesting relationship between generalisation, the number of training examples and properties of the group action. Our results rest on an observation of the structure of function spaces under averaging operators which, along with its consequences for feature averaging, may be of independent interest.'
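As a toy illustration of the feature-averaging operator mentioned at the end of the abstract above, the snippet below averages an arbitrary feature map over a finite cyclic group; the averaged features are invariant to the group action by construction. This is our own sketch and is not tied to the paper's linear-model setting.

```python
import numpy as np

def cyclic_shifts(d):
    """All actions of the cyclic group C_d on length-d vectors (circular shifts)."""
    return [lambda x, s=s: np.roll(x, s) for s in range(d)]

def group_averaged_features(phi, x, actions):
    """Average a feature map phi over a finite group; the result is group-invariant."""
    return np.mean([phi(g(x)) for g in actions], axis=0)

if __name__ == "__main__":
    x = np.arange(6.0)
    actions = cyclic_shifts(len(x))
    phi = lambda v: v ** 2                       # an arbitrary, non-invariant feature map
    f1 = group_averaged_features(phi, x, actions)
    f2 = group_averaged_features(phi, np.roll(x, 2), actions)
    print(np.allclose(f1, f2))                   # True: invariance to the group action
```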
volume: 139 URL: https://proceedings.mlr.press/v139/elesedy21a.html PDF: http://proceedings.mlr.press/v139/elesedy21a/elesedy21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-elesedy21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Bryn family: Elesedy - given: Sheheryar family: Zaidi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2959-2969 id: elesedy21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2959 lastpage: 2969 published: 2021-07-01 00:00:00 +0000 - title: 'Efficient Iterative Amortized Inference for Learning Symmetric and Disentangled Multi-Object Representations' abstract: 'Unsupervised multi-object representation learning depends on inductive biases to guide the discovery of object-centric representations that generalize. However, we observe that methods for learning these representations are either impractical due to long training times and large memory consumption or forego key inductive biases. In this work, we introduce EfficientMORL, an efficient framework for the unsupervised learning of object-centric representations. We show that optimization challenges caused by requiring both symmetry and disentanglement can in fact be addressed by high-cost iterative amortized inference by designing the framework to minimize its dependence on it. We take a two-stage approach to inference: first, a hierarchical variational autoencoder extracts symmetric and disentangled representations through bottom-up inference, and second, a lightweight network refines the representations with top-down feedback. The number of refinement steps taken during training is reduced following a curriculum, so that at test time with zero steps the model achieves 99.1% of the refined decomposition performance. We demonstrate strong object decomposition and disentanglement on the standard multi-object benchmark while achieving nearly an order of magnitude faster training and test time inference over the previous state-of-the-art model.' volume: 139 URL: https://proceedings.mlr.press/v139/emami21a.html PDF: http://proceedings.mlr.press/v139/emami21a/emami21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-emami21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Patrick family: Emami - given: Pan family: He - given: Sanjay family: Ranka - given: Anand family: Rangarajan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2970-2981 id: emami21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2970 lastpage: 2981 published: 2021-07-01 00:00:00 +0000 - title: 'Implicit Bias of Linear RNNs' abstract: 'Contemporary wisdom based on empirical studies suggests that standard recurrent neural networks (RNNs) do not perform well on tasks requiring long-term memory. However, RNNs’ poor ability to capture long-term dependencies has not been fully understood. This paper provides a rigorous explanation of this property in the special case of linear RNNs. Although this work is limited to linear RNNs, even these systems have traditionally been difficult to analyze due to their non-linear parameterization. 
Using recently-developed kernel regime analysis, our main result shows that as the number of hidden units goes to infinity, linear RNNs learned from random initializations are functionally equivalent to a certain weighted 1D-convolutional network. Importantly, the weightings in the equivalent model cause an implicit bias to elements with smaller time lags in the convolution, and hence shorter memory. The degree of this bias depends on the variance of the transition matrix at initialization and is related to the classic exploding and vanishing gradients problem. The theory is validated with both synthetic and real data experiments.' volume: 139 URL: https://proceedings.mlr.press/v139/emami21b.html PDF: http://proceedings.mlr.press/v139/emami21b/emami21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-emami21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Melikasadat family: Emami - given: Mojtaba family: Sahraee-Ardakan - given: Parthe family: Pandit - given: Sundeep family: Rangan - given: Alyson K family: Fletcher editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2982-2992 id: emami21b issued: date-parts: - 2021 - 7 - 1 firstpage: 2982 lastpage: 2992 published: 2021-07-01 00:00:00 +0000 - title: 'Global Optimality Beyond Two Layers: Training Deep ReLU Networks via Convex Programs' abstract: 'Understanding the fundamental mechanism behind the success of deep neural networks is one of the key challenges in the modern machine learning literature. Despite numerous attempts, a solid theoretical analysis is yet to be developed. In this paper, we develop a novel unified framework to reveal a hidden regularization mechanism through the lens of convex optimization. We first show that the training of multiple three-layer ReLU sub-networks with weight decay regularization can be equivalently cast as a convex optimization problem in a higher dimensional space, where sparsity is enforced via a group $\ell_1$-norm regularization. Consequently, ReLU networks can be interpreted as high dimensional feature selection methods. More importantly, we then prove that the equivalent convex problem can be globally optimized by a standard convex optimization solver with a polynomial-time complexity with respect to the number of samples and data dimension when the width of the network is fixed. Finally, we numerically validate our theoretical results via experiments involving both synthetic and real datasets.' volume: 139 URL: https://proceedings.mlr.press/v139/ergen21a.html PDF: http://proceedings.mlr.press/v139/ergen21a/ergen21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ergen21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tolga family: Ergen - given: Mert family: Pilanci editor: - given: Marina family: Meila - given: Tong family: Zhang page: 2993-3003 id: ergen21a issued: date-parts: - 2021 - 7 - 1 firstpage: 2993 lastpage: 3003 published: 2021-07-01 00:00:00 +0000 - title: 'Revealing the Structure of Deep Neural Networks via Convex Duality' abstract: 'We study regularized deep neural networks (DNNs) and introduce a convex analytic framework to characterize the structure of the hidden layers. 
We show that a set of optimal hidden layer weights for a norm regularized DNN training problem can be explicitly found as the extreme points of a convex set. For the special case of deep linear networks, we prove that each optimal weight matrix aligns with the previous layers via duality. More importantly, we apply the same characterization to deep ReLU networks with whitened data and prove the same weight alignment holds. As a corollary, we also prove that norm regularized deep ReLU networks yield spline interpolation for one-dimensional datasets which was previously known only for two-layer networks. Furthermore, we provide closed-form solutions for the optimal layer weights when data is rank-one or whitened. The same analysis also applies to architectures with batch normalization even for arbitrary data. Therefore, we obtain a complete explanation for a recent empirical observation termed Neural Collapse where class means collapse to the vertices of a simplex equiangular tight frame.' volume: 139 URL: https://proceedings.mlr.press/v139/ergen21b.html PDF: http://proceedings.mlr.press/v139/ergen21b/ergen21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ergen21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tolga family: Ergen - given: Mert family: Pilanci editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3004-3014 id: ergen21b issued: date-parts: - 2021 - 7 - 1 firstpage: 3004 lastpage: 3014 published: 2021-07-01 00:00:00 +0000 - title: 'Whitening for Self-Supervised Representation Learning' abstract: 'Most of the current self-supervised representation learning (SSL) methods are based on the contrastive loss and the instance-discrimination task, where augmented versions of the same image instance ("positives") are contrasted with instances extracted from other images ("negatives"). For the learning to be effective, many negatives should be compared with a positive pair, which is computationally demanding. In this paper, we propose a different direction and a new loss function for SSL, which is based on the whitening of the latent-space features. The whitening operation has a "scattering" effect on the batch samples, avoiding degenerate solutions where all the sample representations collapse to a single point. Our solution does not require asymmetric networks and it is conceptually simple. Moreover, since negatives are not needed, we can extract multiple positive pairs from the same image instance. The source code of the method and of all the experiments is available at: https://github.com/htdt/self-supervised.' 
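A rough sketch of the whitening idea in the abstract above (not the paper's exact loss) is to ZCA-whiten a batch of embeddings, so that the batch covariance becomes the identity, and then apply an MSE between the two augmented views; the NumPy helpers below are hypothetical and only illustrative.

```python
import numpy as np

def zca_whiten(z, eps=1e-5):
    """ZCA-whiten a batch of embeddings z of shape (n, d).

    After whitening, the batch covariance is (approximately) the identity,
    which prevents all representations from collapsing to a single point.
    """
    z = z - z.mean(axis=0, keepdims=True)
    cov = z.T @ z / (len(z) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    w = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return z @ w

def whitened_mse(z1, z2):
    """MSE between whitened embeddings of two augmented views of the same batch."""
    return float(np.mean((zca_whiten(z1) - zca_whiten(z2)) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z = rng.normal(size=(256, 32))
    z_aug = z + 0.1 * rng.normal(size=z.shape)
    print("loss:", whitened_mse(z, z_aug))
```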
volume: 139 URL: https://proceedings.mlr.press/v139/ermolov21a.html PDF: http://proceedings.mlr.press/v139/ermolov21a/ermolov21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ermolov21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Aleksandr family: Ermolov - given: Aliaksandr family: Siarohin - given: Enver family: Sangineto - given: Nicu family: Sebe editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3015-3024 id: ermolov21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3015 lastpage: 3024 published: 2021-07-01 00:00:00 +0000 - title: 'Graph Mixture Density Networks' abstract: 'We introduce the Graph Mixture Density Networks, a new family of machine learning models that can fit multimodal output distributions conditioned on graphs of arbitrary topology. By combining ideas from mixture models and graph representation learning, we address a broader class of challenging conditional density estimation problems that rely on structured data. In this respect, we evaluate our method on a new benchmark application that leverages random graphs for stochastic epidemic simulations. We show a significant improvement in the likelihood of epidemic outcomes when taking into account both multimodality and structure. The empirical analysis is complemented by two real-world regression tasks showing the effectiveness of our approach in modeling the output prediction uncertainty. Graph Mixture Density Networks open appealing research opportunities in the study of structure-dependent phenomena that exhibit non-trivial conditional output distributions.' volume: 139 URL: https://proceedings.mlr.press/v139/errica21a.html PDF: http://proceedings.mlr.press/v139/errica21a/errica21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-errica21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Federico family: Errica - given: Davide family: Bacciu - given: Alessio family: Micheli editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3025-3035 id: errica21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3025 lastpage: 3035 published: 2021-07-01 00:00:00 +0000 - title: 'Cross-Gradient Aggregation for Decentralized Learning from Non-IID Data' abstract: 'Decentralized learning enables a group of collaborative agents to learn models using a distributed dataset without the need for a central parameter server. Recently, decentralized learning algorithms have demonstrated state-of-the-art results on benchmark data sets, comparable with centralized algorithms. However, the key assumption to achieve competitive performance is that the data is independently and identically distributed (IID) among the agents which, in real-life applications, is often not applicable. Inspired by ideas from continual learning, we propose Cross-Gradient Aggregation (CGA), a novel decentralized learning algorithm where (i) each agent aggregates cross-gradient information, i.e., derivatives of its model with respect to its neighbors’ datasets, and (ii) updates its model using a projected gradient based on quadratic programming (QP). We theoretically analyze the convergence characteristics of CGA and demonstrate its efficiency on non-IID data distributions sampled from the MNIST and CIFAR-10 datasets. 
Our empirical comparisons show superior learning performance of CGA over existing state-of-the-art decentralized learning algorithms, as well as maintaining the improved performance under information compression to reduce peer-to-peer communication overhead. The code is available here on GitHub.' volume: 139 URL: https://proceedings.mlr.press/v139/esfandiari21a.html PDF: http://proceedings.mlr.press/v139/esfandiari21a/esfandiari21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-esfandiari21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yasaman family: Esfandiari - given: Sin Yong family: Tan - given: Zhanhong family: Jiang - given: Aditya family: Balu - given: Ethan family: Herron - given: Chinmay family: Hegde - given: Soumik family: Sarkar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3036-3046 id: esfandiari21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3036 lastpage: 3046 published: 2021-07-01 00:00:00 +0000 - title: 'Weight-covariance alignment for adversarially robust neural networks' abstract: 'Stochastic Neural Networks (SNNs) that inject noise into their hidden layers have recently been shown to achieve strong robustness against adversarial attacks. However, existing SNNs are usually heuristically motivated, and often rely on adversarial training, which is computationally costly. We propose a new SNN that achieves state-of-the-art performance without relying on adversarial training, and enjoys solid theoretical justification. Specifically, while existing SNNs inject learned or hand-tuned isotropic noise, our SNN learns an anisotropic noise distribution to optimize a learning-theoretic bound on adversarial robustness. We evaluate our method on a number of popular benchmarks, show that it can be applied to different architectures, and that it provides robustness to a variety of white-box and black-box attacks, while being simple and fast to train compared to existing alternatives.' volume: 139 URL: https://proceedings.mlr.press/v139/eustratiadis21a.html PDF: http://proceedings.mlr.press/v139/eustratiadis21a/eustratiadis21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-eustratiadis21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Panagiotis family: Eustratiadis - given: Henry family: Gouk - given: Da family: Li - given: Timothy family: Hospedales editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3047-3056 id: eustratiadis21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3047 lastpage: 3056 published: 2021-07-01 00:00:00 +0000 - title: 'Data augmentation for deep learning based accelerated MRI reconstruction with limited data' abstract: 'Deep neural networks have emerged as very successful tools for image restoration and reconstruction tasks. These networks are often trained end-to-end to directly reconstruct an image from a noisy or corrupted measurement of that image. To achieve state-of-the-art performance, training on large and diverse sets of images is considered critical. However, it is often difficult and/or expensive to collect large amounts of training images. 
Inspired by the success of Data Augmentation (DA) for classification problems, in this paper, we propose a pipeline for data augmentation for accelerated MRI reconstruction and study its effectiveness at reducing the required training data in a variety of settings. Our DA pipeline, MRAugment, is specifically designed to utilize the invariances present in medical imaging measurements as naive DA strategies that neglect the physics of the problem fail. Through extensive studies on multiple datasets we demonstrate that in the low-data regime DA prevents overfitting and can match or even surpass the state of the art while using significantly fewer training data, whereas in the high-data regime it has diminishing returns. Furthermore, our findings show that DA improves the robustness of the model against various shifts in the test distribution.' volume: 139 URL: https://proceedings.mlr.press/v139/fabian21a.html PDF: http://proceedings.mlr.press/v139/fabian21a/fabian21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fabian21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zalan family: Fabian - given: Reinhard family: Heckel - given: Mahdi family: Soltanolkotabi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3057-3067 id: fabian21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3057 lastpage: 3067 published: 2021-07-01 00:00:00 +0000 - title: 'Poisson-Randomised DirBN: Large Mutation is Needed in Dirichlet Belief Networks' abstract: 'The Dirichlet Belief Network (DirBN) was recently proposed as a promising deep generative model to learn interpretable deep latent distributions for objects. However, its current representation capability is limited since its latent distributions across different layers is prone to form similar patterns and can thus hardly use multi-layer structure to form flexible distributions. In this work, we propose Poisson-randomised Dirichlet Belief Networks (Pois-DirBN), which allows large mutations for the latent distributions across layers to enlarge the representation capability. Based on our key idea of inserting Poisson random variables in the layer-wise connection, Pois-DirBN first introduces a component-wise propagation mechanism to enable latent distributions to have large variations across different layers. Then, we develop a layer-wise Gibbs sampling algorithm to infer the latent distributions, leading to a larger number of effective layers compared to DirBN. In addition, we integrate out latent distributions and form a multi-stochastic deep integer network, which provides an alternative view on Pois-DirBN. We apply Pois-DirBN to relational modelling and validate its effectiveness through improved link prediction performance and more interpretable latent distribution visualisations. The code can be downloaded at https://github.com/xuhuifan/Pois_DirBN.' volume: 139 URL: https://proceedings.mlr.press/v139/fan21a.html PDF: http://proceedings.mlr.press/v139/fan21a/fan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xuhui family: Fan - given: Bin family: Li - given: Yaqiong family: Li - given: Scott A. 
family: Sisson editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3068-3077 id: fan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3068 lastpage: 3077 published: 2021-07-01 00:00:00 +0000 - title: 'Model-based Reinforcement Learning for Continuous Control with Posterior Sampling' abstract: 'Balancing exploration and exploitation is crucial in reinforcement learning (RL). In this paper, we study model-based posterior sampling for reinforcement learning (PSRL) in continuous state-action spaces theoretically and empirically. First, we show the first regret bound of PSRL in continuous spaces which is polynomial in the episode length to the best of our knowledge. With the assumption that reward and transition functions can be modeled by Bayesian linear regression, we develop a regret bound of $\tilde{O}(H^{3/2}d\sqrt{T})$, where $H$ is the episode length, $d$ is the dimension of the state-action space, and $T$ indicates the total time steps. This result matches the best-known regret bound of non-PSRL methods in linear MDPs. Our bound can be extended to nonlinear cases as well with feature embedding: using linear kernels on the feature representation $\phi$, the regret bound becomes $\tilde{O}(H^{3/2}d_{\phi}\sqrt{T})$, where $d_\phi$ is the dimension of the representation space. Moreover, we present MPC-PSRL, a model-based posterior sampling algorithm with model predictive control for action selection. To capture the uncertainty in models, we use Bayesian linear regression on the penultimate layer (the feature representation layer $\phi$) of neural networks. Empirical results show that our algorithm achieves the state-of-the-art sample efficiency in benchmark continuous control tasks compared to prior model-based algorithms, and matches the asymptotic performance of model-free algorithms.' volume: 139 URL: https://proceedings.mlr.press/v139/fan21b.html PDF: http://proceedings.mlr.press/v139/fan21b/fan21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fan21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ying family: Fan - given: Yifei family: Ming editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3078-3087 id: fan21b issued: date-parts: - 2021 - 7 - 1 firstpage: 3078 lastpage: 3087 published: 2021-07-01 00:00:00 +0000 - title: 'SECANT: Self-Expert Cloning for Zero-Shot Generalization of Visual Policies' abstract: 'Generalization has been a long-standing challenge for reinforcement learning (RL). Visual RL, in particular, can be easily distracted by irrelevant factors in high-dimensional observation space. In this work, we consider robust policy learning which targets zero-shot generalization to unseen visual environments with large distributional shift. We propose SECANT, a novel self-expert cloning technique that leverages image augmentation in two stages to *decouple* robust representation learning from policy optimization. Specifically, an expert policy is first trained by RL from scratch with weak augmentations. A student network then learns to mimic the expert policy by supervised learning with strong augmentations, making its representation more robust against visual variations compared to the expert. Extensive experiments demonstrate that SECANT significantly advances the state of the art in zero-shot generalization across 4 challenging domains. 
Our average reward improvements over prior SOTAs are: DeepMind Control (+26.5%), robotic manipulation (+337.8%), vision-based autonomous driving (+47.7%), and indoor object navigation (+15.8%). Code release and video are available at https://linxifan.github.io/secant-site/.' volume: 139 URL: https://proceedings.mlr.press/v139/fan21c.html PDF: http://proceedings.mlr.press/v139/fan21c/fan21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fan21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Linxi family: Fan - given: Guanzhi family: Wang - given: De-An family: Huang - given: Zhiding family: Yu - given: Li family: Fei-Fei - given: Yuke family: Zhu - given: Animashree family: Anandkumar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3088-3099 id: fan21c issued: date-parts: - 2021 - 7 - 1 firstpage: 3088 lastpage: 3099 published: 2021-07-01 00:00:00 +0000 - title: 'On Estimation in Latent Variable Models' abstract: 'Latent variable models have been playing a central role in statistics, econometrics, machine learning with applications to repeated observation study, panel data inference, user behavior analysis, etc. In many modern applications, the inference based on latent variable models involves one or several of the following features: the presence of complex latent structure, the observed and latent variables being continuous or discrete, constraints on parameters, and data size being large. Therefore, solving an estimation problem for general latent variable models is highly non-trivial. In this paper, we consider a gradient based method via using variance reduction technique to accelerate estimation procedure. Theoretically, we show the convergence results for the proposed method under general and mild model assumptions. The algorithm has better computational complexity compared with the classical gradient methods and maintains nice statistical properties. Various numerical results corroborate our theory.' volume: 139 URL: https://proceedings.mlr.press/v139/fang21a.html PDF: http://proceedings.mlr.press/v139/fang21a/fang21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fang21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Guanhua family: Fang - given: Ping family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3100-3110 id: fang21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3100 lastpage: 3110 published: 2021-07-01 00:00:00 +0000 - title: 'On Variational Inference in Biclustering Models' abstract: 'Biclustering structures exist ubiquitously in data matrices and the biclustering problem was first formalized by John Hartigan (1972) to cluster rows and columns simultaneously. In this paper, we develop a theory for the estimation of general biclustering models, where the data is assumed to follow certain statistical distribution with underlying biclustering structure. Due to the existence of latent variables, directly computing the maximal likelihood estimator is prohibitively difficult in practice and we instead consider the variational inference (VI) approach to solve the parameter estimation problem. 
Although variational inference methods generally have good empirical performance, there are very few theoretical results around VI. In this paper, we obtain a precise estimation bound for the variational estimator and show that it matches the minimax rate in terms of estimation error under mild assumptions in the biclustering setting. Furthermore, we study the convergence property of the coordinate ascent variational inference algorithm, where both local and global convergence results have been provided. Numerical results validate our new theories.' volume: 139 URL: https://proceedings.mlr.press/v139/fang21b.html PDF: http://proceedings.mlr.press/v139/fang21b/fang21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fang21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Guanhua family: Fang - given: Ping family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3111-3121 id: fang21b issued: date-parts: - 2021 - 7 - 1 firstpage: 3111 lastpage: 3121 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Bounds for Open-Set Learning' abstract: 'Traditional supervised learning aims to train a classifier in the closed-set world, where training and test samples share the same label space. In this paper, we target a more challenging and realistic setting: open-set learning (OSL), where there exist test samples from the classes that are unseen during training. Although researchers have designed many methods from the algorithmic perspective, there are few methods that provide generalization guarantees on their ability to achieve consistent performance on different training samples drawn from the same distribution. Motivated by transfer learning and probably approximately correct (PAC) theory, we make a bold attempt to study OSL by proving its generalization error: given training samples of size $n$, the estimation error will get close to order $O_p(1/\sqrt{n})$. This is the first study to provide a generalization bound for OSL, which we do by theoretically investigating the risk of the target classifier on unknown classes. According to our theory, a novel algorithm, called auxiliary open-set risk (AOSR), is proposed to address the OSL problem. Experiments verify the efficacy of AOSR. The code is available at github.com/AnjinLiu/Openset_Learning_AOSR.' volume: 139 URL: https://proceedings.mlr.press/v139/fang21c.html PDF: http://proceedings.mlr.press/v139/fang21c/fang21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fang21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zhen family: Fang - given: Jie family: Lu - given: Anjin family: Liu - given: Feng family: Liu - given: Guangquan family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3122-3132 id: fang21c issued: date-parts: - 2021 - 7 - 1 firstpage: 3122 lastpage: 3132 published: 2021-07-01 00:00:00 +0000 - title: 'Streaming Bayesian Deep Tensor Factorization' abstract: 'Despite the success of existing tensor factorization methods, most of them conduct a multilinear decomposition, and rarely exploit powerful modeling frameworks, like deep neural networks, to capture a variety of complicated interactions in data.
More importantly, for highly expressive, deep factorization, we lack an effective approach to handle streaming data, which are ubiquitous in real-world applications. To address these issues, we propose SBTD, a Streaming Bayesian Deep Tensor factorization method. We first use Bayesian neural networks (NNs) to build a deep tensor factorization model. We assign a spike-and-slab prior over each NN weight to encourage sparsity and to prevent overfitting. We then use the multivariate delta method and moment matching to approximate the posterior of the NN output and calculate the running model evidence, based on which we develop an efficient streaming posterior inference algorithm in the assumed-density-filtering and expectation propagation framework. Our algorithm provides responsive incremental updates for the posterior of the latent factors and NN weights upon receiving newly observed tensor entries, and meanwhile identifies and inhibits redundant/useless weights. We show the advantages of our approach in four real-world applications.' volume: 139 URL: https://proceedings.mlr.press/v139/fang21d.html PDF: http://proceedings.mlr.press/v139/fang21d/fang21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fang21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shikai family: Fang - given: Zheng family: Wang - given: Zhimeng family: Pan - given: Ji family: Liu - given: Shandian family: Zhe editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3133-3142 id: fang21d issued: date-parts: - 2021 - 7 - 1 firstpage: 3133 lastpage: 3142 published: 2021-07-01 00:00:00 +0000 - title: 'PID Accelerated Value Iteration Algorithm' abstract: 'The convergence rate of Value Iteration (VI), a fundamental procedure in dynamic programming and reinforcement learning, for solving MDPs can be slow when the discount factor is close to one. We propose modifications to VI in order to potentially accelerate its convergence behaviour. The key insight is the realization that the evolution of the value function approximations $(V_k)_{k \geq 0}$ in the VI procedure can be seen as a dynamical system. This opens up the possibility of using techniques from \emph{control theory} to modify, and potentially accelerate, these dynamics. We present such modifications based on simple controllers, such as PD (Proportional-Derivative), PI (Proportional-Integral), and PID. We present the error dynamics of these variants of VI, and provably (for certain classes of MDPs) and empirically (for more general classes) show that the convergence rate can be significantly improved. We also propose a gain adaptation mechanism in order to automatically select the controller gains, and empirically show the effectiveness of this procedure.'
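To make the controller view of value iteration in the abstract above concrete, the following is a minimal sketch of a PD-accelerated update on a tabular MDP. It is our own toy illustration rather than the authors' algorithm; the gains `kp` and `kd` are arbitrary, and `kp=1, kd=0` recovers standard value iteration.

```python
import numpy as np

def pd_value_iteration(P, R, gamma=0.99, kp=1.0, kd=0.2, iters=500):
    """Value iteration with a PD controller on the Bellman update.

    P: transitions with shape (A, S, S); R: rewards with shape (S, A).
    """
    S = R.shape[0]
    V = np.zeros(S)
    V_prev = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * np.einsum("ast,t->sa", P, V)   # Bellman backup
        TV = Q.max(axis=1)
        # Proportional term drives V toward TV; derivative term damps oscillation.
        V_new = V + kp * (TV - V) + kd * (V - V_prev)
        V_prev, V = V, V_new
    return V

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, S = 2, 5
    P = rng.dirichlet(np.ones(S), size=(A, S))         # (A, S, S) row-stochastic transitions
    R = rng.random((S, A))
    print(pd_value_iteration(P, R)[:3])
```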
volume: 139 URL: https://proceedings.mlr.press/v139/farahmand21a.html PDF: http://proceedings.mlr.press/v139/farahmand21a/farahmand21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-farahmand21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Amir-Massoud family: Farahmand - given: Mohammad family: Ghavamzadeh editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3143-3153 id: farahmand21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3143 lastpage: 3153 published: 2021-07-01 00:00:00 +0000 - title: 'Near-Optimal Entrywise Anomaly Detection for Low-Rank Matrices with Sub-Exponential Noise' abstract: 'We study the problem of identifying anomalies in a low-rank matrix observed with sub-exponential noise, motivated by applications in retail and inventory management. State of the art approaches to anomaly detection in low-rank matrices apparently fall short, since they require that non-anomalous entries be observed with vanishingly small noise (which is not the case in our problem, and indeed in many applications). So motivated, we propose a conceptually simple entrywise approach to anomaly detection in low-rank matrices. Our approach accommodates a general class of probabilistic anomaly models. We extend recent work on entrywise error guarantees for matrix completion, establishing such guarantees for sub-exponential matrices, where in addition to missing entries, a fraction of entries are corrupted by (an also unknown) anomaly model. Viewing the anomaly detection as a classification task, to the best of our knowledge, we are the first to achieve the min-max optimal detection rate (up to log factors). Using data from a massive consumer goods retailer, we show that our approach provides significant improvements over incumbent approaches to anomaly detection.' volume: 139 URL: https://proceedings.mlr.press/v139/farias21a.html PDF: http://proceedings.mlr.press/v139/farias21a/farias21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-farias21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Vivek family: Farias - given: Andrew A family: Li - given: Tianyi family: Peng editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3154-3163 id: farias21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3154 lastpage: 3163 published: 2021-07-01 00:00:00 +0000 - title: 'Connecting Optimal Ex-Ante Collusion in Teams to Extensive-Form Correlation: Faster Algorithms and Positive Complexity Results' abstract: 'We focus on the problem of finding an optimal strategy for a team of players that faces an opponent in an imperfect-information zero-sum extensive-form game. Team members are not allowed to communicate during play but can coordinate before the game. In this setting, it is known that the best the team can do is sample a profile of potentially randomized strategies (one per player) from a joint (a.k.a. correlated) probability distribution at the beginning of the game. In this paper, we first provide new modeling results about computing such an optimal distribution by drawing a connection to a different literature on extensive-form correlation. Second, we provide an algorithm that allows one for capping the number of profiles employed in the solution. 
This begets an anytime algorithm by increasing the cap. We find that often a handful of well-chosen such profiles suffices to reach optimal utility for the team. This enables team members to reach coordination through a simple and understandable plan. Finally, inspired by this observation and leveraging theoretical concepts that we introduce, we develop an efficient column-generation algorithm for finding an optimal distribution for the team. We evaluate it on a suite of common benchmark games. It is three orders of magnitude faster than the prior state of the art on games that the latter can solve and it can also solve several games that were previously unsolvable.' volume: 139 URL: https://proceedings.mlr.press/v139/farina21a.html PDF: http://proceedings.mlr.press/v139/farina21a/farina21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-farina21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Gabriele family: Farina - given: Andrea family: Celli - given: Nicola family: Gatti - given: Tuomas family: Sandholm editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3164-3173 id: farina21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3164 lastpage: 3173 published: 2021-07-01 00:00:00 +0000 - title: 'Train simultaneously, generalize better: Stability of gradient-based minimax learners' abstract: 'The success of minimax learning problems of generative adversarial networks (GANs) has been observed to depend on the minimax optimization algorithm used for their training. This dependence is commonly attributed to the convergence speed and robustness properties of the underlying optimization algorithm. In this paper, we show that the optimization algorithm also plays a key role in the generalization performance of the trained minimax model. To this end, we analyze the generalization properties of standard gradient descent ascent (GDA) and proximal point method (PPM) algorithms through the lens of algorithmic stability as defined by Bousquet & Elisseeff, 2002 under both convex-concave and nonconvex-nonconcave minimax settings. While the GDA algorithm is not guaranteed to have a vanishing excess risk in convex-concave problems, we show the PPM algorithm enjoys a bounded excess risk in the same setup. For nonconvex-nonconcave problems, we compare the generalization performance of stochastic GDA and GDmax algorithms where the latter fully solves the maximization subproblem at every iteration. Our generalization analysis suggests the superiority of GDA provided that the minimization and maximization subproblems are solved simultaneously with similar learning rates. We discuss several numerical results indicating the role of optimization algorithms in the generalization of learned minimax models.' 
volume: 139 URL: https://proceedings.mlr.press/v139/farnia21a.html PDF: http://proceedings.mlr.press/v139/farnia21a/farnia21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-farnia21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Farzan family: Farnia - given: Asuman family: Ozdaglar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3174-3185 id: farnia21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3174 lastpage: 3185 published: 2021-07-01 00:00:00 +0000 - title: 'Unbalanced minibatch Optimal Transport; applications to Domain Adaptation' abstract: 'Optimal transport distances have found many applications in machine learning for their capacity to compare non-parametric probability distributions. Yet their algorithmic complexity generally prevents their direct use on large scale datasets. Among the possible strategies to alleviate this issue, practitioners can rely on computing estimates of these distances over subsets of data, i.e. minibatches. While computationally appealing, we highlight in this paper some limits of this strategy, arguing it can lead to undesirable smoothing effects. As an alternative, we suggest that the same minibatch strategy coupled with unbalanced optimal transport can yield more robust behaviors. We discuss the associated theoretical properties, such as unbiased estimators, existence of gradients and concentration bounds. Our experimental study shows that in challenging problems associated to domain adaptation, the use of unbalanced optimal transport leads to significantly better results, competing with or surpassing recent baselines.' volume: 139 URL: https://proceedings.mlr.press/v139/fatras21a.html PDF: http://proceedings.mlr.press/v139/fatras21a/fatras21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fatras21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kilian family: Fatras - given: Thibault family: Sejourne - given: Rémi family: Flamary - given: Nicolas family: Courty editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3186-3197 id: fatras21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3186 lastpage: 3197 published: 2021-07-01 00:00:00 +0000 - title: 'Risk-Sensitive Reinforcement Learning with Function Approximation: A Debiasing Approach' abstract: 'We study function approximation for episodic reinforcement learning with entropic risk measure. We first propose an algorithm with linear function approximation. Compared to existing algorithms, which suffer from improper regularization and regression biases, this algorithm features debiasing transformations in backward induction and regression procedures. We further propose an algorithm with general function approximation, which features implicit debiasing transformations. We prove that both algorithms achieve a sublinear regret and demonstrate a trade-off between generality and efficiency. Our analysis provides a unified framework for function approximation in risk-sensitive reinforcement learning, which leads to the first sublinear regret bounds in the setting.' 
volume: 139 URL: https://proceedings.mlr.press/v139/fei21a.html PDF: http://proceedings.mlr.press/v139/fei21a/fei21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fei21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yingjie family: Fei - given: Zhuoran family: Yang - given: Zhaoran family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3198-3207 id: fei21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3198 lastpage: 3207 published: 2021-07-01 00:00:00 +0000 - title: 'Lossless Compression of Efficient Private Local Randomizers' abstract: 'Locally Differentially Private (LDP) Reports are commonly used for collection of statistics and machine learning in the federated setting. In many cases the best known LDP algorithms require sending prohibitively large messages from the client device to the server (such as when constructing histograms over a large domain or learning a high-dimensional model). Here we demonstrate a general approach that, under standard cryptographic assumptions, compresses every efficient LDP algorithm with negligible loss in privacy and utility guarantees. The practical implication of our result is that in typical applications every message can be compressed to the size of the server’s pseudo-random generator seed. From this general approach we derive low-communication algorithms for the problems of frequency estimation and high-dimensional mean estimation. Our algorithms are simpler and more accurate than existing low-communication LDP algorithms for these well-studied problems.' volume: 139 URL: https://proceedings.mlr.press/v139/feldman21a.html PDF: http://proceedings.mlr.press/v139/feldman21a/feldman21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-feldman21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Vitaly family: Feldman - given: Kunal family: Talwar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3208-3219 id: feldman21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3208 lastpage: 3219 published: 2021-07-01 00:00:00 +0000 - title: 'Dimensionality Reduction for the Sum-of-Distances Metric' abstract: 'We give a dimensionality reduction procedure to approximate the sum of distances of a given set of $n$ points in $R^d$ to any “shape” that lies in a $k$-dimensional subspace. Here, by “shape” we mean any set of points in $R^d$. Our algorithm takes an input in the form of an $n \times d$ matrix $A$, where each row of $A$ denotes a data point, and outputs a subspace $P$ of dimension $O(k^{3}/\epsilon^6)$ such that the projections of each of the $n$ points onto the subspace $P$ and the distances of each of the points to the subspace $P$ are sufficient to obtain an $\epsilon$-approximation to the sum of distances to any arbitrary shape that lies in a $k$-dimensional subspace of $R^d$. These include important problems such as $k$-median, $k$-subspace approximation, and $(j,l)$ subspace clustering with $j \cdot l \leq k$. Dimensionality reduction reduces the data storage requirement to $(n+d)k^{3}/\epsilon^6$ from nnz$(A)$. Here nnz$(A)$ could potentially be as large as $nd$. Our algorithm runs in time nnz$(A)/\epsilon^2 + (n+d)$poly$(k/\epsilon)$, up to logarithmic factors. 
For dense matrices, where nnz$(A) \approx nd$, we give a faster algorithm, that runs in time $nd + (n+d)$poly$(k/\epsilon)$ up to logarithmic factors. Our dimensionality reduction algorithm can also be used to obtain poly$(k/\epsilon)$ size coresets for $k$-median and $(k,1)$-subspace approximation problems in polynomial time.' volume: 139 URL: https://proceedings.mlr.press/v139/feng21a.html PDF: http://proceedings.mlr.press/v139/feng21a/feng21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-feng21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zhili family: Feng - given: Praneeth family: Kacham - given: David family: Woodruff editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3220-3229 id: feng21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3220 lastpage: 3229 published: 2021-07-01 00:00:00 +0000 - title: 'Reserve Price Optimization for First Price Auctions in Display Advertising' abstract: 'The display advertising industry has recently transitioned from second- to first-price auctions as its primary mechanism for ad allocation and pricing. In light of this, publishers need to re-evaluate and optimize their auction parameters, notably reserve prices. In this paper, we propose a gradient-based algorithm to adaptively update and optimize reserve prices based on estimates of bidders’ responsiveness to experimental shocks in reserves. Our key innovation is to draw on the inherent structure of the revenue objective in order to reduce the variance of gradient estimates and improve convergence rates in both theory and practice. We show that revenue in a first-price auction can be usefully decomposed into a \emph{demand} component and a \emph{bidding} component, and introduce techniques to reduce the variance of each component. We characterize the bias-variance trade-offs of these techniques and validate the performance of our proposed algorithm through experiments on synthetic data and real display ad auctions data from a major ad exchange.' volume: 139 URL: https://proceedings.mlr.press/v139/feng21b.html PDF: http://proceedings.mlr.press/v139/feng21b/feng21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-feng21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zhe family: Feng - given: Sebastien family: Lahaie - given: Jon family: Schneider - given: Jinchao family: Ye editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3230-3239 id: feng21b issued: date-parts: - 2021 - 7 - 1 firstpage: 3230 lastpage: 3239 published: 2021-07-01 00:00:00 +0000 - title: 'Uncertainty Principles of Encoding GANs' abstract: 'The compelling synthesis results of Generative Adversarial Networks (GANs) demonstrate rich semantic knowledge in their latent codes. To obtain this knowledge for downstream applications, encoding GANs has been proposed to learn encoders, such that real world data can be encoded to latent codes, which can be fed to generators to reconstruct those data. However, despite the theoretical guarantees of precise reconstruction in previous works, current algorithms generally reconstruct inputs with non-negligible deviations from inputs. 
In this paper we study this predicament of encoding GANs, which is indispensable research for the GAN community. We prove three uncertainty principles of encoding GANs in practice: a) the ‘perfect’ encoder and generator cannot be continuous at the same time, which implies that current framework of encoding GANs is ill-posed and needs rethinking; b) neural networks cannot approximate the underlying encoder and generator precisely at the same time, which explains why we cannot get ‘perfect’ encoders and generators as promised in previous theories; c) neural networks cannot be stable and accurate at the same time, which demonstrates the difficulty of training and trade-off between fidelity and disentanglement encountered in previous works. Our work may eliminate gaps between previous theories and empirical results, promote the understanding of GANs, and guide network designs for follow-up works.' volume: 139 URL: https://proceedings.mlr.press/v139/feng21c.html PDF: http://proceedings.mlr.press/v139/feng21c/feng21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-feng21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ruili family: Feng - given: Zhouchen family: Lin - given: Jiapeng family: Zhu - given: Deli family: Zhao - given: Jingren family: Zhou - given: Zheng-Jun family: Zha editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3240-3251 id: feng21c issued: date-parts: - 2021 - 7 - 1 firstpage: 3240 lastpage: 3251 published: 2021-07-01 00:00:00 +0000 - title: 'Pointwise Binary Classification with Pairwise Confidence Comparisons' abstract: 'To alleviate the data requirement for training effective binary classifiers in binary classification, many weakly supervised learning settings have been proposed. Among them, some consider using pairwise but not pointwise labels, when pointwise labels are not accessible due to privacy, confidentiality, or security reasons. However, as a pairwise label denotes whether or not two data points share a pointwise label, it cannot be easily collected if either point is equally likely to be positive or negative. Thus, in this paper, we propose a novel setting called pairwise comparison (Pcomp) classification, where we have only pairs of unlabeled data that we know one is more likely to be positive than the other. Firstly, we give a Pcomp data generation process, derive an unbiased risk estimator (URE) with theoretical guarantee, and further improve URE using correction functions. Secondly, we link Pcomp classification to noisy-label learning to develop a progressive URE and improve it by imposing consistency regularization. Finally, we demonstrate by experiments the effectiveness of our methods, which suggests Pcomp is a valuable and practically useful type of pairwise supervision besides the pairwise label.' 
volume: 139 URL: https://proceedings.mlr.press/v139/feng21d.html PDF: http://proceedings.mlr.press/v139/feng21d/feng21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-feng21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Lei family: Feng - given: Senlin family: Shu - given: Nan family: Lu - given: Bo family: Han - given: Miao family: Xu - given: Gang family: Niu - given: Bo family: An - given: Masashi family: Sugiyama editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3252-3262 id: feng21d issued: date-parts: - 2021 - 7 - 1 firstpage: 3252 lastpage: 3262 published: 2021-07-01 00:00:00 +0000 - title: 'Provably Correct Optimization and Exploration with Non-linear Policies' abstract: 'Policy optimization methods remain a powerful workhorse in empirical Reinforcement Learning (RL), with a focus on neural policies that can easily reason over complex and continuous state and/or action spaces. Theoretical understanding of strategic exploration in policy-based methods with non-linear function approximation, however, is largely missing. In this paper, we address this question by designing ENIAC, an actor-critic method that allows non-linear function approximation in the critic. We show that under certain assumptions, e.g., a bounded eluder dimension $d$ for the critic class, the learner finds a near-optimal policy in $\widetilde{O}(\mathrm{poly}(d))$ exploration rounds. The method is robust to model misspecification and strictly extends existing works on linear function approximation. We also develop some computational optimizations of our approach with slightly worse statistical guarantees, and an empirical adaptation building on existing deep RL tools. We empirically evaluate this adaptation, and show that it outperforms prior heuristics inspired by linear methods, establishing the value in correctly reasoning about the agent’s uncertainty under non-linear function approximation.' volume: 139 URL: https://proceedings.mlr.press/v139/feng21e.html PDF: http://proceedings.mlr.press/v139/feng21e/feng21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-feng21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Fei family: Feng - given: Wotao family: Yin - given: Alekh family: Agarwal - given: Lin family: Yang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3263-3273 id: feng21e issued: date-parts: - 2021 - 7 - 1 firstpage: 3263 lastpage: 3273 published: 2021-07-01 00:00:00 +0000 - title: 'KD3A: Unsupervised Multi-Source Decentralized Domain Adaptation via Knowledge Distillation' abstract: 'Conventional unsupervised multi-source domain adaptation (UMDA) methods assume all source domains can be accessed directly. However, this assumption neglects the privacy-preserving policy, where all the data and computations must be kept decentralized. There exist three challenges in this scenario: (1) Minimizing the domain distance requires the pairwise calculation of the data from the source and target domains, while the data on the source domain is not available. (2) The communication cost and privacy security limit the application of existing UMDA methods, such as domain adversarial training.
(3) Since users cannot govern the data quality, the irrelevant or malicious source domains are more likely to appear, which causes negative transfer. To address the above problems, we propose a privacy-preserving UMDA paradigm named Knowledge Distillation based Decentralized Domain Adaptation (KD3A), which performs domain adaptation through the knowledge distillation on models from different source domains. The extensive experiments show that KD3A significantly outperforms state-of-the-art UMDA approaches. Moreover, the KD3A is robust to the negative transfer and brings a 100x reduction of communication cost compared with other decentralized UMDA methods.' volume: 139 URL: https://proceedings.mlr.press/v139/feng21f.html PDF: http://proceedings.mlr.press/v139/feng21f/feng21f.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-feng21f.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Haozhe family: Feng - given: Zhaoyang family: You - given: Minghao family: Chen - given: Tianye family: Zhang - given: Minfeng family: Zhu - given: Fei family: Wu - given: Chao family: Wu - given: Wei family: Chen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3274-3283 id: feng21f issued: date-parts: - 2021 - 7 - 1 firstpage: 3274 lastpage: 3283 published: 2021-07-01 00:00:00 +0000 - title: 'Understanding Noise Injection in GANs' abstract: 'Noise injection is an effective way of circumventing overfitting and enhancing generalization in machine learning, the rationale of which has been validated in deep learning as well. Recently, noise injection exhibits surprising effectiveness when generating high-fidelity images in Generative Adversarial Networks (GANs) (e.g. StyleGAN). Despite its successful applications in GANs, the mechanism of its validity is still unclear. In this paper, we propose a geometric framework to theoretically analyze the role of noise injection in GANs. First, we point out the existence of the adversarial dimension trap inherent in GANs, which leads to the difficulty of learning a proper generator. Second, we successfully model the noise injection framework with exponential maps based on Riemannian geometry. Guided by our theories, we propose a general geometric realization for noise injection. Under our novel framework, the simple noise injection used in StyleGAN reduces to the Euclidean case. The goal of our work is to make theoretical steps towards understanding the underlying mechanism of state-of-the-art GAN algorithms. Experiments on image generation and GAN inversion validate our theory in practice.' 
volume: 139 URL: https://proceedings.mlr.press/v139/feng21g.html PDF: http://proceedings.mlr.press/v139/feng21g/feng21g.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-feng21g.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ruili family: Feng - given: Deli family: Zhao - given: Zheng-Jun family: Zha editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3284-3293 id: feng21g issued: date-parts: - 2021 - 7 - 1 firstpage: 3284 lastpage: 3293 published: 2021-07-01 00:00:00 +0000 - title: 'GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings' abstract: 'We present GNNAutoScale (GAS), a framework for scaling arbitrary message-passing GNNs to large graphs. GAS prunes entire sub-trees of the computation graph by utilizing historical embeddings from prior training iterations, leading to constant GPU memory consumption in respect to input node size without dropping any data. While existing solutions weaken the expressive power of message passing due to sub-sampling of edges or non-trainable propagations, our approach is provably able to maintain the expressive power of the original GNN. We achieve this by providing approximation error bounds of historical embeddings and show how to tighten them in practice. Empirically, we show that the practical realization of our framework, PyGAS, an easy-to-use extension for PyTorch Geometric, is both fast and memory-efficient, learns expressive node representations, closely resembles the performance of their non-scaling counterparts, and reaches state-of-the-art performance on large-scale graphs.' volume: 139 URL: https://proceedings.mlr.press/v139/fey21a.html PDF: http://proceedings.mlr.press/v139/fey21a/fey21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fey21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Matthias family: Fey - given: Jan E. family: Lenssen - given: Frank family: Weichert - given: Jure family: Leskovec editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3294-3304 id: fey21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3294 lastpage: 3304 published: 2021-07-01 00:00:00 +0000 - title: 'PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning' abstract: 'We study reinforcement learning (RL) with no-reward demonstrations, a setting in which an RL agent has access to additional data from the interaction of other agents with the same environment. However, it has no access to the rewards or goals of these agents, and their objectives and levels of expertise may vary widely. These assumptions are common in multi-agent settings, such as autonomous driving. To effectively use this data, we turn to the framework of successor features. This allows us to disentangle shared features and dynamics of the environment from agent-specific rewards and policies. We propose a multi-task inverse reinforcement learning (IRL) algorithm, called \emph{inverse temporal difference learning} (ITD), that learns shared state features, alongside per-agent successor features and preference vectors, purely from demonstrations without reward labels. 
We further show how to seamlessly integrate ITD with learning from online environment interactions, arriving at a novel algorithm for reinforcement learning with demonstrations, called $\Psi \Phi$-learning (pronounced ‘Sci-Fi’). We provide empirical evidence for the effectiveness of $\Psi \Phi$-learning as a method for improving RL, IRL, imitation, and few-shot transfer, and derive worst-case bounds for its performance in zero-shot transfer to new tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/filos21a.html PDF: http://proceedings.mlr.press/v139/filos21a/filos21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-filos21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Angelos family: Filos - given: Clare family: Lyle - given: Yarin family: Gal - given: Sergey family: Levine - given: Natasha family: Jaques - given: Gregory family: Farquhar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3305-3317 id: filos21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3305 lastpage: 3317 published: 2021-07-01 00:00:00 +0000 - title: 'A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups' abstract: 'Symmetries and equivariance are fundamental to the generalization of neural networks on domains such as images, graphs, and point clouds. Existing work has primarily focused on a small number of groups, such as the translation, rotation, and permutation groups. In this work we provide a completely general algorithm for solving for the equivariant layers of matrix groups. In addition to recovering solutions from other works as special cases, we construct multilayer perceptrons equivariant to multiple groups that have never been tackled before, including $\mathrm{O}(1,3)$, $\mathrm{O}(5)$, $\mathrm{Sp}(n)$, and the Rubik’s cube group. Our approach outperforms non-equivariant baselines, with applications to particle physics and modeling dynamical systems. We release our software library to enable researchers to construct equivariant layers for arbitrary matrix groups.' volume: 139 URL: https://proceedings.mlr.press/v139/finzi21a.html PDF: http://proceedings.mlr.press/v139/finzi21a/finzi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-finzi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Marc family: Finzi - given: Max family: Welling - given: Andrew Gordon family: Wilson editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3318-3328 id: finzi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3318 lastpage: 3328 published: 2021-07-01 00:00:00 +0000 - title: 'Few-Shot Conformal Prediction with Auxiliary Tasks' abstract: 'We develop a novel approach to conformal prediction when the target task has limited data available for training. Conformal prediction identifies a small set of promising output candidates in place of a single prediction, with guarantees that the set contains the correct answer with high probability. When training data is limited, however, the predicted set can easily become unusably large.
In this work, we obtain substantially tighter prediction sets while maintaining desirable marginal guarantees by casting conformal prediction as a meta-learning paradigm over exchangeable collections of auxiliary tasks. Our conformalization algorithm is simple, fast, and agnostic to the choice of underlying model, learning algorithm, or dataset. We demonstrate the effectiveness of this approach across a number of few-shot classification and regression tasks in natural language processing, computer vision, and computational chemistry for drug discovery.' volume: 139 URL: https://proceedings.mlr.press/v139/fisch21a.html PDF: http://proceedings.mlr.press/v139/fisch21a/fisch21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fisch21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Adam family: Fisch - given: Tal family: Schuster - given: Tommi family: Jaakkola - given: Regina family: Barzilay editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3329-3339 id: fisch21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3329 lastpage: 3339 published: 2021-07-01 00:00:00 +0000 - title: 'Scalable Certified Segmentation via Randomized Smoothing' abstract: 'We present a new certification method for image and point cloud segmentation based on randomized smoothing. The method leverages a novel scalable algorithm for prediction and certification that correctly accounts for multiple testing, necessary for ensuring statistical guarantees. The key to our approach is reliance on established multiple-testing correction mechanisms as well as the ability to abstain from classifying single pixels or points while still robustly segmenting the overall input. Our experimental evaluation on synthetic data and challenging datasets, such as Pascal Context, Cityscapes, and ShapeNet, shows that our algorithm can achieve, for the first time, competitive accuracy and certification guarantees on real-world segmentation tasks. We provide an implementation at https://github.com/eth-sri/segmentation-smoothing.' volume: 139 URL: https://proceedings.mlr.press/v139/fischer21a.html PDF: http://proceedings.mlr.press/v139/fischer21a/fischer21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fischer21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Marc family: Fischer - given: Maximilian family: Baader - given: Martin family: Vechev editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3340-3351 id: fischer21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3340 lastpage: 3351 published: 2021-07-01 00:00:00 +0000 - title: 'What’s in the Box? Exploring the Inner Life of Neural Networks with Robust Rules' abstract: 'We propose a novel method for exploring how neurons within neural networks interact. In particular, we consider activation values of a network for given data, and propose to mine noise-robust rules of the form $X \rightarrow Y$, where X and Y are sets of neurons in different layers. We identify the best set of rules by the Minimum Description Length Principle as the rules that together are most descriptive of the activation data. To learn good rule sets in practice, we propose the unsupervised ExplaiNN algorithm.
Extensive evaluation shows that the patterns it discovers give clear insight in how networks perceive the world: they identify shared, respectively class-specific traits, compositionality within the network, as well as locality in convolutional layers. Moreover, these patterns are not only easily interpretable, but also supercharge prototyping as they identify which groups of neurons to consider in unison.' volume: 139 URL: https://proceedings.mlr.press/v139/fischer21b.html PDF: http://proceedings.mlr.press/v139/fischer21b/fischer21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fischer21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jonas family: Fischer - given: Anna family: Olah - given: Jilles family: Vreeken editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3352-3362 id: fischer21b issued: date-parts: - 2021 - 7 - 1 firstpage: 3352 lastpage: 3362 published: 2021-07-01 00:00:00 +0000 - title: 'Online Learning with Optimism and Delay' abstract: 'Inspired by the demands of real-time climate and weather forecasting, we develop optimistic online learning algorithms that require no parameter tuning and have optimal regret guarantees under delayed feedback. Our algorithms—DORM, DORM+, and AdaHedgeD—arise from a novel reduction of delayed online learning to optimistic online learning that reveals how optimistic hints can mitigate the regret penalty caused by delay. We pair this delay-as-optimism perspective with a new analysis of optimistic learning that exposes its robustness to hinting errors and a new meta-algorithm for learning effective hinting strategies in the presence of delay. We conclude by benchmarking our algorithms on four subseasonal climate forecasting tasks, demonstrating low regret relative to state-of-the-art forecasting models.' volume: 139 URL: https://proceedings.mlr.press/v139/flaspohler21a.html PDF: http://proceedings.mlr.press/v139/flaspohler21a/flaspohler21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-flaspohler21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Genevieve E family: Flaspohler - given: Francesco family: Orabona - given: Judah family: Cohen - given: Soukayna family: Mouatadid - given: Miruna family: Oprescu - given: Paulo family: Orenstein - given: Lester family: Mackey editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3363-3373 id: flaspohler21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3363 lastpage: 3373 published: 2021-07-01 00:00:00 +0000 - title: 'Online A-Optimal Design and Active Linear Regression' abstract: 'We consider in this paper the problem of optimal experiment design where a decision maker can choose which points to sample to obtain an estimate $\hat{\beta}$ of the hidden parameter $\beta^{\star}$ of an underlying linear model. The key challenge of this work lies in the heteroscedasticity assumption that we make, meaning that each covariate has a different and unknown variance. 
The goal of the decision maker is then to figure out on the fly the optimal way to allocate the total budget of $T$ samples between covariates, as sampling several times a specific one will reduce the variance of the estimated model around it (but at the cost of a possible higher variance elsewhere). By trying to minimize the $\ell^2$-loss $\mathbb{E} [\lVert\hat{\beta}-\beta^{\star}\rVert^2]$ the decision maker is actually minimizing the trace of the covariance matrix of the problem, which corresponds then to online A-optimal design. Combining techniques from bandit and convex optimization we propose a new active sampling algorithm and we compare it with existing ones. We provide theoretical guarantees of this algorithm in different settings, including a $\mathcal{O}(T^{-2})$ regret bound in the case where the covariates form a basis of the feature space, generalizing and improving existing results. Numerical experiments validate our theoretical findings.' volume: 139 URL: https://proceedings.mlr.press/v139/fontaine21a.html PDF: http://proceedings.mlr.press/v139/fontaine21a/fontaine21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fontaine21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xavier family: Fontaine - given: Pierre family: Perrault - given: Michal family: Valko - given: Vianney family: Perchet editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3374-3383 id: fontaine21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3374 lastpage: 3383 published: 2021-07-01 00:00:00 +0000 - title: 'Deep Adaptive Design: Amortizing Sequential Bayesian Experimental Design' abstract: 'We introduce Deep Adaptive Design (DAD), a method for amortizing the cost of adaptive Bayesian experimental design that allows experiments to be run in real-time. Traditional sequential Bayesian optimal experimental design approaches require substantial computation at each stage of the experiment. This makes them unsuitable for most real-world applications, where decisions must typically be made quickly. DAD addresses this restriction by learning an amortized design network upfront and then using this to rapidly run (multiple) adaptive experiments at deployment time. This network represents a design policy which takes as input the data from previous steps, and outputs the next design using a single forward pass; these design decisions can be made in milliseconds during the live experiment. To train the network, we introduce contrastive information bounds that are suitable objectives for the sequential setting, and propose a customized network architecture that exploits key symmetries. We demonstrate that DAD successfully amortizes the process of experimental design, outperforming alternative strategies on a number of problems.' 
volume: 139 URL: https://proceedings.mlr.press/v139/foster21a.html PDF: http://proceedings.mlr.press/v139/foster21a/foster21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-foster21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Adam family: Foster - given: Desi R family: Ivanova - given: Ilyas family: Malik - given: Tom family: Rainforth editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3384-3395 id: foster21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3384 lastpage: 3395 published: 2021-07-01 00:00:00 +0000 - title: 'Efficient Online Learning for Dynamic k-Clustering' abstract: 'In this work, we study dynamic clustering problems from the perspective of online learning. We consider an online learning problem, called \textit{Dynamic $k$-Clustering}, in which $k$ centers are maintained in a metric space over time (centers may change positions) such that a dynamically changing set of $r$ clients is served in the best possible way. The connection cost at round $t$ is given by the \textit{$p$-norm} of the vector formed by the distance of each client to its closest center at round $t$, for some $p\geq 1$. We design a \textit{$\Theta\left( \min(k,r) \right)$-regret} polynomial-time online learning algorithm, while we show that, under some well-established computational complexity conjectures, \textit{constant-regret} cannot be achieved in polynomial time. In addition to the efficient solution of Dynamic $k$-Clustering, our work contributes to the long line of research on combinatorial online learning.' volume: 139 URL: https://proceedings.mlr.press/v139/fotakis21a.html PDF: http://proceedings.mlr.press/v139/fotakis21a/fotakis21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fotakis21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dimitris family: Fotakis - given: Georgios family: Piliouras - given: Stratis family: Skoulakis editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3396-3406 id: fotakis21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3396 lastpage: 3406 published: 2021-07-01 00:00:00 +0000 - title: 'Clustered Sampling: Low-Variance and Improved Representativity for Clients Selection in Federated Learning' abstract: 'This work addresses the problem of optimizing communications between server and clients in federated learning (FL). Current sampling approaches in FL are either biased or not optimal in terms of server-clients communications and training stability. To overcome this issue, we introduce clustered sampling for client selection. We prove that clustered sampling leads to better client representativity and to reduced variance of the clients’ stochastic aggregation weights in FL. Compatibly with our theory, we provide two different clustering approaches enabling client aggregation based on 1) sample size and 2) model similarity. Through a series of experiments in non-iid and unbalanced scenarios, we demonstrate that model aggregation through clustered sampling consistently leads to better training convergence and variability when compared to standard sampling approaches. Our approach does not require any additional operation on the client side, and can be seamlessly integrated in standard FL implementations.
Finally, clustered sampling is compatible with existing methods and technologies for privacy enhancement, and for communication reduction through model compression.' volume: 139 URL: https://proceedings.mlr.press/v139/fraboni21a.html PDF: http://proceedings.mlr.press/v139/fraboni21a/fraboni21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fraboni21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yann family: Fraboni - given: Richard family: Vidal - given: Laetitia family: Kameni - given: Marco family: Lorenzi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3407-3416 id: fraboni21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3407 lastpage: 3416 published: 2021-07-01 00:00:00 +0000 - title: 'Agnostic Learning of Halfspaces with Gradient Descent via Soft Margins' abstract: 'We analyze the properties of gradient descent on convex surrogates for the zero-one loss for the agnostic learning of halfspaces. We show that when a quantity we refer to as the \textit{soft margin} is well-behaved—a condition satisfied by log-concave isotropic distributions among others—minimizers of convex surrogates for the zero-one loss are approximate minimizers for the zero-one loss itself. As standard convex optimization arguments lead to efficient guarantees for minimizing convex surrogates of the zero-one loss, our methods allow for the first positive guarantees for the classification error of halfspaces learned by gradient descent using the binary cross-entropy or hinge loss in the presence of agnostic label noise.' volume: 139 URL: https://proceedings.mlr.press/v139/frei21a.html PDF: http://proceedings.mlr.press/v139/frei21a/frei21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-frei21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Spencer family: Frei - given: Yuan family: Cao - given: Quanquan family: Gu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3417-3426 id: frei21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3417 lastpage: 3426 published: 2021-07-01 00:00:00 +0000 - title: 'Provable Generalization of SGD-trained Neural Networks of Any Width in the Presence of Adversarial Label Noise' abstract: 'We consider a one-hidden-layer leaky ReLU network of arbitrary width trained by stochastic gradient descent (SGD) following an arbitrary initialization. We prove that SGD produces neural networks that have classification accuracy competitive with that of the best halfspace over the distribution for a broad class of distributions that includes log-concave isotropic and hard margin distributions. Equivalently, such networks can generalize when the data distribution is linearly separable but corrupted with adversarial label noise, despite the capacity to overfit. To the best of our knowledge, this is the first work to show that overparameterized neural networks trained by SGD can generalize when the data is corrupted with adversarial label noise.' 
volume: 139 URL: https://proceedings.mlr.press/v139/frei21b.html PDF: http://proceedings.mlr.press/v139/frei21b/frei21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-frei21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Spencer family: Frei - given: Yuan family: Cao - given: Quanquan family: Gu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3427-3438 id: frei21b issued: date-parts: - 2021 - 7 - 1 firstpage: 3427 lastpage: 3438 published: 2021-07-01 00:00:00 +0000 - title: 'Post-selection inference with HSIC-Lasso' abstract: 'Detecting influential features in non-linear and/or high-dimensional data is a challenging and increasingly important task in machine learning. Variable selection methods have thus been gaining much attention as well as post-selection inference. Indeed, the selected features can be significantly flawed when the selection procedure is not accounted for. We propose a selective inference procedure using the so-called model-free "HSIC-Lasso" based on the framework of truncated Gaussians combined with the polyhedral lemma. We then develop an algorithm, which allows for low computational costs and provides a selection of the regularisation parameter. The performance of our method is illustrated by both artificial and real-world data based experiments, which emphasise a tight control of the type-I error, even for small sample sizes.' volume: 139 URL: https://proceedings.mlr.press/v139/freidling21a.html PDF: http://proceedings.mlr.press/v139/freidling21a/freidling21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-freidling21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tobias family: Freidling - given: Benjamin family: Poignard - given: Héctor family: Climente-González - given: Makoto family: Yamada editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3439-3448 id: freidling21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3439 lastpage: 3448 published: 2021-07-01 00:00:00 +0000 - title: 'Variational Data Assimilation with a Learned Inverse Observation Operator' abstract: 'Variational data assimilation optimizes for an initial state of a dynamical system such that its evolution fits observational data. The physical model can subsequently be evolved into the future to make predictions. This principle is a cornerstone of large scale forecasting applications such as numerical weather prediction. As such, it is implemented in current operational systems of weather forecasting agencies across the globe. However, finding a good initial state poses a difficult optimization problem in part due to the non-invertible relationship between physical states and their corresponding observations. We learn a mapping from observational data to physical states and show how it can be used to improve optimizability. We employ this mapping in two ways: to better initialize the non-convex optimization problem, and to reformulate the objective function in better behaved physics space instead of observation space. Our experimental results for the Lorenz96 model and a two-dimensional turbulent fluid flow demonstrate that this procedure significantly improves forecast quality for chaotic systems.' 
volume: 139 URL: https://proceedings.mlr.press/v139/frerix21a.html PDF: http://proceedings.mlr.press/v139/frerix21a/frerix21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-frerix21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Thomas family: Frerix - given: Dmitrii family: Kochkov - given: Jamie family: Smith - given: Daniel family: Cremers - given: Michael family: Brenner - given: Stephan family: Hoyer editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3449-3458 id: frerix21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3449 lastpage: 3458 published: 2021-07-01 00:00:00 +0000 - title: 'Bayesian Quadrature on Riemannian Data Manifolds' abstract: 'Riemannian manifolds provide a principled way to model nonlinear geometric structure inherent in data. A Riemannian metric on said manifolds determines geometry-aware shortest paths and provides the means to define statistical models accordingly. However, these operations are typically computationally demanding. To ease this computational burden, we advocate probabilistic numerical methods for Riemannian statistics. In particular, we focus on Bayesian quadrature (BQ) to numerically compute integrals over normal laws on Riemannian manifolds learned from data. In this task, each function evaluation relies on the solution of an expensive initial value problem. We show that by leveraging both prior knowledge and an active exploration scheme, BQ significantly reduces the number of required evaluations and thus outperforms Monte Carlo methods on a wide range of integration problems. As a concrete application, we highlight the merits of adopting Riemannian geometry with our proposed framework on a nonlinear dataset from molecular dynamics.' volume: 139 URL: https://proceedings.mlr.press/v139/frohlich21a.html PDF: http://proceedings.mlr.press/v139/frohlich21a/frohlich21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-frohlich21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Christian family: Fröhlich - given: Alexandra family: Gessner - given: Philipp family: Hennig - given: Bernhard family: Schölkopf - given: Georgios family: Arvanitidis editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3459-3468 id: frohlich21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3459 lastpage: 3468 published: 2021-07-01 00:00:00 +0000 - title: 'Learn-to-Share: A Hardware-friendly Transfer Learning Framework Exploiting Computation and Parameter Sharing' abstract: 'Task-specific fine-tuning on pre-trained transformers has achieved performance breakthroughs in multiple NLP tasks. Yet, as both computation and parameter size grows linearly with the number of sub-tasks, it is increasingly difficult to adopt such methods to the real world due to unrealistic memory and computation overhead on computing devices. Previous works on fine-tuning focus on reducing the growing parameter size to save storage cost by parameter sharing. However, compared to storage, the constraint of computation is a more critical issue with the fine-tuning models in modern computing environments. In this work, we propose LeTS, a framework that leverages both computation and parameter sharing across multiple tasks. 
Compared to traditional fine-tuning, LeTS proposes a novel neural architecture that contains a fixed pre-trained transformer model, plus learnable additive components for sub-tasks. The learnable components reuse the intermediate activations in the fixed pre-trained model, decoupling computation dependency. Differentiable neural architecture search is used to determine a task-specific computation sharing scheme, and a novel early stage pruning is applied to additive components for sparsity to achieve parameter sharing. Extensive experiments show that with 1.4% of extra parameters per task, LeTS reduces the computation by 49.5% on GLUE benchmarks with only 0.2% accuracy loss compared to full fine-tuning.' volume: 139 URL: https://proceedings.mlr.press/v139/fu21a.html PDF: http://proceedings.mlr.press/v139/fu21a/fu21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fu21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Cheng family: Fu - given: Hanxian family: Huang - given: Xinyun family: Chen - given: Yuandong family: Tian - given: Jishen family: Zhao editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3469-3479 id: fu21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3469 lastpage: 3479 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Task Informed Abstractions' abstract: 'Current model-based reinforcement learning methods struggle when operating from complex visual scenes due to their inability to prioritize task-relevant features. To mitigate this problem, we propose learning Task Informed Abstractions (TIA) that explicitly separates reward-correlated visual features from distractors. For learning TIA, we introduce the formalism of Task Informed MDP (TiMDP) that is realized by training two models that learn visual features via cooperative reconstruction, but one model is adversarially dissociated from the reward signal. Empirical evaluation shows that TIA leads to significant performance gains over state-of-the-art methods on many visual control tasks where natural and unconstrained visual distractions pose a formidable challenge. Project page: https://xiangfu.co/tia' volume: 139 URL: https://proceedings.mlr.press/v139/fu21b.html PDF: http://proceedings.mlr.press/v139/fu21b/fu21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fu21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xiang family: Fu - given: Ge family: Yang - given: Pulkit family: Agrawal - given: Tommi family: Jaakkola editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3480-3491 id: fu21b issued: date-parts: - 2021 - 7 - 1 firstpage: 3480 lastpage: 3491 published: 2021-07-01 00:00:00 +0000 - title: 'Double-Win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inference' abstract: 'Quantization is promising in enabling powerful yet complex deep neural networks (DNNs) to be deployed into resource constrained platforms. However, quantized DNNs are vulnerable to adversarial attacks unless being equipped with sophisticated techniques, leading to a dilemma of struggling between DNNs’ efficiency and robustness. 
In this work, we demonstrate a new perspective regarding quantization’s role in DNNs’ robustness, advocating that quantization can be leveraged to largely boost DNNs’ robustness, and propose a framework dubbed Double-Win Quant that can boost the robustness of quantized DNNs over their full precision counterparts by a large margin. Specifically, we for the first time identify that when an adversarially trained model is quantized to different precisions in a post-training manner, the associated adversarial attacks transfer poorly between different precisions. Leveraging this intriguing observation, we further develop Double-Win Quant integrating random precision inference and training to further reduce and utilize the poor adversarial transferability, enabling an aggressive “win-win" in terms of DNNs’ robustness and efficiency. Extensive experiments and ablation studies consistently validate Double-Win Quant’s effectiveness and advantages over state-of-the-art (SOTA) adversarial training methods across various attacks/models/datasets. Our codes are available at: https://github.com/RICE-EIC/Double-Win-Quant.' volume: 139 URL: https://proceedings.mlr.press/v139/fu21c.html PDF: http://proceedings.mlr.press/v139/fu21c/fu21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fu21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yonggan family: Fu - given: Qixuan family: Yu - given: Meng family: Li - given: Vikas family: Chandra - given: Yingyan family: Lin editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3492-3504 id: fu21c issued: date-parts: - 2021 - 7 - 1 firstpage: 3492 lastpage: 3504 published: 2021-07-01 00:00:00 +0000 - title: 'Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators' abstract: 'While maximizing deep neural networks’ (DNNs’) acceleration efficiency requires a joint search/design of three different yet highly coupled aspects, including the networks, bitwidths, and accelerators, the challenges associated with such a joint search have not yet been fully understood and addressed. The key challenges include (1) the dilemma of whether to explode the memory consumption due to the huge joint space or achieve sub-optimal designs, (2) the discrete nature of the accelerator design space that is coupled yet different from that of the networks and bitwidths, and (3) the chicken and egg problem associated with network-accelerator co-search, i.e., co-search requires operation-wise hardware cost, which is lacking during search as the optimal accelerator depending on the whole network is still unknown during search. To tackle these daunting challenges towards optimal and fast development of DNN accelerators, we propose a framework dubbed Auto-NBA to enable jointly searching for the Networks, Bitwidths, and Accelerators, by efficiently localizing the optimal design within the huge joint design space for each target dataset and acceleration specification. Our Auto-NBA integrates a heterogeneous sampling strategy to achieve unbiased search with constant memory consumption, and a novel joint-search pipeline equipped with a generic differentiable accelerator search engine. 
Extensive experiments and ablation studies validate that both Auto-NBA generated networks and accelerators consistently outperform state-of-the-art designs (including co-search/exploration techniques, hardware-aware NAS methods, and DNN accelerators), in terms of search time, task accuracy, and accelerator efficiency. Our codes are available at: https://github.com/RICE-EIC/Auto-NBA.' volume: 139 URL: https://proceedings.mlr.press/v139/fu21d.html PDF: http://proceedings.mlr.press/v139/fu21d/fu21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fu21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yonggan family: Fu - given: Yongan family: Zhang - given: Yang family: Zhang - given: David family: Cox - given: Yingyan family: Lin editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3505-3517 id: fu21d issued: date-parts: - 2021 - 7 - 1 firstpage: 3505 lastpage: 3517 published: 2021-07-01 00:00:00 +0000 - title: 'A Deep Reinforcement Learning Approach to Marginalized Importance Sampling with the Successor Representation' abstract: 'Marginalized importance sampling (MIS), which measures the density ratio between the state-action occupancy of a target policy and that of a sampling distribution, is a promising approach for off-policy evaluation. However, current state-of-the-art MIS methods rely on complex optimization tricks and succeed mostly on simple toy problems. We bridge the gap between MIS and deep reinforcement learning by observing that the density ratio can be computed from the successor representation of the target policy. The successor representation can be trained through deep reinforcement learning methodology and decouples the reward optimization from the dynamics of the environment, making the resulting algorithm stable and applicable to high-dimensional domains. We evaluate the empirical performance of our approach on a variety of challenging Atari and MuJoCo environments.' volume: 139 URL: https://proceedings.mlr.press/v139/fujimoto21a.html PDF: http://proceedings.mlr.press/v139/fujimoto21a/fujimoto21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fujimoto21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Scott family: Fujimoto - given: David family: Meger - given: Doina family: Precup editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3518-3529 id: fujimoto21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3518 lastpage: 3529 published: 2021-07-01 00:00:00 +0000 - title: 'Learning disentangled representations via product manifold projection' abstract: 'We propose a novel approach to disentangle the generative factors of variation underlying a given set of observations. Our method builds upon the idea that the (unknown) low-dimensional manifold underlying the data space can be explicitly modeled as a product of submanifolds. This definition of disentanglement gives rise to a novel weakly-supervised algorithm for recovering the unknown explanatory factors behind the data. At training time, our algorithm only requires pairs of non i.i.d. data samples whose elements share at least one, possibly multidimensional, generative factor of variation. 
We require no knowledge on the nature of these transformations, and do not make any limiting assumption on the properties of each subspace. Our approach is easy to implement, and can be successfully applied to different kinds of data (from images to 3D surfaces) undergoing arbitrary transformations. In addition to standard synthetic benchmarks, we showcase our method in challenging real-world applications, where we compare favorably with the state of the art.' volume: 139 URL: https://proceedings.mlr.press/v139/fumero21a.html PDF: http://proceedings.mlr.press/v139/fumero21a/fumero21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-fumero21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Marco family: Fumero - given: Luca family: Cosmo - given: Simone family: Melzi - given: Emanuele family: Rodola editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3530-3540 id: fumero21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3530 lastpage: 3540 published: 2021-07-01 00:00:00 +0000 - title: 'Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning' abstract: 'Progress in deep reinforcement learning (RL) research is largely enabled by benchmark task environments. However, analyzing the nature of those environments is often overlooked. In particular, we still do not have agreeable ways to measure the difficulty or solvability of a task, given that each has fundamentally different actions, observations, dynamics, rewards, and can be tackled with diverse RL algorithms. In this work, we propose policy information capacity (PIC) – the mutual information between policy parameters and episodic return – and policy-optimal information capacity (POIC) – between policy parameters and episodic optimality – as two environment-agnostic, algorithm-agnostic quantitative metrics for task difficulty. Evaluating our metrics across toy environments as well as continuous control benchmark tasks from OpenAI Gym and DeepMind Control Suite, we empirically demonstrate that these information-theoretic metrics have higher correlations with normalized task solvability scores than a variety of alternatives. Lastly, we show that these metrics can also be used for fast and compute-efficient optimizations of key design parameters such as reward shaping, policy architectures, and MDP properties for better solvability by RL algorithms without ever running full RL experiments.' 
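The policy-optimal information capacity described above reduces, when optimality is defined as a thresholded episodic return, to a mutual information between sampled policy parameters and a binary optimality indicator. A minimal Monte Carlo plug-in estimate of that quantity might look as follows; `sample_params`, `episode_return`, and `threshold` are hypothetical user-supplied pieces, and this is a sketch of the idea rather than the authors' implementation.

```python
import numpy as np

def bernoulli_entropy(p):
    """Entropy (in nats) of a Bernoulli variable with success probability p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def poic_estimate(sample_params, episode_return, threshold,
                  num_params=100, episodes_per_param=20):
    """Plug-in estimate of I(policy parameters; episode optimality).
    An episode counts as 'optimal' when its return exceeds `threshold`.
    `sample_params` draws parameters from a prior; `episode_return` runs one
    stochastic episode with those parameters. Both callables are hypothetical
    placeholders supplied by the user."""
    p_opt = []
    for _ in range(num_params):
        theta = sample_params()
        returns = np.array([episode_return(theta) for _ in range(episodes_per_param)])
        p_opt.append((returns > threshold).mean())
    p_opt = np.asarray(p_opt)
    # I(Theta; O) = H(O) - E_theta[H(O | theta)], both terms estimated empirically.
    return bernoulli_entropy(p_opt.mean()) - bernoulli_entropy(p_opt).mean()
```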
volume: 139 URL: https://proceedings.mlr.press/v139/furuta21a.html PDF: http://proceedings.mlr.press/v139/furuta21a/furuta21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-furuta21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hiroki family: Furuta - given: Tatsuya family: Matsushima - given: Tadashi family: Kozuno - given: Yutaka family: Matsuo - given: Sergey family: Levine - given: Ofir family: Nachum - given: Shixiang Shane family: Gu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3541-3552 id: furuta21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3541 lastpage: 3552 published: 2021-07-01 00:00:00 +0000 - title: 'An Information-Geometric Distance on the Space of Tasks' abstract: 'This paper prescribes a distance between learning tasks modeled as joint distributions on data and labels. Using tools in information geometry, the distance is defined to be the length of the shortest weight trajectory on a Riemannian manifold as a classifier is fitted on an interpolated task. The interpolated task evolves from the source to the target task using an optimal transport formulation. This distance, which we call the "coupled transfer distance" can be compared across different classifier architectures. We develop an algorithm to compute the distance which iteratively transports the marginal on the data of the source task to that of the target task while updating the weights of the classifier to track this evolving data distribution. We develop theory to show that our distance captures the intuitive idea that a good transfer trajectory is the one that keeps the generalization gap small during transfer, in particular at the end on the target task. We perform thorough empirical validation and analysis across diverse image classification datasets to show that the coupled transfer distance correlates strongly with the difficulty of fine-tuning.' volume: 139 URL: https://proceedings.mlr.press/v139/gao21a.html PDF: http://proceedings.mlr.press/v139/gao21a/gao21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gao21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yansong family: Gao - given: Pratik family: Chaudhari editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3553-3563 id: gao21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3553 lastpage: 3563 published: 2021-07-01 00:00:00 +0000 - title: 'Maximum Mean Discrepancy Test is Aware of Adversarial Attacks' abstract: 'The maximum mean discrepancy (MMD) test could in principle detect any distributional discrepancy between two datasets. However, it has been shown that the MMD test is unaware of adversarial attacks–the MMD test failed to detect the discrepancy between natural data and adversarial data. Given this phenomenon, we raise a question: are natural and adversarial data really from different distributions? The answer is affirmative–the previous use of the MMD test on the purpose missed three key factors, and accordingly, we propose three components. Firstly, the Gaussian kernel has limited representation power, and we replace it with an effective deep kernel. Secondly, the test power of the MMD test was neglected, and we maximize it following asymptotic statistics. 
Finally, adversarial data may be non-independent, and we overcome this issue with the help of wild bootstrap. By taking care of the three factors, we verify that the MMD test is aware of adversarial attacks, which lights up a novel road for adversarial data detection based on two-sample tests.' volume: 139 URL: https://proceedings.mlr.press/v139/gao21b.html PDF: http://proceedings.mlr.press/v139/gao21b/gao21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gao21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ruize family: Gao - given: Feng family: Liu - given: Jingfeng family: Zhang - given: Bo family: Han - given: Tongliang family: Liu - given: Gang family: Niu - given: Masashi family: Sugiyama editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3564-3575 id: gao21b issued: date-parts: - 2021 - 7 - 1 firstpage: 3564 lastpage: 3575 published: 2021-07-01 00:00:00 +0000 - title: 'Unsupervised Co-part Segmentation through Assembly' abstract: 'Co-part segmentation is an important problem in computer vision for its rich applications. We propose an unsupervised learning approach for co-part segmentation from images. For the training stage, we leverage motion information embedded in videos and explicitly extract latent representations to segment meaningful object parts. More importantly, we introduce a dual procedure of part-assembly to form a closed loop with part-segmentation, enabling an effective self-supervision. We demonstrate the effectiveness of our approach with a host of extensive experiments, ranging from human bodies, hands, quadruped, and robot arms. We show that our approach can achieve meaningful and compact part segmentation, outperforming state-of-the-art approaches on diverse benchmarks.' volume: 139 URL: https://proceedings.mlr.press/v139/gao21c.html PDF: http://proceedings.mlr.press/v139/gao21c/gao21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gao21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Qingzhe family: Gao - given: Bin family: Wang - given: Libin family: Liu - given: Baoquan family: Chen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3576-3586 id: gao21c issued: date-parts: - 2021 - 7 - 1 firstpage: 3576 lastpage: 3586 published: 2021-07-01 00:00:00 +0000 - title: 'Discriminative Complementary-Label Learning with Weighted Loss' abstract: 'Complementary-label learning (CLL) deals with the weak supervision scenario where each training instance is associated with one \emph{complementary} label, which specifies the class label that the instance does \emph{not} belong to. Given the training instance ${\bm x}$, existing CLL approaches aim at modeling the \emph{generative} relationship between the complementary label $\bar y$, i.e. $P(\bar y\mid {\bm x})$, and the ground-truth label $y$, i.e. $P(y\mid {\bm x})$. Nonetheless, as the ground-truth label is not directly accessible for complementarily labeled training instance, strong generative assumptions may not hold for real-world CLL tasks. 
In this paper, we derive a simple and theoretically-sound \emph{discriminative} model towards $P(\bar y\mid {\bm x})$, which naturally leads to a risk estimator with estimation error bound at $\mathcal{O}(1/\sqrt{n})$ convergence rate. Accordingly, a practical CLL approach is proposed by further introducing weighted loss to the empirical risk to maximize the predictive gap between potential ground-truth label and complementary label. Extensive experiments clearly validate the effectiveness of the proposed discriminative complementary-label learning approach.' volume: 139 URL: https://proceedings.mlr.press/v139/gao21d.html PDF: http://proceedings.mlr.press/v139/gao21d/gao21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gao21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yi family: Gao - given: Min-Ling family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3587-3597 id: gao21d issued: date-parts: - 2021 - 7 - 1 firstpage: 3587 lastpage: 3597 published: 2021-07-01 00:00:00 +0000 - title: 'RATT: Leveraging Unlabeled Data to Guarantee Generalization' abstract: 'To assess generalization, machine learning scientists typically either (i) bound the generalization gap and then (after training) plug in the empirical risk to obtain a bound on the true risk; or (ii) validate empirically on holdout data. However, (i) typically yields vacuous guarantees for overparameterized models; and (ii) shrinks the training set and its guarantee erodes with each re-use of the holdout set. In this paper, we leverage unlabeled data to produce generalization bounds. After augmenting our (labeled) training set with randomly labeled data, we train in the standard fashion. Whenever classifiers achieve low error on the clean data but high error on the random data, our bound ensures that the true risk is low. We prove that our bound is valid for 0-1 empirical risk minimization and with linear classifiers trained by gradient descent. Our approach is especially useful in conjunction with deep learning due to the early learning phenomenon whereby networks fit true labels before noisy labels but requires one intuitive assumption. Empirically, on canonical computer vision and NLP tasks, our bound provides non-vacuous generalization guarantees that track actual performance closely. This work enables practitioners to certify generalization even when (labeled) holdout data is unavailable and provides insights into the relationship between random label noise and generalization.' 
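The recipe in the RATT abstract above (train on clean data mixed with randomly labeled data, then compare the error on the two parts) can be sketched in a few lines. The generalization bound itself is not reproduced here, and the logistic-regression model and `num_classes` argument are illustrative choices only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ratt_style_check(X_clean, y_clean, X_unlabeled, num_classes, seed=0):
    """Assign uniformly random labels to unlabeled data, train on the union of
    clean and randomly labeled examples, then report error on both parts.
    Low clean error together with high error on the random part is the signal
    the bound above exploits. X_clean and X_unlabeled are 2D feature matrices."""
    rng = np.random.default_rng(seed)
    y_random = rng.integers(num_classes, size=len(X_unlabeled))
    X_all = np.vstack([X_clean, X_unlabeled])
    y_all = np.concatenate([y_clean, y_random])
    clf = LogisticRegression(max_iter=1000).fit(X_all, y_all)
    clean_err = 1.0 - clf.score(X_clean, y_clean)
    random_err = 1.0 - clf.score(X_unlabeled, y_random)
    return clean_err, random_err
```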
volume: 139 URL: https://proceedings.mlr.press/v139/garg21a.html PDF: http://proceedings.mlr.press/v139/garg21a/garg21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-garg21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Saurabh family: Garg - given: Sivaraman family: Balakrishnan - given: Zico family: Kolter - given: Zachary family: Lipton editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3598-3609 id: garg21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3598 lastpage: 3609 published: 2021-07-01 00:00:00 +0000 - title: 'On Proximal Policy Optimization’s Heavy-tailed Gradients' abstract: 'Modern policy gradient algorithms such as Proximal Policy Optimization (PPO) rely on an arsenal of heuristics, including loss clipping and gradient clipping, to ensure successful learning. These heuristics are reminiscent of techniques from robust statistics, commonly used for estimation in outlier-rich ("heavy-tailed") regimes. In this paper, we present a detailed empirical study to characterize the heavy-tailed nature of the gradients of the PPO surrogate reward function. We demonstrate that the gradients, especially for the actor network, exhibit pronounced heavy-tailedness and that it increases as the agent’s policy diverges from the behavioral policy (i.e., as the agent goes further off policy). Further examination implicates the likelihood ratios and advantages in the surrogate reward as the main sources of the observed heavy-tailedness. We then highlight issues arising due to the heavy-tailed nature of the gradients. In this light, we study the effects of the standard PPO clipping heuristics, demonstrating that these tricks primarily serve to offset heavy-tailedness in gradients. Thus motivated, we propose incorporating GMOM, a high-dimensional robust estimator, into PPO as a substitute for three clipping tricks. Despite requiring less hyperparameter tuning, our method matches the performance of PPO (with all heuristics enabled) on a battery of MuJoCo continuous control tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/garg21b.html PDF: http://proceedings.mlr.press/v139/garg21b/garg21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-garg21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Saurabh family: Garg - given: Joshua family: Zhanson - given: Emilio family: Parisotto - given: Adarsh family: Prasad - given: Zico family: Kolter - given: Zachary family: Lipton - given: Sivaraman family: Balakrishnan - given: Ruslan family: Salakhutdinov - given: Pradeep family: Ravikumar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3610-3619 id: garg21b issued: date-parts: - 2021 - 7 - 1 firstpage: 3610 lastpage: 3619 published: 2021-07-01 00:00:00 +0000 - title: 'What does LIME really see in images?' abstract: 'The performance of modern algorithms on certain computer vision tasks such as object recognition is now close to that of humans. This success was achieved at the price of complicated architectures depending on millions of parameters and it has become quite challenging to understand how particular predictions are made. Interpretability methods propose to give us this understanding. 
In this paper, we study LIME, perhaps one of the most popular. On the theoretical side, we show that when the number of generated examples is large, LIME explanations are concentrated around a limit explanation for which we give an explicit expression. We further this study for elementary shape detectors and linear models. As a consequence of this analysis, we uncover a connection between LIME and integrated gradients, another explanation method. More precisely, the LIME explanations are similar to the sum of integrated gradients over the superpixels used in the preprocessing step of LIME.' volume: 139 URL: https://proceedings.mlr.press/v139/garreau21a.html PDF: http://proceedings.mlr.press/v139/garreau21a/garreau21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-garreau21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Damien family: Garreau - given: Dina family: Mardaoui editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3620-3629 id: garreau21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3620 lastpage: 3629 published: 2021-07-01 00:00:00 +0000 - title: 'Parametric Graph for Unimodal Ranking Bandit' abstract: 'We tackle the online ranking problem of assigning $L$ items to $K$ positions on a web page in order to maximize the number of user clicks. We propose an original algorithm, easy to implement and with strong theoretical guarantees to tackle this problem in the Position-Based Model (PBM) setting, well suited for applications where items are displayed on a grid. Besides learning to rank, our algorithm, GRAB (for parametric Graph for unimodal RAnking Bandit), also learns the parameter of a compact graph over permutations of $K$ items among $L$. The logarithmic regret bound of this algorithm is a direct consequence of the unimodality property of the bandit setting with respect to the learned graph. Experiments against state-of-the-art learning algorithms which also tackle the PBM setting, show that our method is more efficient while giving regret performance on par with the best known algorithms on simulated and real life datasets.' volume: 139 URL: https://proceedings.mlr.press/v139/gauthier21a.html PDF: http://proceedings.mlr.press/v139/gauthier21a/gauthier21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gauthier21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Camille-Sovanneary family: Gauthier - given: Romaric family: Gaudel - given: Elisa family: Fromont - given: Boammani Aser family: Lompo editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3630-3639 id: gauthier21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3630 lastpage: 3639 published: 2021-07-01 00:00:00 +0000 - title: 'Let’s Agree to Degree: Comparing Graph Convolutional Networks in the Message-Passing Framework' abstract: 'In this paper we cast neural networks defined on graphs as message-passing neural networks (MPNNs) to study the distinguishing power of different classes of such models. We are interested in when certain architectures are able to tell vertices apart based on the feature labels given as input with the graph. 
We consider two variants of MPNNs: anonymous MPNNs whose message functions depend only on the labels of vertices involved; and degree-aware MPNNs whose message functions can additionally use information regarding the degree of vertices. The former class covers popular graph neural network (GNN) formalisms for which the distinguishing power is known. The latter covers graph convolutional networks (GCNs), introduced by Kipf and Welling, for which the distinguishing power was unknown. We obtain lower and upper bounds on the distinguishing power of (anonymous and degree-aware) MPNNs in terms of the distinguishing power of the Weisfeiler-Lehman (WL) algorithm. Our main results imply that (i) the distinguishing power of GCNs is bounded by the WL algorithm, but they may be one step ahead; (ii) the WL algorithm cannot be simulated by “plain vanilla” GCNs but the addition of a trade-off parameter between features of the vertex and those of its neighbours (as proposed by Kipf and Welling) resolves this problem.' volume: 139 URL: https://proceedings.mlr.press/v139/geerts21a.html PDF: http://proceedings.mlr.press/v139/geerts21a/geerts21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-geerts21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Floris family: Geerts - given: Filip family: Mazowiecki - given: Guillermo family: Perez editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3640-3649 id: geerts21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3640 lastpage: 3649 published: 2021-07-01 00:00:00 +0000 - title: 'On the difficulty of unbiased alpha divergence minimization' abstract: 'Several approximate inference algorithms have been proposed to minimize an alpha-divergence between an approximating distribution and a target distribution. Many of these algorithms introduce bias, the magnitude of which becomes problematic in high dimensions. Other algorithms are unbiased. These often seem to suffer from high variance, but little is rigorously known. In this work we study unbiased methods for alpha-divergence minimization through the Signal-to-Noise Ratio (SNR) of the gradient estimator. We study several representative scenarios where strong analytical results are possible, such as fully-factorized or Gaussian distributions. We find that when alpha is not zero, the SNR worsens exponentially in the dimensionality of the problem. This casts doubt on the practicality of these methods. We empirically confirm these theoretical results.'
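The diagnostic used in the alpha-divergence study above is the signal-to-noise ratio of a stochastic gradient estimator. Given repeated independent gradient estimates, it can be computed per coordinate as below; this is a generic sketch of the SNR computation, not tied to any particular estimator from the paper.

```python
import numpy as np

def gradient_snr(grad_samples):
    """Per-coordinate signal-to-noise ratio |mean| / std of a stochastic
    gradient estimator, computed over repeated independent estimates.
    `grad_samples` has shape (num_repeats, dim)."""
    grad_samples = np.asarray(grad_samples)
    mean = grad_samples.mean(axis=0)
    std = grad_samples.std(axis=0, ddof=1)
    return np.abs(mean) / np.maximum(std, 1e-12)
```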
volume: 139 URL: https://proceedings.mlr.press/v139/geffner21a.html PDF: http://proceedings.mlr.press/v139/geffner21a/geffner21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-geffner21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tomas family: Geffner - given: Justin family: Domke editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3650-3659 id: geffner21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3650 lastpage: 3659 published: 2021-07-01 00:00:00 +0000 - title: 'How and Why to Use Experimental Data to Evaluate Methods for Observational Causal Inference' abstract: 'Methods that infer causal dependence from observational data are central to many areas of science, including medicine, economics, and the social sciences. A variety of theoretical properties of these methods have been proven, but empirical evaluation remains a challenge, largely due to the lack of observational data sets for which treatment effect is known. We describe and analyze observational sampling from randomized controlled trials (OSRCT), a method for evaluating causal inference methods using data from randomized controlled trials (RCTs). This method can be used to create constructed observational data sets with corresponding unbiased estimates of treatment effect, substantially increasing the number of data sets available for evaluating causal inference methods. We show that, in expectation, OSRCT creates data sets that are equivalent to those produced by randomly sampling from empirical data sets in which all potential outcomes are available. We then perform a large-scale evaluation of seven causal inference methods over 37 data sets, drawn from RCTs, as well as simulators, real-world computational systems, and observational data sets augmented with a synthetic response variable. We find notable performance differences when comparing across data from different sources, demonstrating the importance of using data from a variety of sources when evaluating any causal inference method.' volume: 139 URL: https://proceedings.mlr.press/v139/gentzel21a.html PDF: http://proceedings.mlr.press/v139/gentzel21a/gentzel21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gentzel21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Amanda M family: Gentzel - given: Purva family: Pruthi - given: David family: Jensen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3660-3671 id: gentzel21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3660 lastpage: 3671 published: 2021-07-01 00:00:00 +0000 - title: 'Strategic Classification in the Dark' abstract: 'Strategic classification studies the interaction between a classification rule and the strategic agents it governs. Agents respond by manipulating their features, under the assumption that the classifier is known. However, in many real-life scenarios of high-stake classification (e.g., credit scoring), the classifier is not revealed to the agents, which leads agents to attempt to learn the classifier and game it too. In this paper we generalize the strategic classification model to such scenarios and analyze the effect of an unknown classifier. 
We define the ”price of opacity” as the difference between the prediction error under the opaque and transparent policies, characterize it, and give a sufficient condition for it to be strictly positive, in which case transparency is the recommended policy. Our experiments show how Hardt et al.’s robust classifier is affected by keeping agents in the dark.' volume: 139 URL: https://proceedings.mlr.press/v139/ghalme21a.html PDF: http://proceedings.mlr.press/v139/ghalme21a/ghalme21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ghalme21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ganesh family: Ghalme - given: Vineet family: Nair - given: Itay family: Eilat - given: Inbal family: Talgam-Cohen - given: Nir family: Rosenfeld editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3672-3681 id: ghalme21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3672 lastpage: 3681 published: 2021-07-01 00:00:00 +0000 - title: 'EMaQ: Expected-Max Q-Learning Operator for Simple Yet Effective Offline and Online RL' abstract: 'Off-policy reinforcement learning (RL) holds the promise of sample-efficient learning of decision-making policies by leveraging past experience. However, in the offline RL setting – where a fixed collection of interactions are provided and no further interactions are allowed – it has been shown that standard off-policy RL methods can significantly underperform. In this work, we closely investigate an important simplification of BCQ (Fujimoto et al., 2018) – a prior approach for offline RL – removing a heuristic design choice. Importantly, in contrast to their original theoretical considerations, we derive this simplified algorithm through the introduction of a novel backup operator, Expected-Max Q-Learning (EMaQ), which is more closely related to the resulting practical algorithm. Specifically, in addition to the distribution support, EMaQ explicitly considers the number of samples and the proposal distribution, allowing us to derive new sub-optimality bounds. In the offline RL setting – the main focus of this work – EMaQ matches and outperforms prior state-of-the-art in the D4RL benchmarks (Fu et al., 2020). In the online RL setting, we demonstrate that EMaQ is competitive with Soft Actor Critic (SAC). The key contributions of our empirical findings are demonstrating the importance of careful generative model design for estimating behavior policies, and an intuitive notion of complexity for offline RL problems. With its simple interpretation and fewer moving parts, such as no explicit function approximator representing the policy, EMaQ serves as a strong yet easy to implement baseline for future work.' 
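The Expected-Max Q-Learning backup described above replaces the max over all actions with a max over N actions drawn from a proposal/behaviour policy at the next state. A minimal sketch, assuming hypothetical `q_fn(state, action)` and `propose_action(state)` callables, is:

```python
def emaq_target(reward, next_state, done, q_fn, propose_action,
                n_samples=10, gamma=0.99):
    """Expected-Max Q-learning backup target (sketch): instead of maximising Q
    over all actions, take the max of Q over N actions sampled from a
    proposal / behaviour policy at the next state."""
    if done:
        return reward
    sampled = [propose_action(next_state) for _ in range(n_samples)]
    return reward + gamma * max(q_fn(next_state, a) for a in sampled)
```

Increasing `n_samples` interpolates between behaviour cloning (N = 1) and an unconstrained max over the proposal's support, which is the knob the sub-optimality bounds in the abstract refer to.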
volume: 139 URL: https://proceedings.mlr.press/v139/ghasemipour21a.html PDF: http://proceedings.mlr.press/v139/ghasemipour21a/ghasemipour21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ghasemipour21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Seyed Kamyar Seyed family: Ghasemipour - given: Dale family: Schuurmans - given: Shixiang Shane family: Gu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3682-3691 id: ghasemipour21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3682 lastpage: 3691 published: 2021-07-01 00:00:00 +0000 - title: 'Differentially Private Aggregation in the Shuffle Model: Almost Central Accuracy in Almost a Single Message' abstract: 'The shuffle model of differential privacy has attracted attention in the literature due to it being a middle ground between the well-studied central and local models. In this work, we study the problem of summing (aggregating) real numbers or integers, a basic primitive in numerous machine learning tasks, in the shuffle model. We give a protocol achieving error arbitrarily close to that of the (Discrete) Laplace mechanism in central differential privacy, while each user only sends 1 + o(1) short messages in expectation.' volume: 139 URL: https://proceedings.mlr.press/v139/ghazi21a.html PDF: http://proceedings.mlr.press/v139/ghazi21a/ghazi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ghazi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Badih family: Ghazi - given: Ravi family: Kumar - given: Pasin family: Manurangsi - given: Rasmus family: Pagh - given: Amer family: Sinha editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3692-3701 id: ghazi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3692 lastpage: 3701 published: 2021-07-01 00:00:00 +0000 - title: 'The Power of Adaptivity for Stochastic Submodular Cover' abstract: 'In the stochastic submodular cover problem, the goal is to select a subset of stochastic items of minimum expected cost to cover a submodular function. Solutions in this setting correspond to a sequential decision process that selects items one by one “adaptively” (depending on prior observations). While such adaptive solutions achieve the best objective, the inherently sequential nature makes them undesirable in many applications. We ask: \emph{how well can solutions with only a few adaptive rounds approximate fully-adaptive solutions?} We consider both cases where the stochastic items are independent, and where they are correlated. For both situations, we obtain nearly tight answers, establishing smooth tradeoffs between the number of adaptive rounds and the solution quality, relative to fully adaptive solutions. Experiments on synthetic and real datasets validate the practical performance of our algorithms, showing qualitative improvements in the solutions as we allow more rounds of adaptivity; in practice, solutions using just a few rounds of adaptivity are nearly as good as fully adaptive solutions.' 
volume: 139 URL: https://proceedings.mlr.press/v139/ghuge21a.html PDF: http://proceedings.mlr.press/v139/ghuge21a/ghuge21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ghuge21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Rohan family: Ghuge - given: Anupam family: Gupta - given: Viswanath family: Nagarajan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3702-3712 id: ghuge21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3702 lastpage: 3712 published: 2021-07-01 00:00:00 +0000 - title: 'Differentially Private Quantiles' abstract: 'Quantiles are often used for summarizing and understanding data. If that data is sensitive, it may be necessary to compute quantiles in a way that is differentially private, providing theoretical guarantees that the result does not reveal private information. However, when multiple quantiles are needed, existing differentially private algorithms fare poorly: they either compute quantiles individually, splitting the privacy budget, or summarize the entire distribution, wasting effort. In either case the result is reduced accuracy. In this work we propose an instance of the exponential mechanism that simultaneously estimates exactly $m$ quantiles from $n$ data points while guaranteeing differential privacy. The utility function is carefully structured to allow for an efficient implementation that returns estimates of all $m$ quantiles in time $O(mn\log(n) + m^2n)$. Experiments show that our method significantly outperforms the current state of the art on both real and synthetic data while remaining efficient enough to be practical.' volume: 139 URL: https://proceedings.mlr.press/v139/gillenwater21a.html PDF: http://proceedings.mlr.press/v139/gillenwater21a/gillenwater21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gillenwater21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jennifer family: Gillenwater - given: Matthew family: Joseph - given: Alex family: Kulesza editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3713-3722 id: gillenwater21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3713 lastpage: 3722 published: 2021-07-01 00:00:00 +0000 - title: 'Query Complexity of Adversarial Attacks' abstract: 'There are two main attack models considered in the adversarial robustness literature: black-box and white-box. We consider these threat models as two ends of a fine-grained spectrum, indexed by the number of queries the adversary can ask. Using this point of view we investigate how many queries the adversary needs to make to design an attack that is comparable to the best possible attack in the white-box model. We give a lower bound on that number of queries in terms of entropy of decision boundaries of the classifier. Using this result we analyze two classical learning algorithms on two synthetic tasks for which we prove meaningful security guarantees. The obtained bounds suggest that some learning algorithms are inherently more robust against query-bounded adversaries than others.' 
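The query-complexity view above treats the adversary's budget as the number of label queries it issues. A naive random-search attack that simply counts calls to a hypothetical label-only `predict` interface makes the notion concrete; this is illustrative only, not the construction analysed in the paper.

```python
import numpy as np

def query_counting_attack(predict, x, y_true, eps=0.3, max_queries=1000, rng=None):
    """Random search within an L-infinity ball of radius eps around x, counting
    every call to `predict`. Returns (adversarial example or None, queries used)."""
    rng = np.random.default_rng() if rng is None else rng
    queries = 0
    for _ in range(max_queries):
        candidate = x + rng.uniform(-eps, eps, size=x.shape)
        queries += 1
        if predict(candidate) != y_true:   # label flipped: attack succeeded
            return candidate, queries
    return None, queries
```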
volume: 139 URL: https://proceedings.mlr.press/v139/gluch21a.html PDF: http://proceedings.mlr.press/v139/gluch21a/gluch21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gluch21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Grzegorz family: Gluch - given: Rüdiger family: Urbanke editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3723-3733 id: gluch21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3723 lastpage: 3733 published: 2021-07-01 00:00:00 +0000 - title: 'Spectral Normalisation for Deep Reinforcement Learning: An Optimisation Perspective' abstract: 'Most of the recent deep reinforcement learning advances take an RL-centric perspective and focus on refinements of the training objective. We diverge from this view and show we can recover the performance of these developments not by changing the objective, but by regularising the value-function estimator. Constraining the Lipschitz constant of a single layer using spectral normalisation is sufficient to elevate the performance of a Categorical-DQN agent to that of a more elaborate agent on the challenging Atari domain. We conduct ablation studies to disentangle the various effects normalisation has on the learning dynamics and show that it is sufficient to modulate the parameter updates to recover most of the performance of spectral normalisation. These findings hint towards the need to also focus on the neural component and its learning dynamics to tackle the peculiarities of Deep Reinforcement Learning.' volume: 139 URL: https://proceedings.mlr.press/v139/gogianu21a.html PDF: http://proceedings.mlr.press/v139/gogianu21a/gogianu21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gogianu21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Florin family: Gogianu - given: Tudor family: Berariu - given: Mihaela C family: Rosca - given: Claudia family: Clopath - given: Lucian family: Busoniu - given: Razvan family: Pascanu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3734-3744 id: gogianu21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3734 lastpage: 3744 published: 2021-07-01 00:00:00 +0000 - title: '12-Lead ECG Reconstruction via Koopman Operators' abstract: '32% of all deaths worldwide are caused by cardiovascular diseases. Early detection, especially for patients with ischemia or cardiac arrhythmia, is crucial. To reduce the time between symptom onset and treatment, wearable ECG sensors were developed to allow for the recording of the full 12-lead ECG signal at home. However, if even a single lead is not correctly positioned on the body, that lead becomes corrupted, making automatic diagnosis on the basis of the full signal impossible. In this work, we present a methodology to reconstruct missing or noisy leads using the theory of Koopman Operators. Given a dataset consisting of full 12-lead ECGs, we learn a dynamical system describing the evolution of the 12 individual signals together in time. The Koopman theory indicates that there exists a high-dimensional embedding space in which the operator which propagates from one time instant to the next is linear. We therefore learn both the mapping to this embedding space, as well as the corresponding linear operator. 
Armed with this representation, we are able to impute missing leads by solving a least squares system in the embedding space, which can be achieved efficiently due to the sparse structure of the system. We perform an empirical evaluation using 12-lead ECG signals from thousands of patients, and show that we are able to reconstruct the signals in such a way as to enable accurate clinical diagnosis.' volume: 139 URL: https://proceedings.mlr.press/v139/golany21a.html PDF: http://proceedings.mlr.press/v139/golany21a/golany21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-golany21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tomer family: Golany - given: Kira family: Radinsky - given: Daniel family: Freedman - given: Saar family: Minha editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3745-3754 id: golany21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3745 lastpage: 3754 published: 2021-07-01 00:00:00 +0000 - title: 'Function Contrastive Learning of Transferable Meta-Representations' abstract: 'Meta-learning algorithms adapt quickly to new tasks that are drawn from the same task distribution as the training tasks. The mechanism leading to fast adaptation is the conditioning of a downstream predictive model on the inferred representation of the task’s underlying data generative process, or \emph{function}. This \emph{meta-representation}, which is computed from a few observed examples of the underlying function, is learned jointly with the predictive model. In this work, we study the implications of this joint training on the transferability of the meta-representations. Our goal is to learn meta-representations that are robust to noise in the data and facilitate solving a wide range of downstream tasks that share the same underlying functions. To this end, we propose a decoupled encoder-decoder approach to supervised meta-learning, where the encoder is trained with a contrastive objective to find a good representation of the underlying function. In particular, our training scheme is driven by the self-supervision signal indicating whether two sets of examples stem from the same function. Our experiments on a number of synthetic and real-world datasets show that the representations we obtain outperform strong baselines in terms of downstream performance and noise robustness, even when these baselines are trained in an end-to-end manner.'
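The function-contrastive objective above hinges on deciding whether two sets of examples come from the same underlying function. A compact sketch of a set encoder plus an InfoNCE-style loss over paired sets is shown below, assuming a deep-sets mean-pooling encoder as an illustrative stand-in for the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SetEncoder(nn.Module):
    """Deep-sets style encoder: embeds a set of (x, y) example pairs into one
    vector by mean-pooling a shared pointwise MLP."""
    def __init__(self, in_dim, hidden=64, out_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))

    def forward(self, pairs):              # pairs: (batch, set_size, in_dim)
        return self.net(pairs).mean(dim=1)

def same_function_contrastive_loss(z_a, z_b, temperature=0.1):
    """InfoNCE-style loss: row i of z_a and row i of z_b encode two example sets
    drawn from the same underlying function; all other rows act as negatives."""
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)
```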
volume: 139 URL: https://proceedings.mlr.press/v139/gondal21a.html PDF: http://proceedings.mlr.press/v139/gondal21a/gondal21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gondal21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Muhammad Waleed family: Gondal - given: Shruti family: Joshi - given: Nasim family: Rahaman - given: Stefan family: Bauer - given: Manuel family: Wuthrich - given: Bernhard family: Schölkopf editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3755-3765 id: gondal21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3755 lastpage: 3765 published: 2021-07-01 00:00:00 +0000 - title: 'Active Slices for Sliced Stein Discrepancy' abstract: 'Sliced Stein discrepancy (SSD) and its kernelized variants have demonstrated promising successes in goodness-of-fit tests and model learning in high dimensions. Despite the theoretical elegance, their empirical performance depends crucially on the search of the optimal slicing directions to discriminate between two distributions. Unfortunately, previous gradient-based optimisation approach returns sub-optimal results for the slicing directions: it is computationally expensive, sensitive to initialization, and it lacks theoretical guarantee for convergence. We address these issues in two steps. First, we show in theory that the requirement of using optimal slicing directions in the kernelized version of SSD can be relaxed, validating the resulting discrepancy with finite random slicing directions. Second, given that good slicing directions are crucial for practical performance, we propose a fast algorithm for finding good slicing directions based on ideas of active sub-space construction and spectral decomposition. Experiments in goodness-of-fit tests and model learning show that our approach achieves both the best performance and the fastest convergence. Especially, we demonstrate 14-80x speed-up in goodness-of-fit tests when compared with the gradient-based approach.' volume: 139 URL: https://proceedings.mlr.press/v139/gong21a.html PDF: http://proceedings.mlr.press/v139/gong21a/gong21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gong21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Wenbo family: Gong - given: Kaibo family: Zhang - given: Yingzhen family: Li - given: Jose Miguel family: Hernandez-Lobato editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3766-3776 id: gong21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3766 lastpage: 3776 published: 2021-07-01 00:00:00 +0000 - title: 'On the Problem of Underranking in Group-Fair Ranking' abstract: 'Bias in ranking systems, especially among the top ranks, can worsen social and economic inequalities, polarize opinions, and reinforce stereotypes. On the other hand, a bias correction for minority groups can cause more harm if perceived as favoring group-fair outcomes over meritocracy. Most group-fair ranking algorithms post-process a given ranking and output a group-fair ranking. 
In this paper, we formulate the problem of underranking in group-fair rankings based on how close the group-fair rank of each item is to its original rank, and prove a lower bound on the trade-off achievable for simultaneous underranking and group fairness in ranking. We give a fair ranking algorithm that takes any given ranking and outputs another ranking with simultaneous underranking and group fairness guarantees comparable to the lower bound we prove. Our experimental results confirm the theoretical trade-off between underranking and group fairness, and also show that our algorithm achieves the best of both when compared to the state-of-the-art baselines.' volume: 139 URL: https://proceedings.mlr.press/v139/gorantla21a.html PDF: http://proceedings.mlr.press/v139/gorantla21a/gorantla21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gorantla21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sruthi family: Gorantla - given: Amit family: Deshpande - given: Anand family: Louis editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3777-3787 id: gorantla21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3777 lastpage: 3787 published: 2021-07-01 00:00:00 +0000 - title: 'MARINA: Faster Non-Convex Distributed Learning with Compression' abstract: 'We develop and analyze MARINA: a new communication efficient method for non-convex distributed learning over heterogeneous datasets. MARINA employs a novel communication compression strategy based on the compression of gradient differences that is reminiscent of but different from the strategy employed in the DIANA method of Mishchenko et al. (2019). Unlike virtually all competing distributed first-order methods, including DIANA, ours is based on a carefully designed biased gradient estimator, which is the key to its superior theoretical and practical performance. The communication complexity bounds we prove for MARINA are evidently better than those of all previous first-order methods. Further, we develop and analyze two variants of MARINA: VR-MARINA and PP-MARINA. The first method is designed for the case when the local loss functions owned by clients are either of a finite sum or of an expectation form, and the second method allows for a partial participation of clients – a feature important in federated learning. All our methods are superior to previous state-of-the-art methods in terms of oracle/communication complexity. Finally, we provide a convergence analysis of all methods for problems satisfying the Polyak-Łojasiewicz condition.' volume: 139 URL: https://proceedings.mlr.press/v139/gorbunov21a.html PDF: http://proceedings.mlr.press/v139/gorbunov21a/gorbunov21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gorbunov21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Eduard family: Gorbunov - given: Konstantin P. 
family: Burlachenko - given: Zhize family: Li - given: Peter family: Richtarik editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3788-3798 id: gorbunov21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3788 lastpage: 3798 published: 2021-07-01 00:00:00 +0000 - title: 'Systematic Analysis of Cluster Similarity Indices: How to Validate Validation Measures' abstract: 'Many cluster similarity indices are used to evaluate clustering algorithms, and choosing the best one for a particular task remains an open problem. We demonstrate that this problem is crucial: there are many disagreements among the indices, these disagreements do affect which algorithms are preferred in applications, and this can lead to degraded performance in real-world systems. We propose a theoretical framework to tackle this problem: we develop a list of desirable properties and conduct an extensive theoretical analysis to verify which indices satisfy them. This allows for making an informed choice: given a particular application, one can first select properties that are desirable for the task and then identify indices satisfying these. Our work unifies and considerably extends existing attempts at analyzing cluster similarity indices: we introduce new properties, formalize existing ones, and mathematically prove or disprove each property for an extensive list of validation indices. This broader and more rigorous approach leads to recommendations that considerably differ from how validation indices are currently being chosen by practitioners. Some of the most popular indices are even shown to be dominated by previously overlooked ones.' volume: 139 URL: https://proceedings.mlr.press/v139/gosgens21a.html PDF: http://proceedings.mlr.press/v139/gosgens21a/gosgens21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gosgens21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Martijn M family: Gösgens - given: Alexey family: Tikhonov - given: Liudmila family: Prokhorenkova editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3799-3808 id: gosgens21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3799 lastpage: 3808 published: 2021-07-01 00:00:00 +0000 - title: 'Revisiting Point Cloud Shape Classification with a Simple and Effective Baseline' abstract: 'Processing point cloud data is an important component of many real-world systems. As such, a wide variety of point-based approaches have been proposed, reporting steady benchmark improvements over time. We study the key ingredients of this progress and uncover two critical results. First, we find that auxiliary factors like different evaluation schemes, data augmentation strategies, and loss functions, which are independent of the model architecture, make a large difference in performance. The differences are large enough that they obscure the effect of architecture. When these factors are controlled for, PointNet++, a relatively older network, performs competitively with recent methods. Second, a very simple projection-based method, which we refer to as SimpleView, performs surprisingly well. It achieves on par or better results than sophisticated state-of-the-art methods on ModelNet40 while being half the size of PointNet++. It also outperforms state-of-the-art methods on ScanObjectNN, a real-world point cloud benchmark, and demonstrates better cross-dataset generalization. 
Code is available at https://github.com/princeton-vl/SimpleView.' volume: 139 URL: https://proceedings.mlr.press/v139/goyal21a.html PDF: http://proceedings.mlr.press/v139/goyal21a/goyal21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-goyal21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ankit family: Goyal - given: Hei family: Law - given: Bowei family: Liu - given: Alejandro family: Newell - given: Jia family: Deng editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3809-3820 id: goyal21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3809 lastpage: 3820 published: 2021-07-01 00:00:00 +0000 - title: 'Dissecting Supervised Contrastive Learning' abstract: 'Minimizing cross-entropy over the softmax scores of a linear map composed with a high-capacity encoder is arguably the most popular choice for training neural networks on supervised learning tasks. However, recent works show that one can directly optimize the encoder instead, to obtain equally (or even more) discriminative representations via a supervised variant of a contrastive objective. In this work, we address the question whether there are fundamental differences in the sought-for representation geometry in the output space of the encoder at minimal loss. Specifically, we prove, under mild assumptions, that both losses attain their minimum once the representations of each class collapse to the vertices of a regular simplex, inscribed in a hypersphere. We provide empirical evidence that this configuration is attained in practice and that reaching a close-to-optimal state typically indicates good generalization performance. Yet, the two losses show remarkably different optimization behavior. The number of iterations required to perfectly fit to data scales superlinearly with the amount of randomly flipped labels for the supervised contrastive loss. This is in contrast to the approximately linear scaling previously reported for networks trained with cross-entropy.' volume: 139 URL: https://proceedings.mlr.press/v139/graf21a.html PDF: http://proceedings.mlr.press/v139/graf21a/graf21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-graf21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Florian family: Graf - given: Christoph family: Hofer - given: Marc family: Niethammer - given: Roland family: Kwitt editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3821-3830 id: graf21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3821 lastpage: 3830 published: 2021-07-01 00:00:00 +0000 - title: 'Oops I Took A Gradient: Scalable Sampling for Discrete Distributions' abstract: 'We propose a general and scalable approximate sampling strategy for probabilistic models with discrete variables. Our approach uses gradients of the likelihood function with respect to its discrete inputs to propose updates in a Metropolis-Hastings sampler. We show empirically that this approach outperforms generic samplers in a number of difficult settings including Ising models, Potts models, restricted Boltzmann machines, and factorial hidden Markov models. We also demonstrate our improved sampler for training deep energy-based models on high dimensional discrete image data. 
This approach outperforms variational auto-encoders and existing energy-based models. Finally, we give bounds showing that our approach is near-optimal in the class of samplers which propose local updates.' volume: 139 URL: https://proceedings.mlr.press/v139/grathwohl21a.html PDF: http://proceedings.mlr.press/v139/grathwohl21a/grathwohl21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-grathwohl21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Will family: Grathwohl - given: Kevin family: Swersky - given: Milad family: Hashemi - given: David family: Duvenaud - given: Chris family: Maddison editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3831-3841 id: grathwohl21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3831 lastpage: 3841 published: 2021-07-01 00:00:00 +0000 - title: 'Detecting Rewards Deterioration in Episodic Reinforcement Learning' abstract: 'In many RL applications, once training ends, it is vital to detect any deterioration in the agent performance as soon as possible. Furthermore, it often has to be done without modifying the policy and under minimal assumptions regarding the environment. In this paper, we address this problem by focusing directly on the rewards and testing for degradation. We consider an episodic framework, where the rewards within each episode are not independent, nor identically-distributed, nor Markov. We present this problem as a multivariate mean-shift detection problem with possibly partial observations. We define the mean-shift in a way corresponding to deterioration of a temporal signal (such as the rewards), and derive a test for this problem with optimal statistical power. Empirically, on deteriorated rewards in control problems (generated using various environment modifications), the test is demonstrated to be more powerful than standard tests - often by orders of magnitude. We also suggest a novel Bootstrap mechanism for False Alarm Rate control (BFAR), applicable to episodic (non-i.i.d) signal and allowing our test to run sequentially in an online manner. Our method does not rely on a learned model of the environment, is entirely external to the agent, and in fact can be applied to detect changes or drifts in any episodic signal.' volume: 139 URL: https://proceedings.mlr.press/v139/greenberg21a.html PDF: http://proceedings.mlr.press/v139/greenberg21a/greenberg21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-greenberg21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ido family: Greenberg - given: Shie family: Mannor editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3842-3853 id: greenberg21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3842 lastpage: 3853 published: 2021-07-01 00:00:00 +0000 - title: 'Crystallization Learning with the Delaunay Triangulation' abstract: 'Based on the Delaunay triangulation, we propose the crystallization learning to estimate the conditional expectation function in the framework of nonparametric regression. 
By conducting the crystallization search for the Delaunay simplices closest to the target point in a hierarchical way, the crystallization learning estimates the conditional expectation of the response by fitting a local linear model to the data points of the constructed Delaunay simplices. Instead of conducting the Delaunay triangulation for the entire feature space which would encounter enormous computational difficulty, our approach focuses only on the neighborhood of the target point and thus greatly expedites the estimation for high-dimensional cases. Because the volumes of Delaunay simplices are adaptive to the density of feature data points, our method selects neighbor data points uniformly in all directions and thus is more robust to the local geometric structure of the data than existing nonparametric regression methods. We develop the asymptotic properties of the crystallization learning and conduct numerical experiments on both synthetic and real data to demonstrate the advantages of our method in estimation of the conditional expectation function and prediction of the response.' volume: 139 URL: https://proceedings.mlr.press/v139/gu21a.html PDF: http://proceedings.mlr.press/v139/gu21a/gu21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gu21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jiaqi family: Gu - given: Guosheng family: Yin editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3854-3863 id: gu21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3854 lastpage: 3863 published: 2021-07-01 00:00:00 +0000 - title: 'AutoAttend: Automated Attention Representation Search' abstract: 'Self-attention mechanisms have been widely adopted in many machine learning areas, including Natural Language Processing (NLP) and Graph Representation Learning (GRL), etc. However, existing works heavily rely on hand-crafted design to obtain customized attention mechanisms. In this paper, we automate Key, Query and Value representation design, which is one of the most important steps to obtain effective self-attentions. We propose an automated self-attention representation model, AutoAttend, which can automatically search powerful attention representations for downstream tasks leveraging Neural Architecture Search (NAS). In particular, we design a tailored search space for attention representation automation, which is flexible to produce effective attention representation designs. Based on the design prior obtained from attention representations in previous works, we further regularize our search space to reduce the space complexity without the loss of expressivity. Moreover, we propose a novel context-aware parameter sharing mechanism considering special characteristics of each sub-architecture to provide more accurate architecture estimations when conducting parameter sharing in our tailored search space. Experiments show the superiority of our proposed AutoAttend model over previous state-of-the-arts on eight text classification tasks in NLP and four node classification tasks in GRL.' 
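As a rough illustration of the kind of Q/K/V representation search described in the AutoAttend abstract above, the sketch below enumerates a few candidate projection operators for the query, key and value paths of a scaled dot-product attention layer and scores each combination on a toy objective. The operator names, the shared weight matrix and the brute-force enumeration are illustrative assumptions, not the paper's tailored search space, parameter-sharing mechanism or NAS procedure.

```python
# Toy sketch only: brute-force search over candidate Q/K/V projection ops
# for scaled dot-product attention; a stand-in for the learned search described above.
import itertools
import numpy as np

rng = np.random.default_rng(0)

def attention(q, k, v):
    # Standard scaled dot-product attention with a row-wise softmax.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Hypothetical candidate ways of producing Q, K, V representations from the input.
ops = {
    "identity": lambda x, w: x,
    "linear":   lambda x, w: x @ w,
    "gated":    lambda x, w: x * (1.0 / (1.0 + np.exp(-(x @ w)))),
}

x = rng.normal(size=(8, 16))          # toy sequence: 8 tokens, dimension 16
target = rng.normal(size=(8, 16))     # toy regression target

best = None
for q_op, k_op, v_op in itertools.product(ops, repeat=3):
    w = rng.normal(size=(16, 16)) / 4.0   # one shared weight matrix (crude parameter sharing)
    out = attention(ops[q_op](x, w), ops[k_op](x, w), ops[v_op](x, w))
    loss = float(((out - target) ** 2).mean())
    if best is None or loss < best[0]:
        best = (loss, (q_op, k_op, v_op))

print("best Q/K/V op combination:", best[1], "loss:", round(best[0], 3))
```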
volume: 139 URL: https://proceedings.mlr.press/v139/guan21a.html PDF: http://proceedings.mlr.press/v139/guan21a/guan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-guan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chaoyu family: Guan - given: Xin family: Wang - given: Wenwu family: Zhu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3864-3874 id: guan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3864 lastpage: 3874 published: 2021-07-01 00:00:00 +0000 - title: 'Operationalizing Complex Causes: A Pragmatic View of Mediation' abstract: 'We examine the problem of causal response estimation for complex objects (e.g., text, images, genomics). In this setting, classical \emph{atomic} interventions are often not available (e.g., changes to characters, pixels, DNA base-pairs). Instead, we only have access to indirect or \emph{crude} interventions (e.g., enrolling in a writing program, modifying a scene, applying a gene therapy). In this work, we formalize this problem and provide an initial solution. Given a collection of candidate mediators, we propose (a) a two-step method for predicting the causal responses of crude interventions; and (b) a testing procedure to identify mediators of crude interventions. We demonstrate, on a range of simulated and real-world-inspired examples, that our approach allows us to efficiently estimate the effect of crude interventions with limited data from new treatment regimes.' volume: 139 URL: https://proceedings.mlr.press/v139/gultchin21a.html PDF: http://proceedings.mlr.press/v139/gultchin21a/gultchin21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gultchin21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Limor family: Gultchin - given: David family: Watson - given: Matt family: Kusner - given: Ricardo family: Silva editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3875-3885 id: gultchin21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3875 lastpage: 3885 published: 2021-07-01 00:00:00 +0000 - title: 'On a Combination of Alternating Minimization and Nesterov’s Momentum' abstract: 'Alternating minimization (AM) procedures are practically efficient in many applications for solving convex and non-convex optimization problems. On the other hand, Nesterov’s accelerated gradient is theoretically optimal first-order method for convex optimization. In this paper we combine AM and Nesterov’s acceleration to propose an accelerated alternating minimization algorithm. We prove $1/k^2$ convergence rate in terms of the objective for convex problems and $1/k$ in terms of the squared gradient norm for non-convex problems, where $k$ is the iteration counter. Our method does not require any knowledge of neither convexity of the problem nor function parameters such as Lipschitz constant of the gradient, i.e. it is adaptive to convexity and smoothness and is uniformly optimal for smooth convex and non-convex problems. Further, we develop its primal-dual modification for strongly convex problems with linear constraints and prove the same $1/k^2$ for the primal objective residual and constraints feasibility.' 
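A minimal sketch of the flavour of algorithm the abstract above describes: alternating exact minimization over two blocks of a convex quadratic, with a Nesterov-style extrapolation step between outer iterations. The problem instance, the momentum schedule and the choice to extrapolate only the block used in the next update are illustrative assumptions rather than the paper's accelerated alternating minimization method.

```python
# Sketch of alternating minimization with a Nesterov-style extrapolation step
# on the convex quadratic f(x, y) = 0.5 * ||A x + B y - c||^2.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
B = rng.normal(size=(30, 10))
c = rng.normal(size=30)

def objective(x, y):
    r = A @ x + B @ y - c
    return 0.5 * float(r @ r)

x, y = np.zeros(10), np.zeros(10)
y_prev = y.copy()

for k in range(1, 51):
    beta = (k - 1) / (k + 2)            # Nesterov-style momentum weight
    y_extra = y + beta * (y - y_prev)   # extrapolate the block used in the next x-update
    y_prev = y
    # Alternating minimization: each block is minimized exactly (a least-squares solve).
    x = np.linalg.lstsq(A, c - B @ y_extra, rcond=None)[0]
    y = np.linalg.lstsq(B, c - A @ x, rcond=None)[0]

print("objective after 50 outer iterations:", round(objective(x, y), 6))
```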
volume: 139 URL: https://proceedings.mlr.press/v139/guminov21a.html PDF: http://proceedings.mlr.press/v139/guminov21a/guminov21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-guminov21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sergey family: Guminov - given: Pavel family: Dvurechensky - given: Nazarii family: Tupitsa - given: Alexander family: Gasnikov editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3886-3898 id: guminov21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3886 lastpage: 3898 published: 2021-07-01 00:00:00 +0000 - title: 'Decentralized Single-Timescale Actor-Critic on Zero-Sum Two-Player Stochastic Games' abstract: 'We study the global convergence and global optimality of the actor-critic algorithm applied to zero-sum two-player stochastic games in a decentralized manner. We focus on the single-timescale setting where the critic is updated by applying the Bellman operator only once and the actor is updated by policy gradient with the information from the critic. Our algorithm operates in a decentralized manner, as we assume that each player has no access to the actions of the other one, which, in a way, protects the privacy of both players. Moreover, we consider linear function approximations for both actor and critic, and we prove that the sequence of joint policies generated by our decentralized linear algorithm converges to the minimax equilibrium at a sublinear rate $\mathcal{O}(\sqrt{K})$, where $K$ is the number of iterations. To the best of our knowledge, we establish the global optimality and convergence of the decentralized actor-critic algorithm on zero-sum two-player stochastic games with linear function approximations for the first time.' volume: 139 URL: https://proceedings.mlr.press/v139/guo21a.html PDF: http://proceedings.mlr.press/v139/guo21a/guo21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-guo21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hongyi family: Guo - given: Zuyue family: Fu - given: Zhuoran family: Yang - given: Zhaoran family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3899-3909 id: guo21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3899 lastpage: 3909 published: 2021-07-01 00:00:00 +0000 - title: 'Adversarial Policy Learning in Two-player Competitive Games' abstract: 'In a two-player deep reinforcement learning task, recent work shows an attacker could learn an adversarial policy that triggers a target agent to perform poorly and even react in an undesired way. However, its efficacy heavily relies upon the zero-sum assumption made in the two-player game. In this work, we propose a new adversarial learning algorithm. It addresses the problem by resetting the optimization goal in the learning process and designing a new surrogate optimization function. Our experiments show that our method significantly improves adversarial agents’ exploitability compared with the state-of-the-art attack. Besides, we also discover that our method could augment an agent with the ability to abuse the target game’s unfairness. Finally, we show that agents adversarially re-trained against our adversarial agents could obtain stronger adversary-resistance.'
volume: 139 URL: https://proceedings.mlr.press/v139/guo21b.html PDF: http://proceedings.mlr.press/v139/guo21b/guo21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-guo21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Wenbo family: Guo - given: Xian family: Wu - given: Sui family: Huang - given: Xinyu family: Xing editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3910-3919 id: guo21b issued: date-parts: - 2021 - 7 - 1 firstpage: 3910 lastpage: 3919 published: 2021-07-01 00:00:00 +0000 - title: 'Soft then Hard: Rethinking the Quantization in Neural Image Compression' abstract: 'Quantization is one of the core components in lossy image compression. For neural image compression, end-to-end optimization requires differentiable approximations of quantization, which can generally be grouped into three categories: additive uniform noise, straight-through estimator and soft-to-hard annealing. Training with additive uniform noise approximates the quantization error variationally but suffers from the train-test mismatch. The other two methods do not encounter this mismatch but, as shown in this paper, hurt the rate-distortion performance since the latent representation ability is weakened. We thus propose a novel soft-then-hard quantization strategy for neural image compression that first learns an expressive latent space softly, then closes the train-test mismatch with hard quantization. In addition, beyond the fixed integer-quantization, we apply scaled additive uniform noise to adaptively control the quantization granularity by deriving a new variational upper bound on actual rate. Experiments demonstrate that our proposed methods are easy to adopt, stable to train, and highly effective especially on complex compression models.' volume: 139 URL: https://proceedings.mlr.press/v139/guo21c.html PDF: http://proceedings.mlr.press/v139/guo21c/guo21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-guo21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zongyu family: Guo - given: Zhizheng family: Zhang - given: Runsen family: Feng - given: Zhibo family: Chen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3920-3929 id: guo21c issued: date-parts: - 2021 - 7 - 1 firstpage: 3920 lastpage: 3929 published: 2021-07-01 00:00:00 +0000 - title: 'UneVEn: Universal Value Exploration for Multi-Agent Reinforcement Learning' abstract: 'VDN and QMIX are two popular value-based algorithms for cooperative MARL that learn a centralized action value function as a monotonic mixing of per-agent utilities. While this enables easy decentralization of the learned policy, the restricted joint action value function can prevent them from solving tasks that require significant coordination between agents at a given timestep. We show that this problem can be overcome by improving the joint exploration of all agents during training. Specifically, we propose a novel MARL approach called Universal Value Exploration (UneVEn) that learns a set of related tasks simultaneously with a linear decomposition of universal successor features. 
With the policies of already solved related tasks, the joint exploration process of all agents can be improved to help them achieve better coordination. Empirical results on a set of exploration games, challenging cooperative predator-prey tasks requiring significant coordination among agents, and StarCraft II micromanagement benchmarks show that UneVEn can solve tasks where other state-of-the-art MARL methods fail.' volume: 139 URL: https://proceedings.mlr.press/v139/gupta21a.html PDF: http://proceedings.mlr.press/v139/gupta21a/gupta21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gupta21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tarun family: Gupta - given: Anuj family: Mahajan - given: Bei family: Peng - given: Wendelin family: Boehmer - given: Shimon family: Whiteson editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3930-3941 id: gupta21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3930 lastpage: 3941 published: 2021-07-01 00:00:00 +0000 - title: 'Distribution-Free Calibration Guarantees for Histogram Binning without Sample Splitting' abstract: 'We prove calibration guarantees for the popular histogram binning (also called uniform-mass binning) method of Zadrozny and Elkan (2001). Histogram binning has displayed strong practical performance, but theoretical guarantees have only been shown for sample split versions that avoid ’double dipping’ the data. We demonstrate that the statistical cost of sample splitting is practically significant on a credit default dataset. We then prove calibration guarantees for the original method that double dips the data, using a certain Markov property of order statistics. Based on our results, we make practical recommendations for choosing the number of bins in histogram binning. In our illustrative simulations, we propose a new tool for assessing calibration—validity plots—which provide more information than an ECE estimate.' volume: 139 URL: https://proceedings.mlr.press/v139/gupta21b.html PDF: http://proceedings.mlr.press/v139/gupta21b/gupta21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gupta21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chirag family: Gupta - given: Aaditya family: Ramdas editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3942-3952 id: gupta21b issued: date-parts: - 2021 - 7 - 1 firstpage: 3942 lastpage: 3952 published: 2021-07-01 00:00:00 +0000 - title: 'Correcting Exposure Bias for Link Recommendation' abstract: 'Link prediction methods are frequently applied in recommender systems, e.g., to suggest citations for academic papers or friends in social networks. However, exposure bias can arise when users are systematically underexposed to certain relevant items. For example, in citation networks, authors might be more likely to encounter papers from their own field and thus cite them preferentially. This bias can propagate through naively trained link predictors, leading to both biased evaluation and high generalization error (as assessed by true relevance). Moreover, this bias can be exacerbated by feedback loops. We propose estimators that leverage known exposure probabilities to mitigate this bias and consequent feedback loops. 
Next, we provide a loss function for learning the exposure probabilities from data. Finally, experiments on semi-synthetic data based on real-world citation networks, show that our methods reliably identify (truly) relevant citations. Additionally, our methods lead to greater diversity in the recommended papers’ fields of study. The code is available at github.com/shantanu95/exposure-bias-link-rec.' volume: 139 URL: https://proceedings.mlr.press/v139/gupta21c.html PDF: http://proceedings.mlr.press/v139/gupta21c/gupta21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gupta21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shantanu family: Gupta - given: Hao family: Wang - given: Zachary family: Lipton - given: Yuyang family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3953-3963 id: gupta21c issued: date-parts: - 2021 - 7 - 1 firstpage: 3953 lastpage: 3963 published: 2021-07-01 00:00:00 +0000 - title: 'The Heavy-Tail Phenomenon in SGD' abstract: 'In recent years, various notions of capacity and complexity have been proposed for characterizing the generalization properties of stochastic gradient descent (SGD) in deep learning. Some of the popular notions that correlate well with the performance on unseen data are (i) the ‘flatness’ of the local minimum found by SGD, which is related to the eigenvalues of the Hessian, (ii) the ratio of the stepsize $\eta$ to the batch-size $b$, which essentially controls the magnitude of the stochastic gradient noise, and (iii) the ‘tail-index’, which measures the heaviness of the tails of the network weights at convergence. In this paper, we argue that these three seemingly unrelated perspectives for generalization are deeply linked to each other. We claim that depending on the structure of the Hessian of the loss at the minimum, and the choices of the algorithm parameters $\eta$ and $b$, the SGD iterates will converge to a \emph{heavy-tailed} stationary distribution. We rigorously prove this claim in the setting of quadratic optimization: we show that even in a simple linear regression problem with independent and identically distributed data whose distribution has finite moments of all order, the iterates can be heavy-tailed with infinite variance. We further characterize the behavior of the tails with respect to algorithm parameters, the dimension, and the curvature. We then translate our results into insights about the behavior of SGD in deep learning. We support our theory with experiments conducted on synthetic data, fully connected, and convolutional neural networks.' 
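The dependence on the stepsize-to-batch-size ratio described in the abstract above can be probed with a toy simulation; the sketch below runs SGD on a simple Gaussian linear regression and reports a crude tail diagnostic (excess kurtosis of the iterate error norm) for a small and a large ratio. The problem sizes, stepsizes and diagnostic are illustrative assumptions, not the paper's experiments or its tail-index estimator.

```python
# Toy simulation: compare a crude tail diagnostic of SGD iterates on linear
# regression for two stepsize/batch-size configurations.
import numpy as np

rng = np.random.default_rng(0)
d, n_steps = 5, 20000
w_true = np.ones(d)

def run_sgd(eta, batch):
    """SGD on least squares with fresh Gaussian data; returns post-burn-in error norms."""
    w = np.zeros(d)
    norms = []
    for _ in range(n_steps):
        X = rng.normal(size=(batch, d))
        y = X @ w_true + rng.normal(size=batch)
        grad = X.T @ (X @ w - y) / batch
        w = w - eta * grad
        norms.append(np.linalg.norm(w - w_true))
    return np.array(norms[n_steps // 2:])

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return float((z ** 4).mean() - 3.0)

for eta, batch in [(0.01, 32), (0.12, 1)]:
    print(f"eta={eta}, batch={batch}: excess kurtosis of the error norm "
          f"= {excess_kurtosis(run_sgd(eta, batch)):.2f}")
```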
volume: 139 URL: https://proceedings.mlr.press/v139/gurbuzbalaban21a.html PDF: http://proceedings.mlr.press/v139/gurbuzbalaban21a/gurbuzbalaban21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gurbuzbalaban21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mert family: Gurbuzbalaban - given: Umut family: Simsekli - given: Lingjiong family: Zhu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3964-3975 id: gurbuzbalaban21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3964 lastpage: 3975 published: 2021-07-01 00:00:00 +0000 - title: 'Knowledge Enhanced Machine Learning Pipeline against Diverse Adversarial Attacks' abstract: 'Despite the great successes achieved by deep neural networks (DNNs), recent studies show that they are vulnerable against adversarial examples, which aim to mislead DNNs by adding small adversarial perturbations. Several defenses have been proposed against such attacks, while many of them have been adaptively attacked. In this work, we aim to enhance the ML robustness from a different perspective by leveraging domain knowledge: We propose a Knowledge Enhanced Machine Learning Pipeline (KEMLP) to integrate domain knowledge (i.e., logic relationships among different predictions) into a probabilistic graphical model via first-order logic rules. In particular, we develop KEMLP by integrating a diverse set of weak auxiliary models based on their logical relationships to the main DNN model that performs the target task. Theoretically, we provide convergence results and prove that, under mild conditions, the prediction of KEMLP is more robust than that of the main DNN model. Empirically, we take road sign recognition as an example and leverage the relationships between road signs and their shapes and contents as domain knowledge. We show that compared with adversarial training and other baselines, KEMLP achieves higher robustness against physical attacks, $\mathcal{L}_p$ bounded attacks, unforeseen attacks, and natural corruptions under both whitebox and blackbox settings, while still maintaining high clean accuracy.' volume: 139 URL: https://proceedings.mlr.press/v139/gurel21a.html PDF: http://proceedings.mlr.press/v139/gurel21a/gurel21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gurel21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Nezihe Merve family: Gürel - given: Xiangyu family: Qi - given: Luka family: Rimanic - given: Ce family: Zhang - given: Bo family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3976-3987 id: gurel21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3976 lastpage: 3987 published: 2021-07-01 00:00:00 +0000 - title: 'Adapting to Delays and Data in Adversarial Multi-Armed Bandits' abstract: 'We consider the adversarial multi-armed bandit problem under delayed feedback. We analyze variants of the Exp3 algorithm that tune their step size using only information (about the losses and delays) available at the time of the decisions, and obtain regret guarantees that adapt to the observed (rather than the worst-case) sequences of delays and/or losses. 
First, through a remarkably simple proof technique, we show that with proper tuning of the step size, the algorithm achieves an optimal (up to logarithmic factors) regret of order $\sqrt{\log(K)(TK + D)}$ both in expectation and in high probability, where $K$ is the number of arms, $T$ is the time horizon, and $D$ is the cumulative delay. The high-probability version of the bound, which is the first high-probability delay-adaptive bound in the literature, crucially depends on the use of implicit exploration in estimating the losses. Then, following Zimmert and Seldin (2019), we extend these results so that the algorithm can “skip” rounds with large delays, resulting in regret bounds of order $\sqrt{TK\log(K)} + |R| + \sqrt{D_{\bar{R}}\log(K)}$, where $R$ is an arbitrary set of rounds (which are skipped) and $D_{\bar{R}}$ is the cumulative delay of the feedback for other rounds. Finally, we present another, data-adaptive (AdaGrad-style) version of the algorithm for which the regret adapts to the observed (delayed) losses instead of only adapting to the cumulative delay (this algorithm requires an a priori upper bound on the maximum delay, or the advance knowledge of the delay for each decision when it is made). The resulting bound can be orders of magnitude smaller on benign problems, and it can be shown that the delay only affects the regret through the loss of the best arm.' volume: 139 URL: https://proceedings.mlr.press/v139/gyorgy21a.html PDF: http://proceedings.mlr.press/v139/gyorgy21a/gyorgy21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gyorgy21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Andras family: Gyorgy - given: Pooria family: Joulani editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3988-3997 id: gyorgy21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3988 lastpage: 3997 published: 2021-07-01 00:00:00 +0000 - title: 'Rate-Distortion Analysis of Minimum Excess Risk in Bayesian Learning' abstract: 'In parametric Bayesian learning, a prior is assumed on the parameter $W$ which determines the distribution of samples. In this setting, Minimum Excess Risk (MER) is defined as the difference between the minimum expected loss achievable when learning from data and the minimum expected loss that could be achieved if $W$ was observed. In this paper, we build upon and extend the recent results of (Xu & Raginsky, 2020) to analyze the MER in Bayesian learning and derive information-theoretic bounds on it. We formulate the problem as a (constrained) rate-distortion optimization and show how the solution can be bounded above and below by two other rate-distortion functions that are easier to study. The lower bound represents the minimum possible excess risk achievable by \emph{any} process using $R$ bits of information from the parameter $W$. For the upper bound, the optimization is further constrained to use $R$ bits from the training set, a setting which relates MER to information-theoretic bounds on the generalization gap in frequentist learning. We derive information-theoretic bounds on the difference between these upper and lower bounds and show that they can provide order-wise tight rates for MER under certain conditions. This analysis gives more insight into the information-theoretic nature of Bayesian learning as well as providing novel bounds.' 
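In the notation assumed here (training sample $Z^n = (Z_1,\dots,Z_n)$, a fresh test sample $Z$, a loss $\ell$ and decision rules $a(\cdot)$, none of which is taken verbatim from the paper), the Minimum Excess Risk described in the abstract above can be written as

```latex
\[
  \mathrm{MER}
  \;=\;
  \underbrace{\inf_{a(\cdot)} \mathbb{E}\big[\ell\big(a(Z^n), Z\big)\big]}_{\text{best achievable from data}}
  \;-\;
  \underbrace{\mathbb{E}\Big[\inf_{a} \mathbb{E}\big[\ell(a, Z)\,\big|\, W\big]\Big]}_{\text{best achievable if } W \text{ were observed}} .
\]
```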
volume: 139 URL: https://proceedings.mlr.press/v139/hafez-kolahi21a.html PDF: http://proceedings.mlr.press/v139/hafez-kolahi21a/hafez-kolahi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hafez-kolahi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hassan family: Hafez-Kolahi - given: Behrad family: Moniri - given: Shohreh family: Kasaei - given: Mahdieh Soleymani family: Baghshah editor: - given: Marina family: Meila - given: Tong family: Zhang page: 3998-4007 id: hafez-kolahi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 3998 lastpage: 4007 published: 2021-07-01 00:00:00 +0000 - title: 'Regret Minimization in Stochastic Non-Convex Learning via a Proximal-Gradient Approach' abstract: 'This paper develops a methodology for regret minimization with stochastic first-order oracle feedback in online, constrained, non-smooth, non-convex problems. In this setting, the minimization of external regret is beyond reach for first-order methods, and there are no gradient-based algorithmic frameworks capable of providing a solution. On that account, we propose a conceptual approach that leverages non-convex optimality measures, leading to a suitable generalization of the learner’s local regret. We focus on a local regret measure defined via a proximal-gradient mapping, that also encompasses the original notion proposed by Hazan et al. (2017). To achieve no local regret in this setting, we develop a proximal-gradient method based on stochastic first-order feedback, and a simpler method for when access to a perfect first-order oracle is possible. Both methods are order-optimal (in the min-max sense), and we also establish a bound on the number of proximal-gradient queries these methods require. As an important application of our results, we also obtain a link between online and offline non-convex stochastic optimization manifested as a new proximal-gradient scheme with complexity guarantees matching those obtained via variance reduction techniques.' volume: 139 URL: https://proceedings.mlr.press/v139/hallak21a.html PDF: http://proceedings.mlr.press/v139/hallak21a/hallak21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hallak21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Nadav family: Hallak - given: Panayotis family: Mertikopoulos - given: Volkan family: Cevher editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4008-4017 id: hallak21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4008 lastpage: 4017 published: 2021-07-01 00:00:00 +0000 - title: 'Diversity Actor-Critic: Sample-Aware Entropy Regularization for Sample-Efficient Exploration' abstract: 'In this paper, sample-aware policy entropy regularization is proposed to enhance the conventional policy entropy regularization for better exploration. Exploiting the sample distribution obtainable from the replay buffer, the proposed sample-aware entropy regularization maximizes the entropy of the weighted sum of the policy action distribution and the sample action distribution from the replay buffer for sample-efficient exploration. A practical algorithm named diversity actor-critic (DAC) is developed by applying policy iteration to the objective function with the proposed sample-aware entropy regularization. 
Numerical results show that DAC significantly outperforms existing recent algorithms for reinforcement learning.' volume: 139 URL: https://proceedings.mlr.press/v139/han21a.html PDF: http://proceedings.mlr.press/v139/han21a/han21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-han21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Seungyul family: Han - given: Youngchul family: Sung editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4018-4029 id: han21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4018 lastpage: 4029 published: 2021-07-01 00:00:00 +0000 - title: 'Adversarial Combinatorial Bandits with General Non-linear Reward Functions' abstract: 'In this paper we study the adversarial combinatorial bandit with a known non-linear reward function, extending existing work on adversarial linear combinatorial bandits. The adversarial combinatorial bandit with general non-linear reward is an important open problem in the bandit literature, and it is still unclear whether there is a significant gap from the case of linear reward, stochastic bandit, or semi-bandit feedback. We show that, with $N$ arms and subsets of $K$ arms being chosen at each of $T$ time periods, the minimax optimal regret is $\widetilde\Theta_{d}(\sqrt{N^d T})$ if the reward function is a $d$-degree polynomial with $d < K$, and $\Theta_K(\sqrt{N^K T})$ if the reward function is not a low-degree polynomial. Both bounds are significantly different from the bound $O(\sqrt{\mathrm{poly}(N,K)T})$ for the linear case, which suggests that there is a fundamental gap between the linear and non-linear reward structures. Our result also finds applications to the adversarial assortment optimization problem in online recommendation. We show that in the worst case of the adversarial assortment problem, the optimal algorithm must treat each of the $\binom{N}{K}$ assortments as independent.' volume: 139 URL: https://proceedings.mlr.press/v139/han21b.html PDF: http://proceedings.mlr.press/v139/han21b/han21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-han21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yanjun family: Han - given: Yining family: Wang - given: Xi family: Chen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4030-4039 id: han21b issued: date-parts: - 2021 - 7 - 1 firstpage: 4030 lastpage: 4039 published: 2021-07-01 00:00:00 +0000 - title: 'A Collective Learning Framework to Boost GNN Expressiveness for Node Classification' abstract: 'Collective Inference (CI) is a procedure designed to boost weak relational classifiers, especially for node classification tasks. Graph Neural Networks (GNNs) are strong classifiers that have been used with great success. Unfortunately, most existing practical GNNs are not most-expressive (universal). Thus, it is an open question whether one can improve strong relational node classifiers, such as GNNs, with CI. In this work, we investigate this question and propose \emph{collective learning} for GNNs—a general collective classification approach for node representation learning that increases their representation power.
We show that previous attempts to incorporate CI into GNNs fail to boost their expressiveness because they do not adapt CI’s Monte Carlo sampling to representation learning. We evaluate our proposed framework with a variety of state-of-the-art GNNs. Our experiments show a consistent, significant boost in node classification accuracy—regardless of the choice of underlying GNN—for inductive node classification in partially-labeled graphs, across five real-world network datasets.' volume: 139 URL: https://proceedings.mlr.press/v139/hang21a.html PDF: http://proceedings.mlr.press/v139/hang21a/hang21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hang21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mengyue family: Hang - given: Jennifer family: Neville - given: Bruno family: Ribeiro editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4040-4050 id: hang21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4040 lastpage: 4050 published: 2021-07-01 00:00:00 +0000 - title: 'Grounding Language to Entities and Dynamics for Generalization in Reinforcement Learning' abstract: 'We investigate the use of natural language to drive the generalization of control policies and introduce the new multi-task environment Messenger with free-form text manuals describing the environment dynamics. Unlike previous work, Messenger does not assume prior knowledge connecting text and state observations—the control policy must simultaneously ground the game manual to entity symbols and dynamics in the environment. We develop a new model, EMMA (Entity Mapper with Multi-modal Attention), which uses an entity-conditioned attention module that allows for selective focus over relevant descriptions in the manual for each entity in the environment. EMMA is end-to-end differentiable and learns a latent grounding of entities and dynamics from text to observations using only environment rewards. EMMA achieves successful zero-shot generalization to unseen games with new dynamics, obtaining a 40% higher win rate compared to multiple baselines. However, win rate on the hardest stage of Messenger remains low (10%), demonstrating the need for additional work in this direction.' volume: 139 URL: https://proceedings.mlr.press/v139/hanjie21a.html PDF: http://proceedings.mlr.press/v139/hanjie21a/hanjie21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hanjie21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Austin W. family: Hanjie - given: Victor Y family: Zhong - given: Karthik family: Narasimhan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4051-4062 id: hanjie21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4051 lastpage: 4062 published: 2021-07-01 00:00:00 +0000 - title: 'Sparse Feature Selection Makes Batch Reinforcement Learning More Sample Efficient' abstract: 'This paper provides a statistical analysis of high-dimensional batch reinforcement learning (RL) using sparse linear function approximation. When there is a large number of candidate features, our result sheds light on the fact that sparsity-aware methods can make batch RL more sample efficient. We first consider the off-policy policy evaluation problem.
To evaluate a new target policy, we analyze a Lasso fitted Q-evaluation method and establish a finite-sample error bound that has no polynomial dependence on the ambient dimension. To reduce the Lasso bias, we further propose a post model-selection estimator that applies fitted Q-evaluation to the features selected via group Lasso. Under an additional signal strength assumption, we derive a sharper instance-dependent error bound that depends on a divergence function measuring the distribution mismatch between the data distribution and the occupancy measure of the target policy. Further, we study the Lasso fitted Q-iteration for batch policy optimization and establish a finite-sample error bound depending on the ratio between the number of relevant features and the restricted minimal eigenvalue of the data’s covariance. In the end, we complement the results with minimax lower bounds for batch-data policy evaluation/optimization that nearly match our upper bounds. The results suggest that having well-conditioned data is crucial for sparse batch policy learning.' volume: 139 URL: https://proceedings.mlr.press/v139/hao21a.html PDF: http://proceedings.mlr.press/v139/hao21a/hao21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hao21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Botao family: Hao - given: Yaqi family: Duan - given: Tor family: Lattimore - given: Csaba family: Szepesvari - given: Mengdi family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4063-4073 id: hao21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4063 lastpage: 4073 published: 2021-07-01 00:00:00 +0000 - title: 'Bootstrapping Fitted Q-Evaluation for Off-Policy Inference' abstract: 'Bootstrapping provides a flexible and effective approach for assessing the quality of batch reinforcement learning, yet its theoretical properties are poorly understood. In this paper, we study the use of bootstrapping in off-policy evaluation (OPE), and in particular, we focus on the fitted Q-evaluation (FQE) that is known to be minimax-optimal in the tabular and linear-model cases. We propose a bootstrapping FQE method for inferring the distribution of the policy evaluation error and show that this method is asymptotically efficient and distributionally consistent for off-policy statistical inference. To overcome the computation limit of bootstrapping, we further adapt a subsampling procedure that improves the runtime by an order of magnitude. We numerically evaluate the bootstrapping method in classical RL environments for confidence interval estimation, estimating the variance of the off-policy evaluator, and estimating the correlation between multiple off-policy evaluators.'
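A toy tabular sketch of the bootstrap-the-FQE-estimate idea in the abstract above: refit fitted Q-evaluation on resampled transitions and read off percentile intervals for the target policy's value. The synthetic MDP, the tabular regression step and the plain percentile interval are illustrative assumptions; the paper analyzes FQE with function approximation and a subsampled bootstrap.

```python
# Sketch: percentile-bootstrap interval for a tabular fitted Q-evaluation estimate.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma, n_iters = 4, 2, 0.9, 30

# Synthetic batch of transitions (s, a, r, s') from some behavior policy.
n = 2000
S = rng.integers(n_states, size=n)
A = rng.integers(n_actions, size=n)
R = rng.normal(loc=(S == 3).astype(float), scale=0.5)
S_next = (S + A + 1) % n_states

target_policy = np.zeros(n_states, dtype=int)   # target policy: always take action 0

def fqe(idx):
    """Tabular fitted Q-evaluation on the transitions indexed by idx."""
    q = np.zeros((n_states, n_actions))
    for _ in range(n_iters):
        targets = R[idx] + gamma * q[S_next[idx], target_policy[S_next[idx]]]
        q_new = np.zeros_like(q)
        for s in range(n_states):
            for a in range(n_actions):
                mask = (S[idx] == s) & (A[idx] == a)
                if mask.any():
                    q_new[s, a] = targets[mask].mean()
        q = q_new
    return q[0, target_policy[0]]                # value estimate for start state 0

point = fqe(np.arange(n))
boot = [fqe(rng.integers(n, size=n)) for _ in range(200)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"FQE estimate {point:.3f}, bootstrap 95% interval [{lo:.3f}, {hi:.3f}]")
```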
volume: 139 URL: https://proceedings.mlr.press/v139/hao21b.html PDF: http://proceedings.mlr.press/v139/hao21b/hao21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hao21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Botao family: Hao - given: Xiang family: Ji - given: Yaqi family: Duan - given: Hao family: Lu - given: Csaba family: Szepesvari - given: Mengdi family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4074-4084 id: hao21b issued: date-parts: - 2021 - 7 - 1 firstpage: 4074 lastpage: 4084 published: 2021-07-01 00:00:00 +0000 - title: 'Compressed Maximum Likelihood' abstract: 'Maximum likelihood (ML) is one of the most fundamental and general statistical estimation techniques. Inspired by recent advances in estimating distribution functionals, we propose $\textit{compressed maximum likelihood}$ (CML) that applies ML to the compressed samples. We then show that CML is sample-efficient for several essential learning tasks over both discrete and continuous domains, including learning densities with structures, estimating probability multisets, and inferring symmetric distribution functionals.' volume: 139 URL: https://proceedings.mlr.press/v139/hao21c.html PDF: http://proceedings.mlr.press/v139/hao21c/hao21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hao21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yi family: Hao - given: Alon family: Orlitsky editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4085-4095 id: hao21c issued: date-parts: - 2021 - 7 - 1 firstpage: 4085 lastpage: 4095 published: 2021-07-01 00:00:00 +0000 - title: 'Valid Causal Inference with (Some) Invalid Instruments' abstract: 'Instrumental variable methods provide a powerful approach to estimating causal effects in the presence of unobserved confounding. But a key challenge when applying them is the reliance on untestable "exclusion" assumptions that rule out any relationship between the instrument variable and the response that is not mediated by the treatment. In this paper, we show how to perform consistent IV estimation despite violations of the exclusion assumption. In particular, we show that when one has multiple candidate instruments, only a majority of these candidates—or, more generally, the modal candidate-response relationship—needs to be valid to estimate the causal effect. Our approach uses an estimate of the modal prediction from an ensemble of instrumental variable estimators. The technique is simple to apply and is "black-box" in the sense that it may be used with any instrumental variable estimator as long as the treatment effect is identified for each valid instrument independently. As such, it is compatible with recent machine-learning based estimators that allow for the estimation of conditional average treatment effects (CATE) on complex, high dimensional data. Experimentally, we achieve accurate estimates of conditional average treatment effects using an ensemble of deep network-based estimators, including on a challenging simulated Mendelian Randomization problem.' 
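A simple numerical illustration of the majority/modal-instrument idea in the abstract above: compute one Wald (ratio) estimate per candidate instrument and combine them by taking the mode of the estimates. The simulated data, the Wald estimator and the crude kernel-density mode are illustrative stand-ins; the paper's estimator uses the modal prediction of an ensemble of (possibly deep) IV estimators rather than a scalar mode.

```python
# Sketch: per-instrument Wald estimates combined via the mode of the estimates.
import numpy as np

rng = np.random.default_rng(0)
n, n_instruments, true_effect = 5000, 7, 2.0

Z = rng.normal(size=(n, n_instruments))
confounder = rng.normal(size=n)
treatment = Z @ np.full(n_instruments, 0.8) + confounder + rng.normal(size=n)
# Instruments 5 and 6 are invalid: they affect the outcome directly.
direct = np.zeros(n_instruments); direct[5:] = 1.5
outcome = true_effect * treatment + Z @ direct + 2.0 * confounder + rng.normal(size=n)

# One Wald (ratio) estimate per candidate instrument.
estimates = np.array([
    np.cov(Z[:, j], outcome)[0, 1] / np.cov(Z[:, j], treatment)[0, 1]
    for j in range(n_instruments)
])

# Modal estimate: peak of a Gaussian kernel density over the per-instrument estimates.
grid = np.linspace(estimates.min() - 1, estimates.max() + 1, 500)
bandwidth = 0.2
density = np.exp(-0.5 * ((grid[:, None] - estimates[None, :]) / bandwidth) ** 2).sum(axis=1)
modal_estimate = grid[density.argmax()]

print("per-instrument estimates:", np.round(estimates, 2))
print("modal estimate:", round(float(modal_estimate), 2), "(true effect 2.0)")
```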
volume: 139 URL: https://proceedings.mlr.press/v139/hartford21a.html PDF: http://proceedings.mlr.press/v139/hartford21a/hartford21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hartford21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jason S family: Hartford - given: Victor family: Veitch - given: Dhanya family: Sridhar - given: Kevin family: Leyton-Brown editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4096-4106 id: hartford21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4096 lastpage: 4106 published: 2021-07-01 00:00:00 +0000 - title: 'Model Performance Scaling with Multiple Data Sources' abstract: 'Real-world machine learning systems are often trained using a mix of data sources with varying cost and quality. Understanding how the size and composition of a training dataset affect model performance is critical for advancing our understanding of generalization, as well as designing more effective data collection policies. We show that there is a simple scaling law that predicts the loss incurred by a model even under varying dataset composition. Our work expands recent observations of scaling laws for log-linear generalization error in the i.i.d setting and uses this to cast model performance prediction as a learning problem. Using the theory of optimal experimental design, we derive a simple rational function approximation to generalization error that can be fitted using a few model training runs. Our approach can achieve highly accurate ($r^2\approx .9$) predictions of model performance under substantial extrapolation in two different standard supervised learning tasks and is accurate ($r^2 \approx .83$) on more challenging machine translation and question answering tasks where many baselines achieve worse-than-random performance.' volume: 139 URL: https://proceedings.mlr.press/v139/hashimoto21a.html PDF: http://proceedings.mlr.press/v139/hashimoto21a/hashimoto21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hashimoto21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tatsunori family: Hashimoto editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4107-4116 id: hashimoto21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4107 lastpage: 4116 published: 2021-07-01 00:00:00 +0000 - title: 'Hierarchical VAEs Know What They Don’t Know' abstract: 'Deep generative models have been demonstrated as state-of-the-art density estimators. Yet, recent work has found that they often assign a higher likelihood to data from outside the training distribution. This seemingly paradoxical behavior has caused concerns over the quality of the attained density estimates. In the context of hierarchical variational autoencoders, we provide evidence to explain this behavior by out-of-distribution data having in-distribution low-level features. We argue that this is both expected and desirable behavior. With this insight in hand, we develop a fast, scalable and fully unsupervised likelihood-ratio score for OOD detection that requires data to be in-distribution across all feature-levels. We benchmark the method on a vast set of data and model combinations and achieve state-of-the-art results on out-of-distribution detection.' 
volume: 139 URL: https://proceedings.mlr.press/v139/havtorn21a.html PDF: http://proceedings.mlr.press/v139/havtorn21a/havtorn21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-havtorn21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jakob D. family: Havtorn - given: Jes family: Frellsen - given: Søren family: Hauberg - given: Lars family: Maaløe editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4117-4128 id: havtorn21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4117 lastpage: 4128 published: 2021-07-01 00:00:00 +0000 - title: 'SPECTRE: defending against backdoor attacks using robust statistics' abstract: 'Modern machine learning increasingly requires training on a large collection of data from multiple sources, not all of which can be trusted. A particularly frightening scenario is when a small fraction of corrupted data changes the behavior of the trained model when triggered by an attacker-specified watermark. Such a compromised model will be deployed unnoticed as the model is otherwise accurate. There have been promising attempts to use the intermediate representations of such a model to separate corrupted examples from clean ones. However, these methods require a significant fraction of the data to be corrupted, in order to have strong enough signal for detection. We propose a novel defense algorithm using robust covariance estimation to amplify the spectral signature of corrupted data. This defense is able to completely remove backdoors whenever the benchmark backdoor attacks are successful, even in regimes where previous methods have no hope for detecting poisoned examples.' volume: 139 URL: https://proceedings.mlr.press/v139/hayase21a.html PDF: http://proceedings.mlr.press/v139/hayase21a/hayase21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hayase21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jonathan family: Hayase - given: Weihao family: Kong - given: Raghav family: Somani - given: Sewoong family: Oh editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4129-4139 id: hayase21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4129 lastpage: 4139 published: 2021-07-01 00:00:00 +0000 - title: 'Boosting for Online Convex Optimization' abstract: 'We consider the decision-making framework of online convex optimization with a very large number of experts. This setting is ubiquitous in contextual and reinforcement learning problems, where the size of the policy class renders enumeration and search within the policy class infeasible. Instead, we consider generalizing the methodology of online boosting. We define a weak learning algorithm as a mechanism that guarantees multiplicatively approximate regret against a base class of experts. In this access model, we give an efficient boosting algorithm that guarantees near-optimal regret against the convex hull of the base class. We consider both full and partial (a.k.a. bandit) information feedback models. We also give an analogous efficient boosting algorithm for the i.i.d. statistical setting. Our results simultaneously generalize online boosting and gradient boosting guarantees to the contextual learning model, online convex optimization and bandit linear optimization settings.'
volume: 139 URL: https://proceedings.mlr.press/v139/hazan21a.html PDF: http://proceedings.mlr.press/v139/hazan21a/hazan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hazan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Elad family: Hazan - given: Karan family: Singh editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4140-4149 id: hazan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4140 lastpage: 4149 published: 2021-07-01 00:00:00 +0000 - title: 'PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models' abstract: 'The size of Transformer models is growing at an unprecedented rate. It has taken less than one year to reach trillion-level parameters since the release of GPT-3 (175B). Training such models requires both substantial engineering efforts and enormous computing resources, which are luxuries most research teams cannot afford. In this paper, we propose PipeTransformer, which leverages automated elastic pipelining for efficient distributed training of Transformer models. In PipeTransformer, we design an adaptive on-the-fly freeze algorithm that can identify and freeze some layers gradually during training, and an elastic pipelining system that can dynamically allocate resources to train the remaining active layers. More specifically, PipeTransformer automatically excludes frozen layers from the pipeline, packs active layers into fewer GPUs, and forks more replicas to increase data-parallel width. We evaluate PipeTransformer using Vision Transformer (ViT) on ImageNet and BERT on SQuAD and GLUE datasets. Our results show that compared to the state-of-the-art baseline, PipeTransformer attains up to 2.83-fold speedup without losing accuracy. We also provide various performance analyses for a more comprehensive understanding of our algorithmic and system-wise design. Finally, we have modularized our training system with flexible APIs and made the source code publicly available at https://DistML.ai.' volume: 139 URL: https://proceedings.mlr.press/v139/he21a.html PDF: http://proceedings.mlr.press/v139/he21a/he21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-he21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chaoyang family: He - given: Shen family: Li - given: Mahdi family: Soltanolkotabi - given: Salman family: Avestimehr editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4150-4159 id: he21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4150 lastpage: 4159 published: 2021-07-01 00:00:00 +0000 - title: 'SoundDet: Polyphonic Moving Sound Event Detection and Localization from Raw Waveform' abstract: 'We present SoundDet, a new end-to-end trainable and light-weight framework for polyphonic moving sound event detection and localization. Prior methods typically approach this problem by preprocessing raw waveform into time-frequency representations, which is more amenable to process with well-established image processing pipelines. Prior methods also detect in a segment-wise manner, leading to incomplete and partial detections.
SoundDet takes a novel approach and directly consumes the raw, multichannel waveform and treats the spatio-temporal sound event as a complete “sound-object” to be detected. Specifically, SoundDet consists of a backbone neural network and two parallel heads for temporal detection and spatial localization, respectively. Given the large sampling rate of the raw waveform, the backbone network first learns a bank of phase-sensitive and frequency-selective filters to explicitly retain direction-of-arrival information, whilst being more computationally and parametrically efficient than standard 1D/2D convolution. A dense sound event proposal map is then constructed to handle the challenge of predicting events with largely varying temporal durations. Accompanying the dense proposal map are a temporal overlapness map and a motion smoothness map that measure a proposal’s confidence to be an event from the temporal detection accuracy and movement consistency perspectives. Involving the two maps ensures that SoundDet is trained in a spatio-temporally unified manner. Experimental results on the public DCASE dataset show the advantage of SoundDet on both segment-based evaluation and our newly proposed event-based evaluation system.' volume: 139 URL: https://proceedings.mlr.press/v139/he21b.html PDF: http://proceedings.mlr.press/v139/he21b/he21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-he21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yuhang family: He - given: Niki family: Trigoni - given: Andrew family: Markham editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4160-4170 id: he21b issued: date-parts: - 2021 - 7 - 1 firstpage: 4160 lastpage: 4170 published: 2021-07-01 00:00:00 +0000 - title: 'Logarithmic Regret for Reinforcement Learning with Linear Function Approximation' abstract: 'Reinforcement learning (RL) with linear function approximation has received increasing attention recently. However, existing work has focused on obtaining $\sqrt{T}$-type regret bounds, where $T$ is the number of interactions with the MDP. In this paper, we show that logarithmic regret is attainable under two recently proposed linear MDP assumptions provided that there exists a positive sub-optimality gap for the optimal action-value function. More specifically, under the linear MDP assumption (Jin et al., 2020), the LSVI-UCB algorithm can achieve $\tilde{O}(d^{3}H^5/\text{gap}_{\text{min}}\cdot \log(T))$ regret; and under the linear mixture MDP assumption (Ayoub et al., 2020), the UCRL-VTR algorithm can achieve $\tilde{O}(d^{2}H^5/\text{gap}_{\text{min}}\cdot \log^3(T))$ regret, where $d$ is the dimension of the feature mapping, $H$ is the length of an episode, $\text{gap}_{\text{min}}$ is the minimal sub-optimality gap, and $\tilde O$ hides all logarithmic terms except $\log(T)$. To the best of our knowledge, these are the first logarithmic regret bounds for RL with linear function approximation. We also establish gap-dependent lower bounds for the two linear MDP models.' 
volume: 139 URL: https://proceedings.mlr.press/v139/he21c.html PDF: http://proceedings.mlr.press/v139/he21c/he21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-he21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jiafan family: He - given: Dongruo family: Zhou - given: Quanquan family: Gu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4171-4180 id: he21c issued: date-parts: - 2021 - 7 - 1 firstpage: 4171 lastpage: 4180 published: 2021-07-01 00:00:00 +0000 - title: 'Finding Relevant Information via a Discrete Fourier Expansion' abstract: 'A fundamental obstacle in learning information from data is the presence of nonlinear redundancies and dependencies in it. To address this, we propose a Fourier-based approach to extract relevant information in the supervised setting. We first develop a novel Fourier expansion for functions of correlated binary random variables. This expansion is a generalization of the standard Fourier analysis on the Boolean cube beyond product probability spaces. We further extend our Fourier analysis to stochastic mappings. As an important application of this analysis, we investigate learning with feature subset selection. We reformulate this problem in the Fourier domain and introduce a computationally efficient measure for selecting features. Bridging the Bayesian error rate with the Fourier coefficients, we demonstrate that the Fourier expansion provides a powerful tool to characterize nonlinear dependencies in the features-label relation. Via theoretical analysis, we show that our proposed measure finds provably asymptotically optimal feature subsets. Lastly, we present an algorithm based on our measure and verify our findings via numerical experiments on various datasets.' volume: 139 URL: https://proceedings.mlr.press/v139/heidari21a.html PDF: http://proceedings.mlr.press/v139/heidari21a/heidari21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-heidari21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mohsen family: Heidari - given: Jithin family: Sreedharan - given: Gil I family: Shamir - given: Wojciech family: Szpankowski editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4181-4191 id: heidari21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4181 lastpage: 4191 published: 2021-07-01 00:00:00 +0000 - title: 'Zeroth-Order Non-Convex Learning via Hierarchical Dual Averaging' abstract: 'We propose a hierarchical version of dual averaging for zeroth-order online non-convex optimization {–} i.e., learning processes where, at each stage, the optimizer is facing an unknown non-convex loss function and only receives the incurred loss as feedback. The proposed class of policies relies on the construction of an online model that aggregates loss information as it arrives, and it consists of two principal components: (a) a regularizer adapted to the Fisher information metric (as opposed to the metric norm of the ambient space); and (b) a principled exploration of the problem’s state space based on an adapted hierarchical schedule. 
This construction enables sharper control of the model’s bias and variance, and allows us to derive tight bounds for both the learner’s static and dynamic regret {–} i.e., the regret incurred against the best dynamic policy in hindsight over the horizon of play.' volume: 139 URL: https://proceedings.mlr.press/v139/heliou21a.html PDF: http://proceedings.mlr.press/v139/heliou21a/heliou21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-heliou21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Amélie family: Héliou - given: Matthieu family: Martin - given: Panayotis family: Mertikopoulos - given: Thibaud family: Rahier editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4192-4202 id: heliou21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4192 lastpage: 4202 published: 2021-07-01 00:00:00 +0000 - title: 'Improving Molecular Graph Neural Network Explainability with Orthonormalization and Induced Sparsity' abstract: 'Rationalizing which parts of a molecule drive the predictions of a molecular graph convolutional neural network (GCNN) can be difficult. To help, we propose two simple regularization techniques to apply during the training of GCNNs: Batch Representation Orthonormalization (BRO) and Gini regularization. BRO, inspired by molecular orbital theory, encourages graph convolution operations to generate orthonormal node embeddings. Gini regularization is applied to the weights of the output layer and constrains the number of dimensions the model can use to make predictions. We show that Gini and BRO regularization can improve the accuracy of state-of-the-art GCNN attribution methods on artificial benchmark datasets. In a real-world setting, we demonstrate that medicinal chemists significantly prefer explanations extracted from regularized models. While we only study these regularizers in the context of GCNNs, both can be applied to other types of neural networks.' volume: 139 URL: https://proceedings.mlr.press/v139/henderson21a.html PDF: http://proceedings.mlr.press/v139/henderson21a/henderson21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-henderson21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ryan family: Henderson - given: Djork-Arné family: Clevert - given: Floriane family: Montanari editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4203-4213 id: henderson21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4203 lastpage: 4213 published: 2021-07-01 00:00:00 +0000 - title: 'Muesli: Combining Improvements in Policy Optimization' abstract: 'We propose a novel policy update that combines regularized policy optimization with model learning as an auxiliary loss. The update (henceforth Muesli) matches MuZero’s state-of-the-art performance on Atari. Notably, Muesli does so without using deep search: it acts directly with a policy network and has computation speed comparable to model-free baselines. The Atari results are complemented by extensive ablations, and by additional results on continuous control and 9x9 Go.' 
volume: 139 URL: https://proceedings.mlr.press/v139/hessel21a.html PDF: http://proceedings.mlr.press/v139/hessel21a/hessel21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hessel21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Matteo family: Hessel - given: Ivo family: Danihelka - given: Fabio family: Viola - given: Arthur family: Guez - given: Simon family: Schmitt - given: Laurent family: Sifre - given: Theophane family: Weber - given: David family: Silver - given: Hado family: Van Hasselt editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4214-4226 id: hessel21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4214 lastpage: 4226 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Representations by Humans, for Humans' abstract: 'When machine predictors can achieve higher performance than the human decision-makers they support, improving the performance of human decision-makers is often conflated with improving machine accuracy. Here we propose a framework to directly support human decision-making, in which the role of machines is to reframe problems rather than to prescribe actions through prediction. Inspired by the success of representation learning in improving performance of machine predictors, our framework learns human-facing representations optimized for human performance. This “Mind Composed with Machine” framework incorporates a human decision-making model directly into the representation learning paradigm and is trained with a novel human-in-the-loop training procedure. We empirically demonstrate the successful application of the framework to various tasks and representational forms.' volume: 139 URL: https://proceedings.mlr.press/v139/hilgard21a.html PDF: http://proceedings.mlr.press/v139/hilgard21a/hilgard21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hilgard21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sophie family: Hilgard - given: Nir family: Rosenfeld - given: Mahzarin R family: Banaji - given: Jack family: Cao - given: David family: Parkes editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4227-4238 id: hilgard21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4227 lastpage: 4238 published: 2021-07-01 00:00:00 +0000 - title: 'Optimizing Black-box Metrics with Iterative Example Weighting' abstract: 'We consider learning to optimize a classification metric defined by a black-box function of the confusion matrix. Such black-box learning settings are ubiquitous, for example, when the learner only has query access to the metric of interest, or in noisy-label and domain adaptation applications where the learner must evaluate the metric via performance evaluation using a small validation sample. Our approach is to adaptively learn example weights on the training dataset such that the resulting weighted objective best approximates the metric on the validation sample. We show how to model and estimate the example weights and use them to iteratively post-shift a pre-trained class probability estimator to construct a classifier. We also analyze the resulting procedure’s statistical properties. 
Experiments on various label noise, domain shift, and fair classification setups confirm that our proposal compares favorably to the state-of-the-art baselines for each application.' volume: 139 URL: https://proceedings.mlr.press/v139/hiranandani21a.html PDF: http://proceedings.mlr.press/v139/hiranandani21a/hiranandani21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hiranandani21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Gaurush family: Hiranandani - given: Jatin family: Mathur - given: Harikrishna family: Narasimhan - given: Mahdi Milani family: Fard - given: Sanmi family: Koyejo editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4239-4249 id: hiranandani21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4239 lastpage: 4249 published: 2021-07-01 00:00:00 +0000 - title: 'Trees with Attention for Set Prediction Tasks' abstract: 'In many machine learning applications, each record represents a set of items. For example, when making predictions from medical records, the medications prescribed to a patient are a set whose size is not fixed and whose order is arbitrary. However, most machine learning algorithms are not designed to handle set structures and are limited to processing records of fixed size. Set-Tree, presented in this work, extends the support for sets to tree-based models, such as Random-Forest and Gradient-Boosting, by introducing an attention mechanism and set-compatible split criteria. We evaluate the new method empirically on a wide range of problems ranging from making predictions on sub-atomic particle jets to estimating the redshift of galaxies. The new method outperforms existing tree-based methods consistently and significantly. Moreover, it is competitive and often outperforms Deep Learning. We also discuss the theoretical properties of Set-Trees and explain how they enable item-level explainability.' volume: 139 URL: https://proceedings.mlr.press/v139/hirsch21a.html PDF: http://proceedings.mlr.press/v139/hirsch21a/hirsch21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hirsch21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Roy family: Hirsch - given: Ran family: Gilad-Bachrach editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4250-4261 id: hirsch21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4250 lastpage: 4261 published: 2021-07-01 00:00:00 +0000 - title: 'Multiplicative Noise and Heavy Tails in Stochastic Optimization' abstract: 'Although stochastic optimization is central to modern machine learning, the precise mechanisms underlying its success, and in particular, the precise role of the stochasticity, still remain unclear. Modeling stochastic optimization algorithms as discrete random recurrence relations, we show that multiplicative noise, as it commonly arises due to variance in local rates of convergence, results in heavy-tailed stationary behaviour in the parameters. Theoretical results are obtained characterizing this for a large class of (non-linear and even non-convex) models and optimizers (including momentum, Adam, and stochastic Newton), demonstrating that this phenomenon holds generally. 
We describe dependence on key factors, including step size, batch size, and data variability, all of which exhibit similar qualitative behavior to recent empirical results on state-of-the-art neural network models. Furthermore, we empirically illustrate how multiplicative noise and heavy-tailed structure improve capacity for basin hopping and exploration of non-convex loss surfaces, over commonly-considered stochastic dynamics with only additive noise and light-tailed structure.' volume: 139 URL: https://proceedings.mlr.press/v139/hodgkinson21a.html PDF: http://proceedings.mlr.press/v139/hodgkinson21a/hodgkinson21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hodgkinson21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Liam family: Hodgkinson - given: Michael family: Mahoney editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4262-4274 id: hodgkinson21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4262 lastpage: 4274 published: 2021-07-01 00:00:00 +0000 - title: 'MC-LSTM: Mass-Conserving LSTM' abstract: 'The success of Convolutional Neural Networks (CNNs) in computer vision is mainly driven by their strong inductive bias, which is strong enough to allow CNNs to solve vision-related tasks with random weights, meaning without learning. Similarly, Long Short-Term Memory (LSTM) has a strong inductive bias towards storing information over time. However, many real-world systems are governed by conservation laws, which lead to the redistribution of particular quantities {—} e.g. in physical and economic systems. Our novel Mass-Conserving LSTM (MC-LSTM) adheres to these conservation laws by extending the inductive bias of LSTM to model the redistribution of those stored quantities. MC-LSTMs set a new state-of-the-art for neural arithmetic units at learning arithmetic operations, such as addition tasks, which have a strong conservation law, as the sum is constant over time. Further, MC-LSTM is applied to traffic forecasting, modeling a pendulum, and a large benchmark dataset in hydrology, where it sets a new state-of-the-art for predicting peak flows. In the hydrology example, we show that MC-LSTM states correlate with real-world processes and are therefore interpretable.' volume: 139 URL: https://proceedings.mlr.press/v139/hoedt21a.html PDF: http://proceedings.mlr.press/v139/hoedt21a/hoedt21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hoedt21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Pieter-Jan family: Hoedt - given: Frederik family: Kratzert - given: Daniel family: Klotz - given: Christina family: Halmich - given: Markus family: Holzleitner - given: Grey S family: Nearing - given: Sepp family: Hochreiter - given: Guenter family: Klambauer editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4275-4286 id: hoedt21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4275 lastpage: 4286 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Curves for Analysis of Deep Networks' abstract: 'Learning curves model a classifier’s test error as a function of the number of training samples. Prior works show that learning curves can be used to select model parameters and extrapolate performance. 
We investigate how to use learning curves to evaluate design choices, such as pretraining, architecture, and data augmentation. We propose a method to robustly estimate learning curves, abstract their parameters into error and data-reliance, and evaluate the effectiveness of different parameterizations. Our experiments exemplify use of learning curves for analysis and yield several interesting observations.' volume: 139 URL: https://proceedings.mlr.press/v139/hoiem21a.html PDF: http://proceedings.mlr.press/v139/hoiem21a/hoiem21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hoiem21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Derek family: Hoiem - given: Tanmay family: Gupta - given: Zhizhong family: Li - given: Michal family: Shlapentokh-Rothman editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4287-4296 id: hoiem21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4287 lastpage: 4296 published: 2021-07-01 00:00:00 +0000 - title: 'Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes' abstract: 'Motivated by objects such as electric fields or fluid streams, we study the problem of learning stochastic fields, i.e. stochastic processes whose samples are fields like those occurring in physics and engineering. Considering general transformations such as rotations and reflections, we show that spatial invariance of stochastic fields requires an inference model to be equivariant. Leveraging recent advances from the equivariance literature, we study equivariance in two classes of models. Firstly, we fully characterise equivariant Gaussian processes. Secondly, we introduce Steerable Conditional Neural Processes (SteerCNPs), a new, fully equivariant member of the Neural Process family. In experiments with Gaussian process vector fields, images, and real-world weather data, we observe that SteerCNPs significantly improve the performance of previous models and equivariance leads to improvements in transfer learning tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/holderrieth21a.html PDF: http://proceedings.mlr.press/v139/holderrieth21a/holderrieth21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-holderrieth21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Peter family: Holderrieth - given: Michael J family: Hutchinson - given: Yee Whye family: Teh editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4297-4307 id: holderrieth21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4297 lastpage: 4307 published: 2021-07-01 00:00:00 +0000 - title: 'Latent Programmer: Discrete Latent Codes for Program Synthesis' abstract: 'A key problem in program synthesis is searching over the large space of possible programs. Human programmers might decide the high-level structure of the desired program before thinking about the details; motivated by this intuition, we consider two-level search for program synthesis, in which the synthesizer first generates a plan, a sequence of symbols that describes the desired program at a high level, before generating the program. We propose to learn representations of programs that can act as plans to organize such a two-level search. 
Discrete latent codes are appealing for this purpose, and can be learned by applying recent work on discrete autoencoders. Based on these insights, we introduce the Latent Programmer (LP), a program synthesis method that first predicts a discrete latent code from input/output examples, and then generates the program in the target language. We evaluate the LP on two domains, demonstrating that it yields an improvement in accuracy, especially on longer programs for which search is most difficult.' volume: 139 URL: https://proceedings.mlr.press/v139/hong21a.html PDF: http://proceedings.mlr.press/v139/hong21a/hong21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hong21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Joey family: Hong - given: David family: Dohan - given: Rishabh family: Singh - given: Charles family: Sutton - given: Manzil family: Zaheer editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4308-4318 id: hong21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4308 lastpage: 4318 published: 2021-07-01 00:00:00 +0000 - title: 'Chebyshev Polynomial Codes: Task Entanglement-based Coding for Distributed Matrix Multiplication' abstract: 'Distributed computing has been a prominent solution to efficiently process massive datasets in parallel. However, the existence of stragglers is one of the major concerns that slows down the overall speed of distributed computing. To deal with this problem, we consider a distributed matrix multiplication scenario where a master assigns multiple tasks to each worker to exploit stragglers’ computing ability (which is typically wasted in conventional distributed computing). We propose Chebyshev polynomial codes, which can achieve order-wise improvement in encoding complexity at the master and communication load in distributed matrix multiplication using task entanglement. The key idea of task entanglement is to reduce the number of encoded matrices for multiple tasks assigned to each worker by intertwining encoded matrices. We experimentally demonstrate that, in cloud environments, Chebyshev polynomial codes can provide significant reduction in overall processing time in distributed computing for matrix multiplication, which is a key computational component in modern deep learning.' volume: 139 URL: https://proceedings.mlr.press/v139/hong21b.html PDF: http://proceedings.mlr.press/v139/hong21b/hong21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hong21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sangwoo family: Hong - given: Heecheol family: Yang - given: Youngseok family: Yoon - given: Taehyun family: Cho - given: Jungwoo family: Lee editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4319-4327 id: hong21b issued: date-parts: - 2021 - 7 - 1 firstpage: 4319 lastpage: 4327 published: 2021-07-01 00:00:00 +0000 - title: 'Federated Learning of User Verification Models Without Sharing Embeddings' abstract: 'We consider the problem of training User Verification (UV) models in federated setup, where each user has access to the data of only one class and user embeddings cannot be shared with the server or other users. 
To address this problem, we propose Federated User Verification (FedUV), a framework in which users jointly learn a set of vectors and maximize the correlation of their instance embeddings with a secret linear combination of those vectors. We show that choosing the linear combinations from the codewords of an error-correcting code allows users to collaboratively train the model without revealing their embedding vectors. We present the experimental results for user verification with voice, face, and handwriting data and show that FedUV is on par with existing approaches, while not sharing the embeddings with other users or the server.' volume: 139 URL: https://proceedings.mlr.press/v139/hosseini21a.html PDF: http://proceedings.mlr.press/v139/hosseini21a/hosseini21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hosseini21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hossein family: Hosseini - given: Hyunsin family: Park - given: Sungrack family: Yun - given: Christos family: Louizos - given: Joseph family: Soriaga - given: Max family: Welling editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4328-4336 id: hosseini21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4328 lastpage: 4336 published: 2021-07-01 00:00:00 +0000 - title: 'The Limits of Min-Max Optimization Algorithms: Convergence to Spurious Non-Critical Sets' abstract: 'Compared to minimization, the min-max optimization in machine learning applications is considerably more convoluted because of the existence of cycles and similar phenomena. Such oscillatory behaviors are well-understood in the convex-concave regime, and many algorithms are known to overcome them. In this paper, we go beyond this basic setting and characterize the convergence properties of many popular methods in solving non-convex/non-concave problems. In particular, we show that a wide class of state-of-the-art schemes and heuristics may converge with arbitrarily high probability to attractors that are in no way min-max optimal or even stationary. Our work thus points out a potential pitfall among many existing theoretical frameworks, and we corroborate our theoretical claims by explicitly showcasing spurious attractors in simple two-dimensional problems.' volume: 139 URL: https://proceedings.mlr.press/v139/hsieh21a.html PDF: http://proceedings.mlr.press/v139/hsieh21a/hsieh21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hsieh21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ya-Ping family: Hsieh - given: Panayotis family: Mertikopoulos - given: Volkan family: Cevher editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4337-4348 id: hsieh21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4337 lastpage: 4348 published: 2021-07-01 00:00:00 +0000 - title: 'Near-Optimal Representation Learning for Linear Bandits and Linear RL' abstract: 'This paper studies representation learning for multi-task linear bandits and multi-task episodic RL with linear value function approximation. We first consider the setting where we play $M$ linear bandits with dimension $d$ concurrently, and these bandits share a common $k$-dimensional linear representation so that $k\ll d$ and $k \ll M$. 
We propose a sample-efficient algorithm, MTLR-OFUL, which leverages the shared representation to achieve $\tilde{O}(M\sqrt{dkT} + d\sqrt{kMT})$ regret, with $T$ being the number of total steps. Our regret significantly improves upon the baseline $\tilde{O}(Md\sqrt{T})$ achieved by solving each task independently. We further develop a lower bound that shows our regret is near-optimal when $d > M$. Furthermore, we extend the algorithm and analysis to multi-task episodic RL with linear value function approximation under low inherent Bellman error (Zanette et al., 2020a). To the best of our knowledge, this is the first theoretical result that characterizes the benefits of multi-task representation learning for exploration in RL with function approximation.' volume: 139 URL: https://proceedings.mlr.press/v139/hu21a.html PDF: http://proceedings.mlr.press/v139/hu21a/hu21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hu21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jiachen family: Hu - given: Xiaoyu family: Chen - given: Chi family: Jin - given: Lihong family: Li - given: Liwei family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4349-4358 id: hu21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4349 lastpage: 4358 published: 2021-07-01 00:00:00 +0000 - title: 'On the Random Conjugate Kernel and Neural Tangent Kernel' abstract: 'We investigate the distributions of Conjugate Kernel (CK) and Neural Tangent Kernel (NTK) for ReLU networks with random initialization. We derive the precise distributions and moments of the diagonal elements of these kernels. For a feedforward network, these values converge in law to a log-normal distribution when the network depth $d$ and width $n$ simultaneously tend to infinity, and the variance of the log diagonal elements is proportional to ${d}/{n}$. For the residual network, in the limit that the number of branches $m$ increases to infinity and the width $n$ remains fixed, the diagonal elements of the Conjugate Kernel converge in law to a log-normal distribution where the variance of the log value is proportional to ${1}/{n}$, and the diagonal elements of NTK converge in law to a log-normally distributed variable times the conjugate kernel of one feedforward network. Our new theoretical analysis results suggest that the residual network remains trainable in the limit of infinite branches and fixed network width. Numerical experiments are conducted, and all results validate the soundness of our theoretical analysis.' volume: 139 URL: https://proceedings.mlr.press/v139/hu21b.html PDF: http://proceedings.mlr.press/v139/hu21b/hu21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hu21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zhengmian family: Hu - given: Heng family: Huang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4359-4368 id: hu21b issued: date-parts: - 2021 - 7 - 1 firstpage: 4359 lastpage: 4368 published: 2021-07-01 00:00:00 +0000 - title: 'Off-Belief Learning' abstract: 'The standard problem setting in Dec-POMDPs is self-play, where the goal is to find a set of policies that play optimally together. 
Policies learned through self-play may adopt arbitrary conventions and implicitly rely on multi-step reasoning based on fragile assumptions about other agents’ actions and thus fail when paired with humans or independently trained agents at test time. To address this, we present off-belief learning (OBL). At each timestep OBL agents follow a policy $\pi_1$ that is optimized assuming past actions were taken by a given, fixed policy ($\pi_0$), but assuming that future actions will be taken by $\pi_1$. When $\pi_0$ is uniform random, OBL converges to an optimal policy that does not rely on inferences based on other agents’ behavior (an optimal grounded policy). OBL can be iterated in a hierarchy, where the optimal policy from one level becomes the input to the next, thereby introducing multi-level cognitive reasoning in a controlled manner. Unlike existing approaches, which may converge to any equilibrium policy, OBL converges to a unique policy, making it suitable for zero-shot coordination (ZSC). OBL can be scaled to high-dimensional settings with a fictitious transition mechanism and shows strong performance in both a toy-setting and the benchmark human-AI & ZSC problem Hanabi.' volume: 139 URL: https://proceedings.mlr.press/v139/hu21c.html PDF: http://proceedings.mlr.press/v139/hu21c/hu21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hu21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hengyuan family: Hu - given: Adam family: Lerer - given: Brandon family: Cui - given: Luis family: Pineda - given: Noam family: Brown - given: Jakob family: Foerster editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4369-4379 id: hu21c issued: date-parts: - 2021 - 7 - 1 firstpage: 4369 lastpage: 4379 published: 2021-07-01 00:00:00 +0000 - title: 'Generalizable Episodic Memory for Deep Reinforcement Learning' abstract: 'Episodic memory-based methods can rapidly latch onto past successful strategies by a non-parametric memory and improve sample efficiency of traditional reinforcement learning. However, little effort is put into the continuous domain, where a state is never visited twice, and previous episodic methods fail to efficiently aggregate experience across trajectories. To address this problem, we propose Generalizable Episodic Memory (GEM), which effectively organizes the state-action values of episodic memory in a generalizable manner and supports implicit planning on memorized trajectories. GEM utilizes a double estimator to reduce the overestimation bias induced by value propagation in the planning process. Empirical evaluation shows that our method significantly outperforms existing trajectory-based methods on various MuJoCo continuous control tasks. To further show the general applicability, we evaluate our method on Atari games with discrete action space, which also shows a significant improvement over baseline algorithms.' 
volume: 139 URL: https://proceedings.mlr.press/v139/hu21d.html PDF: http://proceedings.mlr.press/v139/hu21d/hu21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hu21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hao family: Hu - given: Jianing family: Ye - given: Guangxiang family: Zhu - given: Zhizhou family: Ren - given: Chongjie family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4380-4390 id: hu21d issued: date-parts: - 2021 - 7 - 1 firstpage: 4380 lastpage: 4390 published: 2021-07-01 00:00:00 +0000 - title: 'A Scalable Deterministic Global Optimization Algorithm for Clustering Problems' abstract: 'The minimum sum-of-squares clustering (MSSC) task, which can be treated as a Mixed Integer Second Order Cone Programming (MISOCP) problem, is rarely investigated in the literature through deterministic optimization to find its global optimal value. In this paper, we modelled the MSSC task as a two-stage optimization problem and proposed a tailored reduced-space branch and bound (BB) algorithm. We designed several approaches to construct lower and upper bounds at each node in the BB scheme, including a scenario-grouping-based Lagrangian decomposition approach. One key advantage of this reduced-space algorithm is that it only needs to perform branching on the centers of clusters to guarantee convergence, and the size of centers is independent of the number of data samples. Moreover, the lower bounds can be computed by solving small-scale sample subproblems, and upper bounds can be obtained trivially. These two properties make our algorithm easy to parallelize and scalable to datasets with up to 200,000 samples for finding a global $\epsilon$-optimal solution of the MSSC task. We performed numerical experiments on both synthetic and real-world datasets and compared our proposed algorithms with off-the-shelf global optimal solvers and classical local optimal algorithms. The results reveal the strong performance and scalability of our algorithm.' volume: 139 URL: https://proceedings.mlr.press/v139/hua21a.html PDF: http://proceedings.mlr.press/v139/hua21a/hua21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hua21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kaixun family: Hua - given: Mingfei family: Shi - given: Yankai family: Cao editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4391-4401 id: hua21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4391 lastpage: 4401 published: 2021-07-01 00:00:00 +0000 - title: 'On Recovering from Modeling Errors Using Testing Bayesian Networks' abstract: 'We consider the problem of supervised learning with Bayesian Networks when the used dependency structure is incomplete due to missing edges or missing variable states. These modeling errors induce independence constraints on the learned model that may not hold in the true, data-generating distribution. We provide a unified treatment of these modeling errors as instances of state-space abstractions. We then identify a class of Bayesian Networks and queries which allow one to fully recover from such modeling errors if one can choose Conditional Probability Tables (CPTs) dynamically based on evidence. 
We show theoretically that the recently proposed Testing Bayesian Networks (TBNs), which can be trained by compiling them into Testing Arithmetic Circuits (TACs), provide a promising construct for emulating this CPT selection mechanism. Finally, we present empirical results that illustrate the promise of TBNs as a tool for recovering from certain modeling errors in the context of supervised learning.' volume: 139 URL: https://proceedings.mlr.press/v139/huang21a.html PDF: http://proceedings.mlr.press/v139/huang21a/huang21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-huang21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Haiying family: Huang - given: Adnan family: Darwiche editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4402-4411 id: huang21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4402 lastpage: 4411 published: 2021-07-01 00:00:00 +0000 - title: 'A Novel Sequential Coreset Method for Gradient Descent Algorithms' abstract: 'A wide range of optimization problems arising in machine learning can be solved by gradient descent algorithms, and a central question in this area is how to efficiently compress a large-scale dataset so as to reduce the computational complexity. Coreset is a popular data compression technique that has been extensively studied before. However, most of existing coreset methods are problem-dependent and cannot be used as a general tool for a broader range of applications. A key obstacle is that they often rely on the pseudo-dimension and total sensitivity bound that can be very high or hard to obtain. In this paper, based on the “locality” property of gradient descent algorithms, we propose a new framework, termed “sequential coreset”, which effectively avoids these obstacles. Moreover, our method is particularly suitable for sparse optimization whence the coreset size can be further reduced to be only poly-logarithmically dependent on the dimension. In practice, the experimental results suggest that our method can save a large amount of running time compared with the baseline algorithms.' volume: 139 URL: https://proceedings.mlr.press/v139/huang21b.html PDF: http://proceedings.mlr.press/v139/huang21b/huang21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-huang21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jiawei family: Huang - given: Ruomin family: Huang - given: Wenjie family: Liu - given: Nikolaos family: Freris - given: Hu family: Ding editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4412-4422 id: huang21b issued: date-parts: - 2021 - 7 - 1 firstpage: 4412 lastpage: 4422 published: 2021-07-01 00:00:00 +0000 - title: 'FL-NTK: A Neural Tangent Kernel-based Framework for Federated Learning Analysis' abstract: 'Federated Learning (FL) is an emerging learning scheme that allows different distributed clients to train deep neural networks together without data sharing. Neural networks have become popular due to their unprecedented success. To the best of our knowledge, the theoretical guarantees of FL concerning neural networks with explicit forms and multi-step updates are unexplored. 
Nevertheless, training analysis of neural networks in FL is non-trivial for two reasons: first, the objective loss function we are optimizing is non-smooth and non-convex, and second, we are not even updating in the gradient direction. Existing convergence results for gradient descent-based methods heavily rely on the fact that the gradient direction is used for updating. The current paper presents a new class of convergence analysis for FL, Federated Neural Tangent Kernel (FL-NTK), which corresponds to overparameterized ReLU neural networks trained by gradient descent in FL and is inspired by the analysis in Neural Tangent Kernel (NTK). Theoretically, FL-NTK converges to a global-optimal solution at a linear rate with properly tuned learning parameters. Furthermore, with proper distributional assumptions, FL-NTK can also achieve good generalization. The proposed theoretical analysis scheme can be generalized to more complex neural networks.' volume: 139 URL: https://proceedings.mlr.press/v139/huang21c.html PDF: http://proceedings.mlr.press/v139/huang21c/huang21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-huang21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Baihe family: Huang - given: Xiaoxiao family: Li - given: Zhao family: Song - given: Xin family: Yang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4423-4434 id: huang21c issued: date-parts: - 2021 - 7 - 1 firstpage: 4423 lastpage: 4434 published: 2021-07-01 00:00:00 +0000 - title: 'STRODE: Stochastic Boundary Ordinary Differential Equation' abstract: 'Perception of time from sequentially acquired sensory inputs is rooted in everyday behaviors of individual organisms. Yet, most algorithms for time-series modeling fail to learn the dynamics of random event timings directly from visual or audio inputs, requiring timing annotations during training that are usually unavailable for real-world applications. For instance, neuroscience perspectives on postdiction imply that there exist variable temporal ranges within which the incoming sensory inputs can affect the earlier perception, but such temporal ranges are mostly unannotated for real applications such as automatic speech recognition (ASR). In this paper, we present a probabilistic ordinary differential equation (ODE), called STochastic boundaRy ODE (STRODE), that learns both the timings and the dynamics of time series data without requiring any timing annotations during training. STRODE allows the use of differential equations to sample from the posterior point processes efficiently and analytically. We further provide theoretical guarantees on the learning of STRODE. Our empirical results show that our approach successfully infers event timings of time series data. Our method achieves competitive or superior performance compared to existing state-of-the-art methods on both synthetic and real-world datasets.' 
volume: 139 URL: https://proceedings.mlr.press/v139/huang21d.html PDF: http://proceedings.mlr.press/v139/huang21d/huang21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-huang21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hengguan family: Huang - given: Hongfu family: Liu - given: Hao family: Wang - given: Chang family: Xiao - given: Ye family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4435-4445 id: huang21d issued: date-parts: - 2021 - 7 - 1 firstpage: 4435 lastpage: 4445 published: 2021-07-01 00:00:00 +0000 - title: 'A Riemannian Block Coordinate Descent Method for Computing the Projection Robust Wasserstein Distance' abstract: 'The Wasserstein distance has become increasingly important in machine learning and deep learning. Despite its popularity, the Wasserstein distance is hard to approximate because of the curse of dimensionality. A recently proposed approach to alleviate the curse of dimensionality is to project the sampled data from the high-dimensional probability distribution onto a lower-dimensional subspace, and then compute the Wasserstein distance between the projected data. However, this approach requires solving a max-min problem over the Stiefel manifold, which is very challenging in practice. In this paper, we propose a Riemannian block coordinate descent (RBCD) method to solve this problem, which is based on a novel reformulation of the regularized max-min problem over the Stiefel manifold. We show that the complexity of arithmetic operations for RBCD to obtain an $\epsilon$-stationary point is $O(\epsilon^{-3})$, which is significantly better than the complexity of existing methods. Numerical results on both synthetic and real datasets demonstrate that our method is more efficient than existing methods, especially when the number of samples is very large.' volume: 139 URL: https://proceedings.mlr.press/v139/huang21e.html PDF: http://proceedings.mlr.press/v139/huang21e/huang21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-huang21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Minhui family: Huang - given: Shiqian family: Ma - given: Lifeng family: Lai editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4446-4455 id: huang21e issued: date-parts: - 2021 - 7 - 1 firstpage: 4446 lastpage: 4455 published: 2021-07-01 00:00:00 +0000 - title: 'Projection Robust Wasserstein Barycenters' abstract: 'Collecting and aggregating information from several probability measures or histograms is a fundamental task in machine learning. One of the popular solution methods for this task is to compute the barycenter of the probability measures under the Wasserstein metric. However, approximating the Wasserstein barycenter is numerically challenging because of the curse of dimensionality. This paper proposes the projection robust Wasserstein barycenter (PRWB) that has the potential to mitigate the curse of dimensionality, and a relaxed PRWB (RPRWB) model that is computationally more tractable. By combining the iterative Bregman projection algorithm and Riemannian optimization, we propose two algorithms for computing the RPRWB, which is a max-min problem over the Stiefel manifold. 
The complexity of arithmetic operations of the proposed algorithms for obtaining an $\epsilon$-stationary solution is analyzed. We incorporate the RPRWB into a discrete distribution clustering algorithm, and the numerical results on real text datasets confirm that our RPRWB model helps improve the clustering performance significantly.' volume: 139 URL: https://proceedings.mlr.press/v139/huang21f.html PDF: http://proceedings.mlr.press/v139/huang21f/huang21f.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-huang21f.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Minhui family: Huang - given: Shiqian family: Ma - given: Lifeng family: Lai editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4456-4465 id: huang21f issued: date-parts: - 2021 - 7 - 1 firstpage: 4456 lastpage: 4465 published: 2021-07-01 00:00:00 +0000 - title: 'Accurate Post Training Quantization With Small Calibration Sets' abstract: 'Lately, post-training quantization methods have gained considerable attention, as they are simple to use, and require only a small unlabeled calibration set. This small dataset cannot be used to fine-tune the model without significant over-fitting. Instead, these methods only use the calibration set to set the activations’ dynamic ranges. However, such methods have always resulted in significant accuracy degradation when used below 8 bits (except on small datasets). Here we aim to break the 8-bit barrier. To this end, we minimize the quantization errors of each layer or block separately by optimizing its parameters over the calibration set. We empirically demonstrate that this approach is: (1) much less susceptible to over-fitting than the standard fine-tuning approaches, and can be used even on a very small calibration set; and (2) more powerful than previous methods, which only set the activations’ dynamic ranges. We suggest two flavors of our method, parallel and sequential, aimed at fixed and flexible bit-width allocation, respectively. For the latter, we demonstrate how to optimally allocate the bit-widths for each layer, while constraining accuracy degradation or model compression by proposing a novel integer programming formulation. Finally, we suggest model global statistics tuning to correct biases introduced during quantization. Together, these methods yield state-of-the-art results for both vision and text models. For instance, on ResNet50, we obtain less than 1% accuracy degradation with 4-bit weights and activations in all layers but the first and last. The suggested methods are two orders of magnitude faster than the traditional Quantization-Aware Training approach used for lower than 8-bit quantization. We open-sourced our code at https://github.com/papers-submission/CalibTIP.' 
volume: 139 URL: https://proceedings.mlr.press/v139/hubara21a.html PDF: http://proceedings.mlr.press/v139/hubara21a/hubara21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hubara21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Itay family: Hubara - given: Yury family: Nahshan - given: Yair family: Hanani - given: Ron family: Banner - given: Daniel family: Soudry editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4466-4475 id: hubara21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4466 lastpage: 4475 published: 2021-07-01 00:00:00 +0000 - title: 'Learning and Planning in Complex Action Spaces' abstract: 'Many important real-world problems have action spaces that are high-dimensional, continuous or both, making full enumeration of all possible actions infeasible. Instead, only small subsets of actions can be sampled for the purpose of policy evaluation and improvement. In this paper, we propose a general framework to reason in a principled way about policy evaluation and improvement over such sampled action subsets. This sample-based policy iteration framework can in principle be applied to any reinforcement learning algorithm based upon policy iteration. Concretely, we propose Sampled MuZero, an extension of the MuZero algorithm that is able to learn in domains with arbitrarily complex action spaces by planning over sampled actions. We demonstrate this approach on the classical board game of Go and on two continuous control benchmark domains: DeepMind Control Suite and Real-World RL Suite.' volume: 139 URL: https://proceedings.mlr.press/v139/hubert21a.html PDF: http://proceedings.mlr.press/v139/hubert21a/hubert21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hubert21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Thomas family: Hubert - given: Julian family: Schrittwieser - given: Ioannis family: Antonoglou - given: Mohammadamin family: Barekatain - given: Simon family: Schmitt - given: David family: Silver editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4476-4486 id: hubert21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4476 lastpage: 4486 published: 2021-07-01 00:00:00 +0000 - title: 'Generative Adversarial Transformers' abstract: 'We introduce the GANsformer, a novel and efficient type of transformer, and explore it for the task of visual generative modeling. The network employs a bipartite structure that enables long-range interactions across the image while maintaining linear computational efficiency, and can readily scale to high-resolution synthesis. It iteratively propagates information from a set of latent variables to the evolving visual features and vice versa, to support the refinement of each in light of the other and encourage the emergence of compositional representations of objects and scenes. In contrast to the classic transformer architecture, it utilizes multiplicative integration that allows flexible region-based modulation, and can thus be seen as a generalization of the successful StyleGAN network. 
We demonstrate the model’s strength and robustness through a careful evaluation over a range of datasets, from simulated multi-object environments to rich real-world indoor and outdoor scenes, showing it achieves state-of-the-art results in terms of image quality and diversity, while enjoying fast learning and better data-efficiency. Further qualitative and quantitative experiments offer us an insight into the model’s inner workings, revealing improved interpretability and stronger disentanglement, and illustrating the benefits and efficacy of our approach. An implementation of the model is available at https://github.com/dorarad/gansformer.' volume: 139 URL: https://proceedings.mlr.press/v139/hudson21a.html PDF: http://proceedings.mlr.press/v139/hudson21a/hudson21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hudson21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Drew A family: Hudson - given: Larry family: Zitnick editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4487-4499 id: hudson21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4487 lastpage: 4499 published: 2021-07-01 00:00:00 +0000 - title: 'Neural Pharmacodynamic State Space Modeling' abstract: 'Modeling the time-series of high-dimensional, longitudinal data is important for predicting patient disease progression. However, existing neural network based approaches that learn representations of patient state, while very flexible, are susceptible to overfitting. We propose a deep generative model that makes use of a novel attention-based neural architecture inspired by the physics of how treatments affect disease state. The result is a scalable and accurate model of high-dimensional patient biomarkers as they vary over time. Our proposed model yields significant improvements in generalization and, on real-world clinical data, provides interpretable insights into the dynamics of cancer progression.' volume: 139 URL: https://proceedings.mlr.press/v139/hussain21a.html PDF: http://proceedings.mlr.press/v139/hussain21a/hussain21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hussain21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zeshan M family: Hussain - given: Rahul G. family: Krishnan - given: David family: Sontag editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4500-4510 id: hussain21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4500 lastpage: 4510 published: 2021-07-01 00:00:00 +0000 - title: 'Hyperparameter Selection for Imitation Learning' abstract: 'We address the issue of tuning hyperparameters (HPs) for imitation learning algorithms in the context of continuous control, when the underlying reward function of the demonstrating expert cannot be observed at any time. The vast literature in imitation learning mostly considers this reward function to be available for HP selection, but this is not a realistic setting. Indeed, were this reward function available, it could be used directly for policy training, and imitation would not be necessary. To tackle this mostly ignored problem, we propose a number of possible proxies to the external reward. 
We evaluate them in an extensive empirical study (more than 10’000 agents across 9 environments) and make practical recommendations for selecting HPs. Our results show that while imitation learning algorithms are sensitive to HP choices, it is often possible to select good enough HPs through a proxy to the reward function.' volume: 139 URL: https://proceedings.mlr.press/v139/hussenot21a.html PDF: http://proceedings.mlr.press/v139/hussenot21a/hussenot21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hussenot21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Léonard family: Hussenot - given: Marcin family: Andrychowicz - given: Damien family: Vincent - given: Robert family: Dadashi - given: Anton family: Raichuk - given: Sabela family: Ramos - given: Nikola family: Momchev - given: Sertan family: Girgin - given: Raphael family: Marinier - given: Lukasz family: Stafiniak - given: Manu family: Orsini - given: Olivier family: Bachem - given: Matthieu family: Geist - given: Olivier family: Pietquin editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4511-4522 id: hussenot21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4511 lastpage: 4522 published: 2021-07-01 00:00:00 +0000 - title: 'Pareto GAN: Extending the Representational Power of GANs to Heavy-Tailed Distributions' abstract: 'Generative adversarial networks (GANs) are often billed as "universal distribution learners", but precisely what distributions they can represent and learn is still an open question. Heavy-tailed distributions are prevalent in many different domains such as financial risk-assessment, physics, and epidemiology. We observe that existing GAN architectures do a poor job of matching the asymptotic behavior of heavy-tailed distributions, a problem that we show stems from their construction. Additionally, common loss functions produce unstable or near-zero gradients when faced with the infinite moments and large distances between outlier points characteristic of heavy-tailed distributions. We address these problems with the Pareto GAN. A Pareto GAN leverages extreme value theory and the functional properties of neural networks to learn a distribution that matches the asymptotic behavior of the marginal distributions of the features. We identify issues with standard loss functions and propose the use of alternative metric spaces that enable stable and efficient learning. Finally, we evaluate our proposed approach on a variety of heavy-tailed datasets.' volume: 139 URL: https://proceedings.mlr.press/v139/huster21a.html PDF: http://proceedings.mlr.press/v139/huster21a/huster21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-huster21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Todd family: Huster - given: Jeremy family: Cohen - given: Zinan family: Lin - given: Kevin family: Chan - given: Charles family: Kamhoua - given: Nandi O. 
family: Leslie - given: Cho-Yu Jason family: Chiang - given: Vyas family: Sekar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4523-4532 id: huster21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4523 lastpage: 4532 published: 2021-07-01 00:00:00 +0000 - title: 'LieTransformer: Equivariant Self-Attention for Lie Groups' abstract: 'Group equivariant neural networks are used as building blocks of group invariant neural networks, which have been shown to improve generalisation performance and data efficiency through principled parameter sharing. Such works have mostly focused on group equivariant convolutions, building on the result that group equivariant linear maps are necessarily convolutions. In this work, we extend the scope of the literature to self-attention, that is emerging as a prominent building block of deep learning models. We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups. We demonstrate the generality of our approach by showing experimental results that are competitive to baseline methods on a wide range of tasks: shape counting on point clouds, molecular property regression and modelling particle trajectories under Hamiltonian dynamics.' volume: 139 URL: https://proceedings.mlr.press/v139/hutchinson21a.html PDF: http://proceedings.mlr.press/v139/hutchinson21a/hutchinson21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-hutchinson21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Michael J family: Hutchinson - given: Charline Le family: Lan - given: Sheheryar family: Zaidi - given: Emilien family: Dupont - given: Yee Whye family: Teh - given: Hyunjik family: Kim editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4533-4543 id: hutchinson21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4533 lastpage: 4543 published: 2021-07-01 00:00:00 +0000 - title: 'Crowdsourcing via Annotator Co-occurrence Imputation and Provable Symmetric Nonnegative Matrix Factorization' abstract: 'Unsupervised learning of the Dawid-Skene (D&S) model from noisy, incomplete and crowdsourced annotations has been a long-standing challenge, and is a critical step towards reliably labeling massive data. A recent work takes a coupled nonnegative matrix factorization (CNMF) perspective, and shows appealing features: It ensures the identifiability of the D&S model and enjoys low sample complexity, as only the estimates of the co-occurrences of annotator labels are involved. However, the identifiability holds only when certain somewhat restrictive conditions are met in the context of crowdsourcing. Optimizing the CNMF criterion is also costly—and convergence assurances are elusive. This work recasts the pairwise co-occurrence based D&S model learning problem as a symmetric NMF (SymNMF) problem—which offers enhanced identifiability relative to CNMF. In practice, the SymNMF model is often (largely) incomplete, due to the lack of co-labeled items by some annotators. Two lightweight algorithms are proposed for co-occurrence imputation. Then, a low-complexity shifted rectified linear unit (ReLU)-empowered SymNMF algorithm is proposed to identify the D&S model. Various performance characterizations (e.g., missing co-occurrence recoverability, stability, and convergence) and evaluations are also presented.' 
volume: 139 URL: https://proceedings.mlr.press/v139/ibrahim21a.html PDF: http://proceedings.mlr.press/v139/ibrahim21a/ibrahim21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ibrahim21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shahana family: Ibrahim - given: Xiao family: Fu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4544-4554 id: ibrahim21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4544 lastpage: 4554 published: 2021-07-01 00:00:00 +0000 - title: 'Selecting Data Augmentation for Simulating Interventions' abstract: 'Machine learning models trained with purely observational data and the principle of empirical risk minimization (Vapnik 1992) can fail to generalize to unseen domains. In this paper, we focus on the case where the problem arises through spurious correlation between the observed domains and the actual task labels. We find that many domain generalization methods do not explicitly take this spurious correlation into account. Instead, especially in more application-oriented research areas like medical imaging or robotics, data augmentation techniques that are based on heuristics are used to learn domain invariant features. To bridge the gap between theory and practice, we develop a causal perspective on the problem of domain generalization. We argue that causal concepts can be used to explain the success of data augmentation by describing how they can weaken the spurious correlation between the observed domains and the task labels. We demonstrate that data augmentation can serve as a tool for simulating interventional data. We use these theoretical insights to derive a simple algorithm that is able to select data augmentation techniques that will lead to better domain generalization.' volume: 139 URL: https://proceedings.mlr.press/v139/ilse21a.html PDF: http://proceedings.mlr.press/v139/ilse21a/ilse21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ilse21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Maximilian family: Ilse - given: Jakub M family: Tomczak - given: Patrick family: Forré editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4555-4562 id: ilse21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4555 lastpage: 4562 published: 2021-07-01 00:00:00 +0000 - title: 'Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning' abstract: 'Marginal-likelihood based model-selection, even though promising, is rarely used in deep learning due to estimation difficulties. Instead, most approaches rely on validation data, which may not be readily available. In this work, we present a scalable marginal-likelihood estimation method to select both hyperparameters and network architectures, based on the training data alone. Some hyperparameters can be estimated online during training, simplifying the procedure. Our marginal-likelihood estimate is based on Laplace’s method and Gauss-Newton approximations to the Hessian, and it outperforms cross-validation and manual tuning on standard regression and image classification datasets, especially in terms of calibration and out-of-distribution detection. 
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable (e.g., in nonstationary settings).' volume: 139 URL: https://proceedings.mlr.press/v139/immer21a.html PDF: http://proceedings.mlr.press/v139/immer21a/immer21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-immer21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Alexander family: Immer - given: Matthias family: Bauer - given: Vincent family: Fortuin - given: Gunnar family: Rätsch - given: Mohammad Emtiyaz family: Khan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4563-4573 id: immer21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4563 lastpage: 4573 published: 2021-07-01 00:00:00 +0000 - title: 'Active Learning for Distributionally Robust Level-Set Estimation' abstract: 'Many cases exist in which a black-box function $f$ with high evaluation cost depends on two types of variables $\bm x$ and $\bm w$, where $\bm x$ is a controllable \emph{design} variable and $\bm w$ are uncontrollable \emph{environmental} variables that have random variation following a certain distribution $P$. In such cases, an important task is to find the range of design variables $\bm x$ such that the function $f(\bm x, \bm w)$ has the desired properties by incorporating the random variation of the environmental variables $\bm w$. A natural measure of robustness is the probability that $f(\bm x, \bm w)$ exceeds a given threshold $h$, which is known as the \emph{probability threshold robustness} (PTR) measure in the literature on robust optimization. However, this robustness measure cannot be correctly evaluated when the distribution $P$ is unknown. In this study, we addressed this problem by considering the \textit{distributionally robust PTR} (DRPTR) measure, which considers the worst-case PTR within given candidate distributions. Specifically, we studied the problem of efficiently identifying a reliable set $H$, which is defined as a region in which the DRPTR measure exceeds a certain desired probability $\alpha$, which can be interpreted as a level set estimation (LSE) problem for DRPTR. We propose a theoretically grounded and computationally efficient active learning method for this problem. We show that the proposed method has theoretical guarantees on convergence and accuracy, and confirmed through numerical experiments that the proposed method outperforms existing methods.' volume: 139 URL: https://proceedings.mlr.press/v139/inatsu21a.html PDF: http://proceedings.mlr.press/v139/inatsu21a/inatsu21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-inatsu21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yu family: Inatsu - given: Shogo family: Iwazaki - given: Ichiro family: Takeuchi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4574-4584 id: inatsu21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4574 lastpage: 4584 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Randomly Perturbed Structured Predictors for Direct Loss Minimization' abstract: 'Direct loss minimization is a popular approach for learning predictors over structured label spaces. 
This approach is computationally appealing as it replaces integration with optimization and allows gradients to be propagated in a deep net using loss-perturbed prediction. Recently, this technique was extended to generative models by introducing a randomized predictor that samples a structure from a randomly perturbed score function. In this work, we interpolate between these techniques by learning the variance of randomized structured predictors as well as their mean, in order to balance between the learned score function and the randomized noise. We demonstrate empirically the effectiveness of learning this balance in structured discrete spaces.' volume: 139 URL: https://proceedings.mlr.press/v139/indelman21a.html PDF: http://proceedings.mlr.press/v139/indelman21a/indelman21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-indelman21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hedda Cohen family: Indelman - given: Tamir family: Hazan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4585-4595 id: indelman21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4585 lastpage: 4595 published: 2021-07-01 00:00:00 +0000 - title: 'Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning' abstract: 'Multi-agent settings in the real world often involve tasks with varying types and quantities of agents and non-agent entities; however, common patterns of behavior often emerge among these agents/entities. Our method aims to leverage these commonalities by asking the question: “What is the expected utility of each agent when only considering a randomly selected sub-group of its observed entities?” By posing this counterfactual question, we can recognize state-action trajectories within sub-groups of entities that we may have encountered in another task and use what we learned in that task to inform our prediction in the current one. We then reconstruct a prediction of the full returns as a combination of factors considering these disjoint groups of entities and train this “randomly factorized” value function as an auxiliary objective for value-based multi-agent reinforcement learning. By doing so, our model can recognize and leverage similarities across tasks to improve learning efficiency in a multi-task setting. Our approach, Randomized Entity-wise Factorization for Imagined Learning (REFIL), outperforms all strong baselines by a significant margin in challenging multi-task StarCraft micromanagement settings.' 
volume: 139 URL: https://proceedings.mlr.press/v139/iqbal21a.html PDF: http://proceedings.mlr.press/v139/iqbal21a/iqbal21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-iqbal21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shariq family: Iqbal - given: Christian A Schroeder family: De Witt - given: Bei family: Peng - given: Wendelin family: Boehmer - given: Shimon family: Whiteson - given: Fei family: Sha editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4596-4606 id: iqbal21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4596 lastpage: 4606 published: 2021-07-01 00:00:00 +0000 - title: 'Randomized Exploration in Reinforcement Learning with General Value Function Approximation' abstract: 'We propose a model-free reinforcement learning algorithm inspired by the popular randomized least squares value iteration (RLSVI) algorithm as well as the optimism principle. Unlike existing upper-confidence-bound (UCB) based approaches, which are often computationally intractable, our algorithm drives exploration by simply perturbing the training data with judiciously chosen i.i.d. scalar noises. To attain optimistic value function estimation without resorting to a UCB-style bonus, we introduce an optimistic reward sampling procedure. When the value functions can be represented by a function class $\mathcal{F}$, our algorithm achieves a worst-case regret bound of $\tilde{O}(\mathrm{poly}(d_EH)\sqrt{T})$ where $T$ is the time elapsed, $H$ is the planning horizon and $d_E$ is the \emph{eluder dimension} of $\mathcal{F}$. In the linear setting, our algorithm reduces to LSVI-PHE, a variant of RLSVI, that enjoys an $\tilde{\mathcal{O}}(\sqrt{d^3H^3T})$ regret. We complement the theory with an empirical evaluation across known difficult exploration tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/ishfaq21a.html PDF: http://proceedings.mlr.press/v139/ishfaq21a/ishfaq21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ishfaq21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Haque family: Ishfaq - given: Qiwen family: Cui - given: Viet family: Nguyen - given: Alex family: Ayoub - given: Zhuoran family: Yang - given: Zhaoran family: Wang - given: Doina family: Precup - given: Lin family: Yang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4607-4616 id: ishfaq21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4607 lastpage: 4616 published: 2021-07-01 00:00:00 +0000 - title: 'Distributed Second Order Methods with Fast Rates and Compressed Communication' abstract: 'We develop several new communication-efficient second-order methods for distributed optimization. Our first method, NEWTON-STAR, is a variant of Newton’s method from which it inherits its fast local quadratic rate. However, unlike Newton’s method, NEWTON-STAR enjoys the same per iteration communication cost as gradient descent. While this method is impractical as it relies on the use of certain unknown parameters characterizing the Hessian of the objective function at the optimum, it serves as the starting point which enables us to design practical variants thereof with strong theoretical guarantees. 
In particular, we design a stochastic sparsification strategy for learning the unknown parameters in an iterative fashion in a communication efficient manner. Applying this strategy to NEWTON-STAR leads to our next method, NEWTON-LEARN, for which we prove local linear and superlinear rates independent of the condition number. When applicable, this method can have dramatically superior convergence behavior when compared to state-of-the-art methods. Finally, we develop a globalization strategy using cubic regularization which leads to our next method, CUBIC-NEWTON-LEARN, for which we prove global sublinear and linear convergence rates, and a fast superlinear rate. Our results are supported with experimental results on real datasets, and show several orders of magnitude improvement on baseline and state-of-the-art methods in terms of communication complexity.' volume: 139 URL: https://proceedings.mlr.press/v139/islamov21a.html PDF: http://proceedings.mlr.press/v139/islamov21a/islamov21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-islamov21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Rustem family: Islamov - given: Xun family: Qian - given: Peter family: Richtarik editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4617-4628 id: islamov21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4617 lastpage: 4628 published: 2021-07-01 00:00:00 +0000 - title: 'What Are Bayesian Neural Network Posteriors Really Like?' abstract: 'The posterior over Bayesian neural network (BNN) parameters is extremely high-dimensional and non-convex. For computational reasons, researchers approximate this posterior using inexpensive mini-batch methods such as mean-field variational inference or stochastic-gradient Markov chain Monte Carlo (SGMCMC). To investigate foundational questions in Bayesian deep learning, we instead use full batch Hamiltonian Monte Carlo (HMC) on modern architectures. We show that (1) BNNs can achieve significant performance gains over standard training and deep ensembles; (2) a single long HMC chain can provide a comparable representation of the posterior to multiple shorter chains; (3) in contrast to recent studies, we find posterior tempering is not needed for near-optimal performance, with little evidence for a “cold posterior” effect, which we show is largely an artifact of data augmentation; (4) BMA performance is robust to the choice of prior scale, and relatively similar for diagonal Gaussian, mixture of Gaussian, and logistic priors; (5) Bayesian neural networks show surprisingly poor generalization under domain shift; (6) while cheaper alternatives such as deep ensembles and SGMCMC can provide good generalization, their predictive distributions are distinct from HMC. Notably, deep ensemble predictive distributions are similarly close to HMC as standard SGLD, and closer than standard variational inference.' 
volume: 139 URL: https://proceedings.mlr.press/v139/izmailov21a.html PDF: http://proceedings.mlr.press/v139/izmailov21a/izmailov21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-izmailov21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Pavel family: Izmailov - given: Sharad family: Vikram - given: Matthew D family: Hoffman - given: Andrew Gordon family: Wilson editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4629-4640 id: izmailov21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4629 lastpage: 4640 published: 2021-07-01 00:00:00 +0000 - title: 'How to Learn when Data Reacts to Your Model: Performative Gradient Descent' abstract: 'Performative distribution shift captures the setting where the choice of which ML model is deployed changes the data distribution. For example, a bank which uses the number of open credit lines to determine a customer’s risk of default on a loan may induce customers to open more credit lines in order to improve their chances of being approved. Because of the interactions between the model and data distribution, finding the optimal model parameters is challenging. Works in this area have focused on finding stable points, which can be far from optimal. Here we introduce \emph{performative gradient descent} (PerfGD), an algorithm for computing performatively optimal points. Under regularity assumptions on the performative loss, PerfGD is the first algorithm which provably converges to an optimal point. PerfGD explicitly captures how changes in the model affect the data distribution and is simple to use. We support our findings with theory and experiments.' volume: 139 URL: https://proceedings.mlr.press/v139/izzo21a.html PDF: http://proceedings.mlr.press/v139/izzo21a/izzo21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-izzo21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zachary family: Izzo - given: Lexing family: Ying - given: James family: Zou editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4641-4650 id: izzo21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4641 lastpage: 4650 published: 2021-07-01 00:00:00 +0000 - title: 'Perceiver: General Perception with Iterative Attention' abstract: 'Biological systems understand the world by simultaneously processing high-dimensional inputs from modalities as diverse as vision, audition, touch, proprioception, etc. The perception models used in deep learning on the other hand are designed for individual modalities, often relying on domain-specific assumptions such as the local grid structures exploited by virtually all existing vision models. These priors introduce helpful inductive biases, but also lock models to individual modalities. In this paper we introduce the Perceiver – a model that builds upon Transformers and hence makes few architectural assumptions about the relationship between its inputs, but that also scales to hundreds of thousands of inputs, like ConvNets. The model leverages an asymmetric attention mechanism to iteratively distill inputs into a tight latent bottleneck, allowing it to scale to handle very large inputs. 
We show that this architecture is competitive with or outperforms strong, specialized models on classification tasks across various modalities: images, point clouds, audio, video and video+audio. The Perceiver obtains performance comparable to ResNet-50 and ViT on ImageNet without 2D convolutions by directly attending to 50,000 pixels. It is also competitive in all modalities in AudioSet.' volume: 139 URL: https://proceedings.mlr.press/v139/jaegle21a.html PDF: http://proceedings.mlr.press/v139/jaegle21a/jaegle21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jaegle21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Andrew family: Jaegle - given: Felix family: Gimeno - given: Andy family: Brock - given: Oriol family: Vinyals - given: Andrew family: Zisserman - given: Joao family: Carreira editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4651-4664 id: jaegle21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4651 lastpage: 4664 published: 2021-07-01 00:00:00 +0000 - title: 'Imitation by Predicting Observations' abstract: 'Imitation learning enables agents to reuse and adapt the hard-won expertise of others, offering a solution to several key challenges in learning behavior. Although it is easy to observe behavior in the real world, the underlying actions may not be accessible. We present a new method for imitation solely from observations that achieves comparable performance to experts on challenging continuous control tasks while also exhibiting robustness in the presence of observations unrelated to the task. Our method, which we call FORM (for "Future Observation Reward Model"), is derived from an inverse RL objective and imitates using a model of expert behavior learned by generative modelling of the expert’s observations, without needing ground truth actions. We show that FORM performs comparably to a strong baseline IRL method (GAIL) on the DeepMind Control Suite benchmark, while outperforming GAIL in the presence of task-irrelevant features.' volume: 139 URL: https://proceedings.mlr.press/v139/jaegle21b.html PDF: http://proceedings.mlr.press/v139/jaegle21b/jaegle21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jaegle21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Andrew family: Jaegle - given: Yury family: Sulsky - given: Arun family: Ahuja - given: Jake family: Bruce - given: Rob family: Fergus - given: Greg family: Wayne editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4665-4676 id: jaegle21b issued: date-parts: - 2021 - 7 - 1 firstpage: 4665 lastpage: 4676 published: 2021-07-01 00:00:00 +0000 - title: 'Local Correlation Clustering with Asymmetric Classification Errors' abstract: 'In the Correlation Clustering problem, we are given a complete weighted graph $G$ with its edges labeled as “similar” and “dissimilar” by a noisy binary classifier. For a clustering $\mathcal{C}$ of graph $G$, a similar edge is in disagreement with $\mathcal{C}$ if its endpoints belong to distinct clusters; and a dissimilar edge is in disagreement with $\mathcal{C}$ if its endpoints belong to the same cluster. 
The disagreements vector, $\mathrm{disagree}$, is a vector indexed by the vertices of $G$ such that the $v$-th coordinate $\mathrm{disagree}_v$ equals the weight of all disagreeing edges incident on $v$. The goal is to produce a clustering that minimizes the $\ell_p$ norm of the disagreements vector for $p\geq 1$. We study the $\ell_p$ objective in Correlation Clustering under the following assumption: Every similar edge has weight in $[\alpha\mathbf{w},\mathbf{w}]$ and every dissimilar edge has weight at least $\alpha\mathbf{w}$ (where $\alpha \leq 1$ and $\mathbf{w}>0$ is a scaling parameter). We give an $O\left((1/\alpha)^{1/2-1/(2p)}\cdot \log(1/\alpha)\right)$ approximation algorithm for this problem. Furthermore, we show an almost matching convex programming integrality gap.' volume: 139 URL: https://proceedings.mlr.press/v139/jafarov21a.html PDF: http://proceedings.mlr.press/v139/jafarov21a/jafarov21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jafarov21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jafar family: Jafarov - given: Sanchit family: Kalhan - given: Konstantin family: Makarychev - given: Yury family: Makarychev editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4677-4686 id: jafarov21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4677 lastpage: 4686 published: 2021-07-01 00:00:00 +0000 - title: 'Alternative Microfoundations for Strategic Classification' abstract: 'When reasoning about strategic behavior in a machine learning context it is tempting to combine standard microfoundations of rational agents with the statistical decision theory underlying classification. In this work, we argue that a direct combination of these ingredients leads to brittle solution concepts of limited descriptive and prescriptive value. First, we show that rational agents with perfect information produce discontinuities in the aggregate response to a decision rule that we often do not observe empirically. Second, when any positive fraction of agents is not perfectly strategic, desirable stable points—where the classifier is optimal for the data it entails—no longer exist. Third, optimal decision rules under standard microfoundations maximize a measure of negative externality known as social burden within a broad class of assumptions about agent behavior. Recognizing these limitations we explore alternatives to standard microfoundations for binary classification. We describe desiderata that help navigate the space of possible assumptions about agent responses, and we then propose the noisy response model. Inspired by smoothed analysis and empirical observations, noisy response incorporates imperfection in the agent responses, which we show mitigates the limitations of standard microfoundations. Our model retains analytical tractability, leads to more robust insights about stable points, and imposes a lower social burden at optimality.' 
volume: 139 URL: https://proceedings.mlr.press/v139/jagadeesan21a.html PDF: http://proceedings.mlr.press/v139/jagadeesan21a/jagadeesan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jagadeesan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Meena family: Jagadeesan - given: Celestine family: Mendler-Dünner - given: Moritz family: Hardt editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4687-4697 id: jagadeesan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4687 lastpage: 4697 published: 2021-07-01 00:00:00 +0000 - title: 'Robust Density Estimation from Batches: The Best Things in Life are (Nearly) Free' abstract: 'In many applications data are collected in batches, some potentially biased, corrupt, or even adversarial. Learning algorithms for this setting have therefore garnered considerable recent attention. In particular, a sequence of works has shown that all approximately piecewise polynomial distributions—and in particular all Gaussian, Gaussian-mixture, log-concave, low-modal, and monotone-hazard distributions—can be learned robustly in polynomial time. However, these results left open the question, stated explicitly in (Chen et al., 2020), about the best possible sample complexity of such algorithms. We answer this question, showing that, perhaps surprisingly, up to logarithmic factors, the optimal sample complexity is the same as for genuine, non-adversarial data! To establish the result, we reduce robust learning of approximately piecewise polynomial distributions to robust learning of the probability of all subsets of size at most $k$ of a larger discrete domain, and learn these probabilities in optimal sample complexity linear in $k$ regardless of the domain size. In simulations, the algorithm runs very quickly and estimates distributions to essentially the accuracy achieved when all adversarial batches are removed. The results also imply the first polynomial-time sample-optimal algorithm for robust interval-based classification based on batched data.' volume: 139 URL: https://proceedings.mlr.press/v139/jain21a.html PDF: http://proceedings.mlr.press/v139/jain21a/jain21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jain21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ayush family: Jain - given: Alon family: Orlitsky editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4698-4708 id: jain21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4698 lastpage: 4708 published: 2021-07-01 00:00:00 +0000 - title: 'Instance-Optimal Compressed Sensing via Posterior Sampling' abstract: 'We characterize the measurement complexity of compressed sensing of signals drawn from a known prior distribution, even when the support of the prior is the entire space (rather than, say, sparse vectors). We show, for Gaussian measurements and \emph{any} prior distribution on the signal, that the posterior sampling estimator achieves near-optimal recovery guarantees. Moreover, this result is robust to model mismatch, as long as the distribution estimate (e.g., from an invertible generative model) is close to the true distribution in Wasserstein distance. 
We implement the posterior sampling estimator for deep generative priors using Langevin dynamics, and empirically find that it produces accurate estimates with more diversity than MAP.' volume: 139 URL: https://proceedings.mlr.press/v139/jalal21a.html PDF: http://proceedings.mlr.press/v139/jalal21a/jalal21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jalal21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ajil family: Jalal - given: Sushrut family: Karmalkar - given: Alex family: Dimakis - given: Eric family: Price editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4709-4720 id: jalal21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4709 lastpage: 4720 published: 2021-07-01 00:00:00 +0000 - title: 'Fairness for Image Generation with Uncertain Sensitive Attributes' abstract: 'This work tackles the issue of fairness in the context of generative procedures, such as image super-resolution, which entail different definitions from the standard classification setting. Moreover, while traditional group fairness definitions are typically defined with respect to specified protected groups – camouflaging the fact that these groupings are artificial and carry historical and political motivations – we emphasize that there are no ground truth identities. For instance, should South and East Asians be viewed as a single group or separate groups? Should we consider one race as a whole or further split by gender? Choosing which groups are valid and who belongs in them is an impossible dilemma and being “fair” with respect to Asians may require being “unfair” with respect to South Asians. This motivates the introduction of definitions that allow algorithms to be \emph{oblivious} to the relevant groupings. We define several intuitive notions of group fairness and study their incompatibilities and trade-offs. We show that the natural extension of demographic parity is strongly dependent on the grouping, and \emph{impossible} to achieve obliviously. On the other hand, the conceptually new definition we introduce, Conditional Proportional Representation, can be achieved obliviously through Posterior Sampling. Our experiments validate our theoretical results and achieve fair image reconstruction using state-of-the-art generative models.' volume: 139 URL: https://proceedings.mlr.press/v139/jalal21b.html PDF: http://proceedings.mlr.press/v139/jalal21b/jalal21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jalal21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ajil family: Jalal - given: Sushrut family: Karmalkar - given: Jessica family: Hoffmann - given: Alex family: Dimakis - given: Eric family: Price editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4721-4732 id: jalal21b issued: date-parts: - 2021 - 7 - 1 firstpage: 4721 lastpage: 4732 published: 2021-07-01 00:00:00 +0000 - title: 'Feature Clustering for Support Identification in Extreme Regions' abstract: 'Understanding the complex structure of multivariate extremes is a major challenge in various fields from portfolio monitoring and environmental risk management to insurance. 
In the framework of multivariate Extreme Value Theory, a common characterization of extremes’ dependence structure is the angular measure. It is a suitable measure to work in extreme regions as it provides meaningful insights concerning the subregions where extremes tend to concentrate their mass. The present paper develops a novel optimization-based approach to assess the dependence structure of extremes. This support identification scheme rewrites as estimating clusters of features which best capture the support of extremes. The dimension reduction technique we provide is applied to statistical learning tasks such as feature clustering and anomaly detection. Numerical experiments provide strong empirical evidence of the relevance of our approach.' volume: 139 URL: https://proceedings.mlr.press/v139/jalalzai21a.html PDF: http://proceedings.mlr.press/v139/jalalzai21a/jalalzai21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jalalzai21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hamid family: Jalalzai - given: Rémi family: Leluc editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4733-4743 id: jalalzai21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4733 lastpage: 4743 published: 2021-07-01 00:00:00 +0000 - title: 'Improved Regret Bounds of Bilinear Bandits using Action Space Analysis' abstract: 'We consider the bilinear bandit problem where the learner chooses a pair of arms, each from two different action spaces of dimension $d_1$ and $d_2$, respectively. The learner then receives a reward whose expectation is a bilinear function of the two chosen arms with an unknown matrix parameter $\Theta^*\in\mathbb{R}^{d_1 \times d_2}$ with rank $r$. Despite abundant applications such as drug discovery, the optimal regret rate is unknown for this problem, though it was conjectured to be $\tilde O(\sqrt{d_1d_2(d_1+d_2)r T})$ by Jun et al. (2019) where $\tilde O$ ignores polylogarithmic factors in $T$. In this paper, we make progress towards closing the gap between the upper and lower bound on the optimal regret. First, we reject the conjecture above by proposing algorithms that achieve the regret $\tilde O(\sqrt{d_1 d_2 (d_1+d_2) T})$ using the fact that the action space dimension $O(d_1+d_2)$ is significantly lower than the matrix parameter dimension $O(d_1 d_2)$. Second, we additionally devise an algorithm with better empirical performance than previous algorithms.' volume: 139 URL: https://proceedings.mlr.press/v139/jang21a.html PDF: http://proceedings.mlr.press/v139/jang21a/jang21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jang21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kyoungseok family: Jang - given: Kwang-Sung family: Jun - given: Se-Young family: Yun - given: Wanmo family: Kang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4744-4754 id: jang21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4744 lastpage: 4754 published: 2021-07-01 00:00:00 +0000 - title: 'Inverse Decision Modeling: Learning Interpretable Representations of Behavior' abstract: 'Decision analysis deals with modeling and enhancing decision processes. 
A principal challenge in improving behavior is in obtaining a transparent *description* of existing behavior in the first place. In this paper, we develop an expressive, unifying perspective on *inverse decision modeling*: a framework for learning parameterized representations of sequential decision behavior. First, we formalize the *forward* problem (as a normative standard), subsuming common classes of control behavior. Second, we use this to formalize the *inverse* problem (as a descriptive model), generalizing existing work on imitation/reward learning—while opening up a much broader class of research problems in behavior representation. Finally, we instantiate this approach with an example (*inverse bounded rational control*), illustrating how this structure enables learning (interpretable) representations of (bounded) rationality—while naturally capturing intuitive notions of suboptimal actions, biased beliefs, and imperfect knowledge of environments.' volume: 139 URL: https://proceedings.mlr.press/v139/jarrett21a.html PDF: http://proceedings.mlr.press/v139/jarrett21a/jarrett21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jarrett21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Daniel family: Jarrett - given: Alihan family: Hüyük - given: Mihaela family: Van Der Schaar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4755-4771 id: jarrett21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4755 lastpage: 4771 published: 2021-07-01 00:00:00 +0000 - title: 'Catastrophic Fisher Explosion: Early Phase Fisher Matrix Impacts Generalization' abstract: 'The early phase of training a deep neural network has a dramatic effect on the local curvature of the loss function. For instance, using a small learning rate does not guarantee stable optimization because the optimization trajectory has a tendency to steer towards regions of the loss surface with increasing local curvature. We ask whether this tendency is connected to the widely observed phenomenon that the choice of the learning rate strongly influences generalization. We first show that stochastic gradient descent (SGD) implicitly penalizes the trace of the Fisher Information Matrix (FIM), a measure of the local curvature, from the start of training. We argue it is an implicit regularizer in SGD by showing that explicitly penalizing the trace of the FIM can significantly improve generalization. We highlight that poor final generalization coincides with the trace of the FIM attaining a large value early in training, to which we refer as catastrophic Fisher explosion. Finally, to gain insight into the regularization effect of penalizing the trace of the FIM, we show that it limits memorization by reducing the learning speed of examples with noisy labels more than that of the examples with clean labels.' 
volume: 139 URL: https://proceedings.mlr.press/v139/jastrzebski21a.html PDF: http://proceedings.mlr.press/v139/jastrzebski21a/jastrzebski21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jastrzebski21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Stanislaw family: Jastrzebski - given: Devansh family: Arpit - given: Oliver family: Astrand - given: Giancarlo B family: Kerg - given: Huan family: Wang - given: Caiming family: Xiong - given: Richard family: Socher - given: Kyunghyun family: Cho - given: Krzysztof J family: Geras editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4772-4784 id: jastrzebski21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4772 lastpage: 4784 published: 2021-07-01 00:00:00 +0000 - title: 'Policy Gradient Bayesian Robust Optimization for Imitation Learning' abstract: 'The difficulty in specifying rewards for many real-world problems has led to an increased focus on learning rewards from human feedback, such as demonstrations. However, there are often many different reward functions that explain the human feedback, leaving agents with uncertainty over what the true reward function is. While most policy optimization approaches handle this uncertainty by optimizing for expected performance, many applications demand risk-averse behavior. We derive a novel policy gradient-style robust optimization approach, PG-BROIL, that optimizes a soft-robust objective that balances expected performance and risk. To the best of our knowledge, PG-BROIL is the first policy optimization algorithm robust to a distribution of reward hypotheses which can scale to continuous MDPs. Results suggest that PG-BROIL can produce a family of behaviors ranging from risk-neutral to risk-averse and outperforms state-of-the-art imitation learning algorithms when learning from ambiguous demonstrations by hedging against uncertainty, rather than seeking to uniquely identify the demonstrator’s reward function.' volume: 139 URL: https://proceedings.mlr.press/v139/javed21a.html PDF: http://proceedings.mlr.press/v139/javed21a/javed21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-javed21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zaynah family: Javed - given: Daniel S family: Brown - given: Satvik family: Sharma - given: Jerry family: Zhu - given: Ashwin family: Balakrishna - given: Marek family: Petrik - given: Anca family: Dragan - given: Ken family: Goldberg editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4785-4796 id: javed21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4785 lastpage: 4796 published: 2021-07-01 00:00:00 +0000 - title: 'In-Database Regression in Input Sparsity Time' abstract: 'Sketching is a powerful dimensionality reduction technique for accelerating algorithms for data analysis. A crucial step in sketching methods is to compute a subspace embedding (SE) for a large matrix $A \in \mathbb{R}^{N \times d}$. SE’s are the primary tool for obtaining extremely efficient solutions for many linear-algebraic tasks, such as least squares regression and low rank approximation. Computing an SE often requires an explicit representation of $A$ and running time proportional to the size of $A$. 
However, if $A = T_1 \Join T_2 \Join \cdots \Join T_m$ is the result of a database join query on several smaller tables $T_i \in \mathbb{R}^{n_i \times d_i}$, then this running time can be prohibitive, as $A$ itself can have as many as $O(n_1 n_2 \cdots n_m)$ rows. In this work, we design subspace embeddings for database joins which can be computed significantly faster than computing the join. For the case of a two-table join $A = T_1 \Join T_2$, we give input-sparsity algorithms for computing subspace embeddings, with running time bounded by the number of non-zero entries in $T_1,T_2$. This results in input-sparsity time algorithms for high accuracy regression, significantly improving upon the running time of prior FAQ-based methods for regression. We extend our results to arbitrary joins for the ridge regression problem, also considerably improving the running time of prior methods. Empirically, we apply our method to real datasets and show that it is significantly faster than existing algorithms.' volume: 139 URL: https://proceedings.mlr.press/v139/jayaram21a.html PDF: http://proceedings.mlr.press/v139/jayaram21a/jayaram21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jayaram21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Rajesh family: Jayaram - given: Alireza family: Samadian - given: David family: Woodruff - given: Peng family: Ye editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4797-4806 id: jayaram21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4797 lastpage: 4806 published: 2021-07-01 00:00:00 +0000 - title: 'Parallel and Flexible Sampling from Autoregressive Models via Langevin Dynamics' abstract: 'This paper introduces an alternative approach to sampling from autoregressive models. Autoregressive models are typically sampled sequentially, according to the transition dynamics defined by the model. Instead, we propose a sampling procedure that initializes a sequence with white noise and follows a Markov chain defined by Langevin dynamics on the global log-likelihood of the sequence. This approach parallelizes the sampling process and generalizes to conditional sampling. Using an autoregressive model as a Bayesian prior, we can steer the output of a generative model using a conditional likelihood or constraints. We apply these techniques to autoregressive models in the visual and audio domains, with competitive results for audio source separation, super-resolution, and inpainting.' volume: 139 URL: https://proceedings.mlr.press/v139/jayaram21b.html PDF: http://proceedings.mlr.press/v139/jayaram21b/jayaram21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jayaram21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Vivek family: Jayaram - given: John family: Thickstun editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4807-4818 id: jayaram21b issued: date-parts: - 2021 - 7 - 1 firstpage: 4807 lastpage: 4818 published: 2021-07-01 00:00:00 +0000 - title: 'Objective Bound Conditional Gaussian Process for Bayesian Optimization' abstract: 'A Gaussian process is a standard surrogate model for an unknown objective function in Bayesian optimization. 
In this paper, we propose a new surrogate model, called the objective bound conditional Gaussian process (OBCGP), to condition a Gaussian process on a bound on the optimal function value. The bound is obtained and updated as the best observed value during the sequential optimization procedure. Unlike the standard Gaussian process, the OBCGP explicitly incorporates the existence of a point that improves the best known bound. We treat the location of such a point as a model parameter and estimate it jointly with other parameters by maximizing the likelihood using variational inference. Within the standard Bayesian optimization framework, the OBCGP can be combined with various acquisition functions to select the next query point. In particular, we derive cumulative regret bounds for the OBCGP combined with the upper confidence bound acquisition algorithm. Furthermore, the OBCGP can inherently incorporate a new type of prior knowledge, i.e., the bounds on the optimum, if it is available. The incorporation of this type of prior knowledge into a surrogate model has not been studied previously. We demonstrate the effectiveness of the OBCGP through its application to Bayesian optimization tasks, such as the sequential design of experiments and hyperparameter optimization in neural networks.' volume: 139 URL: https://proceedings.mlr.press/v139/jeong21a.html PDF: http://proceedings.mlr.press/v139/jeong21a/jeong21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jeong21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Taewon family: Jeong - given: Heeyoung family: Kim editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4819-4828 id: jeong21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4819 lastpage: 4828 published: 2021-07-01 00:00:00 +0000 - title: 'Quantifying Ignorance in Individual-Level Causal-Effect Estimates under Hidden Confounding' abstract: 'We study the problem of learning conditional average treatment effects (CATE) from high-dimensional, observational data with unobserved confounders. Unobserved confounders introduce ignorance—a level of unidentifiability—about an individual’s response to treatment by inducing bias in CATE estimates. We present a new parametric interval estimator suited for high-dimensional data, that estimates a range of possible CATE values when given a predefined bound on the level of hidden confounding. Further, previous interval estimators do not account for ignorance about the CATE associated with samples that may be underrepresented in the original study, or samples that violate the overlap assumption. Our interval estimator also incorporates model uncertainty so that practitioners can be made aware of such out-of-distribution data. We prove that our estimator converges to tight bounds on CATE when there may be unobserved confounding and assess it using semi-synthetic, high-dimensional datasets.' 
volume: 139 URL: https://proceedings.mlr.press/v139/jesson21a.html PDF: http://proceedings.mlr.press/v139/jesson21a/jesson21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jesson21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Andrew family: Jesson - given: Sören family: Mindermann - given: Yarin family: Gal - given: Uri family: Shalit editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4829-4838 id: jesson21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4829 lastpage: 4838 published: 2021-07-01 00:00:00 +0000 - title: 'DeepReDuce: ReLU Reduction for Fast Private Inference' abstract: 'The recent rise of privacy concerns has led researchers to devise methods for private neural inference—where inferences are made directly on encrypted data, never seeing inputs. The primary challenge facing private inference is that computing on encrypted data levies an impractically-high latency penalty, stemming mostly from non-linear operators like ReLU. Enabling practical and private inference requires new optimization methods that minimize network ReLU counts while preserving accuracy. This paper proposes DeepReDuce: a set of optimizations for the judicious removal of ReLUs to reduce private inference latency. The key insight is that not all ReLUs contribute equally to accuracy. We leverage this insight to drop, or remove, ReLUs from classic networks to significantly reduce inference latency and maintain high accuracy. Given a network architecture, DeepReDuce outputs a Pareto frontier of networks that tradeoff the number of ReLUs and accuracy. Compared to the state-of-the-art for private inference DeepReDuce improves accuracy and reduces ReLU count by up to 3.5% (iso-ReLU count) and 3.5x (iso-accuracy), respectively.' volume: 139 URL: https://proceedings.mlr.press/v139/jha21a.html PDF: http://proceedings.mlr.press/v139/jha21a/jha21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jha21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Nandan Kumar family: Jha - given: Zahra family: Ghodsi - given: Siddharth family: Garg - given: Brandon family: Reagen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4839-4849 id: jha21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4839 lastpage: 4849 published: 2021-07-01 00:00:00 +0000 - title: 'Factor-analytic inverse regression for high-dimension, small-sample dimensionality reduction' abstract: 'Sufficient dimension reduction (SDR) methods are a family of supervised methods for dimensionality reduction that seek to reduce dimensionality while preserving information about a target variable of interest. However, existing SDR methods typically require more observations than the number of dimensions ($N > p$). To overcome this limitation, we propose Class-conditional Factor Analytic Dimensions (CFAD), a model-based dimensionality reduction method for high-dimensional, small-sample data. We show that CFAD substantially outperforms existing SDR methods in the small-sample regime, and can be extended to incorporate prior information such as smoothness in the projection axes. 
We demonstrate the effectiveness of CFAD with an application to functional magnetic resonance imaging (fMRI) measurements during visual object recognition and working memory tasks, where it outperforms existing SDR and a variety of other dimensionality-reduction methods.' volume: 139 URL: https://proceedings.mlr.press/v139/jha21b.html PDF: http://proceedings.mlr.press/v139/jha21b/jha21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jha21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Aditi family: Jha - given: Michael J. family: Morais - given: Jonathan W family: Pillow editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4850-4859 id: jha21b issued: date-parts: - 2021 - 7 - 1 firstpage: 4850 lastpage: 4859 published: 2021-07-01 00:00:00 +0000 - title: 'Fast margin maximization via dual acceleration' abstract: 'We present and analyze a momentum-based gradient method for training linear classifiers with an exponentially-tailed loss (e.g., the exponential or logistic loss), which maximizes the classification margin on separable data at a rate of O(1/t^2). This contrasts with a rate of O(1/log(t)) for standard gradient descent, and O(1/t) for normalized gradient descent. The momentum-based method is derived via the convex dual of the maximum-margin problem, and specifically by applying Nesterov acceleration to this dual, which manages to result in a simple and intuitive method in the primal. This dual view can also be used to derive a stochastic variant, which performs adaptive non-uniform sampling via the dual variables.' volume: 139 URL: https://proceedings.mlr.press/v139/ji21a.html PDF: http://proceedings.mlr.press/v139/ji21a/ji21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ji21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ziwei family: Ji - given: Nathan family: Srebro - given: Matus family: Telgarsky editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4860-4869 id: ji21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4860 lastpage: 4869 published: 2021-07-01 00:00:00 +0000 - title: 'Marginalized Stochastic Natural Gradients for Black-Box Variational Inference' abstract: 'Black-box variational inference algorithms use stochastic sampling to analyze diverse statistical models, like those expressed in probabilistic programming languages, without model-specific derivations. While the popular score-function estimator computes unbiased gradient estimates, its variance is often unacceptably large, especially in models with discrete latent variables. We propose a stochastic natural gradient estimator that is as broadly applicable and unbiased, but improves efficiency by exploiting the curvature of the variational bound, and provably reduces variance by marginalizing discrete latent variables. Our marginalized stochastic natural gradients have intriguing connections to classic coordinate ascent variational inference, but allow parallel updates of variational parameters, and provide superior convergence guarantees relative to naive Monte Carlo approximations. We integrate our method with the probabilistic programming language Pyro and evaluate real-world models of documents, images, networks, and crowd-sourcing. 
Compared to score-function estimators, we require far fewer Monte Carlo samples and consistently converge orders of magnitude faster.' volume: 139 URL: https://proceedings.mlr.press/v139/ji21b.html PDF: http://proceedings.mlr.press/v139/ji21b/ji21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ji21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Geng family: Ji - given: Debora family: Sujono - given: Erik B family: Sudderth editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4870-4881 id: ji21b issued: date-parts: - 2021 - 7 - 1 firstpage: 4870 lastpage: 4881 published: 2021-07-01 00:00:00 +0000 - title: 'Bilevel Optimization: Convergence Analysis and Enhanced Design' abstract: 'Bilevel optimization has arisen as a powerful tool for many machine learning problems such as meta-learning, hyperparameter optimization, and reinforcement learning. In this paper, we investigate the nonconvex-strongly-convex bilevel optimization problem. For deterministic bilevel optimization, we provide a comprehensive convergence rate analysis for two popular algorithms respectively based on approximate implicit differentiation (AID) and iterative differentiation (ITD). For the AID-based method, we orderwisely improve the previous convergence rate analysis due to a more practical parameter selection as well as a warm start strategy, and for the ITD-based method we establish the first theoretical convergence rate. Our analysis also provides a quantitative comparison between ITD and AID based approaches. For stochastic bilevel optimization, we propose a novel algorithm named stocBiO, which features a sample-efficient hypergradient estimator using efficient Jacobian- and Hessian-vector product computations. We provide the convergence rate guarantee for stocBiO, and show that stocBiO outperforms the best known computational complexities orderwisely with respect to the condition number $\kappa$ and the target accuracy $\epsilon$. We further validate our theoretical results and demonstrate the efficiency of bilevel optimization algorithms by the experiments on meta-learning and hyperparameter optimization.' volume: 139 URL: https://proceedings.mlr.press/v139/ji21c.html PDF: http://proceedings.mlr.press/v139/ji21c/ji21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ji21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kaiyi family: Ji - given: Junjie family: Yang - given: Yingbin family: Liang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4882-4892 id: ji21c issued: date-parts: - 2021 - 7 - 1 firstpage: 4882 lastpage: 4892 published: 2021-07-01 00:00:00 +0000 - title: 'Efficient Statistical Tests: A Neural Tangent Kernel Approach' abstract: 'For machine learning models to make reliable predictions in deployment, one needs to ensure that previously unseen test samples are sufficiently similar to the training data. The commonly used shift-invariant kernels lack compositionality and fail to capture invariances in high-dimensional data in computer vision.
We propose a shift-invariant convolutional neural tangent kernel (SCNTK) based outlier detector and two-sample tests with maximum mean discrepancy (MMD) that are O(n) in the number of samples due to the random feature approximation. On MNIST and CIFAR10 with various types of dataset shifts, we empirically show that statistical tests with such compositional kernels, inherited from infinitely wide neural networks, achieve higher detection accuracy than existing non-parametric methods. Our method also provides a competitive alternative to adapted kernel methods that require a training phase.' volume: 139 URL: https://proceedings.mlr.press/v139/jia21a.html PDF: http://proceedings.mlr.press/v139/jia21a/jia21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jia21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sheng family: Jia - given: Ehsan family: Nezhadarya - given: Yuhuai family: Wu - given: Jimmy family: Ba editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4893-4903 id: jia21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4893 lastpage: 4903 published: 2021-07-01 00:00:00 +0000 - title: 'Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision' abstract: 'Pre-trained representations are becoming crucial for many NLP and perception tasks. While representation learning in NLP has transitioned to training on raw text without human annotations, visual and vision-language representations still rely heavily on curated training datasets that are expensive or require expert knowledge. For vision applications, representations are mostly learned using datasets with explicit class labels such as ImageNet or OpenImages. For vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all involve a non-trivial data collection (and cleaning) process. This costly curation process limits the size of datasets and hence hinders the scaling of trained models. In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without expensive filtering or post-processing steps in the Conceptual Captions dataset. A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss. We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme. Our visual representation achieves strong performance when transferred to classification tasks such as ImageNet and VTAB. The aligned visual and language representations enable zero-shot image classification and also set new state-of-the-art results on Flickr30K and MSCOCO image-text retrieval benchmarks, even when compared with more sophisticated cross-attention models. The representations also enable cross-modality search with complex text and text + image queries.'
volume: 139 URL: https://proceedings.mlr.press/v139/jia21b.html PDF: http://proceedings.mlr.press/v139/jia21b/jia21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jia21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chao family: Jia - given: Yinfei family: Yang - given: Ye family: Xia - given: Yi-Ting family: Chen - given: Zarana family: Parekh - given: Hieu family: Pham - given: Quoc family: Le - given: Yun-Hsuan family: Sung - given: Zhen family: Li - given: Tom family: Duerig editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4904-4916 id: jia21b issued: date-parts: - 2021 - 7 - 1 firstpage: 4904 lastpage: 4916 published: 2021-07-01 00:00:00 +0000 - title: 'Multi-Dimensional Classification via Sparse Label Encoding' abstract: 'In multi-dimensional classification (MDC), there are multiple class variables in the output space with each of them corresponding to one heterogeneous class space. Due to the heterogeneity of class spaces, it is quite challenging to consider the dependencies among class variables when learning from MDC examples. In this paper, we propose a novel MDC approach named SLEM which learns the predictive model in an encoded label space instead of the original heterogeneous one. Specifically, SLEM works in an encoding-training-decoding framework. In the encoding phase, each class vector is mapped into a real-valued one via three cascaded operations including pairwise grouping, one-hot conversion and sparse linear encoding. In the training phase, a multi-output regression model is learned within the encoded label space. In the decoding phase, the predicted class vector is obtained by adapting orthogonal matching pursuit over outputs of the learned multi-output regression model. Experimental results clearly validate the superiority of SLEM against state-of-the-art MDC approaches.' volume: 139 URL: https://proceedings.mlr.press/v139/jia21c.html PDF: http://proceedings.mlr.press/v139/jia21c/jia21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jia21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Bin-Bin family: Jia - given: Min-Ling family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4917-4926 id: jia21c issued: date-parts: - 2021 - 7 - 1 firstpage: 4917 lastpage: 4926 published: 2021-07-01 00:00:00 +0000 - title: 'Self-Damaging Contrastive Learning' abstract: 'The recent breakthrough achieved by contrastive learning accelerates the pace for deploying unsupervised training on real-world data applications. However, unlabeled data in reality is commonly imbalanced and shows a long-tail distribution, and it is unclear how robustly the latest contrastive learning methods could perform in the practical scenario. This paper proposes to explicitly tackle this challenge, via a principled framework called Self-Damaging Contrastive Learning (SDCLR), to automatically balance the representation learning without knowing the classes. Our main inspiration is drawn from the recent finding that deep models have difficult-to-memorize samples, and those may be exposed through network pruning. 
It is further natural to hypothesize that long-tail samples are also tougher for the model to learn well due to insufficient examples. Hence, the key innovation in SDCLR is to create a dynamic self-competitor model to contrast with the target model, which is a pruned version of the latter. During training, contrasting the two models will lead to adaptive online mining of the most easily forgotten samples for the current target model, and implicitly emphasize them more in the contrastive loss. Extensive experiments across multiple datasets and imbalance settings show that SDCLR significantly improves not only overall accuracies but also balancedness, in terms of linear evaluation on the full-shot and few-shot settings. Our code is available at https://github.com/VITA-Group/SDCLR.' volume: 139 URL: https://proceedings.mlr.press/v139/jiang21a.html PDF: http://proceedings.mlr.press/v139/jiang21a/jiang21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jiang21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ziyu family: Jiang - given: Tianlong family: Chen - given: Bobak J family: Mortazavi - given: Zhangyang family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4927-4939 id: jiang21a issued: date-parts: - 2021 - 7 - 1 firstpage: 4927 lastpage: 4939 published: 2021-07-01 00:00:00 +0000 - title: 'Prioritized Level Replay' abstract: 'Environments with procedurally generated content serve as important benchmarks for testing systematic generalization in deep reinforcement learning. In this setting, each level is an algorithmically created environment instance with a unique configuration of its factors of variation. Training on a prespecified subset of levels allows for testing generalization to unseen levels. What can be learned from a level depends on the current policy, yet prior work defaults to uniform sampling of training levels independently of the policy. We introduce Prioritized Level Replay (PLR), a general framework for selectively sampling the next training level by prioritizing those with higher estimated learning potential when revisited in the future. We show TD-errors effectively estimate a level’s future learning potential and, when used to guide the sampling procedure, induce an emergent curriculum of increasingly difficult levels. By adapting the sampling of training levels, PLR significantly improves sample-efficiency and generalization on Procgen Benchmark—matching the previous state-of-the-art in test return—and readily combines with other methods. Combined with the previous leading method, PLR raises the state-of-the-art to over 76% improvement in test return relative to standard RL baselines.' 
volume: 139 URL: https://proceedings.mlr.press/v139/jiang21b.html PDF: http://proceedings.mlr.press/v139/jiang21b/jiang21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jiang21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Minqi family: Jiang - given: Edward family: Grefenstette - given: Tim family: Rocktäschel editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4940-4950 id: jiang21b issued: date-parts: - 2021 - 7 - 1 firstpage: 4940 lastpage: 4950 published: 2021-07-01 00:00:00 +0000 - title: 'Monotonic Robust Policy Optimization with Model Discrepancy' abstract: 'State-of-the-art deep reinforcement learning (DRL) algorithms tend to overfit due to the model discrepancy between source and target environments. Though applying domain randomization during training can improve the average performance by randomly generating a sufficient diversity of environments in the simulator, the worst-case environment is still neglected without any performance guarantee. Since the average and worst-case performance are both important for generalization in RL, in this paper, we propose a policy optimization approach for concurrently improving the policy’s performance in the average and worst-case environment. We theoretically derive a lower bound for the worst-case performance of a given policy by relating it to the expected performance. Guided by this lower bound, we formulate an optimization problem to jointly optimize the policy and sampling distribution, and prove that by iteratively solving it the worst-case performance is monotonically improved. We then develop a practical algorithm, named monotonic robust policy optimization (MRPO). Experimental evaluations in several robot control tasks demonstrate that MRPO can generally improve both the average and worst-case performance in the source environments for training, and, in all cases, equip the learned policy with better generalization capability in unseen testing environments.' volume: 139 URL: https://proceedings.mlr.press/v139/jiang21c.html PDF: http://proceedings.mlr.press/v139/jiang21c/jiang21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jiang21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yuankun family: Jiang - given: Chenglin family: Li - given: Wenrui family: Dai - given: Junni family: Zou - given: Hongkai family: Xiong editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4951-4960 id: jiang21c issued: date-parts: - 2021 - 7 - 1 firstpage: 4951 lastpage: 4960 published: 2021-07-01 00:00:00 +0000 - title: 'Approximation Theory of Convolutional Architectures for Time Series Modelling' abstract: 'We study the approximation properties of convolutional architectures applied to time series modelling, which can be formulated mathematically as a functional approximation problem. In the recurrent setting, recent results reveal an intricate connection between approximation efficiency and memory structures in the data generation process. In this paper, we derive parallel results for convolutional architectures, with WaveNet being a prime example.
Our results reveal that in this new setting, approximation efficiency is not only characterised by memory, but also additional fine structures in the target relationship. This leads to a novel definition of spectrum-based regularity that measures the complexity of temporal relationships under the convolutional approximation scheme. These analyses provide a foundation to understand the differences between architectural choices for time series modelling and can give theoretically grounded guidance for practical applications.' volume: 139 URL: https://proceedings.mlr.press/v139/jiang21d.html PDF: http://proceedings.mlr.press/v139/jiang21d/jiang21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jiang21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Haotian family: Jiang - given: Zhong family: Li - given: Qianxiao family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4961-4970 id: jiang21d issued: date-parts: - 2021 - 7 - 1 firstpage: 4961 lastpage: 4970 published: 2021-07-01 00:00:00 +0000 - title: 'Streaming and Distributed Algorithms for Robust Column Subset Selection' abstract: 'We give the first single-pass streaming algorithm for Column Subset Selection with respect to the entrywise $\ell_p$-norm with $1 \leq p < 2$. We study the $\ell_p$ norm loss since it is often considered more robust to noise than the standard Frobenius norm. Given an input matrix $A \in \mathbb{R}^{d \times n}$ ($n \gg d$), our algorithm achieves a multiplicative $k^{\frac{1}{p} - \frac{1}{2}}\poly(\log nd)$-approximation to the error with respect to the \textit{best possible column subset} of size $k$. Furthermore, the space complexity of the streaming algorithm is optimal up to a logarithmic factor. Our streaming algorithm also extends naturally to a 1-round distributed protocol with nearly optimal communication cost. A key ingredient in our algorithms is a reduction to column subset selection in the $\ell_{p,2}$-norm, which corresponds to the $p$-norm of the vector of Euclidean norms of each of the columns of $A$. This enables us to leverage strong coreset constructions for the Euclidean norm, which previously had not been applied in this context. We also give the first provable guarantees for greedy column subset selection in the $\ell_{1, 2}$ norm, which can be used as an alternative, practical subroutine in our algorithms. Finally, we show that our algorithms give significant practical advantages on real-world data analysis tasks.' 
volume: 139 URL: https://proceedings.mlr.press/v139/jiang21e.html PDF: http://proceedings.mlr.press/v139/jiang21e/jiang21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jiang21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shuli family: Jiang - given: Dennis family: Li - given: Irene Mengze family: Li - given: Arvind V family: Mahankali - given: David family: Woodruff editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4971-4981 id: jiang21e issued: date-parts: - 2021 - 7 - 1 firstpage: 4971 lastpage: 4981 published: 2021-07-01 00:00:00 +0000 - title: 'Single Pass Entrywise-Transformed Low Rank Approximation' abstract: 'In applications such as natural language processing or computer vision, one is given a large $n \times n$ matrix $A = (a_{i,j})$ and would like to compute a matrix decomposition, e.g., a low rank approximation, of a function $f(A) = (f(a_{i,j}))$ applied entrywise to $A$. A very important special case is the likelihood function $f\left( A \right ) = \log{\left( \left| a_{ij}\right| +1\right)}$. A natural way to do this would be to simply apply $f$ to each entry of $A$, and then compute the matrix decomposition, but this requires storing all of $A$ as well as multiple passes over its entries. Recent work of Liang et al. shows how to find a rank-$k$ factorization to $f(A)$ using only $n \cdot \poly(\eps^{-1}k\log n)$ words of memory, with overall error $10\|f(A)-[f(A)]_k\|_F^2 + \poly(\epsilon/k) \|f(A)\|_{1,2}^2$, where $[f(A)]_k$ is the best rank-$k$ approximation to $f(A)$ and $\|f(A)\|_{1,2}^2$ is the square of the sum of Euclidean lengths of rows of $f(A)$. Their algorithm uses $3$ passes over the entries of $A$. The authors pose the open question of obtaining an algorithm with $n \cdot \poly(\eps^{-1}k\log n)$ words of memory using only a single pass over the entries of $A$. In this paper we resolve this open question, obtaining the first single-pass algorithm for this problem and for the same class of functions $f$ studied by Liang et al. Moreover, our error is $\|f(A)-[f(A)]_k\|_F^2 + \poly(\epsilon/k) \|f(A)\|_F^2$, where $\|f(A)\|_F^2$ is the sum of squares of Euclidean lengths of rows of $f(A)$. Thus our error is significantly smaller, as it removes the factor of $10$ and also $\|f(A)\|_F^2 \leq \|f(A)\|_{1,2}^2$.' volume: 139 URL: https://proceedings.mlr.press/v139/jiang21f.html PDF: http://proceedings.mlr.press/v139/jiang21f/jiang21f.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jiang21f.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yifei family: Jiang - given: Yi family: Li - given: Yiming family: Sun - given: Jiaxin family: Wang - given: David family: Woodruff editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4982-4991 id: jiang21f issued: date-parts: - 2021 - 7 - 1 firstpage: 4982 lastpage: 4991 published: 2021-07-01 00:00:00 +0000 - title: 'The Emergence of Individuality' abstract: 'Individuality is essential in human society. It induces the division of labor and thus improves the efficiency and productivity. Similarly, it should also be a key to multi-agent cooperation. 
Inspired by the fact that individuality is about being an individual separate from others, we propose a simple yet efficient method for the emergence of individuality (EOI) in multi-agent reinforcement learning (MARL). EOI learns a probabilistic classifier that predicts a probability distribution over agents given their observation and gives each agent an intrinsic reward of being correctly predicted by the classifier. The intrinsic reward encourages the agents to visit their own familiar observations, and learning the classifier by such observations makes the intrinsic reward signals stronger and in turn makes the agents more identifiable. To further enhance the intrinsic reward and promote the emergence of individuality, two regularizers are proposed to increase the discriminability of the classifier. We implement EOI on top of popular MARL algorithms. Empirically, we show that EOI outperforms existing methods in a variety of multi-agent cooperative scenarios.' volume: 139 URL: https://proceedings.mlr.press/v139/jiang21g.html PDF: http://proceedings.mlr.press/v139/jiang21g/jiang21g.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jiang21g.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jiechuan family: Jiang - given: Zongqing family: Lu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 4992-5001 id: jiang21g issued: date-parts: - 2021 - 7 - 1 firstpage: 4992 lastpage: 5001 published: 2021-07-01 00:00:00 +0000 - title: 'Online Selection Problems against Constrained Adversary' abstract: 'Inspired by a recent line of work in online algorithms with predictions, we study the constrained adversary model that utilizes predictions from a different perspective. Prior works mostly focused on designing simultaneously robust and consistent algorithms, without making assumptions on the quality of the predictions. In contrast, our model assumes the adversarial instance is consistent with the predictions and aims to design algorithms that have the best worst-case performance against all such instances. We revisit classical online selection problems under the constrained adversary model. For the single item selection problem, we design an optimal algorithm in the adversarial arrival model and an improved algorithm in the random arrival model (a.k.a., the secretary problem). For the online edge-weighted bipartite matching problem, we extend the classical Water-filling and Ranking algorithms and achieve improved competitive ratios.' volume: 139 URL: https://proceedings.mlr.press/v139/jiang21h.html PDF: http://proceedings.mlr.press/v139/jiang21h/jiang21h.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jiang21h.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zhihao family: Jiang - given: Pinyan family: Lu - given: Zhihao Gavin family: Tang - given: Yuhao family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5002-5012 id: jiang21h issued: date-parts: - 2021 - 7 - 1 firstpage: 5002 lastpage: 5012 published: 2021-07-01 00:00:00 +0000 - title: 'Active Covering' abstract: 'We analyze the problem of active covering, where the learner is given an unlabeled dataset and can sequentially query the labels of examples.
The objective is to query the labels of all of the positive examples using the fewest total label queries. We show under standard non-parametric assumptions that a classical support estimator can be repurposed as an offline algorithm attaining an excess query cost of $\widetilde{\Theta}(n^{D/(D+1)})$ compared to the optimal learner, where $n$ is the number of datapoints and $D$ is the dimension. We then provide a simple active learning method that attains an improved excess query cost of $\widetilde{O}(n^{(D-1)/D})$. Furthermore, the proposed algorithms only require access to the positive labeled examples, which in certain settings provides additional computational and privacy benefits. Finally, we show that the active learning method consistently outperforms offline methods as well as a variety of baselines on a wide range of benchmark image-based datasets.' volume: 139 URL: https://proceedings.mlr.press/v139/jiang21i.html PDF: http://proceedings.mlr.press/v139/jiang21i/jiang21i.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jiang21i.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Heinrich family: Jiang - given: Afshin family: Rostamizadeh editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5013-5022 id: jiang21i issued: date-parts: - 2021 - 7 - 1 firstpage: 5013 lastpage: 5022 published: 2021-07-01 00:00:00 +0000 - title: 'Emphatic Algorithms for Deep Reinforcement Learning' abstract: 'Off-policy learning allows us to learn about possible policies of behavior from experience generated by a different behavior policy. Temporal difference (TD) learning algorithms can become unstable when combined with function approximation and off-policy sampling—this is known as the “deadly triad”. The emphatic temporal difference (ETD($\lambda$)) algorithm ensures convergence in the linear case by appropriately weighting the TD($\lambda$) updates. In this paper, we extend the use of emphatic methods to deep reinforcement learning agents. We show that naively adapting ETD($\lambda$) to popular deep reinforcement learning algorithms, which use forward view multi-step returns, results in poor performance. We then derive new emphatic algorithms for use in the context of such algorithms, and we demonstrate that they provide noticeable benefits in small problems designed to highlight the instability of TD methods. Finally, we observed improved performance when applying these algorithms at scale on classic Atari games from the Arcade Learning Environment.'
volume: 139 URL: https://proceedings.mlr.press/v139/jiang21j.html PDF: http://proceedings.mlr.press/v139/jiang21j/jiang21j.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jiang21j.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ray family: Jiang - given: Tom family: Zahavy - given: Zhongwen family: Xu - given: Adam family: White - given: Matteo family: Hessel - given: Charles family: Blundell - given: Hado family: Van Hasselt editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5023-5033 id: jiang21j issued: date-parts: - 2021 - 7 - 1 firstpage: 5023 lastpage: 5033 published: 2021-07-01 00:00:00 +0000 - title: 'Characterizing Structural Regularities of Labeled Data in Overparameterized Models' abstract: 'Humans are accustomed to environments that contain both regularities and exceptions. For example, at most gas stations, one pays prior to pumping, but the occasional rural station does not accept payment in advance. Likewise, deep neural networks can generalize across instances that share common patterns or structures, yet have the capacity to memorize rare or irregular forms. We analyze how individual instances are treated by a model via a consistency score. The score characterizes the expected accuracy for a held-out instance given training sets of varying size sampled from the data distribution. We obtain empirical estimates of this score for individual instances in multiple data sets, and we show that the score identifies out-of-distribution and mislabeled examples at one end of the continuum and strongly regular examples at the other end. We identify computationally inexpensive proxies to the consistency score using statistics collected during training. We apply the score toward understanding the dynamics of representation learning and to filter outliers during training.' volume: 139 URL: https://proceedings.mlr.press/v139/jiang21k.html PDF: http://proceedings.mlr.press/v139/jiang21k/jiang21k.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jiang21k.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ziheng family: Jiang - given: Chiyuan family: Zhang - given: Kunal family: Talwar - given: Michael C family: Mozer editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5034-5044 id: jiang21k issued: date-parts: - 2021 - 7 - 1 firstpage: 5034 lastpage: 5044 published: 2021-07-01 00:00:00 +0000 - title: 'Optimal Streaming Algorithms for Multi-Armed Bandits' abstract: 'This paper studies two variants of the best arm identification (BAI) problem under the streaming model, where we have a stream of n arms with reward distributions supported on [0,1] with unknown means. The arms in the stream are arriving one by one, and the algorithm cannot access an arm unless it is stored in a limited size memory. We first study the streaming \epsilon-topk-arms identification problem, which asks for k arms whose reward means are lower than that of the k-th best arm by at most \epsilon with probability at least 1-\delta. For general \epsilon \in (0,1), the existing solution for this problem assumes k = 1 and achieves the optimal sample complexity O(\frac{n}{\epsilon^2} \log \frac{1}{\delta}) using O(\log^*(n)) memory and a single pass of the stream.
We propose an algorithm that works for any k and achieves the optimal sample complexity O(\frac{n}{\epsilon^2} \log\frac{k}{\delta}) using a single-arm memory and a single pass of the stream. Second, we study the streaming BAI problem, where the objective is to identify the arm with the maximum reward mean with at least 1-\delta probability, using a single-arm memory and as few passes of the input stream as possible. We present a single-arm-memory algorithm that achieves a near instance-dependent optimal sample complexity within O(\log \Delta_2^{-1}) passes, where \Delta_2 is the gap between the mean of the best arm and that of the second best arm.' volume: 139 URL: https://proceedings.mlr.press/v139/jin21a.html PDF: http://proceedings.mlr.press/v139/jin21a/jin21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jin21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tianyuan family: Jin - given: Keke family: Huang - given: Jing family: Tang - given: Xiaokui family: Xiao editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5045-5054 id: jin21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5045 lastpage: 5054 published: 2021-07-01 00:00:00 +0000 - title: 'Towards Tight Bounds on the Sample Complexity of Average-reward MDPs' abstract: 'We prove new upper and lower bounds for sample complexity of finding an $\epsilon$-optimal policy of an infinite-horizon average-reward Markov decision process (MDP) given access to a generative model. When the mixing time of the probability transition matrix of all policies is at most $t_\mathrm{mix}$, we provide an algorithm that solves the problem using $\widetilde{O}(t_\mathrm{mix} \epsilon^{-3})$ (oblivious) samples per state-action pair. Further, we provide a lower bound showing that a linear dependence on $t_\mathrm{mix}$ is necessary in the worst case for any algorithm which computes oblivious samples. We obtain our results by establishing connections between infinite-horizon average-reward MDPs and discounted MDPs of possible further utility.' volume: 139 URL: https://proceedings.mlr.press/v139/jin21b.html PDF: http://proceedings.mlr.press/v139/jin21b/jin21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jin21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yujia family: Jin - given: Aaron family: Sidford editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5055-5064 id: jin21b issued: date-parts: - 2021 - 7 - 1 firstpage: 5055 lastpage: 5064 published: 2021-07-01 00:00:00 +0000 - title: 'Almost Optimal Anytime Algorithm for Batched Multi-Armed Bandits' abstract: 'In batched multi-armed bandit problems, the learner can adaptively pull arms and adjust strategy in batches. In many real applications, not only the regret but also the batch complexity need to be optimized. Existing batched bandit algorithms usually assume that the time horizon T is known in advance. However, many applications involve an unpredictable stopping time. In this paper, we study the anytime batched multi-armed bandit problem. 
We propose an anytime algorithm that achieves the asymptotically optimal regret for exponential families of reward distributions with $O(\log \log T \ilog^{\alpha} (T))$ \footnote{Notation \ilog^{\alpha} (T) is the result of iteratively applying the logarithm function on T for \alpha times, e.g., \ilog^{3} (T)=\log\log\log T.} batches, where $\alpha\in O_{T}(1)$. Moreover, we prove that for any constant c>0, no algorithm can achieve the asymptotically optimal regret within c\log\log T batches.' volume: 139 URL: https://proceedings.mlr.press/v139/jin21c.html PDF: http://proceedings.mlr.press/v139/jin21c/jin21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jin21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tianyuan family: Jin - given: Jing family: Tang - given: Pan family: Xu - given: Keke family: Huang - given: Xiaokui family: Xiao - given: Quanquan family: Gu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5065-5073 id: jin21c issued: date-parts: - 2021 - 7 - 1 firstpage: 5065 lastpage: 5073 published: 2021-07-01 00:00:00 +0000 - title: 'MOTS: Minimax Optimal Thompson Sampling' abstract: 'Thompson sampling is one of the most widely used algorithms in many online decision problems due to its simplicity for implementation and superior empirical performance over other state-of-the-art methods. Despite its popularity and empirical success, it has remained an open problem whether Thompson sampling can achieve the minimax optimal regret O(\sqrt{TK}) for K-armed bandit problems, where T is the total time horizon. In this paper we fill this long open gap by proposing a new Thompson sampling algorithm called MOTS that adaptively truncates the sampling result of the chosen arm at each time step. We prove that this simple variant of Thompson sampling achieves the minimax optimal regret bound O(\sqrt{TK}) for finite time horizon T and also the asymptotic optimal regret bound when $T$ grows to infinity as well. This is the first time that the minimax optimality of multi-armed bandit problems has been attained by Thompson sampling type of algorithms.' volume: 139 URL: https://proceedings.mlr.press/v139/jin21d.html PDF: http://proceedings.mlr.press/v139/jin21d/jin21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jin21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tianyuan family: Jin - given: Pan family: Xu - given: Jieming family: Shi - given: Xiaokui family: Xiao - given: Quanquan family: Gu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5074-5083 id: jin21d issued: date-parts: - 2021 - 7 - 1 firstpage: 5074 lastpage: 5083 published: 2021-07-01 00:00:00 +0000 - title: 'Is Pessimism Provably Efficient for Offline RL?' abstract: 'We study offline reinforcement learning (RL), which aims to learn an optimal policy based on a dataset collected a priori. Due to the lack of further interactions with the environment, offline RL suffers from the insufficient coverage of the dataset, which eludes most existing theoretical analysis. In this paper, we propose a pessimistic variant of the value iteration algorithm (PEVI), which incorporates an uncertainty quantifier as the penalty function. 
Such a penalty function simply flips the sign of the bonus function for promoting exploration in online RL, which makes it easily implementable and compatible with general function approximators. Without assuming the sufficient coverage of the dataset, we establish a data-dependent upper bound on the suboptimality of PEVI for general Markov decision processes (MDPs). When specialized to linear MDPs, it matches the information-theoretic lower bound up to multiplicative factors of the dimension and horizon. In other words, pessimism is not only provably efficient but also minimax optimal. In particular, given the dataset, the learned policy serves as the “best effort” among all policies, as no other policies can do better. Our theoretical analysis identifies the critical role of pessimism in eliminating a notion of spurious correlation, which emerges from the “irrelevant” trajectories that are less covered by the dataset and not informative for the optimal policy.' volume: 139 URL: https://proceedings.mlr.press/v139/jin21e.html PDF: http://proceedings.mlr.press/v139/jin21e/jin21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jin21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ying family: Jin - given: Zhuoran family: Yang - given: Zhaoran family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5084-5096 id: jin21e issued: date-parts: - 2021 - 7 - 1 firstpage: 5084 lastpage: 5096 published: 2021-07-01 00:00:00 +0000 - title: 'Adversarial Option-Aware Hierarchical Imitation Learning' abstract: 'It has been a challenge to learn skills for an agent from long-horizon unannotated demonstrations. Existing approaches like Hierarchical Imitation Learning (HIL) are prone to compounding errors or suboptimal solutions. In this paper, we propose Option-GAIL, a novel method to learn skills over long horizons. The key idea of Option-GAIL is modeling the task hierarchy with options and training the policy via generative adversarial optimization. In particular, we propose an Expectation-Maximization (EM)-style algorithm: an E-step that samples the options of the expert conditioned on the current learned policy, and an M-step that updates the low- and high-level policies of the agent simultaneously to minimize the newly proposed option-occupancy measurement between the expert and the agent. We theoretically prove the convergence of the proposed algorithm. Experiments show that Option-GAIL outperforms other counterparts consistently across a variety of tasks.'
volume: 139 URL: https://proceedings.mlr.press/v139/jing21a.html PDF: http://proceedings.mlr.press/v139/jing21a/jing21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jing21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mingxuan family: Jing - given: Wenbing family: Huang - given: Fuchun family: Sun - given: Xiaojian family: Ma - given: Tao family: Kong - given: Chuang family: Gan - given: Lei family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5097-5106 id: jing21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5097 lastpage: 5106 published: 2021-07-01 00:00:00 +0000 - title: 'Discrete-Valued Latent Preference Matrix Estimation with Graph Side Information' abstract: 'Incorporating graph side information into recommender systems has been widely used to better predict ratings, but relatively few works have focused on theoretical guarantees. Ahn et al. (2018) firstly characterized the optimal sample complexity in the presence of graph side information, but the results are limited due to strict, unrealistic assumptions made on the unknown latent preference matrix and the structure of user clusters. In this work, we propose a new model in which 1) the unknown latent preference matrix can have any discrete values, and 2) users can be clustered into multiple clusters, thereby relaxing the assumptions made in prior work. Under this new model, we fully characterize the optimal sample complexity and develop a computationally-efficient algorithm that matches the optimal sample complexity. Our algorithm is robust to model errors and outperforms the existing algorithms in terms of prediction performance on both synthetic and real data.' volume: 139 URL: https://proceedings.mlr.press/v139/jo21a.html PDF: http://proceedings.mlr.press/v139/jo21a/jo21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jo21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Changhun family: Jo - given: Kangwook family: Lee editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5107-5117 id: jo21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5107 lastpage: 5117 published: 2021-07-01 00:00:00 +0000 - title: 'Provable Lipschitz Certification for Generative Models' abstract: 'We present a scalable technique for upper bounding the Lipschitz constant of generative models. We relate this quantity to the maximal norm over the set of attainable vector-Jacobian products of a given generative model. We approximate this set by layerwise convex approximations using zonotopes. Our approach generalizes and improves upon prior work using zonotope transformers and we extend to Lipschitz estimation of neural networks with large output dimension. This provides efficient and tight bounds on small networks and can scale to generative models on VAE and DCGAN architectures.' 
volume: 139 URL: https://proceedings.mlr.press/v139/jordan21a.html PDF: http://proceedings.mlr.press/v139/jordan21a/jordan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jordan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Matt family: Jordan - given: Alex family: Dimakis editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5118-5126 id: jordan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5118 lastpage: 5126 published: 2021-07-01 00:00:00 +0000 - title: 'Isometric Gaussian Process Latent Variable Model for Dissimilarity Data' abstract: 'We present a probabilistic model where the latent variable respects both the distances and the topology of the modeled data. The model leverages the Riemannian geometry of the generated manifold to endow the latent space with a well-defined stochastic distance measure, which is modeled locally as Nakagami distributions. These stochastic distances are sought to be as similar as possible to observed distances along a neighborhood graph through a censoring process. The model is inferred by variational inference based on observations of pairwise distances. We demonstrate how the new model can encode invariances in the learned manifolds.' volume: 139 URL: https://proceedings.mlr.press/v139/jorgensen21a.html PDF: http://proceedings.mlr.press/v139/jorgensen21a/jorgensen21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jorgensen21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Martin family: Jørgensen - given: Soren family: Hauberg editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5127-5136 id: jorgensen21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5127 lastpage: 5136 published: 2021-07-01 00:00:00 +0000 - title: 'On the Generalization Power of Overfitted Two-Layer Neural Tangent Kernel Models' abstract: 'In this paper, we study the generalization performance of min $\ell_2$-norm overfitting solutions for the neural tangent kernel (NTK) model of a two-layer neural network with ReLU activation that has no bias term. We show that, depending on the ground-truth function, the test error of overfitted NTK models exhibits characteristics that are different from the "double-descent" of other overparameterized linear models with simple Fourier or Gaussian features. Specifically, for a class of learnable functions, we provide a new upper bound of the generalization error that approaches a small limiting value, even when the number of neurons $p$ approaches infinity. This limiting value further decreases with the number of training samples $n$. For functions outside of this class, we provide a lower bound on the generalization error that does not diminish to zero even when $n$ and $p$ are both large.' 
volume: 139 URL: https://proceedings.mlr.press/v139/ju21a.html PDF: http://proceedings.mlr.press/v139/ju21a/ju21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ju21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Peizhong family: Ju - given: Xiaojun family: Lin - given: Ness family: Shroff editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5137-5147 id: ju21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5137 lastpage: 5147 published: 2021-07-01 00:00:00 +0000 - title: 'Improved Confidence Bounds for the Linear Logistic Model and Applications to Bandits' abstract: 'We propose improved fixed-design confidence bounds for the linear logistic model. Our bounds significantly improve upon the state-of-the-art bound by Li et al. (2017) via recent developments of the self-concordant analysis of the logistic loss (Faury et al., 2020). Specifically, our confidence bound avoids a direct dependence on $1/\kappa$, where $\kappa$ is the minimal variance over all arms’ reward distributions. In general, $1/\kappa$ scales exponentially with the norm of the unknown linear parameter $\theta^*$. Instead of relying on this worst case quantity, our confidence bound for the reward of any given arm depends directly on the variance of that arm’s reward distribution. We present two applications of our novel bounds to pure exploration and regret minimization logistic bandits improving upon state-of-the-art performance guarantees. For pure exploration we also provide a lower bound highlighting a dependence on $1/\kappa$ for a family of instances.' volume: 139 URL: https://proceedings.mlr.press/v139/jun21a.html PDF: http://proceedings.mlr.press/v139/jun21a/jun21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jun21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kwang-Sung family: Jun - given: Lalit family: Jain - given: Blake family: Mason - given: Houssam family: Nassif editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5148-5157 id: jun21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5148 lastpage: 5157 published: 2021-07-01 00:00:00 +0000 - title: 'Detection of Signal in the Spiked Rectangular Models' abstract: 'We consider the problem of detecting signals in the rank-one signal-plus-noise data matrix models that generalize the spiked Wishart matrices. We show that the principal component analysis can be improved by pre-transforming the matrix entries if the noise is non-Gaussian. As an intermediate step, we prove a sharp phase transition of the largest eigenvalues of spiked rectangular matrices, which extends the Baik–Ben Arous–Péché (BBP) transition. We also propose a hypothesis test to detect the presence of signal with low computational complexity, based on the linear spectral statistics, which minimizes the sum of the Type-I and Type-II errors when the noise is Gaussian.' 
volume: 139 URL: https://proceedings.mlr.press/v139/jung21a.html PDF: http://proceedings.mlr.press/v139/jung21a/jung21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jung21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ji Hyung family: Jung - given: Hye Won family: Chung - given: Ji Oon family: Lee editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5158-5167 id: jung21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5158 lastpage: 5167 published: 2021-07-01 00:00:00 +0000 - title: 'Estimating Identifiable Causal Effects on Markov Equivalence Class through Double Machine Learning' abstract: 'General methods have been developed for estimating causal effects from observational data under causal assumptions encoded in the form of a causal graph. Most of this literature assumes that the underlying causal graph is completely specified. However, only observational data is available in most practical settings, which means that one can learn at most a Markov equivalence class (MEC) of the underlying causal graph. In this paper, we study the problem of causal estimation from a MEC represented by a partial ancestral graph (PAG), which is learnable from observational data. We develop a general estimator for any identifiable causal effects in a PAG. The result fills a gap for an end-to-end solution to causal inference from observational data to effects estimation. Specifically, we develop a complete identification algorithm that derives an influence function for any identifiable causal effects from PAGs. We then construct a double/debiased machine learning (DML) estimator that is robust to model misspecification and biases in nuisance function estimation, permitting the use of modern machine learning techniques. Simulation results corroborate with the theory.' volume: 139 URL: https://proceedings.mlr.press/v139/jung21b.html PDF: http://proceedings.mlr.press/v139/jung21b/jung21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-jung21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yonghan family: Jung - given: Jin family: Tian - given: Elias family: Bareinboim editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5168-5179 id: jung21b issued: date-parts: - 2021 - 7 - 1 firstpage: 5168 lastpage: 5179 published: 2021-07-01 00:00:00 +0000 - title: 'A Nullspace Property for Subspace-Preserving Recovery' abstract: 'Much of the theory for classical sparse recovery is based on conditions on the dictionary that are both necessary and sufficient (e.g., nullspace property) or only sufficient (e.g., incoherence and restricted isometry). In contrast, much of the theory for subspace-preserving recovery, the theoretical underpinnings for sparse subspace classification and clustering methods, is based on conditions on the subspaces and the data that are only sufficient (e.g., subspace incoherence and data inner-radius). 
This paper derives a necessary and sufficient condition for subspace-preserving recovery that is inspired by the classical nullspace property. Based on this novel condition, called here the subspace nullspace property, we derive equivalent characterizations that either admit a clear geometric interpretation that relates data distribution and subspace separation to the recovery success, or can be verified using a finite set of extreme points of a properly defined set. We further exploit these characterizations to derive new sufficient conditions, based on inner-radius and outer-radius measures and dual bounds, that generalize existing conditions and preserve the geometric interpretations. These results fill an important gap in the subspace-preserving recovery literature.' volume: 139 URL: https://proceedings.mlr.press/v139/kaba21a.html PDF: http://proceedings.mlr.press/v139/kaba21a/kaba21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kaba21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mustafa D family: Kaba - given: Chong family: You - given: Daniel P family: Robinson - given: Enrique family: Mallada - given: Rene family: Vidal editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5180-5188 id: kaba21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5180 lastpage: 5188 published: 2021-07-01 00:00:00 +0000 - title: 'Training Recurrent Neural Networks via Forward Propagation Through Time' abstract: 'Back-propagation through time (BPTT) has been widely used for training Recurrent Neural Networks (RNNs). BPTT updates RNN parameters on an instance by back-propagating the error in time over the entire sequence length, and as a result, leads to poor trainability due to the well-known gradient explosion/decay phenomena. While a number of prior works have proposed to mitigate the vanishing/explosion effect through careful RNN architecture design, these RNN variants still train with BPTT. We propose a novel forward-propagation algorithm, FPTT, where at each time, for an instance, we update RNN parameters by optimizing an instantaneous risk function. Our proposed risk is a regularization penalty at time $t$ that evolves dynamically based on previously observed losses, and allows for RNN parameter updates to converge to a stationary solution of the empirical RNN objective. We consider both sequence-to-sequence as well as terminal loss problems. Empirically, FPTT outperforms BPTT on a number of well-known benchmark tasks, thus enabling architectures like LSTMs to solve long-range dependency problems.' volume: 139 URL: https://proceedings.mlr.press/v139/kag21a.html PDF: http://proceedings.mlr.press/v139/kag21a/kag21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kag21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Anil family: Kag - given: Venkatesh family: Saligrama editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5189-5200 id: kag21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5189 lastpage: 5200 published: 2021-07-01 00:00:00 +0000 - title: 'The Distributed Discrete Gaussian Mechanism for Federated Learning with Secure Aggregation' abstract: 'We consider training models on private data that are distributed across user devices.
To ensure privacy, we add on-device noise and use secure aggregation so that only the noisy sum is revealed to the server. We present a comprehensive end-to-end system, which appropriately discretizes the data and adds discrete Gaussian noise before performing secure aggregation. We provide a novel privacy analysis for sums of discrete Gaussians and carefully analyze the effects of data quantization and modular summation arithmetic. Our theoretical guarantees highlight the complex tension between communication, privacy, and accuracy. Our extensive experimental results demonstrate that our solution is essentially able to match the accuracy of central differential privacy with less than 16 bits of precision per value.' volume: 139 URL: https://proceedings.mlr.press/v139/kairouz21a.html PDF: http://proceedings.mlr.press/v139/kairouz21a/kairouz21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kairouz21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Peter family: Kairouz - given: Ziyu family: Liu - given: Thomas family: Steinke editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5201-5212 id: kairouz21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5201 lastpage: 5212 published: 2021-07-01 00:00:00 +0000 - title: 'Practical and Private (Deep) Learning Without Sampling or Shuffling' abstract: 'We consider training models with differential privacy (DP) using mini-batch gradients. The existing state-of-the-art, Differentially Private Stochastic Gradient Descent (DP-SGD), requires \emph{privacy amplification by sampling or shuffling} to obtain the best privacy/accuracy/computation trade-offs. Unfortunately, the precise requirements on exact sampling and shuffling can be hard to obtain in important practical scenarios, particularly federated learning (FL). We design and analyze a DP variant of Follow-The-Regularized-Leader (DP-FTRL) that compares favorably (both theoretically and empirically) to amplified DP-SGD, while allowing for much more flexible data access patterns. DP-FTRL does not use any form of privacy amplification.' volume: 139 URL: https://proceedings.mlr.press/v139/kairouz21b.html PDF: http://proceedings.mlr.press/v139/kairouz21b/kairouz21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kairouz21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Peter family: Kairouz - given: Brendan family: Mcmahan - given: Shuang family: Song - given: Om family: Thakkar - given: Abhradeep family: Thakurta - given: Zheng family: Xu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5213-5225 id: kairouz21b issued: date-parts: - 2021 - 7 - 1 firstpage: 5213 lastpage: 5225 published: 2021-07-01 00:00:00 +0000 - title: 'A Differentiable Point Process with Its Application to Spiking Neural Networks' abstract: 'This paper is concerned with a learning algorithm for a probabilistic model of spiking neural networks (SNNs). Jimenez Rezende & Gerstner (2014) proposed a stochastic variational inference algorithm to train SNNs with hidden neurons. The algorithm updates the variational distribution using the score function gradient estimator, whose high variance often impedes the whole learning algorithm.
This paper presents an alternative gradient estimator for SNNs based on the path-wise gradient estimator. The main technical difficulty is a lack of a general method to differentiate a realization of an arbitrary point process, which is necessary to derive the path-wise gradient estimator. We develop a differentiable point process, which is the technical highlight of this paper, and apply it to derive the path-wise gradient estimator for SNNs. We investigate the effectiveness of our gradient estimator through numerical simulation.' volume: 139 URL: https://proceedings.mlr.press/v139/kajino21a.html PDF: http://proceedings.mlr.press/v139/kajino21a/kajino21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kajino21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hiroshi family: Kajino editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5226-5235 id: kajino21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5226 lastpage: 5235 published: 2021-07-01 00:00:00 +0000 - title: 'Projection techniques to update the truncated SVD of evolving matrices with applications' abstract: 'This submission considers the problem of updating the rank-$k$ truncated Singular Value Decomposition (SVD) of matrices subject to the addition of new rows and/or columns over time. Such matrix problems represent an important computational kernel in applications such as Latent Semantic Indexing and Recommender Systems. Nonetheless, the proposed framework is purely algebraic and targets general updating problems. The algorithm presented in this paper undertakes a projection viewpoint and focuses on building a pair of subspaces which approximate the linear span of the sought singular vectors of the updated matrix. We discuss and analyze two different choices to form the projection subspaces. Results on matrices from real applications suggest that the proposed algorithm can lead to higher accuracy, especially for the singular triplets associated with the largest modulus singular values. Several practical details and key differences with other approaches are also discussed.' volume: 139 URL: https://proceedings.mlr.press/v139/kalantzis21a.html PDF: http://proceedings.mlr.press/v139/kalantzis21a/kalantzis21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kalantzis21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Vasileios family: Kalantzis - given: Georgios family: Kollias - given: Shashanka family: Ubaru - given: Athanasios N. family: Nikolakopoulos - given: Lior family: Horesh - given: Kenneth family: Clarkson editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5236-5246 id: kalantzis21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5236 lastpage: 5246 published: 2021-07-01 00:00:00 +0000 - title: 'Optimal Off-Policy Evaluation from Multiple Logging Policies' abstract: 'We study off-policy evaluation (OPE) from multiple logging policies, each generating a dataset of fixed size, i.e., stratified sampling. Previous work noted that in this setting the ordering of the variances of different importance sampling estimators is instance-dependent, which brings up a dilemma as to which importance sampling weights to use. 
In this paper, we resolve this dilemma by finding the OPE estimator for multiple loggers with minimum variance for any instance, i.e., the efficient one. In particular, we establish the efficiency bound under stratified sampling and propose an estimator achieving this bound when given consistent $q$-estimates. To guard against misspecification of $q$-functions, we also provide a way to choose the control variate in a hypothesis class to minimize variance. Extensive experiments demonstrate the benefits of our methods in efficiently leveraging the stratified sampling of off-policy data from multiple loggers.' volume: 139 URL: https://proceedings.mlr.press/v139/kallus21a.html PDF: http://proceedings.mlr.press/v139/kallus21a/kallus21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kallus21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Nathan family: Kallus - given: Yuta family: Saito - given: Masatoshi family: Uehara editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5247-5256 id: kallus21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5247 lastpage: 5256 published: 2021-07-01 00:00:00 +0000 - title: 'Efficient Performance Bounds for Primal-Dual Reinforcement Learning from Demonstrations' abstract: 'We consider large-scale Markov decision processes with an unknown cost function and address the problem of learning a policy from a finite set of expert demonstrations. We assume that the learner is not allowed to interact with the expert and has no access to a reinforcement signal of any kind. Existing inverse reinforcement learning methods come with strong theoretical guarantees, but are computationally expensive, while state-of-the-art policy optimization algorithms achieve significant empirical success, but are hampered by limited theoretical understanding. To bridge the gap between theory and practice, we introduce a novel bilinear saddle-point framework using Lagrangian duality. The proposed primal-dual viewpoint allows us to develop a model-free provably efficient algorithm through the lens of stochastic convex optimization. The method enjoys the advantages of simplicity of implementation, low memory requirements, and computational and sample complexities independent of the number of states. We further present an equivalent no-regret online-learning interpretation.' volume: 139 URL: https://proceedings.mlr.press/v139/kamoutsi21a.html PDF: http://proceedings.mlr.press/v139/kamoutsi21a/kamoutsi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kamoutsi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Angeliki family: Kamoutsi - given: Goran family: Banjac - given: John family: Lygeros editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5257-5268 id: kamoutsi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5257 lastpage: 5268 published: 2021-07-01 00:00:00 +0000 - title: 'Statistical Estimation from Dependent Data' abstract: 'We consider a general statistical estimation problem wherein binary labels across different observations are not independent conditioned on their feature vectors, but dependent, capturing settings where e.g. 
these observations are collected on a spatial domain, a temporal domain, or a social network, which induce dependencies. We model these dependencies in the language of Markov Random Fields and, importantly, allow these dependencies to be substantial, i.e. do not assume that the Markov Random Field capturing these dependencies is in high temperature. As our main contribution we provide algorithms and statistically efficient estimation rates for this model, giving several instantiations of our bounds in logistic regression, sparse logistic regression, and neural network regression settings with dependent data. Our estimation guarantees follow from novel results for estimating the parameters (i.e. external fields and interaction strengths) of Ising models from a single sample.' volume: 139 URL: https://proceedings.mlr.press/v139/kandiros21a.html PDF: http://proceedings.mlr.press/v139/kandiros21a/kandiros21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kandiros21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Vardis family: Kandiros - given: Yuval family: Dagan - given: Nishanth family: Dikkala - given: Surbhi family: Goel - given: Constantinos family: Daskalakis editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5269-5278 id: kandiros21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5269 lastpage: 5278 published: 2021-07-01 00:00:00 +0000 - title: 'SKIing on Simplices: Kernel Interpolation on the Permutohedral Lattice for Scalable Gaussian Processes' abstract: 'State-of-the-art methods for scalable Gaussian processes use iterative algorithms, requiring fast matrix vector multiplies (MVMs) with the co-variance kernel. The Structured Kernel Interpolation (SKI) framework accelerates these MVMs by performing efficient MVMs on a grid and interpolating back to the original space. In this work, we develop a connection between SKI and the permutohedral lattice used for high-dimensional fast bilateral filtering. Using a sparse simplicial grid instead of a dense rectangular one, we can perform GP inference exponentially faster in the dimension than SKI. Our approach, Simplex-GP, enables scaling SKI to high dimensions, while maintaining strong predictive performance. We additionally provide a CUDA implementation of Simplex-GP, which enables significant GPU acceleration of MVM based inference.' volume: 139 URL: https://proceedings.mlr.press/v139/kapoor21a.html PDF: http://proceedings.mlr.press/v139/kapoor21a/kapoor21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kapoor21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sanyam family: Kapoor - given: Marc family: Finzi - given: Ke Alexander family: Wang - given: Andrew Gordon Gordon family: Wilson editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5279-5289 id: kapoor21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5279 lastpage: 5289 published: 2021-07-01 00:00:00 +0000 - title: 'Variational Auto-Regressive Gaussian Processes for Continual Learning' abstract: 'Through sequential construction of posteriors on observing data online, Bayes’ theorem provides a natural framework for continual learning. 
We develop Variational Auto-Regressive Gaussian Processes (VAR-GPs), a principled posterior updating mechanism to solve sequential tasks in continual learning. By relying on sparse inducing point approximations for scalable posteriors, we propose a novel auto-regressive variational distribution which reveals two fruitful connections to existing results in Bayesian inference, expectation propagation and orthogonal inducing points. Mean predictive entropy estimates show VAR-GPs prevent catastrophic forgetting, which is empirically supported by strong performance on modern continual learning benchmarks against competitive baselines. A thorough ablation study demonstrates the efficacy of our modeling choices.' volume: 139 URL: https://proceedings.mlr.press/v139/kapoor21b.html PDF: http://proceedings.mlr.press/v139/kapoor21b/kapoor21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kapoor21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sanyam family: Kapoor - given: Theofanis family: Karaletsos - given: Thang D family: Bui editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5290-5300 id: kapoor21b issued: date-parts: - 2021 - 7 - 1 firstpage: 5290 lastpage: 5300 published: 2021-07-01 00:00:00 +0000 - title: 'Off-Policy Confidence Sequences' abstract: 'We develop confidence bounds that hold uniformly over time for off-policy evaluation in the contextual bandit setting. These confidence sequences are based on recent ideas from martingale analysis and are non-asymptotic, non-parametric, and valid at arbitrary stopping times. We provide algorithms for computing these confidence sequences that strike a good balance between computational and statistical efficiency. We empirically demonstrate the tightness of our approach in terms of failure probability and width and apply it to the “gated deployment” problem of safely upgrading a production contextual bandit system.' volume: 139 URL: https://proceedings.mlr.press/v139/karampatziakis21a.html PDF: http://proceedings.mlr.press/v139/karampatziakis21a/karampatziakis21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-karampatziakis21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Nikos family: Karampatziakis - given: Paul family: Mineiro - given: Aaditya family: Ramdas editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5301-5310 id: karampatziakis21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5301 lastpage: 5310 published: 2021-07-01 00:00:00 +0000 - title: 'Learning from History for Byzantine Robust Optimization' abstract: 'Byzantine robustness has received significant attention recently given its importance for distributed and federated learning. In spite of this, we identify severe flaws in existing algorithms even when the data across the participants is identically distributed. First, we show realistic examples where current state of the art robust aggregation rules fail to converge even in the absence of any Byzantine attackers. Secondly, we prove that even if the aggregation rules may succeed in limiting the influence of the attackers in a single round, the attackers can couple their attacks across time eventually leading to divergence. 
To address these issues, we present two surprisingly simple strategies: a new robust iterative clipping procedure, and incorporating worker momentum to overcome time-coupled attacks. This is the first provably robust method for the standard stochastic optimization setting.' volume: 139 URL: https://proceedings.mlr.press/v139/karimireddy21a.html PDF: http://proceedings.mlr.press/v139/karimireddy21a/karimireddy21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-karimireddy21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sai Praneeth family: Karimireddy - given: Lie family: He - given: Martin family: Jaggi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5311-5319 id: karimireddy21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5311 lastpage: 5319 published: 2021-07-01 00:00:00 +0000 - title: 'Non-Negative Bregman Divergence Minimization for Deep Direct Density Ratio Estimation' abstract: 'Density ratio estimation (DRE) is at the core of various machine learning tasks such as anomaly detection and domain adaptation. In the DRE literature, existing studies have extensively studied methods based on Bregman divergence (BD) minimization. However, when we apply the BD minimization with highly flexible models, such as deep neural networks, it tends to suffer from what we call train-loss hacking, which is a source of over-fitting caused by a typical characteristic of empirical BD estimators. In this paper, to mitigate train-loss hacking, we propose non-negative correction for empirical BD estimators. Theoretically, we confirm the soundness of the proposed method through a generalization error bound. In our experiments, the proposed methods show favorable performances in inlier-based outlier detection.' volume: 139 URL: https://proceedings.mlr.press/v139/kato21a.html PDF: http://proceedings.mlr.press/v139/kato21a/kato21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kato21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Masahiro family: Kato - given: Takeshi family: Teshima editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5320-5333 id: kato21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5320 lastpage: 5333 published: 2021-07-01 00:00:00 +0000 - title: 'Improved Algorithms for Agnostic Pool-based Active Classification' abstract: 'We consider active learning for binary classification in the agnostic pool-based setting. The vast majority of works in active learning in the agnostic setting are inspired by the CAL algorithm where each query is uniformly sampled from the disagreement region of the current version space. The sample complexity of such algorithms is described by a quantity known as the disagreement coefficient which captures both the geometry of the hypothesis space as well as the underlying probability space. To date, the disagreement coefficient has been justified by minimax lower bounds only, leaving the door open for superior instance dependent sample complexities. In this work we propose an algorithm that, in contrast to uniform sampling over the disagreement region, solves an experimental design problem to determine a distribution over examples from which to request labels. 
We show that the new approach achieves sample complexity bounds that are never worse than the best disagreement coefficient-based bounds, but in specific cases can be dramatically smaller. From a practical perspective, the proposed algorithm requires no hyperparameters to tune (e.g., to control the aggressiveness of sampling), and is computationally efficient, assuming access to an empirical risk minimization oracle (without any constraints). Empirically, we demonstrate that our algorithm is superior to state-of-the-art agnostic active learning algorithms on image classification datasets.' volume: 139 URL: https://proceedings.mlr.press/v139/katz-samuels21a.html PDF: http://proceedings.mlr.press/v139/katz-samuels21a/katz-samuels21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-katz-samuels21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Julian family: Katz-Samuels - given: Jifan family: Zhang - given: Lalit family: Jain - given: Kevin family: Jamieson editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5334-5344 id: katz-samuels21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5334 lastpage: 5344 published: 2021-07-01 00:00:00 +0000 - title: 'When Does Data Augmentation Help With Membership Inference Attacks?' abstract: 'Deep learning models often raise privacy concerns as they leak information about their training data. This leakage enables membership inference attacks (MIA) that can identify whether a data point was in a model’s training set. Research shows that some ’data augmentation’ mechanisms may reduce the risk by combatting a key factor increasing the leakage, overfitting. While many mechanisms exist, their effectiveness against MIAs and privacy properties have not been studied systematically. Employing two recent MIAs, we explore the lower bound on the risk in the absence of formal upper bounds. First, we evaluate 7 mechanisms and differential privacy on three image classification tasks. We find that applying augmentation to increase the model’s utility does not mitigate the risk and protection comes with a utility penalty. Further, we also investigate why the popular label smoothing mechanism consistently amplifies the risk. Finally, we propose the ’loss-rank-correlation’ (LRC) metric to assess how similar the effects of different mechanisms are. This, for example, reveals the similarity of applying high-intensity augmentation against MIAs to simply reducing the training time. Our findings emphasize the utility-privacy trade-off and provide practical guidelines on using augmentation to manage the trade-off.' 
volume: 139 URL: https://proceedings.mlr.press/v139/kaya21a.html PDF: http://proceedings.mlr.press/v139/kaya21a/kaya21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kaya21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yigitcan family: Kaya - given: Tudor family: Dumitras editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5345-5355 id: kaya21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5345 lastpage: 5355 published: 2021-07-01 00:00:00 +0000 - title: 'Regularized Submodular Maximization at Scale' abstract: 'In this paper, we propose scalable methods for maximizing a regularized submodular function $f \triangleq g-\ell$ expressed as the difference between a monotone submodular function $g$ and a modular function $\ell$. Submodularity is inherently related to the notions of diversity, coverage, and representativeness. In particular, finding the mode (i.e., the most likely configuration) of many popular probabilistic models of diversity, such as determinantal point processes and strongly log-concave distributions, involves maximization of (regularized) submodular functions. Since a regularized function $f$ can potentially take on negative values, the classic theory of submodular maximization, which heavily relies on the non-negativity assumption of submodular functions, is not applicable. To circumvent this challenge, we develop the first one-pass streaming algorithm for maximizing a regularized submodular function subject to a $k$-cardinality constraint. Furthermore, we develop the first distributed algorithm that returns a solution $S$ in $O(1/\epsilon)$ rounds of MapReduce computation. We highlight that our result, even for the unregularized case where the modular term $\ell$ is zero, improves the memory and communication complexity of the state-of-the-art by a factor of $O(1/\epsilon)$ while arguably providing a simpler distributed algorithm and a unifying analysis. We empirically study the performance of our scalable methods on a set of real-life applications, including finding the mode of negatively correlated distributions, vertex cover of social networks, and several data summarization tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/kazemi21a.html PDF: http://proceedings.mlr.press/v139/kazemi21a/kazemi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kazemi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ehsan family: Kazemi - given: Shervin family: Minaee - given: Moran family: Feldman - given: Amin family: Karbasi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5356-5366 id: kazemi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5356 lastpage: 5366 published: 2021-07-01 00:00:00 +0000 - title: 'Prior Image-Constrained Reconstruction using Style-Based Generative Models' abstract: 'Obtaining a useful estimate of an object from highly incomplete imaging measurements remains a holy grail of imaging science. Deep learning methods have shown promise in learning object priors or constraints to improve the conditioning of an ill-posed imaging inverse problem. In this study, a framework for estimating an object of interest that is semantically related to a known prior image is proposed. 
An optimization problem is formulated in the disentangled latent space of a style-based generative model, and semantically meaningful constraints are imposed using the disentangled latent representation of the prior image. Stable recovery from incomplete measurements with the help of a prior image is theoretically analyzed. Numerical experiments demonstrating the superior performance of our approach as compared to related methods are presented.' volume: 139 URL: https://proceedings.mlr.press/v139/kelkar21a.html PDF: http://proceedings.mlr.press/v139/kelkar21a/kelkar21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kelkar21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Varun A family: Kelkar - given: Mark family: Anastasio editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5367-5377 id: kelkar21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5367 lastpage: 5377 published: 2021-07-01 00:00:00 +0000 - title: 'Self Normalizing Flows' abstract: 'Efficient gradient computation of the Jacobian determinant term is a core problem in many machine learning settings, and especially so in the normalizing flow framework. Most proposed flow models therefore either restrict themselves to a function class with easy evaluation of the Jacobian determinant, or rely on an efficient estimator thereof. However, these restrictions limit the performance of such density models, frequently requiring significant depth to reach desired performance levels. In this work, we propose \emph{Self Normalizing Flows}, a flexible framework for training normalizing flows by replacing expensive terms in the gradient by learned approximate inverses at each layer. This reduces the computational complexity of each layer’s exact update from $\mathcal{O}(D^3)$ to $\mathcal{O}(D^2)$, allowing for the training of flow architectures which were otherwise computationally infeasible, while also providing efficient sampling. We show experimentally that such models are remarkably stable and optimize to similar data likelihood values as their exact gradient counterparts, while training more quickly and surpassing the performance of functionally constrained counterparts.' volume: 139 URL: https://proceedings.mlr.press/v139/keller21a.html PDF: http://proceedings.mlr.press/v139/keller21a/keller21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-keller21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Thomas A family: Keller - given: Jorn W.T. family: Peters - given: Priyank family: Jaini - given: Emiel family: Hoogeboom - given: Patrick family: Forré - given: Max family: Welling editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5378-5387 id: keller21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5378 lastpage: 5387 published: 2021-07-01 00:00:00 +0000 - title: 'Interpretable Stability Bounds for Spectral Graph Filters' abstract: 'Graph-structured data arise in a variety of real-world contexts ranging from sensor and transportation to biological and social networks. As a ubiquitous tool to process graph-structured data, spectral graph filters have been used to solve common tasks such as denoising and anomaly detection, as well as to design deep learning architectures such as graph neural networks. 
Despite being an important tool, there is a lack of theoretical understanding of the stability properties of spectral graph filters, which are important for designing robust machine learning models. In this paper, we study filter stability and provide a novel and interpretable upper bound on the change of filter output, where the bound is expressed in terms of the endpoint degrees of the deleted and newly added edges, as well as the spatial proximity of those edges. This upper bound allows us to reason, in terms of structural properties of the graph, when a spectral graph filter will be stable. We further perform extensive experiments to verify the intuition that can be gained from the bound.' volume: 139 URL: https://proceedings.mlr.press/v139/kenlay21a.html PDF: http://proceedings.mlr.press/v139/kenlay21a/kenlay21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kenlay21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Henry family: Kenlay - given: Dorina family: Thanou - given: Xiaowen family: Dong editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5388-5397 id: kenlay21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5388 lastpage: 5397 published: 2021-07-01 00:00:00 +0000 - title: 'Affine Invariant Analysis of Frank-Wolfe on Strongly Convex Sets' abstract: 'It is known that the Frank-Wolfe (FW) algorithm, which is affine covariant, enjoys faster convergence rates than $\mathcal{O}\left(1/K\right)$ when the constraint set is strongly convex. However, these results rely on norm-dependent assumptions, usually incurring non-affine invariant bounds, in contradiction with FW’s affine covariant property. In this work, we introduce new structural assumptions on the problem (such as the directional smoothness) and derive an affine invariant, norm-independent analysis of Frank-Wolfe. We show that our rates are better than any other known convergence rates of FW in this setting. Based on our analysis, we propose an affine invariant backtracking line-search. Interestingly, we show that typical backtracking line-searches using smoothness of the objective function perform similarly to their affine invariant counterpart, despite using affine-dependent norms in the step size’s computation.' volume: 139 URL: https://proceedings.mlr.press/v139/kerdreux21a.html PDF: http://proceedings.mlr.press/v139/kerdreux21a/kerdreux21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kerdreux21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Thomas family: Kerdreux - given: Lewis family: Liu - given: Simon family: Lacoste-Julien - given: Damien family: Scieur editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5398-5408 id: kerdreux21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5398 lastpage: 5408 published: 2021-07-01 00:00:00 +0000 - title: 'Markpainting: Adversarial Machine Learning meets Inpainting' abstract: 'Inpainting is a learned interpolation technique that is based on generative modeling and used to populate masked or missing pieces in an image; it has wide applications in picture editing and retouching. Recently, inpainting started being used for watermark removal, raising concerns. 
In this paper we study how to manipulate it using our markpainting technique. First, we show how an image owner with access to an inpainting model can augment their image in such a way that any attempt to edit it using that model will add arbitrary visible information. We find that we can target multiple different models simultaneously with our technique. This can be designed to reconstitute a watermark if the editor had been trying to remove it. Second, we show that our markpainting technique is transferable to models that have different architectures or were trained on different datasets, so watermarks created using it are difficult for adversaries to remove. Markpainting is novel and can be used as a manipulation alarm that becomes visible in the event of inpainting. Source code is available at: https://github.com/iliaishacked/markpainting.' volume: 139 URL: https://proceedings.mlr.press/v139/khachaturov21a.html PDF: http://proceedings.mlr.press/v139/khachaturov21a/khachaturov21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-khachaturov21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: David family: Khachaturov - given: Ilia family: Shumailov - given: Yiren family: Zhao - given: Nicolas family: Papernot - given: Ross family: Anderson editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5409-5419 id: khachaturov21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5409 lastpage: 5419 published: 2021-07-01 00:00:00 +0000 - title: 'Finite-Sample Analysis of Off-Policy Natural Actor-Critic Algorithm' abstract: 'In this paper, we provide finite-sample convergence guarantees for an off-policy variant of the natural actor-critic (NAC) algorithm based on Importance Sampling. In particular, we show that the algorithm converges to a global optimal policy with a sample complexity of $\mathcal{O}(\epsilon^{-3}\log^2(1/\epsilon))$ under an appropriate choice of stepsizes. In order to overcome the issue of large variance due to Importance Sampling, we propose the $Q$-trace algorithm for the critic, which is inspired by the V-trace algorithm (Espeholt et al., 2018). This enables us to explicitly control the bias and variance, and characterize the trade-off between them. As an advantage of off-policy sampling, a major feature of our result is that we do not need any additional assumptions, beyond the ergodicity of the Markov chain induced by the behavior policy.' volume: 139 URL: https://proceedings.mlr.press/v139/khodadadian21a.html PDF: http://proceedings.mlr.press/v139/khodadadian21a/khodadadian21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-khodadadian21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sajad family: Khodadadian - given: Zaiwei family: Chen - given: Siva Theja family: Maguluri editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5420-5431 id: khodadadian21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5420 lastpage: 5431 published: 2021-07-01 00:00:00 +0000 - title: 'Functional Space Analysis of Local GAN Convergence' abstract: 'Recent work demonstrated the benefits of studying continuous-time dynamics governing the GAN training. 
However, this dynamics is analyzed in the model parameter space, which results in finite-dimensional dynamical systems. We propose a novel perspective where we study the local dynamics of adversarial training in the general functional space and show how it can be represented as a system of partial differential equations. Thus, the convergence properties can be inferred from the eigenvalues of the resulting differential operator. We show that these eigenvalues can be efficiently estimated from the target dataset before training. Our perspective reveals several insights on the practical tricks commonly used to stabilize GANs, such as gradient penalty, data augmentation, and advanced integration schemes. As an immediate practical benefit, we demonstrate how one can a priori select an optimal data augmentation strategy for a particular generation task.' volume: 139 URL: https://proceedings.mlr.press/v139/khrulkov21a.html PDF: http://proceedings.mlr.press/v139/khrulkov21a/khrulkov21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-khrulkov21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Valentin family: Khrulkov - given: Artem family: Babenko - given: Ivan family: Oseledets editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5432-5442 id: khrulkov21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5432 lastpage: 5442 published: 2021-07-01 00:00:00 +0000 - title: '"Hey, that’s not an ODE": Faster ODE Adjoints via Seminorms' abstract: 'Neural differential equations may be trained by backpropagating gradients via the adjoint method, which is another differential equation typically solved using an adaptive-step-size numerical differential equation solver. A proposed step is accepted if its error, \emph{relative to some norm}, is sufficiently small; else it is rejected, the step is shrunk, and the process is repeated. Here, we demonstrate that the particular structure of the adjoint equations makes the usual choices of norm (such as $L^2$) unnecessarily stringent. By replacing it with a more appropriate (semi)norm, fewer steps are unnecessarily rejected and the backpropagation is made faster. This requires only minor code modifications. Experiments on a wide range of tasks—including time series, generative modeling, and physical control—demonstrate a median improvement of 40% fewer function evaluations. On some problems we see as much as 62% fewer function evaluations, so that the overall training time is roughly halved.' volume: 139 URL: https://proceedings.mlr.press/v139/kidger21a.html PDF: http://proceedings.mlr.press/v139/kidger21a/kidger21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kidger21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Patrick family: Kidger - given: Ricky T. Q. family: Chen - given: Terry J family: Lyons editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5443-5452 id: kidger21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5443 lastpage: 5452 published: 2021-07-01 00:00:00 +0000 - title: 'Neural SDEs as Infinite-Dimensional GANs' abstract: 'Stochastic differential equations (SDEs) are a staple of mathematical modelling of temporal dynamics. 
However, a fundamental limitation has been that such models have typically been relatively inflexible, which recent work introducing Neural SDEs has sought to solve. Here, we show that the current classical approach to fitting SDEs may be viewed as a special case of (Wasserstein) GANs, and in doing so the neural and classical regimes may be brought together. The input noise is Brownian motion, the output samples are time-evolving paths produced by a numerical solver, and by parameterising a discriminator as a Neural Controlled Differential Equation (CDE), we obtain Neural SDEs as (in modern machine learning parlance) continuous-time generative time series models. Unlike previous work on this problem, this is a direct extension of the classical approach without reference to either prespecified statistics or density functions. Arbitrary drift and diffusions are admissible, so, as the Wasserstein loss has a unique global minimum, in the infinite data limit \textit{any} SDE may be learnt.' volume: 139 URL: https://proceedings.mlr.press/v139/kidger21b.html PDF: http://proceedings.mlr.press/v139/kidger21b/kidger21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kidger21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Patrick family: Kidger - given: James family: Foster - given: Xuechen family: Li - given: Terry J family: Lyons editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5453-5463 id: kidger21b issued: date-parts: - 2021 - 7 - 1 firstpage: 5453 lastpage: 5463 published: 2021-07-01 00:00:00 +0000 - title: 'GRAD-MATCH: Gradient Matching based Data Subset Selection for Efficient Deep Model Training' abstract: 'The great success of modern machine learning models on large datasets is contingent on extensive computational resources with high financial and environmental costs. One way to address this is by extracting subsets that generalize on par with the full data. In this work, we propose a general framework, GRAD-MATCH, which finds subsets that closely match the gradient of the \emph{training or validation} set. We find such subsets effectively using an orthogonal matching pursuit algorithm. We show rigorous theoretical and convergence guarantees of the proposed algorithm and, through our extensive experiments on real-world datasets, show the effectiveness of our proposed framework. We show that GRAD-MATCH significantly and consistently outperforms several recent data-selection algorithms and achieves the best accuracy-efficiency trade-off. GRAD-MATCH is available as a part of the CORDS toolkit: \url{https://github.com/decile-team/cords}.' 
volume: 139 URL: https://proceedings.mlr.press/v139/killamsetty21a.html PDF: http://proceedings.mlr.press/v139/killamsetty21a/killamsetty21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-killamsetty21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Krishnateja family: Killamsetty - given: Durga family: S - given: Ganesh family: Ramakrishnan - given: Abir family: De - given: Rishabh family: Iyer editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5464-5474 id: killamsetty21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5464 lastpage: 5474 published: 2021-07-01 00:00:00 +0000 - title: 'Improving Predictors via Combination Across Diverse Task Categories' abstract: 'Predictor combination is the problem of improving a task predictor using predictors of other tasks when the forms of individual predictors are unknown. Previous work approached this problem by nonparametrically assessing predictor relationships based on their joint evaluations on a shared sample. This limits their application to cases where all predictors are defined on the same task category, e.g. all predictors estimate attributes of shoes. We present a new predictor combination algorithm that overcomes this limitation. Our algorithm aligns the heterogeneous domains of different predictors in a shared latent space to facilitate comparisons of predictors independently of the domains on which they are originally defined. We facilitate this by a new data alignment scheme that matches data distributions across task categories. Based on visual attribute ranking experiments on datasets that span diverse task categories (e.g. shoes and animals), we demonstrate that our approach often significantly improves the performances of the initial predictors.' volume: 139 URL: https://proceedings.mlr.press/v139/kim21a.html PDF: http://proceedings.mlr.press/v139/kim21a/kim21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kim21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kwang In family: Kim editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5475-5485 id: kim21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5475 lastpage: 5485 published: 2021-07-01 00:00:00 +0000 - title: 'Self-Improved Retrosynthetic Planning' abstract: 'Retrosynthetic planning is a fundamental problem in chemistry for finding a pathway of reactions to synthesize a target molecule. Recently, search algorithms have shown promising results for solving this problem by using deep neural networks (DNNs) to expand their candidate solutions, i.e., adding new reactions to reaction pathways. However, the existing works on this line are suboptimal; the retrosynthetic planning problem requires the reaction pathways to be (a) represented by real-world reactions and (b) executable using “building block” molecules, yet the DNNs expand reaction pathways without fully incorporating such requirements. Motivated by this, we propose an end-to-end framework for directly training the DNNs towards generating reaction pathways with the desirable properties. Our main idea is based on a self-improving procedure that trains the model to imitate successful trajectories found by itself. 
We also propose a novel reaction augmentation scheme based on a forward reaction model. Our experiments demonstrate that our scheme significantly improves the success rate of solving the retrosynthetic problem from 86.84% to 96.32% while maintaining the performance of the DNN for predicting valid reactions.' volume: 139 URL: https://proceedings.mlr.press/v139/kim21b.html PDF: http://proceedings.mlr.press/v139/kim21b/kim21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kim21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Junsu family: Kim - given: Sungsoo family: Ahn - given: Hankook family: Lee - given: Jinwoo family: Shin editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5486-5495 id: kim21b issued: date-parts: - 2021 - 7 - 1 firstpage: 5486 lastpage: 5495 published: 2021-07-01 00:00:00 +0000 - title: 'Reward Identification in Inverse Reinforcement Learning' abstract: 'We study the problem of reward identifiability in the context of Inverse Reinforcement Learning (IRL). The reward identifiability question is critical to answer when reasoning about the effectiveness of using Markov Decision Processes (MDPs) as computational models of real world decision makers in order to understand complex decision making behavior and perform counterfactual reasoning. While identifiability has been acknowledged as a fundamental theoretical question in IRL, little is known about the types of MDPs for which rewards are identifiable, or even if there exist such MDPs. In this work, we formalize the reward identification problem in IRL and study how identifiability relates to properties of the MDP model. For deterministic MDP models with the MaxEntRL objective, we prove necessary and sufficient conditions for identifiability. Building on these results, we present efficient algorithms for testing whether or not an MDP model is identifiable.' volume: 139 URL: https://proceedings.mlr.press/v139/kim21c.html PDF: http://proceedings.mlr.press/v139/kim21c/kim21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kim21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kuno family: Kim - given: Shivam family: Garg - given: Kirankumar family: Shiragur - given: Stefano family: Ermon editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5496-5505 id: kim21c issued: date-parts: - 2021 - 7 - 1 firstpage: 5496 lastpage: 5505 published: 2021-07-01 00:00:00 +0000 - title: 'I-BERT: Integer-only BERT Quantization' abstract: 'Transformer based models, like BERT and RoBERTa, have achieved state-of-the-art results in many Natural Language Processing tasks. However, their memory footprint, inference latency, and power consumption are prohibitive for efficient inference at the edge, and even at the data center. While quantization can be a viable solution for this, previous work on quantizing Transformer based models uses floating-point arithmetic during inference, which cannot efficiently utilize integer-only logical units such as the recent Turing Tensor Cores, or traditional integer-only ARM processors. In this work, we propose I-BERT, a novel quantization scheme for Transformer based models that quantizes the entire inference with integer-only arithmetic. 
Based on lightweight integer-only approximation methods for nonlinear operations, e.g., GELU, Softmax, and Layer Normalization, I-BERT performs an end-to-end integer-only BERT inference without any floating point calculation. We evaluate our approach on GLUE downstream tasks using RoBERTa-Base/Large. We show that for both cases, I-BERT achieves similar (and slightly higher) accuracy as compared to the full-precision baseline. Furthermore, our preliminary implementation of I-BERT shows a speedup of 2.4-4.0x for INT8 inference on a T4 GPU system as compared to FP32 inference. The framework has been developed in PyTorch and has been open-sourced.' volume: 139 URL: https://proceedings.mlr.press/v139/kim21d.html PDF: http://proceedings.mlr.press/v139/kim21d/kim21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kim21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sehoon family: Kim - given: Amir family: Gholami - given: Zhewei family: Yao - given: Michael W. family: Mahoney - given: Kurt family: Keutzer editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5506-5518 id: kim21d issued: date-parts: - 2021 - 7 - 1 firstpage: 5506 lastpage: 5518 published: 2021-07-01 00:00:00 +0000 - title: 'Message Passing Adaptive Resonance Theory for Online Active Semi-supervised Learning' abstract: 'Active learning is widely used to reduce labeling effort and training time by repeatedly querying only the most beneficial samples from unlabeled data. In real-world problems where data cannot be stored indefinitely due to limited storage or privacy issues, the query selection and the model update should be performed as soon as a new data sample is observed. Various online active learning methods have been studied to deal with these challenges; however, there are difficulties in selecting representative query samples and updating the model efficiently without forgetting. In this study, we propose Message Passing Adaptive Resonance Theory (MPART) that learns the distribution and topology of input data online. Through message passing on the topological graph, MPART actively queries informative and representative samples, and continuously improves the classification performance using both labeled and unlabeled data. We evaluate our model in stream-based selective sampling scenarios with comparable query selection strategies, showing that MPART significantly outperforms competitive models.' 
volume: 139 URL: https://proceedings.mlr.press/v139/kim21e.html PDF: http://proceedings.mlr.press/v139/kim21e/kim21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kim21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Taehyeong family: Kim - given: Injune family: Hwang - given: Hyundo family: Lee - given: Hyunseo family: Kim - given: Won-Seok family: Choi - given: Joseph J family: Lim - given: Byoung-Tak family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5519-5529 id: kim21e issued: date-parts: - 2021 - 7 - 1 firstpage: 5519 lastpage: 5529 published: 2021-07-01 00:00:00 +0000 - title: 'Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech' abstract: 'Several recent end-to-end text-to-speech (TTS) models enabling single-stage training and parallel sampling have been proposed, but their sample quality does not match that of two-stage TTS systems. In this work, we present a parallel end-to-end TTS method that generates more natural sounding audio than current two-stage models. Our method adopts variational inference augmented with normalizing flows and an adversarial training process, which improves the expressive power of generative modeling. We also propose a stochastic duration predictor to synthesize speech with diverse rhythms from input text. With the uncertainty modeling over latent variables and the stochastic duration predictor, our method expresses the natural one-to-many relationship in which a text input can be spoken in multiple ways with different pitches and rhythms. A subjective human evaluation (mean opinion score, or MOS) on the LJ Speech, a single speaker dataset, shows that our method outperforms the best publicly available TTS systems and achieves a MOS comparable to ground truth.' volume: 139 URL: https://proceedings.mlr.press/v139/kim21f.html PDF: http://proceedings.mlr.press/v139/kim21f/kim21f.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kim21f.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jaehyeon family: Kim - given: Jungil family: Kong - given: Juhee family: Son editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5530-5540 id: kim21f issued: date-parts: - 2021 - 7 - 1 firstpage: 5530 lastpage: 5540 published: 2021-07-01 00:00:00 +0000 - title: 'A Policy Gradient Algorithm for Learning to Learn in Multiagent Reinforcement Learning' abstract: 'A fundamental challenge in multiagent reinforcement learning is to learn beneficial behaviors in a shared environment with other simultaneously learning agents. In particular, each agent perceives the environment as effectively non-stationary due to the changing policies of other agents. Moreover, each agent is itself constantly learning, leading to natural non-stationarity in the distribution of experiences encountered. In this paper, we propose a novel meta-multiagent policy gradient theorem that directly accounts for the non-stationary policy dynamics inherent to multiagent learning settings. This is achieved by modeling our gradient updates to consider both an agent’s own non-stationary policy dynamics and the non-stationary policy dynamics of other agents in the environment. 
We show that our theoretically grounded approach provides a general solution to the multiagent learning problem, which inherently comprises all key aspects of previous state of the art approaches on this topic. We test our method on a diverse suite of multiagent benchmarks and demonstrate a more efficient ability to adapt to new agents as they learn than baseline methods across the full spectrum of mixed incentive, competitive, and cooperative domains.' volume: 139 URL: https://proceedings.mlr.press/v139/kim21g.html PDF: http://proceedings.mlr.press/v139/kim21g/kim21g.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kim21g.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dong Ki family: Kim - given: Miao family: Liu - given: Matthew D family: Riemer - given: Chuangchuang family: Sun - given: Marwa family: Abdulhai - given: Golnaz family: Habibi - given: Sebastian family: Lopez-Cot - given: Gerald family: Tesauro - given: Jonathan family: How editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5541-5550 id: kim21g issued: date-parts: - 2021 - 7 - 1 firstpage: 5541 lastpage: 5550 published: 2021-07-01 00:00:00 +0000 - title: 'Inferring Latent Dynamics Underlying Neural Population Activity via Neural Differential Equations' abstract: 'An important problem in systems neuroscience is to identify the latent dynamics underlying neural population activity. Here we address this problem by introducing a low-dimensional nonlinear model for latent neural population dynamics using neural ordinary differential equations (neural ODEs), with noisy sensory inputs and Poisson spike train outputs. We refer to this as the Poisson Latent Neural Differential Equations (PLNDE) model. We apply the PLNDE framework to a variety of synthetic datasets, and show that it accurately infers the phase portraits and fixed points of nonlinear systems augmented to produce spike train data, including the FitzHugh-Nagumo oscillator, a 3-dimensional nonlinear spiral, and a nonlinear sensory decision-making model with attractor dynamics. Our model significantly outperforms existing methods at inferring single-trial neural firing rates and the corresponding latent trajectories that generated them, especially in the regime where the spike counts and number of trials are low. We then apply our model to multi-region neural population recordings from medial frontal cortex of rats performing an auditory decision-making task. Our model provides a general, interpretable framework for investigating the neural mechanisms of decision-making and other cognitive computations through the lens of dynamical systems.' volume: 139 URL: https://proceedings.mlr.press/v139/kim21h.html PDF: http://proceedings.mlr.press/v139/kim21h/kim21h.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kim21h.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Timothy D. family: Kim - given: Thomas Z. family: Luo - given: Jonathan W. family: Pillow - given: Carlos D. 
family: Brody editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5551-5561 id: kim21h issued: date-parts: - 2021 - 7 - 1 firstpage: 5551 lastpage: 5561 published: 2021-07-01 00:00:00 +0000 - title: 'The Lipschitz Constant of Self-Attention' abstract: 'Lipschitz constants of neural networks have been explored in various contexts in deep learning, such as provable adversarial robustness, estimating Wasserstein distance, stabilising training of GANs, and formulating invertible neural networks. Such works have focused on bounding the Lipschitz constant of fully connected or convolutional networks, composed of linear maps and pointwise non-linearities. In this paper, we investigate the Lipschitz constant of self-attention, a non-linear neural network module widely used in sequence modelling. We prove that the standard dot-product self-attention is not Lipschitz for unbounded input domain, and propose an alternative L2 self-attention that is Lipschitz. We derive an upper bound on the Lipschitz constant of L2 self-attention and provide empirical evidence for its asymptotic tightness. To demonstrate the practical relevance of our theoretical work, we formulate invertible self-attention and use it in a Transformer-based architecture for a character-level language modelling task.' volume: 139 URL: https://proceedings.mlr.press/v139/kim21i.html PDF: http://proceedings.mlr.press/v139/kim21i/kim21i.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kim21i.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hyunjik family: Kim - given: George family: Papamakarios - given: Andriy family: Mnih editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5562-5571 id: kim21i issued: date-parts: - 2021 - 7 - 1 firstpage: 5562 lastpage: 5571 published: 2021-07-01 00:00:00 +0000 - title: 'Unsupervised Skill Discovery with Bottleneck Option Learning' abstract: 'Having the ability to acquire inherent skills from environments without any external rewards or supervision like humans is an important problem. We propose a novel unsupervised skill discovery method named Information Bottleneck Option Learning (IBOL). On top of the linearization of environments that promotes more various and distant state transitions, IBOL enables the discovery of diverse skills. It provides the abstraction of the skills learned with the information bottleneck framework for the options with improved stability and encouraged disentanglement. We empirically demonstrate that IBOL outperforms multiple state-of-the-art unsupervised skill discovery methods on the information-theoretic evaluations and downstream tasks in MuJoCo environments, including Ant, HalfCheetah, Hopper and D’Kitty. Our code is available at https://vision.snu.ac.kr/projects/ibol.' 
volume: 139 URL: https://proceedings.mlr.press/v139/kim21j.html PDF: http://proceedings.mlr.press/v139/kim21j/kim21j.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kim21j.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jaekyeom family: Kim - given: Seohong family: Park - given: Gunhee family: Kim editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5572-5582 id: kim21j issued: date-parts: - 2021 - 7 - 1 firstpage: 5572 lastpage: 5582 published: 2021-07-01 00:00:00 +0000 - title: 'ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision' abstract: 'Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks. Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision (e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded to the expressive power of the visual embedder and its predefined visual vocabulary. In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to just the same convolution-free manner that we process textual inputs. We show that ViLT is up to tens of times faster than previous VLP models, yet with competitive or better downstream task performance. Our code and pre-trained weights are available at https://github.com/dandelin/vilt.' volume: 139 URL: https://proceedings.mlr.press/v139/kim21k.html PDF: http://proceedings.mlr.press/v139/kim21k/kim21k.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kim21k.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Wonjae family: Kim - given: Bokyung family: Son - given: Ildoo family: Kim editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5583-5594 id: kim21k issued: date-parts: - 2021 - 7 - 1 firstpage: 5583 lastpage: 5594 published: 2021-07-01 00:00:00 +0000 - title: 'Bias-Robust Bayesian Optimization via Dueling Bandits' abstract: 'We consider Bayesian optimization in settings where observations can be adversarially biased, for example by an uncontrolled hidden confounder. Our first contribution is a reduction of the confounded setting to the dueling bandit model. Then we propose a novel approach for dueling bandits based on information-directed sampling (IDS). Thereby, we obtain the first efficient kernelized algorithm for dueling bandits that comes with cumulative regret guarantees. Our analysis further generalizes a previously proposed semi-parametric linear bandit model to non-linear reward functions, and uncovers interesting links to doubly-robust estimation.' 
volume: 139 URL: https://proceedings.mlr.press/v139/kirschner21a.html PDF: http://proceedings.mlr.press/v139/kirschner21a/kirschner21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kirschner21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Johannes family: Kirschner - given: Andreas family: Krause editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5595-5605 id: kirschner21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5595 lastpage: 5605 published: 2021-07-01 00:00:00 +0000 - title: 'CLOCS: Contrastive Learning of Cardiac Signals Across Space, Time, and Patients' abstract: 'The healthcare industry generates troves of unlabelled physiological data. This data can be exploited via contrastive learning, a self-supervised pre-training method that encourages representations of instances to be similar to one another. We propose a family of contrastive learning methods, CLOCS, that encourages representations across space, time, \textit{and} patients to be similar to one another. We show that CLOCS consistently outperforms the state-of-the-art methods, BYOL and SimCLR, when performing a linear evaluation of, and fine-tuning on, downstream tasks. We also show that CLOCS achieves strong generalization performance with only 25% of labelled training data. Furthermore, our training procedure naturally generates patient-specific representations that can be used to quantify patient-similarity.' volume: 139 URL: https://proceedings.mlr.press/v139/kiyasseh21a.html PDF: http://proceedings.mlr.press/v139/kiyasseh21a/kiyasseh21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kiyasseh21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dani family: Kiyasseh - given: Tingting family: Zhu - given: David A family: Clifton editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5606-5615 id: kiyasseh21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5606 lastpage: 5615 published: 2021-07-01 00:00:00 +0000 - title: 'Scalable Optimal Transport in High Dimensions for Graph Distances, Embedding Alignment, and More' abstract: 'The current best practice for computing optimal transport (OT) is via entropy regularization and Sinkhorn iterations. This algorithm runs in quadratic time as it requires the full pairwise cost matrix, which is prohibitively expensive for large sets of objects. In this work we propose two effective log-linear time approximations of the cost matrix: First, a sparse approximation based on locality sensitive hashing (LSH) and, second, a Nystr{ö}m approximation with LSH-based sparse corrections, which we call locally corrected Nystr{ö}m (LCN). These approximations enable general log-linear time algorithms for entropy-regularized OT that perform well even for the complex, high-dimensional spaces common in deep learning. We analyse these approximations theoretically and evaluate them experimentally both directly and end-to-end as a component for real-world applications. Using our approximations for unsupervised word embedding alignment enables us to speed up a state-of-the-art method by a factor of 3 while also improving the accuracy by 3.1 percentage points without any additional model changes. 
For graph distance regression we propose the graph transport network (GTN), which combines graph neural networks (GNNs) with enhanced Sinkhorn. GTN outcompetes previous models by 48% and still scales log-linearly in the number of nodes.' volume: 139 URL: https://proceedings.mlr.press/v139/gasteiger21a.html PDF: http://proceedings.mlr.press/v139/gasteiger21a/gasteiger21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-gasteiger21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Johannes family: Gasteiger - given: Marten family: Lienen - given: Stephan family: Günnemann editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5616-5627 id: gasteiger21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5616 lastpage: 5627 published: 2021-07-01 00:00:00 +0000 - title: 'Representational aspects of depth and conditioning in normalizing flows' abstract: 'Normalizing flows are among the most popular paradigms in generative modeling, especially for images, primarily because we can efficiently evaluate the likelihood of a data point. This is desirable both for evaluating the fit of a model, and for ease of training, as maximizing the likelihood can be done by gradient descent. However, training normalizing flows comes with difficulties as well: models which produce good samples typically need to be extremely deep – which comes with accompanying vanishing/exploding gradient problems. A very related problem is that they are often poorly \emph{conditioned}: since they are parametrized as invertible maps from $\mathbb{R}^d \to \mathbb{R}^d$, and typical training data like images intuitively is lower-dimensional, the learned maps often have Jacobians that are close to being singular. In our paper, we tackle representational aspects around depth and conditioning of normalizing flows: both for general invertible architectures, and for a particular common architecture, affine couplings. We prove that $\Theta(1)$ affine coupling layers suffice to exactly represent a permutation or $1 \times 1$ convolution, as used in GLOW, showing that representationally the choice of partition is not a bottleneck for depth. We also show that shallow affine coupling networks are universal approximators in Wasserstein distance if ill-conditioning is allowed, and experimentally investigate related phenomena involving padding. Finally, we show a depth lower bound for general flow architectures with few neurons per layer and bounded Lipschitz constant.' 
volume: 139 URL: https://proceedings.mlr.press/v139/koehler21a.html PDF: http://proceedings.mlr.press/v139/koehler21a/koehler21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-koehler21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Frederic family: Koehler - given: Viraj family: Mehta - given: Andrej family: Risteski editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5628-5636 id: koehler21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5628 lastpage: 5636 published: 2021-07-01 00:00:00 +0000 - title: 'WILDS: A Benchmark of in-the-Wild Distribution Shifts' abstract: 'Distribution shifts—where the training distribution differs from the test distribution—can substantially degrade the accuracy of machine learning (ML) systems deployed in the wild. Despite their ubiquity in the real-world deployments, these distribution shifts are under-represented in the datasets widely used in the ML community today. To address this gap, we present WILDS, a curated benchmark of 10 datasets reflecting a diverse range of distribution shifts that naturally arise in real-world applications, such as shifts across hospitals for tumor identification; across camera traps for wildlife monitoring; and across time and location in satellite imaging and poverty mapping. On each dataset, we show that standard training yields substantially lower out-of-distribution than in-distribution performance. This gap remains even with models trained by existing methods for tackling distribution shifts, underscoring the need for new methods for training models that are more robust to the types of distribution shifts that arise in practice. To facilitate method development, we provide an open-source package that automates dataset loading, contains default model architectures and hyperparameters, and standardizes evaluations. The full paper, code, and leaderboards are available at https://wilds.stanford.edu.' 
volume: 139 URL: https://proceedings.mlr.press/v139/koh21a.html PDF: http://proceedings.mlr.press/v139/koh21a/koh21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-koh21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Pang Wei family: Koh - given: Shiori family: Sagawa - given: Henrik family: Marklund - given: Sang Michael family: Xie - given: Marvin family: Zhang - given: Akshay family: Balsubramani - given: Weihua family: Hu - given: Michihiro family: Yasunaga - given: Richard Lanas family: Phillips - given: Irena family: Gao - given: Tony family: Lee - given: Etienne family: David - given: Ian family: Stavness - given: Wei family: Guo - given: Berton family: Earnshaw - given: Imran family: Haque - given: Sara M family: Beery - given: Jure family: Leskovec - given: Anshul family: Kundaje - given: Emma family: Pierson - given: Sergey family: Levine - given: Chelsea family: Finn - given: Percy family: Liang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5637-5664 id: koh21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5637 lastpage: 5664 published: 2021-07-01 00:00:00 +0000 - title: 'One-sided Frank-Wolfe algorithms for saddle problems' abstract: 'We study a class of convex-concave saddle-point problems of the form $\min_x\max_y ⟨Kx,y⟩+f_{\cal P}(x)-h^*(y)$ where $K$ is a linear operator, $f_{\cal P}$ is the sum of a convex function $f$ with a Lipschitz-continuous gradient and the indicator function of a bounded convex polytope ${\cal P}$, and $h^\ast$ is a convex (possibly nonsmooth) function. Such problem arises, for example, as a Lagrangian relaxation of various discrete optimization problems. Our main assumptions are the existence of an efficient {\em linear minimization oracle} ($lmo$) for $f_{\cal P}$ and an efficient {\em proximal map} ($prox$) for $h^*$ which motivate the solution via a blend of proximal primal-dual algorithms and Frank-Wolfe algorithms. In case $h^*$ is the indicator function of a linear constraint and function $f$ is quadratic, we show a $O(1/n^2)$ convergence rate on the dual objective, requiring $O(n \log n)$ calls of $lmo$. If the problem comes from the constrained optimization problem $\min_{x\in\mathbb R^d}\{f_{\cal P}(x)\:|\:Ax-b=0\}$ then we additionally get bound $O(1/n^2)$ both on the primal gap and on the infeasibility gap. In the most general case, we show a $O(1/n)$ convergence rate of the primal-dual gap again requiring $O(n\log n)$ calls of $lmo$. To the best of our knowledge, this improves on the known convergence rates for the considered class of saddle-point problems. We show applications to labeling problems frequently appearing in machine learning and computer vision.' 
volume: 139 URL: https://proceedings.mlr.press/v139/kolmogorov21a.html PDF: http://proceedings.mlr.press/v139/kolmogorov21a/kolmogorov21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kolmogorov21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Vladimir family: Kolmogorov - given: Thomas family: Pock editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5665-5675 id: kolmogorov21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5665 lastpage: 5675 published: 2021-07-01 00:00:00 +0000 - title: 'A Lower Bound for the Sample Complexity of Inverse Reinforcement Learning' abstract: 'Inverse reinforcement learning (IRL) is the task of finding a reward function that generates a desired optimal policy for a given Markov Decision Process (MDP). This paper develops an information-theoretic lower bound for the sample complexity of the finite state, finite action IRL problem. A geometric construction of $\beta$-strict separable IRL problems using spherical codes is considered. Properties of the ensemble size as well as the Kullback-Leibler divergence between the generated trajectories are derived. The resulting ensemble is then used along with Fano’s inequality to derive a sample complexity lower bound of $O(n \log n)$, where $n$ is the number of states in the MDP.' volume: 139 URL: https://proceedings.mlr.press/v139/komanduru21a.html PDF: http://proceedings.mlr.press/v139/komanduru21a/komanduru21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-komanduru21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Abi family: Komanduru - given: Jean family: Honorio editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5676-5685 id: komanduru21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5676 lastpage: 5685 published: 2021-07-01 00:00:00 +0000 - title: 'Consensus Control for Decentralized Deep Learning' abstract: 'Decentralized training of deep learning models enables on-device learning over networks, as well as efficient scaling to large compute clusters. Experiments in earlier works reveal that, even in a data-center setup, decentralized training often suffers from the degradation in the quality of the model: the training and test performance of models trained in a decentralized fashion is in general worse than that of models trained in a centralized fashion, and this performance drop is impacted by parameters such as network size, communication topology and data partitioning. We identify the changing consensus distance between devices as a key parameter to explain the gap between centralized and decentralized training. We show in theory that when the training consensus distance is lower than a critical quantity, decentralized training converges as fast as the centralized counterpart. We empirically validate that the relation between generalization performance and consensus distance is consistent with this theoretical observation. Our empirical insights allow the principled design of better decentralized training schemes that mitigate the performance drop. To this end, we provide practical training guidelines and exemplify its effectiveness on the data-center setup as the important first step.' 
volume: 139 URL: https://proceedings.mlr.press/v139/kong21a.html PDF: http://proceedings.mlr.press/v139/kong21a/kong21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kong21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Lingjing family: Kong - given: Tao family: Lin - given: Anastasia family: Koloskova - given: Martin family: Jaggi - given: Sebastian family: Stich editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5686-5696 id: kong21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5686 lastpage: 5696 published: 2021-07-01 00:00:00 +0000 - title: 'A Distribution-dependent Analysis of Meta Learning' abstract: 'A key problem in the theory of meta-learning is to understand how the task distributions influence transfer risk, the expected error of a meta-learner on a new task drawn from the unknown task distribution. In this paper, focusing on fixed design linear regression with Gaussian noise and a Gaussian task (or parameter) distribution, we give distribution-dependent lower bounds on the transfer risk of any algorithm, while we also show that a novel, weighted version of the so-called biased regularized regression method is able to match these lower bounds up to a fixed constant factor. Notably, the weighting is derived from the covariance of the Gaussian task distribution. Altogether, our results provide a precise characterization of the difficulty of meta-learning in this Gaussian setting. While this problem setting may appear simple, we show that it is rich enough to unify the “parameter sharing” and “representation learning” streams of meta-learning; in particular, representation learning is obtained as the special case when the covariance matrix of the task distribution is unknown. For this case we propose to adopt the EM method, which is shown to enjoy efficient updates in our case. The paper is completed by an empirical study of EM. In particular, our experimental results show that the EM algorithm can attain the lower bound as the number of tasks grows, while the algorithm is also successful in competing with its alternatives when used in a representation learning context.' volume: 139 URL: https://proceedings.mlr.press/v139/konobeev21a.html PDF: http://proceedings.mlr.press/v139/konobeev21a/konobeev21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-konobeev21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mikhail family: Konobeev - given: Ilja family: Kuzborskij - given: Csaba family: Szepesvari editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5697-5706 id: konobeev21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5697 lastpage: 5706 published: 2021-07-01 00:00:00 +0000 - title: 'Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-based Models Reliable?' abstract: 'Dirichlet-based uncertainty (DBU) models are a recent and promising class of uncertainty-aware models. DBU models predict the parameters of a Dirichlet distribution to provide fast, high-quality uncertainty estimates alongside with class predictions. In this work, we present the first large-scale, in-depth study of the robustness of DBU models under adversarial attacks. 
Our results suggest that uncertainty estimates of DBU models are not robust w.r.t. three important tasks: (1) indicating correctly and wrongly classified samples; (2) detecting adversarial examples; and (3) distinguishing between in-distribution (ID) and out-of-distribution (OOD) data. Additionally, we explore the first approaches to make DBU models more robust. While adversarial training has a minor effect, our median smoothing based approach significantly increases robustness of DBU models.' volume: 139 URL: https://proceedings.mlr.press/v139/kopetzki21a.html PDF: http://proceedings.mlr.press/v139/kopetzki21a/kopetzki21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kopetzki21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Anna-Kathrin family: Kopetzki - given: Bertrand family: Charpentier - given: Daniel family: Zügner - given: Sandhya family: Giri - given: Stephan family: Günnemann editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5707-5718 id: kopetzki21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5707 lastpage: 5718 published: 2021-07-01 00:00:00 +0000 - title: 'Kernel Stein Discrepancy Descent' abstract: 'Among dissimilarities between probability distributions, the Kernel Stein Discrepancy (KSD) has received much interest recently. We investigate the properties of its Wasserstein gradient flow to approximate a target probability distribution $\pi$ on $\mathbb{R}^d$, known up to a normalization constant. This leads to a straightforwardly implementable, deterministic score-based method to sample from $\pi$, named KSD Descent, which uses a set of particles to approximate $\pi$. Remarkably, owing to a tractable loss function, KSD Descent can leverage robust parameter-free optimization schemes such as L-BFGS; this contrasts with other popular particle-based schemes such as the Stein Variational Gradient Descent algorithm. We study the convergence properties of KSD Descent and demonstrate its practical relevance. However, we also highlight failure cases by showing that the algorithm can get stuck in spurious local minima.' volume: 139 URL: https://proceedings.mlr.press/v139/korba21a.html PDF: http://proceedings.mlr.press/v139/korba21a/korba21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-korba21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Anna family: Korba - given: Pierre-Cyril family: Aubin-Frankowski - given: Szymon family: Majewski - given: Pierre family: Ablin editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5719-5730 id: korba21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5719 lastpage: 5730 published: 2021-07-01 00:00:00 +0000 - title: 'Boosting the Throughput and Accelerator Utilization of Specialized CNN Inference Beyond Increasing Batch Size' abstract: 'Datacenter vision systems widely use small, specialized convolutional neural networks (CNNs) trained on specific tasks for high-throughput inference. These settings employ accelerators with massive computational capacity, but which specialized CNNs underutilize due to having low arithmetic intensity. This results in suboptimal application-level throughput and poor returns on accelerator investment. 
Increasing batch size is the only known way to increase both application-level throughput and accelerator utilization for inference, but yields diminishing returns; specialized CNNs poorly utilize accelerators even with large batch size. We propose FoldedCNNs, a new approach to CNN design that increases inference throughput and utilization beyond large batch size. FoldedCNNs rethink the structure of inputs and layers of specialized CNNs to boost arithmetic intensity: in FoldedCNNs, f images with C channels each are concatenated into a single input with fC channels and jointly classified by a wider CNN. Increased arithmetic intensity in FoldedCNNs increases the throughput and GPU utilization of specialized CNN inference by up to 2.5x and 2.8x, with accuracy close to the original CNN in most cases.' volume: 139 URL: https://proceedings.mlr.press/v139/kosaian21a.html PDF: http://proceedings.mlr.press/v139/kosaian21a/kosaian21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kosaian21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jack family: Kosaian - given: Amar family: Phanishayee - given: Matthai family: Philipose - given: Debadeepta family: Dey - given: Rashmi family: Vinayak editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5731-5741 id: kosaian21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5731 lastpage: 5741 published: 2021-07-01 00:00:00 +0000 - title: 'NeRF-VAE: A Geometry Aware 3D Scene Generative Model' abstract: 'We propose NeRF-VAE, a 3D scene generative model that incorporates geometric structure via Neural Radiance Fields (NeRF) and differentiable volume rendering. In contrast to NeRF, our model takes into account shared structure across scenes, and is able to infer the structure of a novel scene—without the need to re-train—using amortized inference. NeRF-VAE’s explicit 3D rendering process further contrasts previous generative models with convolution-based rendering which lacks geometric structure. Our model is a VAE that learns a distribution over radiance fields by conditioning them on a latent scene representation. We show that, once trained, NeRF-VAE is able to infer and render geometrically-consistent scenes from previously unseen 3D environments of synthetic scenes using very few input images. We further demonstrate that NeRF-VAE generalizes well to out-of-distribution cameras, while convolutional models do not. Finally, we introduce and study an attention-based conditioning mechanism of NeRF-VAE’s decoder, which improves model performance.' 
volume: 139 URL: https://proceedings.mlr.press/v139/kosiorek21a.html PDF: http://proceedings.mlr.press/v139/kosiorek21a/kosiorek21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kosiorek21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Adam R family: Kosiorek - given: Heiko family: Strathmann - given: Daniel family: Zoran - given: Pol family: Moreno - given: Rosalia family: Schneider - given: Sona family: Mokra - given: Danilo Jimenez family: Rezende editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5742-5752 id: kosiorek21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5742 lastpage: 5752 published: 2021-07-01 00:00:00 +0000 - title: 'Active Testing: Sample-Efficient Model Evaluation' abstract: 'We introduce a new framework for sample-efficient model evaluation that we call active testing. While approaches like active learning reduce the number of labels needed for model training, existing literature largely ignores the cost of labeling test data, typically unrealistically assuming large test sets for model evaluation. This creates a disconnect to real applications, where test labels are important and just as expensive, e.g. for optimizing hyperparameters. Active testing addresses this by carefully selecting the test points to label, ensuring model evaluation is sample-efficient. To this end, we derive theoretically-grounded and intuitive acquisition strategies that are specifically tailored to the goals of active testing, noting these are distinct from those of active learning. As actively selecting labels introduces a bias, we further show how to remove this bias while reducing the variance of the estimator at the same time. Active testing is easy to implement and can be applied to any supervised machine learning method. We demonstrate its effectiveness on models including WideResNets and Gaussian processes on datasets including Fashion-MNIST and CIFAR-100.' volume: 139 URL: https://proceedings.mlr.press/v139/kossen21a.html PDF: http://proceedings.mlr.press/v139/kossen21a/kossen21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kossen21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jannik family: Kossen - given: Sebastian family: Farquhar - given: Yarin family: Gal - given: Tom family: Rainforth editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5753-5763 id: kossen21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5753 lastpage: 5763 published: 2021-07-01 00:00:00 +0000 - title: 'High Confidence Generalization for Reinforcement Learning' abstract: 'We present several classes of reinforcement learning algorithms that safely generalize to Markov decision processes (MDPs) not seen during training. Specifically, we study the setting in which some set of MDPs is accessible for training. The goal is to generalize safely to MDPs that are sampled from the same distribution, but which may not be in the set accessible for training. For various definitions of safety, our algorithms give probabilistic guarantees that agents can safely generalize to MDPs that are sampled from the same distribution but are not necessarily in the training set. 
These algorithms are a type of Seldonian algorithm (Thomas et al., 2019), which is a class of machine learning algorithms that return models with probabilistic safety guarantees for user-specified definitions of safety.' volume: 139 URL: https://proceedings.mlr.press/v139/kostas21a.html PDF: http://proceedings.mlr.press/v139/kostas21a/kostas21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kostas21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: James family: Kostas - given: Yash family: Chandak - given: Scott M family: Jordan - given: Georgios family: Theocharous - given: Philip family: Thomas editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5764-5773 id: kostas21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5764 lastpage: 5773 published: 2021-07-01 00:00:00 +0000 - title: 'Offline Reinforcement Learning with Fisher Divergence Critic Regularization' abstract: 'Many modern approaches to offline Reinforcement Learning (RL) utilize behavior regularization, typically augmenting a model-free actor critic algorithm with a penalty measuring divergence of the policy from the offline data. In this work, we propose an alternative approach to encouraging the learned policy to stay close to the data, namely parameterizing the critic as the log-behavior-policy, which generated the offline data, plus a state-action value offset term, which can be learned using a neural network. Behavior regularization then corresponds to an appropriate regularizer on the offset term. We propose using a gradient penalty regularizer for the offset term and demonstrate its equivalence to Fisher divergence regularization, suggesting connections to the score matching and generative energy-based model literature. We thus term our resulting algorithm Fisher-BRC (Behavior Regularized Critic). On standard offline RL benchmarks, Fisher-BRC achieves both improved performance and faster convergence over existing state-of-the-art methods.' volume: 139 URL: https://proceedings.mlr.press/v139/kostrikov21a.html PDF: http://proceedings.mlr.press/v139/kostrikov21a/kostrikov21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kostrikov21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ilya family: Kostrikov - given: Rob family: Fergus - given: Jonathan family: Tompson - given: Ofir family: Nachum editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5774-5783 id: kostrikov21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5774 lastpage: 5783 published: 2021-07-01 00:00:00 +0000 - title: 'ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks' abstract: 'We propose ADOM – an accelerated method for smooth and strongly convex decentralized optimization over time-varying networks. ADOM uses a dual oracle, i.e., we assume access to the gradient of the Fenchel conjugate of the individual loss functions. Up to a constant factor, which depends on the network structure only, its communication complexity is the same as that of accelerated Nesterov gradient method. To the best of our knowledge, only the algorithm of Rogozin et al. (2019) has a convergence rate with similar properties. 
However, their algorithm converges under the very restrictive assumption that the number of network changes cannot be greater than a tiny percentage of the number of iterations. This assumption is hard to satisfy in practice, as the network topology changes usually cannot be controlled. In contrast, ADOM merely requires the network to stay connected throughout time.' volume: 139 URL: https://proceedings.mlr.press/v139/kovalev21a.html PDF: http://proceedings.mlr.press/v139/kovalev21a/kovalev21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kovalev21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dmitry family: Kovalev - given: Egor family: Shulgin - given: Peter family: Richtarik - given: Alexander V family: Rogozin - given: Alexander family: Gasnikov editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5784-5793 id: kovalev21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5784 lastpage: 5793 published: 2021-07-01 00:00:00 +0000 - title: 'Revisiting Peng’s Q($λ$) for Modern Reinforcement Learning' abstract: 'Off-policy multi-step reinforcement learning algorithms consist of conservative and non-conservative algorithms: the former actively cut traces, whereas the latter do not. Recently, Munos et al. (2016) proved the convergence of conservative algorithms to an optimal Q-function. In contrast, non-conservative algorithms are thought to be unsafe and have a limited or no theoretical guarantee. Nonetheless, recent studies have shown that non-conservative algorithms empirically outperform conservative ones. Motivated by the empirical results and the lack of theory, we carry out theoretical analyses of Peng’s Q($\lambda$), a representative example of non-conservative algorithms. We prove that \emph{it also converges to an optimal policy} provided that the behavior policy slowly tracks a greedy policy in a way similar to conservative policy iteration. Such a result has been conjectured to be true but has not been proven. We also experiment with Peng’s Q($\lambda$) in complex continuous control tasks, confirming that Peng’s Q($\lambda$) often outperforms conservative algorithms despite its simplicity. These results indicate that Peng’s Q($\lambda$), which was thought to be unsafe, is a theoretically-sound and practically effective algorithm.' volume: 139 URL: https://proceedings.mlr.press/v139/kozuno21a.html PDF: http://proceedings.mlr.press/v139/kozuno21a/kozuno21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kozuno21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tadashi family: Kozuno - given: Yunhao family: Tang - given: Mark family: Rowland - given: Remi family: Munos - given: Steven family: Kapturowski - given: Will family: Dabney - given: Michal family: Valko - given: David family: Abel editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5794-5804 id: kozuno21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5794 lastpage: 5804 published: 2021-07-01 00:00:00 +0000 - title: 'Adapting to misspecification in contextual bandits with offline regression oracles' abstract: 'Computationally efficient contextual bandits are often based on estimating a predictive model of rewards given contexts and arms using past data. 
However, when the reward model is not well-specified, the bandit algorithm may incur unexpected regret, so recent work has focused on algorithms that are robust to misspecification. We propose a simple family of contextual bandit algorithms that adapt to misspecification error by reverting to a good safe policy when there is evidence that misspecification is causing a regret increase. Our algorithm requires only an offline regression oracle to ensure regret guarantees that gracefully degrade in terms of a measure of the average misspecification level. Compared to prior work, we attain similar regret guarantees, but we do not rely on a master algorithm, and do not require more robust oracles like online or constrained regression oracles (e.g., Foster et al. (2020), Krishnamurthy et al. (2020)). This allows us to design algorithms for more general function approximation classes.' volume: 139 URL: https://proceedings.mlr.press/v139/krishnamurthy21a.html PDF: http://proceedings.mlr.press/v139/krishnamurthy21a/krishnamurthy21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-krishnamurthy21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sanath Kumar family: Krishnamurthy - given: Vitor family: Hadad - given: Susan family: Athey editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5805-5814 id: krishnamurthy21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5805 lastpage: 5814 published: 2021-07-01 00:00:00 +0000 - title: 'Out-of-Distribution Generalization via Risk Extrapolation (REx)' abstract: 'Distributional shift is one of the major obstacles when transferring machine learning prediction systems from the lab to the real world. To tackle this problem, we assume that variation across training domains is representative of the variation we might encounter at test time, but also that shifts at test time may be more extreme in magnitude. In particular, we show that reducing differences in risk across training domains can reduce a model’s sensitivity to a wide range of extreme distributional shifts, including the challenging setting where the input contains both causal and anti-causal elements. We motivate this approach, Risk Extrapolation (REx), as a form of robust optimization over a perturbation set of extrapolated domains (MM-REx), and propose a penalty on the variance of training risks (V-REx) as a simpler variant. We prove that variants of REx can recover the causal mechanisms of the targets, while also providing robustness to changes in the input distribution (“covariate shift”). By appropriately trading-off robustness to causally induced distributional shifts and covariate shift, REx is able to outperform alternative methods such as Invariant Risk Minimization in situations where these types of shift co-occur.' 
volume: 139 URL: https://proceedings.mlr.press/v139/krueger21a.html PDF: http://proceedings.mlr.press/v139/krueger21a/krueger21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-krueger21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: David family: Krueger - given: Ethan family: Caballero - given: Joern-Henrik family: Jacobsen - given: Amy family: Zhang - given: Jonathan family: Binas - given: Dinghuai family: Zhang - given: Remi Le family: Priol - given: Aaron family: Courville editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5815-5826 id: krueger21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5815 lastpage: 5826 published: 2021-07-01 00:00:00 +0000 - title: 'Near-Optimal Confidence Sequences for Bounded Random Variables' abstract: 'Many inference problems, such as sequential decision problems like A/B testing, adaptive sampling schemes like bandit selection, are often online in nature. The fundamental problem for online inference is to provide a sequence of confidence intervals that are valid uniformly over the growing-into-infinity sample sizes. To address this question, we provide a near-optimal confidence sequence for bounded random variables by utilizing Bentkus’ concentration results. We show that it improves on the existing approaches that use the Cram{é}r-Chernoff technique such as the Hoeffding, Bernstein, and Bennett inequalities. The resulting confidence sequence is confirmed to be favorable in synthetic coverage problems, adaptive stopping algorithms, and multi-armed bandit problems.' volume: 139 URL: https://proceedings.mlr.press/v139/kuchibhotla21a.html PDF: http://proceedings.mlr.press/v139/kuchibhotla21a/kuchibhotla21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kuchibhotla21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Arun K family: Kuchibhotla - given: Qinqing family: Zheng editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5827-5837 id: kuchibhotla21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5827 lastpage: 5837 published: 2021-07-01 00:00:00 +0000 - title: 'Differentially Private Bayesian Inference for Generalized Linear Models' abstract: 'Generalized linear models (GLMs) such as logistic regression are among the most widely used arms in data analyst’s repertoire and often used on sensitive datasets. A large body of prior works that investigate GLMs under differential privacy (DP) constraints provide only private point estimates of the regression coefficients, and are not able to quantify parameter uncertainty. In this work, with logistic and Poisson regression as running examples, we introduce a generic noise-aware DP Bayesian inference method for a GLM at hand, given a noisy sum of summary statistics. Quantifying uncertainty allows us to determine which of the regression coefficients are statistically significantly different from zero. We provide a previously unknown tight privacy analysis and experimentally demonstrate that the posteriors obtained from our model, while adhering to strong privacy guarantees, are close to the non-private posteriors.' 
volume: 139 URL: https://proceedings.mlr.press/v139/kulkarni21a.html PDF: http://proceedings.mlr.press/v139/kulkarni21a/kulkarni21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kulkarni21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tejas family: Kulkarni - given: Joonas family: Jälkö - given: Antti family: Koskela - given: Samuel family: Kaski - given: Antti family: Honkela editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5838-5849 id: kulkarni21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5838 lastpage: 5849 published: 2021-07-01 00:00:00 +0000 - title: 'Bayesian Structural Adaptation for Continual Learning' abstract: 'Continual Learning is a learning paradigm where learning systems are trained on a sequence of tasks. The goal here is to perform well on the current task without suffering from a performance drop on the previous tasks. Two notable directions among the recent advances in continual learning with neural networks are (1) variational Bayes based regularization by learning priors from previous tasks, and, (2) learning the structure of deep networks to adapt to new tasks. So far, these two approaches have been largely orthogonal. We present a novel Bayesian framework based on continually learning the structure of deep neural networks, to unify these distinct yet complementary approaches. The proposed framework learns the deep structure for each task by learning which weights to be used, and supports inter-task transfer through the overlapping of different sparse subsets of weights learned by different tasks. An appealing aspect of our proposed continual learning framework is that it is applicable to both discriminative (supervised) and generative (unsupervised) settings. Experimental results on supervised and unsupervised benchmarks demonstrate that our approach performs comparably or better than recent advances in continual learning.' volume: 139 URL: https://proceedings.mlr.press/v139/kumar21a.html PDF: http://proceedings.mlr.press/v139/kumar21a/kumar21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kumar21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Abhishek family: Kumar - given: Sunabha family: Chatterjee - given: Piyush family: Rai editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5850-5860 id: kumar21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5850 lastpage: 5860 published: 2021-07-01 00:00:00 +0000 - title: 'Implicit rate-constrained optimization of non-decomposable objectives' abstract: 'We consider a popular family of constrained optimization problems arising in machine learning that involve optimizing a non-decomposable evaluation metric with a certain thresholded form, while constraining another metric of interest. Examples of such problems include optimizing false negative rate at a fixed false positive rate, optimizing precision at a fixed recall, optimizing the area under the precision-recall or ROC curves, etc. Our key idea is to formulate a rate-constrained optimization that expresses the threshold parameter as a function of the model parameters via the Implicit Function theorem. We show how the resulting optimization problem can be solved using standard gradient based methods. 
Experiments on benchmark datasets demonstrate the effectiveness of our proposed method over existing state-of-the-art approaches for these problems.' volume: 139 URL: https://proceedings.mlr.press/v139/kumar21b.html PDF: http://proceedings.mlr.press/v139/kumar21b/kumar21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kumar21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Abhishek family: Kumar - given: Harikrishna family: Narasimhan - given: Andrew family: Cotter editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5861-5871 id: kumar21b issued: date-parts: - 2021 - 7 - 1 firstpage: 5861 lastpage: 5871 published: 2021-07-01 00:00:00 +0000 - title: 'A Scalable Second Order Method for Ill-Conditioned Matrix Completion from Few Samples' abstract: 'We propose an iterative algorithm for low-rank matrix completion that can be interpreted as an iteratively reweighted least squares (IRLS) algorithm, a saddle-escaping smoothing Newton method or a variable metric proximal gradient method applied to a non-convex rank surrogate. It combines the favorable data-efficiency of previous IRLS approaches with an improved scalability by several orders of magnitude. We establish the first local convergence guarantee from a minimal number of samples for that class of algorithms, showing that the method attains a local quadratic convergence rate. Furthermore, we show that the linear systems to be solved are well-conditioned even for very ill-conditioned ground truth matrices. We provide extensive experiments, indicating that unlike many state-of-the-art approaches, our method is able to complete very ill-conditioned matrices with a condition number of up to $10^{10}$ from few samples, while being competitive in its scalability.' volume: 139 URL: https://proceedings.mlr.press/v139/kummerle21a.html PDF: http://proceedings.mlr.press/v139/kummerle21a/kummerle21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kummerle21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Christian family: Kümmerle - given: Claudio M. family: Verdun editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5872-5883 id: kummerle21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5872 lastpage: 5883 published: 2021-07-01 00:00:00 +0000 - title: 'Meta-Thompson Sampling' abstract: 'Efficient exploration in bandits is a fundamental online learning problem. We propose a variant of Thompson sampling that learns to explore better as it interacts with bandit instances drawn from an unknown prior. The algorithm meta-learns the prior and thus we call it MetaTS. We propose several efficient implementations of MetaTS and analyze it in Gaussian bandits. Our analysis shows the benefit of meta-learning and is of a broader interest, because we derive a novel prior-dependent Bayes regret bound for Thompson sampling. Our theory is complemented by empirical evaluation, which shows that MetaTS quickly adapts to the unknown prior.'
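To make the kveton21a (Meta-Thompson Sampling) setting concrete, here is a deliberately simplified sketch of Thompson sampling on a sequence of Gaussian bandit tasks whose arm means share an unknown prior mean. Instead of MetaTS's meta-posterior sampling, it plugs in an empirical estimate of the prior mean from earlier tasks, so it only illustrates the setup and the flavour of meta-learning the prior; the constants (K, T, SIGMA, SIGMA0, MU_STAR) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
K, T, N_TASKS = 5, 200, 30        # arms, rounds per task, number of tasks
SIGMA, SIGMA0 = 1.0, 0.5          # reward noise, width of the (unknown) prior
MU_STAR = 2.0                     # unknown prior mean the agent must meta-learn

prior_obs = []                    # crude per-task signals about MU_STAR

for task in range(N_TASKS):
    theta = rng.normal(MU_STAR, SIGMA0, size=K)        # this task's arm means
    # Plug-in estimate of the prior mean from earlier tasks (0 before any data);
    # MetaTS instead samples the prior from a meta-posterior, so this is only
    # an empirical-Bayes stand-in.
    mu_hat = float(np.mean(prior_obs)) if prior_obs else 0.0
    post_mean = np.full(K, mu_hat)                      # per-arm Gaussian posterior
    post_var = np.full(K, SIGMA0 ** 2)
    counts = np.zeros(K, dtype=int)
    regret = 0.0
    for _ in range(T):
        a = int(np.argmax(rng.normal(post_mean, np.sqrt(post_var))))  # TS draw
        r = rng.normal(theta[a], SIGMA)
        prec = 1.0 / post_var[a] + 1.0 / SIGMA**2       # conjugate Gaussian update
        post_mean[a] = (post_mean[a] / post_var[a] + r / SIGMA**2) / prec
        post_var[a] = 1.0 / prec
        counts[a] += 1
        regret += theta.max() - theta[a]
    # Arms pulled often have data-dominated posteriors; use them to refine the
    # estimate of MU_STAR for later tasks.
    well_explored = counts >= 5
    if well_explored.any():
        prior_obs.append(float(post_mean[well_explored].mean()))
    if task % 5 == 0:
        print(f"task {task:2d}  regret {regret:6.1f}  prior estimate {mu_hat:5.2f}")
```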
volume: 139 URL: https://proceedings.mlr.press/v139/kveton21a.html PDF: http://proceedings.mlr.press/v139/kveton21a/kveton21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kveton21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Branislav family: Kveton - given: Mikhail family: Konobeev - given: Manzil family: Zaheer - given: Chih-Wei family: Hsu - given: Martin family: Mladenov - given: Craig family: Boutilier - given: Csaba family: Szepesvari editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5884-5893 id: kveton21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5884 lastpage: 5893 published: 2021-07-01 00:00:00 +0000 - title: 'Targeted Data Acquisition for Evolving Negotiation Agents' abstract: 'Successful negotiators must learn how to balance optimizing for self-interest and cooperation. Yet current artificial negotiation agents often heavily depend on the quality of the static datasets they were trained on, limiting their capacity to fashion an adaptive response balancing self-interest and cooperation. For this reason, we find that these agents can achieve either high utility or cooperation, but not both. To address this, we introduce a targeted data acquisition framework where we guide the exploration of a reinforcement learning agent using annotations from an expert oracle. The guided exploration incentivizes the learning agent to go beyond its static dataset and develop new negotiation strategies. We show that this enables our agents to obtain higher-reward and more Pareto-optimal solutions when negotiating with both simulated and human partners compared to standard supervised learning and reinforcement learning methods. This trend additionally holds when comparing agents using our targeted data acquisition framework to variants of agents trained with a mix of supervised learning and reinforcement learning, or to agents using tailored reward functions that explicitly optimize for utility and Pareto-optimality.' volume: 139 URL: https://proceedings.mlr.press/v139/kwon21a.html PDF: http://proceedings.mlr.press/v139/kwon21a/kwon21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kwon21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Minae family: Kwon - given: Siddharth family: Karamcheti - given: Mariano-Florentino family: Cuellar - given: Dorsa family: Sadigh editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5894-5904 id: kwon21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5894 lastpage: 5904 published: 2021-07-01 00:00:00 +0000 - title: 'ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning of Deep Neural Networks' abstract: 'Recently, learning algorithms motivated from sharpness of loss surface as an effective measure of generalization gap have shown state-of-the-art performances. Nevertheless, sharpness defined in a rigid region with a fixed radius, has a drawback in sensitivity to parameter re-scaling which leaves the loss unaffected, leading to weakening of the connection between sharpness and generalization gap. In this paper, we introduce the concept of adaptive sharpness which is scale-invariant and propose the corresponding generalization bound. 
We suggest a novel learning method, adaptive sharpness-aware minimization (ASAM), utilizing the proposed generalization bound. Experimental results in various benchmark datasets show that ASAM contributes to significant improvement of model generalization performance.' volume: 139 URL: https://proceedings.mlr.press/v139/kwon21b.html PDF: http://proceedings.mlr.press/v139/kwon21b/kwon21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-kwon21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jungmin family: Kwon - given: Jeongseop family: Kim - given: Hyunseo family: Park - given: In Kwon family: Choi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5905-5914 id: kwon21b issued: date-parts: - 2021 - 7 - 1 firstpage: 5905 lastpage: 5914 published: 2021-07-01 00:00:00 +0000 - title: 'On the price of explainability for some clustering problems' abstract: 'The price of explainability for a clustering task can be defined as the unavoidable loss, in terms of the objective function, if we force the final partition to be explainable. Here, we study this price for the following clustering problems: $k$-means, $k$-medians, $k$-centers and maximum-spacing. We provide upper and lower bounds for a natural model where explainability is achieved via decision trees. For the $k$-means and $k$-medians problems our upper bounds improve those obtained by [Dasgupta et. al, ICML 20] for low dimensions. Another contribution is a simple and efficient algorithm for building explainable clusterings for the $k$-means problem. We provide empirical evidence that its performance is better than the current state of the art for decision-tree based explainable clustering.' volume: 139 URL: https://proceedings.mlr.press/v139/laber21a.html PDF: http://proceedings.mlr.press/v139/laber21a/laber21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-laber21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Eduardo S family: Laber - given: Lucas family: Murtinho editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5915-5925 id: laber21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5915 lastpage: 5925 published: 2021-07-01 00:00:00 +0000 - title: 'Adaptive Newton Sketch: Linear-time Optimization with Quadratic Convergence and Effective Hessian Dimensionality' abstract: 'We propose a randomized algorithm with quadratic convergence rate for convex optimization problems with a self-concordant, composite, strongly convex objective function. Our method is based on performing an approximate Newton step using a random projection of the Hessian. Our first contribution is to show that, at each iteration, the embedding dimension (or sketch size) can be as small as the effective dimension of the Hessian matrix. Leveraging this novel fundamental result, we design an algorithm with a sketch size proportional to the effective dimension and which exhibits a quadratic rate of convergence. This result dramatically improves on the classical linear-quadratic convergence rates of state-of-the-art sub-sampled Newton methods. 
However, in most practical cases, the effective dimension is not known beforehand, and this raises the question of how to pick a sketch size as small as the effective dimension while preserving a quadratic convergence rate. Our second and main contribution is thus to propose an adaptive sketch size algorithm with quadratic convergence rate and which does not require prior knowledge or estimation of the effective dimension: at each iteration, it starts with a small sketch size, and increases it until quadratic progress is achieved. Importantly, we show that the embedding dimension remains proportional to the effective dimension throughout the entire path and that our method achieves state-of-the-art computational complexity for solving convex optimization programs with a strongly convex component. We discuss and illustrate applications to linear and quadratic programming, as well as logistic regression and other generalized linear models.' volume: 139 URL: https://proceedings.mlr.press/v139/lacotte21a.html PDF: http://proceedings.mlr.press/v139/lacotte21a/lacotte21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lacotte21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jonathan family: Lacotte - given: Yifei family: Wang - given: Mert family: Pilanci editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5926-5936 id: lacotte21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5926 lastpage: 5936 published: 2021-07-01 00:00:00 +0000 - title: 'Generalization Bounds in the Presence of Outliers: a Median-of-Means Study' abstract: 'In contrast to the empirical mean, the Median-of-Means (MoM) is an estimator of the mean $\theta$ of a square integrable r.v. Z, around which accurate nonasymptotic confidence bounds can be built, even when Z does not exhibit a sub-Gaussian tail behavior. Thanks to the high confidence it achieves on heavy-tailed data, MoM has found various applications in machine learning, where it is used to design training procedures that are not sensitive to atypical observations. More recently, a new line of work is now trying to characterize and leverage MoM’s ability to deal with corrupted data. In this context, the present work proposes a general study of MoM’s concentration properties under the contamination regime, that provides a clear understanding on the impact of the outlier proportion and the number of blocks chosen. The analysis is extended to (multisample) U-statistics, i.e. averages over tuples of observations, that raise additional challenges due to the dependence induced. Finally, we show that the latter bounds can be used in a straightforward fashion to derive generalization guarantees for pairwise learning in a contaminated setting, and propose an algorithm to compute provably reliable decision functions.' 
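As a concrete reference point for the laforgue21a entry, the snippet below implements the plain Median-of-Means estimator whose concentration under contamination the paper studies (its U-statistics extension and bounds are not reproduced here); the heavy-tailed toy data and block count are arbitrary choices for illustration.

```python
import numpy as np

def median_of_means(z, n_blocks):
    """Plain Median-of-Means estimate of E[Z]: split the sample into
    n_blocks disjoint blocks, average each block, and return the median
    of the block means. Robustness holds as long as fewer than half of
    the blocks are contaminated."""
    blocks = np.array_split(np.asarray(z, dtype=float), n_blocks)
    return float(np.median([b.mean() for b in blocks]))

rng = np.random.default_rng(0)
sample = rng.standard_t(df=2.5, size=2000)   # heavy-tailed, true mean 0
sample[:10] = 1e3                            # a few grossly corrupted points
print("empirical mean :", sample.mean())
print("median of means:", median_of_means(sample, n_blocks=40))
```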
volume: 139 URL: https://proceedings.mlr.press/v139/laforgue21a.html PDF: http://proceedings.mlr.press/v139/laforgue21a/laforgue21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-laforgue21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Pierre family: Laforgue - given: Guillaume family: Staerman - given: Stephan family: Clémençon editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5937-5947 id: laforgue21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5937 lastpage: 5947 published: 2021-07-01 00:00:00 +0000 - title: 'Model Fusion for Personalized Learning' abstract: 'Production systems operating on a growing domain of analytic services often require generating warm-start solution models for emerging tasks with limited data. One potential approach to address this warm-start challenge is to adopt meta learning to generate a base model that can be adapted to solve unseen tasks with minimal fine-tuning. This however requires the training processes of previous solution models of existing tasks to be synchronized. This is not possible if these models were pre-trained separately on private data owned by different entities and cannot be synchronously re-trained. To accommodate for such scenarios, we develop a new personalized learning framework that synthesizes customized models for unseen tasks via fusion of independently pre-trained models of related tasks. We establish performance guarantee for the proposed framework and demonstrate its effectiveness on both synthetic and real datasets.' volume: 139 URL: https://proceedings.mlr.press/v139/lam21a.html PDF: http://proceedings.mlr.press/v139/lam21a/lam21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lam21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Thanh Chi family: Lam - given: Nghia family: Hoang - given: Bryan Kian Hsiang family: Low - given: Patrick family: Jaillet editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5948-5958 id: lam21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5948 lastpage: 5958 published: 2021-07-01 00:00:00 +0000 - title: 'Gradient Disaggregation: Breaking Privacy in Federated Learning by Reconstructing the User Participant Matrix' abstract: 'We show that aggregated model updates in federated learning may be insecure. An untrusted central server may disaggregate user updates from sums of updates across participants given repeated observations, enabling the server to recover privileged information about individual users’ private training data via traditional gradient inference attacks. Our method revolves around reconstructing participant information (e.g: which rounds of training users participated in) from aggregated model updates by leveraging summary information from device analytics commonly used to monitor, debug, and manage federated learning systems. Our attack is parallelizable and we successfully disaggregate user updates on settings with up to thousands of participants. We quantitatively and qualitatively demonstrate significant improvements in the capability of various inference attacks on the disaggregated updates. 
Our attack enables the attribution of learned properties to individual users, violating anonymity, and shows that a determined central server may undermine the secure aggregation protocol to break individual users’ data privacy in federated learning.' volume: 139 URL: https://proceedings.mlr.press/v139/lam21b.html PDF: http://proceedings.mlr.press/v139/lam21b/lam21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lam21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Maximilian family: Lam - given: Gu-Yeon family: Wei - given: David family: Brooks - given: Vijay Janapa family: Reddi - given: Michael family: Mitzenmacher editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5959-5968 id: lam21b issued: date-parts: - 2021 - 7 - 1 firstpage: 5959 lastpage: 5968 published: 2021-07-01 00:00:00 +0000 - title: 'Stochastic Multi-Armed Bandits with Unrestricted Delay Distributions' abstract: 'We study the stochastic Multi-Armed Bandit (MAB) problem with random delays in the feedback received by the algorithm. We consider two settings: the {\it reward dependent} delay setting, where realized delays may depend on the stochastic rewards, and the {\it reward-independent} delay setting. Our main contribution is algorithms that achieve near-optimal regret in each of the settings, with an additional additive dependence on the quantiles of the delay distribution. Our results do not make any assumptions on the delay distributions: in particular, we do not assume they come from any parametric family of distributions and allow for unbounded support and expectation; we further allow for the case of infinite delays where the algorithm might occasionally not observe any feedback.' volume: 139 URL: https://proceedings.mlr.press/v139/lancewicki21a.html PDF: http://proceedings.mlr.press/v139/lancewicki21a/lancewicki21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lancewicki21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tal family: Lancewicki - given: Shahar family: Segal - given: Tomer family: Koren - given: Yishay family: Mansour editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5969-5978 id: lancewicki21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5969 lastpage: 5978 published: 2021-07-01 00:00:00 +0000 - title: 'Discovering symbolic policies with deep reinforcement learning' abstract: 'Deep reinforcement learning (DRL) has proven successful for many difficult control problems by learning policies represented by neural networks. However, the complexity of neural network-based policies{—}involving thousands of composed non-linear operators{—}can render them problematic to understand, trust, and deploy. In contrast, simple policies comprising short symbolic expressions can facilitate human understanding, while also being transparent and exhibiting predictable behavior. To this end, we propose deep symbolic policy, a novel approach to directly search the space of symbolic policies. We use an autoregressive recurrent neural network to generate control policies represented by tractable mathematical expressions, employing a risk-seeking policy gradient to maximize performance of the generated policies. 
To scale to environments with multi-dimensional action spaces, we propose an "anchoring" algorithm that distills pre-trained neural network-based policies into fully symbolic policies, one action dimension at a time. We also introduce two novel methods to improve exploration in DRL-based combinatorial optimization, building on ideas of entropy regularization and distribution initialization. Despite their dramatically reduced complexity, we demonstrate that discovered symbolic policies outperform seven state-of-the-art DRL algorithms in terms of average rank and average normalized episodic reward across eight benchmark environments.' volume: 139 URL: https://proceedings.mlr.press/v139/landajuela21a.html PDF: http://proceedings.mlr.press/v139/landajuela21a/landajuela21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-landajuela21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mikel family: Landajuela - given: Brenden K family: Petersen - given: Sookyung family: Kim - given: Claudio P family: Santiago - given: Ruben family: Glatt - given: Nathan family: Mundhenk - given: Jacob F family: Pettit - given: Daniel family: Faissol editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5979-5989 id: landajuela21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5979 lastpage: 5989 published: 2021-07-01 00:00:00 +0000 - title: 'Graph Cuts Always Find a Global Optimum for Potts Models (With a Catch)' abstract: 'We prove that the alpha-expansion algorithm for MAP inference always returns a globally optimal assignment for Markov Random Fields with Potts pairwise potentials, with a catch: the returned assignment is only guaranteed to be optimal for an instance within a small perturbation of the original problem instance. In other words, all local minima with respect to expansion moves are global minima to slightly perturbed versions of the problem. On "real-world" instances, MAP assignments of small perturbations of the problem should be very similar to the MAP assignment(s) of the original problem instance. We design an algorithm that can certify whether this is the case in practice. On several MAP inference problem instances from computer vision, this algorithm certifies that MAP solutions to all of these perturbations are very close to solutions of the original instance. These results taken together give a cohesive explanation for the good performance of "graph cuts" algorithms in practice. Every local expansion minimum is a global minimum in a small perturbation of the problem, and all of these global minima are close to the original solution.' 
volume: 139 URL: https://proceedings.mlr.press/v139/lang21a.html PDF: http://proceedings.mlr.press/v139/lang21a/lang21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lang21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hunter family: Lang - given: David family: Sontag - given: Aravindan family: Vijayaraghavan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 5990-5999 id: lang21a issued: date-parts: - 2021 - 7 - 1 firstpage: 5990 lastpage: 5999 published: 2021-07-01 00:00:00 +0000 - title: 'Efficient Message Passing for 0–1 ILPs with Binary Decision Diagrams' abstract: 'We present a message passing method for 0{–}1 integer linear programs. Our algorithm is based on a decomposition of the original problem into subproblems that are represented as binary decision diagrams. The resulting Lagrangean dual is solved iteratively by a series of efficient block coordinate ascent steps. Our method has linear iteration complexity in the size of the decomposition and can be effectively parallelized. The characteristics of our approach are desirable towards solving ever larger problems arising in structured prediction. We present experimental results on combinatorial problems from MAP inference for Markov Random Fields, quadratic assignment, discrete tomography and cell tracking for developmental biology and show promising performance.' volume: 139 URL: https://proceedings.mlr.press/v139/lange21a.html PDF: http://proceedings.mlr.press/v139/lange21a/lange21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lange21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jan-Hendrik family: Lange - given: Paul family: Swoboda editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6000-6010 id: lange21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6000 lastpage: 6010 published: 2021-07-01 00:00:00 +0000 - title: 'CountSketches, Feature Hashing and the Median of Three' abstract: 'In this paper, we revisit the classic CountSketch method, which is a sparse, random projection that transforms a (high-dimensional) Euclidean vector $v$ to a vector of dimension $(2t-1) s$, where $t, s > 0$ are integer parameters. It is known that a CountSketch allows estimating coordinates of $v$ with variance bounded by $\|v\|_2^2/s$. For $t > 1$, the estimator takes the median of $2t-1$ independent estimates, and the probability that the estimate is off by more than $2 \|v\|_2/\sqrt{s}$ is exponentially small in $t$. This suggests choosing $t$ to be logarithmic in a desired inverse failure probability. However, implementations of CountSketch often use a small, constant $t$. Previous work only predicts a constant factor improvement in this setting. Our main contribution is a new analysis of CountSketch, showing an improvement in variance to $O(\min\{\|v\|_1^2/s^2,\|v\|_2^2/s\})$ when $t > 1$. That is, the variance decreases proportionally to $s^{-2}$, asymptotically for large enough $s$.'
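The larsen21a abstract above analyses the classic CountSketch estimator; the sketch below is a minimal NumPy implementation of that estimator itself (2t-1 independent repetitions, random signs, and a median-of-(2t-1) coordinate estimate), not of the paper's new variance analysis. The bucket count s, t, and the toy vector are arbitrary.

```python
import numpy as np

def countsketch(v, s, t, seed=0):
    """Classic CountSketch of v: 2t-1 independent repetitions, each hashing
    coordinates into s buckets with random +/-1 signs. Returns the (2t-1, s)
    table plus the hash/sign tables needed to decode coordinates."""
    rng = np.random.default_rng(seed)
    r, d = 2 * t - 1, len(v)
    buckets = rng.integers(0, s, size=(r, d))        # h_j(i) in [s]
    signs = rng.choice([-1.0, 1.0], size=(r, d))     # g_j(i) in {-1, +1}
    table = np.zeros((r, s))
    for j in range(r):
        np.add.at(table[j], buckets[j], signs[j] * v)
    return table, buckets, signs

def estimate_coordinate(table, buckets, signs, i):
    """Estimate v_i as the median of the 2t-1 per-repetition estimates
    g_j(i) * table[j, h_j(i)]; the median drives the failure probability
    that is exponentially small in t."""
    r = table.shape[0]
    return float(np.median([signs[j, i] * table[j, buckets[j, i]] for j in range(r)]))

# Toy usage: a mostly-small vector with a few large coordinates.
v = np.zeros(10_000)
v[[3, 7, 42]] = [5.0, -2.0, 9.0]
v += np.random.default_rng(1).normal(scale=0.01, size=v.size)
table, buckets, signs = countsketch(v, s=64, t=2)    # t=2: median of three
print(estimate_coordinate(table, buckets, signs, 42), "vs true", v[42])
```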
volume: 139 URL: https://proceedings.mlr.press/v139/larsen21a.html PDF: http://proceedings.mlr.press/v139/larsen21a/larsen21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-larsen21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kasper Green family: Larsen - given: Rasmus family: Pagh - given: Jakub family: Tětek editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6011-6020 id: larsen21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6011 lastpage: 6020 published: 2021-07-01 00:00:00 +0000 - title: 'MorphVAE: Generating Neural Morphologies from 3D-Walks using a Variational Autoencoder with Spherical Latent Space' abstract: 'For the past century, the anatomy of a neuron has been considered one of its defining features: The shape of a neuron’s dendrites and axon fundamentally determines what other neurons it can connect to. These neurites have been described using mathematical tools e.g. in the context of cell type classification, but generative models of these structures have only rarely been proposed and are often computationally inefficient. Here we propose MorphVAE, a sequence-to-sequence variational autoencoder with spherical latent space as a generative model for neural morphologies. The model operates on walks within the tree structure of a neuron and can incorporate expert annotations on a subset of the data using semi-supervised learning. We develop our model on artificially generated toy data and evaluate its performance on dendrites of excitatory cells and axons of inhibitory cells of mouse motor cortex (M1) and dendrites of retinal ganglion cells. We show that the learned latent feature space allows for better cell type discrimination than other commonly used features. By sampling new walks from the latent space we can easily construct new morphologies with a specified degree of similarity to their reference neuron, providing an efficient generative model for neural morphologies.' volume: 139 URL: https://proceedings.mlr.press/v139/laturnus21a.html PDF: http://proceedings.mlr.press/v139/laturnus21a/laturnus21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-laturnus21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sophie C. family: Laturnus - given: Philipp family: Berens editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6021-6031 id: laturnus21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6021 lastpage: 6031 published: 2021-07-01 00:00:00 +0000 - title: 'Improved Regret Bound and Experience Replay in Regularized Policy Iteration' abstract: 'In this work, we study algorithms for learning in infinite-horizon undiscounted Markov decision processes (MDPs) with function approximation. We first show that the regret analysis of the Politex algorithm (a version of regularized policy iteration) can be sharpened from $O(T^{3/4})$ to $O(\sqrt{T})$ under nearly identical assumptions, and instantiate the bound with linear function approximation. Our result provides the first high-probability $O(\sqrt{T})$ regret bound for a computationally efficient algorithm in this setting. The exact implementation of Politex with neural network function approximation is inefficient in terms of memory and computation. 
Since our analysis suggests that we need to approximate the average of the action-value functions of past policies well, we propose a simple efficient implementation where we train a single Q-function on a replay buffer with past data. We show that this often leads to superior performance over other implementation choices, especially in terms of wall-clock time. Our work also provides a novel theoretical justification for using experience replay within policy iteration algorithms.' volume: 139 URL: https://proceedings.mlr.press/v139/lazic21a.html PDF: http://proceedings.mlr.press/v139/lazic21a/lazic21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lazic21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Nevena family: Lazic - given: Dong family: Yin - given: Yasin family: Abbasi-Yadkori - given: Csaba family: Szepesvari editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6032-6042 id: lazic21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6032 lastpage: 6042 published: 2021-07-01 00:00:00 +0000 - title: 'LAMDA: Label Matching Deep Domain Adaptation' abstract: 'Deep domain adaptation (DDA) approaches have recently been shown to perform better than their shallow rivals with better modeling capacity on complex domains (e.g., image, structural data, and sequential data). The underlying idea is to learn domain invariant representations on a latent space that can bridge the gap between source and target domains. Several theoretical studies have established insightful understanding and the benefit of learning domain invariant features; however, they are usually limited to the case where there is no label shift, hence hindering its applicability. In this paper, we propose and study a new challenging setting that allows us to use a Wasserstein distance (WS) to not only quantify the data shift but also to define the label shift directly. We further develop a theory to demonstrate that minimizing the WS of the data shift leads to closing the gap between the source and target data distributions on the latent space (e.g., an intermediate layer of a deep net), while still being able to quantify the label shift with respect to this latent space. Interestingly, our theory can consequently explain certain drawbacks of learning domain invariant features on the latent space. Finally, grounded on the results and guidance of our developed theory, we propose the Label Matching Deep Domain Adaptation (LAMDA) approach that outperforms baselines on real-world datasets for DA problems.' 
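The le21a (LAMDA) abstract uses a Wasserstein distance to quantify both data shift and label shift between source and target. Purely as an illustration of that measurement step (not the authors' deep domain adaptation method), the snippet below computes empirical 1-Wasserstein distances with scipy; treating class indices as ordered scalars for the label-shift distance is a simplification made here, and all distributions are synthetic.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Source vs. target samples of a 1-D latent feature: same shape, shifted mean.
source_feat = rng.normal(loc=0.0, scale=1.0, size=4000)
target_feat = rng.normal(loc=0.7, scale=1.0, size=4000)

# Source vs. target class frequencies (a crude way to express label shift).
source_labels = rng.choice([0, 1, 2], size=4000, p=[0.5, 0.3, 0.2])
target_labels = rng.choice([0, 1, 2], size=4000, p=[0.2, 0.3, 0.5])

# Empirical 1-Wasserstein distances quantifying the two kinds of shift.
print("data shift  (W1):", wasserstein_distance(source_feat, target_feat))
print("label shift (W1):", wasserstein_distance(source_labels, target_labels))
```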
volume: 139 URL: https://proceedings.mlr.press/v139/le21a.html PDF: http://proceedings.mlr.press/v139/le21a/le21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-le21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Trung family: Le - given: Tuan family: Nguyen - given: Nhat family: Ho - given: Hung family: Bui - given: Dinh family: Phung editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6043-6054 id: le21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6043 lastpage: 6054 published: 2021-07-01 00:00:00 +0000 - title: 'Gaussian Process-Based Real-Time Learning for Safety Critical Applications' abstract: 'The safe operation of physical systems typically relies on high-quality models. Since a continuous stream of data is generated during run-time, such models are often obtained through the application of Gaussian process regression because it provides guarantees on the prediction error. Due to its high computational complexity, Gaussian process regression must be used offline on batches of data, which prevents applications, where a fast adaptation through online learning is necessary to ensure safety. In order to overcome this issue, we propose the LoG-GP. It achieves a logarithmic update and prediction complexity in the number of training points through the aggregation of locally active Gaussian process models. Under weak assumptions on the aggregation scheme, it inherits safety guarantees from exact Gaussian process regression. These theoretical advantages are exemplarily exploited in the design of a safe and data-efficient, online-learning control policy. The efficiency and performance of the proposed real-time learning approach is demonstrated in a comparison to state-of-the-art methods.' volume: 139 URL: https://proceedings.mlr.press/v139/lederer21a.html PDF: http://proceedings.mlr.press/v139/lederer21a/lederer21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lederer21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Armin family: Lederer - given: Alejandro J Ordóñez family: Conejo - given: Korbinian A family: Maier - given: Wenxin family: Xiao - given: Jonas family: Umlauft - given: Sandra family: Hirche editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6055-6064 id: lederer21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6055 lastpage: 6064 published: 2021-07-01 00:00:00 +0000 - title: 'Sharing Less is More: Lifelong Learning in Deep Networks with Selective Layer Transfer' abstract: 'Effective lifelong learning across diverse tasks requires the transfer of diverse knowledge, yet transferring irrelevant knowledge may lead to interference and catastrophic forgetting. In deep networks, transferring the appropriate granularity of knowledge is as important as the transfer mechanism, and must be driven by the relationships among tasks. We first show that the lifelong learning performance of several current deep learning architectures can be significantly improved by transfer at the appropriate layers. We then develop an expectation-maximization (EM) method to automatically select the appropriate transfer configuration and optimize the task network weights. 
This EM-based selective transfer is highly effective, balancing transfer performance on all tasks with avoiding catastrophic forgetting, as demonstrated on three algorithms in several lifelong object classification scenarios.' volume: 139 URL: https://proceedings.mlr.press/v139/lee21a.html PDF: http://proceedings.mlr.press/v139/lee21a/lee21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lee21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Seungwon family: Lee - given: Sima family: Behpour - given: Eric family: Eaton editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6065-6075 id: lee21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6065 lastpage: 6075 published: 2021-07-01 00:00:00 +0000 - title: 'Fair Selective Classification Via Sufficiency' abstract: 'Selective classification is a powerful tool for decision-making in scenarios where mistakes are costly but abstentions are allowed. In general, by allowing a classifier to abstain, one can improve the performance of a model at the cost of reducing coverage and classifying fewer samples. However, recent work has shown, in some cases, that selective classification can magnify disparities between groups, and has illustrated this phenomenon on multiple real-world datasets. We prove that the sufficiency criterion can be used to mitigate these disparities by ensuring that selective classification increases performance on all groups, and introduce a method for mitigating the disparity in precision across the entire coverage scale based on this criterion. We then provide an upper bound on the conditional mutual information between the class label and sensitive attribute, conditioned on the learned features, which can be used as a regularizer to achieve fairer selective classification. The effectiveness of the method is demonstrated on the Adult, CelebA, Civil Comments, and CheXpert datasets.' volume: 139 URL: https://proceedings.mlr.press/v139/lee21b.html PDF: http://proceedings.mlr.press/v139/lee21b/lee21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lee21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Joshua K family: Lee - given: Yuheng family: Bu - given: Deepta family: Rajan - given: Prasanna family: Sattigeri - given: Rameswar family: Panda - given: Subhro family: Das - given: Gregory W family: Wornell editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6076-6086 id: lee21b issued: date-parts: - 2021 - 7 - 1 firstpage: 6076 lastpage: 6086 published: 2021-07-01 00:00:00 +0000 - title: 'On-the-fly Rectification for Robust Large-Vocabulary Topic Inference' abstract: 'Across many data domains, co-occurrence statistics about the joint appearance of objects are powerfully informative. By transforming unsupervised learning problems into decompositions of co-occurrence statistics, spectral algorithms provide transparent and efficient algorithms for posterior inference such as latent topic analysis and community detection. As object vocabularies grow, however, it becomes rapidly more expensive to store and run inference algorithms on co-occurrence statistics. 
Rectifying co-occurrence, the key process to uphold model assumptions, becomes increasingly more vital in the presence of rare terms, but current techniques cannot scale to large vocabularies. We propose novel methods that simultaneously compress and rectify co-occurrence statistics, scaling gracefully with the size of vocabulary and the dimension of latent space. We also present new algorithms learning latent variables from the compressed statistics, and verify that our methods perform comparably to previous approaches on both textual and non-textual data.' volume: 139 URL: https://proceedings.mlr.press/v139/lee21c.html PDF: http://proceedings.mlr.press/v139/lee21c/lee21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lee21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Moontae family: Lee - given: Sungjun family: Cho - given: Kun family: Dong - given: David family: Mimno - given: David family: Bindel editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6087-6097 id: lee21c issued: date-parts: - 2021 - 7 - 1 firstpage: 6087 lastpage: 6097 published: 2021-07-01 00:00:00 +0000 - title: 'Unsupervised Embedding Adaptation via Early-Stage Feature Reconstruction for Few-Shot Classification' abstract: 'We propose unsupervised embedding adaptation for the downstream few-shot classification task. Based on findings that deep neural networks learn to generalize before memorizing, we develop Early-Stage Feature Reconstruction (ESFR) — a novel adaptation scheme with feature reconstruction and dimensionality-driven early stopping that finds generalizable features. Incorporating ESFR consistently improves the performance of baseline methods on all standard settings, including the recently proposed transductive method. ESFR used in conjunction with the transductive method further achieves state-of-the-art performance on mini-ImageNet, tiered-ImageNet, and CUB; especially with 1.2%–2.0% improvements in accuracy over the previous best performing method on 1-shot setting.' volume: 139 URL: https://proceedings.mlr.press/v139/lee21d.html PDF: http://proceedings.mlr.press/v139/lee21d/lee21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lee21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dong Hoon family: Lee - given: Sae-Young family: Chung editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6098-6108 id: lee21d issued: date-parts: - 2021 - 7 - 1 firstpage: 6098 lastpage: 6108 published: 2021-07-01 00:00:00 +0000 - title: 'Continual Learning in the Teacher-Student Setup: Impact of Task Similarity' abstract: 'Continual learning{—}the ability to learn many tasks in sequence{—}is critical for artificial learning systems. Yet standard training methods for deep networks often suffer from catastrophic forgetting, where learning new tasks erases knowledge of the earlier tasks. While catastrophic forgetting labels the problem, the theoretical reasons for interference between tasks remain unclear. Here, we attempt to narrow this gap between theory and practice by studying continual learning in the teacher-student setup. We extend previous analytical work on two-layer networks in the teacher-student setup to multiple teachers.
Using each teacher to represent a different task, we investigate how the relationship between teachers affects the amount of forgetting and transfer exhibited by the student when the task switches. In line with recent work, we find that when tasks depend on similar features, intermediate task similarity leads to greatest forgetting. However, feature similarity is only one way in which tasks may be related. The teacher-student approach allows us to disentangle task similarity at the level of \emph{readouts} (hidden-to-output weights) as well as \emph{features} (input-to-hidden weights). We find a complex interplay between both types of similarity, initial transfer/forgetting rates, maximum transfer/forgetting, and the long-time (post-switch) amount of transfer/forgetting. Together, these results help illuminate the diverse factors contributing to catastrophic forgetting.' volume: 139 URL: https://proceedings.mlr.press/v139/lee21e.html PDF: http://proceedings.mlr.press/v139/lee21e/lee21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lee21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sebastian family: Lee - given: Sebastian family: Goldt - given: Andrew family: Saxe editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6109-6119 id: lee21e issued: date-parts: - 2021 - 7 - 1 firstpage: 6109 lastpage: 6119 published: 2021-07-01 00:00:00 +0000 - title: 'OptiDICE: Offline Policy Optimization via Stationary Distribution Correction Estimation' abstract: 'We consider the offline reinforcement learning (RL) setting where the agent aims to optimize the policy solely from the data without further environment interactions. In offline RL, the distributional shift becomes the primary source of difficulty, which arises from the deviation of the target policy being optimized from the behavior policy used for data collection. This typically causes overestimation of action values, which poses severe problems for model-free algorithms that use bootstrapping. To mitigate the problem, prior offline RL algorithms often used sophisticated techniques that encourage underestimation of action values, which introduces an additional set of hyperparameters that need to be tuned properly. In this paper, we present an offline RL algorithm that prevents overestimation in a more principled way. Our algorithm, OptiDICE, directly estimates the stationary distribution corrections of the optimal policy and does not rely on policy-gradients, unlike previous offline RL algorithms. Using an extensive set of benchmark datasets for offline RL, we show that OptiDICE performs competitively with the state-of-the-art methods.' 
volume: 139 URL: https://proceedings.mlr.press/v139/lee21f.html PDF: http://proceedings.mlr.press/v139/lee21f/lee21f.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lee21f.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jongmin family: Lee - given: Wonseok family: Jeon - given: Byungjun family: Lee - given: Joelle family: Pineau - given: Kee-Eung family: Kim editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6120-6130 id: lee21f issued: date-parts: - 2021 - 7 - 1 firstpage: 6120 lastpage: 6130 published: 2021-07-01 00:00:00 +0000 - title: 'SUNRISE: A Simple Unified Framework for Ensemble Learning in Deep Reinforcement Learning' abstract: 'Off-policy deep reinforcement learning (RL) has been successful in a range of challenging domains. However, standard off-policy RL algorithms can suffer from several issues, such as instability in Q-learning and balancing exploration and exploitation. To mitigate these issues, we present SUNRISE, a simple unified ensemble method, which is compatible with various off-policy RL algorithms. SUNRISE integrates two key ingredients: (a) ensemble-based weighted Bellman backups, which re-weight target Q-values based on uncertainty estimates from a Q-ensemble, and (b) an inference method that selects actions using the highest upper-confidence bounds for efficient exploration. By enforcing the diversity between agents using Bootstrap with random initialization, we show that these different ideas are largely orthogonal and can be fruitfully integrated, together further improving the performance of existing off-policy RL algorithms, such as Soft Actor-Critic and Rainbow DQN, for both continuous and discrete control tasks on both low-dimensional and high-dimensional environments.' volume: 139 URL: https://proceedings.mlr.press/v139/lee21g.html PDF: http://proceedings.mlr.press/v139/lee21g/lee21g.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lee21g.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kimin family: Lee - given: Michael family: Laskin - given: Aravind family: Srinivas - given: Pieter family: Abbeel editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6131-6141 id: lee21g issued: date-parts: - 2021 - 7 - 1 firstpage: 6131 lastpage: 6141 published: 2021-07-01 00:00:00 +0000 - title: 'Achieving Near Instance-Optimality and Minimax-Optimality in Stochastic and Adversarial Linear Bandits Simultaneously' abstract: 'In this work, we develop linear bandit algorithms that automatically adapt to different environments. By plugging a novel loss estimator into the optimization problem that characterizes the instance-optimal strategy, our first algorithm not only achieves nearly instance-optimal regret in stochastic environments, but also works in corrupted environments with additional regret being the amount of corruption, while the state-of-the-art (Li et al., 2019) achieves neither instance-optimality nor the optimal dependence on the corruption amount. Moreover, by equipping this algorithm with an adversarial component and carefully-designed testings, our second algorithm additionally enjoys minimax-optimal regret in completely adversarial environments, which is the first of this kind to our knowledge. 
Finally, all our guarantees hold with high probability, while existing instance-optimal guarantees only hold in expectation.' volume: 139 URL: https://proceedings.mlr.press/v139/lee21h.html PDF: http://proceedings.mlr.press/v139/lee21h/lee21h.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lee21h.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chung-Wei family: Lee - given: Haipeng family: Luo - given: Chen-Yu family: Wei - given: Mengxiao family: Zhang - given: Xiaojin family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6142-6151 id: lee21h issued: date-parts: - 2021 - 7 - 1 firstpage: 6142 lastpage: 6151 published: 2021-07-01 00:00:00 +0000 - title: 'PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training' abstract: 'Conveying complex objectives to reinforcement learning (RL) agents can often be difficult, involving meticulous design of reward functions that are sufficiently informative yet easy enough to provide. Human-in-the-loop RL methods allow practitioners to instead interactively teach agents through tailored feedback; however, such approaches have been challenging to scale since human feedback is very expensive. In this work, we aim to make this process more sample- and feedback-efficient. We present an off-policy, interactive RL algorithm that capitalizes on the strengths of both feedback and off-policy learning. Specifically, we learn a reward model by actively querying a teacher’s preferences between two clips of behavior and use it to train an agent. To enable off-policy learning, we relabel all the agent’s past experience when its reward model changes. We additionally show that pre-training our agents with unsupervised exploration substantially increases the mileage of its queries. We demonstrate that our approach is capable of learning tasks of higher complexity than previously considered by human-in-the-loop methods, including a variety of locomotion and robotic manipulation skills. We also show that our method is able to utilize real-time human feedback to effectively prevent reward exploitation and learn new behaviors that are difficult to specify with standard reward functions.' volume: 139 URL: https://proceedings.mlr.press/v139/lee21i.html PDF: http://proceedings.mlr.press/v139/lee21i/lee21i.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lee21i.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kimin family: Lee - given: Laura M family: Smith - given: Pieter family: Abbeel editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6152-6163 id: lee21i issued: date-parts: - 2021 - 7 - 1 firstpage: 6152 lastpage: 6163 published: 2021-07-01 00:00:00 +0000 - title: 'Near-Optimal Linear Regression under Distribution Shift' abstract: 'Transfer learning is essential when sufficient data comes from the source domain, with scarce labeled data from the target domain. We develop estimators that achieve minimax linear risk for linear regression problems under distribution shift. Our algorithms cover different transfer learning settings including covariate shift and model shift. 
We also consider when data are generated from either linear or general nonlinear models. We show that linear minimax estimators are within an absolute constant of the minimax risk even among nonlinear estimators for various source/target distributions.' volume: 139 URL: https://proceedings.mlr.press/v139/lei21a.html PDF: http://proceedings.mlr.press/v139/lei21a/lei21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lei21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Qi family: Lei - given: Wei family: Hu - given: Jason family: Lee editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6164-6174 id: lei21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6164 lastpage: 6174 published: 2021-07-01 00:00:00 +0000 - title: 'Stability and Generalization of Stochastic Gradient Methods for Minimax Problems' abstract: 'Many machine learning problems can be formulated as minimax problems such as Generative Adversarial Networks (GANs), AUC maximization and robust estimation, to mention but a few. A substantial amount of studies are devoted to studying the convergence behavior of their stochastic gradient-type algorithms. In contrast, there is relatively little work on understanding their generalization, i.e., how the learning models built from training examples would behave on test examples. In this paper, we provide a comprehensive generalization analysis of stochastic gradient methods for minimax problems under both convex-concave and nonconvex-nonconcave cases through the lens of algorithmic stability. We establish a quantitative connection between stability and several generalization measures both in expectation and with high probability. For the convex-concave setting, our stability analysis shows that stochastic gradient descent ascent attains optimal generalization bounds for both smooth and nonsmooth minimax problems. We also establish generalization bounds for both weakly-convex-weakly-concave and gradient-dominated problems. We report preliminary experimental results to verify our theory.' volume: 139 URL: https://proceedings.mlr.press/v139/lei21b.html PDF: http://proceedings.mlr.press/v139/lei21b/lei21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lei21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yunwen family: Lei - given: Zhenhuan family: Yang - given: Tianbao family: Yang - given: Yiming family: Ying editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6175-6186 id: lei21b issued: date-parts: - 2021 - 7 - 1 firstpage: 6175 lastpage: 6186 published: 2021-07-01 00:00:00 +0000 - title: 'Scalable Evaluation of Multi-Agent Reinforcement Learning with Melting Pot' abstract: 'Existing evaluation suites for multi-agent reinforcement learning (MARL) do not assess generalization to novel situations as their primary objective (unlike supervised learning benchmarks). Our contribution, Melting Pot, is a MARL evaluation suite that fills this gap and uses reinforcement learning to reduce the human labor required to create novel test scenarios. This works because one agent’s behavior constitutes (part of) another agent’s environment. 
To demonstrate scalability, we have created over 80 unique test scenarios covering a broad range of research topics such as social dilemmas, reciprocity, resource sharing, and task partitioning. We apply these test scenarios to standard MARL training algorithms, and demonstrate how Melting Pot reveals weaknesses not apparent from training performance alone.' volume: 139 URL: https://proceedings.mlr.press/v139/leibo21a.html PDF: http://proceedings.mlr.press/v139/leibo21a/leibo21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-leibo21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Joel Z family: Leibo - given: Edgar A family: Dueñez-Guzman - given: Alexander family: Vezhnevets - given: John P family: Agapiou - given: Peter family: Sunehag - given: Raphael family: Koster - given: Jayd family: Matyas - given: Charlie family: Beattie - given: Igor family: Mordatch - given: Thore family: Graepel editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6187-6199 id: leibo21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6187 lastpage: 6199 published: 2021-07-01 00:00:00 +0000 - title: 'Better Training using Weight-Constrained Stochastic Dynamics' abstract: 'We employ constraints to control the parameter space of deep neural networks throughout training. The use of customised, appropriately designed constraints can reduce the vanishing/exploding gradients problem, improve smoothness of classification boundaries, control weight magnitudes and stabilize deep neural networks, and thus enhance the robustness of training algorithms and the generalization capabilities of neural networks. We provide a general approach to efficiently incorporate constraints into a stochastic gradient Langevin framework, allowing enhanced exploration of the loss landscape. We also present specific examples of constrained training methods motivated by orthogonality preservation for weight matrices and explicit weight normalizations. Discretization schemes are provided both for the overdamped formulation of Langevin dynamics and the underdamped form, in which momenta further improve sampling efficiency. These optimisation schemes can be used directly, without needing to adapt neural network architecture design choices or to modify the objective with regularization terms, and see performance improvements in classification tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/leimkuhler21a.html PDF: http://proceedings.mlr.press/v139/leimkuhler21a/leimkuhler21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-leimkuhler21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Benedict family: Leimkuhler - given: Tiffany J family: Vlaar - given: Timothée family: Pouchon - given: Amos family: Storkey editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6200-6211 id: leimkuhler21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6200 lastpage: 6211 published: 2021-07-01 00:00:00 +0000 - title: 'Globally-Robust Neural Networks' abstract: 'The threat of adversarial examples has motivated work on training certifiably robust neural networks to facilitate efficient verification of local robustness at inference time. 
We formalize a notion of global robustness, which captures the operational properties of on-line local robustness certification while yielding a natural learning objective for robust training. We show that widely-used architectures can be easily adapted to this objective by incorporating efficient global Lipschitz bounds into the network, yielding certifiably-robust models by construction that achieve state-of-the-art verifiable accuracy. Notably, this approach requires significantly less time and memory than recent certifiable training methods, and leads to negligible costs when certifying points on-line; for example, our evaluation shows that it is possible to train a large robust Tiny-Imagenet model in a matter of hours. Our models effectively leverage inexpensive global Lipschitz bounds for real-time certification, despite prior suggestions that tighter local bounds are needed for good performance; we posit this is possible because our models are specifically trained to achieve tighter global bounds. Namely, we prove that the maximum achievable verifiable accuracy for a given dataset is not improved by using a local bound.' volume: 139 URL: https://proceedings.mlr.press/v139/leino21a.html PDF: http://proceedings.mlr.press/v139/leino21a/leino21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-leino21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Klas family: Leino - given: Zifan family: Wang - given: Matt family: Fredrikson editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6212-6222 id: leino21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6212 lastpage: 6222 published: 2021-07-01 00:00:00 +0000 - title: 'Learning to Price Against a Moving Target' abstract: 'In the Learning to Price setting, a seller posts prices over time with the goal of maximizing revenue while learning the buyer’s valuation. This problem is very well understood when values are stationary (fixed or iid). Here we study the problem where the buyer’s value is a moving target, i.e., they change over time either by a stochastic process or adversarially with bounded variation. In either case, we provide matching upper and lower bounds on the optimal revenue loss. Since the target is moving, any information learned soon becomes out-dated, which forces the algorithms to keep switching between exploring and exploiting phases.' volume: 139 URL: https://proceedings.mlr.press/v139/leme21a.html PDF: http://proceedings.mlr.press/v139/leme21a/leme21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-leme21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Renato Paes family: Leme - given: Balasubramanian family: Sivan - given: Yifeng family: Teng - given: Pratik family: Worah editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6223-6232 id: leme21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6223 lastpage: 6232 published: 2021-07-01 00:00:00 +0000 - title: 'SigGPDE: Scaling Sparse Gaussian Processes on Sequential Data' abstract: 'Making predictions and quantifying their uncertainty when the input data is sequential is a fundamental learning challenge, recently attracting increasing attention. 
We develop SigGPDE, a new scalable sparse variational inference framework for Gaussian Processes (GPs) on sequential data. Our contribution is twofold. First, we construct inducing variables underpinning the sparse approximation so that the resulting evidence lower bound (ELBO) does not require any matrix inversion. Second, we show that the gradients of the GP signature kernel are solutions of a hyperbolic partial differential equation (PDE). This theoretical insight allows us to build an efficient back-propagation algorithm to optimize the ELBO. We showcase the significant computational gains of SigGPDE compared to existing methods, while achieving state-of-the-art performance for classification tasks on large datasets of up to 1 million multivariate time series.' volume: 139 URL: https://proceedings.mlr.press/v139/lemercier21a.html PDF: http://proceedings.mlr.press/v139/lemercier21a/lemercier21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lemercier21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Maud family: Lemercier - given: Cristopher family: Salvi - given: Thomas family: Cass - given: Edwin V. family: Bonilla - given: Theodoros family: Damoulas - given: Terry J family: Lyons editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6233-6242 id: lemercier21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6233 lastpage: 6242 published: 2021-07-01 00:00:00 +0000 - title: 'Strategic Classification Made Practical' abstract: 'Strategic classification regards the problem of learning in settings where users can strategically modify their features to improve outcomes. This setting applies broadly, and has received much recent attention. But despite its practical significance, work in this space has so far been predominantly theoretical. In this paper we present a learning framework for strategic classification that is practical. Our approach directly minimizes the “strategic” empirical risk, which we achieve by differentiating through the strategic response of users. This provides flexibility that allows us to extend beyond the original problem formulation and towards more realistic learning scenarios. A series of experiments demonstrates the effectiveness of our approach on various learning settings.' volume: 139 URL: https://proceedings.mlr.press/v139/levanon21a.html PDF: http://proceedings.mlr.press/v139/levanon21a/levanon21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-levanon21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sagi family: Levanon - given: Nir family: Rosenfeld editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6243-6253 id: levanon21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6243 lastpage: 6253 published: 2021-07-01 00:00:00 +0000 - title: 'Improved, Deterministic Smoothing for L_1 Certified Robustness' abstract: 'Randomized smoothing is a general technique for computing sample-dependent robustness guarantees against adversarial attacks for deep classifiers. Prior works on randomized smoothing against L_1 adversarial attacks use additive smoothing noise and provide probabilistic robustness guarantees. 
In this work, we propose a non-additive and deterministic smoothing method, Deterministic Smoothing with Splitting Noise (DSSN). To develop DSSN, we first develop SSN, a randomized method which involves generating each noisy smoothing sample by first randomly splitting the input space and then returning a representation of the center of the subdivision occupied by the input sample. In contrast to uniform additive smoothing, the SSN certification does not require the random noise components used to be independent. Thus, smoothing can be done effectively in just one dimension and can therefore be efficiently derandomized for quantized data (e.g., images). To the best of our knowledge, this is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model while allowing for an arbitrary classifier (i.e., a deep model) to be used as a base classifier and without requiring an exponential number of smoothing samples. On CIFAR-10 and ImageNet datasets, we provide substantially larger L_1 robustness certificates compared to prior works, establishing a new state-of-the-art. The determinism of our method also leads to significantly faster certificate computation. Code is available at: https://github.com/alevine0/smoothingSplittingNoise.' volume: 139 URL: https://proceedings.mlr.press/v139/levine21a.html PDF: http://proceedings.mlr.press/v139/levine21a/levine21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-levine21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Alexander J family: Levine - given: Soheil family: Feizi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6254-6264 id: levine21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6254 lastpage: 6264 published: 2021-07-01 00:00:00 +0000 - title: 'BASE Layers: Simplifying Training of Large, Sparse Models' abstract: 'We introduce a new balanced assignment of experts (BASE) layer for large language models that greatly simplifies existing high capacity sparse layers. Sparse layers can dramatically improve the efficiency of training and inference by routing each token to specialized expert modules that contain only a small fraction of the model parameters. However, it can be difficult to learn balanced routing functions that make full use of the available experts; existing approaches typically use routing heuristics or auxiliary expert-balancing loss functions. In contrast, we formulate token-to-expert allocation as a linear assignment problem, allowing an optimal assignment in which each expert receives an equal number of tokens. This optimal assignment scheme improves efficiency by guaranteeing balanced compute loads, and also simplifies training by not requiring any new hyperparameters or auxiliary losses. Code is publicly released.' 
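The balanced token-to-expert routing described in the BASE-layers abstract above can be illustrated with a tiny linear-assignment sketch; this is not the authors' implementation, and the toy affinity scores, the sizes, and the column-replication trick used to enforce equal expert capacity are assumptions made purely for the example.

```python
# Minimal sketch (not the BASE-layers implementation): route tokens to experts
# so that every expert receives the same number of tokens, by solving a
# linear assignment problem on token-to-expert affinity scores.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_tokens, n_experts = 8, 4            # toy sizes; assumes n_tokens % n_experts == 0
capacity = n_tokens // n_experts      # tokens per expert under a balanced assignment

scores = rng.normal(size=(n_tokens, n_experts))   # affinity of each token for each expert

# Replicate each expert column `capacity` times so the problem becomes square,
# then maximise total affinity (minimise negated scores).
expanded = np.repeat(scores, capacity, axis=1)     # shape (n_tokens, n_experts * capacity)
rows, cols = linear_sum_assignment(-expanded)
assignment = cols // capacity                      # map replicated columns back to experts

print(assignment)                                  # expert index per token
print(np.bincount(assignment, minlength=n_experts))  # every expert gets exactly `capacity` tokens
```

Replicating each expert column `capacity` times makes the cost matrix square, so the assignment solver is forced to hand every expert exactly `capacity` tokens.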
volume: 139 URL: https://proceedings.mlr.press/v139/lewis21a.html PDF: http://proceedings.mlr.press/v139/lewis21a/lewis21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lewis21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mike family: Lewis - given: Shruti family: Bhosale - given: Tim family: Dettmers - given: Naman family: Goyal - given: Luke family: Zettlemoyer editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6265-6274 id: lewis21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6265 lastpage: 6274 published: 2021-07-01 00:00:00 +0000 - title: 'Run-Sort-ReRun: Escaping Batch Size Limitations in Sliced Wasserstein Generative Models' abstract: 'When training an implicit generative model, ideally one would like the generator to reproduce all the different modes and subtleties of the target distribution. Naturally, when comparing two empirical distributions, the larger the sample population, the more these statistical nuances can be captured. However, existing objective functions are computationally constrained in the amount of samples they can consider by the memory required to process a batch of samples. In this paper, we build upon recent progress in sliced Wasserstein distances, a family of differentiable metrics for distribution discrepancy based on the Optimal Transport paradigm. We introduce a procedure to train these distances with virtually any batch size, allowing the discrepancy measure to capture richer statistics and better approximating the distance between the underlying continuous distributions. As an example, we demonstrate the matching of the distribution of Inception features with batches of tens of thousands of samples, achieving FID scores that outperform state-of-the-art implicit generative models.' volume: 139 URL: https://proceedings.mlr.press/v139/lezama21a.html PDF: http://proceedings.mlr.press/v139/lezama21a/lezama21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lezama21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jose family: Lezama - given: Wei family: Chen - given: Qiang family: Qiu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6275-6285 id: lezama21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6275 lastpage: 6285 published: 2021-07-01 00:00:00 +0000 - title: 'PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization' abstract: 'In this paper, we propose a novel stochastic gradient estimator—ProbAbilistic Gradient Estimator (PAGE)—for nonconvex optimization. PAGE is easy to implement as it is designed via a small adjustment to vanilla SGD: in each iteration, PAGE uses the vanilla minibatch SGD update with probability $p_t$ or reuses the previous gradient with a small adjustment, at a much lower computational cost, with probability $1-p_t$. We give a simple formula for the optimal choice of $p_t$. Moreover, we prove the first tight lower bound $\Omega(n+\frac{\sqrt{n}}{\epsilon^2})$ for nonconvex finite-sum problems, which also leads to a tight lower bound $\Omega(b+\frac{\sqrt{b}}{\epsilon^2})$ for nonconvex online problems, where $b:= \min\{\frac{\sigma^2}{\epsilon^2}, n\}$. 
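As a concrete reading of the PAGE update rule quoted above, here is a minimal NumPy sketch on a toy least-squares finite sum; the problem instance, step size, minibatch sizes, and the fixed probability p are illustrative assumptions rather than the paper's experimental setup.

```python
# Minimal sketch of the PAGE gradient estimator on a toy least-squares finite sum:
# with probability p take a fresh minibatch gradient, otherwise reuse the previous
# estimate plus a cheap minibatch correction between the current and previous iterates.
import numpy as np

rng = np.random.default_rng(0)
n, d = 256, 10
A, y = rng.normal(size=(n, d)), rng.normal(size=n)

def grad(idx, x):
    """Minibatch gradient of (1/2n) * ||A x - y||^2 restricted to rows `idx`."""
    Ai = A[idx]
    return Ai.T @ (Ai @ x - y[idx]) / len(idx)

x = np.zeros(d)
b, b_small, p, eta = 64, 8, 0.25, 0.1
g = grad(rng.choice(n, b, replace=False), x)        # initial minibatch gradient estimate

for _ in range(200):
    x_prev, x = x, x - eta * g
    if rng.random() < p:                            # fresh minibatch gradient
        g = grad(rng.choice(n, b, replace=False), x)
    else:                                           # cheap correction reusing the old estimate
        idx = rng.choice(n, b_small, replace=False)
        g = g + grad(idx, x) - grad(idx, x_prev)

print(0.5 * np.mean((A @ x - y) ** 2))              # objective value after training
```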
Then, we show that PAGE obtains the optimal convergence results $O(n+\frac{\sqrt{n}}{\epsilon^2})$ (finite-sum) and $O(b+\frac{\sqrt{b}}{\epsilon^2})$ (online) matching our lower bounds for both nonconvex finite-sum and online problems. Moreover, we show that for nonconvex functions satisfying the Polyak-Łojasiewicz (PL) condition, PAGE can automatically switch to a faster linear convergence rate $O(\cdot\log \frac{1}{\epsilon})$. Finally, we conduct several deep learning experiments (e.g., LeNet, VGG, ResNet) on real datasets in PyTorch showing that PAGE not only converges much faster than SGD in training but also achieves higher test accuracy, validating the optimal theoretical results and confirming the practical superiority of PAGE.' volume: 139 URL: https://proceedings.mlr.press/v139/li21a.html PDF: http://proceedings.mlr.press/v139/li21a/li21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zhize family: Li - given: Hongyan family: Bao - given: Xiangliang family: Zhang - given: Peter family: Richtarik editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6286-6295 id: li21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6286 lastpage: 6295 published: 2021-07-01 00:00:00 +0000 - title: 'Tightening the Dependence on Horizon in the Sample Complexity of Q-Learning' abstract: 'Q-learning, which seeks to learn the optimal Q-function of a Markov decision process (MDP) in a model-free fashion, lies at the heart of reinforcement learning. Focusing on the synchronous setting (such that independent samples for all state-action pairs are queried via a generative model in each iteration), substantial progress has been made recently towards understanding the sample efficiency of Q-learning. To yield an entrywise $\varepsilon$-accurate estimate of the optimal Q-function, state-of-the-art theory requires at least an order of $\frac{|S||A|}{(1-\gamma)^5\varepsilon^{2}}$ samples in the infinite-horizon $\gamma$-discounted setting. In this work, we sharpen the sample complexity of synchronous Q-learning to the order of $\frac{|S||A|}{(1-\gamma)^4\varepsilon^2}$ (up to some logarithmic factor) for any $0<\varepsilon <1$, leading to an order-wise improvement in $\frac{1}{1-\gamma}$. Analogous results are derived for finite-horizon MDPs as well. Notably, our sample complexity analysis unveils the effectiveness of vanilla Q-learning, which matches that of speedy Q-learning without requiring extra computation and storage. Our result is obtained by identifying novel error decompositions and recursion relations, which might shed light on how to study other variants of Q-learning.'
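For readers less familiar with the synchronous setting analysed in the Q-learning abstract above, the following sketch runs vanilla synchronous Q-learning on a small random MDP with a generative model; the MDP, learning rate, and iteration budget are made up for illustration.

```python
# Minimal sketch of synchronous Q-learning with a generative model: in every
# iteration, each state-action pair draws one fresh next-state sample and
# performs the standard Q-learning update.
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma, eta = 5, 3, 0.9, 0.1

P = rng.dirichlet(np.ones(S), size=(S, A))   # transition kernel P[s, a] over next states
R = rng.uniform(size=(S, A))                 # deterministic rewards for simplicity
Q = np.zeros((S, A))

for _ in range(2000):
    for s in range(S):
        for a in range(A):
            s_next = rng.choice(S, p=P[s, a])                # generative-model sample
            target = R[s, a] + gamma * Q[s_next].max()
            Q[s, a] = (1 - eta) * Q[s, a] + eta * target     # synchronous update

print(Q.max(axis=1))   # estimated optimal value per state
```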
volume: 139 URL: https://proceedings.mlr.press/v139/li21b.html PDF: http://proceedings.mlr.press/v139/li21b/li21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Gen family: Li - given: Changxiao family: Cai - given: Yuxin family: Chen - given: Yuantao family: Gu - given: Yuting family: Wei - given: Yuejie family: Chi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6296-6306 id: li21b issued: date-parts: - 2021 - 7 - 1 firstpage: 6296 lastpage: 6306 published: 2021-07-01 00:00:00 +0000 - title: 'Winograd Algorithm for AdderNet' abstract: 'Adder neural network (AdderNet) is a new kind of deep model that replaces the original massive multiplications in convolutions by additions while preserving the high performance. Since the hardware complexity of additions is much lower than that of multiplications, the overall energy consumption is thus reduced significantly. To further optimize the hardware overhead of using AdderNet, this paper studies the Winograd algorithm, which is a widely used fast algorithm for accelerating convolution and saving computational costs. Unfortunately, the conventional Winograd algorithm cannot be directly applied to AdderNets since the distributive law in multiplication is not valid for the l1-norm. Therefore, we replace the element-wise multiplication in the Winograd equation by additions and then develop a new set of transform matrices that can enhance the representation ability of output features to maintain the performance. Moreover, we propose the l2-to-l1 training strategy to mitigate the negative impacts caused by formal inconsistency. Experimental results on both FPGA and benchmarks show that the new method can further reduce the energy consumption without affecting the accuracy of the original AdderNet.' volume: 139 URL: https://proceedings.mlr.press/v139/li21c.html PDF: http://proceedings.mlr.press/v139/li21c/li21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Wenshuo family: Li - given: Hanting family: Chen - given: Mingqiang family: Huang - given: Xinghao family: Chen - given: Chunjing family: Xu - given: Yunhe family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6307-6315 id: li21c issued: date-parts: - 2021 - 7 - 1 firstpage: 6307 lastpage: 6315 published: 2021-07-01 00:00:00 +0000 - title: 'A Free Lunch From ANN: Towards Efficient, Accurate Spiking Neural Networks Calibration' abstract: 'Spiking Neural Network (SNN) has been recognized as one of the next generation of neural networks. Conventionally, SNN can be converted from a pre-trained ANN by only replacing the ReLU activation with spike activation while keeping the parameters intact. Perhaps surprisingly, in this work we show that a proper way to calibrate the parameters during the conversion of ANN to SNN can bring significant improvements. We introduce SNN Calibration, a cheap but extraordinarily effective method by leveraging the knowledge within a pre-trained Artificial Neural Network (ANN).
We begin by theoretically analyzing the conversion error and its propagation through layers, and then propose a calibration algorithm that corrects the error layer by layer. The calibration requires only a handful of training samples and several minutes to finish. Moreover, our calibration algorithm can produce SNNs with state-of-the-art architectures, including MobileNet and RegNet, on the large-scale ImageNet dataset. Extensive experiments demonstrate the effectiveness and efficiency of our algorithm. For example, our advanced pipeline can improve top-1 accuracy by up to 69% over baselines when converting MobileNet on ImageNet. Codes are released at https://github.com/yhhhli/SNN_Calibration.' volume: 139 URL: https://proceedings.mlr.press/v139/li21d.html PDF: http://proceedings.mlr.press/v139/li21d/li21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yuhang family: Li - given: Shikuang family: Deng - given: Xin family: Dong - given: Ruihao family: Gong - given: Shi family: Gu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6316-6325 id: li21d issued: date-parts: - 2021 - 7 - 1 firstpage: 6316 lastpage: 6325 published: 2021-07-01 00:00:00 +0000 - title: 'Privacy-Preserving Feature Selection with Secure Multiparty Computation' abstract: 'Existing work on privacy-preserving machine learning with Secure Multiparty Computation (MPC) is almost exclusively focused on model training and on inference with trained models, thereby overlooking the important data pre-processing stage. In this work, we propose the first MPC based protocol for private feature selection based on the filter method, which is independent of model training, and can be used in combination with any MPC protocol to rank features. We propose an efficient feature scoring protocol based on Gini impurity to this end. To demonstrate the feasibility of our approach for practical data science, we perform experiments with the proposed MPC protocols for feature selection in a commonly used machine-learning-as-a-service configuration where computations are outsourced to multiple servers, with semi-honest and with malicious adversaries. Regarding effectiveness, we show that secure feature selection with the proposed protocols improves the accuracy of classifiers on a variety of real-world data sets, without leaking information about the feature values or even which features were selected. Regarding efficiency, we document runtimes ranging from several seconds to an hour for our protocols to finish, depending on the size of the data set and the security settings.'
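As a plaintext reference point for the filter-style, Gini-impurity feature scoring mentioned in the abstract above (none of the MPC machinery appears here), a small sketch; the synthetic binary data and the binary-split scoring rule are assumptions of this example.

```python
# Minimal plaintext sketch of filter-method feature scoring with Gini impurity
# (no secure multiparty computation here): lower weighted impurity => better feature.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 6
X = rng.integers(0, 2, size=(n, d))                  # toy binary features
y = (X[:, 0] ^ (rng.random(n) < 0.1)).astype(int)    # label mostly driven by feature 0

def gini(labels):
    if len(labels) == 0:
        return 0.0
    p = np.bincount(labels, minlength=2) / len(labels)
    return 1.0 - np.sum(p ** 2)

def gini_score(feature, labels):
    """Weighted Gini impurity after splitting on a binary feature."""
    left, right = labels[feature == 0], labels[feature == 1]
    return (len(left) * gini(left) + len(right) * gini(right)) / len(labels)

scores = [gini_score(X[:, j], y) for j in range(d)]
print(np.argsort(scores))    # best (lowest-impurity) features first
```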
volume: 139 URL: https://proceedings.mlr.press/v139/li21e.html PDF: http://proceedings.mlr.press/v139/li21e/li21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xiling family: Li - given: Rafael family: Dowsley - given: Martine family: De Cock editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6326-6336 id: li21e issued: date-parts: - 2021 - 7 - 1 firstpage: 6326 lastpage: 6336 published: 2021-07-01 00:00:00 +0000 - title: 'Theory of Spectral Method for Union of Subspaces-Based Random Geometry Graph' abstract: 'Spectral method is a commonly used scheme to cluster data points lying close to Union of Subspaces, a task known as Subspace Clustering. The typical usage is to construct a Random Geometry Graph first and then apply spectral method to the graph to obtain clustering result. The latter step has been coined the name Spectral Clustering. As far as we know, in spite of the significance of both steps in spectral-method-based Subspace Clustering, all existing theoretical results focus on the first step of constructing the graph, but ignore the final step to correct false connections through spectral clustering. This paper establishes a theory to show the power of this method for the first time, in which we demonstrate the mechanism of spectral clustering by analyzing a simplified algorithm under the widely used semi-random model. Based on this theory, we prove the efficiency of Subspace Clustering in fairly broad conditions. The insights and analysis techniques developed in this paper might also have implications for other random graph problems.' volume: 139 URL: https://proceedings.mlr.press/v139/li21f.html PDF: http://proceedings.mlr.press/v139/li21f/li21f.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21f.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Gen family: Li - given: Yuantao family: Gu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6337-6345 id: li21f issued: date-parts: - 2021 - 7 - 1 firstpage: 6337 lastpage: 6345 published: 2021-07-01 00:00:00 +0000 - title: 'MURAL: Meta-Learning Uncertainty-Aware Rewards for Outcome-Driven Reinforcement Learning' abstract: 'Exploration in reinforcement learning is, in general, a challenging problem. A common technique to make learning easier is providing demonstrations from a human supervisor, but such demonstrations can be expensive and time-consuming to acquire. In this work, we study a more tractable class of reinforcement learning problems defined simply by examples of successful outcome states, which can be much easier to provide while still making the exploration problem more tractable. In this problem setting, the reward function can be obtained automatically by training a classifier to categorize states as successful or not. However, as we will show, this requires the classifier to make uncertainty-aware predictions that are very difficult using standard techniques for training deep networks. 
To address this, we propose a novel mechanism for obtaining calibrated uncertainty based on an amortized technique for computing the normalized maximum likelihood (NML) distribution, leveraging tools from meta-learning to make this distribution tractable. We show that the resulting algorithm has a number of intriguing connections to both count-based exploration methods and prior algorithms for learning reward functions, while also providing more effective guidance towards the goal. We demonstrate that our algorithm solves a number of challenging navigation and robotic manipulation tasks which prove difficult or impossible for prior methods.' volume: 139 URL: https://proceedings.mlr.press/v139/li21g.html PDF: http://proceedings.mlr.press/v139/li21g/li21g.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21g.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kevin family: Li - given: Abhishek family: Gupta - given: Ashwin family: Reddy - given: Vitchyr H family: Pong - given: Aurick family: Zhou - given: Justin family: Yu - given: Sergey family: Levine editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6346-6356 id: li21g issued: date-parts: - 2021 - 7 - 1 firstpage: 6346 lastpage: 6356 published: 2021-07-01 00:00:00 +0000 - title: 'Ditto: Fair and Robust Federated Learning Through Personalization' abstract: 'Fairness and robustness are two important concerns for federated learning systems. In this work, we identify that robustness to data and model poisoning attacks and fairness, measured as the uniformity of performance across devices, are competing constraints in statistically heterogeneous networks. To address these constraints, we propose employing a simple, general framework for personalized federated learning, Ditto, that can inherently provide fairness and robustness benefits, and develop a scalable solver for it. Theoretically, we analyze the ability of Ditto to achieve fairness and robustness simultaneously on a class of linear problems. Empirically, across a suite of federated datasets, we show that Ditto not only achieves competitive performance relative to recent personalization methods, but also enables more accurate, robust, and fair models relative to state-of-the-art fair or robust baselines.' volume: 139 URL: https://proceedings.mlr.press/v139/li21h.html PDF: http://proceedings.mlr.press/v139/li21h/li21h.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21h.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tian family: Li - given: Shengyuan family: Hu - given: Ahmad family: Beirami - given: Virginia family: Smith editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6357-6368 id: li21h issued: date-parts: - 2021 - 7 - 1 firstpage: 6357 lastpage: 6368 published: 2021-07-01 00:00:00 +0000 - title: 'Quantization Algorithms for Random Fourier Features' abstract: 'The method of random projection (RP) is the standard technique for dimensionality reduction, approximate near neighbor search, compressed sensing, etc., which provides a simple and effective scheme for approximating pairwise inner products and Euclidean distances in massive data. 
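The random-projection scheme summarised in the sentence above can be stated in a few lines; this is the textbook Gaussian projection, shown purely to illustrate how projected inner products approximate the originals, not the quantization algorithm the paper develops.

```python
# Minimal sketch of random projection (RP): a k-dimensional Gaussian projection
# whose inner products approximate the original high-dimensional inner products.
import numpy as np

rng = np.random.default_rng(0)
d, k = 1000, 64
x, y = rng.normal(size=d), rng.normal(size=d)

R = rng.normal(size=(k, d)) / np.sqrt(k)   # scaled Gaussian projection matrix
zx, zy = R @ x, R @ y

print(x @ y, zx @ zy)   # the projected inner product is an unbiased estimate of the original
```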
Closely related to RP, the method of random Fourier features (RFF) has also become popular for approximating the (nonlinear) Gaussian kernel. RFF applies a specific nonlinear transformation on the projected data from RP. In practice, using the Gaussian kernel often leads to better performance than the linear kernel (inner product). After random projections, quantization is an important step for efficient data storage, computation and transmission. Quantization for RP has been extensively studied in the literature. In this paper, we focus on developing quantization algorithms for RFF. The task is in a sense challenging due to the tuning parameter $\gamma$ in the Gaussian kernel. For example, the quantizer and the quantized data might be tied to each specific Gaussian kernel parameter $\gamma$. Our contribution begins with the analysis on the probability distributions of RFF, and an interesting discovery that the marginal distribution of RFF is free of the parameter $\gamma$. This significantly simplifies the design of the Lloyd-Max (LM) quantization scheme for RFF in that there would be only one LM quantizer (regardless of $\gamma$). Detailed theoretical analysis is provided on the kernel estimators and approximation error, and experiments confirm the effectiveness and efficiency of the proposed method.' volume: 139 URL: https://proceedings.mlr.press/v139/li21i.html PDF: http://proceedings.mlr.press/v139/li21i/li21i.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21i.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xiaoyun family: Li - given: Ping family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6369-6380 id: li21i issued: date-parts: - 2021 - 7 - 1 firstpage: 6369 lastpage: 6380 published: 2021-07-01 00:00:00 +0000 - title: 'Approximate Group Fairness for Clustering' abstract: 'We incorporate group fairness into the algorithmic centroid clustering problem, where $k$ centers are to be located to serve $n$ agents distributed in a metric space. We refine the notion of proportional fairness proposed in [Chen et al., ICML 2019] as {\em core fairness}. A $k$-clustering is in the core if no coalition containing at least $n/k$ agents can strictly decrease their total distance by deviating to a new center together. Our solution concept is motivated by the situation where agents are able to coordinate and utilities are transferable. A string of existence, hardness and approximability results is provided. Particularly, we propose two dimensions to relax core requirements: one is on the degree of distance improvement, and the other is on the size of deviating coalition. For both relaxations and their combination, we study the extent to which relaxed core fairness can be satisfied in metric spaces including line, tree and general metric space, and design approximation algorithms accordingly. We also conduct experiments on synthetic and real-world data to examine the performance of our algorithms.' 
volume: 139 URL: https://proceedings.mlr.press/v139/li21j.html PDF: http://proceedings.mlr.press/v139/li21j/li21j.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21j.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Bo family: Li - given: Lijun family: Li - given: Ankang family: Sun - given: Chenhao family: Wang - given: Yingfan family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6381-6391 id: li21j issued: date-parts: - 2021 - 7 - 1 firstpage: 6381 lastpage: 6391 published: 2021-07-01 00:00:00 +0000 - title: 'Sharper Generalization Bounds for Clustering' abstract: 'Existing generalization analysis of clustering mainly focuses on specific instantiations, such as (kernel) $k$-means, and a unified framework for studying clustering performance is still lacking. Besides, the existing excess clustering risk bounds are mostly of order $\mathcal{O}(K/\sqrt{n})$ provided that the underlying distribution has bounded support, where $n$ is the sample size and $K$ is the number of clusters, or of order $\mathcal{O}(K^2/n)$ under strong assumptions on the underlying distribution, and these assumptions are hard to verify in general. In this paper, we propose a unified clustering learning framework and investigate its excess risk bounds, obtaining state-of-the-art upper bounds under mild assumptions. Specifically, we derive sharper bounds of order $\mathcal{O}(K^2/n)$ under mild assumptions on the covering number of the hypothesis spaces, and these assumptions are easy to verify. Moreover, for hard clustering schemes, such as (kernel) $k$-means, if we just assume the hypothesis functions to be bounded, we improve the upper bounds from the order $\mathcal{O}(K/\sqrt{n})$ to $\mathcal{O}(\sqrt{K}/\sqrt{n})$. Furthermore, state-of-the-art bounds of faster order $\mathcal{O}(K/n)$ are obtained with the covering number assumptions.' volume: 139 URL: https://proceedings.mlr.press/v139/li21k.html PDF: http://proceedings.mlr.press/v139/li21k/li21k.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21k.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shaojie family: Li - given: Yong family: Liu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6392-6402 id: li21k issued: date-parts: - 2021 - 7 - 1 firstpage: 6392 lastpage: 6402 published: 2021-07-01 00:00:00 +0000 - title: 'Provably End-to-end Label-noise Learning without Anchor Points' abstract: 'In label-noise learning, the transition matrix plays a key role in building statistically consistent classifiers. Existing consistent estimators for the transition matrix have been developed by exploiting anchor points. However, the anchor-point assumption is not always satisfied in real scenarios. In this paper, we propose an end-to-end framework for solving label-noise learning without anchor points, in which we simultaneously optimize two objectives: the cross entropy loss between the noisy label and the predicted probability by the neural network, and the volume of the simplex formed by the columns of the transition matrix. Our proposed framework can identify the transition matrix if the clean class-posterior probabilities are sufficiently scattered.
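A rough sketch of the kind of two-term objective described in the abstract above: cross entropy on noisy labels through an estimated transition matrix plus a volume-style penalty on that matrix. The log-determinant surrogate for the simplex volume, the softmax parameterisations, and all sizes are assumptions of this illustration, not the authors' exact formulation.

```python
# Rough sketch (not the authors' exact method): jointly fit class posteriors and a
# noise transition matrix T by minimising cross entropy on noisy labels plus a
# volume-style penalty (a log-determinant surrogate) that keeps T from inflating.
import torch

torch.manual_seed(0)
n, d, c = 200, 5, 3
X = torch.randn(n, d)
noisy_y = torch.randint(0, c, (n,))

W = torch.zeros(d, c, requires_grad=True)          # linear model for clean posteriors
T_logits = torch.zeros(c, c, requires_grad=True)   # row-stochastic transition matrix via softmax
opt = torch.optim.Adam([W, T_logits], lr=0.05)
lam = 0.01

for _ in range(200):
    clean_post = torch.softmax(X @ W, dim=1)        # P(clean class | x)
    T = torch.softmax(T_logits, dim=1)              # P(noisy class | clean class)
    noisy_post = clean_post @ T                     # P(noisy class | x)
    ce = torch.nn.functional.nll_loss(torch.log(noisy_post + 1e-8), noisy_y)
    # volume-style surrogate: log det(T T^T) = 2 log|det T|, clamped for stability
    volume = torch.logdet(T @ T.T + 1e-6 * torch.eye(c)).clamp(min=-20)
    loss = ce + lam * volume
    opt.zero_grad(); loss.backward(); opt.step()

print(torch.softmax(T_logits, dim=1))               # learned transition matrix estimate
```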
This is by far the mildest assumption under which the transition matrix is provably identifiable and the learned classifier is statistically consistent. Experimental results on benchmark datasets demonstrate the effectiveness and robustness of the proposed method.' volume: 139 URL: https://proceedings.mlr.press/v139/li21l.html PDF: http://proceedings.mlr.press/v139/li21l/li21l.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21l.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xuefeng family: Li - given: Tongliang family: Liu - given: Bo family: Han - given: Gang family: Niu - given: Masashi family: Sugiyama editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6403-6413 id: li21l issued: date-parts: - 2021 - 7 - 1 firstpage: 6403 lastpage: 6413 published: 2021-07-01 00:00:00 +0000 - title: 'A Novel Method to Solve Neural Knapsack Problems' abstract: '0-1 knapsack is of fundamental importance across many fields. In this paper, we present a game-theoretic method to solve 0-1 knapsack problems (KPs) where the number of items (products) is large and the values of items are not predetermined but decided by an external value assignment function (e.g., a neural network in our case) during the optimization process. While existing papers are interested in predicting solutions with neural networks for classical KPs whose objective functions are mostly linear functions, we are interested in solving KPs whose objective functions are neural networks. In other words, we choose a subset of items that maximize the sum of the values predicted by neural networks. Its key challenge is how to optimize the neural network-based non-linear KP objective with a budget constraint. Our solution is inspired by game-theoretic approaches in deep learning, e.g., generative adversarial networks. After formally defining our two-player game, we develop an adaptive gradient ascent method to solve it. In our experiments, our method successfully solves two neural network-based non-linear KPs and conventional linear KPs with 1 million items.' volume: 139 URL: https://proceedings.mlr.press/v139/li21m.html PDF: http://proceedings.mlr.press/v139/li21m/li21m.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21m.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Duanshun family: Li - given: Jing family: Liu - given: Dongeun family: Lee - given: Ali family: Seyedmazloom - given: Giridhar family: Kaushik - given: Kookjin family: Lee - given: Noseong family: Park editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6414-6424 id: li21m issued: date-parts: - 2021 - 7 - 1 firstpage: 6414 lastpage: 6424 published: 2021-07-01 00:00:00 +0000 - title: 'Mixed Cross Entropy Loss for Neural Machine Translation' abstract: 'In neural machine translation, Cross Entropy loss (CE) is the standard loss function in two training methods of auto-regressive models, i.e., teacher forcing and scheduled sampling. In this paper, we propose mixed Cross Entropy loss (mixed CE) as a substitute for CE in both training approaches. In teacher forcing, the model trained with CE regards the translation problem as a one-to-one mapping process, while in mixed CE this process can be relaxed to one-to-many. 
In scheduled sampling, we show that mixed CE has the potential to encourage the training and testing behaviours to be similar to each other, more effectively mitigating the exposure bias problem. We demonstrate the superiority of mixed CE over CE on several machine translation datasets, WMT’16 Ro-En, WMT’16 Ru-En, and WMT’14 En-De in both teacher forcing and scheduled sampling setups. Furthermore, in WMT’14 En-De, we also find mixed CE consistently outperforms CE on a multi-reference set as well as a challenging paraphrased reference set. We also found the model trained with mixed CE is able to provide a better probability distribution defined over the translation output space. Our code is available at https://github.com/haorannlp/mix.' volume: 139 URL: https://proceedings.mlr.press/v139/li21n.html PDF: http://proceedings.mlr.press/v139/li21n/li21n.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21n.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Haoran family: Li - given: Wei family: Lu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6425-6436 id: li21n issued: date-parts: - 2021 - 7 - 1 firstpage: 6425 lastpage: 6436 published: 2021-07-01 00:00:00 +0000 - title: 'Training Graph Neural Networks with 1000 Layers' abstract: 'Deep graph neural networks (GNNs) have achieved excellent results on various tasks on increasingly large graph datasets with millions of nodes and edges. However, memory complexity has become a major obstacle when training deep GNNs for practical applications due to the immense number of nodes, edges, and intermediate activations. To improve the scalability of GNNs, prior works propose smart graph sampling or partitioning strategies to train GNNs with a smaller set of nodes or sub-graphs. In this work, we study reversible connections, group convolutions, weight tying, and equilibrium models to advance the memory and parameter efficiency of GNNs. We find that reversible connections in combination with deep network architectures enable the training of overparameterized GNNs that significantly outperform existing methods on multiple datasets. Our models RevGNN-Deep (1001 layers with 80 channels each) and RevGNN-Wide (448 layers with 224 channels each) were both trained on a single commodity GPU and achieve an ROC-AUC of 87.74 $\pm$ 0.13 and 88.14 $\pm$ 0.15 on the ogbn-proteins dataset. To the best of our knowledge, RevGNN-Deep is the deepest GNN in the literature by one order of magnitude.' volume: 139 URL: https://proceedings.mlr.press/v139/li21o.html PDF: http://proceedings.mlr.press/v139/li21o/li21o.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21o.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Guohao family: Li - given: Matthias family: Müller - given: Bernard family: Ghanem - given: Vladlen family: Koltun editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6437-6449 id: li21o issued: date-parts: - 2021 - 7 - 1 firstpage: 6437 lastpage: 6449 published: 2021-07-01 00:00:00 +0000 - title: 'Active Feature Acquisition with Generative Surrogate Models' abstract: 'Many real-world situations allow for the acquisition of additional relevant information when making an assessment with limited or uncertain data. 
However, traditional ML approaches either require all features to be acquired beforehand or regard part of them as missing data that cannot be acquired. In this work, we consider models that perform active feature acquisition (AFA) and query the environment for unobserved features to improve the prediction assessments at evaluation time. Our work reformulates the Markov decision process (MDP) that underlies the AFA problem as a generative modeling task and optimizes a policy via a novel model-based approach. We propose learning a generative surrogate model (GSM) that captures the dependencies among input features to assess potential information gain from acquisitions. The GSM is leveraged to provide intermediate rewards and auxiliary information to help the agent navigate a complicated high-dimensional action space and sparse rewards. Furthermore, we extend AFA to a task we coin active instance recognition (AIR) for the unsupervised case where the target variables are the unobserved features themselves and the goal is to collect information for a particular instance in a cost-efficient way. Empirical results demonstrate that our approach achieves considerably better performance than previous state-of-the-art methods on both supervised and unsupervised tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/li21p.html PDF: http://proceedings.mlr.press/v139/li21p/li21p.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21p.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yang family: Li - given: Junier family: Oliva editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6450-6459 id: li21p issued: date-parts: - 2021 - 7 - 1 firstpage: 6450 lastpage: 6459 published: 2021-07-01 00:00:00 +0000 - title: 'Partially Observed Exchangeable Modeling' abstract: 'Modeling dependencies among features is fundamental for many machine learning tasks. Although there are often multiple related instances that may be leveraged to inform conditional dependencies, typical approaches only model conditional dependencies over individual instances. In this work, we propose a novel framework, partially observed exchangeable modeling (POEx), which takes in a set of related partially observed instances and infers the conditional distribution for the unobserved dimensions over multiple elements. Our approach jointly models the intra-instance (among features in a point) and inter-instance (among multiple points in a set) dependencies in data. POEx is a general framework that encompasses many existing tasks such as point cloud expansion and few-shot generation, as well as new tasks like few-shot imputation. Despite its generality, extensive empirical evaluations show that our model achieves state-of-the-art performance across a range of applications.'
volume: 139 URL: https://proceedings.mlr.press/v139/li21q.html PDF: http://proceedings.mlr.press/v139/li21q/li21q.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21q.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yang family: Li - given: Junier family: Oliva editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6460-6470 id: li21q issued: date-parts: - 2021 - 7 - 1 firstpage: 6460 lastpage: 6470 published: 2021-07-01 00:00:00 +0000 - title: 'Testing DNN-based Autonomous Driving Systems under Critical Environmental Conditions' abstract: 'Due to the increasing usage of Deep Neural Network (DNN) based autonomous driving systems (ADS) where erroneous or unexpected behaviours can lead to catastrophic accidents, testing such systems is of growing importance. Existing approaches often just focus on finding erroneous behaviours and have not thoroughly studied the impact of environmental conditions. In this paper, we propose to test DNN-based ADS under different environmental conditions to identify the critical ones, that is, the environmental conditions under which the ADS are more prone to errors. To tackle the problem of the space of environmental conditions being extremely large, we present a novel approach named TACTIC that employs the search-based method to identify critical environmental conditions generated by an image-to-image translation model. Large-scale experiments show that TACTIC can effectively identify critical environmental conditions and produce realistic testing images, and meanwhile, reveal more erroneous behaviours compared to existing approaches.' volume: 139 URL: https://proceedings.mlr.press/v139/li21r.html PDF: http://proceedings.mlr.press/v139/li21r/li21r.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21r.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zhong family: Li - given: Minxue family: Pan - given: Tian family: Zhang - given: Xuandong family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6471-6482 id: li21r issued: date-parts: - 2021 - 7 - 1 firstpage: 6471 lastpage: 6482 published: 2021-07-01 00:00:00 +0000 - title: 'The Symmetry between Arms and Knapsacks: A Primal-Dual Approach for Bandits with Knapsacks' abstract: 'In this paper, we study the bandits with knapsacks (BwK) problem and develop a primal-dual based algorithm that achieves a problem-dependent logarithmic regret bound. The BwK problem extends the multi-arm bandit (MAB) problem to model the resource consumption, and the existing BwK literature has been mainly focused on deriving asymptotically optimal distribution-free regret bounds. We first study the primal and dual linear programs underlying the BwK problem. From this primal-dual perspective, we discover symmetry between arms and knapsacks, and then propose a new notion of suboptimality measure for the BwK problem. The suboptimality measure highlights the important role of knapsacks in determining algorithm regret and inspires the design of our two-phase algorithm. In the first phase, the algorithm identifies the optimal arms and the binding knapsacks, and in the second phase, it exhausts the binding knapsacks via playing the optimal arms through an adaptive procedure. 
Our regret upper bound involves the proposed suboptimality measure and it has a logarithmic dependence on the length of horizon $T$ and a polynomial dependence on $m$ (the number of arms) and $d$ (the number of knapsacks). To the best of our knowledge, this is the first problem-dependent logarithmic regret bound for solving the general BwK problem.' volume: 139 URL: https://proceedings.mlr.press/v139/li21s.html PDF: http://proceedings.mlr.press/v139/li21s/li21s.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21s.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xiaocheng family: Li - given: Chunlin family: Sun - given: Yinyu family: Ye editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6483-6492 id: li21s issued: date-parts: - 2021 - 7 - 1 firstpage: 6483 lastpage: 6492 published: 2021-07-01 00:00:00 +0000 - title: 'Distributionally Robust Optimization with Markovian Data' abstract: 'We study a stochastic program where the probability distribution of the uncertain problem parameters is unknown and only indirectly observed via finitely many correlated samples generated by an unknown Markov chain with $d$ states. We propose a data-driven distributionally robust optimization model to estimate the problem’s objective function and optimal solution. By leveraging results from large deviations theory, we derive statistical guarantees on the quality of these estimators. The underlying worst-case expectation problem is nonconvex and involves $\mathcal O(d^2)$ decision variables. Thus, it cannot be solved efficiently for large $d$. By exploiting the structure of this problem, we devise a customized Frank-Wolfe algorithm with convex direction-finding subproblems of size $\mathcal O(d)$. We prove that this algorithm finds a stationary point efficiently under mild conditions. The efficiency of the method is predicated on a dimensionality reduction enabled by a dual reformulation. Numerical experiments indicate that our approach has better computational and statistical properties than the state-of-the-art methods.' volume: 139 URL: https://proceedings.mlr.press/v139/li21t.html PDF: http://proceedings.mlr.press/v139/li21t/li21t.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21t.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mengmeng family: Li - given: Tobias family: Sutter - given: Daniel family: Kuhn editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6493-6503 id: li21t issued: date-parts: - 2021 - 7 - 1 firstpage: 6493 lastpage: 6503 published: 2021-07-01 00:00:00 +0000 - title: 'Communication-Efficient Distributed SVD via Local Power Iterations' abstract: 'We study distributed computing of the truncated singular value decomposition (SVD). We develop an algorithm that we call \texttt{LocalPower} for improving communication efficiency. Specifically, we uniformly partition the dataset among $m$ nodes and alternate between multiple (precisely $p$) local power iterations and one global aggregation. In the aggregation, we propose to weight each local eigenvector matrix with orthogonal Procrustes transformation (OPT).
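A single-process toy simulation of the local-power-iteration idea sketched above, using the simpler sign-fixing aggregation rather than a full orthogonal Procrustes transformation; the data split, dimensions, and number of local steps are assumptions for illustration, not the paper's algorithm verbatim.

```python
# Toy simulation of LocalPower-style distributed top-k SVD: each "node" runs p local
# power iterations on its own row block, then the local eigenvector matrices are
# sign-fixed against a reference node and averaged.  Single-process sketch only.
import numpy as np

rng = np.random.default_rng(0)
n, d, k, m, p = 1200, 30, 3, 4, 4
A = rng.normal(size=(n, d)) @ np.diag(2.0 ** -np.arange(d))   # geometrically decaying spectrum
blocks = np.array_split(A, m)                                 # row partition across m nodes

Z = np.linalg.qr(rng.normal(size=(d, k)))[0]        # shared initial orthonormal basis
for _ in range(10):                                  # global communication rounds
    locals_ = []
    for Ai in blocks:                                # each node: p local power iterations
        Zi = Z.copy()
        for _ in range(p):
            Zi = np.linalg.qr(Ai.T @ (Ai @ Zi))[0]
        locals_.append(Zi)
    ref = locals_[0]
    signs = [np.sign(np.sum(Zi * ref, axis=0)) for Zi in locals_]  # sign-fixing vs. reference
    Z = np.linalg.qr(sum(Zi * s for Zi, s in zip(locals_, signs)))[0]

true_V = np.linalg.svd(A, full_matrices=False)[2][:k].T
print(np.abs(np.diag(Z.T @ true_V)))                # close to 1 when the subspaces align
```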
As a practical surrogate of OPT, sign-fixing, which uses a diagonal matrix with $\pm 1$ entries as weights, has better computational complexity and stability in experiments. We theoretically show that under certain assumptions \texttt{LocalPower} lowers the required number of communications by a factor of $p$ to reach a constant accuracy. We also show that the strategy of periodically decaying $p$ helps obtain high-precision solutions. We conduct experiments to demonstrate the effectiveness of \texttt{LocalPower}.' volume: 139 URL: https://proceedings.mlr.press/v139/li21u.html PDF: http://proceedings.mlr.press/v139/li21u/li21u.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21u.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xiang family: Li - given: Shusen family: Wang - given: Kun family: Chen - given: Zhihua family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6504-6514 id: li21u issued: date-parts: - 2021 - 7 - 1 firstpage: 6504 lastpage: 6514 published: 2021-07-01 00:00:00 +0000 - title: 'FILTRA: Rethinking Steerable CNN by Filter Transform' abstract: 'Steerable CNN imposes the prior knowledge of transformation invariance or equivariance in the network architecture to enhance the robustness of the network to geometric transformations of data and to reduce overfitting. Constructing a steerable filter by augmenting a filter with its transformed copies has been an intuitive and widely used technique over the past decades, which we name filter transform in this paper. Recently, the problem of steerable CNN has been studied from the perspective of group representation theory, which reveals the function space structure of a steerable kernel function. However, it is not yet clear how this theory relates to the filter transform technique. In this paper, we show that kernels constructed by filter transform can also be interpreted within group representation theory. This interpretation helps complete the puzzle of steerable CNN theory and provides a novel and simple approach to implementing steerable convolution operators. Experiments are executed on multiple datasets to verify the feasibility of the proposed approach.' volume: 139 URL: https://proceedings.mlr.press/v139/li21v.html PDF: http://proceedings.mlr.press/v139/li21v/li21v.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21v.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Bo family: Li - given: Qili family: Wang - given: Gim Hee family: Lee editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6515-6522 id: li21v issued: date-parts: - 2021 - 7 - 1 firstpage: 6515 lastpage: 6522 published: 2021-07-01 00:00:00 +0000 - title: 'Online Unrelated Machine Load Balancing with Predictions Revisited' abstract: 'We study the online load balancing problem with machine learned predictions, and give results that improve upon and extend those in a recent paper by Lattanzi et al. (2020). First, we design deterministic and randomized online rounding algorithms for the problem in the unrelated machine setting, with $O(\frac{\log m}{\log \log m})$- and $O(\frac{\log \log m}{\log \log \log m})$-competitive ratios.
They respectively improve upon the previous ratios of $O(\log m)$ and $O(\log^3\log m)$, and match the lower bounds given by Lattanzi et al. Second, we extend their prediction scheme from the identical machine restricted assignment setting to the unrelated machine setting. With the knowledge of two vectors over machines, a dual vector and a weight vector, we can construct a good fractional assignment online, which can be passed to an online rounding algorithm. Finally, we consider the learning model introduced by Lavastida et al. (2020), and show that under this model, the two vectors can be learned efficiently with a few samples of instances.' volume: 139 URL: https://proceedings.mlr.press/v139/li21w.html PDF: http://proceedings.mlr.press/v139/li21w/li21w.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21w.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shi family: Li - given: Jiayi family: Xian editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6523-6532 id: li21w issued: date-parts: - 2021 - 7 - 1 firstpage: 6523 lastpage: 6532 published: 2021-07-01 00:00:00 +0000 - title: 'Asymptotic Normality and Confidence Intervals for Prediction Risk of the Min-Norm Least Squares Estimator' abstract: 'This paper quantifies the uncertainty of prediction risk for the min-norm least squares estimator in high-dimensional linear regression models. We establish the asymptotic normality of prediction risk when both the sample size and the number of features tend to infinity. Based on the newly established central limit theorems (CLTs), we derive the confidence intervals of the prediction risk under various scenarios. Our results demonstrate the sample-wise non-monotonicity of the prediction risk and confirm the “more data hurt” phenomenon. Furthermore, the width of confidence intervals indicates that over-parameterization would enlarge the randomness of prediction performance.' volume: 139 URL: https://proceedings.mlr.press/v139/li21x.html PDF: http://proceedings.mlr.press/v139/li21x/li21x.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21x.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zeng family: Li - given: Chuanlong family: Xie - given: Qinwen family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6533-6542 id: li21x issued: date-parts: - 2021 - 7 - 1 firstpage: 6533 lastpage: 6542 published: 2021-07-01 00:00:00 +0000 - title: 'TeraPipe: Token-Level Pipeline Parallelism for Training Large-Scale Language Models' abstract: 'Model parallelism has become a necessity for training modern large-scale deep language models. In this work, we identify a new and orthogonal dimension from existing model parallel approaches: it is possible to perform pipeline parallelism within a single training sequence for Transformer-based language models thanks to their autoregressive property. This enables a more fine-grained pipeline compared with previous work. With this key idea, we design TeraPipe, a high-performance token-level pipeline parallel algorithm for synchronous model-parallel training of Transformer-based language models.
We develop a novel dynamic programming-based algorithm to calculate the optimal pipelining execution scheme given a specific model and cluster configuration. We show that TeraPipe can speed up training by 5.0x for the largest GPT-3 model with 175 billion parameters on an AWS cluster with 48 p3.16xlarge instances compared with state-of-the-art model-parallel methods. The code for reproduction can be found at https://github.com/zhuohan123/terapipe' volume: 139 URL: https://proceedings.mlr.press/v139/li21y.html PDF: http://proceedings.mlr.press/v139/li21y/li21y.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21y.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zhuohan family: Li - given: Siyuan family: Zhuang - given: Shiyuan family: Guo - given: Danyang family: Zhuo - given: Hao family: Zhang - given: Dawn family: Song - given: Ion family: Stoica editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6543-6552 id: li21y issued: date-parts: - 2021 - 7 - 1 firstpage: 6543 lastpage: 6552 published: 2021-07-01 00:00:00 +0000 - title: 'A Second look at Exponential and Cosine Step Sizes: Simplicity, Adaptivity, and Performance' abstract: 'Stochastic Gradient Descent (SGD) is a popular tool in training large-scale machine learning models. Its performance, however, is highly variable, depending crucially on the choice of the step sizes. Accordingly, a variety of strategies for tuning the step sizes have been proposed, ranging from coordinate-wise approaches (a.k.a. “adaptive” step sizes) to sophisticated heuristics that change the step size in each iteration. In this paper, we study two step size schedules whose power has been repeatedly confirmed in practice: the exponential and the cosine step sizes. For the first time, we provide theoretical support for them, proving convergence rates for smooth non-convex functions, with and without the Polyak-Łojasiewicz (PL) condition. Moreover, we show the surprising property that these two strategies are \emph{adaptive} to the noise level in the stochastic gradients of PL functions. That is, contrary to polynomial step sizes, they achieve almost optimal performance without needing to know the noise level or to tune their hyperparameters based on it. Finally, we conduct a fair and comprehensive empirical evaluation on real-world datasets with deep learning architectures. Results show that, even though they require at most two hyperparameters to tune, these two strategies match or exceed the performance of various finely-tuned state-of-the-art strategies.' 
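As an aside on the step-size schedules summarized in the abstract above, the following is a minimal Python sketch of the two schedules under their common textbook parameterizations (exponential decay $\eta_t = \eta_0 \alpha^t$ and cosine annealing $\eta_t = \tfrac{\eta_0}{2}(1 + \cos(\pi t / T))$); the exact constants analyzed in the paper may differ.

```python
import math

def exponential_step_size(eta0: float, alpha: float, t: int) -> float:
    """Exponentially decaying step size: eta_t = eta0 * alpha**t, with 0 < alpha < 1."""
    return eta0 * (alpha ** t)

def cosine_step_size(eta0: float, t: int, T: int) -> float:
    """Cosine-annealed step size: eta_t = eta0/2 * (1 + cos(pi * t / T)) over a horizon of T steps."""
    return 0.5 * eta0 * (1.0 + math.cos(math.pi * t / T))

if __name__ == "__main__":
    # Print both schedules for a short run (illustrative values only).
    T = 10
    for t in range(T):
        print(t, round(exponential_step_size(0.1, 0.9, t), 4), round(cosine_step_size(0.1, t, T), 4))
```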
volume: 139 URL: https://proceedings.mlr.press/v139/li21z.html PDF: http://proceedings.mlr.press/v139/li21z/li21z.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-li21z.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xiaoyu family: Li - given: Zhenxun family: Zhuang - given: Francesco family: Orabona editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6553-6564 id: li21z issued: date-parts: - 2021 - 7 - 1 firstpage: 6553 lastpage: 6564 published: 2021-07-01 00:00:00 +0000 - title: 'Towards Understanding and Mitigating Social Biases in Language Models' abstract: 'As machine learning methods are deployed in real-world settings such as healthcare, legal systems, and social science, it is crucial to recognize how they shape social biases and stereotypes in these sensitive decision-making processes. Among such real-world deployments are large-scale pretrained language models (LMs) that can be potentially dangerous in manifesting undesirable representational biases - harmful biases resulting from stereotyping that propagate negative generalizations involving gender, race, religion, and other social constructs. As a step towards improving the fairness of LMs, we carefully define several sources of representational biases before proposing new benchmarks and metrics to measure them. With these tools, we propose steps towards mitigating social biases during text generation. Our empirical results and human evaluation demonstrate effectiveness in mitigating bias while retaining crucial contextual information for high-fidelity text generation, thereby pushing forward the performance-fairness Pareto frontier.' volume: 139 URL: https://proceedings.mlr.press/v139/liang21a.html PDF: http://proceedings.mlr.press/v139/liang21a/liang21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liang21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Paul Pu family: Liang - given: Chiyu family: Wu - given: Louis-Philippe family: Morency - given: Ruslan family: Salakhutdinov editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6565-6576 id: liang21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6565 lastpage: 6576 published: 2021-07-01 00:00:00 +0000 - title: 'Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability' abstract: 'Knowledge transferability, or transfer learning, has been widely adopted to allow a pre-trained model in the source domain to be effectively adapted to downstream tasks in the target domain. It is thus important to explore and understand the factors affecting knowledge transferability. In this paper, as the first work, we analyze and demonstrate the connections between knowledge transferability and another important phenomenon–adversarial transferability, \emph{i.e.}, adversarial examples generated against one model can be transferred to attack other models. Our theoretical studies show that adversarial transferability indicates knowledge transferability, and vice versa. Moreover, based on the theoretical insights, we propose two practical adversarial transferability metrics to characterize this process, serving as bidirectional indicators between adversarial and knowledge transferability. 
We conduct extensive experiments for different scenarios on diverse datasets, showing a positive correlation between adversarial transferability and knowledge transferability. Our findings will shed light on future research on effective knowledge transfer learning and adversarial transferability analyses.' volume: 139 URL: https://proceedings.mlr.press/v139/liang21b.html PDF: http://proceedings.mlr.press/v139/liang21b/liang21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liang21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kaizhao family: Liang - given: Jacky Y family: Zhang - given: Boxin family: Wang - given: Zhuolin family: Yang - given: Sanmi family: Koyejo - given: Bo family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6577-6587 id: liang21b issued: date-parts: - 2021 - 7 - 1 firstpage: 6577 lastpage: 6587 published: 2021-07-01 00:00:00 +0000 - title: 'Parallel Droplet Control in MEDA Biochips using Multi-Agent Reinforcement Learning' abstract: 'Microfluidic biochips are being utilized for clinical diagnostics, including COVID-19 testing, because they provide sample-to-result turnaround at low cost. Recently, microelectrode-dot-array (MEDA) biochips have been proposed to advance microfluidics technology. A MEDA biochip manipulates droplets of nano/picoliter volumes to automatically execute biochemical protocols. During bioassay execution, droplets are transported in parallel to achieve high-throughput outcomes. However, a major concern associated with the use of MEDA biochips is microelectrode degradation over time. Recent work has shown that formulating droplet transportation as a reinforcement-learning (RL) problem enables the training of policies to capture the underlying health conditions of microelectrodes and ensure reliable fluidic operations. However, the above RL-based approach suffers from two key limitations: 1) it cannot be used for concurrent transportation of multiple droplets; 2) it requires the availability of CCD cameras for monitoring droplet movement. To overcome these problems, we present a multi-agent reinforcement learning (MARL) droplet-routing solution that can be used for various sizes of MEDA biochips with integrated sensors, and we demonstrate the reliable execution of a serial-dilution bioassay with the MARL droplet router on a fabricated MEDA biochip. To facilitate further research, we also present a simulation environment based on the PettingZoo Gym Interface for MARL-guided droplet-routing problems on MEDA biochips.' 
volume: 139 URL: https://proceedings.mlr.press/v139/liang21c.html PDF: http://proceedings.mlr.press/v139/liang21c/liang21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liang21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tung-Che family: Liang - given: Jin family: Zhou - given: Yun-Sheng family: Chan - given: Tsung-Yi family: Ho - given: Krishnendu family: Chakrabarty - given: Cy family: Lee editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6588-6599 id: liang21c issued: date-parts: - 2021 - 7 - 1 firstpage: 6588 lastpage: 6599 published: 2021-07-01 00:00:00 +0000 - title: 'Information Obfuscation of Graph Neural Networks' abstract: 'While the advent of Graph Neural Networks (GNNs) has greatly improved node and graph representation learning in many applications, the neighborhood aggregation scheme exposes additional vulnerabilities to adversaries seeking to extract node-level information about sensitive attributes. In this paper, we study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data. We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance. Our method creates a strong defense against inference attacks, while only suffering small loss in task performance. Theoretically, we analyze the effectiveness of our framework against a worst-case adversary, and characterize an inherent trade-off between maximizing predictive accuracy and minimizing information leakage. Experiments across multiple datasets from recommender systems, knowledge graphs and quantum chemistry demonstrate that the proposed approach provides a robust defense across various graph structures and tasks, while producing competitive GNN encoders for downstream tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/liao21a.html PDF: http://proceedings.mlr.press/v139/liao21a/liao21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liao21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Peiyuan family: Liao - given: Han family: Zhao - given: Keyulu family: Xu - given: Tommi family: Jaakkola - given: Geoffrey J. family: Gordon - given: Stefanie family: Jegelka - given: Ruslan family: Salakhutdinov editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6600-6610 id: liao21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6600 lastpage: 6610 published: 2021-07-01 00:00:00 +0000 - title: 'Guided Exploration with Proximal Policy Optimization using a Single Demonstration' abstract: 'Solving sparse reward tasks through exploration is one of the major challenges in deep reinforcement learning, especially in three-dimensional, partially-observable environments. Critically, the algorithm proposed in this article is capable of using a single human demonstration to solve hard-exploration problems. We train an agent on a combination of demonstrations and own experience to solve problems with variable initial conditions and we integrate it with proximal policy optimization (PPO). 
The agent is also able to increase its performance and to tackle harder problems by replaying its own past trajectories prioritizing them based on the obtained reward and the maximum value of the trajectory. We finally compare variations of this algorithm to different imitation learning algorithms on a set of hard-exploration tasks in the Animal-AI Olympics environment. To the best of our knowledge, learning a task in a three-dimensional environment with comparable difficulty has never been considered before using only one human demonstration.' volume: 139 URL: https://proceedings.mlr.press/v139/libardi21a.html PDF: http://proceedings.mlr.press/v139/libardi21a/libardi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-libardi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Gabriele family: Libardi - given: Gianni family: De Fabritiis - given: Sebastian family: Dittert editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6611-6620 id: libardi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6611 lastpage: 6620 published: 2021-07-01 00:00:00 +0000 - title: 'Debiasing a First-order Heuristic for Approximate Bi-level Optimization' abstract: 'Approximate bi-level optimization (ABLO) consists of (outer-level) optimization problems, involving numerical (inner-level) optimization loops. While ABLO has many applications across deep learning, it suffers from time and memory complexity proportional to the length $r$ of its inner optimization loop. To address this complexity, an earlier first-order method (FOM) was proposed as a heuristic which omits second derivative terms, yielding significant speed gains and requiring only constant memory. Despite FOM’s popularity, there is a lack of theoretical understanding of its convergence properties. We contribute by theoretically characterizing FOM’s gradient bias under mild assumptions. We further demonstrate a rich family of examples where FOM-based SGD does not converge to a stationary point of the ABLO objective. We address this concern by proposing an unbiased FOM (UFOM) enjoying constant memory complexity as a function of $r$. We characterize the introduced time-variance tradeoff, demonstrate convergence bounds, and find an optimal UFOM for a given ABLO problem. Finally, we propose an efficient adaptive UFOM scheme.' volume: 139 URL: https://proceedings.mlr.press/v139/likhosherstov21a.html PDF: http://proceedings.mlr.press/v139/likhosherstov21a/likhosherstov21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-likhosherstov21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Valerii family: Likhosherstov - given: Xingyou family: Song - given: Krzysztof family: Choromanski - given: Jared Q family: Davis - given: Adrian family: Weller editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6621-6630 id: likhosherstov21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6621 lastpage: 6630 published: 2021-07-01 00:00:00 +0000 - title: 'Making transport more robust and interpretable by moving data through a small number of anchor points' abstract: 'Optimal transport (OT) is a widely used technique for distribution alignment, with applications throughout the machine learning, graphics, and vision communities. 
Without any additional structural assumptions on transport, however, OT can be fragile to outliers or noise, especially in high dimensions. Here, we introduce Latent Optimal Transport (LOT), a new approach for OT that simultaneously learns low-dimensional structure in data while leveraging this structure to solve the alignment task. The idea behind our approach is to learn two sets of “anchors” that constrain the flow of transport between a source and target distribution. In both theoretical and empirical studies, we show that LOT regularizes the rank of transport and makes it more robust to outliers and the sampling density. We show that by allowing the source and target to have different anchors, and using LOT to align the latent spaces between anchors, the resulting transport plan has better structural interpretability and highlights connections between both the individual data points and the local geometry of the datasets.' volume: 139 URL: https://proceedings.mlr.press/v139/lin21a.html PDF: http://proceedings.mlr.press/v139/lin21a/lin21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lin21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chi-Heng family: Lin - given: Mehdi family: Azabou - given: Eva family: Dyer editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6631-6641 id: lin21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6631 lastpage: 6641 published: 2021-07-01 00:00:00 +0000 - title: 'Straight to the Gradient: Learning to Use Novel Tokens for Neural Text Generation' abstract: 'Advanced large-scale neural language models have led to significant success in many language generation tasks. However, the most commonly used training objective, Maximum Likelihood Estimation (MLE), has been shown problematic, where the trained model prefers using dull and repetitive phrases. In this work, we introduce ScaleGrad, a modification straight to the gradient of the loss function, to remedy the degeneration issue of the standard MLE objective. By directly maneuvering the gradient information, ScaleGrad makes the model learn to use novel tokens. Empirical results show the effectiveness of our method not only in open-ended generation, but also in directed generation tasks. With the simplicity in architecture, our method can serve as a general training objective that is applicable to most of the neural text generation tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/lin21b.html PDF: http://proceedings.mlr.press/v139/lin21b/lin21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lin21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xiang family: Lin - given: Simeng family: Han - given: Shafiq family: Joty editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6642-6653 id: lin21b issued: date-parts: - 2021 - 7 - 1 firstpage: 6642 lastpage: 6653 published: 2021-07-01 00:00:00 +0000 - title: 'Quasi-global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data' abstract: 'Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks. 
In realistic learning scenarios, the presence of heterogeneity across different clients’ local datasets poses an optimization challenge and may severely deteriorate the generalization performance. In this paper, we investigate and identify the limitations of several decentralized optimization algorithms for different degrees of data heterogeneity. We propose a novel momentum-based method to mitigate this decentralized training difficulty. We show in extensive empirical experiments on various CV/NLP datasets (CIFAR-10, ImageNet, and AG News) and several network topologies (Ring and Social Network) that our method is much more robust to the heterogeneity of clients’ data than other existing methods, achieving a significant improvement in test performance (1%-20%).' volume: 139 URL: https://proceedings.mlr.press/v139/lin21c.html PDF: http://proceedings.mlr.press/v139/lin21c/lin21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lin21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tao family: Lin - given: Sai Praneeth family: Karimireddy - given: Sebastian family: Stich - given: Martin family: Jaggi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6654-6665 id: lin21c issued: date-parts: - 2021 - 7 - 1 firstpage: 6654 lastpage: 6665 published: 2021-07-01 00:00:00 +0000 - title: 'Generative Causal Explanations for Graph Neural Networks' abstract: 'This paper presents {\em Gem}, a model-agnostic approach for providing interpretable explanations for any GNN on various graph learning tasks. Specifically, we formulate the problem of providing explanations for the decisions of GNNs as a causal learning task. Then we train a causal explanation model equipped with a loss function based on Granger causality. Different from existing explainers for GNNs, {\em Gem} explains GNNs on graph-structured data from a causal perspective. It has better generalization ability as it places no requirements on the internal structure of the GNN or prior knowledge of the graph learning tasks. In addition, {\em Gem}, once trained, can be used to explain the target GNN very quickly. Our theoretical analysis shows that several recent explainers fall into a unified framework of {\em additive feature attribution methods}. Experimental results on synthetic and real-world datasets show that {\em Gem} achieves a relative increase in explanation accuracy of up to $30\%$ and speeds up the explanation process by up to $110\times$ as compared to its state-of-the-art alternatives.' 
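As a loose, illustrative companion to the causal-attribution idea summarized in the Gem abstract above (not the authors' trained explainer), one can score each edge of a graph by how much a fixed GNN's loss degrades when that edge is ablated; `gnn_loss` below is a hypothetical placeholder for any such loss evaluator.

```python
from typing import Callable, FrozenSet, List, Tuple

Edge = Tuple[int, int]

def edge_attributions(edges: List[Edge],
                      gnn_loss: Callable[[FrozenSet[Edge]], float]) -> dict:
    """Score each edge by the loss increase observed when it is removed.

    A larger score means the model's prediction depends more on that edge,
    in the Granger-causality spirit of 'does removing the cause hurt prediction?'.
    """
    full = frozenset(edges)
    base_loss = gnn_loss(full)
    return {e: gnn_loss(full - {e}) - base_loss for e in edges}

if __name__ == "__main__":
    # Toy stand-in for a trained GNN: this 'model' only relies on edge (0, 1).
    toy_loss = lambda es: 0.1 if (0, 1) in es else 1.0
    print(edge_attributions([(0, 1), (1, 2), (2, 0)], toy_loss))
```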
volume: 139 URL: https://proceedings.mlr.press/v139/lin21d.html PDF: http://proceedings.mlr.press/v139/lin21d/lin21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lin21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Wanyu family: Lin - given: Hao family: Lan - given: Baochun family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6666-6679 id: lin21d issued: date-parts: - 2021 - 7 - 1 firstpage: 6666 lastpage: 6679 published: 2021-07-01 00:00:00 +0000 - title: 'Tractable structured natural-gradient descent using local parameterizations' abstract: 'Natural-gradient descent (NGD) on structured parameter spaces (e.g., low-rank covariances) is computationally challenging due to difficult Fisher-matrix computations. We address this issue by using \emph{local-parameter coordinates} to obtain a flexible and efficient NGD method that works well for a wide-variety of structured parameterizations. We show four applications where our method (1) generalizes the exponential natural evolutionary strategy, (2) recovers existing Newton-like algorithms, (3) yields new structured second-order algorithms, and (4) gives new algorithms to learn covariances of Gaussian and Wishart-based distributions. We show results on a range of problems from deep learning, variational inference, and evolution strategies. Our work opens a new direction for scalable structured geometric methods.' volume: 139 URL: https://proceedings.mlr.press/v139/lin21e.html PDF: http://proceedings.mlr.press/v139/lin21e/lin21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lin21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Wu family: Lin - given: Frank family: Nielsen - given: Khan Mohammad family: Emtiyaz - given: Mark family: Schmidt editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6680-6691 id: lin21e issued: date-parts: - 2021 - 7 - 1 firstpage: 6680 lastpage: 6691 published: 2021-07-01 00:00:00 +0000 - title: 'Active Learning of Continuous-time Bayesian Networks through Interventions' abstract: 'We consider the problem of learning structures and parameters of Continuous-time Bayesian Networks (CTBNs) from time-course data under minimal experimental resources. In practice, the cost of generating experimental data poses a bottleneck, especially in the natural and social sciences. A popular approach to overcome this is Bayesian optimal experimental design (BOED). However, BOED becomes infeasible in high-dimensional settings, as it involves integration over all possible experimental outcomes. We propose a novel criterion for experimental design based on a variational approximation of the expected information gain. We show that for CTBNs, a semi-analytical expression for this criterion can be calculated for structure and parameter learning. By doing so, we can replace sampling over experimental outcomes by solving the CTBNs master-equation, for which scalable approximations exist. This alleviates the computational burden of sampling possible experimental outcomes in high-dimensions. We employ this framework to recommend interventional sequences. In this context, we extend the CTBN model to conditional CTBNs to incorporate interventions. 
We demonstrate the performance of our criterion on synthetic and real-world data.' volume: 139 URL: https://proceedings.mlr.press/v139/linzner21a.html PDF: http://proceedings.mlr.press/v139/linzner21a/linzner21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-linzner21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dominik family: Linzner - given: Heinz family: Koeppl editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6692-6701 id: linzner21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6692 lastpage: 6701 published: 2021-07-01 00:00:00 +0000 - title: 'Phase Transitions, Distance Functions, and Implicit Neural Representations' abstract: 'Representing surfaces as zero level sets of neural networks recently emerged as a powerful modeling paradigm, named Implicit Neural Representations (INRs), serving numerous downstream applications in geometric deep learning and 3D vision. Training INRs previously required choosing between occupancy and distance function representation and different losses with unknown limit behavior and/or bias. In this paper we draw inspiration from the theory of phase transitions of fluids and suggest a loss for training INRs that learns a density function that converges to a proper occupancy function, while its log transform converges to a distance function. Furthermore, we analyze the limit minimizer of this loss showing it satisfies the reconstruction constraints and has minimal surface perimeter, a desirable inductive bias for surface reconstruction. Training INRs with this new loss leads to state-of-the-art reconstructions on a standard benchmark.' volume: 139 URL: https://proceedings.mlr.press/v139/lipman21a.html PDF: http://proceedings.mlr.press/v139/lipman21a/lipman21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lipman21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yaron family: Lipman editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6702-6712 id: lipman21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6702 lastpage: 6712 published: 2021-07-01 00:00:00 +0000 - title: 'The Earth Mover’s Pinball Loss: Quantiles for Histogram-Valued Regression' abstract: 'Although ubiquitous in the sciences, histogram data have not received much attention by the Deep Learning community. Whilst regression and classification tasks for scalar and vector data are routinely solved by neural networks, a principled approach for estimating histogram labels as a function of an input vector or image is lacking in the literature. We present a dedicated method for Deep Learning-based histogram regression, which incorporates cross-bin information and yields distributions over possible histograms, expressed by $\tau$-quantiles of the cumulative histogram in each bin. The crux of our approach is a new loss function obtained by applying the pinball loss to the cumulative histogram, which for 1D histograms reduces to the Earth Mover’s distance (EMD) in the special case of the median ($\tau = 0.5$), and generalizes it to arbitrary quantiles. We validate our method with an illustrative toy example, a football-related task, and an astrophysical computer vision problem. 
We show that with our loss function, the accuracy of the predicted median histograms is very similar to the standard EMD case (and higher than for per-bin loss functions such as cross-entropy), while the predictions become much more informative at almost no additional computational cost.' volume: 139 URL: https://proceedings.mlr.press/v139/list21a.html PDF: http://proceedings.mlr.press/v139/list21a/list21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-list21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Florian family: List editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6713-6724 id: list21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6713 lastpage: 6724 published: 2021-07-01 00:00:00 +0000 - title: 'Understanding Instance-Level Label Noise: Disparate Impacts and Treatments' abstract: 'This paper aims to provide understandings for the effect of an over-parameterized model, e.g. a deep neural network, memorizing instance-dependent noisy labels. We first quantify the harms caused by memorizing noisy instances, and show the disparate impacts of noisy labels for sample instances with different representation frequencies. We then analyze how several popular solutions for learning with noisy labels mitigate this harm at the instance level. Our analysis reveals that existing approaches lead to disparate treatments when handling noisy instances. While higher-frequency instances often enjoy a high probability of an improvement by applying these solutions, lower-frequency instances do not. Our analysis reveals new understandings for when these approaches work, and provides theoretical justifications for previously reported empirical observations. This observation requires us to rethink the distribution of label noise across instances and calls for different treatments for instances in different regimes.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21a.html PDF: http://proceedings.mlr.press/v139/liu21a/liu21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yang family: Liu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6725-6735 id: liu21a issued: date-parts: - 2021 - 7 - 1 firstpage: 6725 lastpage: 6735 published: 2021-07-01 00:00:00 +0000 - title: 'APS: Active Pretraining with Successor Features' abstract: 'We introduce a new unsupervised pretraining objective for reinforcement learning. During the unsupervised reward-free pretraining phase, the agent maximizes mutual information between tasks and states induced by the policy. Our key contribution is a novel lower bound of this intractable quantity. We show that by reinterpreting and combining variational successor features \citep{Hansen2020Fast} with nonparametric entropy maximization \citep{liu2021behavior}, the intractable mutual information can be efficiently optimized. The proposed method Active Pretraining with Successor Feature (APS) explores the environment via nonparametric entropy maximization, and the explored data can be efficiently leveraged to learn behavior by variational successor features. 
APS addresses the limitations of existing mutual information maximization-based and entropy maximization-based unsupervised RL, and combines the best of both worlds. When evaluated on the Atari 100k data-efficiency benchmark, our approach significantly outperforms previous methods combining unsupervised pretraining with task-specific finetuning.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21b.html PDF: http://proceedings.mlr.press/v139/liu21b/liu21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hao family: Liu - given: Pieter family: Abbeel editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6736-6747 id: liu21b issued: date-parts: - 2021 - 7 - 1 firstpage: 6736 lastpage: 6747 published: 2021-07-01 00:00:00 +0000 - title: 'Learning by Turning: Neural Architecture Aware Optimisation' abstract: 'Descent methods for deep networks are notoriously capricious: they require careful tuning of step size, momentum and weight decay, and which method will work best on a new benchmark is a priori unclear. To address this problem, this paper conducts a combined study of neural architecture and optimisation, leading to a new optimiser called Nero: the neuronal rotator. Nero trains reliably without momentum or weight decay, works in situations where Adam and SGD fail, and requires little to no learning rate tuning. Also, Nero’s memory footprint is the square root of that of Adam or LAMB. Nero combines two ideas: (1) projected gradient descent over the space of balanced networks; (2) neuron-specific updates, where the step size sets the angle through which each neuron’s hyperplane turns. The paper concludes by discussing how this geometric connection between architecture and optimisation may impact theories of generalisation in deep learning.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21c.html PDF: http://proceedings.mlr.press/v139/liu21c/liu21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yang family: Liu - given: Jeremy family: Bernstein - given: Markus family: Meister - given: Yisong family: Yue editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6748-6758 id: liu21c issued: date-parts: - 2021 - 7 - 1 firstpage: 6748 lastpage: 6758 published: 2021-07-01 00:00:00 +0000 - title: 'Dynamic Game Theoretic Neural Optimizer' abstract: 'The connection between training deep neural networks (DNNs) and optimal control theory (OCT) has attracted considerable attention as a principled tool of algorithmic design. Despite a few attempts, existing approaches have been limited to architectures where the layer propagation resembles a Markovian dynamical system. This casts doubt on their flexibility for modern networks that heavily rely on non-Markovian dependencies between layers (e.g. skip connections in residual networks). In this work, we propose a novel dynamic game perspective by viewing each layer as a player in a dynamic game characterized by the DNN itself. Through this lens, different classes of optimizers can be seen as matching different types of Nash equilibria, depending on the implicit information structure of each (p)layer. 
The resulting method, called Dynamic Game Theoretic Neural Optimizer (DGNOpt), not only generalizes OCT-inspired optimizers to a richer class of networks; it also motivates a new training principle by solving a multi-player cooperative game. DGNOpt shows convergence improvements over existing methods on image classification datasets with residual and inception networks. Our work marries strengths from both OCT and game theory, paving the way to new algorithmic opportunities from robust optimal control and bandit-based optimization.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21d.html PDF: http://proceedings.mlr.press/v139/liu21d/liu21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Guan-Horng family: Liu - given: Tianrong family: Chen - given: Evangelos family: Theodorou editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6759-6769 id: liu21d issued: date-parts: - 2021 - 7 - 1 firstpage: 6759 lastpage: 6769 published: 2021-07-01 00:00:00 +0000 - title: 'Besov Function Approximation and Binary Classification on Low-Dimensional Manifolds Using Convolutional Residual Networks' abstract: 'Most existing statistical theories on deep neural networks have sample complexities cursed by the data dimension and therefore cannot adequately explain the empirical success of deep learning on high-dimensional data. To bridge this gap, we propose to exploit the low-dimensional structures of real-world datasets and establish theoretical guarantees of convolutional residual networks (ConvResNet) in terms of function approximation and statistical recovery for the binary classification problem. Specifically, given the data lying on a $d$-dimensional manifold isometrically embedded in $\mathbb{R}^D$, we prove that if the network architecture is properly chosen, ConvResNets can (1) approximate {\it Besov functions} on manifolds with arbitrary accuracy, and (2) learn a classifier by minimizing the empirical logistic risk, which gives an {\it excess risk} in the order of $n^{-\frac{s}{2s+2(s\vee d)}}$, where $s$ is a smoothness parameter. This implies that the sample complexity depends on the intrinsic dimension $d$, instead of the data dimension $D$. Our results demonstrate that ConvResNets are adaptive to low-dimensional structures of data sets.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21e.html PDF: http://proceedings.mlr.press/v139/liu21e/liu21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hao family: Liu - given: Minshuo family: Chen - given: Tuo family: Zhao - given: Wenjing family: Liao editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6770-6780 id: liu21e issued: date-parts: - 2021 - 7 - 1 firstpage: 6770 lastpage: 6780 published: 2021-07-01 00:00:00 +0000 - title: 'Just Train Twice: Improving Group Robustness without Training Group Information' abstract: 'Standard training via empirical risk minimization (ERM) can produce models that achieve low error on average but high error on minority groups, especially in the presence of spurious correlations between the input and label. 
Prior approaches to this problem, like group distributionally robust optimization (group DRO), generally require group annotations for every training point. On the other hand, approaches that do not use group annotations generally do not improve minority performance. For example, we find that joint DRO, which dynamically upweights examples with high training loss, tends to optimize for examples that are irrelevant to the specific groups we seek to do well on. In this paper, we propose a simple two-stage approach, JTT, that achieves comparable performance to group DRO while only requiring group annotations on a significantly smaller validation set. JTT first attempts to identify informative training examples, which are often minority examples, by training an initial ERM classifier and selecting the examples with high training loss. Then, it trains a final classifier by upsampling the selected examples. Crucially, unlike joint DRO, JTT does not iteratively upsample examples that have high loss under the final classifier. On four image classification and natural language processing tasks with spurious correlations, we show that JTT closes 85% of the gap in accuracy on the worst group between ERM and group DRO.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21f.html PDF: http://proceedings.mlr.press/v139/liu21f/liu21f.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21f.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Evan Z family: Liu - given: Behzad family: Haghgoo - given: Annie S family: Chen - given: Aditi family: Raghunathan - given: Pang Wei family: Koh - given: Shiori family: Sagawa - given: Percy family: Liang - given: Chelsea family: Finn editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6781-6792 id: liu21f issued: date-parts: - 2021 - 7 - 1 firstpage: 6781 lastpage: 6792 published: 2021-07-01 00:00:00 +0000 - title: 'Event Outlier Detection in Continuous Time' abstract: 'Continuous-time event sequences represent discrete events occurring in continuous time. Such sequences arise frequently in real-life. Usually we expect the sequences to follow some regular pattern over time. However, sometimes these patterns may be interrupted by unexpected absence or occurrences of events. Identification of these unexpected cases can be very important as they may point to abnormal situations that need human attention. In this work, we study and develop methods for detecting outliers in continuous-time event sequences, including unexpected absence and unexpected occurrences of events. Since the patterns that event sequences tend to follow may change in different contexts, we develop outlier detection methods based on point processes that can take context information into account. Our methods are based on Bayesian decision theory and hypothesis testing with theoretical guarantees. To test the performance of the methods, we conduct experiments on both synthetic data and real-world clinical data and show the effectiveness of the proposed methods.' 
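To make the event-outlier idea in the abstract above concrete in the simplest possible setting, the sketch below flags "unexpected absences" under a plain homogeneous Poisson process, where the probability of seeing no events in a gap of length $d$ is $e^{-\lambda d}$; this is only an illustration of the general principle, not the context-aware point-process tests developed in the paper.

```python
import math
from typing import List, Tuple

def absence_outliers(event_times: List[float], rate: float, alpha: float = 0.01) -> List[Tuple[float, float]]:
    """Flag inter-event gaps that are improbably long under a Poisson process.

    Under a homogeneous Poisson process with intensity `rate`, the probability of
    observing no events during a gap of length d is exp(-rate * d). Gaps whose
    no-event probability drops below `alpha` are returned as (start, end) outliers.
    """
    outliers = []
    for start, end in zip(event_times, event_times[1:]):
        gap = end - start
        if math.exp(-rate * gap) < alpha:
            outliers.append((start, end))
    return outliers

if __name__ == "__main__":
    # Roughly one event per time unit, with one suspiciously long silent period.
    times = [0.0, 1.1, 2.0, 2.9, 9.5, 10.4]
    print(absence_outliers(times, rate=1.0))  # expect the (2.9, 9.5) gap to be flagged
```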
volume: 139 URL: https://proceedings.mlr.press/v139/liu21g.html PDF: http://proceedings.mlr.press/v139/liu21g/liu21g.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21g.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Siqi family: Liu - given: Milos family: Hauskrecht editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6793-6803 id: liu21g issued: date-parts: - 2021 - 7 - 1 firstpage: 6793 lastpage: 6803 published: 2021-07-01 00:00:00 +0000 - title: 'Heterogeneous Risk Minimization' abstract: 'Machine learning algorithms with empirical risk minimization usually suffer from poor generalization performance due to the greedy exploitation of correlations among the training data, which are not stable under distributional shifts. Recently, some invariant learning methods for out-of-distribution (OOD) generalization have been proposed by leveraging multiple training environments to find invariant relationships. However, modern datasets are frequently assembled by merging data from multiple sources without explicit source labels. The resultant unobserved heterogeneity renders many invariant learning methods inapplicable. In this paper, we propose Heterogeneous Risk Minimization (HRM) framework to achieve joint learning of latent heterogeneity among the data and invariant relationship, which leads to stable prediction despite distributional shifts. We theoretically characterize the roles of the environment labels in invariant learning and justify our newly proposed HRM framework. Extensive experimental results validate the effectiveness of our HRM framework.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21h.html PDF: http://proceedings.mlr.press/v139/liu21h/liu21h.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21h.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jiashuo family: Liu - given: Zheyuan family: Hu - given: Peng family: Cui - given: Bo family: Li - given: Zheyan family: Shen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6804-6814 id: liu21h issued: date-parts: - 2021 - 7 - 1 firstpage: 6804 lastpage: 6814 published: 2021-07-01 00:00:00 +0000 - title: 'Stochastic Iterative Graph Matching' abstract: 'Recent works apply Graph Neural Networks (GNNs) to graph matching tasks and show promising results. Considering that model outputs are complex matchings, we devise several techniques to improve the learning of GNNs and obtain a new model, Stochastic Iterative Graph MAtching (SIGMA). Our model predicts a distribution of matchings, instead of a single matching, for a graph pair so the model can explore several probable matchings. We further introduce a novel multi-step matching procedure, which learns how to refine a graph pair’s matching results incrementally. The model also includes dummy nodes so that the model does not have to find matchings for nodes without correspondence. We fit this model to data via scalable stochastic optimization. We conduct extensive experiments across synthetic graph datasets as well as biochemistry and computer vision applications. Across all tasks, our results show that SIGMA can produce significantly improved graph matching results compared to state-of-the-art models. 
Ablation studies verify that each of our components (stochastic training, iterative matching, and dummy nodes) offers noticeable improvement.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21i.html PDF: http://proceedings.mlr.press/v139/liu21i/liu21i.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21i.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Linfeng family: Liu - given: Michael C family: Hughes - given: Soha family: Hassoun - given: Liping family: Liu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6815-6825 id: liu21i issued: date-parts: - 2021 - 7 - 1 firstpage: 6815 lastpage: 6825 published: 2021-07-01 00:00:00 +0000 - title: 'Cooperative Exploration for Multi-Agent Deep Reinforcement Learning' abstract: 'Exploration is critical for good results in deep reinforcement learning and has attracted much attention. However, existing multi-agent deep reinforcement learning algorithms still use mostly noise-based techniques. Very recently, exploration methods that consider cooperation among multiple agents have been developed. However, existing methods suffer from a common challenge: agents struggle to identify states that are worth exploring, and hardly coordinate exploration efforts toward those states. To address this shortcoming, in this paper, we propose cooperative multi-agent exploration (CMAE): agents share a common goal while exploring. The goal is selected from multiple projected state spaces by a normalized entropy-based technique. Then, agents are trained to reach the goal in a coordinated manner. We demonstrate that CMAE consistently outperforms baselines on various tasks, including a sparse-reward version of multiple-particle environment (MPE) and the Starcraft multi-agent challenge (SMAC).' volume: 139 URL: https://proceedings.mlr.press/v139/liu21j.html PDF: http://proceedings.mlr.press/v139/liu21j/liu21j.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21j.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Iou-Jen family: Liu - given: Unnat family: Jain - given: Raymond A family: Yeh - given: Alexander family: Schwing editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6826-6836 id: liu21j issued: date-parts: - 2021 - 7 - 1 firstpage: 6826 lastpage: 6836 published: 2021-07-01 00:00:00 +0000 - title: 'Elastic Graph Neural Networks' abstract: 'While many existing graph neural networks (GNNs) have been proven to perform $\ell_2$-based graph smoothing that enforces smoothness globally, in this work we aim to further enhance the local smoothness adaptivity of GNNs via $\ell_1$-based graph smoothing. As a result, we introduce a family of GNNs (Elastic GNNs) based on $\ell_1$ and $\ell_2$-based graph smoothing. In particular, we propose a novel and general message passing scheme into GNNs. This message passing algorithm is not only friendly to back-propagation training but also achieves the desired smoothing properties with a theoretical convergence guarantee. Experiments on semi-supervised learning tasks demonstrate that the proposed Elastic GNNs obtain better adaptivity on benchmark datasets and are significantly robust to graph adversarial attacks. 
The implementation of Elastic GNNs is available at \url{https://github.com/lxiaorui/ElasticGNN}.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21k.html PDF: http://proceedings.mlr.press/v139/liu21k/liu21k.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21k.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xiaorui family: Liu - given: Wei family: Jin - given: Yao family: Ma - given: Yaxin family: Li - given: Hua family: Liu - given: Yiqi family: Wang - given: Ming family: Yan - given: Jiliang family: Tang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6837-6849 id: liu21k issued: date-parts: - 2021 - 7 - 1 firstpage: 6837 lastpage: 6849 published: 2021-07-01 00:00:00 +0000 - title: 'One Pass Late Fusion Multi-view Clustering' abstract: 'Existing late fusion multi-view clustering (LFMVC) optimally integrates a group of pre-specified base partition matrices to learn a consensus one. It is then taken as the input of the widely used k-means to generate the cluster labels. As observed, the learning of the consensus partition matrix and the generation of cluster labels are separately done. These two procedures lack necessary negotiation and can not best serve for each other, which may adversely affect the clustering performance. To address this issue, we propose to unify the aforementioned two learning procedures into a single optimization, in which the consensus partition matrix can better serve for the generation of cluster labels, and the latter is able to guide the learning of the former. To optimize the resultant optimization problem, we develop a four-step alternate algorithm with proved convergence. We theoretically analyze the clustering generalization error of the proposed algorithm on unseen data. Comprehensive experiments on multiple benchmark datasets demonstrate the superiority of our algorithm in terms of both clustering accuracy and computational efficiency. It is expected that the simplicity and effectiveness of our algorithm will make it a good option to be considered for practical multi-view clustering applications.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21l.html PDF: http://proceedings.mlr.press/v139/liu21l/liu21l.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21l.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xinwang family: Liu - given: Li family: Liu - given: Qing family: Liao - given: Siwei family: Wang - given: Yi family: Zhang - given: Wenxuan family: Tu - given: Chang family: Tang - given: Jiyuan family: Liu - given: En family: Zhu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6850-6859 id: liu21l issued: date-parts: - 2021 - 7 - 1 firstpage: 6850 lastpage: 6859 published: 2021-07-01 00:00:00 +0000 - title: 'Coach-Player Multi-agent Reinforcement Learning for Dynamic Team Composition' abstract: 'In real-world multi-agent systems, agents with different capabilities may join or leave without altering the team’s overarching goals. Coordinating teams with such dynamic composition is challenging: the optimal team strategy varies with the composition. We propose COPA, a coach-player framework to tackle this problem. 
We assume the coach has a global view of the environment and coordinates the players, who only have partial views, by distributing individual strategies. Specifically, we 1) adopt the attention mechanism for both the coach and the players; 2) propose a variational objective to regularize learning; and 3) design an adaptive communication method to let the coach decide when to communicate with the players. We validate our methods on a resource collection task, a rescue game, and the StarCraft micromanagement tasks. We demonstrate zero-shot generalization to new team compositions. Our method achieves comparable or better performance than the setting where all players have a full view of the environment. Moreover, we see that the performance remains high even when the coach communicates as little as 13% of the time using the adaptive communication strategy.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21m.html PDF: http://proceedings.mlr.press/v139/liu21m/liu21m.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21m.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Bo family: Liu - given: Qiang family: Liu - given: Peter family: Stone - given: Animesh family: Garg - given: Yuke family: Zhu - given: Anima family: Anandkumar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6860-6870 id: liu21m issued: date-parts: - 2021 - 7 - 1 firstpage: 6860 lastpage: 6870 published: 2021-07-01 00:00:00 +0000 - title: 'From Local to Global Norm Emergence: Dissolving Self-reinforcing Substructures with Incremental Social Instruments' abstract: 'Norm emergence is a process where agents in a multi-agent system establish self-enforcing conformity through repeated interactions. When such interactions are confined to a social topology, several self-reinforcing substructures (SRS) may emerge within the population. This prevents a formation of a global norm. We propose incremental social instruments (ISI) to dissolve these SRSs by creating ties between agents. Establishing ties requires some effort and cost. Hence, it is worth to design methods that build a small number of ties yet dissolve the SRSs. By using the notion of information entropy, we propose an indicator called the BA-ratio that measures the current SRSs. We find that by building ties with minimal BA-ratio, our ISI is effective in facilitating the global norm emergence. We explain this through our experiments and theoretical results. Furthermore, we propose the small-degree principle in minimising the BA-ratio that helps us to design efficient ISI algorithms for finding the optimal ties. Experiments on both synthetic and real-world network topologies demonstrate that our adaptive ISI is efficient at dissolving SRS.' 
volume: 139 URL: https://proceedings.mlr.press/v139/liu21n.html PDF: http://proceedings.mlr.press/v139/liu21n/liu21n.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21n.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yiwei family: Liu - given: Jiamou family: Liu - given: Kaibin family: Wan - given: Zhan family: Qin - given: Zijian family: Zhang - given: Bakhadyr family: Khoussainov - given: Liehuang family: Zhu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6871-6881 id: liu21n issued: date-parts: - 2021 - 7 - 1 firstpage: 6871 lastpage: 6881 published: 2021-07-01 00:00:00 +0000 - title: 'A Value-Function-based Interior-point Method for Non-convex Bi-level Optimization' abstract: 'Bi-level optimization model is able to capture a wide range of complex learning tasks with practical interest. Due to the witnessed efficiency in solving bi-level programs, gradient-based methods have gained popularity in the machine learning community. In this work, we propose a new gradient-based solution scheme, namely, the Bi-level Value-Function-based Interior-point Method (BVFIM). Following the main idea of the log-barrier interior-point scheme, we penalize the regularized value function of the lower level problem into the upper level objective. By further solving a sequence of differentiable unconstrained approximation problems, we consequently derive a sequential programming scheme. The numerical advantage of our scheme relies on the fact that, when gradient methods are applied to solve the approximation problem, we successfully avoid computing any expensive Hessian-vector or Jacobian-vector product. We prove the convergence without requiring any convexity assumption on either the upper level or the lower level objective. Experiments demonstrate the efficiency of the proposed BVFIM on non-convex bi-level problems.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21o.html PDF: http://proceedings.mlr.press/v139/liu21o/liu21o.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21o.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Risheng family: Liu - given: Xuan family: Liu - given: Xiaoming family: Yuan - given: Shangzhi family: Zeng - given: Jin family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6882-6892 id: liu21o issued: date-parts: - 2021 - 7 - 1 firstpage: 6882 lastpage: 6892 published: 2021-07-01 00:00:00 +0000 - title: 'Selfish Sparse RNN Training' abstract: 'Sparse neural networks have been widely applied to reduce the computational demands of training and deploying over-parameterized deep neural networks. For inference acceleration, methods that discover a sparse network from a pre-trained dense network (dense-to-sparse training) work effectively. Recently, dynamic sparse training (DST) has been proposed to train sparse neural networks without pre-training a dense model (sparse-to-sparse training), so that the training process can also be accelerated. However, previous sparse-to-sparse methods mainly focus on Multilayer Perceptron Networks (MLPs) and Convolutional Neural Networks (CNNs), failing to match the performance of dense-to-sparse methods in the Recurrent Neural Networks (RNNs) setting. 
In this paper, we propose an approach to train intrinsically sparse RNNs with a fixed parameter count in one single run, without compromising performance. During training, we allow RNN layers to have a non-uniform redistribution across cell gates for better regularization. Further, we propose SNT-ASGD, a novel variant of the averaged stochastic gradient optimizer, which significantly improves the performance of all sparse training methods for RNNs. Using these strategies, we achieve state-of-the-art sparse training results, better than the dense-to-sparse methods, with various types of RNNs on Penn TreeBank and Wikitext-2 datasets. Our codes are available at https://github.com/Shiweiliuiiiiiii/Selfish-RNN.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21p.html PDF: http://proceedings.mlr.press/v139/liu21p/liu21p.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21p.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shiwei family: Liu - given: Decebal Constantin family: Mocanu - given: Yulong family: Pei - given: Mykola family: Pechenizkiy editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6893-6904 id: liu21p issued: date-parts: - 2021 - 7 - 1 firstpage: 6893 lastpage: 6904 published: 2021-07-01 00:00:00 +0000 - title: 'Temporal Difference Learning as Gradient Splitting' abstract: 'Temporal difference learning with linear function approximation is a popular method to obtain a low-dimensional approximation of the value function of a policy in a Markov Decision Process. We provide an interpretation of this method in terms of a splitting of the gradient of an appropriately chosen function. As a consequence of this interpretation, convergence proofs for gradient descent can be applied almost verbatim to temporal difference learning. Beyond giving a fuller explanation of why temporal difference works, this interpretation also yields improved convergence times. We consider the setting with $1/\sqrt{T}$ step-size, where previous comparable finite-time convergence time bounds for temporal difference learning had the multiplicative factor $1/(1-\gamma)$ in front of the bound, with $\gamma$ being the discount factor. We show that a minor variation on TD learning which estimates the mean of the value function separately has a convergence time where $1/(1-\gamma)$ only multiplies an asymptotically negligible term.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21q.html PDF: http://proceedings.mlr.press/v139/liu21q/liu21q.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21q.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Rui family: Liu - given: Alex family: Olshevsky editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6905-6913 id: liu21q issued: date-parts: - 2021 - 7 - 1 firstpage: 6905 lastpage: 6913 published: 2021-07-01 00:00:00 +0000 - title: 'On Robust Mean Estimation under Coordinate-level Corruption' abstract: 'We study the problem of robust mean estimation and introduce a novel Hamming distance-based measure of distribution shift for coordinate-level corruptions. 
We show that this measure yields adversary models that capture more realistic corruptions than those used in prior works, and present an information-theoretic analysis of robust mean estimation in these settings. We show that for structured distributions, methods that leverage the structure yield information theoretically more accurate mean estimation. We also focus on practical algorithms for robust mean estimation and study when data cleaning-inspired approaches that first fix corruptions in the input data and then perform robust mean estimation can match the information theoretic bounds of our analysis. We finally demonstrate experimentally that this two-step approach outperforms structure-agnostic robust estimation and provides accurate mean estimation even for high-magnitude corruption.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21r.html PDF: http://proceedings.mlr.press/v139/liu21r/liu21r.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21r.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zifan family: Liu - given: Jong Ho family: Park - given: Theodoros family: Rekatsinas - given: Christos family: Tzamos editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6914-6924 id: liu21r issued: date-parts: - 2021 - 7 - 1 firstpage: 6914 lastpage: 6924 published: 2021-07-01 00:00:00 +0000 - title: 'Decoupling Exploration and Exploitation for Meta-Reinforcement Learning without Sacrifices' abstract: 'The goal of meta-reinforcement learning (meta-RL) is to build agents that can quickly learn new tasks by leveraging prior experience on related tasks. Learning a new task often requires both exploring to gather task-relevant information and exploiting this information to solve the task. In principle, optimal exploration and exploitation can be learned end-to-end by simply maximizing task performance. However, such meta-RL approaches struggle with local optima due to a chicken-and-egg problem: learning to explore requires good exploitation to gauge the exploration’s utility, but learning to exploit requires information gathered via exploration. Optimizing separate objectives for exploration and exploitation can avoid this problem, but prior meta-RL exploration objectives yield suboptimal policies that gather information irrelevant to the task. We alleviate both concerns by constructing an exploitation objective that automatically identifies task-relevant information and an exploration objective to recover only this information. This avoids local optima in end-to-end training, without sacrificing optimal exploration. Empirically, DREAM substantially outperforms existing approaches on complex meta-RL problems, such as sparse-reward 3D visual navigation. 
Videos of DREAM: https://ezliu.github.io/dream/' volume: 139 URL: https://proceedings.mlr.press/v139/liu21s.html PDF: http://proceedings.mlr.press/v139/liu21s/liu21s.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21s.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Evan Z family: Liu - given: Aditi family: Raghunathan - given: Percy family: Liang - given: Chelsea family: Finn editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6925-6935 id: liu21s issued: date-parts: - 2021 - 7 - 1 firstpage: 6925 lastpage: 6935 published: 2021-07-01 00:00:00 +0000 - title: 'How Do Adam and Training Strategies Help BNNs Optimization' abstract: 'The best performing Binary Neural Networks (BNNs) are usually attained using Adam optimization and its multi-step training variants. However, to the best of our knowledge, few studies explore the fundamental reasons why Adam is superior to other optimizers like SGD for BNN optimization or provide analytical explanations that support specific training strategies. To address this, in this paper we first investigate the trajectories of gradients and weights in BNNs during the training process. We show the regularization effect of second-order momentum in Adam is crucial to revitalize the weights that are dead due to the activation saturation in BNNs. We find that Adam, through its adaptive learning rate strategy, is better equipped to handle the rugged loss surface of BNNs and reaches a better optimum with higher generalization ability. Furthermore, we inspect the intriguing role of the real-valued weights in binary networks, and reveal the effect of weight decay on the stability and sluggishness of BNN optimization. Through extensive experiments and analysis, we derive a simple training scheme, building on existing Adam-based optimization, which achieves 70.5% top-1 accuracy on the ImageNet dataset using the same architecture as the state-of-the-art ReActNet while achieving 1.1% higher accuracy. Code and models are available at https://github.com/liuzechun/AdamBNN.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21t.html PDF: http://proceedings.mlr.press/v139/liu21t/liu21t.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21t.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zechun family: Liu - given: Zhiqiang family: Shen - given: Shichao family: Li - given: Koen family: Helwegen - given: Dong family: Huang - given: Kwang-Ting family: Cheng editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6936-6946 id: liu21t issued: date-parts: - 2021 - 7 - 1 firstpage: 6936 lastpage: 6946 published: 2021-07-01 00:00:00 +0000 - title: 'SagaNet: A Small Sample Gated Network for Pediatric Cancer Diagnosis' abstract: 'The scarcity of available samples and the high annotation cost of medical data cause a bottleneck in many digital diagnosis tasks based on deep learning. This problem is especially severe in pediatric tumor tasks, due to the small population base of children and high sample diversity caused by the high metastasis rate of related tumors. Targeted research on pediatric tumors is urgently needed but lacks sufficient attention. 
In this work, we propose a novel model to solve the diagnosis task of small round blue cell tumors (SRBCTs). To solve the problem of high noise and high diversity in the small sample scenario, the model is constrained to pay attention to the valid areas in the pathological image with a masking mechanism, and a length-aware loss is proposed to improve the tolerance to feature diversity. We evaluate this framework on a challenging small sample SRBCTs dataset, whose classification is difficult even for professional pathologists. The proposed model shows the best performance compared with state-of-the-art deep models and generalizes well to another pathological dataset, which illustrates the potential of deep learning applications in difficult small-sample medical tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21u.html PDF: http://proceedings.mlr.press/v139/liu21u/liu21u.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21u.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yuhan family: Liu - given: Shiliang family: Sun editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6947-6956 id: liu21u issued: date-parts: - 2021 - 7 - 1 firstpage: 6947 lastpage: 6956 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Deep Neural Networks under Agnostic Corrupted Supervision' abstract: 'Training deep neural network models in the presence of corrupted supervision is challenging as the corrupted data points may significantly impact generalization performance. To alleviate this problem, we present an efficient robust algorithm that achieves strong guarantees without any assumption on the type of corruption and provides a unified framework for both classification and regression problems. Unlike many existing approaches that quantify the quality of the data points (e.g., based on their individual loss values) and filter them accordingly, the proposed algorithm focuses on controlling the collective impact of data points on the average gradient. Even when a corrupted data point fails to be excluded by our algorithm, it has a very limited impact on the overall loss, compared with state-of-the-art filtering methods based on loss values. Extensive experiments on multiple benchmark datasets have demonstrated the robustness of our algorithm under different types of corruption. Our code is available at \url{https://github.com/illidanlab/PRL}.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21v.html PDF: http://proceedings.mlr.press/v139/liu21v/liu21v.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21v.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Boyang family: Liu - given: Mengying family: Sun - given: Ding family: Wang - given: Pang-Ning family: Tan - given: Jiayu family: Zhou editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6957-6967 id: liu21v issued: date-parts: - 2021 - 7 - 1 firstpage: 6957 lastpage: 6967 published: 2021-07-01 00:00:00 +0000 - title: 'Leveraging Public Data for Practical Private Query Release' abstract: 'In many statistical problems, incorporating priors can significantly improve performance. 
However, the use of prior knowledge in differentially private query release has remained underexplored, despite such priors commonly being available in the form of public datasets, such as previous US Census releases. With the goal of releasing statistics about a private dataset, we present PMW^Pub, which—unlike existing baselines—leverages public data drawn from a related distribution as prior information. We provide a theoretical analysis and an empirical evaluation on the American Community Survey (ACS) and ADULT datasets, which shows that our method outperforms state-of-the-art methods. Furthermore, PMW^Pub scales well to high-dimensional data domains, where running many existing methods would be computationally infeasible.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21w.html PDF: http://proceedings.mlr.press/v139/liu21w/liu21w.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21w.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Terrance family: Liu - given: Giuseppe family: Vietri - given: Thomas family: Steinke - given: Jonathan family: Ullman - given: Steven family: Wu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6968-6977 id: liu21w issued: date-parts: - 2021 - 7 - 1 firstpage: 6968 lastpage: 6977 published: 2021-07-01 00:00:00 +0000 - title: 'Watermarking Deep Neural Networks with Greedy Residuals' abstract: 'Deep neural networks (DNNs) are considered as intellectual property of their corresponding owners and thus are in urgent need of ownership protection, due to the massive amount of time and resources invested in designing, tuning and training them. In this paper, we propose a novel watermark-based ownership protection method by using the residuals of important parameters. Different from other watermark-based ownership protection methods that rely on some specific neural network architectures and during verification require external data source, namely ownership indicators, our method does not explicitly use ownership indicators for verification to defeat various attacks against DNN watermarks. Specifically, we greedily select a few and important model parameters for embedding so that the impairment caused by the changed parameters can be reduced and the robustness against different attacks can be improved as the selected parameters can well preserve the model information. Also, without the external data sources for verification, the adversary can hardly cast doubts on ownership verification by forging counterfeit watermarks. The extensive experiments show that our method outperforms previous state-of-the-art methods in five tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21x.html PDF: http://proceedings.mlr.press/v139/liu21x/liu21x.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21x.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hanwen family: Liu - given: Zhenyu family: Weng - given: Yuesheng family: Zhu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6978-6988 id: liu21x issued: date-parts: - 2021 - 7 - 1 firstpage: 6978 lastpage: 6988 published: 2021-07-01 00:00:00 +0000 - title: 'Do We Actually Need Dense Over-Parameterization? 
In-Time Over-Parameterization in Sparse Training' abstract: 'In this paper, we introduce a new perspective on training deep neural networks capable of state-of-the-art performance without the need for the expensive over-parameterization by proposing the concept of In-Time Over-Parameterization (ITOP) in sparse training. By starting from a random sparse network and continuously exploring sparse connectivities during training, we can perform an Over-Parameterization over the course of training, closing the gap in the expressibility between sparse training and dense training. We further use ITOP to understand the underlying mechanism of Dynamic Sparse Training (DST) and discover that the benefits of DST come from its ability to consider across time all possible parameters when searching for the optimal sparse connectivity. As long as sufficient parameters have been reliably explored, DST can outperform the dense neural network by a large margin. We present a series of experiments to support our conjecture and achieve the state-of-the-art sparse training performance with ResNet-50 on ImageNet. More impressively, ITOP achieves dominant performance over the overparameterization-based sparse methods at extreme sparsities. When trained with ResNet-34 on CIFAR-100, ITOP can match the performance of the dense model at an extreme sparsity 98%.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21y.html PDF: http://proceedings.mlr.press/v139/liu21y/liu21y.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21y.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shiwei family: Liu - given: Lu family: Yin - given: Decebal Constantin family: Mocanu - given: Mykola family: Pechenizkiy editor: - given: Marina family: Meila - given: Tong family: Zhang page: 6989-7000 id: liu21y issued: date-parts: - 2021 - 7 - 1 firstpage: 6989 lastpage: 7000 published: 2021-07-01 00:00:00 +0000 - title: 'A Sharp Analysis of Model-based Reinforcement Learning with Self-Play' abstract: 'Model-based algorithms—algorithms that explore the environment through building and utilizing an estimated model—are widely used in reinforcement learning practice and theoretically shown to achieve optimal sample efficiency for single-agent reinforcement learning in Markov Decision Processes (MDPs). However, for multi-agent reinforcement learning in Markov games, the current best known sample complexity for model-based algorithms is rather suboptimal and compares unfavorably against recent model-free approaches. In this paper, we present a sharp analysis of model-based self-play algorithms for multi-agent Markov games. We design an algorithm \emph{Optimistic Nash Value Iteration} (Nash-VI) for two-player zero-sum Markov games that is able to output an $\epsilon$-approximate Nash policy in $\tilde{\mathcal{O}}(H^3SAB/\epsilon^2)$ episodes of game playing, where $S$ is the number of states, $A,B$ are the number of actions for the two players respectively, and $H$ is the horizon length. This significantly improves over the best known model-based guarantee of $\tilde{\mathcal{O}}(H^4S^2AB/\epsilon^2)$, and is the first that matches the information-theoretic lower bound $\Omega(H^3S(A+B)/\epsilon^2)$ except for a $\min\{A,B\}$ factor. 
In addition, our guarantee compares favorably against the best known model-free algorithm if $\min\{A,B\}=o(H^3)$, and outputs a single Markov policy while existing sample-efficient model-free algorithms output a nested mixture of Markov policies that is in general non-Markov and rather inconvenient to store and execute. We further adapt our analysis to designing a provably efficient task-agnostic algorithm for zero-sum Markov games, and designing the first line of provably sample-efficient algorithms for multi-player general-sum Markov games.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21z.html PDF: http://proceedings.mlr.press/v139/liu21z/liu21z.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21z.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Qinghua family: Liu - given: Tiancheng family: Yu - given: Yu family: Bai - given: Chi family: Jin editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7001-7010 id: liu21z issued: date-parts: - 2021 - 7 - 1 firstpage: 7001 lastpage: 7010 published: 2021-07-01 00:00:00 +0000 - title: 'Lottery Ticket Preserves Weight Correlation: Is It Desirable or Not?' abstract: 'In deep model compression, the recent finding "Lottery Ticket Hypothesis" (LTH) pointed out that there could exist a winning ticket (i.e., a properly pruned sub-network together with original weight initialization) that can achieve performance competitive with the original dense network. However, it is not easy to observe such a winning property in many scenarios where, for example, a relatively large learning rate is used even if it benefits training the original dense model. In this work, we investigate the underlying condition and rationale behind the winning property, and find that the underlying reason is largely attributed to the correlation between initialized weights and final-trained weights when the learning rate is not sufficiently large. Thus, the existence of the winning property is correlated with insufficient DNN pretraining, and is unlikely to occur for a well-trained DNN. To overcome this limitation, we propose the "pruning & fine-tuning" method that consistently outperforms lottery ticket sparse training under the same pruning algorithm and the same total training epochs. Extensive experiments over multiple deep models (VGG, ResNet, MobileNet-v2) on different datasets have been conducted to justify our proposals.' 
volume: 139 URL: https://proceedings.mlr.press/v139/liu21aa.html PDF: http://proceedings.mlr.press/v139/liu21aa/liu21aa.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21aa.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ning family: Liu - given: Geng family: Yuan - given: Zhengping family: Che - given: Xuan family: Shen - given: Xiaolong family: Ma - given: Qing family: Jin - given: Jian family: Ren - given: Jian family: Tang - given: Sijia family: Liu - given: Yanzhi family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7011-7020 id: liu21aa issued: date-parts: - 2021 - 7 - 1 firstpage: 7011 lastpage: 7020 published: 2021-07-01 00:00:00 +0000 - title: 'Group Fisher Pruning for Practical Network Compression' abstract: 'Network compression has been widely studied since it is able to reduce the memory and computation cost during inference. However, previous methods seldom deal with complicated structures like residual connections, group/depth-wise convolution and feature pyramid network, where channels of multiple layers are coupled and need to be pruned simultaneously. In this paper, we present a general channel pruning approach that can be applied to various complicated structures. Particularly, we propose a layer grouping algorithm to find coupled channels automatically. Then we derive a unified metric based on Fisher information to evaluate the importance of a single channel and coupled channels. Moreover, we find that inference speedup on GPUs is more correlated with the reduction of memory rather than FLOPs, and thus we employ the memory reduction of each channel to normalize the importance. Our method can be used to prune any structures including those with coupled channels. We conduct extensive experiments on various backbones, including the classic ResNet and ResNeXt, mobile-friendly MobileNetV2, and the NAS-based RegNet, both on image classification and object detection which is under-explored. Experimental results validate that our method can effectively prune sophisticated networks, boosting inference speed without sacrificing accuracy.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21ab.html PDF: http://proceedings.mlr.press/v139/liu21ab/liu21ab.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21ab.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Liyang family: Liu - given: Shilong family: Zhang - given: Zhanghui family: Kuang - given: Aojun family: Zhou - given: Jing-Hao family: Xue - given: Xinjiang family: Wang - given: Yimin family: Chen - given: Wenming family: Yang - given: Qingmin family: Liao - given: Wayne family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7021-7032 id: liu21ab issued: date-parts: - 2021 - 7 - 1 firstpage: 7021 lastpage: 7032 published: 2021-07-01 00:00:00 +0000 - title: 'Infinite-Dimensional Optimization for Zero-Sum Games via Variational Transport' abstract: 'Game optimization has been extensively studied when decision variables lie in a finite-dimensional space, of which solutions correspond to pure strategies at the Nash equilibrium (NE), and the gradient descent-ascent (GDA) method works widely in practice. 
In this paper, we consider infinite-dimensional zero-sum games by a min-max distributional optimization problem over a space of probability measures defined on a continuous variable set, which is inspired by finding a mixed NE for finite-dimensional zero-sum games. We then aim to answer the following question: \textit{Will GDA-type algorithms still be provably efficient when extended to infinite-dimensional zero-sum games?} To answer this question, we propose a particle-based variational transport algorithm based on GDA in the functional spaces. Specifically, the algorithm performs multi-step functional gradient descent-ascent in the Wasserstein space via pushing two sets of particles in the variable space. By characterizing the gradient estimation error from variational form maximization and the convergence behavior of each player with different objective landscapes, we prove rigorously that the generalized GDA algorithm converges to the NE or the value of the game efficiently for a class of games under the Polyak-Łojasiewicz (PL) condition. To conclude, we provide complete statistical and convergence guarantees for solving an infinite-dimensional zero-sum game via a provably efficient particle-based method. Additionally, our work provides the first thorough statistical analysis for the particle-based algorithm to learn an objective functional with a variational form using universal approximators (\textit{i.e.}, neural networks (NNs)), which is of independent interest.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21ac.html PDF: http://proceedings.mlr.press/v139/liu21ac/liu21ac.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21ac.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Lewis family: Liu - given: Yufeng family: Zhang - given: Zhuoran family: Yang - given: Reza family: Babanezhad - given: Zhaoran family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7033-7044 id: liu21ac issued: date-parts: - 2021 - 7 - 1 firstpage: 7033 lastpage: 7044 published: 2021-07-01 00:00:00 +0000 - title: 'Noise and Fluctuation of Finite Learning Rate Stochastic Gradient Descent' abstract: 'In the vanishing learning rate regime, stochastic gradient descent (SGD) is now relatively well understood. In this work, we propose to study the basic properties of SGD and its variants in the non-vanishing learning rate regime. The focus is on deriving exactly solvable results and discussing their implications. The main contributions of this work are to derive the stationary distribution for discrete-time SGD in a quadratic loss function with and without momentum; in particular, one implication of our result is that the fluctuation caused by discrete-time dynamics takes a distorted shape and is dramatically larger than a continuous-time theory could predict. Examples of applications of the proposed theory considered in this work include the approximation error of variants of SGD, the effect of minibatch noise, the optimal Bayesian inference, the escape rate from a sharp minimum, and the stationary covariance of a few second-order methods including damped Newton’s method, natural gradient descent, and Adam.' 
volume: 139 URL: https://proceedings.mlr.press/v139/liu21ad.html PDF: http://proceedings.mlr.press/v139/liu21ad/liu21ad.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21ad.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kangqiao family: Liu - given: Liu family: Ziyin - given: Masahito family: Ueda editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7045-7056 id: liu21ad issued: date-parts: - 2021 - 7 - 1 firstpage: 7045 lastpage: 7056 published: 2021-07-01 00:00:00 +0000 - title: 'Multi-layered Network Exploration via Random Walks: From Offline Optimization to Online Learning' abstract: 'Multi-layered network exploration (MuLaNE) problem is an important problem abstracted from many applications. In MuLaNE, there are multiple network layers where each node has an importance weight and each layer is explored by a random walk. The MuLaNE task is to allocate total random walk budget $B$ into each network layer so that the total weights of the unique nodes visited by random walks are maximized. We systematically study this problem from offline optimization to online learning. For the offline optimization setting where the network structure and node weights are known, we provide greedy based constant-ratio approximation algorithms for overlapping networks, and greedy or dynamic-programming based optimal solutions for non-overlapping networks. For the online learning setting, neither the network structure nor the node weights are known initially. We adapt the combinatorial multi-armed bandit framework and design algorithms to learn random walk related parameters and node weights while optimizing the budget allocation in multiple rounds, and prove that they achieve logarithmic regret bounds. Finally, we conduct experiments on a real-world social network dataset to validate our theoretical results.' volume: 139 URL: https://proceedings.mlr.press/v139/liu21ae.html PDF: http://proceedings.mlr.press/v139/liu21ae/liu21ae.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liu21ae.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xutong family: Liu - given: Jinhang family: Zuo - given: Xiaowei family: Chen - given: Wei family: Chen - given: John C. S. family: Lui editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7057-7066 id: liu21ae issued: date-parts: - 2021 - 7 - 1 firstpage: 7057 lastpage: 7066 published: 2021-07-01 00:00:00 +0000 - title: 'Relative Positional Encoding for Transformers with Linear Complexity' abstract: 'Recent advances in Transformer models allow for unprecedented sequence lengths, due to linear space and time complexity. In the meantime, relative positional encoding (RPE) was proposed as beneficial for classical Transformers and consists in exploiting lags instead of absolute positions for inference. Still, RPE is not available for the recent linear-variants of the Transformer, because it requires the explicit computation of the attention matrix, which is precisely what is avoided by such methods. In this paper, we bridge this gap and present Stochastic Positional Encoding as a way to generate PE that can be used as a replacement to the classical additive (sinusoidal) PE and provably behaves like RPE. 
The main theoretical contribution is to make a connection between positional encoding and cross-covariance structures of correlated Gaussian processes. We illustrate the performance of our approach on the Long-Range Arena benchmark and on music generation.' volume: 139 URL: https://proceedings.mlr.press/v139/liutkus21a.html PDF: http://proceedings.mlr.press/v139/liutkus21a/liutkus21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-liutkus21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Antoine family: Liutkus - given: Ondřej family: Cífka - given: Shih-Lun family: Wu - given: Umut family: Simsekli - given: Yi-Hsuan family: Yang - given: Gael family: Richard editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7067-7079 id: liutkus21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7067 lastpage: 7079 published: 2021-07-01 00:00:00 +0000 - title: 'Joint Online Learning and Decision-making via Dual Mirror Descent' abstract: 'We consider an online revenue maximization problem over a finite time horizon subject to lower and upper bounds on cost. At each period, an agent receives a context vector sampled i.i.d. from an unknown distribution and needs to make a decision adaptively. The revenue and cost functions depend on the context vector as well as some fixed but possibly unknown parameter vector to be learned. We propose a novel offline benchmark and a new algorithm that mixes an online dual mirror descent scheme with a generic parameter learning process. When the parameter vector is known, we demonstrate an $O(\sqrt{T})$ regret result as well as an $O(\sqrt{T})$ bound on the possible constraint violations. When the parameter is not known and must be learned, we demonstrate that the regret and constraint violations are the sums of the previous $O(\sqrt{T})$ terms plus terms that directly depend on the convergence of the learning process.' volume: 139 URL: https://proceedings.mlr.press/v139/lobos21a.html PDF: http://proceedings.mlr.press/v139/lobos21a/lobos21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lobos21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Alfonso family: Lobos - given: Paul family: Grigas - given: Zheng family: Wen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7080-7089 id: lobos21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7080 lastpage: 7089 published: 2021-07-01 00:00:00 +0000 - title: 'Symmetric Spaces for Graph Embeddings: A Finsler-Riemannian Approach' abstract: 'Learning faithful graph representations as sets of vertex embeddings has become a fundamental intermediary step in a wide range of machine learning applications. We propose the systematic use of symmetric spaces in representation learning, a class encompassing many of the previously used embedding targets. This enables us to introduce a new method, the use of Finsler metrics integrated in a Riemannian optimization scheme, that better adapts to dissimilar structures in the graph. We develop a tool to analyze the embeddings and infer structural properties of the data sets. For implementation, we choose Siegel spaces, a versatile family of symmetric spaces. 
Our approach outperforms competitive baselines for graph reconstruction tasks on various synthetic and real-world datasets. We further demonstrate its applicability on two downstream tasks, recommender systems and node classification.' volume: 139 URL: https://proceedings.mlr.press/v139/lopez21a.html PDF: http://proceedings.mlr.press/v139/lopez21a/lopez21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lopez21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Federico family: Lopez - given: Beatrice family: Pozzetti - given: Steve family: Trettel - given: Michael family: Strube - given: Anna family: Wienhard editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7090-7101 id: lopez21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7090 lastpage: 7101 published: 2021-07-01 00:00:00 +0000 - title: 'HEMET: A Homomorphic-Encryption-Friendly Privacy-Preserving Mobile Neural Network Architecture' abstract: 'Recently, Homomorphic Encryption (HE) has been used to implement Privacy-Preserving Neural Networks (PPNNs) that perform inferences directly on encrypted data without decryption. Prior PPNNs adopt mobile network architectures such as SqueezeNet for smaller computing overhead, but we find naïvely using mobile network architectures for a PPNN does not necessarily achieve shorter inference latency. Despite having fewer parameters, a mobile network architecture typically introduces more layers and increases the HE multiplicative depth of a PPNN, thereby prolonging its inference latency. In this paper, we propose a \textbf{HE}-friendly privacy-preserving \textbf{M}obile neural n\textbf{ET}work architecture, \textbf{HEMET}. Experimental results show that, compared to state-of-the-art (SOTA) PPNNs, HEMET reduces the inference latency by $59.3\%\sim 61.2\%$, and improves the inference accuracy by $0.4\%\sim 0.5\%$.' volume: 139 URL: https://proceedings.mlr.press/v139/lou21a.html PDF: http://proceedings.mlr.press/v139/lou21a/lou21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lou21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Qian family: Lou - given: Lei family: Jiang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7102-7110 id: lou21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7102 lastpage: 7110 published: 2021-07-01 00:00:00 +0000 - title: 'Optimal Complexity in Decentralized Training' abstract: 'Decentralization is a promising method of scaling up parallel machine learning systems. In this paper, we provide a tight lower bound on the iteration complexity for such methods in a stochastic non-convex setting. Our lower bound reveals a theoretical gap in known convergence rates of many existing decentralized training algorithms, such as D-PSGD. We prove by construction that this lower bound is tight and achievable. Motivated by our insights, we further propose DeTAG, a practical gossip-style decentralized algorithm that achieves the lower bound with only a logarithmic gap. Empirically, we compare DeTAG with other decentralized algorithms on image classification tasks, and we show DeTAG enjoys faster convergence compared to baselines, especially on unshuffled data and in sparse networks.' 
volume: 139 URL: https://proceedings.mlr.press/v139/lu21a.html PDF: http://proceedings.mlr.press/v139/lu21a/lu21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lu21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yucheng family: Lu - given: Christopher family: De Sa editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7111-7123 id: lu21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7111 lastpage: 7123 published: 2021-07-01 00:00:00 +0000 - title: 'DANCE: Enhancing saliency maps using decoys' abstract: 'Saliency methods can make deep neural network predictions more interpretable by identifying a set of critical features in an input sample, such as pixels that contribute most strongly to a prediction made by an image classifier. Unfortunately, recent evidence suggests that many saliency methods perform poorly, especially in situations where gradients are saturated, inputs contain adversarial perturbations, or predictions rely upon inter-feature dependence. To address these issues, we propose a framework, DANCE, which improves the robustness of saliency methods by following a two-step procedure. First, we introduce a perturbation mechanism that subtly varies the input sample without changing its intermediate representations. Using this approach, we can gather a corpus of perturbed ("decoy") data samples while ensuring that the perturbed and original input samples follow similar distributions. Second, we compute saliency maps for the decoy samples and propose a new method to aggregate saliency maps. With this design, we offset the influence of gradient saturation. From a theoretical perspective, we show that the aggregated saliency map not only captures inter-feature dependence but, more importantly, is robust against previously described adversarial perturbation methods. Our empirical results suggest that, both qualitatively and quantitatively, DANCE outperforms existing methods in a variety of application domains.' volume: 139 URL: https://proceedings.mlr.press/v139/lu21b.html PDF: http://proceedings.mlr.press/v139/lu21b/lu21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lu21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yang Young family: Lu - given: Wenbo family: Guo - given: Xinyu family: Xing - given: William Stafford family: Noble editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7124-7133 id: lu21b issued: date-parts: - 2021 - 7 - 1 firstpage: 7124 lastpage: 7133 published: 2021-07-01 00:00:00 +0000 - title: 'Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification' abstract: 'To cope with high annotation costs, training a classifier only from weakly supervised data has attracted a great deal of attention these days. Among various approaches, strengthening supervision from completely unsupervised classification is a promising direction, which typically employs class priors as the only supervision and trains a binary classifier from unlabeled (U) datasets. While existing risk-consistent methods are theoretically grounded with high flexibility, they can learn only from two U sets. In this paper, we propose a new approach for binary classification from $m$ U-sets for $m\ge2$. 
Our key idea is to consider an auxiliary classification task called surrogate set classification (SSC), which is aimed at predicting from which U set each observed sample is drawn. SSC can be solved by a standard (multi-class) classification method, and we use the SSC solution to obtain the final binary classifier through a certain linear-fractional transformation. We built our method in a flexible and efficient end-to-end deep learning framework and prove it to be classifier-consistent. Through experiments, we demonstrate the superiority of our proposed method over state-of-the-art methods.' volume: 139 URL: https://proceedings.mlr.press/v139/lu21c.html PDF: http://proceedings.mlr.press/v139/lu21c/lu21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lu21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Nan family: Lu - given: Shida family: Lei - given: Gang family: Niu - given: Issei family: Sato - given: Masashi family: Sugiyama editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7134-7144 id: lu21c issued: date-parts: - 2021 - 7 - 1 firstpage: 7134 lastpage: 7144 published: 2021-07-01 00:00:00 +0000 - title: 'Variance Reduced Training with Stratified Sampling for Forecasting Models' abstract: 'In large-scale time series forecasting, one often encounters the situation where the temporal patterns of time series, while drifting over time, differ from one another in the same dataset. In this paper, we provably show under such heterogeneity, training a forecasting model with commonly used stochastic optimizers (e.g. SGD) potentially suffers large variance on gradient estimation, and thus incurs long-time training. We show that this issue can be efficiently alleviated via stratification, which allows the optimizer to sample from pre-grouped time series strata. For better trading-off gradient variance and computation complexity, we further propose SCott (Stochastic Stratified Control Variate Gradient Descent), a variance reduced SGD-style optimizer that utilizes stratified sampling via control variate. In theory, we provide the convergence guarantee of SCott on smooth non-convex objectives. Empirically, we evaluate SCott and other baseline optimizers on both synthetic and real-world time series forecasting problems, and demonstrate SCott converges faster with respect to both iterations and wall clock time.' 
volume: 139 URL: https://proceedings.mlr.press/v139/lu21d.html PDF: http://proceedings.mlr.press/v139/lu21d/lu21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lu21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yucheng family: Lu - given: Youngsuk family: Park - given: Lifan family: Chen - given: Yuyang family: Wang - given: Christopher family: De Sa - given: Dean family: Foster editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7145-7155 id: lu21d issued: date-parts: - 2021 - 7 - 1 firstpage: 7145 lastpage: 7155 published: 2021-07-01 00:00:00 +0000 - title: 'ACE: Explaining cluster from an adversarial perspective' abstract: 'A common workflow in single-cell RNA-seq analysis is to project the data to a latent space, cluster the cells in that space, and identify sets of marker genes that explain the differences among the discovered clusters. A primary drawback to this three-step procedure is that each step is carried out independently, thereby neglecting the effects of the nonlinear embedding and inter-gene dependencies on the selection of marker genes. Here we propose an integrated deep learning framework, Adversarial Clustering Explanation (ACE), that bundles all three steps into a single workflow. The method thus moves away from the notion of "marker genes" to instead identify a panel of explanatory genes. This panel may include genes that are not only enriched but also depleted relative to other cell types, as well as genes that exhibit differences between closely related cell types. Empirically, we demonstrate that ACE is able to identify gene panels that are both highly discriminative and nonredundant, and we demonstrate the applicability of ACE to an image recognition task.' volume: 139 URL: https://proceedings.mlr.press/v139/lu21e.html PDF: http://proceedings.mlr.press/v139/lu21e/lu21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lu21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yang Young family: Lu - given: Timothy C family: Yu - given: Giancarlo family: Bonora - given: William Stafford family: Noble editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7156-7167 id: lu21e issued: date-parts: - 2021 - 7 - 1 firstpage: 7156 lastpage: 7167 published: 2021-07-01 00:00:00 +0000 - title: 'On Monotonic Linear Interpolation of Neural Network Parameters' abstract: 'Linear interpolation between initial neural network parameters and converged parameters after training with stochastic gradient descent (SGD) typically leads to a monotonic decrease in the training objective. This Monotonic Linear Interpolation (MLI) property, first observed by Goodfellow et al. 2014, persists in spite of the non-convex objectives and highly non-linear training dynamics of neural networks. Extending this work, we evaluate several hypotheses for this property that, to our knowledge, have not yet been explored. Using tools from differential geometry, we draw connections between the interpolated paths in function space and the monotonicity of the network — providing sufficient conditions for the MLI property under mean squared error. 
While the MLI property holds under various settings (e.g., network architectures and learning problems), we show in practice that networks violating the MLI property can be produced systematically, by encouraging the weights to move far from initialization. The MLI property raises important questions about the loss landscape geometry of neural networks and highlights the need to further study their global properties.' volume: 139 URL: https://proceedings.mlr.press/v139/lucas21a.html PDF: http://proceedings.mlr.press/v139/lucas21a/lucas21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lucas21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: James R family: Lucas - given: Juhan family: Bae - given: Michael R family: Zhang - given: Stanislav family: Fort - given: Richard family: Zemel - given: Roger B family: Grosse editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7168-7179 id: lucas21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7168 lastpage: 7179 published: 2021-07-01 00:00:00 +0000 - title: 'Improving Breadth-Wise Backpropagation in Graph Neural Networks Helps Learning Long-Range Dependencies.' abstract: 'In this work, we focus on the ability of graph neural networks (GNNs) to learn long-range patterns in graphs with edge features. Learning patterns that involve longer paths in the graph, requires using deeper GNNs. However, GNNs suffer from a drop in performance with increasing network depth. To improve the performance of deeper GNNs, previous works have investigated normalization techniques and various types of skip connections. While they are designed to improve depth-wise backpropagation between the representations of the same node in successive layers, they do not improve breadth-wise backpropagation between representations of neighbouring nodes. To analyse the consequences, we design synthetic datasets serving as a testbed for the ability of GNNs to learn long-range patterns. Our analysis shows that several commonly used GNN variants with only depth-wise skip connections indeed have problems learning long-range patterns. They are clearly outperformed by an attention-based GNN architecture that we propose for improving both depth- and breadth-wise backpropagation. We also verify that the presented architecture is competitive on real-world data.' volume: 139 URL: https://proceedings.mlr.press/v139/lukovnikov21a.html PDF: http://proceedings.mlr.press/v139/lukovnikov21a/lukovnikov21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lukovnikov21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Denis family: Lukovnikov - given: Asja family: Fischer editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7180-7191 id: lukovnikov21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7180 lastpage: 7191 published: 2021-07-01 00:00:00 +0000 - title: 'GraphDF: A Discrete Flow Model for Molecular Graph Generation' abstract: 'We consider the problem of molecular graph generation using deep models. While graphs are discrete, most existing methods use continuous latent variables, resulting in inaccurate modeling of discrete graph structures. 
In this work, we propose GraphDF, a novel discrete latent variable model for molecular graph generation based on normalizing flow methods. GraphDF uses invertible modulo shift transforms to map discrete latent variables to graph nodes and edges. We show that the use of discrete latent variables reduces computational costs and eliminates the negative effect of dequantization. Comprehensive experimental results show that GraphDF outperforms prior methods on random generation, property optimization, and constrained optimization tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/luo21a.html PDF: http://proceedings.mlr.press/v139/luo21a/luo21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-luo21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Youzhi family: Luo - given: Keqiang family: Yan - given: Shuiwang family: Ji editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7192-7203 id: luo21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7192 lastpage: 7203 published: 2021-07-01 00:00:00 +0000 - title: 'Trajectory Diversity for Zero-Shot Coordination' abstract: 'We study the problem of zero-shot coordination (ZSC), where agents must independently produce strategies for a collaborative game that are compatible with novel partners not seen during training. Our first contribution is to consider the need for diversity in generating such agents. Because self-play (SP) agents control their own trajectory distribution during training, each policy typically only performs well on this exact distribution. As a result, they achieve low scores in ZSC, since playing with another agent is likely to put them in situations they have not encountered during training. To address this issue, we train a common best response (BR) to a population of agents, which we regulate to be diverse. To this end, we introduce \textit{Trajectory Diversity} (TrajeDi) – a differentiable objective for generating diverse reinforcement learning policies. We derive TrajeDi as a generalization of the Jensen-Shannon divergence between policies and motivate it experimentally in two simple settings. We then focus on the collaborative card game Hanabi, demonstrating the scalability of our method and improving upon the cross-play scores of both independently trained SP agents and BRs to unregularized populations.' volume: 139 URL: https://proceedings.mlr.press/v139/lupu21a.html PDF: http://proceedings.mlr.press/v139/lupu21a/lupu21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lupu21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Andrei family: Lupu - given: Brandon family: Cui - given: Hengyuan family: Hu - given: Jakob family: Foerster editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7204-7213 id: lupu21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7204 lastpage: 7213 published: 2021-07-01 00:00:00 +0000 - title: 'HyperHyperNetwork for the Design of Antenna Arrays' abstract: 'We present deep learning methods for the design of arrays and single instances of small antennas. 
Each design instance is conditioned on a target radiation pattern and is required to conform to specific spatial dimensions and to include, as part of its metallic structure, a set of predetermined locations. The solution, in the case of a single antenna, is based on a composite neural network that combines a simulation network, a hypernetwork, and a refinement network. In the design of the antenna array, we add an additional design level and employ a hypernetwork within a hypernetwork. The learning objective is based on measuring the similarity of the obtained radiation pattern to the desired one. Our experiments demonstrate that our approach is able to design novel antennas and antenna arrays that are compliant with the design requirements, considerably better than the baseline methods. We compare the solutions obtained by our method to existing designs and demonstrate a high level of overlap. When designing the antenna array of a cellular phone, the obtained solution displays improved properties over the existing one.' volume: 139 URL: https://proceedings.mlr.press/v139/lutati21a.html PDF: http://proceedings.mlr.press/v139/lutati21a/lutati21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lutati21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shahar family: Lutati - given: Lior family: Wolf editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7214-7223 id: lutati21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7214 lastpage: 7223 published: 2021-07-01 00:00:00 +0000 - title: 'Value Iteration in Continuous Actions, States and Time' abstract: 'Classical value iteration approaches are not applicable to environments with continuous states and actions. For such environments the states and actions must be discretized, which leads to an exponential increase in computational complexity. In this paper, we propose continuous fitted value iteration (cFVI). This algorithm enables dynamic programming for continuous states and actions with a known dynamics model. Exploiting the continuous time formulation, the optimal policy can be derived for non-linear control-affine dynamics. This closed-form solution enables the efficient extension of value iteration to continuous environments. 
We show in non-linear control experiments that the dynamic programming solution obtains the same quantitative performance as deep reinforcement learning methods in simulation but excels when transferred to the physical system. The policy obtained by cFVI is more robust to changes in the dynamics despite using only a deterministic model and without explicitly incorporating robustness in the optimization.' volume: 139 URL: https://proceedings.mlr.press/v139/lutter21a.html PDF: http://proceedings.mlr.press/v139/lutter21a/lutter21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-lutter21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Michael family: Lutter - given: Shie family: Mannor - given: Jan family: Peters - given: Dieter family: Fox - given: Animesh family: Garg editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7224-7234 id: lutter21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7224 lastpage: 7234 published: 2021-07-01 00:00:00 +0000 - title: 'Meta-Cal: Well-controlled Post-hoc Calibration by Ranking' abstract: 'In many applications, it is desirable that a classifier not only makes accurate predictions, but also outputs calibrated posterior probabilities. However, many existing classifiers, especially deep neural network classifiers, tend to be uncalibrated. Post-hoc calibration is a technique to recalibrate a model by learning a calibration map. Existing approaches mostly focus on constructing calibration maps with low calibration errors; however, this alone is inadequate for a calibrator to be useful. In this paper, we introduce two constraints that are worth consideration in designing a calibration map for post-hoc calibration. Then we present Meta-Cal, which is built from a base calibrator and a ranking model. Under some mild assumptions, two high-probability bounds are given with respect to these constraints. Empirical results on CIFAR-10, CIFAR-100 and ImageNet and a range of popular network architectures show our proposed method significantly outperforms the current state of the art for post-hoc multi-class classification calibration.' volume: 139 URL: https://proceedings.mlr.press/v139/ma21a.html PDF: http://proceedings.mlr.press/v139/ma21a/ma21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ma21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xingchen family: Ma - given: Matthew B. family: Blaschko editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7235-7245 id: ma21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7235 lastpage: 7245 published: 2021-07-01 00:00:00 +0000 - title: 'Neural-Pull: Learning Signed Distance Function from Point clouds by Learning to Pull Space onto Surface' abstract: 'Reconstructing continuous surfaces from 3D point clouds is a fundamental operation in 3D geometry processing. Several recent state-of-the-art methods address this problem using neural networks to learn signed distance functions (SDFs). In this paper, we introduce Neural-Pull, a new approach that is simple and leads to high quality SDFs.
Specifically, we train a neural network to pull query 3D locations to their closest points on the surface using the predicted signed distance values and the gradient at the query locations, both of which are computed by the network itself. The pulling operation moves each query location with a stride given by the distance predicted by the network. Based on the sign of the distance, this may move the query location along or against the direction of the gradient of the SDF. This is a differentiable operation that allows us to update the signed distance value and the gradient simultaneously during training. Our results on widely used benchmarks demonstrate that we can learn SDFs more accurately and flexibly than state-of-the-art methods for both surface reconstruction and single image reconstruction. Our code and data are available at https://github.com/mabaorui/NeuralPull.' volume: 139 URL: https://proceedings.mlr.press/v139/ma21b.html PDF: http://proceedings.mlr.press/v139/ma21b/ma21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ma21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Baorui family: Ma - given: Zhizhong family: Han - given: Yu-Shen family: Liu - given: Matthias family: Zwicker editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7246-7257 id: ma21b issued: date-parts: - 2021 - 7 - 1 firstpage: 7246 lastpage: 7257 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Stochastic Behaviour from Aggregate Data' abstract: 'Learning nonlinear dynamics from aggregate data is a challenging problem because the full trajectory of each individual is not available, namely, the individual observed at one time may not be observed at the next time point, or the identity of the individual is unavailable. This is in sharp contrast to learning dynamics with full trajectory data, on which the majority of existing methods are based. We propose a novel method using the weak form of the Fokker-Planck Equation (FPE) — a partial differential equation — to describe the density evolution of data in a sampled form, which is then combined with a Wasserstein generative adversarial network (WGAN) in the training process. In such a sample-based framework we are able to learn the nonlinear dynamics from aggregate data without explicitly solving the partial differential equation (PDE) FPE. We demonstrate our approach in the context of a series of synthetic and real-world data sets.' volume: 139 URL: https://proceedings.mlr.press/v139/ma21c.html PDF: http://proceedings.mlr.press/v139/ma21c/ma21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ma21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shaojun family: Ma - given: Shu family: Liu - given: Hongyuan family: Zha - given: Haomin family: Zhou editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7258-7267 id: ma21c issued: date-parts: - 2021 - 7 - 1 firstpage: 7258 lastpage: 7267 published: 2021-07-01 00:00:00 +0000 - title: 'Local Algorithms for Finding Densely Connected Clusters' abstract: 'Local graph clustering is an important algorithmic technique for analysing massive graphs, and has been widely applied in many research fields of data science.
While the objective of most (local) graph clustering algorithms is to find a vertex set of low conductance, there has been a sequence of recent studies that highlight the importance of the inter-connection between clusters when analysing real-world datasets. Following this line of research, in this work we study local algorithms for finding a pair of vertex sets defined with respect to their inter-connection and their relationship with the rest of the graph. The key to our analysis is a new reduction technique that relates the structure of multiple sets to a single vertex set in the reduced graph. Among many potential applications, we show that our algorithms successfully recover densely connected clusters in the Interstate Disputes Dataset and the US Migration Dataset.' volume: 139 URL: https://proceedings.mlr.press/v139/macgregor21a.html PDF: http://proceedings.mlr.press/v139/macgregor21a/macgregor21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-macgregor21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Peter family: Macgregor - given: He family: Sun editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7268-7278 id: macgregor21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7268 lastpage: 7278 published: 2021-07-01 00:00:00 +0000 - title: 'Learning to Generate Noise for Multi-Attack Robustness' abstract: 'Adversarial learning has emerged as one of the successful techniques to circumvent the susceptibility of existing methods against adversarial perturbations. However, the majority of existing defense methods are tailored to defend against a single category of adversarial perturbation (e.g. $\ell_\infty$-attack). In safety-critical applications, this makes these methods extraneous as the attacker can adopt diverse adversaries to deceive the system. Moreover, training on multiple perturbations simultaneously significantly increases the computational overhead during training. To address these challenges, we propose a novel meta-learning framework that explicitly learns to generate noise to improve the model’s robustness against multiple types of attacks. Its key component is \emph{Meta Noise Generator (MNG)} that outputs optimal noise to stochastically perturb a given sample, such that it helps lower the error on diverse adversarial perturbations. By utilizing samples generated by MNG, we train a model by enforcing the label consistency across multiple perturbations. We validate the robustness of models trained by our scheme on various datasets and against a wide variety of perturbations, demonstrating that it significantly outperforms the baselines across multiple perturbations with a marginal computational cost.' 
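As a rough illustration of the label-consistency objective described in the abstract above, the sketch below evaluates a supervised loss on several perturbed views of each input and penalises disagreement between the per-view predictions. It is not the authors' code: the names are hypothetical, plain Gaussian noise stands in for the learned Meta Noise Generator, and a fixed random linear map stands in for the classifier.

```python
# Hedged sketch of training with label consistency across multiple perturbations.
# Gaussian noise is a stand-in for the learned noise generator; names are illustrative.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def consistency_loss(model, x, y, num_views=3, noise_scale=0.1, lam=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # several perturbed views of the same batch
    views = [x + noise_scale * rng.normal(size=x.shape) for _ in range(num_views)]
    probs = [softmax(model(v)) for v in views]
    # supervised term on every view
    ce = -np.mean([np.log(p[np.arange(len(y)), y] + 1e-12).mean() for p in probs])
    # consistency term: pull each view's prediction towards the average prediction
    mean_p = np.mean(probs, axis=0)
    consistency = np.mean([np.sum((p - mean_p) ** 2, axis=1).mean() for p in probs])
    return ce + lam * consistency

model = lambda v: v @ np.ones((4, 3))   # toy linear "classifier"
x = np.random.default_rng(1).normal(size=(8, 4))
y = np.random.default_rng(2).integers(0, 3, size=8)
print(consistency_loss(model, x, y))
```

The essential point is only the structure of the loss: a supervised term over every perturbed view plus a term that enforces agreement between the views.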
volume: 139 URL: https://proceedings.mlr.press/v139/madaan21a.html PDF: http://proceedings.mlr.press/v139/madaan21a/madaan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-madaan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Divyam family: Madaan - given: Jinwoo family: Shin - given: Sung Ju family: Hwang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7279-7289 id: madaan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7279 lastpage: 7289 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Interaction Kernels for Agent Systems on Riemannian Manifolds' abstract: 'Interacting agent and particle systems are extensively used to model complex phenomena in science and engineering. We consider the problem of learning interaction kernels in these dynamical systems constrained to evolve on Riemannian manifolds from given trajectory data. The models we consider are based on interaction kernels depending on pairwise Riemannian distances between agents, with agents interacting locally along the direction of the shortest geodesic connecting them. We show that our estimators converge at a rate that is independent of the dimension of the state space, and derive bounds on the trajectory estimation error, on the manifold, between the observed and estimated dynamics. We demonstrate the performance of our estimator on two classical first order interacting systems: Opinion Dynamics and a Predator-Swarm system, with each system constrained on two prototypical manifolds, the $2$-dimensional sphere and the Poincaré disk model of hyperbolic space.' volume: 139 URL: https://proceedings.mlr.press/v139/maggioni21a.html PDF: http://proceedings.mlr.press/v139/maggioni21a/maggioni21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-maggioni21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mauro family: Maggioni - given: Jason J family: Miller - given: Hongda family: Qiu - given: Ming family: Zhong editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7290-7300 id: maggioni21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7290 lastpage: 7300 published: 2021-07-01 00:00:00 +0000 - title: 'Tesseract: Tensorised Actors for Multi-Agent Reinforcement Learning' abstract: 'Reinforcement Learning in large action spaces is a challenging problem. This is especially true for cooperative multi-agent reinforcement learning (MARL), which often requires tractable learning while respecting various constraints like communication budget and information about other agents. In this work, we focus on the fundamental hurdle affecting both value-based and policy-gradient approaches: an exponential blowup of the action space with the number of agents. For value-based methods, it poses challenges in accurately representing the optimal value function, thus inducing suboptimality. For policy gradient methods, it renders the critic ineffective and exacerbates the problem of the lagging critic. We show that from a learning theory perspective, both problems can be addressed by accurately representing the associated action-value function with a low-complexity hypothesis class. This requires accurately modelling the agent interactions in a sample efficient way.
To this end, we propose a novel tensorised formulation of the Bellman equation. This gives rise to our method Tesseract, which utilises the view of Q-function seen as a tensor where the modes correspond to action spaces of different agents. Algorithms derived from Tesseract decompose the Q-tensor across the agents and utilise low-rank tensor approximations to model the agent interactions relevant to the task. We provide PAC analysis for Tesseract based algorithms and highlight their relevance to the class of rich observation MDPs. Empirical results in different domains confirm the gains in sample efficiency using Tesseract as supported by the theory.' volume: 139 URL: https://proceedings.mlr.press/v139/mahajan21a.html PDF: http://proceedings.mlr.press/v139/mahajan21a/mahajan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-mahajan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Anuj family: Mahajan - given: Mikayel family: Samvelyan - given: Lei family: Mao - given: Viktor family: Makoviychuk - given: Animesh family: Garg - given: Jean family: Kossaifi - given: Shimon family: Whiteson - given: Yuke family: Zhu - given: Animashree family: Anandkumar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7301-7312 id: mahajan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7301 lastpage: 7312 published: 2021-07-01 00:00:00 +0000 - title: 'Domain Generalization using Causal Matching' abstract: 'In the domain generalization literature, a common objective is to learn representations independent of the domain after conditioning on the class label. We show that this objective is not sufficient: there exist counter-examples where a model fails to generalize to unseen domains even after satisfying class-conditional domain invariance. We formalize this observation through a structural causal model and show the importance of modeling within-class variations for generalization. Specifically, classes contain objects that characterize specific causal features, and domains can be interpreted as interventions on these objects that change non-causal features. We highlight an alternative condition: inputs across domains should have the same representation if they are derived from the same object. Based on this objective, we propose matching-based algorithms when base objects are observed (e.g., through data augmentation) and approximate the objective when objects are not observed (MatchDG). Our simple matching-based algorithms are competitive to prior work on out-of-domain accuracy for rotated MNIST, Fashion-MNIST, PACS, and Chest-Xray datasets. Our method MatchDG also recovers ground-truth object matches: on MNIST and Fashion-MNIST, top-10 matches from MatchDG have over 50% overlap with ground-truth matches.' 
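The matching objective sketched below is an illustrative reading of the condition stated in the abstract above (inputs derived from the same base object, but observed in different domains, should share a representation), not the authors' implementation; the feature extractor, classifier head, and weighting are hypothetical stand-ins.

```python
# Hedged sketch of a matching-based domain generalization objective:
# standard classification loss plus a penalty that pulls together the
# representations of matched pairs across domains. Names are illustrative.
import numpy as np

def softmax_cross_entropy(logits, labels):
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def matching_objective(phi, clf, x_a, x_b, y, lam=1.0):
    """phi: shared feature extractor, clf: classifier head; (x_a, x_b) are
    inputs from two domains derived from the same object, y their label."""
    h_a, h_b = phi(x_a), phi(x_b)
    task_loss = softmax_cross_entropy(clf(h_a), y) + softmax_cross_entropy(clf(h_b), y)
    match_loss = np.mean(np.sum((h_a - h_b) ** 2, axis=1))  # pull matched pairs together
    return task_loss + lam * match_loss

# toy usage with fixed random maps standing in for a learned network
rng = np.random.default_rng(0)
V = rng.normal(size=(5, 8))
W = rng.normal(size=(8, 3))
phi = lambda x: np.tanh(x @ V)
clf = lambda h: h @ W
x_a, x_b = rng.normal(size=(4, 5)), rng.normal(size=(4, 5))
y = rng.integers(0, 3, size=4)
print(matching_objective(phi, clf, x_a, x_b, y))
```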
volume: 139 URL: https://proceedings.mlr.press/v139/mahajan21b.html PDF: http://proceedings.mlr.press/v139/mahajan21b/mahajan21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-mahajan21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Divyat family: Mahajan - given: Shruti family: Tople - given: Amit family: Sharma editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7313-7324 id: mahajan21b issued: date-parts: - 2021 - 7 - 1 firstpage: 7313 lastpage: 7324 published: 2021-07-01 00:00:00 +0000 - title: 'Stability and Convergence of Stochastic Gradient Clipping: Beyond Lipschitz Continuity and Smoothness' abstract: 'Stochastic gradient algorithms are often unstable when applied to functions that do not have Lipschitz-continuous and/or bounded gradients. Gradient clipping is a simple and effective technique to stabilize the training process for problems that are prone to the exploding gradient problem. Despite its widespread popularity, the convergence properties of the gradient clipping heuristic are poorly understood, especially for stochastic problems. This paper establishes both qualitative and quantitative convergence results of the clipped stochastic (sub)gradient method (SGD) for non-smooth convex functions with rapidly growing subgradients. Our analyses show that clipping enhances the stability of SGD and that the clipped SGD algorithm enjoys finite convergence rates in many cases. We also study the convergence of a clipped method with momentum, which includes clipped SGD as a special case, for weakly convex problems under standard assumptions. With a novel Lyapunov analysis, we show that the proposed method achieves the best-known rate for the considered class of problems, demonstrating the effectiveness of clipped methods also in this regime. Numerical results confirm our theoretical developments.' volume: 139 URL: https://proceedings.mlr.press/v139/mai21a.html PDF: http://proceedings.mlr.press/v139/mai21a/mai21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-mai21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Vien V. family: Mai - given: Mikael family: Johansson editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7325-7335 id: mai21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7325 lastpage: 7335 published: 2021-07-01 00:00:00 +0000 - title: 'Nonparametric Hamiltonian Monte Carlo' abstract: 'Probabilistic programming uses programs to express generative models whose posterior probability is then computed by built-in inference engines. A challenging goal is to develop general purpose inference algorithms that work out-of-the-box for arbitrary programs in a universal probabilistic programming language (PPL). The densities defined by such programs, which may use stochastic branching and recursion, are (in general) nonparametric, in the sense that they correspond to models on an infinite-dimensional parameter space. However standard inference algorithms, such as the Hamiltonian Monte Carlo (HMC) algorithm, target distributions with a fixed number of parameters. This paper introduces the Nonparametric Hamiltonian Monte Carlo (NP-HMC) algorithm which generalises HMC to nonparametric models. 
Inputs to NP-HMC are a new class of measurable functions called “tree representable”, which serve as a language-independent representation of the density functions of probabilistic programs in a universal PPL. We provide a correctness proof of NP-HMC, and empirically demonstrate significant performance improvements over existing approaches on several nonparametric examples.' volume: 139 URL: https://proceedings.mlr.press/v139/mak21a.html PDF: http://proceedings.mlr.press/v139/mak21a/mak21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-mak21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Carol family: Mak - given: Fabian family: Zaiser - given: Luke family: Ong editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7336-7347 id: mak21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7336 lastpage: 7347 published: 2021-07-01 00:00:00 +0000 - title: 'Exploiting structured data for learning contagious diseases under incomplete testing' abstract: 'One of the ways that machine learning algorithms can help control the spread of an infectious disease is by building models that predict who is likely to become infected, making them good candidates for preemptive interventions. In this work we ask: can we build reliable infection prediction models when the observed data is collected under limited and biased testing that prioritizes testing symptomatic individuals? Our analysis suggests that when the infection is highly transmissible, incomplete testing might be sufficient to achieve good out-of-sample prediction error. Guided by this insight, we develop an algorithm that predicts infections, and show that it outperforms baselines on simulated data. We apply our model to data from a large hospital to predict Clostridioides difficile infections, a communicable disease that is characterized by both symptomatically infected and asymptomatic (i.e., untested) carriers. Using a proxy instead of the unobserved untested-infected state, we show that our model outperforms benchmarks in predicting infections.' volume: 139 URL: https://proceedings.mlr.press/v139/makar21a.html PDF: http://proceedings.mlr.press/v139/makar21a/makar21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-makar21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Maggie family: Makar - given: Lauren family: West - given: David family: Hooper - given: Eric family: Horvitz - given: Erica family: Shenoy - given: John family: Guttag editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7348-7357 id: makar21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7348 lastpage: 7357 published: 2021-07-01 00:00:00 +0000 - title: 'Near-Optimal Algorithms for Explainable k-Medians and k-Means' abstract: 'We consider the problem of explainable $k$-medians and $k$-means introduced by Dasgupta, Frost, Moshkovitz, and Rashtchian (ICML 2020). In this problem, our goal is to find a \emph{threshold decision tree} that partitions data into $k$ clusters and minimizes the $k$-medians or $k$-means objective. The obtained clustering is easy to interpret because every decision node of a threshold tree splits data based on a single feature into two groups.
We propose a new algorithm for this problem which is $\tilde O(\log k)$ competitive with $k$-medians with $\ell_1$ norm and $\tilde O(k)$ competitive with $k$-means. This is an improvement over the previous guarantees of $O(k)$ and $O(k^2)$ by Dasgupta et al. (2020). We also provide a new algorithm which is $O(\log^{\nicefrac{3}{2}} k)$ competitive for $k$-medians with $\ell_2$ norm. Our first algorithm is near-optimal: Dasgupta et al. (2020) showed a lower bound of $\Omega(\log k)$ for $k$-medians; in this work, we prove a lower bound of $\tilde\Omega(k)$ for $k$-means. We also provide a lower bound of $\Omega(\log k)$ for $k$-medians with $\ell_2$ norm.' volume: 139 URL: https://proceedings.mlr.press/v139/makarychev21a.html PDF: http://proceedings.mlr.press/v139/makarychev21a/makarychev21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-makarychev21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Konstantin family: Makarychev - given: Liren family: Shan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7358-7367 id: makarychev21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7358 lastpage: 7367 published: 2021-07-01 00:00:00 +0000 - title: 'KO codes: inventing nonlinear encoding and decoding for reliable wireless communication via deep-learning' abstract: 'Landmark codes underpin reliable physical layer communication, e.g., Reed-Muller, BCH, Convolution, Turbo, LDPC, and Polar codes: each is a linear code and represents a mathematical breakthrough. The impact on humanity is huge: each of these codes has been used in global wireless communication standards (satellite, WiFi, cellular). Reliability of communication over the classical additive white Gaussian noise (AWGN) channel enables benchmarking and ranking of the different codes. In this paper, we construct KO codes, a computationally efficient family of deep-learning driven (encoder, decoder) pairs that outperform the state-of-the-art reliability performance on the standardized AWGN channel. KO codes beat state-of-the-art Reed-Muller and Polar codes, under the low-complexity successive cancellation decoding, in the challenging short-to-medium block length regime on the AWGN channel. We show that the gains of KO codes are primarily due to the nonlinear mapping of information bits directly to transmit symbols (bypassing modulation) and yet possess an efficient, high-performance decoder. The key technical innovation that renders this possible is the design of a novel family of neural architectures inspired by the computation tree of the {\bf K}ronecker {\bf O}peration (KO) central to Reed-Muller and Polar codes. These architectures pave the way for the discovery of a much richer class of hitherto unexplored nonlinear algebraic structures.'
volume: 139 URL: https://proceedings.mlr.press/v139/makkuva21a.html PDF: http://proceedings.mlr.press/v139/makkuva21a/makkuva21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-makkuva21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ashok V family: Makkuva - given: Xiyang family: Liu - given: Mohammad Vahid family: Jamali - given: Hessam family: Mahdavifar - given: Sewoong family: Oh - given: Pramod family: Viswanath editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7368-7378 id: makkuva21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7368 lastpage: 7378 published: 2021-07-01 00:00:00 +0000 - title: 'Quantifying the Benefit of Using Differentiable Learning over Tangent Kernels' abstract: 'We study the relative power of learning with gradient descent on differentiable models, such as neural networks, versus using the corresponding tangent kernels. We show that under certain conditions, gradient descent achieves small error only if a related tangent kernel method achieves a non-trivial advantage over random guessing (a.k.a. weak learning), though this advantage might be very small even when gradient descent can achieve arbitrarily high accuracy. Complementing this, we show that without these conditions, gradient descent can in fact learn with small error even when no kernel method, in particular using the tangent kernel, can achieve a non-trivial advantage over random guessing.' volume: 139 URL: https://proceedings.mlr.press/v139/malach21a.html PDF: http://proceedings.mlr.press/v139/malach21a/malach21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-malach21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Eran family: Malach - given: Pritish family: Kamath - given: Emmanuel family: Abbe - given: Nathan family: Srebro editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7379-7389 id: malach21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7379 lastpage: 7389 published: 2021-07-01 00:00:00 +0000 - title: 'Inverse Constrained Reinforcement Learning' abstract: 'In real world settings, numerous constraints are present which are hard to specify mathematically. However, for the real world deployment of reinforcement learning (RL), it is critical that RL agents are aware of these constraints, so that they can act safely. In this work, we consider the problem of learning constraints from demonstrations of a constraint-abiding agent’s behavior. We experimentally validate our approach and show that our framework can successfully learn the most likely constraints that the agent respects. We further show that these learned constraints are \textit{transferable} to new agents that may have different morphologies and/or reward functions. Previous works in this regard have either mainly been restricted to tabular (discrete) settings, specific types of constraints or assume the environment’s transition dynamics. In contrast, our framework is able to learn arbitrary \textit{Markovian} constraints in high-dimensions in a completely model-free setting. The code is available at: \url{https://github.com/shehryar-malik/icrl}.' 
volume: 139 URL: https://proceedings.mlr.press/v139/malik21a.html PDF: http://proceedings.mlr.press/v139/malik21a/malik21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-malik21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shehryar family: Malik - given: Usman family: Anwar - given: Alireza family: Aghasi - given: Ali family: Ahmed editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7390-7399 id: malik21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7390 lastpage: 7399 published: 2021-07-01 00:00:00 +0000 - title: 'A Sampling-Based Method for Tensor Ring Decomposition' abstract: 'We propose a sampling-based method for computing the tensor ring (TR) decomposition of a data tensor. The method uses leverage score sampled alternating least squares to fit the TR cores in an iterative fashion. By taking advantage of the special structure of TR tensors, we can efficiently estimate the leverage scores and attain a method which has complexity sublinear in the number of input tensor entries. We provide high-probability relative-error guarantees for the sampled least squares problems. We compare our proposal to existing methods in experiments on both synthetic and real data. Our method achieves substantial speedup—sometimes two or three orders of magnitude—over competing methods, while maintaining good accuracy. We also provide an example of how our method can be used for rapid feature extraction.' volume: 139 URL: https://proceedings.mlr.press/v139/malik21b.html PDF: http://proceedings.mlr.press/v139/malik21b/malik21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-malik21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Osman Asif family: Malik - given: Stephen family: Becker editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7400-7411 id: malik21b issued: date-parts: - 2021 - 7 - 1 firstpage: 7400 lastpage: 7411 published: 2021-07-01 00:00:00 +0000 - title: 'Sample Efficient Reinforcement Learning In Continuous State Spaces: A Perspective Beyond Linearity' abstract: 'Reinforcement learning (RL) is empirically successful in complex nonlinear Markov decision processes (MDPs) with continuous state spaces. By contrast, the majority of theoretical RL literature requires the MDP to satisfy some form of linear structure, in order to guarantee sample efficient RL. Such efforts typically assume the transition dynamics or value function of the MDP are described by linear functions of the state features. To resolve this discrepancy between theory and practice, we introduce the Effective Planning Window (EPW) condition, a structural condition on MDPs that makes no linearity assumptions. We demonstrate that the EPW condition permits sample efficient RL, by providing an algorithm which provably solves MDPs satisfying this condition. Our algorithm requires minimal assumptions on the policy class, which can include multi-layer neural networks with nonlinear activation functions. Notably, the EPW condition is directly motivated by popular gaming benchmarks, and we show that many classic Atari games satisfy this condition. 
We additionally show the necessity of conditions like EPW, by demonstrating that simple MDPs with slight nonlinearities cannot be solved sample efficiently.' volume: 139 URL: https://proceedings.mlr.press/v139/malik21c.html PDF: http://proceedings.mlr.press/v139/malik21c/malik21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-malik21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dhruv family: Malik - given: Aldo family: Pacchiano - given: Vishwak family: Srinivasan - given: Yuanzhi family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7412-7422 id: malik21c issued: date-parts: - 2021 - 7 - 1 firstpage: 7412 lastpage: 7422 published: 2021-07-01 00:00:00 +0000 - title: 'Beyond the Pareto Efficient Frontier: Constraint Active Search for Multiobjective Experimental Design' abstract: 'Many problems in engineering design and simulation require balancing competing objectives under the presence of uncertainty. Sample-efficient multiobjective optimization methods focus on the objective function values in metric space and ignore the sampling behavior of the design configurations in parameter space. Consequently, they may provide little actionable insight on how to choose designs in the presence of metric uncertainty or limited precision when implementing a chosen design. We propose a new formulation that accounts for the importance of the parameter space and is thus more suitable for multiobjective design problems; instead of searching for the Pareto-efficient frontier, we solicit the desired minimum performance thresholds on all objectives to define regions of satisfaction. We introduce an active search algorithm called Expected Coverage Improvement (ECI) to efficiently discover the region of satisfaction and simultaneously sample diverse acceptable configurations. We demonstrate our algorithm on several design and simulation domains: mechanical design, additive manufacturing, medical monitoring, and plasma physics.' volume: 139 URL: https://proceedings.mlr.press/v139/malkomes21a.html PDF: http://proceedings.mlr.press/v139/malkomes21a/malkomes21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-malkomes21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Gustavo family: Malkomes - given: Bolong family: Cheng - given: Eric H family: Lee - given: Mike family: Mccourt editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7423-7434 id: malkomes21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7423 lastpage: 7434 published: 2021-07-01 00:00:00 +0000 - title: 'Consistent Nonparametric Methods for Network Assisted Covariate Estimation' abstract: 'Networks with node covariates are commonplace: for example, people in a social network have interests, or product preferences, etc. If we know the covariates for some nodes, can we infer them for the remaining nodes? In this paper we propose a new similarity measure between two nodes based on the patterns of their 2-hop neighborhoods. We show that a simple algorithm (CN-VEC) like nearest neighbor regression with this metric is consistent for a wide range of models when the degree grows faster than $n^{1/3}$ up-to logarithmic factors, where $n$ is the number of nodes. 
For "low-rank" latent variable models, the natural contender will be to estimate the latent variables using SVD and use them for non-parametric regression. While we show consistency of this method under less stringent sparsity conditions, our experimental results suggest that the simple local CN-VEC method either outperforms the global SVD-RBF method, or has comparable performance for low rank models. We also present simulated and real data experiments to show the effectiveness of our algorithms compared to the state of the art.' volume: 139 URL: https://proceedings.mlr.press/v139/mao21a.html PDF: http://proceedings.mlr.press/v139/mao21a/mao21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-mao21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xueyu family: Mao - given: Deepayan family: Chakrabarti - given: Purnamrita family: Sarkar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7435-7446 id: mao21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7435 lastpage: 7446 published: 2021-07-01 00:00:00 +0000 - title: 'Near-Optimal Model-Free Reinforcement Learning in Non-Stationary Episodic MDPs' abstract: 'We consider model-free reinforcement learning (RL) in non-stationary Markov decision processes. Both the reward functions and the state transition functions are allowed to vary arbitrarily over time as long as their cumulative variations do not exceed certain variation budgets. We propose Restarted Q-Learning with Upper Confidence Bounds (RestartQ-UCB), the first model-free algorithm for non-stationary RL, and show that it outperforms existing solutions in terms of dynamic regret. Specifically, RestartQ-UCB with Freedman-type bonus terms achieves a dynamic regret bound of $\widetilde{O}(S^{\frac{1}{3}} A^{\frac{1}{3}} \Delta^{\frac{1}{3}} H T^{\frac{2}{3}})$, where $S$ and $A$ are the numbers of states and actions, respectively, $\Delta>0$ is the variation budget, $H$ is the number of time steps per episode, and $T$ is the total number of time steps. We further show that our algorithm is \emph{nearly optimal} by establishing an information-theoretical lower bound of $\Omega(S^{\frac{1}{3}} A^{\frac{1}{3}} \Delta^{\frac{1}{3}} H^{\frac{2}{3}} T^{\frac{2}{3}})$, the first lower bound in non-stationary RL. Numerical experiments validate the advantages of RestartQ-UCB in terms of both cumulative rewards and computational efficiency. We further demonstrate the power of our results in the context of multi-agent RL, where non-stationarity is a key challenge.' 
volume: 139 URL: https://proceedings.mlr.press/v139/mao21b.html PDF: http://proceedings.mlr.press/v139/mao21b/mao21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-mao21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Weichao family: Mao - given: Kaiqing family: Zhang - given: Ruihao family: Zhu - given: David family: Simchi-Levi - given: Tamer family: Basar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7447-7458 id: mao21b issued: date-parts: - 2021 - 7 - 1 firstpage: 7447 lastpage: 7458 published: 2021-07-01 00:00:00 +0000 - title: 'Adaptive Sampling for Best Policy Identification in Markov Decision Processes' abstract: 'We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model. The objective is to devise a learning algorithm returning the best policy as early as possible. We first derive a problem-specific lower bound of the sample complexity satisfied by any learning algorithm. This lower bound corresponds to an optimal sample allocation that solves a non-convex program, and hence, is hard to exploit in the design of efficient algorithms. We then provide a simple and tight upper bound of the sample complexity lower bound, whose corresponding nearly-optimal sample allocation becomes explicit. The upper bound depends on specific functionals of the MDP such as the sub-optimality gaps and the variance of the next-state value function, and thus really captures the hardness of the MDP. Finally, we devise KLB-TS (KL Ball Track-and-Stop), an algorithm tracking this nearly-optimal allocation, and provide asymptotic guarantees for its sample complexity (both almost surely and in expectation). The advantages of KLB-TS against state-of-the-art algorithms are discussed and illustrated numerically.' volume: 139 URL: https://proceedings.mlr.press/v139/marjani21a.html PDF: http://proceedings.mlr.press/v139/marjani21a/marjani21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-marjani21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Aymen Al family: Marjani - given: Alexandre family: Proutiere editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7459-7468 id: marjani21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7459 lastpage: 7468 published: 2021-07-01 00:00:00 +0000 - title: 'Explanations for Monotonic Classifiers.' abstract: 'In many classification tasks there is a requirement of monotonicity. Concretely, if all else remains constant, increasing (resp. decreasing) the value of one or more features must not decrease (resp. increase) the value of the prediction. Despite comprehensive efforts on learning monotonic classifiers, dedicated approaches for explaining monotonic classifiers are scarce and classifier-specific. This paper describes novel algorithms for the computation of one formal explanation of a (black-box) monotonic classifier. These novel algorithms are polynomial (indeed linear) in the run time complexity of the classifier. Furthermore, the paper presents a practically efficient model-agnostic algorithm for enumerating formal explanations.' 
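To make the linear-time explanation claim above concrete, the following sketch computes one explanation for a monotone binary classifier: by monotonicity, whether a set of fixed features guarantees a positive prediction can be checked with a single classifier call at the point where all free features take their domain minima, and a deletion pass over the features then yields one explanation. The two-class setting, the non-decreasing direction, and all names are illustrative assumptions rather than the paper's exact algorithms.

```python
# Hedged sketch of extracting one explanation for a monotone binary classifier
# with a linear number of classifier calls. Assumes f is non-decreasing in every
# feature and predicts class 1 on the instance x; `lower` holds domain minima.
def one_explanation(f, x, lower):
    keep = list(range(len(x)))            # start with all features fixed to x
    for i in range(len(x)):
        trial = [xi if j in keep and j != i else lo
                 for j, (xi, lo) in enumerate(zip(x, lower))]
        # By monotonicity, if the worst case (free features at their minima)
        # is still classified 1, feature i is not needed in the explanation.
        if f(trial) == 1:
            keep.remove(i)
    return keep

# toy monotone classifier: thresholded non-negative linear score
f = lambda v: int(2 * v[0] + 0.5 * v[1] + v[2] >= 3)
x, lower = [2.0, 4.0, 0.0], [0.0, 0.0, 0.0]
print(one_explanation(f, x, lower))       # indices whose values must stay fixed
```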
volume: 139 URL: https://proceedings.mlr.press/v139/marques-silva21a.html PDF: http://proceedings.mlr.press/v139/marques-silva21a/marques-silva21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-marques-silva21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Joao family: Marques-Silva - given: Thomas family: Gerspacher - given: Martin C family: Cooper - given: Alexey family: Ignatiev - given: Nina family: Narodytska editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7469-7479 id: marques-silva21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7469 lastpage: 7479 published: 2021-07-01 00:00:00 +0000 - title: 'Multi-Agent Training beyond Zero-Sum with Correlated Equilibrium Meta-Solvers' abstract: 'Two-player, constant-sum games are well studied in the literature, but there has been limited progress outside of this setting. We propose Joint Policy-Space Response Oracles (JPSRO), an algorithm for training agents in n-player, general-sum extensive form games, which provably converges to an equilibrium. We further suggest correlated equilibria (CE) as promising meta-solvers, and propose a novel solution concept Maximum Gini Correlated Equilibrium (MGCE), a principled and computationally efficient family of solutions for solving the correlated equilibrium selection problem. We conduct several experiments using CE meta-solvers for JPSRO and demonstrate convergence on n-player, general-sum games.' volume: 139 URL: https://proceedings.mlr.press/v139/marris21a.html PDF: http://proceedings.mlr.press/v139/marris21a/marris21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-marris21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Luke family: Marris - given: Paul family: Muller - given: Marc family: Lanctot - given: Karl family: Tuyls - given: Thore family: Graepel editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7480-7491 id: marris21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7480 lastpage: 7491 published: 2021-07-01 00:00:00 +0000 - title: 'Blind Pareto Fairness and Subgroup Robustness' abstract: 'Much of the work in the field of group fairness addresses disparities between predefined groups based on protected features such as gender, age, and race, which need to be available at train, and often also at test, time. These approaches are static and retrospective, since algorithms designed to protect groups identified a priori cannot anticipate and protect the needs of different at-risk groups in the future. In this work we analyze the space of solutions for worst-case fairness beyond demographics, and propose Blind Pareto Fairness (BPF), a method that leverages no-regret dynamics to recover a fair minimax classifier that reduces worst-case risk of any potential subgroup of sufficient size, and guarantees that the remaining population receives the best possible level of service. BPF addresses fairness beyond demographics, that is, it does not rely on predefined notions of at-risk groups, neither at train nor at test time. Our experimental results show that the proposed framework improves worst-case risk in multiple standard datasets, while simultaneously providing better levels of service for the remaining population. 
The code is available at github.com/natalialmg/BlindParetoFairness' volume: 139 URL: https://proceedings.mlr.press/v139/martinez21a.html PDF: http://proceedings.mlr.press/v139/martinez21a/martinez21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-martinez21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Natalia L family: Martinez - given: Martin A family: Bertran - given: Afroditi family: Papadaki - given: Miguel family: Rodrigues - given: Guillermo family: Sapiro editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7492-7501 id: martinez21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7492 lastpage: 7501 published: 2021-07-01 00:00:00 +0000 - title: 'Necessary and sufficient conditions for causal feature selection in time series with latent common causes' abstract: 'We study the identification of direct and indirect causes on time series with latent variables, and provide a constraint-based causal feature selection method, which we prove is both sound and complete under some graph constraints. Our theory and estimation algorithm require only two conditional independence tests for each observed candidate time series to determine whether or not it is a cause of an observed target time series. Furthermore, our selection of the conditioning set is such that it improves the signal-to-noise ratio. We apply our method on real data, and on a wide range of simulated experiments, which yield very low false positive and relatively low false negative rates.' volume: 139 URL: https://proceedings.mlr.press/v139/mastakouri21a.html PDF: http://proceedings.mlr.press/v139/mastakouri21a/mastakouri21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-mastakouri21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Atalanti A family: Mastakouri - given: Bernhard family: Schölkopf - given: Dominik family: Janzing editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7502-7511 id: mastakouri21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7502 lastpage: 7511 published: 2021-07-01 00:00:00 +0000 - title: 'Proximal Causal Learning with Kernels: Two-Stage Estimation and Moment Restriction' abstract: 'We address the problem of causal effect estimation in the presence of unobserved confounding, but where proxies for the latent confounder(s) are observed. We propose two kernel-based methods for nonlinear causal effect estimation in this setting: (a) a two-stage regression approach, and (b) a maximum moment restriction approach. We focus on the proximal causal learning setting, but our methods can be used to solve a wider class of inverse problems characterised by a Fredholm integral equation. In particular, we provide a unifying view of two-stage and moment restriction approaches for solving this problem in a nonlinear setting. We provide consistency guarantees for each algorithm, and demonstrate that these approaches achieve competitive results on synthetic data and data simulating a real-world task. In particular, our approach outperforms earlier methods that are not suited to leveraging proxy variables.'
volume: 139 URL: https://proceedings.mlr.press/v139/mastouri21a.html PDF: http://proceedings.mlr.press/v139/mastouri21a/mastouri21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-mastouri21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Afsaneh family: Mastouri - given: Yuchen family: Zhu - given: Limor family: Gultchin - given: Anna family: Korba - given: Ricardo family: Silva - given: Matt family: Kusner - given: Arthur family: Gretton - given: Krikamol family: Muandet editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7512-7523 id: mastouri21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7512 lastpage: 7523 published: 2021-07-01 00:00:00 +0000 - title: 'Robust Unsupervised Learning via L-statistic Minimization' abstract: 'Designing learning algorithms that are resistant to perturbations of the underlying data distribution is a problem of wide practical and theoretical importance. We present a general approach to this problem focusing on unsupervised learning. The key assumption is that the perturbing distribution is characterized by larger losses relative to a given class of admissible models. This is exploited by a general descent algorithm which minimizes an $L$-statistic criterion over the model class, weighting small losses more. Our analysis characterizes the robustness of the method in terms of bounds on the reconstruction error relative to the underlying unperturbed distribution. As a byproduct, we prove uniform convergence bounds with respect to the proposed criterion for several popular models in unsupervised learning, a result which may be of independent interest. Numerical experiments with \textsc{kmeans} clustering and principal subspace analysis demonstrate the effectiveness of our approach.' volume: 139 URL: https://proceedings.mlr.press/v139/maurer21a.html PDF: http://proceedings.mlr.press/v139/maurer21a/maurer21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-maurer21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Andreas family: Maurer - given: Daniela Angela family: Parletta - given: Andrea family: Paudice - given: Massimiliano family: Pontil editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7524-7533 id: maurer21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7524 lastpage: 7533 published: 2021-07-01 00:00:00 +0000 - title: 'Adversarial Multi Class Learning under Weak Supervision with Performance Guarantees' abstract: 'We develop a rigorous approach for using a set of arbitrarily correlated weak supervision sources in order to solve a multiclass classification task when only a very small set of labeled data is available. Our learning algorithm provably converges to a model that has minimum empirical risk with respect to an adversarial choice over feasible labelings for a set of unlabeled data, where the feasibility of a labeling is computed through constraints defined by rigorously estimated statistics of the weak supervision sources. We show theoretical guarantees for this approach that depend on the information provided by the weak supervision sources. Notably, this method does not require the weak supervision sources to have the same labeling space as the multiclass classification task. 
We demonstrate the effectiveness of our approach with experiments on various image classification tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/mazzetto21a.html PDF: http://proceedings.mlr.press/v139/mazzetto21a/mazzetto21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-mazzetto21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Alessio family: Mazzetto - given: Cyrus family: Cousins - given: Dylan family: Sam - given: Stephen H family: Bach - given: Eli family: Upfal editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7534-7543 id: mazzetto21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7534 lastpage: 7543 published: 2021-07-01 00:00:00 +0000 - title: 'Fundamental Tradeoffs in Distributionally Adversarial Training' abstract: 'Adversarial training is among the most effective techniques to improve robustness of models against adversarial perturbations. However, the full effect of this approach on models is not well understood. For example, while adversarial training can reduce the adversarial risk (prediction error against an adversary), it sometimes increases standard risk (generalization error when there is no adversary). In this paper, we focus on the \emph{distribution perturbing} adversary framework wherein the adversary can change the test distribution within a neighborhood of the training data distribution. The neighborhood is defined via Wasserstein distance between distributions and the radius of the neighborhood is a measure of the adversary’s manipulative power. We study the tradeoff between standard risk and adversarial risk and derive the Pareto-optimal tradeoff, achievable over specific classes of models, in the infinite data limit with the feature dimension kept fixed. We consider three learning settings: 1) Regression with the class of linear models; 2) Binary classification under the Gaussian mixtures data model, with the class of linear classifiers; 3) Regression with the class of random features model (which can be equivalently represented as a two-layer neural network with random first-layer weights). We show that a tradeoff between standard and adversarial risk is manifested in all three settings. We further characterize the Pareto-optimal tradeoff curves and discuss how a variety of factors, such as feature correlations, the adversary’s power, or the width of the two-layer neural network, would affect this tradeoff.' volume: 139 URL: https://proceedings.mlr.press/v139/mehrabi21a.html PDF: http://proceedings.mlr.press/v139/mehrabi21a/mehrabi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-mehrabi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mohammad family: Mehrabi - given: Adel family: Javanmard - given: Ryan A. family: Rossi - given: Anup family: Rao - given: Tung family: Mai editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7544-7554 id: mehrabi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7544 lastpage: 7554 published: 2021-07-01 00:00:00 +0000 - title: 'Leveraging Non-uniformity in First-order Non-convex Optimization' abstract: 'Classical global convergence results for first-order methods rely on uniform smoothness and the Ł{}ojasiewicz inequality.
Motivated by properties of objective functions that arise in machine learning, we propose a non-uniform refinement of these notions, leading to \emph{Non-uniform Smoothness} (NS) and \emph{Non-uniform Ł{}ojasiewicz inequality} (NŁ{}). The new definitions inspire new geometry-aware first-order methods that are able to converge to global optimality faster than the classical $\Omega(1/t^2)$ lower bounds. To illustrate the power of these geometry-aware methods and their corresponding non-uniform analysis, we consider two important problems in machine learning: policy gradient optimization in reinforcement learning (PG), and generalized linear model training in supervised learning (GLM). For PG, we find that normalizing the gradient ascent method can accelerate convergence to $O(e^{- c \cdot t})$ (where $c > 0$) while incurring less overhead than existing algorithms. For GLM, we show that geometry-aware normalized gradient descent can also achieve a linear convergence rate, which significantly improves the best known results. We additionally show that the proposed geometry-aware gradient descent methods escape landscape plateaus faster than standard gradient descent. Experimental results are used to illustrate and complement the theoretical findings.' volume: 139 URL: https://proceedings.mlr.press/v139/mei21a.html PDF: http://proceedings.mlr.press/v139/mei21a/mei21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-mei21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jincheng family: Mei - given: Yue family: Gao - given: Bo family: Dai - given: Csaba family: Szepesvari - given: Dale family: Schuurmans editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7555-7564 id: mei21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7555 lastpage: 7564 published: 2021-07-01 00:00:00 +0000 - title: 'Controlling Graph Dynamics with Reinforcement Learning and Graph Neural Networks' abstract: 'We consider the problem of controlling a partially-observed dynamic process on a graph by a limited number of interventions. This problem naturally arises in contexts such as scheduling virus tests to curb an epidemic; targeted marketing in order to promote a product; and manually inspecting posts to detect fake news spreading on social networks. We formulate this setup as a sequential decision problem over a temporal graph process. In face of an exponential state space, combinatorial action space and partial observability, we design a novel tractable scheme to control dynamical processes on temporal graphs. We successfully apply our approach to two popular problems that fall into our framework: prioritizing which nodes should be tested in order to curb the spread of an epidemic, and influence maximization on a graph.' 
volume: 139 URL: https://proceedings.mlr.press/v139/meirom21a.html PDF: http://proceedings.mlr.press/v139/meirom21a/meirom21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-meirom21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Eli family: Meirom - given: Haggai family: Maron - given: Shie family: Mannor - given: Gal family: Chechik editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7565-7577 id: meirom21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7565 lastpage: 7577 published: 2021-07-01 00:00:00 +0000 - title: 'A theory of high dimensional regression with arbitrary correlations between input features and target functions: sample complexity, multiple descent curves and a hierarchy of phase transitions' abstract: 'The performance of neural networks depends on precise relationships between four distinct ingredients: the architecture, the loss function, the statistical structure of inputs, and the ground truth target function. Much theoretical work has focused on understanding the role of the first two ingredients under highly simplified models of random uncorrelated data and target functions. In contrast, performance likely relies on a conspiracy between the statistical structure of the input distribution and the structure of the function to be learned. To understand this better we revisit ridge regression in high dimensions, which corresponds to an exceedingly simple architecture and loss function, but we analyze its performance under arbitrary correlations between input features and the target function. We find a rich mathematical structure that includes: (1) a dramatic reduction in sample complexity when the target function aligns with data anisotropy; (2) the existence of multiple descent curves; (3) a sequence of phase transitions in the performance, loss landscape, and optimal regularization as a function of the amount of data that explains the first two effects.' volume: 139 URL: https://proceedings.mlr.press/v139/mel21a.html PDF: http://proceedings.mlr.press/v139/mel21a/mel21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-mel21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Gabriel family: Mel - given: Surya family: Ganguli editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7578-7587 id: mel21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7578 lastpage: 7587 published: 2021-07-01 00:00:00 +0000 - title: 'Neural Architecture Search without Training' abstract: 'The time and effort involved in hand-designing deep neural networks is immense. This has prompted the development of Neural Architecture Search (NAS) techniques to automate this design. However, NAS algorithms tend to be slow and expensive; they need to train vast numbers of candidate networks to inform the search process. This could be alleviated if we could partially predict a network’s trained accuracy from its initial state. In this work, we examine the overlap of activations between datapoints in untrained networks and motivate how this can give a measure which is usefully indicative of a network’s trained performance. 
We incorporate this measure into a simple algorithm that allows us to search for powerful networks without any training in a matter of seconds on a single GPU, and verify its effectiveness on NAS-Bench-101, NAS-Bench-201, NATS-Bench, and Network Design Spaces. Our approach can be readily combined with more expensive search methods; we examine a simple adaptation of regularised evolutionary search. Code for reproducing our experiments is available at https://github.com/BayesWatch/nas-without-training.' volume: 139 URL: https://proceedings.mlr.press/v139/mellor21a.html PDF: http://proceedings.mlr.press/v139/mellor21a/mellor21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-mellor21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Joe family: Mellor - given: Jack family: Turner - given: Amos family: Storkey - given: Elliot J family: Crowley editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7588-7598 id: mellor21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7588 lastpage: 7598 published: 2021-07-01 00:00:00 +0000 - title: 'Fast active learning for pure exploration in reinforcement learning' abstract: 'Realistic environments often provide agents with very limited feedback. When the environment is initially unknown, the feedback, in the beginning, can be completely absent, and the agents may first choose to devote all their effort to \emph{exploring efficiently}. Exploration remains a challenge: it has been addressed with many hand-tuned heuristics of varying generality on one side, and a few theoretically-backed exploration strategies on the other. Many of them rely on \emph{intrinsic motivation} and, in particular, \emph{exploration bonuses}. A common choice is to use a $1/\sqrt{n}$ bonus, where $n$ is the number of times a particular state-action pair has been visited. We show that, surprisingly, for the pure-exploration objective of \emph{reward-free exploration}, bonuses that scale with $1/n$ bring faster learning rates, improving the known upper bounds with respect to the dependence on the horizon $H$. Furthermore, we show that with an improved analysis of the stopping time, we can improve the sample complexity by a factor of $H$ in the \emph{best-policy identification} setting, which is another pure-exploration objective, where the environment provides rewards but the agent is not penalized for its behavior during the exploration phase.' 
volume: 139 URL: https://proceedings.mlr.press/v139/menard21a.html PDF: http://proceedings.mlr.press/v139/menard21a/menard21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-menard21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Pierre family: Menard - given: Omar Darwiche family: Domingues - given: Anders family: Jonsson - given: Emilie family: Kaufmann - given: Edouard family: Leurent - given: Michal family: Valko editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7599-7608 id: menard21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7599 lastpage: 7608 published: 2021-07-01 00:00:00 +0000 - title: 'UCB Momentum Q-learning: Correcting the bias without forgetting' abstract: 'We propose UCBMQ, Upper Confidence Bound Momentum Q-learning, a new algorithm for reinforcement learning in tabular, possibly stage-dependent, episodic Markov decision processes. UCBMQ is based on Q-learning, to which we add a momentum term, and relies on the principle of optimism in the face of uncertainty to deal with exploration. The new technical ingredient of UCBMQ is the use of momentum to correct the bias that Q-learning suffers from while, \emph{at the same time}, limiting the impact it has on the second-order term of the regret. For UCBMQ, we are able to guarantee a regret of at most $\tilde{O}(\sqrt{H^3SAT}+ H^4 S A)$, where $H$ is the length of an episode, $S$ the number of states, $A$ the number of actions, and $T$ the number of episodes, ignoring terms in poly$\log(SAHT)$. Notably, UCBMQ is the first algorithm that simultaneously matches the lower bound of $\Omega(\sqrt{H^3SAT})$ for large enough $T$ and has a second-order term (with respect to $T$) that scales \emph{only linearly} with the number of states $S$.' volume: 139 URL: https://proceedings.mlr.press/v139/menard21b.html PDF: http://proceedings.mlr.press/v139/menard21b/menard21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-menard21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Pierre family: Menard - given: Omar Darwiche family: Domingues - given: Xuedong family: Shang - given: Michal family: Valko editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7609-7618 id: menard21b issued: date-parts: - 2021 - 7 - 1 firstpage: 7609 lastpage: 7618 published: 2021-07-01 00:00:00 +0000 - title: 'An Integer Linear Programming Framework for Mining Constraints from Data' abstract: 'Structured output prediction problems (e.g., sequential tagging, hierarchical multi-class classification) often involve constraints over the output space. These constraints interact with the learned models to filter infeasible solutions and facilitate building an accountable system. However, although constraints are useful, they are often based on hand-crafted rules. This raises a question – can we mine constraints and rules from data based on a learning algorithm? In this paper, we present a general framework for mining constraints from data. In particular, we consider inference in structured output prediction as an integer linear programming (ILP) problem. 
Then, given the coefficients of the objective function and the corresponding solution, we mine the underlying constraints by estimating the outer and inner polytopes of the feasible set. We verify the proposed constraint mining algorithm in various synthetic and real-world applications and demonstrate that the proposed approach successfully identifies the feasible set at scale. In particular, we show that our approach can learn to solve 9x9 Sudoku puzzles and minimal spanning tree problems from examples without providing the underlying rules. Our algorithm can also integrate with a neural network model to learn the hierarchical label structure of a multi-label classification task. Besides, we provide theoretical analysis about the tightness of the polytopes and the reliability of the mined constraints.' volume: 139 URL: https://proceedings.mlr.press/v139/meng21a.html PDF: http://proceedings.mlr.press/v139/meng21a/meng21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-meng21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tao family: Meng - given: Kai-Wei family: Chang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7619-7631 id: meng21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7619 lastpage: 7631 published: 2021-07-01 00:00:00 +0000 - title: 'A statistical perspective on distillation' abstract: 'Knowledge distillation is a technique for improving a “student” model by replacing its one-hot training labels with a label distribution obtained from a “teacher” model. Despite its broad success, several basic questions — e.g., Why does distillation help? Why do more accurate teachers not necessarily distill better? — have received limited formal study. In this paper, we present a statistical perspective on distillation which provides an answer to these questions. Our core observation is that a “Bayes teacher” providing the true class-probabilities can lower the variance of the student objective, and thus improve performance. We then establish a bias-variance tradeoff that quantifies the value of teachers that approximate the Bayes class-probabilities. This provides a formal criterion as to what constitutes a “good” teacher, namely, the quality of its probability estimates. Finally, we illustrate how our statistical perspective facilitates novel applications of distillation to bipartite ranking and multiclass retrieval.' volume: 139 URL: https://proceedings.mlr.press/v139/menon21a.html PDF: http://proceedings.mlr.press/v139/menon21a/menon21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-menon21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Aditya K family: Menon - given: Ankit Singh family: Rawat - given: Sashank family: Reddi - given: Seungyeon family: Kim - given: Sanjiv family: Kumar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7632-7642 id: menon21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7632 lastpage: 7642 published: 2021-07-01 00:00:00 +0000 - title: 'Learn2Hop: Learned Optimization on Rough Landscapes' abstract: 'Optimization of non-convex loss surfaces containing many local minima remains a critical problem in a variety of domains, including operations research, informatics, and material design. 
Yet, current techniques either require extremely high iteration counts or a large number of random restarts for good performance. In this work, we propose adapting recent developments in meta-learning to these many-minima problems by learning the optimization algorithm for various loss landscapes. We focus on problems from atomic structural optimization—finding low energy configurations of many-atom systems—including widely studied models such as bimetallic clusters and disordered silicon. We find that our optimizer learns a hopping behavior which enables efficient exploration and improves the rate of low energy minima discovery. Finally, our learned optimizers show promising generalization with efficiency gains on never before seen tasks (e.g. new elements or compositions). Code is available at https://learn2hop.page.link/github.' volume: 139 URL: https://proceedings.mlr.press/v139/merchant21a.html PDF: http://proceedings.mlr.press/v139/merchant21a/merchant21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-merchant21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Amil family: Merchant - given: Luke family: Metz - given: Samuel S family: Schoenholz - given: Ekin D family: Cubuk editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7643-7653 id: merchant21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7643 lastpage: 7653 published: 2021-07-01 00:00:00 +0000 - title: 'Counterfactual Credit Assignment in Model-Free Reinforcement Learning' abstract: 'Credit assignment in reinforcement learning is the problem of measuring an action’s influence on future rewards. In particular, this requires separating skill from luck, i.e. disentangling the effect of an action on rewards from that of external factors and subsequent actions. To achieve this, we adapt the notion of counterfactuals from causality theory to a model-free RL setup. The key idea is to condition value functions on future events, by learning to extract relevant information from a trajectory. We formulate a family of policy gradient algorithms that use these future-conditional value functions as baselines or critics, and show that they are provably low variance. To avoid the potential bias from conditioning on future information, we constrain the hindsight information to not contain information about the agent’s actions. We demonstrate the efficacy and validity of our algorithm on a number of illustrative and challenging problems.' 
volume: 139 URL: https://proceedings.mlr.press/v139/mesnard21a.html PDF: http://proceedings.mlr.press/v139/mesnard21a/mesnard21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-mesnard21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Thomas family: Mesnard - given: Theophane family: Weber - given: Fabio family: Viola - given: Shantanu family: Thakoor - given: Alaa family: Saade - given: Anna family: Harutyunyan - given: Will family: Dabney - given: Thomas S family: Stepleton - given: Nicolas family: Heess - given: Arthur family: Guez - given: Eric family: Moulines - given: Marcus family: Hutter - given: Lars family: Buesing - given: Remi family: Munos editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7654-7664 id: mesnard21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7654 lastpage: 7664 published: 2021-07-01 00:00:00 +0000 - title: 'Provably Efficient Learning of Transferable Rewards' abstract: 'The reward function is widely accepted as a succinct, robust, and transferable representation of a task. Typical approaches, which lie at the basis of Inverse Reinforcement Learning (IRL), leverage expert demonstrations to recover a reward function. In this paper, we study the theoretical properties of the class of reward functions that are compatible with the expert’s behavior. We analyze how the limited knowledge of the expert’s policy and of the environment affects the reward reconstruction phase. Then, we examine how the error propagates to the learned policy’s performance when transferring the reward function to a different environment. We employ these findings to devise a provably efficient active sampling approach, aware of the need for transferring the reward function, that can be paired with a large variety of IRL algorithms. Finally, we provide numerical simulations on benchmark environments.' volume: 139 URL: https://proceedings.mlr.press/v139/metelli21a.html PDF: http://proceedings.mlr.press/v139/metelli21a/metelli21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-metelli21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Alberto Maria family: Metelli - given: Giorgia family: Ramponi - given: Alessandro family: Concetti - given: Marcello family: Restelli editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7665-7676 id: metelli21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7665 lastpage: 7676 published: 2021-07-01 00:00:00 +0000 - title: 'Mixed Nash Equilibria in the Adversarial Examples Game' abstract: 'This paper tackles the problem of adversarial examples from a game-theoretic point of view. We study the open question of the existence of mixed Nash equilibria in the zero-sum game formed by the attacker and the classifier. While previous works usually allow only one player to use randomized strategies, we show the necessity of considering randomization for both the classifier and the attacker. We demonstrate that this game has no duality gap, meaning that it always admits approximate Nash equilibria. We also provide the first optimization algorithms to learn a mixture of classifiers that approximately realizes the value of this game, \emph{i.e.} procedures to build an optimally robust randomized classifier.' 
volume: 139 URL: https://proceedings.mlr.press/v139/meunier21a.html PDF: http://proceedings.mlr.press/v139/meunier21a/meunier21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-meunier21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Laurent family: Meunier - given: Meyer family: Scetbon - given: Rafael B family: Pinot - given: Jamal family: Atif - given: Yann family: Chevaleyre editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7677-7687 id: meunier21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7677 lastpage: 7687 published: 2021-07-01 00:00:00 +0000 - title: 'Learning in Nonzero-Sum Stochastic Games with Potentials' abstract: 'Multi-agent reinforcement learning (MARL) has become effective in tackling discrete cooperative game scenarios. However, MARL has yet to penetrate settings beyond those modelled by team and zero-sum games, confining it to a small subset of multi-agent systems. In this paper, we introduce a new generation of MARL learners that can handle \textit{nonzero-sum} payoff structures and continuous settings. In particular, we study the MARL problem in a class of games known as stochastic potential games (SPGs) with continuous state-action spaces. Unlike cooperative games, in which all agents share a common reward, SPGs are capable of modelling real-world scenarios where agents seek to fulfil their individual goals. We prove theoretically that our learning method enables independent agents to learn Nash equilibrium strategies in \textit{polynomial time}. We demonstrate that our framework tackles previously unsolvable tasks such as \textit{Coordination Navigation} and \textit{large selfish routing games}, and that it outperforms state-of-the-art MARL baselines such as MADDPG and COMIX in such scenarios.' volume: 139 URL: https://proceedings.mlr.press/v139/mguni21a.html PDF: http://proceedings.mlr.press/v139/mguni21a/mguni21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-mguni21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: David H family: Mguni - given: Yutong family: Wu - given: Yali family: Du - given: Yaodong family: Yang - given: Ziyi family: Wang - given: Minne family: Li - given: Ying family: Wen - given: Joel family: Jennings - given: Jun family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7688-7699 id: mguni21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7688 lastpage: 7699 published: 2021-07-01 00:00:00 +0000 - title: 'EfficientTTS: An Efficient and High-Quality Text-to-Speech Architecture' abstract: 'In this work, we address the Text-to-Speech (TTS) task by proposing a non-autoregressive architecture called EfficientTTS. Unlike the dominant non-autoregressive TTS models, which require external aligners for training, EfficientTTS optimizes all its parameters with a stable, end-to-end training procedure, allowing for synthesizing high-quality speech in a fast and efficient manner. EfficientTTS is motivated by a new monotonic alignment modeling approach, which imposes monotonic constraints on the sequence alignment with almost no increase in computation. 
By combining EfficientTTS with different feed-forward network structures, we develop a family of TTS models, including both text-to-melspectrogram and text-to-waveform networks. We experimentally show that the proposed models significantly outperform counterpart models such as Tacotron 2 and Glow-TTS in terms of speech quality, training efficiency, and synthesis speed, while still producing speech with strong robustness and great diversity. In addition, we demonstrate that the proposed approach can be easily extended to autoregressive models such as Tacotron 2.' volume: 139 URL: https://proceedings.mlr.press/v139/miao21a.html PDF: http://proceedings.mlr.press/v139/miao21a/miao21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-miao21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chenfeng family: Miao - given: Liang family: Shuang - given: Zhengchen family: Liu - given: Chen family: Minchuan - given: Jun family: Ma - given: Shaojun family: Wang - given: Jing family: Xiao editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7700-7709 id: miao21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7700 lastpage: 7709 published: 2021-07-01 00:00:00 +0000 - title: 'Outside the Echo Chamber: Optimizing the Performative Risk' abstract: 'In performative prediction, predictions guide decision-making and hence can influence the distribution of future data. To date, work on performative prediction has focused on finding performatively stable models, which are the fixed points of repeated retraining. However, stable solutions can be far from optimal when evaluated in terms of the performative risk, the loss experienced by the decision maker when deploying a model. In this paper, we shift attention beyond performative stability and focus on optimizing the performative risk directly. We identify a natural set of properties of the loss function and model-induced distribution shift under which the performative risk is convex, a property which does not follow from convexity of the loss alone. Furthermore, we develop algorithms that leverage our structural assumptions to optimize the performative risk with better sample efficiency than generic methods for derivative-free convex optimization.' volume: 139 URL: https://proceedings.mlr.press/v139/miller21a.html PDF: http://proceedings.mlr.press/v139/miller21a/miller21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-miller21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: John P family: Miller - given: Juan C family: Perdomo - given: Tijana family: Zrnic editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7710-7720 id: miller21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7710 lastpage: 7720 published: 2021-07-01 00:00:00 +0000 - title: 'Accuracy on the Line: on the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization' abstract: 'For machine learning systems to be reliable, we must understand their performance in unseen, out-of-distribution environments. In this paper, we empirically show that out-of-distribution performance is strongly correlated with in-distribution performance for a wide range of models and distribution shifts. 
Specifically, we demonstrate strong correlations between in-distribution and out-of-distribution performance on variants of CIFAR-10 & ImageNet, a synthetic pose estimation task derived from YCB objects, FMoW-WILDS satellite imagery classification, and wildlife classification in iWildCam-WILDS. The correlation holds across model architectures, hyperparameters, training set size, and training duration, and is more precise than what is expected from existing domain adaptation theory. To complete the picture, we also investigate cases where the correlation is weaker, for instance some synthetic distribution shifts from CIFAR-10-C and the tissue classification dataset Camelyon17-WILDS. Finally, we provide a candidate theory based on a Gaussian data model that shows how changes in the data covariance arising from distribution shift can affect the observed correlations.' volume: 139 URL: https://proceedings.mlr.press/v139/miller21b.html PDF: http://proceedings.mlr.press/v139/miller21b/miller21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-miller21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: John P family: Miller - given: Rohan family: Taori - given: Aditi family: Raghunathan - given: Shiori family: Sagawa - given: Pang Wei family: Koh - given: Vaishaal family: Shankar - given: Percy family: Liang - given: Yair family: Carmon - given: Ludwig family: Schmidt editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7721-7735 id: miller21b issued: date-parts: - 2021 - 7 - 1 firstpage: 7721 lastpage: 7735 published: 2021-07-01 00:00:00 +0000 - title: 'Signatured Deep Fictitious Play for Mean Field Games with Common Noise' abstract: 'Existing deep learning methods for solving mean-field games (MFGs) with common noise fix the sampled common noise paths and then solve the corresponding MFGs. This leads to a nested-loop structure with millions of simulations of common noise paths in order to produce accurate solutions, which results in prohibitive computational cost and limits the applications to a large extent. In this paper, based on rough path theory, we propose a novel single-loop algorithm, named signatured deep fictitious play (Sig-DFP), by which we can work with the unfixed common noise setup to avoid the nested-loop structure and reduce the computational complexity significantly. The proposed algorithm can accurately capture the effect of common uncertainty changes on mean-field equilibria without further training of neural networks, as previously needed in existing machine learning algorithms. The efficiency is supported by three applications, including linear-quadratic MFGs, a mean-field portfolio game, and a mean-field game of optimal consumption and investment. Overall, we provide a new point of view from rough path theory to solve MFGs with common noise with significantly improved efficiency and an extensive range of applications. In addition, we report the first deep learning work to deal with extended MFGs (a mean-field interaction via both the states and controls) with common noise.' 
volume: 139 URL: https://proceedings.mlr.press/v139/min21a.html PDF: http://proceedings.mlr.press/v139/min21a/min21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-min21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ming family: Min - given: Ruimeng family: Hu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7736-7747 id: min21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7736 lastpage: 7747 published: 2021-07-01 00:00:00 +0000 - title: 'Meta-StyleSpeech : Multi-Speaker Adaptive Text-to-Speech Generation' abstract: 'With rapid progress in neural text-to-speech (TTS) models, personalized speech generation is now in high demand for many applications. For practical applicability, a TTS model should generate high-quality speech with only a few audio samples from the given speaker, that are also short in length. However, existing methods either require to fine-tune the model or achieve low adaptation quality without fine-tuning. In this work, we propose StyleSpeech, a new TTS model which not only synthesizes high-quality speech but also effectively adapts to new speakers. Specifically, we propose Style-Adaptive Layer Normalization (SALN) which aligns gain and bias of the text input according to the style extracted from a reference speech audio. With SALN, our model effectively synthesizes speech in the style of the target speaker even from a single speech audio. Furthermore, to enhance StyleSpeech’s adaptation to speech from new speakers, we extend it to Meta-StyleSpeech by introducing two discriminators trained with style prototypes, and performing episodic training. The experimental results show that our models generate high-quality speech which accurately follows the speaker’s voice with single short-duration (1-3 sec) speech audio, significantly outperforming baselines.' volume: 139 URL: https://proceedings.mlr.press/v139/min21b.html PDF: http://proceedings.mlr.press/v139/min21b/min21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-min21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dongchan family: Min - given: Dong Bok family: Lee - given: Eunho family: Yang - given: Sung Ju family: Hwang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7748-7759 id: min21b issued: date-parts: - 2021 - 7 - 1 firstpage: 7748 lastpage: 7759 published: 2021-07-01 00:00:00 +0000 - title: 'On the Explicit Role of Initialization on the Convergence and Implicit Bias of Overparametrized Linear Networks' abstract: 'Neural networks trained via gradient descent with random initialization and without any regularization enjoy good generalization performance in practice despite being highly overparametrized. A promising direction to explain this phenomenon is to study how initialization and overparametrization affect convergence and implicit bias of training algorithms. In this paper, we present a novel analysis of single-hidden-layer linear networks trained under gradient flow, which connects initialization, optimization, and overparametrization. Firstly, we show that the squared loss converges exponentially to its optimum at a rate that depends on the level of imbalance of the initialization. 
Secondly, we show that proper initialization constrains the dynamics of the network parameters to lie within an invariant set. In turn, minimizing the loss over this set leads to the min-norm solution. Finally, we show that large hidden layer width, together with (properly scaled) random initialization, ensures proximity to such an invariant set during training, allowing us to derive a novel non-asymptotic upper-bound on the distance between the trained network and the min-norm solution.' volume: 139 URL: https://proceedings.mlr.press/v139/min21c.html PDF: http://proceedings.mlr.press/v139/min21c/min21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-min21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hancheng family: Min - given: Salma family: Tarmoun - given: Rene family: Vidal - given: Enrique family: Mallada editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7760-7768 id: min21c issued: date-parts: - 2021 - 7 - 1 firstpage: 7760 lastpage: 7768 published: 2021-07-01 00:00:00 +0000 - title: 'An Identifiable Double VAE For Disentangled Representations' abstract: 'A large part of the literature on learning disentangled representations focuses on variational autoencoders (VAEs). Recent developments demonstrate that disentanglement cannot be obtained in a fully unsupervised setting without inductive biases on models and data. However, Khemakhem et al., AISTATS, 2020 suggest that employing a particular form of factorized prior, conditionally dependent on auxiliary variables complementing input observations, can be one such bias, resulting in an identifiable model with guarantees on disentanglement. Working along this line, we propose a novel VAE-based generative model with theoretical guarantees on identifiability. We obtain our conditional prior over the latents by learning an optimal representation, which imposes an additional strength on their regularization. We also extend our method to semi-supervised settings. Experimental results indicate superior performance with respect to state-of-the-art approaches, according to several established metrics proposed in the literature on disentanglement.' volume: 139 URL: https://proceedings.mlr.press/v139/mita21a.html PDF: http://proceedings.mlr.press/v139/mita21a/mita21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-mita21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Graziano family: Mita - given: Maurizio family: Filippone - given: Pietro family: Michiardi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7769-7779 id: mita21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7769 lastpage: 7779 published: 2021-07-01 00:00:00 +0000 - title: 'Offline Meta-Reinforcement Learning with Advantage Weighting' abstract: 'This paper introduces the offline meta-reinforcement learning (offline meta-RL) problem setting and proposes an algorithm that performs well in this setting. Offline meta-RL is analogous to the widely successful supervised learning strategy of pre-training a model on a large batch of fixed, pre-collected data (possibly from various tasks) and fine-tuning the model to a new task with relatively little data. 
That is, in offline meta-RL, we meta-train on fixed, pre-collected data from several tasks and adapt to a new task with a very small amount (less than 5 trajectories) of data from the new task. By nature of being offline, algorithms for offline meta-RL can utilize the largest possible pool of training data available and eliminate potentially unsafe or costly data collection during meta-training. This setting inherits the challenges of offline RL, but it differs significantly because offline RL does not generally consider a) transfer to new tasks or b) limited data from the test task, both of which we face in offline meta-RL. Targeting the offline meta-RL setting, we propose Meta-Actor Critic with Advantage Weighting (MACAW). MACAW is an optimization-based meta-learning algorithm that uses simple, supervised regression objectives for both the inner and outer loop of meta-training. On offline variants of common meta-RL benchmarks, we empirically find that this approach enables fully offline meta-reinforcement learning and achieves notable gains over prior methods.' volume: 139 URL: https://proceedings.mlr.press/v139/mitchell21a.html PDF: http://proceedings.mlr.press/v139/mitchell21a/mitchell21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-mitchell21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Eric family: Mitchell - given: Rafael family: Rafailov - given: Xue Bin family: Peng - given: Sergey family: Levine - given: Chelsea family: Finn editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7780-7791 id: mitchell21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7780 lastpage: 7791 published: 2021-07-01 00:00:00 +0000 - title: 'The Power of Log-Sum-Exp: Sequential Density Ratio Matrix Estimation for Speed-Accuracy Optimization' abstract: 'We propose a model for multiclass classification of time series to make a prediction as early and as accurate as possible. The matrix sequential probability ratio test (MSPRT) is known to be asymptotically optimal for this setting, but contains a critical assumption that hinders broad real-world applications; the MSPRT requires the underlying probability density. To address this problem, we propose to solve density ratio matrix estimation (DRME), a novel type of density ratio estimation that consists of estimating matrices of multiple density ratios with constraints and thus is more challenging than the conventional density ratio estimation. We propose a log-sum-exp-type loss function (LSEL) for solving DRME and prove the following: (i) the LSEL provides the true density ratio matrix as the sample size of the training set increases (consistency); (ii) it assigns larger gradients to harder classes (hard class weighting effect); and (iii) it provides discriminative scores even on class-imbalanced datasets (guess-aversion). Our overall architecture for early classification, MSPRT-TANDEM, statistically significantly outperforms baseline models on four datasets including action recognition, especially in the early stage of sequential observations. Our code and datasets are publicly available.' 
volume: 139 URL: https://proceedings.mlr.press/v139/miyagawa21a.html PDF: http://proceedings.mlr.press/v139/miyagawa21a/miyagawa21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-miyagawa21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Taiki family: Miyagawa - given: Akinori F family: Ebihara editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7792-7804 id: miyagawa21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7792 lastpage: 7804 published: 2021-07-01 00:00:00 +0000 - title: 'PODS: Policy Optimization via Differentiable Simulation' abstract: 'Current reinforcement learning (RL) methods use simulation models as simple black-box oracles. In this paper, with the goal of improving the performance exhibited by RL algorithms, we explore a systematic way of leveraging the additional information provided by an emerging class of differentiable simulators. Building on concepts established by Deterministic Policy Gradients (DPG) methods, the neural network policies learned with our approach represent deterministic actions. In a departure from standard methodologies, however, learning these policies does not hinge on approximations of the value function that must be learned concurrently in an actor-critic fashion. Instead, we exploit differentiable simulators to directly compute the analytic gradient of a policy’s value function with respect to the actions it outputs. This, in turn, allows us to efficiently perform locally optimal policy improvement iterations. Compared against other state-of-the-art RL methods, we show that with minimal hyper-parameter tuning our approach consistently leads to better asymptotic behavior across a set of payload manipulation tasks that demand a high degree of accuracy and precision.' volume: 139 URL: https://proceedings.mlr.press/v139/mora21a.html PDF: http://proceedings.mlr.press/v139/mora21a/mora21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-mora21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Miguel Angel Zamora family: Mora - given: Momchil family: Peychev - given: Sehoon family: Ha - given: Martin family: Vechev - given: Stelian family: Coros editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7805-7817 id: mora21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7805 lastpage: 7817 published: 2021-07-01 00:00:00 +0000 - title: 'Efficient Deviation Types and Learning for Hindsight Rationality in Extensive-Form Games' abstract: 'Hindsight rationality is an approach to playing general-sum games that prescribes no-regret learning dynamics for individual agents with respect to a set of deviations, and further describes jointly rational behavior among multiple agents with mediated equilibria. To develop hindsight rational learning in sequential decision-making settings, we formalize behavioral deviations as a general class of deviations that respect the structure of extensive-form games. Integrating the idea of time selection into counterfactual regret minimization (CFR), we introduce the extensive-form regret minimization (EFR) algorithm that achieves hindsight rationality for any given set of behavioral deviations with computation that scales closely with the complexity of the set. 
We identify behavioral deviation subsets, the partial sequence deviation types, that subsume previously studied types and lead to efficient EFR instances in games with moderate lengths. In addition, we present a thorough empirical analysis of EFR instantiated with different deviation types in benchmark games, where we find that stronger types typically induce better performance.' volume: 139 URL: https://proceedings.mlr.press/v139/morrill21a.html PDF: http://proceedings.mlr.press/v139/morrill21a/morrill21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-morrill21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dustin family: Morrill - given: Ryan family: D’Orazio - given: Marc family: Lanctot - given: James R family: Wright - given: Michael family: Bowling - given: Amy R family: Greenwald editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7818-7828 id: morrill21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7818 lastpage: 7828 published: 2021-07-01 00:00:00 +0000 - title: 'Neural Rough Differential Equations for Long Time Series' abstract: 'Neural controlled differential equations (CDEs) are the continuous-time analogue of recurrent neural networks, as Neural ODEs are to residual networks, and offer a memory-efficient continuous-time way to model functions of potentially irregular time series. Existing methods for computing the forward pass of a Neural CDE involve embedding the incoming time series into path space, often via interpolation, and using evaluations of this path to drive the hidden state. Here, we use rough path theory to extend this formulation. Instead of directly embedding into path space, we instead represent the input signal over small time intervals through its \textit{log-signature}, which are statistics describing how the signal drives a CDE. This is the approach for solving \textit{rough differential equations} (RDEs), and correspondingly we describe our main contribution as the introduction of Neural RDEs. This extension has a purpose: by generalising the Neural CDE approach to a broader class of driving signals, we demonstrate particular advantages for tackling long time series. In this regime, we demonstrate efficacy on problems of length up to 17k observations and observe significant training speed-ups, improvements in model performance, and reduced memory requirements compared to existing approaches.' volume: 139 URL: https://proceedings.mlr.press/v139/morrill21b.html PDF: http://proceedings.mlr.press/v139/morrill21b/morrill21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-morrill21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: James family: Morrill - given: Cristopher family: Salvi - given: Patrick family: Kidger - given: James family: Foster editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7829-7838 id: morrill21b issued: date-parts: - 2021 - 7 - 1 firstpage: 7829 lastpage: 7838 published: 2021-07-01 00:00:00 +0000 - title: 'Connecting Interpretability and Robustness in Decision Trees through Separation' abstract: 'Recent research has recognized interpretability and robustness as essential properties of trustworthy classification. 
Curiously, a connection between robustness and interpretability was empirically observed, but the theoretical reasoning behind it remained elusive. In this paper, we rigorously investigate this connection. Specifically, we focus on interpretation using decision trees and robustness to l_{\infty}-perturbation. Previous works defined the notion of r-separation as a sufficient condition for robustness. We prove upper and lower bounds on the tree size in case the data is r-separated. We then show that a tighter bound on the size is possible when the data is linearly separated. We provide the first algorithm with provable guarantees both on robustness, interpretability, and accuracy in the context of decision trees. Experiments confirm that our algorithm yields classifiers that are both interpretable and robust and have high accuracy.' volume: 139 URL: https://proceedings.mlr.press/v139/moshkovitz21a.html PDF: http://proceedings.mlr.press/v139/moshkovitz21a/moshkovitz21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-moshkovitz21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Michal family: Moshkovitz - given: Yao-Yuan family: Yang - given: Kamalika family: Chaudhuri editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7839-7849 id: moshkovitz21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7839 lastpage: 7849 published: 2021-07-01 00:00:00 +0000 - title: 'Outlier-Robust Optimal Transport' abstract: 'Optimal transport (OT) measures distances between distributions in a way that depends on the geometry of the sample space. In light of recent advances in computational OT, OT distances are widely used as loss functions in machine learning. Despite their prevalence and advantages, OT loss functions can be extremely sensitive to outliers. In fact, a single adversarially-picked outlier can increase the standard $W_2$-distance arbitrarily. To address this issue, we propose an outlier-robust formulation of OT. Our formulation is convex but challenging to scale at a first glance. Our main contribution is deriving an \emph{equivalent} formulation based on cost truncation that is easy to incorporate into modern algorithms for computational OT. We demonstrate the benefits of our formulation in mean estimation problems under the Huber contamination model in simulations and outlier detection tasks on real data.' volume: 139 URL: https://proceedings.mlr.press/v139/mukherjee21a.html PDF: http://proceedings.mlr.press/v139/mukherjee21a/mukherjee21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-mukherjee21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Debarghya family: Mukherjee - given: Aritra family: Guha - given: Justin M family: Solomon - given: Yuekai family: Sun - given: Mikhail family: Yurochkin editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7850-7860 id: mukherjee21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7850 lastpage: 7860 published: 2021-07-01 00:00:00 +0000 - title: 'Oblivious Sketching for Logistic Regression' abstract: 'What guarantees are possible for solving logistic regression in one pass over a data stream? To answer this question, we present the first data oblivious sketch for logistic regression. 
Our sketch can be computed in input sparsity time over a turnstile data stream and reduces the size of a $d$-dimensional data set from $n$ to only $\operatorname{poly}(\mu d\log n)$ weighted points, where $\mu$ is a useful parameter which captures the complexity of compressing the data. Solving (weighted) logistic regression on the sketch gives an $O(\log n)$-approximation to the original problem on the full data set. We also show how to obtain an $O(1)$-approximation with slight modifications. Our sketches are fast, simple, easy to implement, and our experiments demonstrate their practicality.' volume: 139 URL: https://proceedings.mlr.press/v139/munteanu21a.html PDF: http://proceedings.mlr.press/v139/munteanu21a/munteanu21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-munteanu21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Alexander family: Munteanu - given: Simon family: Omlor - given: David family: Woodruff editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7861-7871 id: munteanu21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7861 lastpage: 7871 published: 2021-07-01 00:00:00 +0000 - title: 'Bias-Variance Reduced Local SGD for Less Heterogeneous Federated Learning' abstract: 'Recently, local SGD has received much attention and has been extensively studied in the distributed learning community to overcome the communication bottleneck problem. However, the superiority of local SGD over minibatch SGD only holds in quite limited situations. In this paper, we study a new local algorithm called Bias-Variance Reduced Local SGD (BVR-L-SGD) for nonconvex distributed optimization. Algorithmically, our proposed bias- and variance-reduced local gradient estimator fully utilizes the small second-order heterogeneity of local objectives and suggests randomly picking one of the local models instead of taking their average when workers are synchronized. Theoretically, under small heterogeneity of local objectives, we show that BVR-L-SGD achieves better communication complexity than both previous non-local and local methods under mild conditions, and in particular BVR-L-SGD is the first method that breaks the barrier of communication complexity $\Theta(1/\varepsilon)$ for general nonconvex smooth objectives when the heterogeneity is small and the local computation budget is large. Numerical results are given to verify the theoretical findings and give empirical evidence of the superiority of our method.' volume: 139 URL: https://proceedings.mlr.press/v139/murata21a.html PDF: http://proceedings.mlr.press/v139/murata21a/murata21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-murata21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tomoya family: Murata - given: Taiji family: Suzuki editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7872-7881 id: murata21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7872 lastpage: 7881 published: 2021-07-01 00:00:00 +0000 - title: 'Implicit-PDF: Non-Parametric Representation of Probability Distributions on the Rotation Manifold' abstract: 'In the deep learning era, the vast majority of methods to predict pose from a single image are trained to classify or regress to a single given ground truth pose per image. 
Such methods have two main shortcomings: i) they cannot represent uncertainty about the predictions, and ii) they cannot handle symmetric objects, where multiple (potentially infinitely many) poses may be correct. Only recently have these shortcomings been addressed, but current approaches are limited in that they cannot express the full rich space of distributions on the rotation manifold. To this end, we introduce a method to estimate arbitrary, non-parametric distributions on SO(3). Our key idea is to represent the distributions implicitly, with a neural network that estimates the probability density, given the input image and a candidate pose. At inference time, grid sampling or gradient ascent can be used to find the most likely pose, but it is also possible to evaluate the density at any pose, enabling reasoning about symmetries and uncertainty. This is the most general way of representing distributions on manifolds, and to demonstrate its expressive power we introduce a new dataset containing symmetric and nearly-symmetric objects. Our method also shows advantages on the popular object pose estimation benchmarks ModelNet10-SO(3) and T-LESS. Code, data, and visualizations may be found at implicit-pdf.github.io.' volume: 139 URL: https://proceedings.mlr.press/v139/murphy21a.html PDF: http://proceedings.mlr.press/v139/murphy21a/murphy21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-murphy21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kieran A family: Murphy - given: Carlos family: Esteves - given: Varun family: Jampani - given: Srikumar family: Ramalingam - given: Ameesh family: Makadia editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7882-7893 id: murphy21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7882 lastpage: 7893 published: 2021-07-01 00:00:00 +0000 - title: 'No-regret Algorithms for Capturing Events in Poisson Point Processes' abstract: 'Inhomogeneous Poisson point processes are widely used models of event occurrences. We address \emph{adaptive sensing of Poisson point processes}, namely, maximizing the number of captured events subject to sensing costs. We encode prior assumptions on the rate function by modeling it as a member of a known \emph{reproducing kernel Hilbert space} (RKHS). By partitioning the domain into separate small regions, and using heteroscedastic linear regression, we propose a tractable estimator of Poisson process rates for two feedback models: \emph{count-record}, where exact locations of events are observed, and \emph{histogram} feedback, where only counts of events are observed. We derive provably accurate anytime confidence estimates for our estimators for sequentially acquired Poisson count data. Using these, we formulate algorithms based on optimism that provably incur sublinear count-regret. We demonstrate the practicality of the method on problems from crime modeling and revenue maximization, as well as environmental monitoring.' 
volume: 139 URL: https://proceedings.mlr.press/v139/mutny21a.html PDF: http://proceedings.mlr.press/v139/mutny21a/mutny21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-mutny21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mojmir family: Mutny - given: Andreas family: Krause editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7894-7904 id: mutny21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7894 lastpage: 7904 published: 2021-07-01 00:00:00 +0000 - title: 'Online Limited Memory Neural-Linear Bandits with Likelihood Matching' abstract: 'We study neural-linear bandits for solving problems where {\em both} exploration and representation learning play an important role. Neural-linear bandits harness the representation power of Deep Neural Networks (DNNs) and combine it with efficient exploration mechanisms by leveraging uncertainty estimation of the model, designed for linear contextual bandits on top of the last hidden layer. In order to mitigate the problem of representation change during the process, new uncertainty estimations are computed using stored data from an unlimited buffer. Nevertheless, when the amount of stored data is limited, a phenomenon called catastrophic forgetting emerges. To alleviate this, we propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online. We applied our algorithm, Limited Memory Neural-Linear with Likelihood Matching (NeuralLinear-LiM2), to a variety of datasets and observed that it achieves performance comparable to the unlimited memory approach while exhibiting resilience to catastrophic forgetting.' volume: 139 URL: https://proceedings.mlr.press/v139/nabati21a.html PDF: http://proceedings.mlr.press/v139/nabati21a/nabati21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-nabati21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ofir family: Nabati - given: Tom family: Zahavy - given: Shie family: Mannor editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7905-7915 id: nabati21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7905 lastpage: 7915 published: 2021-07-01 00:00:00 +0000 - title: 'Quantitative Understanding of VAE as a Non-linearly Scaled Isometric Embedding' abstract: 'A variational autoencoder (VAE) estimates the posterior parameters (mean and variance) of latent variables corresponding to each input data point. While it is used for many tasks, the transparency of the model is still an underlying issue. This paper provides a quantitative understanding of VAE properties through differential geometric and information-theoretic interpretations of the VAE. According to rate-distortion theory, the optimal transform coding is achieved by using an orthonormal transform with a PCA basis, where the transform space is isometric to the input. Considering the analogy of transform coding to VAE, we clarify theoretically and experimentally that VAE can be mapped to an implicit isometric embedding with a scale factor derived from the posterior parameter. 
As a result, we can estimate the data probabilities in the input space from the prior, loss metrics, and corresponding posterior parameters, and further, the quantitative importance of each latent variable can be evaluated like the eigenvalue of PCA.' volume: 139 URL: https://proceedings.mlr.press/v139/nakagawa21a.html PDF: http://proceedings.mlr.press/v139/nakagawa21a/nakagawa21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-nakagawa21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Akira family: Nakagawa - given: Keizo family: Kato - given: Taiji family: Suzuki editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7916-7926 id: nakagawa21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7916 lastpage: 7926 published: 2021-07-01 00:00:00 +0000 - title: 'GMAC: A Distributional Perspective on Actor-Critic Framework' abstract: 'In this paper, we devise a distributional framework on actor-critic as a solution to distributional instability, action type restriction, and conflation between samples and statistics. We propose a new method that minimizes the Cram{é}r distance with the multi-step Bellman target distribution generated from a novel Sample-Replacement algorithm denoted SR(\lambda), which learns the correct value distribution under multiple Bellman operations. Parameterizing a value distribution with Gaussian Mixture Model further improves the efficiency and the performance of the method, which we name GMAC. We empirically show that GMAC captures the correct representation of value distributions and improves the performance of a conventional actor-critic method with low computational cost, in both discrete and continuous action spaces using Arcade Learning Environment (ALE) and PyBullet environment.' volume: 139 URL: https://proceedings.mlr.press/v139/nam21a.html PDF: http://proceedings.mlr.press/v139/nam21a/nam21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-nam21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Daniel W family: Nam - given: Younghoon family: Kim - given: Chan Y family: Park editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7927-7936 id: nam21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7927 lastpage: 7936 published: 2021-07-01 00:00:00 +0000 - title: 'Memory-Efficient Pipeline-Parallel DNN Training' abstract: 'Many state-of-the-art ML results have been obtained by scaling up the number of parameters in existing models. However, parameters and activations for such large models often do not fit in the memory of a single accelerator device; this means that it is necessary to distribute training of large models over multiple accelerators. In this work, we propose PipeDream-2BW, a system that supports memory-efficient pipeline parallelism. PipeDream-2BW uses a novel pipelining and weight gradient coalescing strategy, combined with the double buffering of weights, to ensure high throughput, low memory footprint, and weight update semantics similar to data parallelism. In addition, PipeDream-2BW automatically partitions the model over the available hardware resources, while respecting hardware constraints such as memory capacities of accelerators and interconnect topologies. 
PipeDream-2BW can accelerate the training of large GPT and BERT language models by up to 20x with similar final model accuracy.' volume: 139 URL: https://proceedings.mlr.press/v139/narayanan21a.html PDF: http://proceedings.mlr.press/v139/narayanan21a/narayanan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-narayanan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Deepak family: Narayanan - given: Amar family: Phanishayee - given: Kaiyu family: Shi - given: Xie family: Chen - given: Matei family: Zaharia editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7937-7947 id: narayanan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7937 lastpage: 7947 published: 2021-07-01 00:00:00 +0000 - title: 'Randomized Dimensionality Reduction for Facility Location and Single-Linkage Clustering' abstract: 'Random dimensionality reduction is a versatile tool for speeding up algorithms for high-dimensional problems. We study its application to two clustering problems: the facility location problem, and the single-linkage hierarchical clustering problem, which is equivalent to computing the minimum spanning tree. We show that if we project the input pointset $X$ onto a random $d = O(d_X)$-dimensional subspace (where $d_X$ is the doubling dimension of $X$), then the optimum facility location cost in the projected space approximates the original cost up to a constant factor. We show an analogous statement for minimum spanning tree, but with the dimension $d$ having an extra $\log \log n$ term and the approximation factor being arbitrarily close to $1$. Furthermore, we extend these results to approximating {\em solutions} instead of just their {\em costs}. Lastly, we provide experimental results to validate the quality of solutions and the speedup due to the dimensionality reduction. Unlike several previous papers studying this approach in the context of $k$-means and $k$-medians, our dimension bound does not depend on the number of clusters but only on the intrinsic dimensionality of $X$.' volume: 139 URL: https://proceedings.mlr.press/v139/narayanan21b.html PDF: http://proceedings.mlr.press/v139/narayanan21b/narayanan21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-narayanan21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shyam family: Narayanan - given: Sandeep family: Silwal - given: Piotr family: Indyk - given: Or family: Zamir editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7948-7957 id: narayanan21b issued: date-parts: - 2021 - 7 - 1 firstpage: 7948 lastpage: 7957 published: 2021-07-01 00:00:00 +0000 - title: 'Generating images with sparse representations' abstract: 'The high dimensionality of images presents architecture and sampling-efficiency challenges for likelihood-based generative models. Previous approaches such as VQ-VAE use deep autoencoders to obtain compact representations, which are more practical as inputs for likelihood-based models. We present an alternative approach, inspired by common image compression methods like JPEG, and convert images to quantized discrete cosine transform (DCT) blocks, which are represented sparsely as a sequence of DCT channel, spatial location, and DCT coefficient triples. 
We propose a Transformer-based autoregressive architecture, which is trained to sequentially predict the conditional distribution of the next element in such sequences, and which scales effectively to high resolution images. On a range of image datasets, we demonstrate that our approach can generate high quality, diverse images, with sample metric scores competitive with state of the art methods. We additionally show that simple modifications to our method yield effective image colorization and super-resolution models.' volume: 139 URL: https://proceedings.mlr.press/v139/nash21a.html PDF: http://proceedings.mlr.press/v139/nash21a/nash21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-nash21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Charlie family: Nash - given: Jacob family: Menick - given: Sander family: Dieleman - given: Peter family: Battaglia editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7958-7968 id: nash21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7958 lastpage: 7968 published: 2021-07-01 00:00:00 +0000 - title: 'Geometric convergence of elliptical slice sampling' abstract: 'For Bayesian learning, given a likelihood function and a Gaussian prior, the elliptical slice sampler, introduced by Murray, Adams and MacKay (2010), provides a tool for the construction of a Markov chain for approximate sampling of the underlying posterior distribution. Besides its wide applicability and simplicity, its main feature is that no tuning is necessary. Under weak regularity assumptions on the posterior density, we show that the corresponding Markov chain is geometrically ergodic, which yields qualitative convergence guarantees. We illustrate our result for Gaussian posteriors in a fully Gaussian scenario, as exhibited for example in Gaussian process regression, as well as in a setting with a multi-modal distribution. Remarkably, our numerical experiments indicate a dimension-independent performance of elliptical slice sampling even in situations where our ergodicity result does not apply.' volume: 139 URL: https://proceedings.mlr.press/v139/natarovskii21a.html PDF: http://proceedings.mlr.press/v139/natarovskii21a/natarovskii21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-natarovskii21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Viacheslav family: Natarovskii - given: Daniel family: Rudolf - given: Björn family: Sprungk editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7969-7978 id: natarovskii21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7969 lastpage: 7978 published: 2021-07-01 00:00:00 +0000 - title: 'HardCoRe-NAS: Hard Constrained diffeRentiable Neural Architecture Search' abstract: 'Realistic use of neural networks often requires adhering to multiple constraints on latency, energy and memory, among others. A popular approach to find fitting networks is through constrained Neural Architecture Search (NAS); however, previous methods enforce the constraint only softly. Therefore, the resulting networks do not exactly adhere to the resource constraint and their accuracy is harmed. 
In this work we resolve this by introducing Hard Constrained diffeRentiable NAS (HardCoRe-NAS), that is based on an accurate formulation of the expected resource requirement and a scalable search method that satisfies the hard constraint throughout the search. Our experiments show that HardCoRe-NAS generates state-of-the-art architectures, surpassing other NAS methods, while strictly satisfying the hard resource constraints without any tuning required.' volume: 139 URL: https://proceedings.mlr.press/v139/nayman21a.html PDF: http://proceedings.mlr.press/v139/nayman21a/nayman21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-nayman21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Niv family: Nayman - given: Yonathan family: Aflalo - given: Asaf family: Noy - given: Lihi family: Zelnik editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7979-7990 id: nayman21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7979 lastpage: 7990 published: 2021-07-01 00:00:00 +0000 - title: 'Emergent Social Learning via Multi-agent Reinforcement Learning' abstract: 'Social learning is a key component of human and animal intelligence. By taking cues from the behavior of experts in their environment, social learners can acquire sophisticated behavior and rapidly adapt to new circumstances. This paper investigates whether independent reinforcement learning (RL) agents in a multi-agent environment can learn to use social learning to improve their performance. We find that in most circumstances, vanilla model-free RL agents do not use social learning. We analyze the reasons for this deficiency, and show that by imposing constraints on the training environment and introducing a model-based auxiliary loss we are able to obtain generalized social learning policies which enable agents to: i) discover complex skills that are not learned from single-agent training, and ii) adapt online to novel environments by taking cues from experts present in the new environment. In contrast, agents trained with model-free RL or imitation learning generalize poorly and do not succeed in the transfer tasks. By mixing multi-agent and solo training, we can obtain agents that use social learning to gain skills that they can deploy when alone, even out-performing agents trained alone from the start.' volume: 139 URL: https://proceedings.mlr.press/v139/ndousse21a.html PDF: http://proceedings.mlr.press/v139/ndousse21a/ndousse21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ndousse21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kamal K family: Ndousse - given: Douglas family: Eck - given: Sergey family: Levine - given: Natasha family: Jaques editor: - given: Marina family: Meila - given: Tong family: Zhang page: 7991-8004 id: ndousse21a issued: date-parts: - 2021 - 7 - 1 firstpage: 7991 lastpage: 8004 published: 2021-07-01 00:00:00 +0000 - title: 'Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information' abstract: 'In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations. 
One example is budget constrained global optimization of f, for which Bayesian optimization is a popular method. Other properties of interest include local optima, level sets, integrals, or graph-structured information induced by f. Often, we can find an algorithm A to compute the desired property, but it may require far more than T queries to execute. Given such an A, and a prior distribution over f, we refer to the problem of inferring the output of A using T evaluations as Bayesian Algorithm Execution (BAX). To tackle this problem, we present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm’s output. Applying this to Dijkstra’s algorithm, for instance, we infer shortest paths in synthetic and real-world graphs with black-box edge costs. Using evolution strategies, we obtain variants of Bayesian optimization that target local, rather than global, optima. On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm. Our method is closely connected to other Bayesian optimal experimental design procedures such as entropy search methods and optimal sensor placement using Gaussian processes.' volume: 139 URL: https://proceedings.mlr.press/v139/neiswanger21a.html PDF: http://proceedings.mlr.press/v139/neiswanger21a/neiswanger21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-neiswanger21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Willie family: Neiswanger - given: Ke Alexander family: Wang - given: Stefano family: Ermon editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8005-8015 id: neiswanger21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8005 lastpage: 8015 published: 2021-07-01 00:00:00 +0000 - title: 'Continuous Coordination As a Realistic Scenario for Lifelong Learning' abstract: 'Current deep reinforcement learning (RL) algorithms are still highly task-specific and lack the ability to generalize to new environments. Lifelong learning (LLL), however, aims at solving multiple tasks sequentially by efficiently transferring and using knowledge between tasks. Despite a surge of interest in lifelong RL in recent years, the lack of a realistic testbed makes robust evaluation of LLL algorithms difficult. Multi-agent RL (MARL), on the other hand, can be seen as a natural scenario for lifelong RL due to its inherent non-stationarity, since the agents’ policies change over time. In this work, we introduce a multi-agent lifelong learning testbed that supports both zero-shot and few-shot settings. Our setup is based on Hanabi {—} a partially-observable, fully cooperative multi-agent game that has been shown to be challenging for zero-shot coordination. Its large strategy space makes it a desirable environment for lifelong RL tasks. We evaluate several recent MARL methods, and benchmark state-of-the-art LLL algorithms in limited memory and computation regimes to shed light on their strengths and weaknesses. This continual learning paradigm also provides us with a pragmatic way of going beyond centralized training, which is the most commonly used training protocol in MARL. We empirically show that the agents trained in our setup are able to coordinate well with unseen agents, without any additional assumptions made by previous works. 
The code and all pre-trained models are available at https://github.com/chandar-lab/Lifelong-Hanabi.' volume: 139 URL: https://proceedings.mlr.press/v139/nekoei21a.html PDF: http://proceedings.mlr.press/v139/nekoei21a/nekoei21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-nekoei21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hadi family: Nekoei - given: Akilesh family: Badrinaaraayanan - given: Aaron family: Courville - given: Sarath family: Chandar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8016-8024 id: nekoei21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8016 lastpage: 8024 published: 2021-07-01 00:00:00 +0000 - title: 'Policy Caches with Successor Features' abstract: 'Transfer in reinforcement learning is based on the idea that it is possible to use what is learned in one task to improve the learning process in another task. For transfer between tasks which share transition dynamics but differ in reward function, successor features have been shown to be a useful representation which allows for efficient computation of action-value functions for previously-learned policies in new tasks. These functions induce policies in the new tasks, so an agent may not need to learn a new policy for each new task it encounters, especially if it is allowed some amount of suboptimality in those tasks. We present new bounds for the performance of optimal policies in a new task, as well as an approach to use these bounds to decide, when presented with a new task, whether to use cached policies or learn a new policy.' volume: 139 URL: https://proceedings.mlr.press/v139/nemecek21a.html PDF: http://proceedings.mlr.press/v139/nemecek21a/nemecek21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-nemecek21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mark family: Nemecek - given: Ronald family: Parr editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8025-8033 id: nemecek21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8025 lastpage: 8033 published: 2021-07-01 00:00:00 +0000 - title: 'Causality-aware counterfactual confounding adjustment as an alternative to linear residualization in anticausal prediction tasks based on linear learners' abstract: 'Linear residualization is a common practice for confounding adjustment in machine learning applications. Recently, causality-aware predictive modeling has been proposed as an alternative causality-inspired approach for adjusting for confounders. In this paper, we compare the linear residualization approach against the causality-aware confounding adjustment in anticausal prediction tasks. Our comparisons include both the settings where the training and test sets come from the same distributions, as well as when the training and test sets are shifted due to selection biases. In the absence of dataset shifts, we show that the causality-aware approach tends to (asymptotically) outperform the residualization adjustment in terms of predictive performance in linear learners. Importantly, our results still hold even when the true model generating the data is not linear. We illustrate our results in both regression and classification tasks. 
Furthermore, in the presence of dataset shifts in the joint distribution of the confounders and outcome variables, we show that the causality-aware approach is more stable than linear residualization.' volume: 139 URL: https://proceedings.mlr.press/v139/neto21a.html PDF: http://proceedings.mlr.press/v139/neto21a/neto21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-neto21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Elias Chaibub family: Neto editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8034-8044 id: neto21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8034 lastpage: 8044 published: 2021-07-01 00:00:00 +0000 - title: 'Incentivizing Compliance with Algorithmic Instruments' abstract: 'Randomized experiments can be susceptible to selection bias due to potential non-compliance by the participants. While much of the existing work has studied compliance as a static behavior, we propose a game-theoretic model to study compliance as dynamic behavior that may change over time. In rounds, a social planner interacts with a sequence of heterogeneous agents who arrive with their unobserved private type that determines both their prior preferences across the actions (e.g., control and treatment) and their baseline rewards without taking any treatment. The planner provides each agent with a randomized recommendation that may alter their beliefs and their action selection. We develop a novel recommendation mechanism that views the planner’s recommendation as a form of instrumental variable (IV) that only affects an agent’s action selection, but not the observed rewards. We construct such IVs by carefully mapping the history (the interactions between the planner and the previous agents) to a random recommendation. Even though the initial agents may be completely non-compliant, our mechanism can incentivize compliance over time, thereby enabling the estimation of the treatment effect of each treatment, and minimizing the cumulative regret of the planner whose goal is to identify the optimal treatment.' volume: 139 URL: https://proceedings.mlr.press/v139/ngo21a.html PDF: http://proceedings.mlr.press/v139/ngo21a/ngo21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ngo21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dung Daniel T family: Ngo - given: Logan family: Stapleton - given: Vasilis family: Syrgkanis - given: Steven family: Wu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8045-8055 id: ngo21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8045 lastpage: 8055 published: 2021-07-01 00:00:00 +0000 - title: 'On the Proof of Global Convergence of Gradient Descent for Deep ReLU Networks with Linear Widths' abstract: 'We give a simple proof for the global convergence of gradient descent in training deep ReLU networks with the standard square loss, and show some of its improvements over the state-of-the-art. In particular, while prior works require all the hidden layers to be wide with width at least $\Omega(N^8)$ ($N$ being the number of training samples), we require a single wide layer of linear, quadratic or cubic width depending on the type of initialization. 
Unlike many recent proofs based on the Neural Tangent Kernel (NTK), our proof need not track the evolution of the entire NTK matrix, or more generally, any quantities related to the changes of activation patterns during training. Instead, we only need to track the evolution of the output at the last hidden layer, which can be done much more easily thanks to the Lipschitz property of ReLU. Some highlights of our setting: (i) all the layers are trained with standard gradient descent, (ii) the network has standard parameterization as opposed to the NTK one, and (iii) the network has a single wide layer as opposed to having all wide hidden layers as in most NTK-related results.' volume: 139 URL: https://proceedings.mlr.press/v139/nguyen21a.html PDF: http://proceedings.mlr.press/v139/nguyen21a/nguyen21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-nguyen21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Quynh family: Nguyen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8056-8062 id: nguyen21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8056 lastpage: 8062 published: 2021-07-01 00:00:00 +0000 - title: 'Value-at-Risk Optimization with Gaussian Processes' abstract: 'Value-at-risk (VaR) is an established measure to assess risks in critical real-world applications with random environmental factors. This paper presents a novel VaR upper confidence bound (V-UCB) algorithm for maximizing the VaR of a black-box objective function with the first no-regret guarantee. To realize this, we first derive a confidence bound of VaR and then prove the existence of values of the environmental random variable (to be selected to achieve no regret) such that the confidence bound of VaR lies within that of the objective function evaluated at such values. Our V-UCB algorithm empirically demonstrates state-of-the-art performance in optimizing synthetic benchmark functions, a portfolio optimization problem, and a simulated robot task.' volume: 139 URL: https://proceedings.mlr.press/v139/nguyen21b.html PDF: http://proceedings.mlr.press/v139/nguyen21b/nguyen21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-nguyen21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Quoc Phong family: Nguyen - given: Zhongxiang family: Dai - given: Bryan Kian Hsiang family: Low - given: Patrick family: Jaillet editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8063-8072 id: nguyen21b issued: date-parts: - 2021 - 7 - 1 firstpage: 8063 lastpage: 8072 published: 2021-07-01 00:00:00 +0000 - title: 'Cross-model Back-translated Distillation for Unsupervised Machine Translation' abstract: 'Recent unsupervised machine translation (UMT) systems usually employ three main principles: initialization, language modeling and iterative back-translation, though they may apply them differently. Crucially, iterative back-translation and denoising auto-encoding for language modeling provide data diversity to train the UMT systems. However, the gains from these diversification processes have seemed to plateau. 
We introduce a novel component to the standard UMT framework called Cross-model Back-translated Distillation (CBD), which aims to induce another level of data diversification that existing principles lack. CBD is applicable to all previous UMT approaches. In our experiments, CBD achieves the state of the art in the WMT’14 English-French, WMT’16 English-German and English-Romanian bilingual unsupervised translation tasks, with 38.2, 30.1, and 36.3 BLEU respectively. It also yields 1.5–3.3 BLEU improvements in IWSLT English-French and English-German tasks. Through extensive experimental analyses, we show that CBD is effective because it embraces data diversity while other similar variants do not.' volume: 139 URL: https://proceedings.mlr.press/v139/nguyen21c.html PDF: http://proceedings.mlr.press/v139/nguyen21c/nguyen21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-nguyen21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xuan-Phi family: Nguyen - given: Shafiq family: Joty - given: Thanh-Tung family: Nguyen - given: Kui family: Wu - given: Ai Ti family: Aw editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8073-8083 id: nguyen21c issued: date-parts: - 2021 - 7 - 1 firstpage: 8073 lastpage: 8083 published: 2021-07-01 00:00:00 +0000 - title: 'Optimal Transport Kernels for Sequential and Parallel Neural Architecture Search' abstract: 'Neural architecture search (NAS) automates the design of deep neural networks. One of the main challenges in searching complex and non-continuous architectures is to compare the similarity of networks that the conventional Euclidean metric may fail to capture. Optimal transport (OT) is resilient to such complex structure by considering the minimal cost for transporting a network into another. However, OT is generally not negative definite, which may limit its ability to build the positive-definite kernels required in many kernel-dependent frameworks. Building upon tree-Wasserstein (TW), which is a negative definite variant of OT, we develop a novel discrepancy for neural architectures, and demonstrate it within a Gaussian process surrogate model for the sequential NAS settings. Furthermore, we derive a novel parallel NAS, using a quality k-determinantal point process on the GP posterior, to select diverse and high-performing architectures from a discrete set of candidates. Empirically, we demonstrate that our TW-based approaches outperform other baselines in both sequential and parallel NAS.' volume: 139 URL: https://proceedings.mlr.press/v139/nguyen21d.html PDF: http://proceedings.mlr.press/v139/nguyen21d/nguyen21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-nguyen21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Vu family: Nguyen - given: Tam family: Le - given: Makoto family: Yamada - given: Michael A. 
family: Osborne editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8084-8095 id: nguyen21d issued: date-parts: - 2021 - 7 - 1 firstpage: 8084 lastpage: 8095 published: 2021-07-01 00:00:00 +0000 - title: 'Interactive Learning from Activity Description' abstract: 'We present a novel interactive learning protocol that enables training request-fulfilling agents by verbally describing their activities. Unlike imitation learning (IL), our protocol allows the teaching agent to provide feedback in a language that is most appropriate for them. Compared with reward in reinforcement learning (RL), the description feedback is richer and allows for improved sample complexity. We develop a probabilistic framework and an algorithm that practically implements our protocol. Empirical results in two challenging request-fulfilling problems demonstrate the strengths of our approach: compared with RL baselines, it is more sample-efficient; compared with IL baselines, it achieves competitive success rates without requiring the teaching agent to be able to demonstrate the desired behavior using the learning agent’s actions. Apart from empirical evaluation, we also provide theoretical guarantees for our algorithm under certain assumptions about the teacher and the environment.' volume: 139 URL: https://proceedings.mlr.press/v139/nguyen21e.html PDF: http://proceedings.mlr.press/v139/nguyen21e/nguyen21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-nguyen21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Khanh X family: Nguyen - given: Dipendra family: Misra - given: Robert family: Schapire - given: Miroslav family: Dudik - given: Patrick family: Shafto editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8096-8108 id: nguyen21e issued: date-parts: - 2021 - 7 - 1 firstpage: 8096 lastpage: 8108 published: 2021-07-01 00:00:00 +0000 - title: 'Nonmyopic Multifidelity Active Search' abstract: 'Active search is a learning paradigm where we seek to identify as many members of a rare, valuable class as possible given a labeling budget. Previous work on active search has assumed access to a faithful (and expensive) oracle reporting experimental results. However, some settings offer access to cheaper surrogates such as computational simulation that may aid in the search. We propose a model of multifidelity active search, as well as a novel, computationally efficient policy for this setting that is motivated by state-of-the-art classical policies. Our policy is nonmyopic and budget aware, allowing for a dynamic tradeoff between exploration and exploitation. We evaluate the performance of our solution on real-world datasets and demonstrate significantly better performance than natural benchmarks.' 
volume: 139 URL: https://proceedings.mlr.press/v139/nguyen21f.html PDF: http://proceedings.mlr.press/v139/nguyen21f/nguyen21f.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-nguyen21f.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Quan family: Nguyen - given: Arghavan family: Modiri - given: Roman family: Garnett editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8109-8118 id: nguyen21f issued: date-parts: - 2021 - 7 - 1 firstpage: 8109 lastpage: 8118 published: 2021-07-01 00:00:00 +0000 - title: 'Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks' abstract: 'A recent line of work has analyzed the theoretical properties of deep neural networks via the Neural Tangent Kernel (NTK). In particular, the smallest eigenvalue of the NTK has been related to the memorization capacity, the global convergence of gradient descent algorithms and the generalization of deep nets. However, existing results either provide bounds in the two-layer setting or assume that the spectrum of the NTK matrices is bounded away from 0 for multi-layer networks. In this paper, we provide tight bounds on the smallest eigenvalue of NTK matrices for deep ReLU nets, both in the limiting case of infinite widths and for finite widths. In the finite-width setting, the network architectures we consider are fairly general: we require the existence of a wide layer with roughly order of $N$ neurons, $N$ being the number of data samples; and the scaling of the remaining layer widths is arbitrary (up to logarithmic factors). To obtain our results, we analyze various quantities of independent interest: we give lower bounds on the smallest singular value of hidden feature matrices, and upper bounds on the Lipschitz constant of input-output feature maps.' volume: 139 URL: https://proceedings.mlr.press/v139/nguyen21g.html PDF: http://proceedings.mlr.press/v139/nguyen21g/nguyen21g.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-nguyen21g.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Quynh family: Nguyen - given: Marco family: Mondelli - given: Guido F family: Montufar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8119-8129 id: nguyen21g issued: date-parts: - 2021 - 7 - 1 firstpage: 8119 lastpage: 8129 published: 2021-07-01 00:00:00 +0000 - title: 'Temporal Predictive Coding For Model-Based Planning In Latent Space' abstract: 'High-dimensional observations are a major challenge in the application of model-based reinforcement learning (MBRL) to real-world environments. To handle high-dimensional sensory inputs, existing approaches use representation learning to map high-dimensional observations into a lower-dimensional latent space that is more amenable to dynamics estimation and planning. In this work, we present an information-theoretic approach that employs temporal predictive coding to encode elements in the environment that can be predicted across time. Since this approach focuses on encoding temporally-predictable information, we implicitly prioritize the encoding of task-relevant components over nuisance information within the environment that is provably task-irrelevant. 
By learning this representation in conjunction with a recurrent state space model, we can then perform planning in latent space. We evaluate our model on a challenging modification of standard DMControl tasks where the background is replaced with natural videos that contain complex but irrelevant information to the planning task. Our experiments show that our model is superior to existing methods in the challenging complex-background setting while remaining competitive with current state-of-the-art models in the standard setting.' volume: 139 URL: https://proceedings.mlr.press/v139/nguyen21h.html PDF: http://proceedings.mlr.press/v139/nguyen21h/nguyen21h.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-nguyen21h.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tung D family: Nguyen - given: Rui family: Shu - given: Tuan family: Pham - given: Hung family: Bui - given: Stefano family: Ermon editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8130-8139 id: nguyen21h issued: date-parts: - 2021 - 7 - 1 firstpage: 8130 lastpage: 8139 published: 2021-07-01 00:00:00 +0000 - title: 'Differentially Private Densest Subgraph Detection' abstract: 'Densest subgraph detection is a fundamental graph mining problem, with a large number of applications. There has been a lot of work on efficient algorithms for finding the densest subgraph in massive networks. However, in many domains, the network is private, and returning a densest subgraph can reveal information about the network. Differential privacy is a powerful framework to handle such settings. We study the densest subgraph problem in the edge privacy model, in which the edges of the graph are private. We present the first sequential and parallel differentially private algorithms for this problem. We show that our algorithms have an additive approximation guarantee. We evaluate our algorithms on a large number of real-world networks, and observe a good privacy-accuracy tradeoff when the network has high density.' volume: 139 URL: https://proceedings.mlr.press/v139/nguyen21i.html PDF: http://proceedings.mlr.press/v139/nguyen21i/nguyen21i.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-nguyen21i.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dung family: Nguyen - given: Anil family: Vullikanti editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8140-8151 id: nguyen21i issued: date-parts: - 2021 - 7 - 1 firstpage: 8140 lastpage: 8151 published: 2021-07-01 00:00:00 +0000 - title: 'Data Augmentation for Meta-Learning' abstract: 'Conventional image classifiers are trained by randomly sampling mini-batches of images. To achieve state-of-the-art performance, practitioners use sophisticated data augmentation schemes to expand the amount of training data available for sampling. In contrast, meta-learning algorithms sample support data, query data, and tasks on each training step. In this complex sampling scenario, data augmentation can be used not only to expand the number of images available per class, but also to generate entirely new classes/tasks. 
We systematically dissect the meta-learning pipeline and investigate the distinct ways in which data augmentation can be integrated at both the image and class levels. Our proposed meta-specific data augmentation significantly improves the performance of meta-learners on few-shot classification benchmarks.' volume: 139 URL: https://proceedings.mlr.press/v139/ni21a.html PDF: http://proceedings.mlr.press/v139/ni21a/ni21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ni21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Renkun family: Ni - given: Micah family: Goldblum - given: Amr family: Sharaf - given: Kezhi family: Kong - given: Tom family: Goldstein editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8152-8161 id: ni21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8152 lastpage: 8161 published: 2021-07-01 00:00:00 +0000 - title: 'Improved Denoising Diffusion Probabilistic Models' abstract: 'Denoising diffusion probabilistic models (DDPM) are a class of generative models which have recently been shown to produce excellent samples. We show that with a few simple modifications, DDPMs can also achieve competitive log-likelihoods while maintaining high sample quality. Additionally, we find that learning variances of the reverse diffusion process allows sampling with an order of magnitude fewer forward passes with a negligible difference in sample quality, which is important for the practical deployment of these models. We additionally use precision and recall to compare how well DDPMs and GANs cover the target distribution. Finally, we show that the sample quality and likelihood of these models scale smoothly with model capacity and training compute, making them easily scalable. We release our code and pre-trained models at https://github.com/openai/improved-diffusion.' volume: 139 URL: https://proceedings.mlr.press/v139/nichol21a.html PDF: http://proceedings.mlr.press/v139/nichol21a/nichol21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-nichol21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Alexander Quinn family: Nichol - given: Prafulla family: Dhariwal editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8162-8171 id: nichol21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8162 lastpage: 8171 published: 2021-07-01 00:00:00 +0000 - title: 'Smooth $p$-Wasserstein Distance: Structure, Empirical Approximation, and Statistical Applications' abstract: 'Discrepancy measures between probability distributions, often termed statistical distances, are ubiquitous in probability theory, statistics and machine learning. To combat the curse of dimensionality when estimating these distances from data, recent work has proposed smoothing out local irregularities in the measured distributions via convolution with a Gaussian kernel. Motivated by the scalability of this framework to high dimensions, we investigate the structural and statistical behavior of the Gaussian-smoothed $p$-Wasserstein distance $\mathsf{W}_p^{(\sigma)}$, for arbitrary $p\geq 1$. 
After establishing basic metric and topological properties of $\mathsf{W}_p^{(\sigma)}$, we explore the asymptotic statistical properties of $\mathsf{W}_p^{(\sigma)}(\hat{\mu}_n,\mu)$, where $\hat{\mu}_n$ is the empirical distribution of $n$ independent observations from $\mu$. We prove that $\mathsf{W}_p^{(\sigma)}$ enjoys a parametric empirical convergence rate of $n^{-1/2}$, which contrasts the $n^{-1/d}$ rate for unsmoothed $\mathsf{W}_p$ when $d \geq 3$. Our proof relies on controlling $\mathsf{W}_p^{(\sigma)}$ by a $p$th-order smooth Sobolev distance $\mathsf{d}_p^{(\sigma)}$ and deriving the limit distribution of $\sqrt{n}\,\mathsf{d}_p^{(\sigma)}(\hat{\mu}_n,\mu)$ for all dimensions $d$. As applications, we provide asymptotic guarantees for two-sample testing and minimum distance estimation using $\mathsf{W}_p^{(\sigma)}$, with experiments for $p=2$ using a maximum mean discrepancy formulation of $\mathsf{d}_2^{(\sigma)}$.' volume: 139 URL: https://proceedings.mlr.press/v139/nietert21a.html PDF: http://proceedings.mlr.press/v139/nietert21a/nietert21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-nietert21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sloan family: Nietert - given: Ziv family: Goldfeld - given: Kengo family: Kato editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8172-8183 id: nietert21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8172 lastpage: 8183 published: 2021-07-01 00:00:00 +0000 - title: 'AdaXpert: Adapting Neural Architecture for Growing Data' abstract: 'In real-world applications, data often come in a growing manner, where the data volume and the number of classes may increase dynamically. This will bring a critical challenge for learning: given the increasing data volume or the number of classes, one has to instantaneously adjust the neural model capacity to obtain promising performance. Existing methods either ignore the growing nature of data or seek to independently search an optimal architecture for a given dataset, and thus are incapable of promptly adjusting the architectures for the changed data. To address this, we present a neural architecture adaptation method, namely Adaptation eXpert (AdaXpert), to efficiently adjust previous architectures on the growing data. Specifically, we introduce an architecture adjuster to generate a suitable architecture for each data snapshot, based on the previous architecture and the extent of difference between the current and previous data distributions. Furthermore, we propose an adaptation condition to determine the necessity of adjustment, thereby avoiding unnecessary and time-consuming adjustments. Extensive experiments on two growth scenarios (increasing data volume and number of classes) demonstrate the effectiveness of the proposed method.' 
volume: 139 URL: https://proceedings.mlr.press/v139/niu21a.html PDF: http://proceedings.mlr.press/v139/niu21a/niu21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-niu21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shuaicheng family: Niu - given: Jiaxiang family: Wu - given: Guanghui family: Xu - given: Yifan family: Zhang - given: Yong family: Guo - given: Peilin family: Zhao - given: Peng family: Wang - given: Mingkui family: Tan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8184-8194 id: niu21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8184 lastpage: 8194 published: 2021-07-01 00:00:00 +0000 - title: 'Asynchronous Decentralized Optimization With Implicit Stochastic Variance Reduction' abstract: 'A novel asynchronous decentralized optimization method that follows Stochastic Variance Reduction (SVR) is proposed. Average consensus algorithms, such as Decentralized Stochastic Gradient Descent (DSGD), facilitate distributed training of machine learning models. However, the gradient will drift within the local nodes due to statistical heterogeneity of the subsets of data residing on the nodes and long communication intervals. To overcome the drift problem, (i) Gradient Tracking-SVR (GT-SVR) integrates SVR into DSGD and (ii) Edge-Consensus Learning (ECL) solves a model constrained minimization problem using a primal-dual formalism. In this paper, we reformulate the update procedure of ECL such that it implicitly includes the gradient modification of SVR by optimally selecting a constraint-strength control parameter. Through convergence analysis and experiments, we confirmed that the proposed ECL with Implicit SVR (ECL-ISVR) is stable and approximately reaches the reference performance obtained with computation on a single node using the full data set.' volume: 139 URL: https://proceedings.mlr.press/v139/niwa21a.html PDF: http://proceedings.mlr.press/v139/niwa21a/niwa21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-niwa21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kenta family: Niwa - given: Guoqiang family: Zhang - given: W. Bastiaan family: Kleijn - given: Noboru family: Harada - given: Hiroshi family: Sawada - given: Akinori family: Fujino editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8195-8204 id: niwa21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8195 lastpage: 8204 published: 2021-07-01 00:00:00 +0000 - title: 'WGAN with an Infinitely Wide Generator Has No Spurious Stationary Points' abstract: 'Generative adversarial networks (GAN) are a widely used class of deep generative models, but their minimax training dynamics are not understood very well. In this work, we show that GANs with a 2-layer infinite-width generator and a 2-layer finite-width discriminator trained with stochastic gradient ascent-descent have no spurious stationary points. We then show that when the width of the generator is finite but wide, there are no spurious stationary points within a ball whose radius becomes arbitrarily large (to cover the entire parameter space) as the width goes to infinity.' 
volume: 139 URL: https://proceedings.mlr.press/v139/no21a.html PDF: http://proceedings.mlr.press/v139/no21a/no21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-no21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Albert family: No - given: Taeho family: Yoon - given: Kwon family: Sehyun - given: Ernest K family: Ryu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8205-8215 id: no21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8205 lastpage: 8215 published: 2021-07-01 00:00:00 +0000 - title: 'The Impact of Record Linkage on Learning from Feature Partitioned Data' abstract: 'There has recently been a significant boost to machine learning with distributed data, in particular with the success of federated learning. A common and very challenging setting is that of vertical or feature partitioned data, when multiple data providers hold different features about common entities. In general, training needs to be preceded by record linkage (RL), a step that finds the correspondence between the observations of the datasets. RL is prone to mistakes in the real world. Despite the importance of the problem, there has so far been no formal assessment of the way in which RL errors impact learning models. Work in the area either uses heuristics or assumes that the optimal RL is known in advance. In this paper, we provide the first assessment of the problem for supervised learning. For wide sets of losses, we provide technical conditions under which the classifier learned after noisy RL converges (with the data size) to the best classifier that would be learned from mistake-free RL. This yields new insights on the way the pipeline RL + ML operates, from the role of large margin classification in dampening the impact of RL mistakes to clues on how to further optimize RL as a preprocessing step to ML. Experiments on a large UCI benchmark validate those formal observations.' volume: 139 URL: https://proceedings.mlr.press/v139/nock21a.html PDF: http://proceedings.mlr.press/v139/nock21a/nock21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-nock21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Richard family: Nock - given: Stephen family: Hardy - given: Wilko family: Henecka - given: Hamish family: Ivey-Law - given: Jakub family: Nabaglo - given: Giorgio family: Patrini - given: Guillaume family: Smith - given: Brian family: Thorne editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8216-8226 id: nock21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8216 lastpage: 8226 published: 2021-07-01 00:00:00 +0000 - title: 'Accuracy, Interpretability, and Differential Privacy via Explainable Boosting' abstract: 'We show that adding differential privacy to Explainable Boosting Machines (EBMs), a recent method for training interpretable ML models, yields state-of-the-art accuracy while protecting privacy. Our experiments on multiple classification and regression datasets show that DP-EBM models suffer surprisingly little accuracy loss even with strong differential privacy guarantees. 
In addition to high accuracy, two other benefits of applying DP to EBMs are: a) trained models provide exact global and local interpretability, which is often important in settings where differential privacy is needed; and b) the models can be edited after training without loss of privacy to correct errors which DP noise may have introduced.' volume: 139 URL: https://proceedings.mlr.press/v139/nori21a.html PDF: http://proceedings.mlr.press/v139/nori21a/nori21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-nori21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Harsha family: Nori - given: Rich family: Caruana - given: Zhiqi family: Bu - given: Judy Hanwen family: Shen - given: Janardhan family: Kulkarni editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8227-8237 id: nori21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8227 lastpage: 8237 published: 2021-07-01 00:00:00 +0000 - title: 'Posterior Value Functions: Hindsight Baselines for Policy Gradient Methods' abstract: 'Hindsight allows reinforcement learning agents to leverage new observations to make inferences about earlier states and transitions. In this paper, we exploit the idea of hindsight and introduce posterior value functions. Posterior value functions are computed by inferring the posterior distribution over hidden components of the state in previous timesteps and can be used to construct novel unbiased baselines for policy gradient methods. Importantly, we prove that these baselines reduce (and never increase) the variance of policy gradient estimators compared to traditional state value functions. While the posterior value function is motivated by partial observability, we extend these results to arbitrary stochastic MDPs by showing that hindsight-capable agents can model stochasticity in the environment as a special case of partial observability. Finally, we introduce a pair of methods for learning posterior value functions and prove their convergence.' volume: 139 URL: https://proceedings.mlr.press/v139/nota21a.html PDF: http://proceedings.mlr.press/v139/nota21a/nota21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-nota21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chris family: Nota - given: Philip family: Thomas - given: Bruno C. Da family: Silva editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8238-8247 id: nota21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8238 lastpage: 8247 published: 2021-07-01 00:00:00 +0000 - title: 'Global inducing point variational posteriors for Bayesian neural networks and deep Gaussian processes' abstract: 'We consider the optimal approximate posterior over the top-layer weights in a Bayesian neural network for regression, and show that it exhibits strong dependencies on the lower-layer weights. We adapt this result to develop a correlated approximate posterior over the weights at all layers in a Bayesian neural network. We extend this approach to deep Gaussian processes, unifying inference in the two model classes. Our approximate posterior uses learned "global” inducing points, which are defined only at the input layer and propagated through the network to obtain inducing inputs at subsequent layers. 
By contrast, standard, “local”, inducing point methods from the deep Gaussian process literature optimise a separate set of inducing inputs at every layer, and thus do not model correlations across layers. Our method gives state-of-the-art performance for a variational Bayesian method of 86.7% on CIFAR-10, without data augmentation or tempering, which is comparable to SGMCMC without tempering but with data augmentation (88% in Wenzel et al. 2020).' volume: 139 URL: https://proceedings.mlr.press/v139/ober21a.html PDF: http://proceedings.mlr.press/v139/ober21a/ober21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ober21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sebastian W family: Ober - given: Laurence family: Aitchison editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8248-8259 id: ober21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8248 lastpage: 8259 published: 2021-07-01 00:00:00 +0000 - title: 'Regularizing towards Causal Invariance: Linear Models with Proxies' abstract: 'We propose a method for learning linear models whose predictive performance is robust to causal interventions on unobserved variables, when noisy proxies of those variables are available. Our approach takes the form of a regularization term that trades off between in-distribution performance and robustness to interventions. Under the assumption of a linear structural causal model, we show that a single proxy can be used to create estimators that are prediction optimal under interventions of bounded strength. This strength depends on the magnitude of the measurement noise in the proxy, which is, in general, not identifiable. In the case of two proxy variables, we propose a modified estimator that is prediction optimal under interventions up to a known strength. We further show how to extend these estimators to scenarios where additional information about the "test time" intervention is available during training. We evaluate our theoretical findings in synthetic experiments and using real data of hourly pollution levels across several cities in China.' volume: 139 URL: https://proceedings.mlr.press/v139/oberst21a.html PDF: http://proceedings.mlr.press/v139/oberst21a/oberst21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-oberst21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Michael family: Oberst - given: Nikolaj family: Thams - given: Jonas family: Peters - given: David family: Sontag editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8260-8270 id: oberst21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8260 lastpage: 8270 published: 2021-07-01 00:00:00 +0000 - title: 'Sparsity-Agnostic Lasso Bandit' abstract: 'We consider a stochastic contextual bandit problem where the dimension $d$ of the feature vectors is potentially large; however, only a sparse subset of features of cardinality $s_0 \ll d$ affects the reward function. Essentially all existing algorithms for sparse bandits require a priori knowledge of the value of the sparsity index $s_0$. This knowledge is almost never available in practice, and misspecification of this parameter can lead to severe deterioration in the performance of existing methods. 
The main contribution of this paper is to propose an algorithm that does not require prior knowledge of the sparsity index $s_0$ and establish tight regret bounds on its performance under mild conditions. We also comprehensively evaluate our proposed algorithm numerically and show that it consistently outperforms existing methods, even when the correct sparsity index is revealed to them but is kept hidden from our algorithm.' volume: 139 URL: https://proceedings.mlr.press/v139/oh21a.html PDF: http://proceedings.mlr.press/v139/oh21a/oh21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-oh21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Min-Hwan family: Oh - given: Garud family: Iyengar - given: Assaf family: Zeevi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8271-8280 id: oh21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8271 lastpage: 8280 published: 2021-07-01 00:00:00 +0000 - title: 'Autoencoder Image Interpolation by Shaping the Latent Space' abstract: 'One of the fascinating properties of deep learning is the ability of the network to reveal the underlying factors characterizing elements in datasets of different types. Autoencoders represent an effective approach for computing these factors. Autoencoders have been studied in the context of enabling interpolation between data points by decoding convex combinations of latent vectors. However, this interpolation often leads to artifacts or produces unrealistic results during reconstruction. We argue that these incongruities are due to the structure of the latent space and to the fact that such naively interpolated latent vectors deviate from the data manifold. In this paper, we propose a regularization technique that shapes the latent representation to follow a manifold that is consistent with the training images and that forces the manifold to be smooth and locally convex. This regularization not only enables faithful interpolation between data points, as we show herein but can also be used as a general regularization technique to avoid overfitting or to produce new samples for data augmentation.' volume: 139 URL: https://proceedings.mlr.press/v139/oring21a.html PDF: http://proceedings.mlr.press/v139/oring21a/oring21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-oring21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Alon family: Oring - given: Zohar family: Yakhini - given: Yacov family: Hel-Or editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8281-8290 id: oring21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8281 lastpage: 8290 published: 2021-07-01 00:00:00 +0000 - title: 'Generalization Guarantees for Neural Architecture Search with Train-Validation Split' abstract: 'Neural Architecture Search (NAS) is a popular method for automatically designing optimized deep-learning architectures. NAS methods commonly use bilevel optimization where one optimizes the weights over the training data (lower-level problem) and hyperparameters - such as the architecture - over the validation data (upper-level problem). This paper explores the statistical aspects of such problems with train-validation splits. 
In practice, the lower-level problem is often overparameterized and can easily achieve zero loss. Thus, a-priori, it seems impossible to distinguish the right hyperparameters based on training loss alone which motivates a better understanding of train-validation split. To this aim, we first show that refined properties of the validation loss such as risk and hyper-gradients are indicative of those of the true test loss and help prevent overfitting with a near-minimal validation sample size. Importantly, this is established for continuous search spaces which are relevant for differentiable search schemes. We then establish generalization bounds for NAS problems with an emphasis on an activation search problem and gradient-based methods. Finally, we show rigorous connections between NAS and low-rank matrix learning which leads to algorithmic insights where the solution of the upper problem can be accurately learned via spectral methods to achieve near-minimal risk.' volume: 139 URL: https://proceedings.mlr.press/v139/oymak21a.html PDF: http://proceedings.mlr.press/v139/oymak21a/oymak21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-oymak21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Samet family: Oymak - given: Mingchen family: Li - given: Mahdi family: Soltanolkotabi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8291-8301 id: oymak21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8291 lastpage: 8301 published: 2021-07-01 00:00:00 +0000 - title: 'Vector Quantized Models for Planning' abstract: 'Recent developments in the field of model-based RL have proven successful in a range of environments, especially ones where planning is essential. However, such successes have been limited to deterministic fully-observed environments. We present a new approach that handles stochastic and partially-observable environments. Our key insight is to use discrete autoencoders to capture the multiple possible effects of an action in a stochastic environment. We use a stochastic variant of Monte Carlo tree search to plan over both the agent’s actions and the discrete latent variables representing the environment’s response. Our approach significantly outperforms an offline version of MuZero on a stochastic interpretation of chess where the opponent is considered part of the environment. We also show that our approach scales to DeepMind Lab, a first-person 3D environment with large visual observations and partial observability.' 
volume: 139 URL: https://proceedings.mlr.press/v139/ozair21a.html PDF: http://proceedings.mlr.press/v139/ozair21a/ozair21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ozair21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sherjil family: Ozair - given: Yazhe family: Li - given: Ali family: Razavi - given: Ioannis family: Antonoglou - given: Aaron family: Van Den Oord - given: Oriol family: Vinyals editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8302-8313 id: ozair21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8302 lastpage: 8313 published: 2021-07-01 00:00:00 +0000 - title: 'Training Adversarially Robust Sparse Networks via Bayesian Connectivity Sampling' abstract: 'Deep neural networks have been shown to be susceptible to adversarial attacks. This lack of adversarial robustness is even more pronounced when models are compressed in order to meet hardware limitations. Hence, if adversarial robustness is an issue, training of sparsely connected networks necessitates considering adversarially robust sparse learning. Motivated by the efficient and stable computational function of the brain in the presence of a highly dynamic synaptic connectivity structure, we propose an intrinsically sparse rewiring approach to train neural networks with state-of-the-art robust learning objectives under high sparsity. Importantly, in contrast to previously proposed pruning techniques, our approach satisfies global connectivity constraints throughout robust optimization, i.e., it does not require dense pre-training followed by pruning. Based on a Bayesian posterior sampling principle, a network rewiring process simultaneously learns the sparse connectivity structure and the robustness-accuracy trade-off based on the adversarial learning objective. Although our networks are sparsely connected throughout the whole training process, our experimental benchmark evaluations show that their performance is superior to recently proposed robustness-aware network pruning methods which start from densely connected networks.' volume: 139 URL: https://proceedings.mlr.press/v139/ozdenizci21a.html PDF: http://proceedings.mlr.press/v139/ozdenizci21a/ozdenizci21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ozdenizci21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ozan family: Özdenizci - given: Robert family: Legenstein editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8314-8324 id: ozdenizci21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8314 lastpage: 8324 published: 2021-07-01 00:00:00 +0000 - title: 'Opening the Blackbox: Accelerating Neural Differential Equations by Regularizing Internal Solver Heuristics' abstract: 'Democratization of machine learning requires architectures that automatically adapt to new problems. Neural Differential Equations (NDEs) have emerged as a popular modeling framework by removing the need for ML practitioners to choose the number of layers in a recurrent model. While we can control the computational cost by choosing the number of layers in standard architectures, in NDEs the number of neural network evaluations for a forward pass can depend on the number of steps of the adaptive ODE solver. 
But, can we force the NDE to learn the version with the least steps while not increasing the training cost? Current strategies to overcome slow prediction require high order automatic differentiation, leading to significantly higher training time. We describe a novel regularization method that uses the internal cost heuristics of adaptive differential equation solvers combined with discrete adjoint sensitivities to guide the training process towards learning NDEs that are easier to solve. This approach opens up the blackbox numerical analysis behind the differential equation solver’s algorithm and directly uses its local error estimates and stiffness heuristics as cheap and accurate cost estimates. We incorporate our method without any change in the underlying NDE framework and show that our method extends beyond Ordinary Differential Equations to accommodate Neural Stochastic Differential Equations. We demonstrate how our approach can halve the prediction time and, unlike other methods which can increase the training time by an order of magnitude, we demonstrate similar reduction in training times. Together this showcases how the knowledge embedded within state-of-the-art equation solvers can be used to enhance machine learning.' volume: 139 URL: https://proceedings.mlr.press/v139/pal21a.html PDF: http://proceedings.mlr.press/v139/pal21a/pal21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-pal21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Avik family: Pal - given: Yingbo family: Ma - given: Viral family: Shah - given: Christopher V family: Rackauckas editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8325-8335 id: pal21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8325 lastpage: 8335 published: 2021-07-01 00:00:00 +0000 - title: 'RNN with Particle Flow for Probabilistic Spatio-temporal Forecasting' abstract: 'Spatio-temporal forecasting has numerous applications in analyzing wireless, traffic, and financial networks. Many classical statistical models often fall short in handling the complexity and high non-linearity present in time-series data. Recent advances in deep learning allow for better modelling of spatial and temporal dependencies. While most of these models focus on obtaining accurate point forecasts, they do not characterize the prediction uncertainty. In this work, we consider the time-series data as a random realization from a nonlinear state-space model and target Bayesian inference of the hidden states for probabilistic forecasting. We use particle flow as the tool for approximating the posterior distribution of the states, as it is shown to be highly effective in complex, high-dimensional settings. Thorough experimentation on several real world time-series datasets demonstrates that our approach provides better characterization of uncertainty while maintaining comparable accuracy to the state-of-the-art point forecasting methods.' 
volume: 139 URL: https://proceedings.mlr.press/v139/pal21b.html PDF: http://proceedings.mlr.press/v139/pal21b/pal21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-pal21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Soumyasundar family: Pal - given: Liheng family: Ma - given: Yingxue family: Zhang - given: Mark family: Coates editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8336-8348 id: pal21b issued: date-parts: - 2021 - 7 - 1 firstpage: 8336 lastpage: 8348 published: 2021-07-01 00:00:00 +0000 - title: 'Inference for Network Regression Models with Community Structure' abstract: 'Network regression models, where the outcome comprises the valued edge in a network and the predictors are actor or dyad-level covariates, are used extensively in the social and biological sciences. Valid inference relies on accurately modeling the residual dependencies among the relations. Frequently, homogeneity assumptions are placed on the errors; such assumptions are commonly incorrect and ignore the critical natural clustering of the actors. In this work, we present a novel regression modeling framework that models the errors as resulting from a community-based dependence structure and exploits the subsequent exchangeability properties of the error distribution to obtain parsimonious standard errors for regression parameters.' volume: 139 URL: https://proceedings.mlr.press/v139/pan21a.html PDF: http://proceedings.mlr.press/v139/pan21a/pan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-pan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mengjie family: Pan - given: Tyler family: Mccormick - given: Bailey family: Fosdick editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8349-8358 id: pan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8349 lastpage: 8358 published: 2021-07-01 00:00:00 +0000 - title: 'Latent Space Energy-Based Model of Symbol-Vector Coupling for Text Generation and Classification' abstract: 'We propose a latent space energy-based prior model for text generation and classification. The model stands on a generator network that generates the text sequence based on a continuous latent vector. The energy term of the prior model couples a continuous latent vector and a symbolic one-hot vector, so that the discrete category can be inferred from the observed example based on the continuous latent vector. Such a latent space coupling naturally enables incorporation of information bottleneck regularization to encourage the continuous latent vector to extract information from the observed example that is informative of the underlying category. In our learning method, the symbol-vector coupling, the generator network and the inference network are learned jointly. Our model can be learned in an unsupervised setting where no category labels are provided. It can also be learned in a semi-supervised setting where category labels are provided for a subset of training examples. Our experiments demonstrate that the proposed model learns a well-structured and meaningful latent space, which (1) guides the generator to generate text with high quality, diversity, and interpretability, and (2) effectively classifies text.' 
volume: 139 URL: https://proceedings.mlr.press/v139/pang21a.html PDF: http://proceedings.mlr.press/v139/pang21a/pang21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-pang21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Bo family: Pang - given: Ying Nian family: Wu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8359-8370 id: pang21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8359 lastpage: 8370 published: 2021-07-01 00:00:00 +0000 - title: 'Leveraging Good Representations in Linear Contextual Bandits' abstract: 'The linear contextual bandit literature is mostly focused on the design of efficient learning algorithms for a given representation. However, a contextual bandit problem may admit multiple linear representations, each one with different characteristics that directly impact the regret of the learning algorithm. In particular, recent works showed that there exist “good” representations for which constant problem-dependent regret can be achieved. In this paper, we first provide a systematic analysis of the different definitions of “good” representations proposed in the literature. We then propose a novel selection algorithm able to adapt to the best representation in a set of $M$ candidates. We show that the regret is indeed never worse than the regret obtained by running \textsc{LinUCB} on the best representation (up to a $\ln M$ factor). As a result, our algorithm achieves constant regret if a “good” representation is available in the set. Furthermore, we show that the algorithm may still achieve constant regret by implicitly constructing a “good” representation, even when none of the initial representations is “good”. Finally, we validate our theoretical findings in a number of standard contextual bandit problems.' volume: 139 URL: https://proceedings.mlr.press/v139/papini21a.html PDF: http://proceedings.mlr.press/v139/papini21a/papini21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-papini21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Matteo family: Papini - given: Andrea family: Tirinzoni - given: Marcello family: Restelli - given: Alessandro family: Lazaric - given: Matteo family: Pirotta editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8371-8380 id: papini21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8371 lastpage: 8380 published: 2021-07-01 00:00:00 +0000 - title: 'Wasserstein Distributional Normalization For Robust Distributional Certification of Noisy Labeled Data' abstract: 'We propose a novel Wasserstein distributional normalization method that can classify noisy labeled data accurately. Recently, noisy labels have been successfully handled based on small-loss criteria, but have not been clearly understood from a theoretical point of view. In this paper, we address this problem by adopting distributionally robust optimization (DRO). In particular, we present a theoretical investigation of the distributional relationship between uncertain and certain samples based on the small-loss criteria. Our method takes advantage of this relationship to exploit useful information from uncertain samples. 
To this end, we normalize uncertain samples into the robustly certified region by introducing the non-parametric Ornstein-Uhlenbeck type of Wasserstein gradient flows called Wasserstein distributional normalization, which is cheap and fast to implement. We verify that network confidence and distributional certification are fundamentally correlated and show the concentration inequality when the network escapes from over-parameterization. Experimental results demonstrate that our non-parametric classification method outperforms other parametric baselines on the Clothing1M and CIFAR-10/100 datasets when the data have diverse noisy labels.' volume: 139 URL: https://proceedings.mlr.press/v139/park21a.html PDF: http://proceedings.mlr.press/v139/park21a/park21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-park21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sung Woo family: Park - given: Junseok family: Kwon editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8381-8390 id: park21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8381 lastpage: 8390 published: 2021-07-01 00:00:00 +0000 - title: 'Unsupervised Representation Learning via Neural Activation Coding' abstract: 'We present neural activation coding (NAC) as a novel approach for learning deep representations from unlabeled data for downstream applications. We argue that the deep encoder should maximize its nonlinear expressivity on the data for downstream predictors to take full advantage of its representation power. To this end, NAC maximizes the mutual information between activation patterns of the encoder and the data over a noisy communication channel. We show that learning for a noise-robust activation code increases the number of distinct linear regions of ReLU encoders, hence the maximum nonlinear expressivity. More interestingly, NAC learns both continuous and discrete representations of data, which we respectively evaluate on two downstream tasks: (i) linear classification on CIFAR-10 and ImageNet-1K and (ii) nearest neighbor retrieval on CIFAR-10 and FLICKR-25K. Empirical results show that NAC attains better or comparable performance on both tasks over recent baselines including SimCLR and DistillHash. In addition, NAC pretraining provides significant benefits to the training of deep generative models. Our code is available at https://github.com/yookoon/nac.' 
volume: 139 URL: https://proceedings.mlr.press/v139/park21b.html PDF: http://proceedings.mlr.press/v139/park21b/park21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-park21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yookoon family: Park - given: Sangho family: Lee - given: Gunhee family: Kim - given: David family: Blei editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8391-8400 id: park21b issued: date-parts: - 2021 - 7 - 1 firstpage: 8391 lastpage: 8400 published: 2021-07-01 00:00:00 +0000 - title: 'Conditional Distributional Treatment Effect with Kernel Conditional Mean Embeddings and U-Statistic Regression' abstract: 'We propose to analyse the conditional distributional treatment effect (CoDiTE), which, in contrast to the more common conditional average treatment effect (CATE), is designed to encode a treatment’s distributional aspects beyond the mean. We first introduce a formal definition of the CoDiTE associated with a distance function between probability measures. Then we discuss the CoDiTE associated with the maximum mean discrepancy via kernel conditional mean embeddings, which, coupled with a hypothesis test, tells us whether there is any conditional distributional effect of the treatment. Finally, we investigate what kind of conditional distributional effect the treatment has, both in an exploratory manner via the conditional witness function, and in a quantitative manner via U-statistic regression, generalising the CATE to higher-order moments. Experiments on synthetic, semi-synthetic and real datasets demonstrate the merits of our approach.' volume: 139 URL: https://proceedings.mlr.press/v139/park21c.html PDF: http://proceedings.mlr.press/v139/park21c/park21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-park21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Junhyung family: Park - given: Uri family: Shalit - given: Bernhard family: Schölkopf - given: Krikamol family: Muandet editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8401-8412 id: park21c issued: date-parts: - 2021 - 7 - 1 firstpage: 8401 lastpage: 8412 published: 2021-07-01 00:00:00 +0000 - title: 'Generative Adversarial Networks for Markovian Temporal Dynamics: Stochastic Continuous Data Generation' abstract: 'In this paper, we present a novel generative adversarial network (GAN) that can describe Markovian temporal dynamics. To generate stochastic sequential data, we introduce a novel stochastic differential equation-based conditional generator and spatial-temporal constrained discriminator networks. To stabilize the learning dynamics of the min-max type of the GAN objective function, we propose well-posed constraint terms for both networks. We also propose a novel conditional Markov Wasserstein distance to induce a pathwise Wasserstein distance. The experimental results demonstrate that our method outperforms state-of-the-art methods using several different types of data.' 
volume: 139 URL: https://proceedings.mlr.press/v139/park21d.html PDF: http://proceedings.mlr.press/v139/park21d/park21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-park21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sung Woo family: Park - given: Dong Wook family: Shu - given: Junseok family: Kwon editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8413-8421 id: park21d issued: date-parts: - 2021 - 7 - 1 firstpage: 8413 lastpage: 8421 published: 2021-07-01 00:00:00 +0000 - title: 'Optimal Counterfactual Explanations in Tree Ensembles' abstract: 'Counterfactual explanations are usually generated through heuristics that are sensitive to the search’s initial conditions. The absence of guarantees of performance and robustness hinders trustworthiness. In this paper, we take a disciplined approach towards counterfactual explanations for tree ensembles. We advocate for a model-based search aiming at "optimal" explanations and propose efficient mixed-integer programming approaches. We show that isolation forests can be modeled within our framework to focus the search on plausible explanations with a low outlier score. We provide comprehensive coverage of additional constraints that model important objectives, heterogeneous data types, structural constraints on the feature space, along with resource and actionability restrictions. Our experimental analyses demonstrate that the proposed search approach requires a computational effort that is orders of magnitude smaller than previous mathematical programming algorithms. It scales up to large data sets and tree ensembles, where it provides, within seconds, systematic explanations grounded on well-defined models solved to optimality.' volume: 139 URL: https://proceedings.mlr.press/v139/parmentier21a.html PDF: http://proceedings.mlr.press/v139/parmentier21a/parmentier21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-parmentier21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Axel family: Parmentier - given: Thibaut family: Vidal editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8422-8431 id: parmentier21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8422 lastpage: 8431 published: 2021-07-01 00:00:00 +0000 - title: 'PHEW : Constructing Sparse Networks that Learn Fast and Generalize Well without Training Data' abstract: 'Methods that sparsify a network at initialization are important in practice because they greatly improve the efficiency of both learning and inference. Our work is based on a recently proposed decomposition of the Neural Tangent Kernel (NTK) that has decoupled the dynamics of the training process into a data-dependent component and an architecture-dependent kernel – the latter referred to as Path Kernel. That work has shown how to design sparse neural networks for faster convergence, without any training data, using the Synflow-L2 algorithm. We first show that even though Synflow-L2 is optimal in terms of convergence, for a given network density, it results in sub-networks with “bottleneck” (narrow) layers – leading to poor performance as compared to other data-agnostic methods that use the same number of parameters. 
Then we propose a new method to construct sparse networks, without any training data, referred to as Paths with Higher-Edge Weights (PHEW). PHEW is a probabilistic network formation method based on biased random walks that only depends on the initial weights. It has similar path kernel properties to Synflow-L2, but it generates much wider layers, resulting in better generalization and performance. PHEW achieves significant improvements over the data-independent SynFlow and SynFlow-L2 methods at a wide range of network densities.' volume: 139 URL: https://proceedings.mlr.press/v139/patil21a.html PDF: http://proceedings.mlr.press/v139/patil21a/patil21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-patil21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shreyas Malakarjun family: Patil - given: Constantine family: Dovrolis editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8432-8442 id: patil21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8432 lastpage: 8442 published: 2021-07-01 00:00:00 +0000 - title: 'CombOptNet: Fit the Right NP-Hard Problem by Learning Integer Programming Constraints' abstract: 'Bridging logical and algorithmic reasoning with modern machine learning techniques is a fundamental challenge with potentially transformative impact. On the algorithmic side, many NP-hard problems can be expressed as integer programs, in which the constraints play the role of their ‘combinatorial specification’. In this work, we aim to integrate integer programming solvers into neural network architectures as layers capable of learning both the cost terms and the constraints. The resulting end-to-end trainable architectures jointly extract features from raw data and solve a suitable (learned) combinatorial problem with state-of-the-art integer programming solvers. We demonstrate the potential of such layers with an extensive performance analysis on synthetic data and with a demonstration on a competitive computer vision keypoint matching benchmark.' volume: 139 URL: https://proceedings.mlr.press/v139/paulus21a.html PDF: http://proceedings.mlr.press/v139/paulus21a/paulus21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-paulus21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Anselm family: Paulus - given: Michal family: Rolinek - given: Vit family: Musil - given: Brandon family: Amos - given: Georg family: Martius editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8443-8453 id: paulus21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8443 lastpage: 8453 published: 2021-07-01 00:00:00 +0000 - title: 'Ensemble Bootstrapping for Q-Learning' abstract: 'Q-learning (QL), a common reinforcement learning algorithm, suffers from over-estimation bias due to the maximization term in the optimal Bellman operator. This bias may lead to sub-optimal behavior. Double-Q-learning tackles this issue by utilizing two estimators, yet results in an under-estimation bias. Similar to over-estimation in Q-learning, in certain scenarios, the under-estimation bias may degrade performance. In this work, we introduce a new bias-reduced algorithm called Ensemble Bootstrapped Q-Learning (EBQL), a natural extension of Double-Q-learning to ensembles. 
We analyze our method both theoretically and empirically. Theoretically, we prove that EBQL-like updates yield lower MSE when estimating the maximal mean of a set of independent random variables. Empirically, we show that there exist domains where both over- and under-estimation result in sub-optimal performance. Finally, we demonstrate the superior performance of a deep RL variant of EBQL over other deep QL algorithms for a suite of ATARI games.' volume: 139 URL: https://proceedings.mlr.press/v139/peer21a.html PDF: http://proceedings.mlr.press/v139/peer21a/peer21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-peer21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Oren family: Peer - given: Chen family: Tessler - given: Nadav family: Merlis - given: Ron family: Meir editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8454-8463 id: peer21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8454 lastpage: 8463 published: 2021-07-01 00:00:00 +0000 - title: 'Homomorphic Sensing: Sparsity and Noise' abstract: '\emph{Unlabeled sensing} is a recent problem encompassing many data science and engineering applications and typically formulated as solving linear equations whose right-hand side vector has undergone an unknown permutation. It was generalized to the \emph{homomorphic sensing} problem by replacing the unknown permutation with an unknown linear map from a given finite set of linear maps. In this paper we present tighter and simpler conditions for the homomorphic sensing problem to admit a unique solution. We show that this solution is locally stable under noise, while under a sparsity assumption it remains unique under less demanding conditions. Sparsity in the context of unlabeled sensing leads to the problem of \textit{unlabeled compressed sensing}, and a consequence of our general theory is the existence under mild conditions of a unique sparsest solution. On the algorithmic level, we solve unlabeled compressed sensing by an iterative algorithm validated by synthetic data experiments. Finally, under the unifying homomorphic sensing framework we connect unlabeled sensing to other important practical problems.' volume: 139 URL: https://proceedings.mlr.press/v139/peng21a.html PDF: http://proceedings.mlr.press/v139/peng21a/peng21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-peng21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Liangzu family: Peng - given: Boshi family: Wang - given: Manolis family: Tsakiris editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8464-8475 id: peng21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8464 lastpage: 8475 published: 2021-07-01 00:00:00 +0000 - title: 'How could Neural Networks understand Programs?' abstract: 'Semantic understanding of programs is a fundamental problem for programming language processing (PLP). Recent works that learn representations of code based on pre-training techniques in NLP have pushed the frontiers in this direction. However, the semantics of PL and NL have essential differences. 
If these differences are ignored, we believe it is difficult to build a model that better understands programs, whether by directly applying off-the-shelf NLP pre-training techniques to the source code or by adding heuristic features to the model. In fact, the semantics of a program can be rigorously defined by formal semantics in PL theory. For example, the operational semantics describes the meaning of a valid program as updating the environment (i.e., the memory address-value function) through fundamental operations, such as memory I/O and conditional branching. Inspired by this, we propose a novel program semantics learning paradigm in which the model learns from information composed of (1) representations that align well with the fundamental operations in operational semantics, and (2) information about environment transitions, which is indispensable for program understanding. To validate our proposal, we present a hierarchical Transformer-based pre-training model called OSCAR to better facilitate the understanding of programs. OSCAR learns from intermediate representation (IR) and an encoded representation derived from static analysis, which are used for representing the fundamental operations and approximating the environment transitions, respectively. OSCAR empirically shows an outstanding capability for program semantics understanding on many practical software engineering tasks. Code and models are released at: \url{https://github.com/pdlan/OSCAR}.' volume: 139 URL: https://proceedings.mlr.press/v139/peng21b.html PDF: http://proceedings.mlr.press/v139/peng21b/peng21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-peng21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dinglan family: Peng - given: Shuxin family: Zheng - given: Yatao family: Li - given: Guolin family: Ke - given: Di family: He - given: Tie-Yan family: Liu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8476-8486 id: peng21b issued: date-parts: - 2021 - 7 - 1 firstpage: 8476 lastpage: 8486 published: 2021-07-01 00:00:00 +0000 - title: 'Privacy-Preserving Video Classification with Convolutional Neural Networks' abstract: 'Many video classification applications require access to personal data, thereby posing an invasive security risk to the users’ privacy. We propose a privacy-preserving implementation of single-frame method based video classification with convolutional neural networks that allows a party to infer a label from a video without requiring the video owner to disclose their video to other entities in an unencrypted manner. Similarly, our approach removes the requirement for the classifier owner to reveal their model parameters to outside entities in plaintext. To this end, we combine existing Secure Multi-Party Computation (MPC) protocols for private image classification with our novel MPC protocols for oblivious single-frame selection and secure label aggregation across frames. The result is an end-to-end privacy-preserving video classification pipeline. We evaluate our proposed solution in an application for private human emotion recognition. 
Our results across a variety of security settings, spanning honest and dishonest majority configurations of the computing parties, and for both passive and active adversaries, demonstrate that videos can be classified with state-of-the-art accuracy, and without leaking sensitive user information.' volume: 139 URL: https://proceedings.mlr.press/v139/pentyala21a.html PDF: http://proceedings.mlr.press/v139/pentyala21a/pentyala21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-pentyala21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sikha family: Pentyala - given: Rafael family: Dowsley - given: Martine family: De Cock editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8487-8499 id: pentyala21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8487 lastpage: 8499 published: 2021-07-01 00:00:00 +0000 - title: 'Rissanen Data Analysis: Examining Dataset Characteristics via Description Length' abstract: 'We introduce a method to determine if a certain capability helps to achieve an accurate model of given data. We view labels as being generated from the inputs by a program composed of subroutines with different capabilities, and we posit that a subroutine is useful if and only if the minimal program that invokes it is shorter than the one that does not. Since minimum program length is uncomputable, we instead estimate the labels’ minimum description length (MDL) as a proxy, giving us a theoretically-grounded method for analyzing dataset characteristics. We call the method Rissanen Data Analysis (RDA) after the father of MDL, and we showcase its applicability on a wide variety of settings in NLP, ranging from evaluating the utility of generating subquestions before answering a question, to analyzing the value of rationales and explanations, to investigating the importance of different parts of speech, and uncovering dataset gender bias.' volume: 139 URL: https://proceedings.mlr.press/v139/perez21a.html PDF: http://proceedings.mlr.press/v139/perez21a/perez21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-perez21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ethan family: Perez - given: Douwe family: Kiela - given: Kyunghyun family: Cho editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8500-8513 id: perez21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8500 lastpage: 8513 published: 2021-07-01 00:00:00 +0000 - title: 'Modelling Behavioural Diversity for Learning in Open-Ended Games' abstract: 'Promoting behavioural diversity is critical for solving games with non-transitive dynamics where strategic cycles exist, and there is no consistent winner (e.g., Rock-Paper-Scissors). Yet, there is a lack of rigorous treatment for defining diversity and constructing diversity-aware learning dynamics. In this work, we offer a geometric interpretation of behavioural diversity in games and introduce a novel diversity metric based on \emph{determinantal point processes} (DPP). By incorporating the diversity metric into best-response dynamics, we develop \emph{diverse fictitious play} and \emph{diverse policy-space response oracle} for solving normal-form games and open-ended games. 
We prove the uniqueness of the diverse best response and the convergence of our algorithms on two-player games. Importantly, we show that maximising the DPP-based diversity metric is guaranteed to enlarge the \emph{gamescape} – convex polytopes spanned by agents’ mixtures of strategies. To validate our diversity-aware solvers, we test on tens of games that show strong non-transitivity. Results suggest that our methods achieve at least the same, and in most games lower, exploitability than PSRO solvers by finding effective and diverse strategies.' volume: 139 URL: https://proceedings.mlr.press/v139/perez-nieves21a.html PDF: http://proceedings.mlr.press/v139/perez-nieves21a/perez-nieves21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-perez-nieves21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Nicolas family: Perez-Nieves - given: Yaodong family: Yang - given: Oliver family: Slumbers - given: David H family: Mguni - given: Ying family: Wen - given: Jun family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8514-8524 id: perez-nieves21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8514 lastpage: 8524 published: 2021-07-01 00:00:00 +0000 - title: 'From Poincaré Recurrence to Convergence in Imperfect Information Games: Finding Equilibrium via Regularization' abstract: 'In this paper we investigate the Follow the Regularized Leader dynamics in sequential imperfect information games (IIG). We generalize existing results of Poincaré recurrence from normal-form games to zero-sum two-player imperfect information games and other sequential game settings. We then investigate how adapting the reward (by adding a regularization term) of the game can give strong convergence guarantees in monotone games. We continue by showing how this reward adaptation technique can be leveraged to build algorithms that converge exactly to the Nash equilibrium. Finally, we show how these insights can be directly used to build state-of-the-art model-free algorithms for zero-sum two-player Imperfect Information Games (IIG).' volume: 139 URL: https://proceedings.mlr.press/v139/perolat21a.html PDF: http://proceedings.mlr.press/v139/perolat21a/perolat21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-perolat21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Julien family: Perolat - given: Remi family: Munos - given: Jean-Baptiste family: Lespiau - given: Shayegan family: Omidshafiei - given: Mark family: Rowland - given: Pedro family: Ortega - given: Neil family: Burch - given: Thomas family: Anthony - given: David family: Balduzzi - given: Bart family: De Vylder - given: Georgios family: Piliouras - given: Marc family: Lanctot - given: Karl family: Tuyls editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8525-8535 id: perolat21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8525 lastpage: 8535 published: 2021-07-01 00:00:00 +0000 - title: 'Spectral Smoothing Unveils Phase Transitions in Hierarchical Variational Autoencoders' abstract: 'Variational autoencoders with deep hierarchies of stochastic layers have been known to suffer from the problem of posterior collapse, where the top layers fall back to the prior and become independent of input. 
We suggest that the hierarchical VAE objective explicitly includes the variance of the function parameterizing the mean and variance of the latent Gaussian distribution, which is itself often a high-variance function. Building on this, we generalize VAE neural networks by incorporating a smoothing parameter, motivated by Gaussian analysis, that reduces higher-frequency components and consequently the variance in parameterizing functions, and we show that this can help to solve the problem of posterior collapse. We further show that under such smoothing the VAE loss exhibits a phase transition, where the top layer KL divergence sharply drops to zero at a critical value of the smoothing parameter that is similar for the same model across datasets. We validate the phenomenon across model configurations and datasets.' volume: 139 URL: https://proceedings.mlr.press/v139/pervez21a.html PDF: http://proceedings.mlr.press/v139/pervez21a/pervez21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-pervez21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Adeel family: Pervez - given: Efstratios family: Gavves editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8536-8545 id: pervez21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8536 lastpage: 8545 published: 2021-07-01 00:00:00 +0000 - title: 'Differentiable Sorting Networks for Scalable Sorting and Ranking Supervision' abstract: 'Sorting and ranking supervision is a method for training neural networks end-to-end based on ordering constraints. That is, the ground truth order of sets of samples is known, while their absolute values remain unsupervised. For that, we propose differentiable sorting networks by relaxing their pairwise conditional swap operations. To address the problems of vanishing gradients and extensive blurring that arise with larger numbers of layers, we propose mapping activations to regions with moderate gradients. We consider odd-even as well as bitonic sorting networks, which outperform existing relaxations of the sorting operation. We show that bitonic sorting networks can achieve stable training on large input sets of up to 1024 elements.' volume: 139 URL: https://proceedings.mlr.press/v139/petersen21a.html PDF: http://proceedings.mlr.press/v139/petersen21a/petersen21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-petersen21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Felix family: Petersen - given: Christian family: Borgelt - given: Hilde family: Kuehne - given: Oliver family: Deussen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8546-8555 id: petersen21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8546 lastpage: 8555 published: 2021-07-01 00:00:00 +0000 - title: 'Megaverse: Simulating Embodied Agents at One Million Experiences per Second' abstract: 'We present Megaverse, a new 3D simulation platform for reinforcement learning and embodied AI research. The efficient design of our engine enables physics-based simulation with high-dimensional egocentric observations at more than 1,000,000 actions per second on a single 8-GPU node. Megaverse is up to 70x faster than DeepMind Lab in fully-shaded 3D scenes with interactive objects. 
We achieve this high simulation performance by leveraging batched simulation, thereby taking full advantage of the massive parallelism of modern GPUs. We use Megaverse to build a new benchmark that consists of several single-agent and multi-agent tasks covering a variety of cognitive challenges. We evaluate model-free RL on this benchmark to provide baselines and facilitate future research.' volume: 139 URL: https://proceedings.mlr.press/v139/petrenko21a.html PDF: http://proceedings.mlr.press/v139/petrenko21a/petrenko21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-petrenko21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Aleksei family: Petrenko - given: Erik family: Wijmans - given: Brennan family: Shacklett - given: Vladlen family: Koltun editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8556-8566 id: petrenko21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8556 lastpage: 8566 published: 2021-07-01 00:00:00 +0000 - title: 'Towards Practical Mean Bounds for Small Samples' abstract: 'Historically, to bound the mean for small sample sizes, practitioners have had to choose between using methods with unrealistic assumptions about the unknown distribution (e.g., Gaussianity) and methods like Hoeffding’s inequality that use weaker assumptions but produce much looser (wider) intervals. In 1969, \citet{Anderson1969} proposed a mean confidence interval strictly better than or equal to Hoeffding’s, whose only assumption is that the distribution’s support is contained in an interval $[a,b]$. For the first time since then, we present a new family of bounds that compares favorably to Anderson’s. We prove that each bound in the family has {\em guaranteed coverage}, i.e., it holds with probability at least $1-\alpha$ for all distributions on an interval $[a,b]$. Furthermore, one of the bounds is tighter than or equal to Anderson’s for all samples. In simulations, we show that for many distributions, the gain over Anderson’s bound is substantial.' volume: 139 URL: https://proceedings.mlr.press/v139/phan21a.html PDF: http://proceedings.mlr.press/v139/phan21a/phan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-phan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: My family: Phan - given: Philip family: Thomas - given: Erik family: Learned-Miller editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8567-8576 id: phan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8567 lastpage: 8576 published: 2021-07-01 00:00:00 +0000 - title: 'DG-LMC: A Turn-key and Scalable Synchronous Distributed MCMC Algorithm via Langevin Monte Carlo within Gibbs' abstract: 'Performing reliable Bayesian inference on a big data scale is becoming a keystone in the modern era of machine learning. A workhorse class of methods to achieve this task is Markov chain Monte Carlo (MCMC) algorithms, and their design to handle distributed datasets has been the subject of many works. However, existing methods are either not completely reliable or not computationally efficient. In this paper, we propose to fill this gap in the case where the dataset is partitioned and stored on computing nodes within a cluster under a master/slaves architecture. 
We derive a user-friendly centralised distributed MCMC algorithm with provable scaling in high-dimensional settings. We illustrate the relevance of the proposed methodology on both synthetic and real data experiments.' volume: 139 URL: https://proceedings.mlr.press/v139/plassier21a.html PDF: http://proceedings.mlr.press/v139/plassier21a/plassier21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-plassier21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Vincent family: Plassier - given: Maxime family: Vono - given: Alain family: Durmus - given: Eric family: Moulines editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8577-8587 id: plassier21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8577 lastpage: 8587 published: 2021-07-01 00:00:00 +0000 - title: 'GeomCA: Geometric Evaluation of Data Representations' abstract: 'Evaluating the quality of learned representations without relying on a downstream task remains one of the challenges in representation learning. In this work, we present the Geometric Component Analysis (GeomCA) algorithm that evaluates representation spaces based on their geometric and topological properties. GeomCA can be applied to representations of any dimension, independently of the model that generated them. We demonstrate its applicability by analyzing representations obtained from a variety of scenarios, such as contrastive learning models, generative models and supervised learning models.' volume: 139 URL: https://proceedings.mlr.press/v139/poklukar21a.html PDF: http://proceedings.mlr.press/v139/poklukar21a/poklukar21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-poklukar21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Petra family: Poklukar - given: Anastasiia family: Varava - given: Danica family: Kragic editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8588-8598 id: poklukar21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8588 lastpage: 8598 published: 2021-07-01 00:00:00 +0000 - title: 'Grad-TTS: A Diffusion Probabilistic Model for Text-to-Speech' abstract: 'Recently, denoising diffusion probabilistic models and generative score matching have shown high potential in modelling complex data distributions, while stochastic calculus has provided a unified point of view on these techniques, allowing for flexible inference schemes. In this paper we introduce Grad-TTS, a novel text-to-speech model with a score-based decoder that produces mel-spectrograms by gradually transforming noise predicted by the encoder and aligned with the text input by means of Monotonic Alignment Search. The framework of stochastic differential equations helps us to generalize conventional diffusion probabilistic models to the case of reconstructing data from noise with different parameters and allows us to make this reconstruction flexible by explicitly controlling the trade-off between sound quality and inference speed. Subjective human evaluation shows that Grad-TTS is competitive with state-of-the-art text-to-speech approaches in terms of Mean Opinion Score.' 
volume: 139 URL: https://proceedings.mlr.press/v139/popov21a.html PDF: http://proceedings.mlr.press/v139/popov21a/popov21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-popov21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Vadim family: Popov - given: Ivan family: Vovk - given: Vladimir family: Gogoryan - given: Tasnima family: Sadekova - given: Mikhail family: Kudinov editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8599-8608 id: popov21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8599 lastpage: 8608 published: 2021-07-01 00:00:00 +0000 - title: 'Bias-Free Scalable Gaussian Processes via Randomized Truncations' abstract: 'Scalable Gaussian Process methods are computationally attractive, yet introduce modeling biases that require rigorous study. This paper analyzes two common techniques: early truncated conjugate gradients (CG) and random Fourier features (RFF). We find that both methods introduce a systematic bias on the learned hyperparameters: CG tends to underfit while RFF tends to overfit. We address these issues using randomized truncation estimators that eliminate bias in exchange for increased variance. In the case of RFF, we show that the bias-to-variance conversion is indeed a trade-off: the additional variance proves detrimental to optimization. However, in the case of CG, our unbiased learning procedure meaningfully outperforms its biased counterpart with minimal additional computation. Our code is available at https://github.com/cunningham-lab/RTGPS.' volume: 139 URL: https://proceedings.mlr.press/v139/potapczynski21a.html PDF: http://proceedings.mlr.press/v139/potapczynski21a/potapczynski21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-potapczynski21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Andres family: Potapczynski - given: Luhuan family: Wu - given: Dan family: Biderman - given: Geoff family: Pleiss - given: John P family: Cunningham editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8609-8619 id: potapczynski21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8609 lastpage: 8619 published: 2021-07-01 00:00:00 +0000 - title: 'Dense for the Price of Sparse: Improved Performance of Sparsely Initialized Networks via a Subspace Offset' abstract: 'That neural networks may be pruned to high sparsities and retain high accuracy is well established. Recent research efforts focus on pruning immediately after initialization so as to allow the computational savings afforded by sparsity to extend to the training process. In this work, we introduce a new ‘DCT plus Sparse’ layer architecture, which maintains information propagation and trainability even with as little as 0.01% trainable parameters remaining. We show that standard training of networks built with these layers, and pruned at initialization, achieves state-of-the-art accuracy for extreme sparsities on a variety of benchmark network architectures and datasets. Moreover, these results are achieved using only simple heuristics to determine the locations of the trainable parameters in the network, and thus without having to initially store or compute with the full, unpruned network, as is required by competing prune-at-initialization algorithms. 
Switching from standard sparse layers to DCT plus Sparse layers does not increase the storage footprint of a network and incurs only a small additional computational overhead.' volume: 139 URL: https://proceedings.mlr.press/v139/price21a.html PDF: http://proceedings.mlr.press/v139/price21a/price21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-price21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ilan family: Price - given: Jared family: Tanner editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8620-8629 id: price21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8620 lastpage: 8629 published: 2021-07-01 00:00:00 +0000 - title: 'BANG: Bridging Autoregressive and Non-autoregressive Generation with Large Scale Pretraining' abstract: 'In this paper, we propose BANG, a new pretraining model to Bridge the gap between Autoregressive (AR) and Non-autoregressive (NAR) Generation. AR and NAR generation can be uniformly regarded as to what extent previous tokens can be attended, and BANG bridges AR and NAR generation through designing a novel model structure for large-scale pre-training. A pretrained BANG model can simultaneously support AR, NAR, and semi-NAR generation to meet different requirements. Experiments on question generation (SQuAD 1.1), summarization (XSum), and dialogue generation (PersonaChat) show that BANG improves NAR and semi-NAR performance significantly as well as attaining comparable performance with strong AR pretrained models. Compared with the semi-NAR strong baselines, BANG achieves absolute improvements of 14.01 and 5.24 in the overall scores of SQuAD 1.1 and XSum, respectively. In addition, BANG achieves absolute improvements of 10.73, 6.39, and 5.90 in the overall scores of SQuAD, XSUM, and PersonaChat compared with the NAR strong baselines, respectively. Our code will be made publicly available.' volume: 139 URL: https://proceedings.mlr.press/v139/qi21a.html PDF: http://proceedings.mlr.press/v139/qi21a/qi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-qi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Weizhen family: Qi - given: Yeyun family: Gong - given: Jian family: Jiao - given: Yu family: Yan - given: Weizhu family: Chen - given: Dayiheng family: Liu - given: Kewen family: Tang - given: Houqiang family: Li - given: Jiusheng family: Chen - given: Ruofei family: Zhang - given: Ming family: Zhou - given: Nan family: Duan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8630-8639 id: qi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8630 lastpage: 8639 published: 2021-07-01 00:00:00 +0000 - title: 'A Probabilistic Approach to Neural Network Pruning' abstract: 'Neural network pruning techniques reduce the number of parameters without compromising predicting ability of a network. Many algorithms have been developed for pruning both over-parameterized fully-connected networks (FCN) and convolutional neural networks (CNN), but analytical studies of capabilities and compression ratios of such pruned sub-networks are lacking. We theoretically study the performance of two pruning techniques (random and magnitude-based) on FCN and CNN. 
Given a target network, we provide a universal approach to bound the gap between a pruned and the target network in a probabilistic sense, which is the first study of this nature. The results establish that there exist pruned networks with expressive power within any specified bound from the target network and with a significant compression ratio.' volume: 139 URL: https://proceedings.mlr.press/v139/qian21a.html PDF: http://proceedings.mlr.press/v139/qian21a/qian21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-qian21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xin family: Qian - given: Diego family: Klabjan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8640-8649 id: qian21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8640 lastpage: 8649 published: 2021-07-01 00:00:00 +0000 - title: 'Global Prosody Style Transfer Without Text Transcriptions' abstract: 'Prosody plays an important role in characterizing the style of a speaker or an emotion, but most non-parallel voice or emotion style transfer algorithms do not convert any prosody information. Two major components of prosody are pitch and rhythm. Disentangling the prosody information, particularly the rhythm component, from the speech is challenging because it involves breaking the synchrony between the input speech and the disentangled speech representation. As a result, most existing prosody style transfer algorithms would need to rely on some form of text transcriptions to identify the content information, which confines their application to high-resource languages only. Recently, SpeechSplit has made sizeable progress towards unsupervised prosody style transfer, but it is unable to extract high-level global prosody style in an unsupervised manner. In this paper, we propose AutoPST, which can disentangle global prosody style from speech without relying on any text transcriptions. AutoPST is an Autoencoder-based Prosody Style Transfer framework with a thorough rhythm removal module guided by the self-expressive representation learning. Experiments on different style transfer tasks show that AutoPST can effectively convert prosody that correctly reflects the styles of the target domains.' volume: 139 URL: https://proceedings.mlr.press/v139/qian21b.html PDF: http://proceedings.mlr.press/v139/qian21b/qian21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-qian21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kaizhi family: Qian - given: Yang family: Zhang - given: Shiyu family: Chang - given: Jinjun family: Xiong - given: Chuang family: Gan - given: David family: Cox - given: Mark family: Hasegawa-Johnson editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8650-8660 id: qian21b issued: date-parts: - 2021 - 7 - 1 firstpage: 8650 lastpage: 8660 published: 2021-07-01 00:00:00 +0000 - title: 'Efficient Differentiable Simulation of Articulated Bodies' abstract: 'We present a method for efficient differentiable simulation of articulated bodies. This enables integration of articulated body dynamics into deep learning frameworks, and gradient-based optimization of neural networks that operate on articulated bodies. 
We derive the gradients of the contact solver using spatial algebra and the adjoint method. Our approach is an order of magnitude faster than autodiff tools. By only saving the initial states throughout the simulation process, our method reduces memory requirements by two orders of magnitude. We demonstrate the utility of efficient differentiable dynamics for articulated bodies in a variety of applications. We show that reinforcement learning with articulated systems can be accelerated using gradients provided by our method. In applications to control and inverse problems, gradient-based optimization enabled by our work accelerates convergence by more than an order of magnitude.' volume: 139 URL: https://proceedings.mlr.press/v139/qiao21a.html PDF: http://proceedings.mlr.press/v139/qiao21a/qiao21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-qiao21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yi-Ling family: Qiao - given: Junbang family: Liang - given: Vladlen family: Koltun - given: Ming C family: Lin editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8661-8671 id: qiao21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8661 lastpage: 8671 published: 2021-07-01 00:00:00 +0000 - title: 'Oneshot Differentially Private Top-k Selection' abstract: 'Being able to efficiently and accurately select the top-$k$ elements with differential privacy is an integral component of various private data analysis tasks. In this paper, we present the oneshot Laplace mechanism, which generalizes the well-known Report Noisy Max \cite{dwork2014algorithmic} mechanism to reporting noisy top-$k$ elements. We show that the oneshot Laplace mechanism with a noise level of $\widetilde{O}(\sqrt{k}/\varepsilon)$ is approximately differentially private. Compared to the previous peeling approach of running Report Noisy Max $k$ times, the oneshot Laplace mechanism only adds noise and computes the top $k$ elements once, and is hence much more efficient for large $k$. In addition, our proof of privacy relies on a novel coupling technique that bypasses the composition theorems, and thus avoids the linear dependence on $k$ which is inherent to various composition theorems. Finally, we present a novel application of efficient top-$k$ selection in the classical problem of ranking from pairwise comparisons.' volume: 139 URL: https://proceedings.mlr.press/v139/qiao21b.html PDF: http://proceedings.mlr.press/v139/qiao21b/qiao21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-qiao21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Gang family: Qiao - given: Weijie family: Su - given: Li family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8672-8681 id: qiao21b issued: date-parts: - 2021 - 7 - 1 firstpage: 8672 lastpage: 8681 published: 2021-07-01 00:00:00 +0000 - title: 'Density Constrained Reinforcement Learning' abstract: 'We study constrained reinforcement learning (CRL) from a novel perspective by setting constraints directly on state density functions, rather than the value functions considered by previous works. 
State density has a clear physical and mathematical interpretation, and is able to express a wide variety of constraints such as resource limits and safety requirements. Density constraints can also avoid the time-consuming process of designing and tuning cost functions required by value function-based constraints to encode system specifications. We leverage the duality between density functions and Q functions to develop an effective algorithm that solves the density constrained RL problem optimally while guaranteeing that the constraints are satisfied. We prove that the proposed algorithm converges to a near-optimal solution with a bounded error even when the policy update is imperfect. We use a set of comprehensive experiments to demonstrate the advantages of our approach over state-of-the-art CRL methods, with a wide range of density constrained tasks as well as standard CRL benchmarks such as Safety-Gym.' volume: 139 URL: https://proceedings.mlr.press/v139/qin21a.html PDF: http://proceedings.mlr.press/v139/qin21a/qin21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-qin21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zengyi family: Qin - given: Yuxiao family: Chen - given: Chuchu family: Fan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8682-8692 id: qin21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8682 lastpage: 8692 published: 2021-07-01 00:00:00 +0000 - title: 'Budgeted Heterogeneous Treatment Effect Estimation' abstract: 'Heterogeneous treatment effect (HTE) estimation is receiving increasing interest due to its important applications in fields such as healthcare, economics, and education. Current HTE estimation methods generally assume the existence of abundant observational data, though the acquisition of such data can be costly. In some real scenarios, it is easy to access the pre-treatment covariates and treatment assignments, but expensive to obtain the factual outcomes. To make HTE estimation more practical, in this paper, we examine the problem of estimating HTEs with a budget constraint on observational data, aiming to obtain accurate HTE estimates with limited costs. By deriving an informative generalization bound and connecting to active learning, we propose an effective and efficient method which is validated both theoretically and empirically.' volume: 139 URL: https://proceedings.mlr.press/v139/qin21b.html PDF: http://proceedings.mlr.press/v139/qin21b/qin21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-qin21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tian family: Qin - given: Tian-Zuo family: Wang - given: Zhi-Hua family: Zhou editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8693-8702 id: qin21b issued: date-parts: - 2021 - 7 - 1 firstpage: 8693 lastpage: 8702 published: 2021-07-01 00:00:00 +0000 - title: 'Neural Transformation Learning for Deep Anomaly Detection Beyond Images' abstract: 'Data transformations (e.g. rotations, reflections, and cropping) play an important role in self-supervised learning. 
Typically, images are transformed into different views, and neural networks trained on tasks involving these views produce useful feature representations for downstream tasks, including anomaly detection. However, for anomaly detection beyond image data, it is often unclear which transformations to use. Here we present a simple end-to-end procedure for anomaly detection with learnable transformations. The key idea is to embed the transformed data into a semantic space such that the transformed data still resemble their untransformed form, while different transformations are easily distinguishable. Extensive experiments on time series show that our proposed method outperforms existing approaches in the one-vs.-rest setting and is competitive in the more challenging n-vs.-rest anomaly-detection task. On medical and cyber-security tabular data, our method learns domain-specific transformations and detects anomalies more accurately than previous work.' volume: 139 URL: https://proceedings.mlr.press/v139/qiu21a.html PDF: http://proceedings.mlr.press/v139/qiu21a/qiu21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-qiu21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chen family: Qiu - given: Timo family: Pfrommer - given: Marius family: Kloft - given: Stephan family: Mandt - given: Maja family: Rudolph editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8703-8714 id: qiu21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8703 lastpage: 8714 published: 2021-07-01 00:00:00 +0000 - title: 'Provably Efficient Fictitious Play Policy Optimization for Zero-Sum Markov Games with Structured Transitions' abstract: 'While single-agent policy optimization in a fixed environment has attracted a lot of research attention recently in the reinforcement learning community, much less is known theoretically when there are multiple agents playing in a potentially competitive environment. We take steps forward by proposing and analyzing new fictitious play policy optimization algorithms for two-player zero-sum Markov games with structured but unknown transitions. We consider two classes of transition structures: factored independent transition and single-controller transition. For both scenarios, we prove tight $\widetilde{\mathcal{O}}(\sqrt{T})$ regret bounds after $T$ steps in a two-agent competitive game scenario. The regret of each player is measured against a potentially adversarial opponent who can choose a single best policy in hindsight after observing the full policy sequence. Our algorithms feature a combination of Upper Confidence Bound (UCB)-type optimism and fictitious play under the scope of simultaneous policy optimization in a non-stationary environment. When both players adopt the proposed algorithms, their overall optimality gap is $\widetilde{\mathcal{O}}(\sqrt{T})$.' 
volume: 139 URL: https://proceedings.mlr.press/v139/qiu21b.html PDF: http://proceedings.mlr.press/v139/qiu21b/qiu21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-qiu21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shuang family: Qiu - given: Xiaohan family: Wei - given: Jieping family: Ye - given: Zhaoran family: Wang - given: Zhuoran family: Yang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8715-8725 id: qiu21b issued: date-parts: - 2021 - 7 - 1 firstpage: 8715 lastpage: 8725 published: 2021-07-01 00:00:00 +0000 - title: 'Optimization Planning for 3D ConvNets' abstract: 'It is not trivial to optimally learn 3D Convolutional Neural Networks (3D ConvNets) due to the high complexity and the various options of the training scheme. The most common hand-tuning process starts from learning 3D ConvNets using short video clips and then is followed by learning long-term temporal dependency using lengthy clips, while gradually decaying the learning rate from high to low as training progresses. The fact that such a process comes along with several heuristic settings motivates the study to seek an optimal "path" to automate the entire training. In this paper, we decompose the path into a series of training "states" and specify the hyper-parameters, e.g., learning rate and the length of input clips, in each state. The estimation of the knee point on the performance-epoch curve triggers the transition from one state to another. We perform dynamic programming over all the candidate states to plan the optimal permutation of states, i.e., optimization path. Furthermore, we devise a new 3D ConvNet with a unique dual-head classifier design to improve spatial and temporal discrimination. Extensive experiments on seven public video recognition benchmarks demonstrate the advantages of our proposal. With the optimization planning, our 3D ConvNets achieve superior results when compared to the state-of-the-art recognition methods. More remarkably, we obtain the top-1 accuracy of 80.5% and 82.7% on Kinetics-400 and Kinetics-600 datasets, respectively.' volume: 139 URL: https://proceedings.mlr.press/v139/qiu21c.html PDF: http://proceedings.mlr.press/v139/qiu21c/qiu21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-qiu21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zhaofan family: Qiu - given: Ting family: Yao - given: Chong-Wah family: Ngo - given: Tao family: Mei editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8726-8736 id: qiu21c issued: date-parts: - 2021 - 7 - 1 firstpage: 8726 lastpage: 8736 published: 2021-07-01 00:00:00 +0000 - title: 'On Reward-Free RL with Kernel and Neural Function Approximations: Single-Agent MDP and Markov Game' abstract: 'To achieve sample efficiency in reinforcement learning (RL), it is necessary to efficiently explore the underlying environment. Under the offline setting, addressing the exploration challenge lies in collecting an offline dataset with sufficient coverage. Motivated by such a challenge, we study the reward-free RL problem, where an agent aims to thoroughly explore the environment without any pre-specified reward function. 
Then, given any extrinsic reward, the agent computes the optimal policy via offline RL with data collected in the exploration stage. Moreover, we tackle this problem under the context of function approximation, leveraging powerful function approximators. Specifically, we propose to explore via an optimistic variant of the value-iteration algorithm incorporating kernel and neural function approximations, where we adopt the associated exploration bonus as the exploration reward. Moreover, we design exploration and planning algorithms for both single-agent MDPs and zero-sum Markov games and prove that our methods can achieve $\widetilde{\mathcal{O}}(1 /\varepsilon^2)$ sample complexity for generating a $\varepsilon$-suboptimal policy or $\varepsilon$-approximate Nash equilibrium when given an arbitrary extrinsic reward. To the best of our knowledge, we establish the first provably efficient reward-free RL algorithm with kernel and neural function approximators.' volume: 139 URL: https://proceedings.mlr.press/v139/qiu21d.html PDF: http://proceedings.mlr.press/v139/qiu21d/qiu21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-qiu21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shuang family: Qiu - given: Jieping family: Ye - given: Zhaoran family: Wang - given: Zhuoran family: Yang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8737-8747 id: qiu21d issued: date-parts: - 2021 - 7 - 1 firstpage: 8737 lastpage: 8747 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Transferable Visual Models From Natural Language Supervision' abstract: 'State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on.' 
volume: 139 URL: https://proceedings.mlr.press/v139/radford21a.html PDF: http://proceedings.mlr.press/v139/radford21a/radford21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-radford21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Alec family: Radford - given: Jong Wook family: Kim - given: Chris family: Hallacy - given: Aditya family: Ramesh - given: Gabriel family: Goh - given: Sandhini family: Agarwal - given: Girish family: Sastry - given: Amanda family: Askell - given: Pamela family: Mishkin - given: Jack family: Clark - given: Gretchen family: Krueger - given: Ilya family: Sutskever editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8748-8763 id: radford21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8748 lastpage: 8763 published: 2021-07-01 00:00:00 +0000 - title: 'A General Framework For Detecting Anomalous Inputs to DNN Classifiers' abstract: 'Detecting anomalous inputs, such as adversarial and out-of-distribution (OOD) inputs, is critical for classifiers (including deep neural networks or DNNs) deployed in real-world applications. While prior works have proposed various methods to detect such anomalous samples using information from the internal layer representations of a DNN, there is a lack of consensus on a principled approach for the different components of such a detection method. As a result, heuristic and one-off methods are often applied for different aspects of this problem. We propose an unsupervised anomaly detection framework based on the internal DNN layer representations in the form of a meta-algorithm with configurable components. We proceed to propose specific instantiations for each component of the meta-algorithm based on ideas grounded in statistical testing and anomaly detection. We evaluate the proposed methods on well-known image classification datasets with strong adversarial attacks and OOD inputs, including an adaptive attack that uses the internal layer representations of the DNN (often not considered in prior work). Comparisons with five recently-proposed competing detection methods demonstrate the effectiveness of our method in detecting adversarial and OOD inputs.' volume: 139 URL: https://proceedings.mlr.press/v139/raghuram21a.html PDF: http://proceedings.mlr.press/v139/raghuram21a/raghuram21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-raghuram21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jayaram family: Raghuram - given: Varun family: Chandrasekaran - given: Somesh family: Jha - given: Suman family: Banerjee editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8764-8775 id: raghuram21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8764 lastpage: 8775 published: 2021-07-01 00:00:00 +0000 - title: 'Towards Open Ad Hoc Teamwork Using Graph-based Policy Learning' abstract: 'Ad hoc teamwork is the challenging problem of designing an autonomous agent which can adapt quickly to collaborate with teammates without prior coordination mechanisms, including joint training. Prior work in this area has focused on closed teams in which the number of agents is fixed. 
In this work, we consider open teams by allowing agents with different fixed policies to enter and leave the environment without prior notification. Our solution builds on graph neural networks to learn agent models and joint-action value models under varying team compositions. We contribute a novel action-value computation that integrates the agent model and joint-action value model to produce action-value estimates. We empirically demonstrate that our approach successfully models the effects other agents have on the learner, leading to policies that robustly adapt to dynamic team compositions and significantly outperform several alternative methods.' volume: 139 URL: https://proceedings.mlr.press/v139/rahman21a.html PDF: http://proceedings.mlr.press/v139/rahman21a/rahman21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-rahman21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Muhammad A family: Rahman - given: Niklas family: Hopner - given: Filippos family: Christianos - given: Stefano V family: Albrecht editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8776-8786 id: rahman21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8776 lastpage: 8786 published: 2021-07-01 00:00:00 +0000 - title: 'Decoupling Value and Policy for Generalization in Reinforcement Learning' abstract: 'Standard deep reinforcement learning algorithms use a shared representation for the policy and value function, especially when training directly from images. However, we argue that more information is needed to accurately estimate the value function than to learn the optimal policy. Consequently, the use of a shared representation for the policy and value function can lead to overfitting. To alleviate this problem, we propose two approaches which are combined to create IDAAC: Invariant Decoupled Advantage Actor-Critic. First, IDAAC decouples the optimization of the policy and value function, using separate networks to model them. Second, it introduces an auxiliary loss which encourages the representation to be invariant to task-irrelevant properties of the environment. IDAAC shows good generalization to unseen environments, achieving a new state-of-the-art on the Procgen benchmark and outperforming popular methods on DeepMind Control tasks with distractors. Our implementation is available at https://github.com/rraileanu/idaac.' volume: 139 URL: https://proceedings.mlr.press/v139/raileanu21a.html PDF: http://proceedings.mlr.press/v139/raileanu21a/raileanu21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-raileanu21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Roberta family: Raileanu - given: Rob family: Fergus editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8787-8798 id: raileanu21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8787 lastpage: 8798 published: 2021-07-01 00:00:00 +0000 - title: 'Hierarchical Clustering of Data Streams: Scalable Algorithms and Approximation Guarantees' abstract: 'We investigate the problem of hierarchically clustering data streams containing metric data in R^d. 
We introduce a desirable invariance property for such algorithms, describe a general family of hyperplane-based methods enjoying this property, and analyze two scalable instances of this general family against recently popularized similarity/dissimilarity-based metrics for hierarchical clustering. We prove a number of new results related to the approximation ratios of these algorithms, improving in various ways over the literature on this subject. Finally, since our algorithms are principled but also very practical, we carry out an experimental comparison on both synthetic and real-world datasets showing competitive results against known baselines.' volume: 139 URL: https://proceedings.mlr.press/v139/rajagopalan21a.html PDF: http://proceedings.mlr.press/v139/rajagopalan21a/rajagopalan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-rajagopalan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Anand family: Rajagopalan - given: Fabio family: Vitale - given: Danny family: Vainstein - given: Gui family: Citovsky - given: Cecilia M family: Procopiuc - given: Claudio family: Gentile editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8799-8809 id: rajagopalan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8799 lastpage: 8809 published: 2021-07-01 00:00:00 +0000 - title: 'Differentially Private Sliced Wasserstein Distance' abstract: 'Developing machine learning methods that are privacy preserving is today a central topic of research, with huge practical impacts. Among the numerous ways to address privacy-preserving learning, we here take the perspective of computing the divergences between distributions under the Differential Privacy (DP) framework — being able to compute divergences between distributions is pivotal for many machine learning problems, such as learning generative models or domain adaptation problems. Instead of resorting to the popular gradient-based sanitization method for DP, we tackle the problem at its roots by focusing on the Sliced Wasserstein Distance and seamlessly making it differentially private. Our main contribution is as follows: we analyze the property of adding a Gaussian perturbation to the intrinsic randomized mechanism of the Sliced Wasserstein Distance, and we establish the sensitivity of the resulting differentially private mechanism. One of our important findings is that this DP mechanism transforms the Sliced Wasserstein distance into another distance, that we call the Smoothed Sliced Wasserstein Distance. This new differentially private distribution distance can be plugged into generative models and domain adaptation algorithms in a transparent way, and we empirically show that it yields highly competitive performance compared with gradient-based DP approaches from the literature, with almost no loss in accuracy for the domain adaptation problems that we consider.' 
volume: 139 URL: https://proceedings.mlr.press/v139/rakotomamonjy21a.html PDF: http://proceedings.mlr.press/v139/rakotomamonjy21a/rakotomamonjy21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-rakotomamonjy21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Alain family: Rakotomamonjy - given: Ralaivola family: Liva editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8810-8820 id: rakotomamonjy21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8810 lastpage: 8820 published: 2021-07-01 00:00:00 +0000 - title: 'Zero-Shot Text-to-Image Generation' abstract: 'Text-to-image generation has traditionally focused on finding better modeling assumptions for training on a fixed dataset. These assumptions might involve complex architectures, auxiliary losses, or side information such as object part labels or segmentation masks supplied during training. We describe a simple approach for this task based on a transformer that autoregressively models the text and image tokens as a single stream of data. With sufficient data and scale, our approach is competitive with previous domain-specific models when evaluated in a zero-shot fashion.' volume: 139 URL: https://proceedings.mlr.press/v139/ramesh21a.html PDF: http://proceedings.mlr.press/v139/ramesh21a/ramesh21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ramesh21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Aditya family: Ramesh - given: Mikhail family: Pavlov - given: Gabriel family: Goh - given: Scott family: Gray - given: Chelsea family: Voss - given: Alec family: Radford - given: Mark family: Chen - given: Ilya family: Sutskever editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8821-8831 id: ramesh21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8821 lastpage: 8831 published: 2021-07-01 00:00:00 +0000 - title: 'End-to-End Learning of Coherent Probabilistic Forecasts for Hierarchical Time Series' abstract: 'This paper presents a novel approach for hierarchical time series forecasting that produces coherent, probabilistic forecasts without requiring any explicit post-processing reconciliation. Unlike the state-of-the-art, the proposed method simultaneously learns from all time series in the hierarchy and incorporates the reconciliation step into a single trainable model. This is achieved by applying the reparameterization trick and casting reconciliation as an optimization problem with a closed-form solution. These model features make end-to-end learning of hierarchical forecasts possible, while accomplishing the challenging task of generating forecasts that are both probabilistic and coherent. Importantly, our approach also accommodates general aggregation constraints including grouped and temporal hierarchies. An extensive empirical evaluation on real-world hierarchical datasets demonstrates the advantages of the proposed approach over the state-of-the-art.' 
volume: 139 URL: https://proceedings.mlr.press/v139/rangapuram21a.html PDF: http://proceedings.mlr.press/v139/rangapuram21a/rangapuram21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-rangapuram21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Syama Sundar family: Rangapuram - given: Lucien D family: Werner - given: Konstantinos family: Benidis - given: Pedro family: Mercado - given: Jan family: Gasthaus - given: Tim family: Januschowski editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8832-8843 id: rangapuram21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8832 lastpage: 8843 published: 2021-07-01 00:00:00 +0000 - title: 'MSA Transformer' abstract: 'Unsupervised protein language models trained across millions of diverse sequences learn structure and function of proteins. Protein language models studied to date have been trained to perform inference from individual sequences. The longstanding approach in computational biology has been to make inferences from a family of evolutionarily related sequences by fitting a model to each family independently. In this work we combine the two paradigms. We introduce a protein language model which takes as input a set of sequences in the form of a multiple sequence alignment. The model interleaves row and column attention across the input sequences and is trained with a variant of the masked language modeling objective across many protein families. The performance of the model surpasses current state-of-the-art unsupervised structure learning methods by a wide margin, with far greater parameter efficiency than prior state-of-the-art protein language models.' volume: 139 URL: https://proceedings.mlr.press/v139/rao21a.html PDF: http://proceedings.mlr.press/v139/rao21a/rao21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-rao21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Roshan M family: Rao - given: Jason family: Liu - given: Robert family: Verkuil - given: Joshua family: Meier - given: John family: Canny - given: Pieter family: Abbeel - given: Tom family: Sercu - given: Alexander family: Rives editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8844-8856 id: rao21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8844 lastpage: 8856 published: 2021-07-01 00:00:00 +0000 - title: 'Autoregressive Denoising Diffusion Models for Multivariate Probabilistic Time Series Forecasting' abstract: 'In this work, we propose TimeGrad, an autoregressive model for multivariate probabilistic time series forecasting which samples from the data distribution at each time step by estimating its gradient. To this end, we use diffusion probabilistic models, a class of latent variable models closely connected to score matching and energy-based methods. Our model learns gradients by optimizing a variational bound on the data likelihood and at inference time converts white noise into a sample of the distribution of interest through a Markov chain using Langevin sampling. We demonstrate experimentally that the proposed autoregressive denoising diffusion model is the new state-of-the-art multivariate probabilistic forecasting method on real-world data sets with thousands of correlated dimensions. 
We hope that this method is a useful tool for practitioners and lays the foundation for future research in this area.' volume: 139 URL: https://proceedings.mlr.press/v139/rasul21a.html PDF: http://proceedings.mlr.press/v139/rasul21a/rasul21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-rasul21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kashif family: Rasul - given: Calvin family: Seward - given: Ingmar family: Schuster - given: Roland family: Vollgraf editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8857-8868 id: rasul21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8857 lastpage: 8868 published: 2021-07-01 00:00:00 +0000 - title: 'Generative Particle Variational Inference via Estimation of Functional Gradients' abstract: 'Recently, particle-based variational inference (ParVI) methods have gained interest because they can avoid arbitrary parametric assumptions that are common in variational inference. However, many ParVI approaches do not allow arbitrary sampling from the posterior, and the few that do allow such sampling suffer from suboptimality. This work proposes a new method for learning to approximately sample from the posterior distribution. We construct a neural sampler that is trained with the functional gradient of the KL-divergence between the empirical sampling distribution and the target distribution, assuming the gradient resides within a reproducing kernel Hilbert space. Our generative ParVI (GPVI) approach maintains the asymptotic performance of ParVI methods while offering the flexibility of a generative sampler. Through carefully constructed experiments, we show that GPVI outperforms previous generative ParVI methods such as amortized SVGD, and is competitive with ParVI as well as gold-standard approaches like Hamiltonian Monte Carlo for fitting both exactly known and intractable target distributions.' volume: 139 URL: https://proceedings.mlr.press/v139/ratzlaff21a.html PDF: http://proceedings.mlr.press/v139/ratzlaff21a/ratzlaff21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ratzlaff21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Neale family: Ratzlaff - given: Qinxun family: Bai - given: Li family: Fuxin - given: Wei family: Xu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8869-8879 id: ratzlaff21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8869 lastpage: 8879 published: 2021-07-01 00:00:00 +0000 - title: 'Enhancing Robustness of Neural Networks through Fourier Stabilization' abstract: 'Despite the considerable success of neural networks in security settings such as malware detection, such models have proved vulnerable to evasion attacks, in which attackers make slight changes to inputs (e.g., malware) to bypass detection. We propose a novel approach, Fourier stabilization, for designing evasion-robust neural networks with binary inputs. This approach, which is complementary to other forms of defense, replaces the weights of individual neurons with robust analogs derived using Fourier analytic tools. The choice of which neurons to stabilize in a neural network is then a combinatorial optimization problem, and we propose several methods for approximately solving it. 
We provide a formal bound on the per-neuron drop in accuracy due to Fourier stabilization, and experimentally demonstrate the effectiveness of the proposed approach in boosting robustness of neural networks in several detection settings. Moreover, we show that our approach effectively composes with adversarial training.' volume: 139 URL: https://proceedings.mlr.press/v139/raviv21a.html PDF: http://proceedings.mlr.press/v139/raviv21a/raviv21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-raviv21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Netanel family: Raviv - given: Aidan family: Kelley - given: Minzhe family: Guo - given: Yevgeniy family: Vorobeychik editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8880-8889 id: raviv21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8880 lastpage: 8889 published: 2021-07-01 00:00:00 +0000 - title: 'Disentangling Sampling and Labeling Bias for Learning in Large-output Spaces' abstract: 'Negative sampling schemes enable efficient training given a large number of classes, by offering a means to approximate a computationally expensive loss function that takes all labels into account. In this paper, we present a new connection between these schemes and loss modification techniques for countering label imbalance. We show that different negative sampling schemes implicitly trade-off performance on dominant versus rare labels. Further, we provide a unified means to explicitly tackle both sampling bias, arising from working with a subset of all labels, and labeling bias, which is inherent to the data due to label imbalance. We empirically verify our findings on long-tail classification and retrieval benchmarks.' volume: 139 URL: https://proceedings.mlr.press/v139/rawat21a.html PDF: http://proceedings.mlr.press/v139/rawat21a/rawat21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-rawat21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ankit Singh family: Rawat - given: Aditya K family: Menon - given: Wittawat family: Jitkrittum - given: Sadeep family: Jayasumana - given: Felix family: Yu - given: Sashank family: Reddi - given: Sanjiv family: Kumar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8890-8901 id: rawat21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8890 lastpage: 8901 published: 2021-07-01 00:00:00 +0000 - title: 'Cross-domain Imitation from Observations' abstract: 'Imitation learning seeks to circumvent the difficulty in designing proper reward functions for training agents by utilizing expert behavior. With environments modeled as Markov Decision Processes (MDP), most of the existing imitation algorithms are contingent on the availability of expert demonstrations in the same MDP as the one in which a new imitation policy is to be learned. In this paper, we study the problem of how to imitate tasks when discrepancies exist between the expert and agent MDP. These discrepancies across domains could include differing dynamics, viewpoint, or morphology; we present a novel framework to learn correspondences across such domains. Importantly, in contrast to prior works, we use unpaired and unaligned trajectories containing only states in the expert domain, to learn this correspondence. 
We utilize a cycle-consistency constraint on both the state space and a domain agnostic latent space to do this. In addition, we enforce consistency on the temporal position of states via a normalized position estimator function, to align the trajectories across the two domains. Once this correspondence is found, we can directly transfer the demonstrations from one domain to the other and use them for imitation. Experiments across a wide variety of challenging domains demonstrate the efficacy of our approach.' volume: 139 URL: https://proceedings.mlr.press/v139/raychaudhuri21a.html PDF: http://proceedings.mlr.press/v139/raychaudhuri21a/raychaudhuri21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-raychaudhuri21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dripta S. family: Raychaudhuri - given: Sujoy family: Paul - given: Jeroen family: Vanbaar - given: Amit K. family: Roy-Chowdhury editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8902-8912 id: raychaudhuri21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8902 lastpage: 8912 published: 2021-07-01 00:00:00 +0000 - title: 'Implicit Regularization in Tensor Factorization' abstract: 'Recent efforts to unravel the mystery of implicit regularization in deep learning have led to a theoretical focus on matrix factorization — matrix completion via a linear neural network. As a step further towards practical deep learning, we provide the first theoretical analysis of implicit regularization in tensor factorization — tensor completion via a certain type of non-linear neural network. We circumvent the notorious difficulty of tensor problems by adopting a dynamical systems perspective, and characterizing the evolution induced by gradient descent. The characterization suggests a form of greedy low tensor rank search, which we rigorously prove under certain conditions, and empirically demonstrate under others. Motivated by tensor rank capturing the implicit regularization of a non-linear neural network, we empirically explore it as a measure of complexity, and find that it captures the essence of datasets on which neural networks generalize. This leads us to believe that tensor rank may pave the way to explaining both implicit regularization in deep learning, and the properties of real-world data translating this implicit regularization to generalization.' volume: 139 URL: https://proceedings.mlr.press/v139/razin21a.html PDF: http://proceedings.mlr.press/v139/razin21a/razin21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-razin21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Noam family: Razin - given: Asaf family: Maman - given: Nadav family: Cohen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8913-8924 id: razin21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8913 lastpage: 8924 published: 2021-07-01 00:00:00 +0000 - title: 'Align, then memorise: the dynamics of learning with feedback alignment' abstract: 'Direct Feedback Alignment (DFA) is emerging as an efficient and biologically plausible alternative to backpropagation for training deep neural networks. Despite relying on random feedback weights for the backward pass, DFA successfully trains state-of-the-art models such as Transformers. 
On the other hand, it notoriously fails to train convolutional networks. An understanding of the inner workings of DFA to explain these diverging results remains elusive. Here, we propose a theory of feedback alignment algorithms. We first show that learning in shallow networks proceeds in two steps: an alignment phase, where the model adapts its weights to align the approximate gradient with the true gradient of the loss function, is followed by a memorisation phase, where the model focuses on fitting the data. This two-step process has a degeneracy breaking effect: out of all the low-loss solutions in the landscape, a network trained with DFA naturally converges to the solution which maximises gradient alignment. We also identify a key quantity underlying alignment in deep linear networks: the conditioning of the alignment matrices. The latter enables a detailed understanding of the impact of data structure on alignment, and suggests a simple explanation for the well-known failure of DFA to train convolutional neural networks. Numerical experiments on MNIST and CIFAR10 clearly demonstrate degeneracy breaking in deep non-linear networks and show that the align-then-memorise process occurs sequentially from the bottom layers of the network to the top.' volume: 139 URL: https://proceedings.mlr.press/v139/refinetti21a.html PDF: http://proceedings.mlr.press/v139/refinetti21a/refinetti21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-refinetti21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Maria family: Refinetti - given: Stéphane family: D’Ascoli - given: Ruben family: Ohana - given: Sebastian family: Goldt editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8925-8935 id: refinetti21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8925 lastpage: 8935 published: 2021-07-01 00:00:00 +0000 - title: 'Classifying high-dimensional Gaussian mixtures: Where kernel methods fail and neural networks succeed' abstract: 'A recent series of theoretical works showed that the dynamics of neural networks with a certain initialisation are well-captured by kernel methods. Concurrent empirical work demonstrated that kernel methods can come close to the performance of neural networks on some image classification tasks. These results raise the question of whether neural networks only learn successfully if kernels also learn successfully, despite being the more expressive function class. Here, we show that two-layer neural networks with *only a few neurons* achieve near-optimal performance on high-dimensional Gaussian mixture classification while lazy training approaches such as random features and kernel methods do not. Our analysis is based on the derivation of a set of ordinary differential equations that exactly track the dynamics of the network and thus allow us to extract the asymptotic performance of the network as a function of regularisation or signal-to-noise ratio. We also show how over-parametrising the neural network leads to faster convergence, but does not improve its final performance.' 
volume: 139 URL: https://proceedings.mlr.press/v139/refinetti21b.html PDF: http://proceedings.mlr.press/v139/refinetti21b/refinetti21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-refinetti21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Maria family: Refinetti - given: Sebastian family: Goldt - given: Florent family: Krzakala - given: Lenka family: Zdeborova editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8936-8947 id: refinetti21b issued: date-parts: - 2021 - 7 - 1 firstpage: 8936 lastpage: 8947 published: 2021-07-01 00:00:00 +0000 - title: 'Sharf: Shape-conditioned Radiance Fields from a Single View' abstract: 'We present a method for estimating neural scene representations of objects given only a single image. The core of our method is the estimation of a geometric scaffold for the object and its use as a guide for the reconstruction of the underlying radiance field. Our formulation is based on a generative process that first maps a latent code to a voxelized shape, and then renders it to an image, with the object appearance being controlled by a second latent code. During inference, we optimize both the latent codes and the networks to fit a test image of a new object. The explicit disentanglement of shape and appearance allows our model to be fine-tuned given a single image. We can then render new views in a geometrically consistent manner, and they faithfully represent the input object. Additionally, our method is able to generalize to images outside of the training domain (more realistic renderings and even real photographs). Finally, the inferred geometric scaffold is itself an accurate estimate of the object’s 3D shape. We demonstrate in several experiments the effectiveness of our approach in both synthetic and real images.' volume: 139 URL: https://proceedings.mlr.press/v139/rematas21a.html PDF: http://proceedings.mlr.press/v139/rematas21a/rematas21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-rematas21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Konstantinos family: Rematas - given: Ricardo family: Martin-Brualla - given: Vittorio family: Ferrari editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8948-8958 id: rematas21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8948 lastpage: 8958 published: 2021-07-01 00:00:00 +0000 - title: 'LEGO: Latent Execution-Guided Reasoning for Multi-Hop Question Answering on Knowledge Graphs' abstract: 'Answering complex natural language questions on knowledge graphs (KGQA) is a challenging task. It requires reasoning with the input natural language questions as well as a massive, incomplete heterogeneous KG. Prior methods obtain an abstract structured query graph/tree from the input question and traverse the KG for answers following the query tree. However, they inherently cannot deal with missing links in the KG. Here we present LEGO, a Latent Execution-Guided reasOning framework to handle this challenge in KGQA.
LEGO works in an iterative way, which alternates between (1) a Query Synthesizer, which synthesizes a reasoning action and grows the query tree step-by-step, and (2) a Latent Space Executor that executes the reasoning action in the latent embedding space to combat against the missing information in KG. To learn the synthesizer without step-wise supervision, we design a generic latent execution guided bottom-up search procedure to find good execution traces efficiently in the vast query space. Experimental results on several KGQA benchmarks demonstrate the effectiveness of our framework compared with previous state of the art.' volume: 139 URL: https://proceedings.mlr.press/v139/ren21a.html PDF: http://proceedings.mlr.press/v139/ren21a/ren21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ren21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hongyu family: Ren - given: Hanjun family: Dai - given: Bo family: Dai - given: Xinyun family: Chen - given: Michihiro family: Yasunaga - given: Haitian family: Sun - given: Dale family: Schuurmans - given: Jure family: Leskovec - given: Denny family: Zhou editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8959-8970 id: ren21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8959 lastpage: 8970 published: 2021-07-01 00:00:00 +0000 - title: 'Interpreting and Disentangling Feature Components of Various Complexity from DNNs' abstract: 'This paper aims to define, visualize, and analyze the feature complexity that is learned by a DNN. We propose a generic definition for the feature complexity. Given the feature of a certain layer in the DNN, our method decomposes and visualizes feature components of different complexity orders from the feature. The feature decomposition enables us to evaluate the reliability, the effectiveness, and the significance of over-fitting of these feature components. Furthermore, such analysis helps to improve the performance of DNNs. As a generic method, the feature complexity also provides new insights into existing deep-learning techniques, such as network compression and knowledge distillation.' volume: 139 URL: https://proceedings.mlr.press/v139/ren21b.html PDF: http://proceedings.mlr.press/v139/ren21b/ren21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ren21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jie family: Ren - given: Mingjie family: Li - given: Zexu family: Liu - given: Quanshi family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8971-8981 id: ren21b issued: date-parts: - 2021 - 7 - 1 firstpage: 8971 lastpage: 8981 published: 2021-07-01 00:00:00 +0000 - title: 'Integrated Defense for Resilient Graph Matching' abstract: 'A recent study has shown that graph matching models are vulnerable to adversarial manipulation of their input which is intended to cause a mismatching. Nevertheless, there is still a lack of a comprehensive solution for further enhancing the robustness of graph matching against adversarial attacks. In this paper, we identify and study two types of unique topology attacks in graph matching: inter-graph dispersion and intra-graph assembly attacks. 
We propose an integrated defense model, IDRGM, for resilient graph matching with two novel defense techniques to defend against the above two attacks simultaneously. A detection technique of inscribed simplexes in the hyperspheres consisting of multiple matched nodes is proposed to tackle inter-graph dispersion attacks, in which the distances among the matched nodes in multiple graphs are maximized to form regular simplexes. A node separation method based on phase-type distribution and maximum likelihood estimation is developed to estimate the distribution of perturbed graphs and separate the nodes within the same graphs over a wide space, for defending intra-graph assembly attacks, such that the interference from the similar neighbors of the perturbed nodes is significantly reduced. We evaluate the robustness of our IDRGM model on real datasets against state-of-the-art algorithms.' volume: 139 URL: https://proceedings.mlr.press/v139/ren21c.html PDF: http://proceedings.mlr.press/v139/ren21c/ren21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ren21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jiaxiang family: Ren - given: Zijie family: Zhang - given: Jiayin family: Jin - given: Xin family: Zhao - given: Sixing family: Wu - given: Yang family: Zhou - given: Yelong family: Shen - given: Tianshi family: Che - given: Ruoming family: Jin - given: Dejing family: Dou editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8982-8997 id: ren21c issued: date-parts: - 2021 - 7 - 1 firstpage: 8982 lastpage: 8997 published: 2021-07-01 00:00:00 +0000 - title: 'Solving high-dimensional parabolic PDEs using the tensor train format' abstract: 'High-dimensional partial differential equations (PDEs) are ubiquitous in economics, science and engineering. However, their numerical treatment poses formidable challenges since traditional grid-based methods tend to be frustrated by the curse of dimensionality. In this paper, we argue that tensor trains provide an appealing approximation framework for parabolic PDEs: the combination of reformulations in terms of backward stochastic differential equations and regression-type methods in the tensor format holds the promise of leveraging latent low-rank structures enabling both compression and efficient computation. Following this paradigm, we develop novel iterative schemes, involving either explicit and fast or implicit and accurate updates. We demonstrate in a number of examples that our methods achieve a favorable trade-off between accuracy and computational efficiency in comparison with state-of-the-art neural network based approaches.' 
volume: 139 URL: https://proceedings.mlr.press/v139/richter21a.html PDF: http://proceedings.mlr.press/v139/richter21a/richter21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-richter21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Lorenz family: Richter - given: Leon family: Sallandt - given: Nikolas family: Nüsken editor: - given: Marina family: Meila - given: Tong family: Zhang page: 8998-9009 id: richter21a issued: date-parts: - 2021 - 7 - 1 firstpage: 8998 lastpage: 9009 published: 2021-07-01 00:00:00 +0000 - title: 'Best Arm Identification in Graphical Bilinear Bandits' abstract: 'We introduce a new graphical bilinear bandit problem where a learner (or a \emph{central entity}) allocates arms to the nodes of a graph and observes for each edge a noisy bilinear reward representing the interaction between the two end nodes. We study the best arm identification problem in which the learner wants to find the graph allocation maximizing the sum of the bilinear rewards. By efficiently exploiting the geometry of this bandit problem, we propose a \emph{decentralized} allocation strategy based on random sampling with theoretical guarantees. In particular, we characterize the influence of the graph structure (e.g. star, complete or circle) on the convergence rate and propose empirical experiments that confirm this dependency.' volume: 139 URL: https://proceedings.mlr.press/v139/rizk21a.html PDF: http://proceedings.mlr.press/v139/rizk21a/rizk21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-rizk21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Geovani family: Rizk - given: Albert family: Thomas - given: Igor family: Colin - given: Rida family: Laraki - given: Yann family: Chevaleyre editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9010-9019 id: rizk21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9010 lastpage: 9019 published: 2021-07-01 00:00:00 +0000 - title: 'Principled Simplicial Neural Networks for Trajectory Prediction' abstract: 'We consider the construction of neural network architectures for data on simplicial complexes. In studying maps on the chain complex of a simplicial complex, we define three desirable properties of a simplicial neural network architecture: namely, permutation equivariance, orientation equivariance, and simplicial awareness. The first two properties respectively account for the fact that the node indexing and the simplex orientations in a simplicial complex are arbitrary. The last property encodes the desirable feature that the output of the neural network depends on the entire simplicial complex and not on a subset of its dimensions. Based on these properties, we propose a simple convolutional architecture, rooted in tools from algebraic topology, for the problem of trajectory prediction, and show that it obeys all three of these properties when an odd, nonlinear activation function is used. We then demonstrate the effectiveness of this architecture in extrapolating trajectories on synthetic and real datasets, with particular emphasis on the gains in generalizability to unseen trajectories.' 
volume: 139 URL: https://proceedings.mlr.press/v139/roddenberry21a.html PDF: http://proceedings.mlr.press/v139/roddenberry21a/roddenberry21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-roddenberry21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: T. Mitchell family: Roddenberry - given: Nicholas family: Glaze - given: Santiago family: Segarra editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9020-9029 id: roddenberry21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9020 lastpage: 9029 published: 2021-07-01 00:00:00 +0000 - title: 'On Linear Identifiability of Learned Representations' abstract: 'Identifiability is a desirable property of a statistical model: it implies that the true model parameters may be estimated to any desired precision, given sufficient computational resources and data. We study identifiability in the context of representation learning: discovering nonlinear data representations that are optimal with respect to some downstream task. When parameterized as deep neural networks, such representation functions lack identifiability in parameter space, because they are over-parameterized by design. In this paper, building on recent advances in nonlinear Independent Components Analysis, we aim to rehabilitate identifiability by showing that a large family of discriminative models are in fact identifiable in function space, up to a linear indeterminacy. Many models for representation learning in a wide variety of domains have been identifiable in this sense, including text, images and audio, state-of-the-art at time of publication. We derive sufficient conditions for linear identifiability and provide empirical support for the result on both simulated and real-world data.' volume: 139 URL: https://proceedings.mlr.press/v139/roeder21a.html PDF: http://proceedings.mlr.press/v139/roeder21a/roeder21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-roeder21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Geoffrey family: Roeder - given: Luke family: Metz - given: Durk family: Kingma editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9030-9039 id: roeder21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9030 lastpage: 9039 published: 2021-07-01 00:00:00 +0000 - title: 'Representation Matters: Assessing the Importance of Subgroup Allocations in Training Data' abstract: 'Collecting more diverse and representative training data is often touted as a remedy for the disparate performance of machine learning predictors across subpopulations. However, a precise framework for understanding how dataset properties like diversity affect learning outcomes is largely lacking. By casting data collection as part of the learning process, we demonstrate that diverse representation in training data is key not only to increasing subgroup performances, but also to achieving population-level objectives. 
Our analysis and experiments describe how dataset compositions influence performance and provide constructive results for using trends in existing data, alongside domain knowledge, to help guide intentional, objective-aware dataset design.' volume: 139 URL: https://proceedings.mlr.press/v139/rolf21a.html PDF: http://proceedings.mlr.press/v139/rolf21a/rolf21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-rolf21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Esther family: Rolf - given: Theodora T family: Worledge - given: Benjamin family: Recht - given: Michael family: Jordan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9040-9051 id: rolf21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9040 lastpage: 9051 published: 2021-07-01 00:00:00 +0000 - title: 'TeachMyAgent: a Benchmark for Automatic Curriculum Learning in Deep RL' abstract: 'Training autonomous agents able to generalize to multiple tasks is a key target of Deep Reinforcement Learning (DRL) research. In parallel to improving DRL algorithms themselves, Automatic Curriculum Learning (ACL) studies how teacher algorithms can train DRL agents more efficiently by adapting task selection to their evolving abilities. While multiple standard benchmarks exist to compare DRL agents, there is currently no such benchmark for ACL algorithms. Thus, comparing existing approaches is difficult, as too many experimental parameters differ from paper to paper. In this work, we identify several key challenges faced by ACL algorithms. Based on these, we present TeachMyAgent (TA), a benchmark of current ACL algorithms leveraging procedural task generation. It includes 1) challenge-specific unit-tests using variants of a procedural Box2D bipedal walker environment, and 2) a new procedural Parkour environment combining most ACL challenges, making it ideal for global performance assessment. We then use TeachMyAgent to conduct a comparative study of representative existing approaches, showcasing the competitiveness of some ACL algorithms that do not use expert knowledge. We also show that the Parkour environment remains an open problem. We open-source our environments, all studied ACL algorithms (collected from open-source code or re-implemented), and DRL students in a Python package available at https://github.com/flowersteam/TeachMyAgent.' volume: 139 URL: https://proceedings.mlr.press/v139/romac21a.html PDF: http://proceedings.mlr.press/v139/romac21a/romac21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-romac21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Clément family: Romac - given: Rémy family: Portelas - given: Katja family: Hofmann - given: Pierre-Yves family: Oudeyer editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9052-9063 id: romac21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9052 lastpage: 9063 published: 2021-07-01 00:00:00 +0000 - title: 'Discretization Drift in Two-Player Games' abstract: 'Gradient-based methods for two-player games produce rich dynamics that can solve challenging problems, yet can be difficult to stabilize and understand.
Part of this complexity originates from the discrete update steps given by simultaneous or alternating gradient descent, which causes each player to drift away from the continuous gradient flow – a phenomenon we call discretization drift. Using backward error analysis, we derive modified continuous dynamical systems that closely follow the discrete dynamics. These modified dynamics provide an insight into the notorious challenges associated with zero-sum games, including Generative Adversarial Networks. In particular, we identify distinct components of the discretization drift that can alter performance and in some cases destabilize the game. Finally, quantifying discretization drift allows us to identify regularizers that explicitly cancel harmful forms of drift or strengthen beneficial forms of drift, and thus improve performance of GAN training.' volume: 139 URL: https://proceedings.mlr.press/v139/rosca21a.html PDF: http://proceedings.mlr.press/v139/rosca21a/rosca21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-rosca21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mihaela C family: Rosca - given: Yan family: Wu - given: Benoit family: Dherin - given: David family: Barrett editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9064-9074 id: rosca21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9064 lastpage: 9074 published: 2021-07-01 00:00:00 +0000 - title: 'On the Predictability of Pruning Across Scales' abstract: 'We show that the error of iteratively magnitude-pruned networks empirically follows a scaling law with interpretable coefficients that depend on the architecture and task. We functionally approximate the error of the pruned networks, showing it is predictable in terms of an invariant tying width, depth, and pruning level, such that networks of vastly different pruned densities are interchangeable. We demonstrate the accuracy of this approximation over orders of magnitude in depth, width, dataset size, and density. We show that the functional form holds (generalizes) for large scale data (e.g., ImageNet) and architectures (e.g., ResNets). As neural networks become ever larger and costlier to train, our findings suggest a framework for reasoning conceptually and analytically about a standard method for unstructured pruning.' volume: 139 URL: https://proceedings.mlr.press/v139/rosenfeld21a.html PDF: http://proceedings.mlr.press/v139/rosenfeld21a/rosenfeld21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-rosenfeld21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jonathan S family: Rosenfeld - given: Jonathan family: Frankle - given: Michael family: Carbin - given: Nir family: Shavit editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9075-9083 id: rosenfeld21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9075 lastpage: 9083 published: 2021-07-01 00:00:00 +0000 - title: 'Benchmarks, Algorithms, and Metrics for Hierarchical Disentanglement' abstract: 'In representation learning, there has been recent interest in developing algorithms to disentangle the ground-truth generative factors behind a dataset, and metrics to quantify how fully this occurs. 
However, these algorithms and metrics often assume that both representations and ground-truth factors are flat, continuous, and factorized, whereas many real-world generative processes involve rich hierarchical structure, mixtures of discrete and continuous variables with dependence between them, and even varying intrinsic dimensionality. In this work, we develop benchmarks, algorithms, and metrics for learning such hierarchical representations.' volume: 139 URL: https://proceedings.mlr.press/v139/ross21a.html PDF: http://proceedings.mlr.press/v139/ross21a/ross21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ross21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Andrew family: Ross - given: Finale family: Doshi-Velez editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9084-9094 id: ross21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9084 lastpage: 9094 published: 2021-07-01 00:00:00 +0000 - title: 'Simultaneous Similarity-based Self-Distillation for Deep Metric Learning' abstract: 'Deep Metric Learning (DML) provides a crucial tool for visual similarity and zero-shot retrieval applications by learning generalizing embedding spaces, although recent work in DML has shown strong performance saturation across training objectives. However, generalization capacity is known to scale with the embedding space dimensionality. Unfortunately, high dimensional embeddings also create higher retrieval cost for downstream applications. To remedy this, we propose S2SD - Simultaneous Similarity-based Self-distillation. S2SD extends DML with knowledge distillation from auxiliary, high-dimensional embedding and feature spaces to leverage complementary context during training while retaining test-time cost and with negligible changes to the training time. Experiments and ablations across different objectives and standard benchmarks show S2SD offering highly significant improvements of up to 7% in Recall@1, while also setting a new state-of-the-art.' volume: 139 URL: https://proceedings.mlr.press/v139/roth21a.html PDF: http://proceedings.mlr.press/v139/roth21a/roth21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-roth21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Karsten family: Roth - given: Timo family: Milbich - given: Bjorn family: Ommer - given: Joseph Paul family: Cohen - given: Marzyeh family: Ghassemi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9095-9106 id: roth21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9095 lastpage: 9106 published: 2021-07-01 00:00:00 +0000 - title: 'Multi-group Agnostic PAC Learnability' abstract: 'An agnostic PAC learning algorithm finds a predictor that is competitive with the best predictor in a benchmark hypothesis class, where competitiveness is measured with respect to a given loss function. However, its predictions might be quite sub-optimal for structured subgroups of individuals, such as protected demographic groups. 
Motivated by such fairness concerns, we study “multi-group agnostic PAC learnability”: fixing a measure of loss, a benchmark class $\mathcal{H}$ and a (potentially) rich collection of subgroups $\mathcal{G}$, the objective is to learn a single predictor such that the loss experienced by every group $g \in \mathcal{G}$ is not much larger than the best possible loss for this group within $\mathcal{H}$. Under natural conditions, we provide a characterization of the loss functions for which such a predictor is guaranteed to exist. For any such loss function we construct a learning algorithm whose sample complexity is logarithmic in the size of the collection $\mathcal{G}$. Our results unify and extend previous positive and negative results from the multi-group fairness literature, which applied for specific loss functions.' volume: 139 URL: https://proceedings.mlr.press/v139/rothblum21a.html PDF: http://proceedings.mlr.press/v139/rothblum21a/rothblum21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-rothblum21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Guy N family: Rothblum - given: Gal family: Yona editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9107-9115 id: rothblum21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9107 lastpage: 9115 published: 2021-07-01 00:00:00 +0000 - title: 'PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees' abstract: 'Meta-learning can successfully acquire useful inductive biases from data. Yet, its generalization properties to unseen learning tasks are poorly understood. Particularly if the number of meta-training tasks is small, this raises concerns about overfitting. We provide a theoretical analysis using the PAC-Bayesian framework and derive novel generalization bounds for meta-learning. Using these bounds, we develop a class of PAC-optimal meta-learning algorithms with performance guarantees and a principled meta-level regularization. Unlike previous PAC-Bayesian meta-learners, our method results in a standard stochastic optimization problem which can be solved efficiently and scales well. When instantiating our PAC-optimal hyper-posterior (PACOH) with Gaussian processes and Bayesian Neural Networks as base learners, the resulting methods yield state-of-the-art performance, both in terms of predictive accuracy and the quality of uncertainty estimates. Thanks to their principled treatment of uncertainty, our meta-learners can also be successfully employed for sequential decision problems.'
volume: 139 URL: https://proceedings.mlr.press/v139/rothfuss21a.html PDF: http://proceedings.mlr.press/v139/rothfuss21a/rothfuss21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-rothfuss21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jonas family: Rothfuss - given: Vincent family: Fortuin - given: Martin family: Josifoski - given: Andreas family: Krause editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9116-9126 id: rothfuss21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9116 lastpage: 9126 published: 2021-07-01 00:00:00 +0000 - title: 'An Algorithm for Stochastic and Adversarial Bandits with Switching Costs' abstract: 'We propose an algorithm for stochastic and adversarial multiarmed bandits with switching costs, where the algorithm pays a price $\lambda$ every time it switches the arm being played. Our algorithm is based on an adaptation of the Tsallis-INF algorithm of Zimmert and Seldin (2021) and requires no prior knowledge of the regime or time horizon. In the oblivious adversarial setting it achieves the minimax optimal regret bound of $O((\lambda K)^{1/3}T^{2/3} + \sqrt{KT})$, where $T$ is the time horizon and $K$ is the number of arms. In the stochastically constrained adversarial regime, which includes the stochastic regime as a special case, it achieves a regret bound of $O\big((\lambda K)^{2/3} T^{1/3} + \ln T \sum_{i \neq i^*} \Delta_i^{-1}\big)$, where $\Delta_i$ are suboptimality gaps and $i^*$ is the unique optimal arm. In the special case of $\lambda = 0$ (no switching costs), both bounds are minimax optimal within constants. We also explore variants of the problem, where the switching cost is allowed to change over time. We provide an experimental evaluation showing the competitiveness of our algorithm with the relevant baselines in the stochastic, stochastically constrained adversarial, and adversarial regimes with fixed switching cost.' volume: 139 URL: https://proceedings.mlr.press/v139/rouyer21a.html PDF: http://proceedings.mlr.press/v139/rouyer21a/rouyer21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-rouyer21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chloé family: Rouyer - given: Yevgeny family: Seldin - given: Nicolò family: Cesa-Bianchi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9127-9135 id: rouyer21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9127 lastpage: 9135 published: 2021-07-01 00:00:00 +0000 - title: 'Improving Lossless Compression Rates via Monte Carlo Bits-Back Coding' abstract: 'Latent variable models have been successfully applied in lossless compression with the bits-back coding algorithm. However, bits-back suffers from an increase in the bitrate equal to the KL divergence between the approximate posterior and the true posterior. In this paper, we show how to remove this gap asymptotically by deriving bits-back coding algorithms from tighter variational bounds. The key idea is to exploit extended space representations of Monte Carlo estimators of the marginal likelihood. Naively applied, our schemes would require more initial bits than the standard bits-back coder, but we show how to drastically reduce this additional cost with couplings in the latent space.
When parallel architectures can be exploited, our coders can achieve better rates than bits-back with little additional cost. We demonstrate improved lossless compression rates in a variety of settings, especially in out-of-distribution or sequential data compression.' volume: 139 URL: https://proceedings.mlr.press/v139/ruan21a.html PDF: http://proceedings.mlr.press/v139/ruan21a/ruan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ruan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yangjun family: Ruan - given: Karen family: Ullrich - given: Daniel S family: Severo - given: James family: Townsend - given: Ashish family: Khisti - given: Arnaud family: Doucet - given: Alireza family: Makhzani - given: Chris family: Maddison editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9136-9147 id: ruan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9136 lastpage: 9147 published: 2021-07-01 00:00:00 +0000 - title: 'On Signal-to-Noise Ratio Issues in Variational Inference for Deep Gaussian Processes' abstract: 'We show that the gradient estimates used in training Deep Gaussian Processes (DGPs) with importance-weighted variational inference are susceptible to signal-to-noise ratio (SNR) issues. Specifically, we show both theoretically and via an extensive empirical evaluation that the SNR of the gradient estimates for the latent variable’s variational parameters decreases as the number of importance samples increases. As a result, these gradient estimates degrade to pure noise if the number of importance samples is too large. To address this pathology, we show how doubly-reparameterized gradient estimators, originally proposed for training variational autoencoders, can be adapted to the DGP setting and that the resultant estimators completely remedy the SNR issue, thereby providing more reliable training. Finally, we demonstrate that our fix can lead to consistent improvements in the predictive performance of DGP models.' volume: 139 URL: https://proceedings.mlr.press/v139/rudner21a.html PDF: http://proceedings.mlr.press/v139/rudner21a/rudner21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-rudner21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tim G. J. family: Rudner - given: Oscar family: Key - given: Yarin family: Gal - given: Tom family: Rainforth editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9148-9156 id: rudner21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9148 lastpage: 9156 published: 2021-07-01 00:00:00 +0000 - title: 'Tilting the playing field: Dynamical loss functions for machine learning' abstract: 'We show that learning can be improved by using loss functions that evolve cyclically during training to emphasize one class at a time. In underparameterized networks, such dynamical loss functions can lead to successful training for networks that fail to find deep minima of the standard cross-entropy loss. In overparameterized networks, dynamical loss functions can lead to better generalization. Improvement arises from the interplay of the changing loss landscape with the dynamics of the system as it evolves to minimize the loss. 
In particular, as the loss function oscillates, instabilities develop in the form of bifurcation cascades, which we study using the Hessian and Neural Tangent Kernel. Valleys in the landscape widen and deepen, and then narrow and rise as the loss landscape changes during a cycle. As the landscape narrows, the learning rate becomes too large and the network becomes unstable and bounces around the valley. This process ultimately pushes the system into deeper and wider regions of the loss landscape and is characterized by decreasing eigenvalues of the Hessian. This results in better regularized models with improved generalization performance.' volume: 139 URL: https://proceedings.mlr.press/v139/ruiz-garcia21a.html PDF: http://proceedings.mlr.press/v139/ruiz-garcia21a/ruiz-garcia21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ruiz-garcia21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Miguel family: Ruiz-Garcia - given: Ge family: Zhang - given: Samuel S family: Schoenholz - given: Andrea J. family: Liu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9157-9167 id: ruiz-garcia21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9157 lastpage: 9167 published: 2021-07-01 00:00:00 +0000 - title: 'UnICORNN: A recurrent model for learning very long time dependencies' abstract: 'The design of recurrent neural networks (RNNs) to accurately process sequential inputs with long-time dependencies is very challenging on account of the exploding and vanishing gradient problem. To overcome this, we propose a novel RNN architecture which is based on a structure preserving discretization of a Hamiltonian system of second-order ordinary differential equations that models networks of oscillators. The resulting RNN is fast, invertible (in time), memory efficient and we derive rigorous bounds on the hidden state gradients to prove the mitigation of the exploding and vanishing gradient problem. A suite of experiments are presented to demonstrate that the proposed RNN provides state of the art performance on a variety of learning tasks with (very) long-time dependencies.' volume: 139 URL: https://proceedings.mlr.press/v139/rusch21a.html PDF: http://proceedings.mlr.press/v139/rusch21a/rusch21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-rusch21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: T. Konstantin family: Rusch - given: Siddhartha family: Mishra editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9168-9178 id: rusch21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9168 lastpage: 9178 published: 2021-07-01 00:00:00 +0000 - title: 'Simple and Effective VAE Training with Calibrated Decoders' abstract: 'Variational autoencoders (VAEs) provide an effective and simple method for modeling complex distributions. However, training VAEs often requires considerable hyperparameter tuning to determine the optimal amount of information retained by the latent variable. We study the impact of calibrated decoders, which learn the uncertainty of the decoding distribution and can determine this amount of information automatically, on the VAE performance. 
While many methods for learning calibrated decoders have been proposed, many of the recent papers that employ VAEs rely on heuristic hyperparameters and ad-hoc modifications instead. We perform the first comprehensive comparative analysis of calibrated decoder and provide recommendations for simple and effective VAE training. Our analysis covers a range of datasets and several single-image and sequential VAE models. We further propose a simple but novel modification to the commonly used Gaussian decoder, which computes the prediction variance analytically. We observe empirically that using heuristic modifications is not necessary with our method.' volume: 139 URL: https://proceedings.mlr.press/v139/rybkin21a.html PDF: http://proceedings.mlr.press/v139/rybkin21a/rybkin21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-rybkin21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Oleh family: Rybkin - given: Kostas family: Daniilidis - given: Sergey family: Levine editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9179-9189 id: rybkin21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9179 lastpage: 9189 published: 2021-07-01 00:00:00 +0000 - title: 'Model-Based Reinforcement Learning via Latent-Space Collocation' abstract: 'The ability to plan into the future while utilizing only raw high-dimensional observations, such as images, can provide autonomous agents with broad and general capabilities. However, realistic tasks require performing temporally extended reasoning, and cannot be solved with only myopic, short-sighted planning. Recent work in model-based reinforcement learning (RL) has shown impressive results on tasks that require only short-horizon reasoning. In this work, we study how the long-horizon planning abilities can be improved with an algorithm that optimizes over sequences of states, rather than actions, which allows better credit assignment. To achieve this, we draw on the idea of collocation and adapt it to the image-based setting by leveraging probabilistic latent variable models, resulting in an algorithm that optimizes trajectories over latent variables. Our latent collocation method (LatCo) provides a general and effective visual planning approach, and significantly outperforms prior model-based approaches on challenging visual control tasks with sparse rewards and long-term goals. 
See the videos on the supplementary website \url{https://sites.google.com/view/latco-mbrl/.}' volume: 139 URL: https://proceedings.mlr.press/v139/rybkin21b.html PDF: http://proceedings.mlr.press/v139/rybkin21b/rybkin21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-rybkin21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Oleh family: Rybkin - given: Chuning family: Zhu - given: Anusha family: Nagabandi - given: Kostas family: Daniilidis - given: Igor family: Mordatch - given: Sergey family: Levine editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9190-9201 id: rybkin21b issued: date-parts: - 2021 - 7 - 1 firstpage: 9190 lastpage: 9201 published: 2021-07-01 00:00:00 +0000 - title: 'Training Data Subset Selection for Regression with Controlled Generalization Error' abstract: 'Data subset selection from a large number of training instances has been a successful approach toward efficient and cost-effective machine learning. However, models trained on a smaller subset may show poor generalization ability. In this paper, our goal is to design an algorithm for selecting a subset of the training data, so that the model can be trained quickly, without significantly sacrificing on accuracy. More specifically, we focus on data subset selection for $L_2$ regularized regression problems and provide a novel problem formulation which seeks to minimize the training loss with respect to both the trainable parameters and the subset of training data, subject to error bounds on the validation set. We tackle this problem using several technical innovations. First, we represent this problem with simplified constraints using the dual of the original training problem and show that the objective of this new representation is a monotone and $\alpha$-submodular function, for a wide variety of modeling choices. Such properties lead us to develop SELCON, an efficient majorization-minimization algorithm for data subset selection, that admits an approximation guarantee even when the training provides an imperfect estimate of the trained model. Finally, our experiments on several datasets show that SELCON trades off accuracy and efficiency more effectively than the current state-of-the-art.' volume: 139 URL: https://proceedings.mlr.press/v139/s21a.html PDF: http://proceedings.mlr.press/v139/s21a/s21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-s21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Durga family: S - given: Rishabh family: Iyer - given: Ganesh family: Ramakrishnan - given: Abir family: De editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9202-9212 id: s21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9202 lastpage: 9212 published: 2021-07-01 00:00:00 +0000 - title: 'Unsupervised Part Representation by Flow Capsules' abstract: 'Capsule networks aim to parse images into a hierarchy of objects, parts and relations. While promising, they remain limited by an inability to learn effective low level part descriptions. To address this issue we propose a way to learn primary capsule encoders that detect atomic parts from a single image. 
During training we exploit motion as a powerful perceptual cue for part definition, with an expressive decoder for part generation within a layered image model with occlusion. Experiments demonstrate robust part discovery in the presence of multiple objects, cluttered backgrounds, and occlusion. The learned part decoder is shown to infer the underlying shape masks, effectively filling in occluded regions of the detected shapes. We evaluate FlowCapsules on unsupervised part segmentation and unsupervised image classification.' volume: 139 URL: https://proceedings.mlr.press/v139/sabour21a.html PDF: http://proceedings.mlr.press/v139/sabour21a/sabour21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-sabour21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sara family: Sabour - given: Andrea family: Tagliasacchi - given: Soroosh family: Yazdani - given: Geoffrey family: Hinton - given: David J family: Fleet editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9213-9223 id: sabour21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9213 lastpage: 9223 published: 2021-07-01 00:00:00 +0000 - title: 'Stochastic Sign Descent Methods: New Algorithms and Better Theory' abstract: 'Various gradient compression schemes have been proposed to mitigate the communication cost in distributed training of large scale machine learning models. Sign-based methods, such as signSGD (Bernstein et al., 2018), have recently been gaining popularity because of their simple compression rule and connection to adaptive gradient methods, like ADAM. In this paper, we analyze sign-based methods for non-convex optimization in three key settings: (i) standard single node, (ii) parallel with shared data and (iii) distributed with partitioned data. For single machine case, we generalize the previous analysis of signSGD relying on intuitive bounds on success probabilities and allowing even biased estimators. Furthermore, we extend the analysis to parallel setting within a parameter server framework, where exponentially fast noise reduction is guaranteed with respect to number of nodes, maintaining $1$-bit compression in both directions and using small mini-batch sizes. Next, we identify a fundamental issue with signSGD to converge in distributed environment. To resolve this issue, we propose a new sign-based method, {\em Stochastic Sign Descent with Momentum (SSDM)}, which converges under standard bounded variance assumption with the optimal asymptotic rate. We validate several aspects of our theoretical findings with numerical experiments.' volume: 139 URL: https://proceedings.mlr.press/v139/safaryan21a.html PDF: http://proceedings.mlr.press/v139/safaryan21a/safaryan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-safaryan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mher family: Safaryan - given: Peter family: Richtarik editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9224-9234 id: safaryan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9224 lastpage: 9234 published: 2021-07-01 00:00:00 +0000 - title: 'Adversarial Dueling Bandits' abstract: 'We introduce the problem of regret minimization in Adversarial Dueling Bandits. 
As in classic Dueling Bandits, the learner has to repeatedly choose a pair of items and observe only a relative binary ‘win-loss’ feedback for this pair, but here this feedback is generated from an arbitrary preference matrix, possibly chosen adversarially. Our main result is an algorithm whose $T$-round regret compared to the \emph{Borda-winner} from a set of $K$ items is $\tilde{O}(K^{1/3}T^{2/3})$, as well as a matching $\Omega(K^{1/3}T^{2/3})$ lower bound. We also prove a similar high probability regret bound. We further consider a simpler \emph{fixed-gap} adversarial setup, which bridges between two extreme preference feedback models for dueling bandits: stationary preferences and an arbitrary sequence of preferences. For the fixed-gap adversarial setup we give an $\smash{ \tilde{O}((K/\Delta^2)\log{T}) }$ regret algorithm, where $\Delta$ is the gap in Borda scores between the best item and all other items, and show a lower bound of $\Omega(K/\Delta^2)$ indicating that our dependence on the main problem parameters $K$ and $\Delta$ is tight (up to logarithmic factors). Finally, we corroborate the theoretical results with empirical evaluations.' volume: 139 URL: https://proceedings.mlr.press/v139/saha21a.html PDF: http://proceedings.mlr.press/v139/saha21a/saha21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-saha21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Aadirupa family: Saha - given: Tomer family: Koren - given: Yishay family: Mansour editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9235-9244 id: saha21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9235 lastpage: 9244 published: 2021-07-01 00:00:00 +0000 - title: 'Dueling Convex Optimization' abstract: 'We address the problem of convex optimization with preference (dueling) feedback. Like the traditional optimization objective, the goal is to find the optimal point with the least possible query complexity, however, without the luxury of even a zeroth order feedback. Instead, the learner can only observe a single noisy bit which is win-loss feedback for a pair of queried points based on their function values. The problem is certainly of great practical relevance, as in many real-world scenarios, such as recommender systems or learning from customer preferences, the system feedback is often restricted to just one binary-bit preference information. We consider the problem of online convex optimization (OCO) solely by actively querying $\{0,1\}$ noisy-comparison feedback of decision point pairs, with the objective of finding a near-optimal point (function minimizer) with the least possible number of queries. For the non-stationary OCO setup, where the underlying convex function may change over time, we prove an impossibility result towards achieving the above objective. We next focus only on the stationary OCO problem, and our main contribution lies in designing a normalized gradient descent based algorithm towards finding an $\epsilon$-best optimal point. Towards this, our algorithm is shown to yield a convergence rate of $\tilde O(\nicefrac{d\beta}{\epsilon \nu^2})$ ($\nu$ being the noise parameter) when the underlying function is $\beta$-smooth.
Further, we show an improved convergence rate of just $\tilde O(\nicefrac{d\beta}{\alpha \nu^2} \log \frac{1}{\epsilon})$ when the function is additionally $\alpha$-strongly convex.' volume: 139 URL: https://proceedings.mlr.press/v139/saha21b.html PDF: http://proceedings.mlr.press/v139/saha21b/saha21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-saha21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Aadirupa family: Saha - given: Tomer family: Koren - given: Yishay family: Mansour editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9245-9254 id: saha21b issued: date-parts: - 2021 - 7 - 1 firstpage: 9245 lastpage: 9254 published: 2021-07-01 00:00:00 +0000 - title: 'Optimal regret algorithm for Pseudo-1d Bandit Convex Optimization' abstract: 'We study online learning with bandit feedback (i.e. the learner has access to only a zeroth-order oracle) where cost/reward functions $f_t$ admit a "pseudo-1d" structure, i.e. $f_t(w) = \ell_t(\hat{y}_t(w))$ where the output of $\hat{y}_t$ is one-dimensional. At each round, the learner observes context $x_t$, plays prediction $\hat{y}_t(w_t; x_t)$ (e.g. $\hat{y}_t(\cdot)=\langle x_t, \cdot\rangle$) for some $w_t \in \mathbb{R}^d$ and observes loss $\ell_t(\hat{y}_t(w_t))$ where $\ell_t$ is a convex Lipschitz-continuous function. The goal is to minimize the standard regret metric. This pseudo-1d bandit convex optimization problem (SBCO) arises frequently in domains such as online decision-making or parameter-tuning in large systems. For this problem, we first show a regret lower bound of $\min(\sqrt{dT}, T^{3/4})$ for any algorithm, where $T$ is the number of rounds. We propose a new algorithm that combines randomized online gradient descent with a kernelized exponential weights method to exploit the pseudo-1d structure effectively, guaranteeing the {\em optimal} regret bound mentioned above, up to additional logarithmic factors. In contrast, applying state-of-the-art online convex optimization methods leads to $\tilde{O}\left(\min\left(d^{9.5}\sqrt{T},\sqrt{d}T^{3/4}\right)\right)$ regret, which is significantly suboptimal in terms of $d$.' volume: 139 URL: https://proceedings.mlr.press/v139/saha21c.html PDF: http://proceedings.mlr.press/v139/saha21c/saha21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-saha21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Aadirupa family: Saha - given: Nagarajan family: Natarajan - given: Praneeth family: Netrapalli - given: Prateek family: Jain editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9255-9264 id: saha21c issued: date-parts: - 2021 - 7 - 1 firstpage: 9255 lastpage: 9264 published: 2021-07-01 00:00:00 +0000 - title: 'Asymptotics of Ridge Regression in Convolutional Models' abstract: 'Understanding generalization and estimation error of estimators for simple models such as linear and generalized linear models has attracted a lot of attention recently. This is in part due to an interesting observation made in the machine learning community that highly over-parameterized neural networks achieve zero training error, and yet they are able to generalize well over the test samples.
This phenomenon is captured by the so called double descent curve, where the generalization error starts decreasing again after the interpolation threshold. A series of recent works tried to explain such phenomenon for simple models. In this work, we analyze the asymptotics of estimation error in ridge estimators for convolutional linear models. These convolutional inverse problems, also known as deconvolution, naturally arise in different fields such as seismology, imaging, and acoustics among others. Our results hold for a large class of input distributions that include i.i.d. features as a special case. We derive exact formulae for estimation error of ridge estimators that hold in a certain high-dimensional regime. We show the double descent phenomenon in our experiments for convolutional models and show that our theoretical results match the experiments.' volume: 139 URL: https://proceedings.mlr.press/v139/sahraee-ardakan21a.html PDF: http://proceedings.mlr.press/v139/sahraee-ardakan21a/sahraee-ardakan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-sahraee-ardakan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mojtaba family: Sahraee-Ardakan - given: Tung family: Mai - given: Anup family: Rao - given: Ryan A. family: Rossi - given: Sundeep family: Rangan - given: Alyson K family: Fletcher editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9265-9275 id: sahraee-ardakan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9265 lastpage: 9275 published: 2021-07-01 00:00:00 +0000 - title: 'Momentum Residual Neural Networks' abstract: 'The training of deep residual neural networks (ResNets) with backpropagation has a memory cost that increases linearly with respect to the depth of the network. A simple way to circumvent this issue is to use reversible architectures. In this paper, we propose to change the forward rule of a ResNet by adding a momentum term. The resulting networks, momentum residual neural networks (MomentumNets), are invertible. Unlike previous invertible architectures, they can be used as a drop-in replacement for any existing ResNet block. We show that MomentumNets can be interpreted in the infinitesimal step size regime as second-order ordinary differential equations (ODEs) and exactly characterize how adding momentum progressively increases the representation capabilities of MomentumNets: they can learn any linear mapping up to a multiplicative factor, while ResNets cannot. In a learning to optimize setting, where convergence to a fixed point is required, we show theoretically and empirically that our method succeeds while existing invertible architectures fail. We show on CIFAR and ImageNet that MomentumNets have the same accuracy as ResNets, while having a much smaller memory footprint, and show that pre-trained MomentumNets are promising for fine-tuning models.' volume: 139 URL: https://proceedings.mlr.press/v139/sander21a.html PDF: http://proceedings.mlr.press/v139/sander21a/sander21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-sander21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Michael E. 
family: Sander - given: Pierre family: Ablin - given: Mathieu family: Blondel - given: Gabriel family: Peyré editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9276-9287 id: sander21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9276 lastpage: 9287 published: 2021-07-01 00:00:00 +0000 - title: 'Meta-Learning Bidirectional Update Rules' abstract: 'In this paper, we introduce a new type of generalized neural network where neurons and synapses maintain multiple states. We show that classical gradient-based backpropagation in neural networks can be seen as a special case of a two-state network where one state is used for activations and another for gradients, with update rules derived from the chain rule. In our generalized framework, networks have neither explicit notion of nor ever receive gradients. The synapses and neurons are updated using a bidirectional Hebb-style update rule parameterized by a shared low-dimensional "genome". We show that such genomes can be meta-learned from scratch, using either conventional optimization techniques, or evolutionary strategies, such as CMA-ES. Resulting update rules generalize to unseen tasks and train faster than gradient descent based optimizers for several standard computer vision and synthetic tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/sandler21a.html PDF: http://proceedings.mlr.press/v139/sandler21a/sandler21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-sandler21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mark family: Sandler - given: Max family: Vladymyrov - given: Andrey family: Zhmoginov - given: Nolan family: Miller - given: Tom family: Madams - given: Andrew family: Jackson - given: Blaise Agüera Y family: Arcas editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9288-9300 id: sandler21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9288 lastpage: 9300 published: 2021-07-01 00:00:00 +0000 - title: 'Recomposing the Reinforcement Learning Building Blocks with Hypernetworks' abstract: 'The Reinforcement Learning (RL) building blocks, i.e. $Q$-functions and policy networks, usually take elements from the cartesian product of two domains as input. In particular, the input of the $Q$-function is both the state and the action, and in multi-task problems (Meta-RL) the policy can take a state and a context. Standard architectures tend to ignore these variables’ underlying interpretations and simply concatenate their features into a single vector. In this work, we argue that this choice may lead to poor gradient estimation in actor-critic algorithms and high variance learning steps in Meta-RL algorithms. To consider the interaction between the input variables, we suggest using a Hypernetwork architecture where a primary network determines the weights of a conditional dynamic network. We show that this approach improves the gradient approximation and reduces the learning step variance, which both accelerates learning and improves the final performance. We demonstrate a consistent improvement across different locomotion tasks and different algorithms both in RL (TD3 and SAC) and in Meta-RL (MAML and PEARL).' 
volume: 139 URL: https://proceedings.mlr.press/v139/sarafian21a.html PDF: http://proceedings.mlr.press/v139/sarafian21a/sarafian21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-sarafian21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Elad family: Sarafian - given: Shai family: Keynan - given: Sarit family: Kraus editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9301-9312 id: sarafian21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9301 lastpage: 9312 published: 2021-07-01 00:00:00 +0000 - title: 'Towards Understanding Learning in Neural Networks with Linear Teachers' abstract: 'Can a neural network minimizing cross-entropy learn linearly separable data? Despite progress in the theory of deep learning, this question remains unsolved. Here we prove that SGD globally optimizes this learning problem for a two-layer network with Leaky ReLU activations. The learned network can in principle be very complex. However, empirical evidence suggests that it often turns out to be approximately linear. We provide theoretical support for this phenomenon by proving that if network weights converge to two weight clusters, this will imply an approximately linear decision boundary. Finally, we show a condition on the optimization that leads to weight clustering. We provide empirical results that validate our theoretical analysis.' volume: 139 URL: https://proceedings.mlr.press/v139/sarussi21a.html PDF: http://proceedings.mlr.press/v139/sarussi21a/sarussi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-sarussi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Roei family: Sarussi - given: Alon family: Brutzkus - given: Amir family: Globerson editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9313-9322 id: sarussi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9313 lastpage: 9322 published: 2021-07-01 00:00:00 +0000 - title: 'E(n) Equivariant Graph Neural Networks' abstract: 'This paper introduces a new model to learn graph neural networks equivariant to rotations, translations, reflections and permutations called E(n)-Equivariant Graph Neural Networks (EGNNs). In contrast with existing methods, our work does not require computationally expensive higher-order representations in intermediate layers while it still achieves competitive or better performance. In addition, whereas existing methods are limited to equivariance on 3 dimensional spaces, our model is easily scaled to higher-dimensional spaces. We demonstrate the effectiveness of our method on dynamical systems modelling, representation learning in graph autoencoders and predicting molecular properties.' 
volume: 139 URL: https://proceedings.mlr.press/v139/satorras21a.html PDF: http://proceedings.mlr.press/v139/satorras21a/satorras21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-satorras21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Vı́ctor Garcia family: Satorras - given: Emiel family: Hoogeboom - given: Max family: Welling editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9323-9332 id: satorras21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9323 lastpage: 9332 published: 2021-07-01 00:00:00 +0000 - title: 'A Representation Learning Perspective on the Importance of Train-Validation Splitting in Meta-Learning' abstract: 'An effective approach in meta-learning is to utilize multiple “train tasks” to learn a good initialization for model parameters that can help solve unseen “test tasks” with very few samples by fine-tuning from this initialization. Although successful in practice, theoretical understanding of such methods is limited. This work studies an important aspect of these methods: splitting the data from each task into train (support) and validation (query) sets during meta-training. Inspired by recent work (Raghu et al., 2020), we view such meta-learning methods through the lens of representation learning and argue that the train-validation split encourages the learned representation to be {\em low-rank} without compromising on expressivity, as opposed to the non-splitting variant that encourages high-rank representations. Since sample efficiency benefits from low-rankness, the splitting strategy will require very few samples to solve unseen test tasks. We present theoretical results that formalize this idea for linear representation learning on a subspace meta-learning instance, and experimentally verify this practical benefit of splitting in simulations and on standard meta-learning benchmarks.' volume: 139 URL: https://proceedings.mlr.press/v139/saunshi21a.html PDF: http://proceedings.mlr.press/v139/saunshi21a/saunshi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-saunshi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Nikunj family: Saunshi - given: Arushi family: Gupta - given: Wei family: Hu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9333-9343 id: saunshi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9333 lastpage: 9343 published: 2021-07-01 00:00:00 +0000 - title: 'Low-Rank Sinkhorn Factorization' abstract: 'Several recent applications of optimal transport (OT) theory to machine learning have relied on regularization, notably entropy and the Sinkhorn algorithm. Because matrix-vector products are pervasive in the Sinkhorn algorithm, several works have proposed to \textit{approximate} kernel matrices appearing in its iterations using low-rank factors. Another route lies instead in imposing low-nonnegative rank constraints on the feasible set of couplings considered in OT problems, with no approximations on cost nor kernel matrices. This route was first explored by \citet{forrow2018statistical}, who proposed an algorithm tailored for the squared Euclidean ground cost, using a proxy objective that can be solved through the machinery of regularized 2-Wasserstein barycenters. 
Building on this, we introduce in this work a generic approach that aims at solving, in full generality, the OT problem under low-nonnegative rank constraints with arbitrary costs. Our algorithm relies on an explicit factorization of low-rank couplings as a product of \textit{sub-coupling} factors linked by a common marginal; similar to an NMF approach, we alternately update these factors. We prove the non-asymptotic stationary convergence of this algorithm and illustrate its efficiency on benchmark experiments.' volume: 139 URL: https://proceedings.mlr.press/v139/scetbon21a.html PDF: http://proceedings.mlr.press/v139/scetbon21a/scetbon21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-scetbon21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Meyer family: Scetbon - given: Marco family: Cuturi - given: Gabriel family: Peyré editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9344-9354 id: scetbon21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9344 lastpage: 9354 published: 2021-07-01 00:00:00 +0000 - title: 'Linear Transformers Are Secretly Fast Weight Programmers' abstract: 'We show the formal equivalence of linearised self-attention mechanisms and fast weight controllers from the early ’90s, where a slow neural net learns by gradient descent to program the fast weights of another net through sequences of elementary programming instructions which are additive outer products of self-invented activation patterns (today called keys and values). Such Fast Weight Programmers (FWPs) learn to manipulate the contents of a finite memory and dynamically interact with it. We infer a memory capacity limitation of recent linearised softmax attention variants, and replace the purely additive outer products by a delta rule-like programming instruction, such that the FWP can more easily learn to correct the current mapping from keys to values. The FWP also learns to compute dynamically changing learning rates. We also propose a new kernel function to linearise attention which balances simplicity and effectiveness. We conduct experiments on synthetic retrieval problems as well as standard machine translation and language modelling tasks which demonstrate the benefits of our methods.' volume: 139 URL: https://proceedings.mlr.press/v139/schlag21a.html PDF: http://proceedings.mlr.press/v139/schlag21a/schlag21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-schlag21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Imanol family: Schlag - given: Kazuki family: Irie - given: Jürgen family: Schmidhuber editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9355-9366 id: schlag21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9355 lastpage: 9366 published: 2021-07-01 00:00:00 +0000 - title: 'Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers' abstract: 'Choosing the optimizer is considered to be among the most crucial design decisions in deep learning, and it is not an easy one. The growing literature now lists hundreds of optimization methods. In the absence of clear theoretical guidance and conclusive empirical evidence, the decision is often made based on anecdotes. 
In this work, we aim to replace these anecdotes, if not with a conclusive ranking, then at least with evidence-backed heuristics. To do so, we perform an extensive, standardized benchmark of fifteen particularly popular deep learning optimizers while giving a concise overview of the wide range of possible choices. Analyzing more than 50,000 individual runs, we contribute the following three points: (i) Optimizer performance varies greatly across tasks. (ii) We observe that evaluating multiple optimizers with default parameters works approximately as well as tuning the hyperparameters of a single, fixed optimizer. (iii) While we cannot discern an optimization method clearly dominating across all tested tasks, we identify a significantly reduced subset of specific optimizers and parameter choices that generally lead to competitive results in our experiments: Adam remains a strong contender, with newer methods failing to significantly and consistently outperform it. Our open-sourced results are available as challenging and well-tuned baselines for more meaningful evaluations of novel optimization methods without requiring any further computational efforts.' volume: 139 URL: https://proceedings.mlr.press/v139/schmidt21a.html PDF: http://proceedings.mlr.press/v139/schmidt21a/schmidt21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-schmidt21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Robin M family: Schmidt - given: Frank family: Schneider - given: Philipp family: Hennig editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9367-9376 id: schmidt21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9367 lastpage: 9376 published: 2021-07-01 00:00:00 +0000 - title: 'Equivariant message passing for the prediction of tensorial properties and molecular spectra' abstract: 'Message passing neural networks have become a method of choice for learning on graphs, in particular the prediction of chemical properties and the acceleration of molecular dynamics studies. While they readily scale to large training data sets, previous approaches have proven to be less data efficient than kernel methods. We identify limitations of invariant representations as a major reason and extend the message passing formulation to rotationally equivariant representations. On this basis, we propose the polarizable atom interaction neural network (PaiNN) and improve on common molecule benchmarks over previous networks, while reducing model size and inference time. We leverage the equivariant atomwise representations obtained by PaiNN for the prediction of tensorial properties. Finally, we apply this to the simulation of molecular spectra, achieving speedups of 4-5 orders of magnitude compared to the electronic structure reference.' 
volume: 139 URL: https://proceedings.mlr.press/v139/schutt21a.html PDF: http://proceedings.mlr.press/v139/schutt21a/schutt21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-schutt21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kristof family: Schütt - given: Oliver family: Unke - given: Michael family: Gastegger editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9377-9388 id: schutt21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9377 lastpage: 9388 published: 2021-07-01 00:00:00 +0000 - title: 'Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks' abstract: 'Data poisoning and backdoor attacks manipulate training data in order to cause models to fail during inference. A recent survey of industry practitioners found that data poisoning is the number one concern among threats ranging from model stealing to adversarial attacks. However, it remains unclear exactly how dangerous poisoning methods are and which ones are more effective considering that these methods, even ones with identical objectives, have not been tested in consistent or realistic settings. We observe that data poisoning and backdoor attacks are highly sensitive to variations in the testing setup. Moreover, we find that existing methods may not generalize to realistic settings. While these existing works serve as valuable prototypes for data poisoning, we apply rigorous tests to determine the extent to which we should fear them. In order to promote fair comparison in future work, we develop standardized benchmarks for data poisoning and backdoor attacks.' volume: 139 URL: https://proceedings.mlr.press/v139/schwarzschild21a.html PDF: http://proceedings.mlr.press/v139/schwarzschild21a/schwarzschild21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-schwarzschild21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Avi family: Schwarzschild - given: Micah family: Goldblum - given: Arjun family: Gupta - given: John P family: Dickerson - given: Tom family: Goldstein editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9389-9398 id: schwarzschild21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9389 lastpage: 9398 published: 2021-07-01 00:00:00 +0000 - title: 'Connecting Sphere Manifolds Hierarchically for Regularization' abstract: 'This paper considers classification problems with hierarchically organized classes. We force the classifier (hyperplane) of each class to belong to a sphere manifold, whose center is the classifier of its super-class. Then, individual sphere manifolds are connected based on their hierarchical relations. Our technique replaces the last layer of a neural network by combining a spherical fully-connected layer with a hierarchical layer. This regularization is shown to improve the performance of widely used deep neural network architectures (ResNet and DenseNet) on publicly available datasets (CIFAR100, CUB200, Stanford dogs, Stanford cars, and Tiny-ImageNet).' 
volume: 139 URL: https://proceedings.mlr.press/v139/scieur21a.html PDF: http://proceedings.mlr.press/v139/scieur21a/scieur21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-scieur21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Damien family: Scieur - given: Youngsung family: Kim editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9399-9409 id: scieur21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9399 lastpage: 9409 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Intra-Batch Connections for Deep Metric Learning' abstract: 'The goal of metric learning is to learn a function that maps samples to a lower-dimensional space where similar samples lie closer than dissimilar ones. Particularly, deep metric learning utilizes neural networks to learn such a mapping. Most approaches rely on losses that only take the relations between pairs or triplets of samples into account, which either belong to the same class or two different classes. However, these methods do not explore the embedding space in its entirety. To this end, we propose an approach based on message passing networks that takes all the relations in a mini-batch into account. We refine embedding vectors by exchanging messages among all samples in a given batch allowing the training process to be aware of its overall structure. Since not all samples are equally important to predict a decision boundary, we use an attention mechanism during message passing to allow samples to weigh the importance of each neighbor accordingly. We achieve state-of-the-art results on clustering and image retrieval on the CUB-200-2011, Cars196, Stanford Online Products, and In-Shop Clothes datasets. To facilitate further research, we make available the code and the models at https://github.com/dvl-tum/intra_batch_connections.' volume: 139 URL: https://proceedings.mlr.press/v139/seidenschwarz21a.html PDF: http://proceedings.mlr.press/v139/seidenschwarz21a/seidenschwarz21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-seidenschwarz21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jenny Denise family: Seidenschwarz - given: Ismail family: Elezi - given: Laura family: Leal-Taixé editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9410-9421 id: seidenschwarz21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9410 lastpage: 9421 published: 2021-07-01 00:00:00 +0000 - title: 'Top-k eXtreme Contextual Bandits with Arm Hierarchy' abstract: 'Motivated by modern applications, such as online advertisement and recommender systems, we study the top-$k$ extreme contextual bandits problem, where the total number of arms can be enormous, and the learner is allowed to select $k$ arms and observe all or some of the rewards for the chosen arms. We first propose an algorithm for the non-extreme realizable setting, utilizing the Inverse Gap Weighting strategy for selecting multiple arms. We show that our algorithm has a regret guarantee of $O(k\sqrt{(A-k+1)T \log (|F|T)})$, where $A$ is the total number of arms and $F$ is the class containing the regression function, while only requiring $\tilde{O}(A)$ computation per time step. 
In the extreme setting, where the total number of arms can be in the millions, we propose a practically-motivated arm hierarchy model that induces a certain structure in mean rewards to ensure statistical and computational efficiency. The hierarchical structure allows for an exponential reduction in the number of relevant arms for each context, thus resulting in a regret guarantee of $O(k\sqrt{(\log A-k+1)T \log (|F|T)})$. Finally, we implement our algorithm using a hierarchical linear function class and show superior performance with respect to well-known benchmarks on simulated bandit feedback experiments using extreme multi-label classification datasets. On a dataset with three million arms, our reduction scheme has an average inference time of only 7.9 milliseconds, which is a 100x improvement.' volume: 139 URL: https://proceedings.mlr.press/v139/sen21a.html PDF: http://proceedings.mlr.press/v139/sen21a/sen21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-sen21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Rajat family: Sen - given: Alexander family: Rakhlin - given: Lexing family: Ying - given: Rahul family: Kidambi - given: Dean family: Foster - given: Daniel N family: Hill - given: Inderjit S. family: Dhillon editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9422-9433 id: sen21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9422 lastpage: 9433 published: 2021-07-01 00:00:00 +0000 - title: 'Pure Exploration and Regret Minimization in Matching Bandits' abstract: 'Finding an optimal matching in a weighted graph is a standard combinatorial problem. We consider its semi-bandit version where either a pair or a full matching is sampled sequentially. We prove that it is possible to leverage a rank-1 assumption on the adjacency matrix to reduce the sample complexity and the regret of off-the-shelf algorithms up to reaching a linear dependency in the number of vertices (up to poly-log terms).' volume: 139 URL: https://proceedings.mlr.press/v139/sentenac21a.html PDF: http://proceedings.mlr.press/v139/sentenac21a/sentenac21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-sentenac21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Flore family: Sentenac - given: Jialin family: Yi - given: Clement family: Calauzenes - given: Vianney family: Perchet - given: Milan family: Vojnovic editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9434-9442 id: sentenac21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9434 lastpage: 9442 published: 2021-07-01 00:00:00 +0000 - title: 'State Entropy Maximization with Random Encoders for Efficient Exploration' abstract: 'Recent exploration methods have proven to be a recipe for improving sample-efficiency in deep reinforcement learning (RL). However, efficient exploration in high-dimensional observation spaces still remains a challenge. This paper presents Random Encoders for Efficient Exploration (RE3), an exploration method that utilizes state entropy as an intrinsic reward. In order to estimate state entropy in environments with high-dimensional observations, we utilize a k-nearest neighbor entropy estimator in the low-dimensional representation space of a convolutional encoder. 
In particular, we find that the state entropy can be estimated in a stable and compute-efficient manner by utilizing a randomly initialized encoder, which is fixed throughout training. Our experiments show that RE3 significantly improves the sample-efficiency of both model-free and model-based RL methods on locomotion and navigation tasks from DeepMind Control Suite and MiniGrid benchmarks. We also show that RE3 allows learning diverse behaviors without extrinsic rewards, effectively improving sample-efficiency in downstream tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/seo21a.html PDF: http://proceedings.mlr.press/v139/seo21a/seo21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-seo21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Younggyo family: Seo - given: Lili family: Chen - given: Jinwoo family: Shin - given: Honglak family: Lee - given: Pieter family: Abbeel - given: Kimin family: Lee editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9443-9454 id: seo21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9443 lastpage: 9454 published: 2021-07-01 00:00:00 +0000 - title: 'Online Submodular Resource Allocation with Applications to Rebalancing Shared Mobility Systems' abstract: 'Motivated by applications in shared mobility, we address the problem of allocating a group of agents to a set of resources to maximize a cumulative welfare objective. We model the welfare obtainable from each resource as a monotone DR-submodular function which is a-priori unknown and can only be learned by observing the welfare of selected allocations. Moreover, these functions can depend on time-varying contextual information. We propose a distributed scheme to maximize the cumulative welfare by designing a repeated game among the agents, who learn to act via regret minimization. We propose two design choices for the game rewards based on upper confidence bounds built around the unknown welfare functions. We analyze them theoretically, bounding the gap between the cumulative welfare of the game and the highest cumulative welfare obtainable in hindsight. Finally, we evaluate our approach in a realistic case study of rebalancing a shared mobility system (i.e., positioning vehicles in strategic areas). From observed trip data, our algorithm gradually learns the users’ demand pattern and improves the overall system operation.' volume: 139 URL: https://proceedings.mlr.press/v139/sessa21a.html PDF: http://proceedings.mlr.press/v139/sessa21a/sessa21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-sessa21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Pier Giuseppe family: Sessa - given: Ilija family: Bogunovic - given: Andreas family: Krause - given: Maryam family: Kamgarpour editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9455-9464 id: sessa21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9455 lastpage: 9464 published: 2021-07-01 00:00:00 +0000 - title: 'RRL: Resnet as representation for Reinforcement Learning' abstract: 'The ability to autonomously learn behaviors via direct interactions in uninstrumented environments can lead to generalist robots capable of enhancing productivity or providing care in unstructured settings like homes. 
Such uninstrumented settings warrant operations using only the robot’s proprioceptive sensors such as onboard cameras, joint encoders, etc., which can be challenging for policy learning owing to the high dimensionality and partial observability issues. We propose RRL: Resnet as representation for Reinforcement Learning – a straightforward yet effective approach that can learn complex behaviors directly from proprioceptive inputs. RRL fuses features extracted from pre-trained Resnet into the standard reinforcement learning pipeline and delivers results comparable to learning directly from the state. In a simulated dexterous manipulation benchmark, where state-of-the-art methods fail to make significant progress, RRL delivers contact-rich behaviors. The appeal of RRL lies in its simplicity in bringing together progress from the fields of Representation Learning, Imitation Learning, and Reinforcement Learning. Its effectiveness in learning behaviors directly from visual inputs with performance and sample efficiency matching learning directly from the state, even in complex high-dimensional domains, is far from obvious.' volume: 139 URL: https://proceedings.mlr.press/v139/shah21a.html PDF: http://proceedings.mlr.press/v139/shah21a/shah21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-shah21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Rutav M family: Shah - given: Vikash family: Kumar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9465-9476 id: shah21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9465 lastpage: 9476 published: 2021-07-01 00:00:00 +0000 - title: 'Equivariant Networks for Pixelized Spheres' abstract: 'Pixelizations of Platonic solids such as the cube and icosahedron have been widely used to represent spherical data, from climate records to Cosmic Microwave Background maps. Platonic solids have well-known global symmetries. Once we pixelize each face of the solid, each face also possesses its own local symmetries in the form of Euclidean isometries. One way to combine these symmetries is through a hierarchy. However, this approach does not adequately model the interplay between the two levels of symmetry transformations. We show how to model this interplay using ideas from group theory, identify the equivariant linear maps, and introduce equivariant padding that respects these symmetries. Deep networks that use these maps as their building blocks generalize gauge equivariant CNNs on pixelized spheres. These deep networks achieve state-of-the-art results on semantic segmentation for climate data and omnidirectional image processing. Code is available at https://git.io/JGiZA.' 
volume: 139 URL: https://proceedings.mlr.press/v139/shakerinava21a.html PDF: http://proceedings.mlr.press/v139/shakerinava21a/shakerinava21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-shakerinava21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mehran family: Shakerinava - given: Siamak family: Ravanbakhsh editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9477-9488 id: shakerinava21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9477 lastpage: 9488 published: 2021-07-01 00:00:00 +0000 - title: 'Personalized Federated Learning using Hypernetworks' abstract: 'Personalized federated learning is tasked with training machine learning models for multiple clients, each with its own data distribution. The goal is to train personalized models collaboratively while accounting for data disparities across clients and reducing communication costs. We propose a novel approach to this problem using hypernetworks, termed pFedHN for personalized Federated HyperNetworks. In this approach, a central hypernetwork model is trained to generate a set of models, one model for each client. This architecture provides effective parameter sharing across clients while maintaining the capacity to generate unique and diverse personal models. Furthermore, since hypernetwork parameters are never transmitted, this approach decouples the communication cost from the trainable model size. We test pFedHN empirically in several personalized federated learning challenges and find that it outperforms previous methods. Finally, since hypernetworks share information across clients, we show that pFedHN can generalize better to new clients whose distributions differ from any client observed during training.' volume: 139 URL: https://proceedings.mlr.press/v139/shamsian21a.html PDF: http://proceedings.mlr.press/v139/shamsian21a/shamsian21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-shamsian21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Aviv family: Shamsian - given: Aviv family: Navon - given: Ethan family: Fetaya - given: Gal family: Chechik editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9489-9502 id: shamsian21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9489 lastpage: 9502 published: 2021-07-01 00:00:00 +0000 - title: 'On the Power of Localized Perceptron for Label-Optimal Learning of Halfspaces with Adversarial Noise' abstract: 'We study {\em online} active learning of homogeneous halfspaces in $\mathbb{R}^d$ with adversarial noise where the overall probability of a noisy label is constrained to be at most $\nu$. Our main contribution is a Perceptron-like online active learning algorithm that runs in polynomial time, and under the conditions that the marginal distribution is isotropic log-concave and $\nu = \Omega(\epsilon)$, where $\epsilon \in (0, 1)$ is the target error rate, our algorithm PAC learns the underlying halfspace with near-optimal label complexity of $\tilde{O}\big(d \cdot \polylog(\frac{1}{\epsilon})\big)$ and sample complexity of $\tilde{O}\big(\frac{d}{\epsilon} \big)$. 
Prior to this work, existing online algorithms designed to tolerate adversarial noise are subject to either label complexity polynomial in $\frac{1}{\epsilon}$, or suboptimal noise tolerance, or restrictive marginal distributions. With the additional prior knowledge that the underlying halfspace is $s$-sparse, we obtain attribute-efficient label complexity of $\tilde{O}\big( s \cdot \polylog(d, \frac{1}{\epsilon}) \big)$ and sample complexity of $\tilde{O}\big(\frac{s}{\epsilon} \cdot \polylog(d) \big)$. As an immediate corollary, we show that under the agnostic model where no assumption is made on the noise rate $\nu$, our active learner achieves an error rate of $O(OPT) + \epsilon$ with the same running time and label and sample complexity, where $OPT$ is the best possible error rate achievable by any homogeneous halfspace.' volume: 139 URL: https://proceedings.mlr.press/v139/shen21a.html PDF: http://proceedings.mlr.press/v139/shen21a/shen21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-shen21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jie family: Shen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9503-9514 id: shen21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9503 lastpage: 9514 published: 2021-07-01 00:00:00 +0000 - title: 'Sample-Optimal PAC Learning of Halfspaces with Malicious Noise' abstract: 'We study efficient PAC learning of homogeneous halfspaces in $\mathbb{R}^d$ in the presence of malicious noise of Valiant (1985). This is a challenging noise model, and only recently has a near-optimal noise tolerance bound been established under the mild condition that the unlabeled data distribution is isotropic log-concave. However, it remains unsettled how to obtain the optimal sample complexity simultaneously. In this work, we present a new analysis for the algorithm of Awasthi et al. (2017) and show that it essentially achieves the near-optimal sample complexity bound of $\tilde{O}(d)$, improving the best known result of $\tilde{O}(d^2)$. Our main ingredient is a novel incorporation of a matrix Chernoff-type inequality to bound the spectrum of an empirical covariance matrix for well-behaved distributions, in conjunction with a careful exploration of the localization schemes of Awasthi et al. (2017). We further extend the algorithm and analysis to the more general and stronger nasty noise model of Bshouty et al. (2002), showing that it is still possible to achieve near-optimal noise tolerance and sample complexity in polynomial time.' volume: 139 URL: https://proceedings.mlr.press/v139/shen21b.html PDF: http://proceedings.mlr.press/v139/shen21b/shen21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-shen21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jie family: Shen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9515-9524 id: shen21b issued: date-parts: - 2021 - 7 - 1 firstpage: 9515 lastpage: 9524 published: 2021-07-01 00:00:00 +0000 - title: 'Backdoor Scanning for Deep Neural Networks through K-Arm Optimization' abstract: 'Backdoor attacks pose a severe threat to deep learning systems. 
They inject hidden malicious behaviors into a model such that any input stamped with a special pattern can trigger them. Detecting backdoors is hence of pressing need. Many existing defense techniques use optimization to generate the smallest input pattern that forces the model to misclassify a set of benign inputs injected with the pattern to a target label. However, the complexity is quadratic in the number of class labels, so these techniques can hardly handle models with many classes. Inspired by the Multi-Arm Bandit in Reinforcement Learning, we propose a K-Arm optimization method for backdoor detection. By iteratively and stochastically selecting the most promising labels for optimization with the guidance of an objective function, we substantially reduce the complexity, allowing us to handle models with many classes. Moreover, by iteratively refining the selection of labels to optimize, it substantially mitigates the uncertainty in choosing the right labels, improving detection accuracy. At the time of submission, the evaluation of our method on over 4000 models in the IARPA TrojAI competition from round 1 to the latest round 4 achieves top performance on the leaderboard. Our technique also outperforms five state-of-the-art techniques in terms of accuracy and the scanning time needed. The code of our work is available at https://github.com/PurduePAML/K-ARM_Backdoor_Optimization' volume: 139 URL: https://proceedings.mlr.press/v139/shen21c.html PDF: http://proceedings.mlr.press/v139/shen21c/shen21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-shen21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Guangyu family: Shen - given: Yingqi family: Liu - given: Guanhong family: Tao - given: Shengwei family: An - given: Qiuling family: Xu - given: Siyuan family: Cheng - given: Shiqing family: Ma - given: Xiangyu family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9525-9536 id: shen21c issued: date-parts: - 2021 - 7 - 1 firstpage: 9525 lastpage: 9536 published: 2021-07-01 00:00:00 +0000 - title: 'State Relevance for Off-Policy Evaluation' abstract: 'Importance sampling-based estimators for off-policy evaluation (OPE) are valued for their simplicity, unbiasedness, and reliance on relatively few assumptions. However, the variance of these estimators is often high, especially when trajectories are of different lengths. In this work, we introduce Omitting-States-Irrelevant-to-Return Importance Sampling (OSIRIS), an estimator which reduces variance by strategically omitting likelihood ratios associated with certain states. We formalize the conditions under which OSIRIS is unbiased and has lower variance than ordinary importance sampling, and we demonstrate these properties empirically.' 
volume: 139 URL: https://proceedings.mlr.press/v139/shen21d.html PDF: http://proceedings.mlr.press/v139/shen21d/shen21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-shen21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Simon P family: Shen - given: Yecheng family: Ma - given: Omer family: Gottesman - given: Finale family: Doshi-Velez editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9537-9546 id: shen21d issued: date-parts: - 2021 - 7 - 1 firstpage: 9537 lastpage: 9546 published: 2021-07-01 00:00:00 +0000 - title: 'SparseBERT: Rethinking the Importance Analysis in Self-attention' abstract: 'Transformer-based models are popularly used in natural language processing (NLP). Their core component, self-attention, has aroused widespread interest. To understand the self-attention mechanism, a direct method is to visualize the attention map of a pre-trained model. Based on the patterns observed, a series of efficient Transformers with different sparse attention masks have been proposed. From a theoretical perspective, the universal approximability of Transformer-based models has also recently been proved. However, the above understanding and analysis of self-attention are based on a pre-trained model. To rethink the importance analysis in self-attention, we study the significance of different positions in the attention matrix during pre-training. A surprising result is that diagonal elements in the attention map are the least important compared with other attention positions. We provide a proof showing that these diagonal elements can indeed be removed without deteriorating model performance. Furthermore, we propose a Differentiable Attention Mask (DAM) algorithm, which further guides the design of SparseBERT. Extensive experiments verify our interesting findings and illustrate the effect of the proposed algorithm.' volume: 139 URL: https://proceedings.mlr.press/v139/shi21a.html PDF: http://proceedings.mlr.press/v139/shi21a/shi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-shi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Han family: Shi - given: Jiahui family: Gao - given: Xiaozhe family: Ren - given: Hang family: Xu - given: Xiaodan family: Liang - given: Zhenguo family: Li - given: James Tin-Yau family: Kwok editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9547-9557 id: shi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9547 lastpage: 9557 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Gradient Fields for Molecular Conformation Generation' abstract: 'We study a fundamental problem in computational chemistry known as molecular conformation generation, trying to predict stable 3D structures from 2D molecular graphs. Existing machine learning approaches usually first predict distances between atoms and then generate a 3D structure satisfying the distances, where noise in predicted distances may induce extra errors during 3D coordinate generation. Inspired by the traditional force field methods for molecular dynamics simulation, in this paper, we propose a novel approach called ConfGF by directly estimating the gradient fields of the log density of atomic coordinates. 
The estimated gradient fields allow directly generating stable conformations via Langevin dynamics. However, the problem is very challenging as the gradient fields are roto-translation equivariant. We notice that estimating the gradient fields of atomic coordinates can be translated to estimating the gradient fields of interatomic distances, and hence develop a novel algorithm based on recent score-based generative models to effectively estimate these gradients. Experimental results across multiple tasks show that ConfGF outperforms previous state-of-the-art baselines by a significant margin.' volume: 139 URL: https://proceedings.mlr.press/v139/shi21b.html PDF: http://proceedings.mlr.press/v139/shi21b/shi21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-shi21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chence family: Shi - given: Shitong family: Luo - given: Minkai family: Xu - given: Jian family: Tang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9558-9568 id: shi21b issued: date-parts: - 2021 - 7 - 1 firstpage: 9558 lastpage: 9568 published: 2021-07-01 00:00:00 +0000 - title: 'Segmenting Hybrid Trajectories using Latent ODEs' abstract: 'Smooth dynamics interrupted by discontinuities are known as hybrid systems and arise commonly in nature. Latent ODEs allow for powerful representation of irregularly sampled time series but are not designed to capture trajectories arising from hybrid systems. Here, we propose the Latent Segmented ODE (LatSegODE), which uses Latent ODEs to perform reconstruction and changepoint detection within hybrid trajectories featuring jump discontinuities and switching dynamical modes. Where it is possible to train a Latent ODE on the smooth dynamical flows between discontinuities, we apply the pruned exact linear time (PELT) algorithm to detect changepoints where latent dynamics restart, thereby maximizing the joint probability of a piece-wise continuous latent dynamical representation. We propose usage of the marginal likelihood as a score function for PELT, circumventing the need for model-complexity-based penalization. The LatSegODE outperforms baselines in reconstructive and segmentation tasks including synthetic data sets of sine waves, Lotka Volterra dynamics, and UCI Character Trajectories.' volume: 139 URL: https://proceedings.mlr.press/v139/shi21c.html PDF: http://proceedings.mlr.press/v139/shi21c/shi21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-shi21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ruian family: Shi - given: Quaid family: Morris editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9569-9579 id: shi21c issued: date-parts: - 2021 - 7 - 1 firstpage: 9569 lastpage: 9579 published: 2021-07-01 00:00:00 +0000 - title: 'Deeply-Debiased Off-Policy Interval Estimation' abstract: 'Off-policy evaluation learns a target policy’s value with a historical dataset generated by a different behavior policy. In addition to a point estimate, many applications would benefit significantly from having a confidence interval (CI) that quantifies the uncertainty of the point estimate. 
In this paper, we propose a novel procedure to construct an efficient, robust, and flexible CI on a target policy’s value. Our method is justified by theoretical results and numerical experiments. A Python implementation of the proposed procedure is available at https://github.com/RunzheStat/D2OPE.' volume: 139 URL: https://proceedings.mlr.press/v139/shi21d.html PDF: http://proceedings.mlr.press/v139/shi21d/shi21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-shi21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chengchun family: Shi - given: Runzhe family: Wan - given: Victor family: Chernozhukov - given: Rui family: Song editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9580-9591 id: shi21d issued: date-parts: - 2021 - 7 - 1 firstpage: 9580 lastpage: 9591 published: 2021-07-01 00:00:00 +0000 - title: 'GANMEX: One-vs-One Attributions using GAN-based Model Explainability' abstract: 'Attribution methods have been shown to be promising approaches for identifying key features that led to learned model predictions. While most existing attribution methods rely on a baseline input for performing feature perturbations, limited research has been conducted to address the baseline selection issues. Poor choices of baselines limit the ability of one-vs-one explanations for multi-class classifiers, which means the attribution methods were not able to explain why an input belongs to its original class but not the other specified target class. Achieving one-vs-one explanation is crucial when certain classes are more similar than others, e.g. two bird types among multiple animals, by focusing on key differentiating features rather than shared features across classes. In this paper, we present GANMEX, a novel approach applying Generative Adversarial Networks (GAN) by incorporating the to-be-explained classifier as part of the adversarial networks. Our approach effectively selects the baseline as the closest realistic sample belonging to the target class, which allows attribution methods to provide true one-vs-one explanations. We showed that GANMEX baselines improved the saliency maps and led to stronger performance on multiple evaluation metrics over the existing baselines. Existing attribution results are known for being insensitive to model randomization, and we demonstrated that GANMEX baselines led to better outcomes under cascading randomization of the model.' volume: 139 URL: https://proceedings.mlr.press/v139/shih21a.html PDF: http://proceedings.mlr.press/v139/shih21a/shih21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-shih21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sheng-Min family: Shih - given: Pin-Ju family: Tien - given: Zohar family: Karnin editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9592-9602 id: shih21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9592 lastpage: 9602 published: 2021-07-01 00:00:00 +0000 - title: 'Large-Scale Meta-Learning with Continual Trajectory Shifting' abstract: 'Meta-learning of shared initialization parameters has been shown to be highly effective in solving few-shot learning tasks. 
However, extending the framework to many-shot scenarios, which may further enhance its practicality, has been relatively overlooked due to the technical difficulties of meta-learning over long chains of inner-gradient steps. In this paper, we first show that allowing the meta-learners to take a larger number of inner gradient steps better captures the structure of heterogeneous and large-scale task distributions, thus resulting in better initialization points. Further, in order to increase the frequency of meta-updates even with the excessively long inner-optimization trajectories, we propose to estimate the required shift of the task-specific parameters with respect to the change of the initialization parameters. By doing so, we can arbitrarily increase the frequency of meta-updates and thus greatly improve the meta-level convergence as well as the quality of the learned initializations. We validate our method on a heterogeneous set of large-scale tasks, and show that the algorithm largely outperforms previous first-order meta-learning methods, as well as multi-task learning and fine-tuning baselines, in terms of both generalization performance and convergence.' volume: 139 URL: https://proceedings.mlr.press/v139/shin21a.html PDF: http://proceedings.mlr.press/v139/shin21a/shin21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-shin21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jaewoong family: Shin - given: Hae Beom family: Lee - given: Boqing family: Gong - given: Sung Ju family: Hwang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9603-9613 id: shin21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9603 lastpage: 9613 published: 2021-07-01 00:00:00 +0000 - title: 'AGENT: A Benchmark for Core Psychological Reasoning' abstract: 'For machine agents to successfully interact with humans in real-world settings, they will need to develop an understanding of human mental life. Intuitive psychology, the ability to reason about hidden mental variables that drive observable actions, comes naturally to people: even pre-verbal infants can tell agents from objects, expecting agents to act efficiently to achieve goals given constraints. Despite recent interest in machine agents that reason about other agents, it is not clear if such agents learn or hold the core psychology principles that drive human reasoning. Inspired by cognitive development studies on intuitive psychology, we present a benchmark consisting of a large dataset of procedurally generated 3D animations, AGENT (Action, Goal, Efficiency, coNstraint, uTility), structured around four scenarios (goal preferences, action efficiency, unobserved constraints, and cost-reward trade-offs) that probe key concepts of core intuitive psychology. We validate AGENT with human ratings, propose an evaluation protocol emphasizing generalization, and compare two strong baselines built on Bayesian inverse planning and a Theory of Mind neural network. Our results suggest that to pass the designed tests of core intuitive psychology at human levels, a model must acquire or have built-in representations of how agents plan, combining utility computations and core knowledge of objects and physics.' 
volume: 139 URL: https://proceedings.mlr.press/v139/shu21a.html PDF: http://proceedings.mlr.press/v139/shu21a/shu21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-shu21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tianmin family: Shu - given: Abhishek family: Bhandwaldar - given: Chuang family: Gan - given: Kevin family: Smith - given: Shari family: Liu - given: Dan family: Gutfreund - given: Elizabeth family: Spelke - given: Joshua family: Tenenbaum - given: Tomer family: Ullman editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9614-9625 id: shu21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9614 lastpage: 9625 published: 2021-07-01 00:00:00 +0000 - title: 'Zoo-Tuning: Adaptive Transfer from A Zoo of Models' abstract: 'With the development of deep networks on various large-scale datasets, a large zoo of pretrained models is available. When transferring from a model zoo, applying classic single-model-based transfer learning methods to each source model suffers from high computational cost and cannot fully utilize the rich knowledge in the zoo. We propose \emph{Zoo-Tuning} to address these challenges, which learns to adaptively transfer the parameters of pretrained models to the target task. With the learnable channel alignment layer and adaptive aggregation layer, Zoo-Tuning \emph{adaptively aggregates channel aligned pretrained parameters to derive the target model}, which simultaneously promotes knowledge transfer and adapts source models to downstream tasks. The adaptive aggregation substantially reduces the computation cost at both training and inference. We further propose lite Zoo-Tuning with the temporal ensemble of batch average gating values to reduce the storage cost at the inference time. We evaluate our approach on a variety of tasks, including reinforcement learning, image classification, and facial landmark detection. Experimental results demonstrate that the proposed adaptive transfer learning approach can more effectively and efficiently transfer knowledge from a zoo of models.' volume: 139 URL: https://proceedings.mlr.press/v139/shu21b.html PDF: http://proceedings.mlr.press/v139/shu21b/shu21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-shu21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yang family: Shu - given: Zhi family: Kou - given: Zhangjie family: Cao - given: Jianmin family: Wang - given: Mingsheng family: Long editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9626-9637 id: shu21b issued: date-parts: - 2021 - 7 - 1 firstpage: 9626 lastpage: 9637 published: 2021-07-01 00:00:00 +0000 - title: 'Aggregating From Multiple Target-Shifted Sources' abstract: 'Multi-source domain adaptation aims at leveraging the knowledge from multiple tasks for predicting a related target domain. Hence, a crucial aspect is to properly combine different sources based on their relations. In this paper, we analyze the problem of aggregating source domains with different label distributions, where most recent source selection approaches fail. 
Our proposed algorithm differs from previous approaches in two key ways: the model aggregates multiple sources mainly through the similarity of the semantic conditional distribution rather than the marginal distribution; and it provides a unified framework to select relevant sources for three popular scenarios, i.e., domain adaptation with limited labels on the target domain, unsupervised domain adaptation, and label-partial unsupervised domain adaptation. We evaluate the proposed method through extensive experiments. The empirical results show that it significantly outperforms the baselines.' volume: 139 URL: https://proceedings.mlr.press/v139/shui21a.html PDF: http://proceedings.mlr.press/v139/shui21a/shui21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-shui21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Changjian family: Shui - given: Zijian family: Li - given: Jiaqi family: Li - given: Christian family: Gagné - given: Charles X family: Ling - given: Boyu family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9638-9648 id: shui21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9638 lastpage: 9648 published: 2021-07-01 00:00:00 +0000 - title: 'Testing Group Fairness via Optimal Transport Projections' abstract: 'We have developed a statistical testing framework to detect if a given machine learning classifier fails to satisfy a wide range of group fairness notions. Our test is a flexible, interpretable, and statistically rigorous tool for auditing whether exhibited biases are intrinsic to the algorithm or simply due to the randomness in the data. The statistical challenges, which may arise from multiple impact criteria that define group fairness and which are discontinuous on model parameters, are conveniently tackled by projecting the empirical measure to the set of group-fair probability models using optimal transport. This statistic is efficiently computed using linear programming, and its asymptotic distribution is explicitly obtained. The proposed framework can also be used to test for composite fairness hypotheses and fairness with multiple sensitive attributes. The optimal transport testing formulation improves interpretability by characterizing the minimal covariate perturbations that eliminate the bias observed in the audit.' volume: 139 URL: https://proceedings.mlr.press/v139/si21a.html PDF: http://proceedings.mlr.press/v139/si21a/si21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-si21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Nian family: Si - given: Karthyek family: Murthy - given: Jose family: Blanchet - given: Viet Anh family: Nguyen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9649-9659 id: si21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9649 lastpage: 9659 published: 2021-07-01 00:00:00 +0000 - title: 'On Characterizing GAN Convergence Through Proximal Duality Gap' abstract: 'Despite the accomplishments of Generative Adversarial Networks (GANs) in modeling data distributions, training them remains a challenging task. A contributing factor to this difficulty is the non-intuitive nature of the GAN loss curves, which necessitates a subjective evaluation of the generated output to infer training progress.
Recently, motivated by game theory, Duality Gap has been proposed as a domain agnostic measure to monitor GAN training. However, it is restricted to the setting when the GAN converges to a Nash equilibrium. But GANs need not always converge to a Nash equilibrium to model the data distribution. In this work, we extend the notion of duality gap to the proximal duality gap, which is applicable to the general context of training GANs where Nash equilibria may not exist. We show theoretically that the proximal duality gap can monitor the convergence of GANs to a broader spectrum of equilibria that subsumes Nash equilibria. We also theoretically establish the relationship between the proximal duality gap and the divergence between the real and generated data distributions for different GAN formulations. Our results provide new insights into the nature of GAN convergence. Finally, we validate experimentally the usefulness of proximal duality gap for monitoring and influencing GAN training.' volume: 139 URL: https://proceedings.mlr.press/v139/sidheekh21a.html PDF: http://proceedings.mlr.press/v139/sidheekh21a/sidheekh21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-sidheekh21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sahil family: Sidheekh - given: Aroof family: Aimen - given: Narayanan C family: Krishnan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9660-9670 id: sidheekh21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9660 lastpage: 9670 published: 2021-07-01 00:00:00 +0000 - title: 'A Precise Performance Analysis of Support Vector Regression' abstract: 'In this paper, we study the hard and soft support vector regression techniques applied to a set of $n$ linear measurements of the form $y_i=\boldsymbol{\beta}_\star^{T}{\bf x}_i +n_i$ where $\boldsymbol{\beta}_\star$ is an unknown vector, $\left\{{\bf x}_i\right\}_{i=1}^n$ are the feature vectors and $\left\{{n}_i\right\}_{i=1}^n$ model the noise. Particularly, under some plausible assumptions on the statistical distribution of the data, we characterize the feasibility condition for the hard support vector regression in the regime of high dimensions and, when feasible, derive an asymptotic approximation for its risk. Similarly, we study the test risk for the soft support vector regression as a function of its parameters. Our results are then used to optimally tune the parameters involved in the design of hard and soft support vector regression algorithms. Based on our analysis, we illustrate that adding more samples may be harmful to the test performance of support vector regression, while it is always beneficial when the parameters are optimally selected. Such a result is reminiscent of a similar phenomenon observed in modern learning architectures, according to which optimally tuned architectures present a decreasing test performance curve with respect to the number of samples.'
volume: 139 URL: https://proceedings.mlr.press/v139/sifaou21a.html PDF: http://proceedings.mlr.press/v139/sifaou21a/sifaou21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-sifaou21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Houssem family: Sifaou - given: Abla family: Kammoun - given: Mohamed-Slim family: Alouini editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9671-9680 id: sifaou21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9671 lastpage: 9680 published: 2021-07-01 00:00:00 +0000 - title: 'Directed Graph Embeddings in Pseudo-Riemannian Manifolds' abstract: 'The inductive biases of graph representation learning algorithms are often encoded in the background geometry of their embedding space. In this paper, we show that general directed graphs can be effectively represented by an embedding model that combines three components: a pseudo-Riemannian metric structure, a non-trivial global topology, and a unique likelihood function that explicitly incorporates a preferred direction in embedding space. We demonstrate the representational capabilities of this method by applying it to the task of link prediction on a series of synthetic and real directed graphs from natural language applications and biology. In particular, we show that low-dimensional cylindrical Minkowski and anti-de Sitter spacetimes can produce equal or better graph representations than curved Riemannian manifolds of higher dimensions.' volume: 139 URL: https://proceedings.mlr.press/v139/sim21a.html PDF: http://proceedings.mlr.press/v139/sim21a/sim21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-sim21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Aaron family: Sim - given: Maciej L family: Wiatrak - given: Angus family: Brayne - given: Paidi family: Creed - given: Saee family: Paliwal editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9681-9690 id: sim21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9681 lastpage: 9690 published: 2021-07-01 00:00:00 +0000 - title: 'Collaborative Bayesian Optimization with Fair Regret' abstract: 'Bayesian optimization (BO) is a popular tool for optimizing complex and costly-to-evaluate black-box objective functions. To further reduce the number of function evaluations, any party performing BO may be interested to collaborate with others to optimize the same objective function concurrently. To do this, existing BO algorithms have considered optimizing a batch of input queries in parallel and provided theoretical bounds on their cumulative regret reflecting inefficiency. However, when the objective function values are correlated with real-world rewards (e.g., money), parties may be hesitant to collaborate if they risk incurring larger cumulative regret (i.e., smaller real-world reward) than others. This paper shows that fairness and efficiency are both necessary for the collaborative BO setting. Inspired by social welfare concepts from economics, we propose a new notion of regret capturing these properties and a collaborative BO algorithm whose convergence rate can be theoretically guaranteed by bounding the new regret, both of which share an adjustable parameter for trading off between fairness vs. efficiency. 
We empirically demonstrate the benefits (e.g., increased fairness) of our algorithm using synthetic and real-world datasets.' volume: 139 URL: https://proceedings.mlr.press/v139/sim21b.html PDF: http://proceedings.mlr.press/v139/sim21b/sim21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-sim21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Rachael Hwee Ling family: Sim - given: Yehong family: Zhang - given: Bryan Kian Hsiang family: Low - given: Patrick family: Jaillet editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9691-9701 id: sim21b issued: date-parts: - 2021 - 7 - 1 firstpage: 9691 lastpage: 9701 published: 2021-07-01 00:00:00 +0000 - title: 'Dynamic Planning and Learning under Recovering Rewards' abstract: 'Motivated by emerging applications such as live-streaming e-commerce, promotions and recommendations, we introduce a general class of multi-armed bandit problems that have the following two features: (i) the decision maker can pull and collect rewards from at most $K$ out of $N$ different arms in each time period; (ii) the expected reward of an arm immediately drops after it is pulled, and then non-parametrically recovers as the idle time increases. With the objective of maximizing expected cumulative rewards over $T$ time periods, we propose, construct and prove performance guarantees for a class of “Purely Periodic Policies”. For the offline problem when all model parameters are known, our proposed policy obtains an approximation ratio that is at the order of $1-\mathcal O(1/\sqrt{K})$, which is asymptotically optimal when $K$ grows to infinity. For the online problem when the model parameters are unknown and need to be learned, we design an Upper Confidence Bound (UCB) based policy that approximately has $\widetilde{\mathcal O}(N\sqrt{T})$ regret against the offline benchmark. Our framework and policy design may have the potential to be adapted into other offline planning and online learning applications with non-stationary and recovering rewards.' volume: 139 URL: https://proceedings.mlr.press/v139/simchi-levi21a.html PDF: http://proceedings.mlr.press/v139/simchi-levi21a/simchi-levi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-simchi-levi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: David family: Simchi-Levi - given: Zeyu family: Zheng - given: Feng family: Zhu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9702-9711 id: simchi-levi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9702 lastpage: 9711 published: 2021-07-01 00:00:00 +0000 - title: 'PopSkipJump: Decision-Based Attack for Probabilistic Classifiers' abstract: 'Most current classifiers are vulnerable to adversarial examples, small input perturbations that change the classification output. Many existing attack algorithms cover various settings, from white-box to black-box classifiers, but usually assume that the answers are deterministic and often fail when they are not. We therefore propose a new adversarial decision-based attack specifically designed for classifiers with probabilistic outputs. It is based on the HopSkipJump attack by Chen et al. 
(2019), a strong and query efficient decision-based attack originally designed for deterministic classifiers. Our P(robabilisticH)opSkipJump attack adapts its amount of queries to maintain HopSkipJump’s original output quality across various noise levels, while converging to its query efficiency as the noise level decreases. We test our attack on various noise models, including state-of-the-art off-the-shelf randomized defenses, and show that they offer almost no extra robustness to decision-based attacks. Code is available at https://github.com/cjsg/PopSkipJump.' volume: 139 URL: https://proceedings.mlr.press/v139/simon-gabriel21a.html PDF: http://proceedings.mlr.press/v139/simon-gabriel21a/simon-gabriel21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-simon-gabriel21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Carl-Johann family: Simon-Gabriel - given: Noman Ahmed family: Sheikh - given: Andreas family: Krause editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9712-9721 id: simon-gabriel21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9712 lastpage: 9721 published: 2021-07-01 00:00:00 +0000 - title: 'Geometry of the Loss Landscape in Overparameterized Neural Networks: Symmetries and Invariances' abstract: 'We study how permutation symmetries in overparameterized multi-layer neural networks generate ‘symmetry-induced’ critical points. Assuming a network with $ L $ layers of minimal widths $ r_1^*, \ldots, r_{L-1}^* $ reaches a zero-loss minimum at $ r_1^*! \cdots r_{L-1}^*! $ isolated points that are permutations of one another, we show that adding one extra neuron to each layer is sufficient to connect all these previously discrete minima into a single manifold. For a two-layer overparameterized network of width $ r^*+ h =: m $ we explicitly describe the manifold of global minima: it consists of $ T(r^*, m) $ affine subspaces of dimension at least $ h $ that are connected to one another. For a network of width $m$, we identify the number $G(r,m)$ of affine subspaces containing only symmetry-induced critical points that are related to the critical points of a smaller network of width $r<m$. [...] ($K>1$). A \textit{simplex-net} is introduced to produce architecture-customized code for each path. As a result, all paths can adaptively learn how to share weights in the $K$-shot supernets and acquire corresponding weights for better evaluation. $K$-shot supernets and simplex-net can be iteratively trained, and we further extend the search to the channel dimension. Extensive experiments on benchmark datasets validate that K-shot NAS significantly improves the evaluation accuracy of paths and thus brings in impressive performance improvements.'
volume: 139 URL: https://proceedings.mlr.press/v139/su21a.html PDF: http://proceedings.mlr.press/v139/su21a/su21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-su21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xiu family: Su - given: Shan family: You - given: Mingkai family: Zheng - given: Fei family: Wang - given: Chen family: Qian - given: Changshui family: Zhang - given: Chang family: Xu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9880-9890 id: su21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9880 lastpage: 9890 published: 2021-07-01 00:00:00 +0000 - title: 'More Powerful and General Selective Inference for Stepwise Feature Selection using Homotopy Method' abstract: 'Conditional selective inference (SI) has been actively studied as a new statistical inference framework for data-driven hypotheses. The basic idea of conditional SI is to make inferences conditional on the selection event characterized by a set of linear and/or quadratic inequalities. Conditional SI has been mainly studied in the context of feature selection such as stepwise feature selection (SFS). The main limitation of the existing conditional SI methods is the loss of power due to over-conditioning, which is required for computational tractability. In this study, we develop a more powerful and general conditional SI method for SFS using the homotopy method which enables us to overcome this limitation. The homotopy-based SI is especially effective for more complicated feature selection algorithms. As an example, we develop a conditional SI method for forward-backward SFS with AIC-based stopping criteria and show that it is not adversely affected by the increased complexity of the algorithm. We conduct several experiments to demonstrate the effectiveness and efficiency of the proposed method.' volume: 139 URL: https://proceedings.mlr.press/v139/sugiyama21a.html PDF: http://proceedings.mlr.press/v139/sugiyama21a/sugiyama21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-sugiyama21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kazuya family: Sugiyama - given: Vo Nguyen Le family: Duy - given: Ichiro family: Takeuchi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9891-9901 id: sugiyama21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9891 lastpage: 9901 published: 2021-07-01 00:00:00 +0000 - title: 'Not All Memories are Created Equal: Learning to Forget by Expiring' abstract: 'Attention mechanisms have shown promising results in sequence modeling tasks that require long-term memory. Recent work investigated mechanisms to reduce the computational cost of preserving and storing memories. However, not all content in the past is equally important to remember. We propose Expire-Span, a method that learns to retain the most important information and expire the irrelevant information. This forgetting of memories enables Transformers to scale to attend over tens of thousands of previous timesteps efficiently, as not all states from previous timesteps are preserved. 
We demonstrate that Expire-Span can help models identify and retain critical information and show it can achieve strong performance on reinforcement learning tasks specifically designed to challenge this functionality. Next, we show that Expire-Span can scale to memories that are tens of thousands in size, setting a new state of the art on incredibly long context tasks such as character-level language modeling and a frame-by-frame moving objects task. Finally, we analyze the efficiency of Expire-Span compared to existing approaches and demonstrate that it trains faster and uses less memory.' volume: 139 URL: https://proceedings.mlr.press/v139/sukhbaatar21a.html PDF: http://proceedings.mlr.press/v139/sukhbaatar21a/sukhbaatar21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-sukhbaatar21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sainbayar family: Sukhbaatar - given: Da family: Ju - given: Spencer family: Poff - given: Stephen family: Roller - given: Arthur family: Szlam - given: Jason family: Weston - given: Angela family: Fan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9902-9912 id: sukhbaatar21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9902 lastpage: 9912 published: 2021-07-01 00:00:00 +0000 - title: 'Nondeterminism and Instability in Neural Network Optimization' abstract: 'Nondeterminism in neural network optimization produces uncertainty in performance, making small improvements difficult to discern from run-to-run variability. While uncertainty can be reduced by training multiple model copies, doing so is time-consuming, costly, and harms reproducibility. In this work, we establish an experimental protocol for understanding the effect of optimization nondeterminism on model diversity, allowing us to isolate the effects of a variety of sources of nondeterminism. Surprisingly, we find that all sources of nondeterminism have similar effects on measures of model diversity. To explain this intriguing fact, we identify the instability of model training, taken as an end-to-end procedure, as the key determinant. We show that even one-bit changes in initial parameters result in models converging to vastly different values. Last, we propose two approaches for reducing the effects of instability on run-to-run variability.' volume: 139 URL: https://proceedings.mlr.press/v139/summers21a.html PDF: http://proceedings.mlr.press/v139/summers21a/summers21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-summers21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Cecilia family: Summers - given: Michael J. family: Dinneen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9913-9922 id: summers21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9913 lastpage: 9922 published: 2021-07-01 00:00:00 +0000 - title: 'AutoSampling: Search for Effective Data Sampling Schedules' abstract: 'Data sampling plays a pivotal role in training deep learning models. However, an effective sampling schedule is difficult to learn due to its inherently high dimensionality as a hyper-parameter.
In this paper, we propose an AutoSampling method to automatically learn sampling schedules for model training, which consists of the multi-exploitation step aiming for optimal local sampling schedules and the exploration step for the ideal sampling distribution. More specifically, we carry out the sampling schedule search with a shortened exploitation cycle to provide enough supervision. In addition, we periodically estimate the sampling distribution from the learned sampling schedules and perturb it to search in the distribution space. The combination of the two searches allows us to learn a robust sampling schedule. We apply our AutoSampling method to a variety of image classification tasks, illustrating the effectiveness of the proposed method.' volume: 139 URL: https://proceedings.mlr.press/v139/sun21a.html PDF: http://proceedings.mlr.press/v139/sun21a/sun21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-sun21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ming family: Sun - given: Haoxuan family: Dou - given: Baopu family: Li - given: Junjie family: Yan - given: Wanli family: Ouyang - given: Lei family: Cui editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9923-9933 id: sun21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9923 lastpage: 9933 published: 2021-07-01 00:00:00 +0000 - title: 'What Makes for End-to-End Object Detection?' abstract: 'Object detection has recently achieved a breakthrough by removing the last non-differentiable component in the pipeline, Non-Maximum Suppression (NMS), and building up an end-to-end system. However, what makes for its one-to-one prediction has not been well understood. In this paper, we first point out that one-to-one positive sample assignment is the key factor, while one-to-many assignment in previous detectors causes redundant predictions at inference. Second, we surprisingly find that even when trained with one-to-one assignment, previous detectors still produce redundant predictions. We identify the classification cost in the matching cost as the main ingredient: (1) previous detectors only consider the location cost; (2) by additionally introducing the classification cost, previous detectors immediately produce one-to-one predictions during inference. We introduce the concept of score gap to explore the effect of matching cost. The classification cost enlarges the score gap by choosing positive samples as those with the highest score in each training iteration and reducing the noisy positive samples brought by the location cost alone. Finally, we demonstrate the advantages of end-to-end object detection on crowded scenes.'
volume: 139 URL: https://proceedings.mlr.press/v139/sun21b.html PDF: http://proceedings.mlr.press/v139/sun21b/sun21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-sun21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Peize family: Sun - given: Yi family: Jiang - given: Enze family: Xie - given: Wenqi family: Shao - given: Zehuan family: Yuan - given: Changhu family: Wang - given: Ping family: Luo editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9934-9944 id: sun21b issued: date-parts: - 2021 - 7 - 1 firstpage: 9934 lastpage: 9944 published: 2021-07-01 00:00:00 +0000 - title: 'DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning' abstract: 'In fully cooperative multi-agent reinforcement learning (MARL) settings, the environments are highly stochastic due to the partial observability of each agent and the continuously changing policies of the other agents. To address the above issues, we integrate distributional RL and value function factorization methods by proposing a Distributional Value Function Factorization (DFAC) framework to generalize expected value function factorization methods to their distributional variants. DFAC extends the individual utility functions from deterministic variables to random variables, and models the quantile function of the total return as a quantile mixture. To validate DFAC, we demonstrate DFAC’s ability to factorize a simple two-step matrix game with stochastic rewards and perform experiments on all Super Hard tasks of StarCraft Multi-Agent Challenge, showing that DFAC is able to outperform expected value function factorization baselines.' volume: 139 URL: https://proceedings.mlr.press/v139/sun21c.html PDF: http://proceedings.mlr.press/v139/sun21c/sun21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-sun21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Wei-Fang family: Sun - given: Cheng-Kuang family: Lee - given: Chun-Yi family: Lee editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9945-9954 id: sun21c issued: date-parts: - 2021 - 7 - 1 firstpage: 9945 lastpage: 9954 published: 2021-07-01 00:00:00 +0000 - title: 'Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition' abstract: 'We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability. We propose the harmonic kernel decomposition (HKD), which uses Fourier series to decompose a kernel as a sum of orthogonal kernels. Our variational approximation exploits this orthogonality to enable a large number of inducing points at a low computational cost. We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections, and it significantly outperforms standard variational methods in scalability and accuracy. Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.' 
volume: 139 URL: https://proceedings.mlr.press/v139/sun21d.html PDF: http://proceedings.mlr.press/v139/sun21d/sun21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-sun21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shengyang family: Sun - given: Jiaxin family: Shi - given: Andrew Gordon family: Wilson - given: Roger B family: Grosse editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9955-9965 id: sun21d issued: date-parts: - 2021 - 7 - 1 firstpage: 9955 lastpage: 9965 published: 2021-07-01 00:00:00 +0000 - title: 'Reasoning Over Virtual Knowledge Bases With Open Predicate Relations' abstract: 'We present the Open Predicate Query Language (OPQL), a method for constructing a virtual KB (VKB) trained entirely from text. Large Knowledge Bases (KBs) are indispensable for a wide range of industry applications such as question answering and recommendation. Typically, KBs encode world knowledge in a structured, readily accessible form derived from laborious human annotation efforts. Unfortunately, while they are of extremely high precision, KBs are inevitably highly incomplete and automated methods for enriching them are far too inaccurate. Instead, OPQL constructs a VKB by encoding and indexing a set of relation mentions in a way that naturally enables reasoning and can be trained without any structured supervision. We demonstrate that OPQL outperforms prior VKB methods on two different KB reasoning tasks and, additionally, can be used as an external memory integrated into a language model (OPQL-LM), leading to improvements on two open-domain question answering tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/sun21e.html PDF: http://proceedings.mlr.press/v139/sun21e/sun21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-sun21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Haitian family: Sun - given: Patrick family: Verga - given: Bhuwan family: Dhingra - given: Ruslan family: Salakhutdinov - given: William W family: Cohen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9966-9977 id: sun21e issued: date-parts: - 2021 - 7 - 1 firstpage: 9966 lastpage: 9977 published: 2021-07-01 00:00:00 +0000 - title: 'PAC-Learning for Strategic Classification' abstract: 'The study of strategic or adversarial manipulation of testing data to fool a classifier has attracted much recent attention. Most previous works have focused on two extreme situations where any testing data point either is completely adversarial or always equally prefers the positive label. In this paper, we generalize both of these through a unified framework for strategic classification and introduce the notion of strategic VC-dimension (SVC) to capture the PAC-learnability in our general strategic setup. SVC provably generalizes the recent concept of adversarial VC-dimension (AVC) introduced by Cullina et al. (2018). We instantiate our framework for the fundamental strategic linear classification problem. We fully characterize: (1) the statistical learnability of linear classifiers by pinning down its SVC; (2) its computational tractability by pinning down the complexity of the empirical risk minimization problem.
Interestingly, the SVC of linear classifiers is always upper bounded by its standard VC-dimension. This characterization also strictly generalizes the AVC bound for linear classifiers in (Cullina et al., 2018).' volume: 139 URL: https://proceedings.mlr.press/v139/sundaram21a.html PDF: http://proceedings.mlr.press/v139/sundaram21a/sundaram21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-sundaram21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ravi family: Sundaram - given: Anil family: Vullikanti - given: Haifeng family: Xu - given: Fan family: Yao editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9978-9988 id: sundaram21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9978 lastpage: 9988 published: 2021-07-01 00:00:00 +0000 - title: 'Reinforcement Learning for Cost-Aware Markov Decision Processes' abstract: 'Ratio maximization has applications in areas as diverse as finance, reward shaping for reinforcement learning (RL), and the development of safe artificial intelligence, yet there has been very little exploration of RL algorithms for ratio maximization. This paper addresses this deficiency by introducing two new, model-free RL algorithms for solving cost-aware Markov decision processes, where the goal is to maximize the ratio of long-run average reward to long-run average cost. The first algorithm is a two-timescale scheme based on relative value iteration (RVI) Q-learning and the second is an actor-critic scheme. The paper proves almost sure convergence of the former to the globally optimal solution in the tabular case and almost sure convergence of the latter under linear function approximation for the critic. Unlike previous methods, the two algorithms provably converge for general reward and cost functions under suitable conditions. The paper also provides empirical results demonstrating promising performance and lending strong support to the theoretical results.' volume: 139 URL: https://proceedings.mlr.press/v139/suttle21a.html PDF: http://proceedings.mlr.press/v139/suttle21a/suttle21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-suttle21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Wesley family: Suttle - given: Kaiqing family: Zhang - given: Zhuoran family: Yang - given: Ji family: Liu - given: David family: Kraemer editor: - given: Marina family: Meila - given: Tong family: Zhang page: 9989-9999 id: suttle21a issued: date-parts: - 2021 - 7 - 1 firstpage: 9989 lastpage: 9999 published: 2021-07-01 00:00:00 +0000 - title: 'Model-Targeted Poisoning Attacks with Provable Convergence' abstract: 'In a poisoning attack, an adversary who controls a small fraction of the training data attempts to select that data, so a model is induced that misbehaves in a particular way. We consider poisoning attacks against convex machine learning models and propose an efficient poisoning attack designed to induce a model specified by the adversary. Unlike previous model-targeted poisoning attacks, our attack comes with provable convergence to any attainable target model. We also provide a lower bound on the minimum number of poisoning points needed to achieve a given target model. 
Our method uses online convex optimization and finds poisoning points incrementally. This provides more flexibility than previous attacks which require an a priori assumption about the number of poisoning points. Our attack is the first model-targeted poisoning attack that provides provable convergence for convex models. In our experiments, it either exceeds or matches state-of-the-art attacks in terms of attack success rate and distance to the target model.' volume: 139 URL: https://proceedings.mlr.press/v139/suya21a.html PDF: http://proceedings.mlr.press/v139/suya21a/suya21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-suya21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Fnu family: Suya - given: Saeed family: Mahloujifar - given: Anshuman family: Suri - given: David family: Evans - given: Yuan family: Tian editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10000-10010 id: suya21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10000 lastpage: 10010 published: 2021-07-01 00:00:00 +0000 - title: 'Generalization Error Bound for Hyperbolic Ordinal Embedding' abstract: 'Hyperbolic ordinal embedding (HOE) represents entities as points in hyperbolic space so that they agree as well as possible with given constraints in the form of entity $i$ is more similar to entity $j$ than to entity $k$. It has been experimentally shown that HOE can obtain representations of hierarchical data such as a knowledge base and a citation network effectively, owing to hyperbolic space’s exponential growth property. However, its theoretical analysis has been limited to ideal noiseless settings, and its generalization error in compensation for hyperbolic space’s exponential representation ability has not been guaranteed. The difficulty is that existing generalization error bound derivations for ordinal embedding based on the Gramian matrix are not applicable in HOE, since hyperbolic space is not inner-product space. In this paper, through our novel characterization of HOE with decomposed Lorentz Gramian matrices, we provide a generalization error bound of HOE for the first time, which is at most exponential with respect to the embedding space’s radius. Our comparison between the bounds of HOE and Euclidean ordinal embedding shows that HOE’s generalization error comes at a reasonable cost considering its exponential representation ability.' volume: 139 URL: https://proceedings.mlr.press/v139/suzuki21a.html PDF: http://proceedings.mlr.press/v139/suzuki21a/suzuki21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-suzuki21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Atsushi family: Suzuki - given: Atsushi family: Nitanda - given: Jing family: Wang - given: Linchuan family: Xu - given: Kenji family: Yamanishi - given: Marc family: Cavazza editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10011-10021 id: suzuki21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10011 lastpage: 10021 published: 2021-07-01 00:00:00 +0000 - title: 'Of Moments and Matching: A Game-Theoretic Framework for Closing the Imitation Gap' abstract: 'We provide a unifying view of a large family of previous imitation learning algorithms through the lens of moment matching. 
At its core, our classification scheme is based on whether the learner attempts to match (1) reward or (2) action-value moments of the expert’s behavior, with each option leading to differing algorithmic approaches. By considering adversarially chosen divergences between learner and expert behavior, we are able to derive bounds on policy performance that apply for all algorithms in each of these classes, the first to our knowledge. We also introduce the notion of moment recoverability, implicit in many previous analyses of imitation learning, which allows us to cleanly delineate how well each algorithmic family is able to mitigate compounding errors. We derive three novel algorithm templates (AdVIL, AdRIL, and DAeQuIL) with strong guarantees, simple implementation, and competitive empirical performance.' volume: 139 URL: https://proceedings.mlr.press/v139/swamy21a.html PDF: http://proceedings.mlr.press/v139/swamy21a/swamy21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-swamy21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Gokul family: Swamy - given: Sanjiban family: Choudhury - given: J. Andrew family: Bagnell - given: Steven family: Wu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10022-10032 id: swamy21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10022 lastpage: 10032 published: 2021-07-01 00:00:00 +0000 - title: 'Parallel tempering on optimized paths' abstract: 'Parallel tempering (PT) is a class of Markov chain Monte Carlo algorithms that constructs a path of distributions annealing between a tractable reference and an intractable target, and then interchanges states along the path to improve mixing in the target. The performance of PT depends on how quickly a sample from the reference distribution makes its way to the target, which in turn depends on the particular path of annealing distributions. However, past work on PT has used only simple paths constructed from convex combinations of the reference and target log-densities. This paper begins by demonstrating that this path performs poorly in the setting where the reference and target are nearly mutually singular. To address this issue, we expand the framework of PT to general families of paths, formulate the choice of path as an optimization problem that admits tractable gradient estimates, and propose a flexible new family of spline interpolation paths for use in practice. Theoretical and empirical results both demonstrate that our proposed methodology breaks previously-established upper performance limits for traditional paths.' 
volume: 139 URL: https://proceedings.mlr.press/v139/syed21a.html PDF: http://proceedings.mlr.press/v139/syed21a/syed21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-syed21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Saifuddin family: Syed - given: Vittorio family: Romaniello - given: Trevor family: Campbell - given: Alexandre family: Bouchard-Cote editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10033-10042 id: syed21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10033 lastpage: 10042 published: 2021-07-01 00:00:00 +0000 - title: 'Robust Representation Learning via Perceptual Similarity Metrics' abstract: 'A fundamental challenge in artificial intelligence is learning useful representations of data that yield good performance on a downstream classification task, without overfitting to spurious input features. Extracting such task-relevant predictive information becomes particularly difficult for noisy and high-dimensional real-world data. In this work, we propose Contrastive Input Morphing (CIM), a representation learning framework that learns input-space transformations of the data to mitigate the effect of irrelevant input features on downstream performance. Our method leverages a perceptual similarity metric via a triplet loss to ensure that the transformation preserves task-relevant information. Empirically, we demonstrate the efficacy of our approach on various tasks which typically suffer from the presence of spurious correlations: classification with nuisance information, out-of-distribution generalization, and preservation of subgroup accuracies. We additionally show that CIM is complementary to other mutual information-based representation learning techniques, and demonstrate that it improves the performance of variational information bottleneck (VIB) when used in conjunction.' volume: 139 URL: https://proceedings.mlr.press/v139/taghanaki21a.html PDF: http://proceedings.mlr.press/v139/taghanaki21a/taghanaki21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-taghanaki21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Saeid A family: Taghanaki - given: Kristy family: Choi - given: Amir Hosein family: Khasahmadi - given: Anirudh family: Goyal editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10043-10053 id: taghanaki21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10043 lastpage: 10053 published: 2021-07-01 00:00:00 +0000 - title: 'DriftSurf: Stable-State / Reactive-State Learning under Concept Drift' abstract: 'When learning from streaming data, a change in the data distribution, also known as concept drift, can render a previously-learned model inaccurate and require training a new model. We present an adaptive learning algorithm that extends previous drift-detection-based methods by incorporating drift detection into a broader stable-state/reactive-state process. The advantage of our approach is that we can use aggressive drift detection in the stable state to achieve a high detection rate, but mitigate the false positive rate of standalone drift detection via a reactive state that reacts quickly to true drifts while eliminating most false positives. 
The algorithm is generic in its base learner and can be applied across a variety of supervised learning problems. Our theoretical analysis shows that the risk of the algorithm is (i) statistically better than standalone drift detection and (ii) competitive to an algorithm with oracle knowledge of when (abrupt) drifts occur. Experiments on synthetic and real datasets with concept drifts confirm our theoretical analysis.' volume: 139 URL: https://proceedings.mlr.press/v139/tahmasbi21a.html PDF: http://proceedings.mlr.press/v139/tahmasbi21a/tahmasbi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tahmasbi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ashraf family: Tahmasbi - given: Ellango family: Jothimurugesan - given: Srikanta family: Tirthapura - given: Phillip B family: Gibbons editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10054-10064 id: tahmasbi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10054 lastpage: 10064 published: 2021-07-01 00:00:00 +0000 - title: 'Sinkhorn Label Allocation: Semi-Supervised Classification via Annealed Self-Training' abstract: 'Self-training is a standard approach to semi-supervised learning where the learner’s own predictions on unlabeled data are used as supervision during training. In this paper, we reinterpret this label assignment process as an optimal transportation problem between examples and classes, wherein the cost of assigning an example to a class is mediated by the current predictions of the classifier. This formulation facilitates a practical annealing strategy for label assignment and allows for the inclusion of prior knowledge on class proportions via flexible upper bound constraints. The solutions to these assignment problems can be efficiently approximated using Sinkhorn iteration, thus enabling their use in the inner loop of standard stochastic optimization algorithms. We demonstrate the effectiveness of our algorithm on the CIFAR-10, CIFAR-100, and SVHN datasets in comparison with FixMatch, a state-of-the-art self-training algorithm.' volume: 139 URL: https://proceedings.mlr.press/v139/tai21a.html PDF: http://proceedings.mlr.press/v139/tai21a/tai21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tai21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kai Sheng family: Tai - given: Peter D family: Bailis - given: Gregory family: Valiant editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10065-10075 id: tai21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10065 lastpage: 10075 published: 2021-07-01 00:00:00 +0000 - title: 'Approximation Theory Based Methods for RKHS Bandits' abstract: 'The RKHS bandit problem (also called kernelized multi-armed bandit problem) is an online optimization problem of non-linear functions with noisy feedback. Although the problem has been extensively studied, there are unsatisfactory results for some problems compared to the well-studied linear bandit case. Specifically, there is no general algorithm for the adversarial RKHS bandit problem. In addition, high computational complexity of existing algorithms hinders practical application. 
We address these issues by considering a novel amalgamation of approximation theory and the misspecified linear bandit problem. Using an approximation method, we propose efficient algorithms for the stochastic RKHS bandit problem and the first general algorithm for the adversarial RKHS bandit problem. Furthermore, we empirically show that one of our proposed methods has comparable cumulative regret to IGP-UCB and its running time is much shorter.' volume: 139 URL: https://proceedings.mlr.press/v139/takemori21a.html PDF: http://proceedings.mlr.press/v139/takemori21a/takemori21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-takemori21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sho family: Takemori - given: Masahiro family: Sato editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10076-10085 id: takemori21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10076 lastpage: 10085 published: 2021-07-01 00:00:00 +0000 - title: 'Supervised Tree-Wasserstein Distance' abstract: 'To measure the similarity of documents, the Wasserstein distance is a powerful tool, but it requires a high computational cost. Recently, for fast computation of the Wasserstein distance, methods for approximating the Wasserstein distance using a tree metric have been proposed. These tree-based methods allow fast comparisons of a large number of documents; however, they are unsupervised and do not learn task-specific distances. In this work, we propose the Supervised Tree-Wasserstein (STW) distance, a fast, supervised metric learning method based on the tree metric. Specifically, we rewrite the Wasserstein distance on the tree metric by the parent-child relationships of a tree, and formulate it as a continuous optimization problem using a contrastive loss. Experimentally, we show that the STW distance can be computed fast, and improves the accuracy of document classification tasks. Furthermore, the STW distance is formulated by matrix multiplications, runs on a GPU, and is suitable for batch processing. Therefore, we show that the STW distance is extremely efficient when comparing a large number of documents.' volume: 139 URL: https://proceedings.mlr.press/v139/takezawa21a.html PDF: http://proceedings.mlr.press/v139/takezawa21a/takezawa21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-takezawa21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yuki family: Takezawa - given: Ryoma family: Sato - given: Makoto family: Yamada editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10086-10095 id: takezawa21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10086 lastpage: 10095 published: 2021-07-01 00:00:00 +0000 - title: 'EfficientNetV2: Smaller Models and Faster Training' abstract: 'This paper introduces EfficientNetV2, a new family of convolutional networks that have faster training speed and better parameter efficiency than previous models. To develop these models, we use a combination of training-aware neural architecture search and scaling, to jointly optimize training speed and parameter efficiency. The models were searched from the search space enriched with new ops such as Fused-MBConv. 
Our experiments show that EfficientNetV2 models train much faster than state-of-the-art models while being up to 6.8x smaller. Our training can be further sped up by progressively increasing the image size during training, but it often causes a drop in accuracy. To compensate for this accuracy drop, we propose an improved method of progressive learning, which adaptively adjusts regularization (e.g. data augmentation) along with image size. With progressive learning, our EfficientNetV2 significantly outperforms previous models on ImageNet and CIFAR/Cars/Flowers datasets. By pretraining on the same ImageNet21k, our EfficientNetV2 achieves 87.3% top-1 accuracy on ImageNet ILSVRC2012, outperforming the recent ViT by 2.0% accuracy while training 5x-11x faster using the same computing resources.' volume: 139 URL: https://proceedings.mlr.press/v139/tan21a.html PDF: http://proceedings.mlr.press/v139/tan21a/tan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mingxing family: Tan - given: Quoc family: Le editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10096-10106 id: tan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10096 lastpage: 10106 published: 2021-07-01 00:00:00 +0000 - title: 'SGA: A Robust Algorithm for Partial Recovery of Tree-Structured Graphical Models with Noisy Samples' abstract: 'We consider learning Ising tree models when the observations from the nodes are corrupted by independent but non-identically distributed noise with unknown statistics. Katiyar et al. (2020) showed that although the exact tree structure cannot be recovered, one can recover a partial tree structure; that is, a structure belonging to the equivalence class containing the true tree. This paper presents a systematic improvement of Katiyar et al. (2020). First, we present a novel impossibility result by deriving a bound on the necessary number of samples for partial recovery. Second, we derive a significantly improved sample complexity result in which the dependence on the minimum correlation $\rho_{\min}$ is $\rho_{\min}^{-8}$ instead of $\rho_{\min}^{-24}$. Finally, we propose Symmetrized Geometric Averaging (SGA), a more statistically robust algorithm for partial tree recovery. We provide error exponent analyses and extensive numerical results on a variety of trees to show that the sample complexity of SGA is significantly better than the algorithm of Katiyar et al. (2020). SGA can be readily extended to Gaussian models and is shown via numerical experiments to be similarly superior.' 
volume: 139 URL: https://proceedings.mlr.press/v139/tandon21a.html PDF: http://proceedings.mlr.press/v139/tandon21a/tandon21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tandon21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Anshoo family: Tandon - given: Aldric family: Han - given: Vincent family: Tan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10107-10117 id: tandon21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10107 lastpage: 10117 published: 2021-07-01 00:00:00 +0000 - title: '1-bit Adam: Communication Efficient Large-Scale Training with Adam’s Convergence Speed' abstract: 'Scalable training of large models (like BERT and GPT-3) requires careful optimization rooted in model design, architecture, and system capabilities. From a system standpoint, communication has become a major bottleneck, especially on commodity systems with standard TCP interconnects that offer limited network bandwidth. Communication compression is an important technique to reduce training time on such systems. One of the most effective ways to compress communication is via error compensation compression, which offers robust convergence speed, even under 1-bit compression. However, state-of-the-art error compensation techniques only work with basic optimizers like SGD and momentum SGD, which are linearly dependent on the gradients. They do not work with non-linear gradient-based optimizers like Adam, which offer state-of-the-art convergence efficiency and accuracy for models like BERT. In this paper, we propose 1-bit Adam that reduces the communication volume by up to 5x, offers much better scalability, and provides the same convergence speed as uncompressed Adam. Our key finding is that Adam’s variance becomes stable (after a warmup phase) and can be used as a fixed precondition for the rest of the training (compression phase). We performed experiments on up to 256 GPUs and show that 1-bit Adam enables up to 3.3x higher throughput for BERT-Large pre-training and up to 2.9x higher throughput for SQuAD fine-tuning. In addition, we provide theoretical analysis for 1-bit Adam.' volume: 139 URL: https://proceedings.mlr.press/v139/tang21a.html PDF: http://proceedings.mlr.press/v139/tang21a/tang21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tang21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hanlin family: Tang - given: Shaoduo family: Gan - given: Ammar Ahmad family: Awan - given: Samyam family: Rajbhandari - given: Conglong family: Li - given: Xiangru family: Lian - given: Ji family: Liu - given: Ce family: Zhang - given: Yuxiong family: He editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10118-10129 id: tang21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10118 lastpage: 10129 published: 2021-07-01 00:00:00 +0000 - title: 'Taylor Expansion of Discount Factors' abstract: 'In practical reinforcement learning (RL), the discount factor used for estimating value functions often differs from that used for defining the evaluation objective. In this work, we study the effect that this discrepancy of discount factors has during learning, and discover a family of objectives that interpolate value functions of two distinct discount factors. 
Our analysis suggests new ways for estimating value functions and performing policy optimization updates, which demonstrate empirical performance gains. This framework also leads to new insights on commonly-used deep RL heuristic modifications to policy optimization algorithms.' volume: 139 URL: https://proceedings.mlr.press/v139/tang21b.html PDF: http://proceedings.mlr.press/v139/tang21b/tang21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tang21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yunhao family: Tang - given: Mark family: Rowland - given: Remi family: Munos - given: Michal family: Valko editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10130-10140 id: tang21b issued: date-parts: - 2021 - 7 - 1 firstpage: 10130 lastpage: 10140 published: 2021-07-01 00:00:00 +0000 - title: 'REPAINT: Knowledge Transfer in Deep Reinforcement Learning' abstract: 'Accelerating learning processes for complex tasks by leveraging previously learned tasks has been one of the most challenging problems in reinforcement learning, especially when the similarity between source and target tasks is low. This work proposes the REPresentation And INstance Transfer (REPAINT) algorithm for knowledge transfer in deep reinforcement learning. REPAINT not only transfers the representation of a pre-trained teacher policy in the on-policy learning, but also uses an advantage-based experience selection approach to transfer useful samples collected following the teacher policy in the off-policy learning. Our experimental results on several benchmark tasks show that REPAINT significantly reduces the total training time in generic cases of task similarity. In particular, when the source tasks are dissimilar to, or sub-tasks of, the target tasks, REPAINT outperforms other baselines in both training-time reduction and asymptotic performance of return scores.' volume: 139 URL: https://proceedings.mlr.press/v139/tao21a.html PDF: http://proceedings.mlr.press/v139/tao21a/tao21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tao21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yunzhe family: Tao - given: Sahika family: Genc - given: Jonathan family: Chung - given: Tao family: Sun - given: Sunil family: Mallya editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10141-10152 id: tao21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10141 lastpage: 10152 published: 2021-07-01 00:00:00 +0000 - title: 'Understanding the Dynamics of Gradient Flow in Overparameterized Linear models' abstract: 'We provide a detailed analysis of the dynamics of the gradient flow in overparameterized two-layer linear models. A particularly interesting feature of this model is that its nonlinear dynamics can be exactly solved as a consequence of a large number of conservation laws that constrain the system to follow particular trajectories. 
More precisely, the gradient flow preserves the difference of the Gramian matrices of the input and output weights, and its convergence to equilibrium depends on both the magnitude of that difference (which is fixed at initialization) and the spectrum of the data. In addition, and generalizing prior work, we prove our results without assuming small, balanced or spectral initialization for the weights. Moreover, we establish interesting mathematical connections between matrix factorization problems and differential equations of the Riccati type.' volume: 139 URL: https://proceedings.mlr.press/v139/tarmoun21a.html PDF: http://proceedings.mlr.press/v139/tarmoun21a/tarmoun21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tarmoun21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Salma family: Tarmoun - given: Guilherme family: Franca - given: Benjamin D family: Haeffele - given: Rene family: Vidal editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10153-10161 id: tarmoun21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10153 lastpage: 10161 published: 2021-07-01 00:00:00 +0000 - title: 'Sequential Domain Adaptation by Synthesizing Distributionally Robust Experts' abstract: 'Least squares estimators, when trained on few target domain samples, may predict poorly. Supervised domain adaptation aims to improve the predictive accuracy by exploiting additional labeled training samples from a source distribution that is close to the target distribution. Given available data, we investigate novel strategies to synthesize a family of least squares estimator experts that are robust with regard to moment conditions. When these moment conditions are specified using Kullback-Leibler or Wasserstein-type divergences, we can find the robust estimators efficiently using convex optimization. We use the Bernstein online aggregation algorithm on the proposed family of robust experts to generate predictions for the sequential stream of target test samples. Numerical experiments on real data show that the robust strategies systematically outperform non-robust interpolations of the empirical least squares estimators.' volume: 139 URL: https://proceedings.mlr.press/v139/taskesen21a.html PDF: http://proceedings.mlr.press/v139/taskesen21a/taskesen21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-taskesen21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Bahar family: Taskesen - given: Man-Chung family: Yue - given: Jose family: Blanchet - given: Daniel family: Kuhn - given: Viet Anh family: Nguyen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10162-10172 id: taskesen21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10162 lastpage: 10172 published: 2021-07-01 00:00:00 +0000 - title: 'A Language for Counterfactual Generative Models' abstract: 'We present Omega, a probabilistic programming language with support for counterfactual inference. Counterfactual inference means to observe some fact in the present, and infer what would have happened had some past intervention been taken, e.g. 
“given that medication was not effective at dose x, what is the probability that it would have been effective at dose 2x?.” We accomplish this by introducing a new operator to probabilistic programming akin to Pearl’s do, define its formal semantics, provide an implementation, and demonstrate its utility through examples in a variety of simulation models.' volume: 139 URL: https://proceedings.mlr.press/v139/tavares21a.html PDF: http://proceedings.mlr.press/v139/tavares21a/tavares21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tavares21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zenna family: Tavares - given: James family: Koppel - given: Xin family: Zhang - given: Ria family: Das - given: Armando family: Solar-Lezama editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10173-10182 id: tavares21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10173 lastpage: 10182 published: 2021-07-01 00:00:00 +0000 - title: 'Synthesizer: Rethinking Self-Attention for Transformer Models' abstract: 'The dot product self-attention is known to be central and indispensable to state-of-the-art Transformer models. But is it really required? This paper investigates the true importance and contribution of the dot product-based self-attention mechanism on the performance of Transformer models. Via extensive experiments, we find that (1) random alignment matrices surprisingly perform quite competitively and (2) learning attention weights from token-token (query-key) interactions is useful but not that important after all. To this end, we propose \textsc{Synthesizer}, a model that learns synthetic attention weights without token-token interactions. In our experiments, we first show that simple Synthesizers achieve highly competitive performance when compared against vanilla Transformer models across a range of tasks, including machine translation, language modeling, text generation and GLUE/SuperGLUE benchmarks. When composed with dot product attention, we find that Synthesizers consistently outperform Transformers. Moreover, we conduct additional comparisons of Synthesizers against Dynamic Convolutions, showing that simple Random Synthesizer is not only $60%$ faster but also improves perplexity by a relative $3.5%$. Finally, we show that simple factorized Synthesizers can outperform Linformers on encoding only tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/tay21a.html PDF: http://proceedings.mlr.press/v139/tay21a/tay21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tay21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yi family: Tay - given: Dara family: Bahri - given: Donald family: Metzler - given: Da-Cheng family: Juan - given: Zhe family: Zhao - given: Che family: Zheng editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10183-10192 id: tay21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10183 lastpage: 10192 published: 2021-07-01 00:00:00 +0000 - title: 'OmniNet: Omnidirectional Representations from Transformers' abstract: 'This paper proposes Omnidirectional Representations from Transformers (OMNINET). 
In OmniNet, instead of maintaining a strictly horizontal receptive field, each token is allowed to attend to all tokens in the entire network. This process can also be interpreted as a form of extreme or intensive attention mechanism that has the receptive field of the entire width and depth of the network. To this end, the omnidirectional attention is learned via a meta-learner, which is essentially another self-attention based model. In order to mitigate the computationally expensive costs of full receptive field attention, we leverage efficient self-attention models such as kernel-based, low-rank attention and/or Big Bird as the meta-learner. Extensive experiments are conducted on autoregressive language modeling (LM1B, C4), Machine Translation, Long Range Arena (LRA), and Image Recognition. The experiments show that OmniNet achieves considerable improvements across these tasks, including achieving state-of-the-art performance on LM1B, WMT’14 En-De/En-Fr, and Long Range Arena. Moreover, using omnidirectional representation in Vision Transformers leads to significant improvements on image recognition tasks on both few-shot learning and fine-tuning setups.' volume: 139 URL: https://proceedings.mlr.press/v139/tay21b.html PDF: http://proceedings.mlr.press/v139/tay21b/tay21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tay21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yi family: Tay - given: Mostafa family: Dehghani - given: Vamsi family: Aribandi - given: Jai family: Gupta - given: Philip M family: Pham - given: Zhen family: Qin - given: Dara family: Bahri - given: Da-Cheng family: Juan - given: Donald family: Metzler editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10193-10202 id: tay21b issued: date-parts: - 2021 - 7 - 1 firstpage: 10193 lastpage: 10202 published: 2021-07-01 00:00:00 +0000 - title: 'T-SCI: A Two-Stage Conformal Inference Algorithm with Guaranteed Coverage for Cox-MLP' abstract: 'It is challenging to deal with censored data, where we only have access to the incomplete information of survival time instead of its exact value. Fortunately, under the linear predictor assumption, one can obtain guaranteed coverage for the confidence interval of survival time using methods like Cox Regression. However, when relaxing the linear assumption with neural networks (e.g., Cox-MLP \citep{katzman2018deepsurv,kvamme2019time}), we lose the guaranteed coverage. To recover the guaranteed coverage without the linear assumption, we propose two algorithms based on conformal inference. In the first algorithm \emph{WCCI}, we revisit weighted conformal inference and introduce a new non-conformity score based on partial likelihood. We then propose a two-stage algorithm \emph{T-SCI}, where we run WCCI in the first stage and apply quantile conformal inference to calibrate the results in the second stage. Theoretical analysis shows that T-SCI returns guaranteed coverage under milder assumptions than WCCI. We conduct extensive experiments on synthetic data and real data using different methods, which validate our analysis.' 
volume: 139 URL: https://proceedings.mlr.press/v139/teng21a.html PDF: http://proceedings.mlr.press/v139/teng21a/teng21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-teng21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jiaye family: Teng - given: Zeren family: Tan - given: Yang family: Yuan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10203-10213 id: teng21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10203 lastpage: 10213 published: 2021-07-01 00:00:00 +0000 - title: 'Moreau-Yosida $f$-divergences' abstract: 'Variational representations of $f$-divergences are central to many machine learning algorithms, with Lipschitz constrained variants recently gaining attention. Inspired by this, we define the Moreau-Yosida approximation of $f$-divergences with respect to the Wasserstein-$1$ metric. The corresponding variational formulas provide a generalization of a number of recent results, novel special cases of interest and a relaxation of the hard Lipschitz constraint. Additionally, we prove that the so-called tight variational representation of $f$-divergences can be taken over the quotient space of Lipschitz functions, and give a characterization of functions achieving the supremum in the variational representation. On the practical side, we propose an algorithm to calculate the tight convex conjugate of $f$-divergences compatible with automatic differentiation frameworks. As an application of our results, we propose the Moreau-Yosida $f$-GAN, providing an implementation of the variational formulas for the Kullback-Leibler, reverse Kullback-Leibler, $\chi^2$, reverse $\chi^2$, squared Hellinger, Jensen-Shannon, Jeffreys, triangular discrimination and total variation divergences as GANs trained on CIFAR-10, leading to competitive results and a simple solution to the problem of uniqueness of the optimal critic.' volume: 139 URL: https://proceedings.mlr.press/v139/terjek21a.html PDF: http://proceedings.mlr.press/v139/terjek21a/terjek21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-terjek21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dávid family: Terjék editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10214-10224 id: terjek21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10214 lastpage: 10224 published: 2021-07-01 00:00:00 +0000 - title: 'Understanding Invariance via Feedforward Inversion of Discriminatively Trained Classifiers' abstract: 'A discriminatively trained neural net classifier can fit the training data perfectly if all information about its input other than class membership has been discarded prior to the output layer. Surprisingly, past research has discovered that some extraneous visual detail remains in the unnormalized logits. This finding is based on inversion techniques that map deep embeddings back to images. We explore this phenomenon further using a novel synthesis of methods, yielding a feedforward inversion model that produces remarkably high fidelity reconstructions, qualitatively superior to those of past efforts. 
When applied to an adversarially robust classifier model, the reconstructions contain sufficient local detail and global structure that they might be confused with the original image in a quick glance, and the object category can clearly be gleaned from the reconstruction. Our approach is based on BigGAN (Brock, 2019), with conditioning on logits instead of one-hot class labels. We use our reconstruction model as a tool for exploring the nature of representations, including: the influence of model architecture and training objectives (specifically robust losses), the forms of invariance that networks achieve, representational differences between correctly and incorrectly classified images, and the effects of manipulating logits and images. We believe that our method can inspire future investigations into the nature of information flow in a neural net and can provide diagnostics for improving discriminative models. We provide pre-trained models and visualizations at \url{https://sites.google.com/view/understanding-invariance/home}.' volume: 139 URL: https://proceedings.mlr.press/v139/teterwak21a.html PDF: http://proceedings.mlr.press/v139/teterwak21a/teterwak21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-teterwak21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Piotr family: Teterwak - given: Chiyuan family: Zhang - given: Dilip family: Krishnan - given: Michael C family: Mozer editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10225-10235 id: teterwak21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10225 lastpage: 10235 published: 2021-07-01 00:00:00 +0000 - title: 'Resource Allocation in Multi-armed Bandit Exploration: Overcoming Sublinear Scaling with Adaptive Parallelism' abstract: 'We study exploration in stochastic multi-armed bandits when we have access to a divisible resource that can be allocated in varying amounts to arm pulls. We focus in particular on the allocation of distributed computing resources, where we may obtain results faster by allocating more resources per pull, but might have reduced throughput due to nonlinear scaling. For example, in simulation-based scientific studies, an expensive simulation can be sped up by running it on multiple cores. This speed-up however, is partly offset by the communication among cores, which results in lower throughput than if fewer cores were allocated to run more trials in parallel. In this paper, we explore these trade-offs in two settings. First, in a fixed confidence setting, we need to find the best arm with a given target success probability as quickly as possible. We propose an algorithm which trades off between information accumulation and throughput and show that the time taken can be upper bounded by the solution of a dynamic program whose inputs are the gaps between the sub-optimal and optimal arms. We also prove a matching hardness result. Second, we present an algorithm for a fixed deadline setting, where we are given a time deadline and need to maximize the probability of finding the best arm. We corroborate our theoretical insights with simulation experiments that show that the algorithms consistently match or outperform baseline algorithms on a variety of problem instances.' 
volume: 139 URL: https://proceedings.mlr.press/v139/thananjeyan21a.html PDF: http://proceedings.mlr.press/v139/thananjeyan21a/thananjeyan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-thananjeyan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Brijen family: Thananjeyan - given: Kirthevasan family: Kandasamy - given: Ion family: Stoica - given: Michael family: Jordan - given: Ken family: Goldberg - given: Joseph family: Gonzalez editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10236-10246 id: thananjeyan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10236 lastpage: 10246 published: 2021-07-01 00:00:00 +0000 - title: 'Monte Carlo Variational Auto-Encoders' abstract: 'Variational auto-encoders (VAE) are popular deep latent variable models which are trained by maximizing an Evidence Lower Bound (ELBO). To obtain tighter ELBO and hence better variational approximations, it has been proposed to use importance sampling to get a lower variance estimate of the evidence. However, importance sampling is known to perform poorly in high dimensions. While it has been suggested many times in the literature to use more sophisticated algorithms such as Annealed Importance Sampling (AIS) and its Sequential Importance Sampling (SIS) extensions, the potential benefits brought by these advanced techniques have never been realized for VAE: the AIS estimate cannot be easily differentiated, while SIS requires the specification of carefully chosen backward Markov kernels. In this paper, we address both issues and demonstrate the performance of the resulting Monte Carlo VAEs on a variety of applications.' volume: 139 URL: https://proceedings.mlr.press/v139/thin21a.html PDF: http://proceedings.mlr.press/v139/thin21a/thin21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-thin21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Achille family: Thin - given: Nikita family: Kotelevskii - given: Arnaud family: Doucet - given: Alain family: Durmus - given: Eric family: Moulines - given: Maxim family: Panov editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10247-10257 id: thin21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10247 lastpage: 10257 published: 2021-07-01 00:00:00 +0000 - title: 'Efficient Generative Modelling of Protein Structure Fragments using a Deep Markov Model' abstract: 'Fragment libraries are often used in protein structure prediction, simulation and design as a means to significantly reduce the vast conformational search space. Current state-of-the-art methods for fragment library generation do not properly account for aleatory and epistemic uncertainty, respectively due to the dynamic nature of proteins and experimental errors in protein structures. Additionally, they typically rely on information that is not generally or readily available, such as homologous sequences, related protein structures and other complementary information. To address these issues, we developed BIFROST, a novel take on the fragment library problem based on a Deep Markov Model architecture combined with directional statistics for angular degrees of freedom, implemented in the deep probabilistic programming language Pyro. 
BIFROST is a probabilistic, generative model of the protein backbone dihedral angles conditioned solely on the amino acid sequence. BIFROST generates fragment libraries with a quality on par with current state-of-the-art methods at a fraction of the run-time, while requiring considerably less information and allowing efficient evaluation of probabilities.' volume: 139 URL: https://proceedings.mlr.press/v139/thygesen21a.html PDF: http://proceedings.mlr.press/v139/thygesen21a/thygesen21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-thygesen21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Christian B family: Thygesen - given: Christian Skjødt family: Steenmans - given: Ahmad Salim family: Al-Sibahi - given: Lys Sanz family: Moreta - given: Anders Bundgård family: Sørensen - given: Thomas family: Hamelryck editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10258-10267 id: thygesen21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10258 lastpage: 10267 published: 2021-07-01 00:00:00 +0000 - title: 'Understanding self-supervised learning dynamics without contrastive pairs' abstract: 'While contrastive approaches of self-supervised learning (SSL) learn representations by minimizing the distance between two augmented views of the same data point (positive pairs) and maximizing the distance between views from different data points (negative pairs), recent \emph{non-contrastive} SSL (e.g., BYOL and SimSiam) show remarkable performance {\it without} negative pairs, with an extra learnable predictor and a stop-gradient operation. A fundamental question arises: why do they not collapse into a trivial representation? In this paper, we answer this question via a simple theoretical study and propose a novel approach, \ourmethod{}, that \emph{directly} sets the linear predictor based on the statistics of its inputs, rather than training it with gradient updates. On ImageNet, it performs comparably with more complex two-layer non-linear predictors that employ BatchNorm and outperforms a linear predictor by $2.5%$ in 300-epoch training (and $5%$ in 60-epoch). \ourmethod{} is motivated by our theoretical study of the nonlinear learning dynamics of non-contrastive SSL in simple linear networks. Our study yields conceptual insights into how non-contrastive SSL methods learn, how they avoid representational collapse, and how multiple factors, like predictor networks, stop-gradients, exponential moving averages, and weight decay, all come into play. Our simple theory recapitulates the results of real-world ablation studies in both STL-10 and ImageNet. Code is released\footnote{\url{https://github.com/facebookresearch/luckmatters/tree/master/ssl}}.' 
volume: 139 URL: https://proceedings.mlr.press/v139/tian21a.html PDF: http://proceedings.mlr.press/v139/tian21a/tian21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tian21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yuandong family: Tian - given: Xinlei family: Chen - given: Surya family: Ganguli editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10268-10278 id: tian21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10268 lastpage: 10278 published: 2021-07-01 00:00:00 +0000 - title: 'Online Learning in Unknown Markov Games' abstract: 'We study online learning in unknown Markov games, a problem that arises in episodic multi-agent reinforcement learning where the actions of the opponents are unobservable. We show that in this challenging setting, achieving sublinear regret against the best response in hindsight is statistically hard. We then consider a weaker notion of regret by competing with the \emph{minimax value} of the game, and present an algorithm that achieves a sublinear $\tilde{\mathcal{O}}(K^{2/3})$ regret after $K$ episodes. This is the first sublinear regret bound (to our knowledge) for online learning in unknown Markov games. Importantly, our regret bound is independent of the size of the opponents’ action spaces. As a result, even when the opponents’ actions are fully observable, our regret bound improves upon existing analysis (e.g., (Xie et al., 2020)) by an exponential factor in the number of opponents.' volume: 139 URL: https://proceedings.mlr.press/v139/tian21b.html PDF: http://proceedings.mlr.press/v139/tian21b/tian21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tian21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yi family: Tian - given: Yuanhao family: Wang - given: Tiancheng family: Yu - given: Suvrit family: Sra editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10279-10288 id: tian21b issued: date-parts: - 2021 - 7 - 1 firstpage: 10279 lastpage: 10288 published: 2021-07-01 00:00:00 +0000 - title: 'BORE: Bayesian Optimization by Density-Ratio Estimation' abstract: 'Bayesian optimization (BO) is among the most effective and widely-used blackbox optimization methods. BO proposes solutions according to an explore-exploit trade-off criterion encoded in an acquisition function, many of which are computed from the posterior predictive of a probabilistic surrogate model. Prevalent among these is the expected improvement (EI). The need to ensure analytical tractability of the predictive often poses limitations that can hinder the efficiency and applicability of BO. In this paper, we cast the computation of EI as a binary classification problem, building on the link between class-probability estimation and density-ratio estimation, and the lesser-known link between density-ratios and EI. By circumventing the tractability constraints, this reformulation provides numerous advantages, not least in terms of expressiveness, versatility, and scalability.' 
volume: 139 URL: https://proceedings.mlr.press/v139/tiao21a.html PDF: http://proceedings.mlr.press/v139/tiao21a/tiao21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tiao21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Louis C family: Tiao - given: Aaron family: Klein - given: Matthias W family: Seeger - given: Edwin V. family: Bonilla - given: Cedric family: Archambeau - given: Fabio family: Ramos editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10289-10300 id: tiao21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10289 lastpage: 10300 published: 2021-07-01 00:00:00 +0000 - title: 'Nonparametric Decomposition of Sparse Tensors' abstract: 'Tensor decomposition is a powerful framework for multiway data analysis. Despite the success of existing approaches, they ignore the sparse nature of the tensor data in many real-world applications, explicitly or implicitly assuming dense tensors. To address this model misspecification and to exploit the sparse tensor structures, we propose Nonparametric dEcomposition of Sparse Tensors (\ours), which can capture both the sparse structure properties and complex relationships between the tensor nodes to enhance the embedding estimation. Specifically, we first use completely random measures to construct tensor-valued random processes. We prove that the entry growth is much slower than that of the corresponding tensor size, which implies sparsity. Given finite observations (\ie projections), we then propose two nonparametric decomposition models that couple Dirichlet processes and Gaussian processes to jointly sample the sparse entry indices and the entry values (the latter as a nonlinear mapping of the embeddings), so as to encode both the structure properties and nonlinear relationships of the tensor nodes into the embeddings. Finally, we use the stick-breaking construction and random Fourier features to develop a scalable, stochastic variational learning algorithm. We show the advantage of our approach in sparse tensor generation, and entry index and value prediction in several real-world applications.' volume: 139 URL: https://proceedings.mlr.press/v139/tillinghast21a.html PDF: http://proceedings.mlr.press/v139/tillinghast21a/tillinghast21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tillinghast21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Conor family: Tillinghast - given: Shandian family: Zhe editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10301-10311 id: tillinghast21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10301 lastpage: 10311 published: 2021-07-01 00:00:00 +0000 - title: 'Probabilistic Programs with Stochastic Conditioning' abstract: 'We tackle the problem of conditioning probabilistic programs on distributions of observable variables. Probabilistic programs are usually conditioned on samples from the joint data distribution, which we refer to as deterministic conditioning. However, in many real-life scenarios, the observations are given as marginal distributions, summary statistics, or samplers. Conventional probabilistic programming systems lack adequate means for modeling and inference in such scenarios. 
We propose a generalization of deterministic conditioning to stochastic conditioning, that is, conditioning on the marginal distribution of a variable taking a particular form. To this end, we first define the formal notion of stochastic conditioning and discuss its key properties. We then show how to perform inference in the presence of stochastic conditioning. We demonstrate potential usage of stochastic conditioning on several case studies which involve various kinds of stochastic conditioning and are difficult to solve otherwise. Although we present stochastic conditioning in the context of probabilistic programming, our formalization is general and applicable to other settings.' volume: 139 URL: https://proceedings.mlr.press/v139/tolpin21a.html PDF: http://proceedings.mlr.press/v139/tolpin21a/tolpin21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tolpin21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: David family: Tolpin - given: Yuan family: Zhou - given: Tom family: Rainforth - given: Hongseok family: Yang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10312-10323 id: tolpin21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10312 lastpage: 10323 published: 2021-07-01 00:00:00 +0000 - title: 'Deep Continuous Networks' abstract: 'CNNs and computational models of biological vision share some fundamental principles, which opened new avenues of research. However, fruitful cross-field research is hampered by conventional CNN architectures being based on spatially and depthwise discrete representations, which cannot accommodate certain aspects of biological complexity such as continuously varying receptive field sizes and dynamics of neuronal responses. Here we propose deep continuous networks (DCNs), which combine spatially continuous filters, with the continuous depth framework of neural ODEs. This allows us to learn the spatial support of the filters during training, as well as model the continuous evolution of feature maps, linking DCNs closely to biological models. We show that DCNs are versatile and highly applicable to standard image classification and reconstruction problems, where they improve parameter and data efficiency, and allow for meta-parametrization. We illustrate the biological plausibility of the scale distributions learned by DCNs and explore their performance in a neuroscientifically inspired pattern completion task. Finally, we investigate an efficient implementation of DCNs by changing input contrast.' 
volume: 139 URL: https://proceedings.mlr.press/v139/tomen21a.html PDF: http://proceedings.mlr.press/v139/tomen21a/tomen21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tomen21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Nergis family: Tomen - given: Silvia-Laura family: Pintea - given: Jan family: Van Gemert editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10324-10335 id: tomen21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10324 lastpage: 10335 published: 2021-07-01 00:00:00 +0000 - title: 'Diffusion Earth Mover’s Distance and Distribution Embeddings' abstract: 'We propose a new fast method of measuring distances between large numbers of related high dimensional datasets called the Diffusion Earth Mover’s Distance (EMD). We model the datasets as distributions supported on a common data graph that is derived from the affinity matrix computed on the combined data. In such cases where the graph is a discretization of an underlying Riemannian closed manifold, we prove that Diffusion EMD is topologically equivalent to the standard EMD with a geodesic ground distance. Diffusion EMD can be computed in $\tilde{O}(n)$ time and is more accurate than similarly fast algorithms such as tree-based EMDs. We also show Diffusion EMD is fully differentiable, making it amenable to future uses in gradient-descent frameworks such as deep neural networks. Finally, we demonstrate an application of Diffusion EMD to single cell data collected from 210 COVID-19 patient samples at Yale New Haven Hospital. Here, Diffusion EMD can derive distances between patients on the manifold of cells at least two orders of magnitude faster than equally accurate methods. This distance matrix between patients can be embedded into a higher level patient manifold which uncovers structure and heterogeneity in patients. More generally, Diffusion EMD is applicable to all datasets that are massively collected in parallel in many medical and biological systems.' volume: 139 URL: https://proceedings.mlr.press/v139/tong21a.html PDF: http://proceedings.mlr.press/v139/tong21a/tong21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tong21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Alexander Y family: Tong - given: Guillaume family: Huguet - given: Amine family: Natik - given: Kincaid family: Macdonald - given: Manik family: Kuchroo - given: Ronald family: Coifman - given: Guy family: Wolf - given: Smita family: Krishnaswamy editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10336-10346 id: tong21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10336 lastpage: 10346 published: 2021-07-01 00:00:00 +0000 - title: 'Training data-efficient image transformers & distillation through attention' abstract: 'Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. These high-performing vision transformers are pre-trained with hundreds of millions of images using a large infrastructure, thereby limiting their adoption. In this work, we produce competitive convolution-free transformers trained on ImageNet only, using a single computer in less than 3 days. 
Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop) on ImageNet with no external data. We also introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention, typically from a convnet teacher. The learned transformers are competitive (85.2% top-1 acc.) with the state of the art on ImageNet, and similarly when transferred to other tasks. We will share our code and models.' volume: 139 URL: https://proceedings.mlr.press/v139/touvron21a.html PDF: http://proceedings.mlr.press/v139/touvron21a/touvron21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-touvron21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hugo family: Touvron - given: Matthieu family: Cord - given: Matthijs family: Douze - given: Francisco family: Massa - given: Alexandre family: Sablayrolles - given: Herve family: Jegou editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10347-10357 id: touvron21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10347 lastpage: 10357 published: 2021-07-01 00:00:00 +0000 - title: 'Conservative Objective Models for Effective Offline Model-Based Optimization' abstract: 'In this paper, we aim to solve data-driven model-based optimization (MBO) problems, where the goal is to find a design input that maximizes an unknown objective function provided access to only a static dataset of inputs and their corresponding objective values. Such data-driven optimization procedures are the only practical methods in many real-world domains where active data collection is expensive (e.g., when optimizing over proteins) or dangerous (e.g., when optimizing over aircraft designs, since actively evaluating malformed aircraft designs is unsafe). Typical methods for MBO that optimize the input against a learned model of the unknown score function are affected by erroneous overestimation in the learned model caused by distributional shift, which drives the optimizer to low-scoring or invalid inputs. To overcome this, we propose conservative objective models (COMs), a method that learns a model of the objective function which lower bounds the actual value of the ground-truth objective on out-of-distribution inputs and uses it for optimization. In practice, COMs outperform a number of existing methods on a wide range of MBO problems, including optimizing controller parameters, robot morphologies, and superconducting materials.' 
volume: 139 URL: https://proceedings.mlr.press/v139/trabucco21a.html PDF: http://proceedings.mlr.press/v139/trabucco21a/trabucco21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-trabucco21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Brandon family: Trabucco - given: Aviral family: Kumar - given: Xinyang family: Geng - given: Sergey family: Levine editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10358-10368 id: trabucco21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10358 lastpage: 10368 published: 2021-07-01 00:00:00 +0000 - title: 'Sparse within Sparse Gaussian Processes using Neighbor Information' abstract: 'Approximations to Gaussian processes (GPs) based on inducing variables, combined with variational inference techniques, enable state-of-the-art sparse approaches to infer GPs at scale through mini-batch based learning. In this work, we further push the limits of scalability of sparse GPs by allowing a large number of inducing variables without imposing a special structure on the inducing inputs. In particular, we introduce a novel hierarchical prior, which imposes sparsity on the set of inducing variables. We treat our model variationally, and we experimentally show considerable computational gains compared to standard sparse GPs when sparsity on the inducing variables is realized considering the nearest inducing inputs of a random mini-batch of the data. We perform an extensive experimental validation that demonstrates the effectiveness of our approach compared to the state-of-the-art. Our approach makes it possible to use sparse GPs with a large number of inducing points without incurring a prohibitive computational cost.' volume: 139 URL: https://proceedings.mlr.press/v139/tran21a.html PDF: http://proceedings.mlr.press/v139/tran21a/tran21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tran21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Gia-Lac family: Tran - given: Dimitrios family: Milios - given: Pietro family: Michiardi - given: Maurizio family: Filippone editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10369-10378 id: tran21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10369 lastpage: 10378 published: 2021-07-01 00:00:00 +0000 - title: 'SMG: A Shuffling Gradient-Based Method with Momentum' abstract: 'We combine two advanced ideas widely used in optimization for machine learning: the \textit{shuffling} strategy and the \textit{momentum} technique to develop a novel shuffling gradient-based method with momentum, coined \textbf{S}huffling \textbf{M}omentum \textbf{G}radient (SMG), for non-convex finite-sum optimization problems. While our method is inspired by momentum techniques, its update is fundamentally different from existing momentum-based methods. We establish state-of-the-art convergence rates of SMG for any shuffling strategy using either a constant or diminishing learning rate under standard assumptions (i.e. \textit{$L$-smoothness} and \textit{bounded variance}). When the shuffling strategy is fixed, we develop another new algorithm that is similar to existing momentum methods, and prove the same convergence rates for this algorithm under the $L$-smoothness and bounded gradient assumptions. 
We demonstrate our algorithms via numerical simulations on standard datasets and compare them with existing shuffling methods. Our tests have shown encouraging performance of the new algorithms.' volume: 139 URL: https://proceedings.mlr.press/v139/tran21b.html PDF: http://proceedings.mlr.press/v139/tran21b/tran21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tran21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Trang H family: Tran - given: Lam M family: Nguyen - given: Quoc family: Tran-Dinh editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10379-10389 id: tran21b issued: date-parts: - 2021 - 7 - 1 firstpage: 10379 lastpage: 10389 published: 2021-07-01 00:00:00 +0000 - title: 'Bayesian Optimistic Optimisation with Exponentially Decaying Regret' abstract: 'Bayesian optimisation (BO) is a well known algorithm for finding the global optimum of expensive, black-box functions. The current practical BO algorithms have regret bounds ranging from $\mathcal{O}(\frac{\log N}{\sqrt{N}})$ to $\mathcal{O}(e^{-\sqrt{N}})$, where $N$ is the number of evaluations. This paper explores the possibility of improving the regret bound in the noise-free setting by intertwining concepts from BO and optimistic optimisation methods which are based on partitioning the search space. We propose the BOO algorithm, a first practical approach which can achieve an exponential regret bound with order $\mathcal{O}(N^{-\sqrt{N}})$ under the assumption that the objective function is sampled from a Gaussian process with a Matérn kernel with smoothness parameter $\nu > 4 +\frac{D}{2}$, where $D$ is the number of dimensions. We perform experiments on optimisation of various synthetic functions and machine learning hyperparameter tuning tasks and show that our algorithm outperforms baselines.' volume: 139 URL: https://proceedings.mlr.press/v139/tran-the21a.html PDF: http://proceedings.mlr.press/v139/tran-the21a/tran-the21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tran-the21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hung family: Tran-The - given: Sunil family: Gupta - given: Santu family: Rana - given: Svetha family: Venkatesh editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10390-10400 id: tran-the21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10390 lastpage: 10400 published: 2021-07-01 00:00:00 +0000 - title: 'On Disentangled Representations Learned from Correlated Data' abstract: 'The focus of disentanglement approaches has been on identifying independent factors of variation in data. However, the causal variables underlying real-world observations are often not statistically independent. In this work, we bridge the gap to real-world scenarios by analyzing the behavior of the most prominent disentanglement approaches on correlated data in a large-scale empirical study (including 4260 models). We show and quantify that systematically induced correlations in the dataset are being learned and reflected in the latent representations, which has implications for downstream applications of disentanglement such as fairness. 
We also demonstrate how to resolve these latent correlations, either using weak supervision during training or by post-hoc correcting a pre-trained model with a small number of labels.' volume: 139 URL: https://proceedings.mlr.press/v139/trauble21a.html PDF: http://proceedings.mlr.press/v139/trauble21a/trauble21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-trauble21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Frederik family: Träuble - given: Elliot family: Creager - given: Niki family: Kilbertus - given: Francesco family: Locatello - given: Andrea family: Dittadi - given: Anirudh family: Goyal - given: Bernhard family: Schölkopf - given: Stefan family: Bauer editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10401-10412 id: trauble21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10401 lastpage: 10412 published: 2021-07-01 00:00:00 +0000 - title: 'A New Formalism, Method and Open Issues for Zero-Shot Coordination' abstract: 'In many coordination problems, independently reasoning humans are able to discover mutually compatible policies. In contrast, independently trained self-play policies are often mutually incompatible. Zero-shot coordination (ZSC) has recently been proposed as a new frontier in multi-agent reinforcement learning to address this fundamental issue. Prior work approaches the ZSC problem by assuming players can agree on a shared learning algorithm but not on labels for actions and observations, and proposes other-play as an optimal solution. However, until now, this “label-free” problem has only been informally defined. We formalize this setting as the label-free coordination (LFC) problem by defining the label-free coordination game. We show that other-play is not an optimal solution to the LFC problem as it fails to consistently break ties between incompatible maximizers of the other-play objective. We introduce an extension of the algorithm, other-play with tie-breaking, and prove that it is optimal in the LFC problem and an equilibrium in the LFC game. Since arbitrary tie-breaking is precisely what the ZSC setting aims to prevent, we conclude that the LFC problem does not reflect the aims of ZSC. To address this, we introduce an alternative informal operationalization of ZSC as a starting point for future work.' 
volume: 139 URL: https://proceedings.mlr.press/v139/treutlein21a.html PDF: http://proceedings.mlr.press/v139/treutlein21a/treutlein21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-treutlein21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Johannes family: Treutlein - given: Michael family: Dennis - given: Caspar family: Oesterheld - given: Jakob family: Foerster editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10413-10423 id: treutlein21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10413 lastpage: 10423 published: 2021-07-01 00:00:00 +0000 - title: 'Learning a Universal Template for Few-shot Dataset Generalization' abstract: 'Few-shot dataset generalization is a challenging variant of the well-studied few-shot classification problem where a diverse training set of several datasets is given, for the purpose of training an adaptable model that can then learn classes from \emph{new datasets} using only a few examples. To this end, we propose to utilize the diverse training set to construct a \emph{universal template}: a partial model that can define a wide array of dataset-specialized models, by plugging in appropriate components. For each new few-shot classification problem, our approach therefore only requires inferring a small number of parameters to insert into the universal template. We design a separate network that produces an initialization of those parameters for each given task, and we then fine-tune its proposed initialization via a few steps of gradient descent. Our approach is more parameter-efficient, scalable and adaptable compared to previous methods, and achieves the state-of-the-art on the challenging Meta-Dataset benchmark.' volume: 139 URL: https://proceedings.mlr.press/v139/triantafillou21a.html PDF: http://proceedings.mlr.press/v139/triantafillou21a/triantafillou21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-triantafillou21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Eleni family: Triantafillou - given: Hugo family: Larochelle - given: Richard family: Zemel - given: Vincent family: Dumoulin editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10424-10433 id: triantafillou21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10424 lastpage: 10433 published: 2021-07-01 00:00:00 +0000 - title: 'Provable Meta-Learning of Linear Representations' abstract: 'Meta-learning, or learning-to-learn, seeks to design algorithms that can utilize previous experience to rapidly learn new skills or adapt to new environments. Representation learning—a key tool for performing meta-learning—learns a data representation that can transfer knowledge across multiple tasks, which is essential in regimes where data is scarce. Despite a recent surge of interest in the practice of meta-learning, the theoretical underpinnings of meta-learning algorithms are lacking, especially in the context of learning transferable representations. In this paper, we focus on the problem of multi-task linear regression—in which multiple linear regression models share a common, low-dimensional linear representation. 
Here, we provide provably fast, sample-efficient algorithms to address the dual challenges of (1) learning a common set of features from multiple, related tasks, and (2) transferring this knowledge to new, unseen tasks. Both are central to the general problem of meta-learning. Finally, we complement these results by providing information-theoretic lower bounds on the sample complexity of learning these linear features.' volume: 139 URL: https://proceedings.mlr.press/v139/tripuraneni21a.html PDF: http://proceedings.mlr.press/v139/tripuraneni21a/tripuraneni21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tripuraneni21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Nilesh family: Tripuraneni - given: Chi family: Jin - given: Michael family: Jordan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10434-10443 id: tripuraneni21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10434 lastpage: 10443 published: 2021-07-01 00:00:00 +0000 - title: 'Cumulants of Hawkes Processes are Robust to Observation Noise' abstract: 'Multivariate Hawkes processes (MHPs) are widely used in a variety of fields to model the occurrence of causally related discrete events in continuous time. Most state-of-the-art approaches address the problem of learning MHPs from perfect traces without noise. In practice, the process through which events are collected might introduce noise in the timestamps. In this work, we address the problem of learning the causal structure of MHPs when the observed timestamps of events are subject to random and unknown shifts, also known as random translations. We prove that the cumulants of MHPs are invariant to random translations, and therefore can be used to learn their underlying causal structure. Furthermore, we empirically characterize the effect of random translations on state-of-the-art learning methods. We show that maximum likelihood-based estimators are brittle, while cumulant-based estimators remain stable even in the presence of significant time shifts.' volume: 139 URL: https://proceedings.mlr.press/v139/trouleau21a.html PDF: http://proceedings.mlr.press/v139/trouleau21a/trouleau21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-trouleau21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: William family: Trouleau - given: Jalal family: Etesami - given: Matthias family: Grossglauser - given: Negar family: Kiyavash - given: Patrick family: Thiran editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10444-10454 id: trouleau21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10444 lastpage: 10454 published: 2021-07-01 00:00:00 +0000 - title: 'PixelTransformer: Sample Conditioned Signal Generation' abstract: 'We propose a generative model that can infer a distribution for the underlying spatial signal conditioned on sparse samples e.g. plausible images given a few observed pixels. In contrast to sequential autoregressive generative models, our model allows conditioning on arbitrary samples and can answer distributional queries for any location. 
We empirically validate our approach across three image datasets and show that we learn to generate diverse and meaningful samples, with the distribution variance reducing given more observed pixels. We also show that our approach is applicable beyond images and can allow generating other types of spatial outputs e.g. polynomials, 3D shapes, and videos.' volume: 139 URL: https://proceedings.mlr.press/v139/tulsiani21a.html PDF: http://proceedings.mlr.press/v139/tulsiani21a/tulsiani21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-tulsiani21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shubham family: Tulsiani - given: Abhinav family: Gupta editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10455-10464 id: tulsiani21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10455 lastpage: 10464 published: 2021-07-01 00:00:00 +0000 - title: 'A Framework for Private Matrix Analysis in Sliding Window Model' abstract: 'We perform a rigorous study of private matrix analysis when only the last $W$ updates to matrices are considered useful for analysis. We show the existing framework in the non-private setting is not robust to noise required for privacy. We then propose a framework robust to noise and use it to give first efficient $o(W)$ space differentially private algorithms for spectral approximation, principal component analysis (PCA), multi-response linear regression, sparse PCA, and non-negative PCA. Prior to our work, no such result was known for sparse and non-negative differentially private PCA even in the static data setting. We also give a lower bound to demonstrate the cost of privacy in the sliding window model.' volume: 139 URL: https://proceedings.mlr.press/v139/upadhyay21a.html PDF: http://proceedings.mlr.press/v139/upadhyay21a/upadhyay21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-upadhyay21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jalaj family: Upadhyay - given: Sarvagya family: Upadhyay editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10465-10475 id: upadhyay21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10465 lastpage: 10475 published: 2021-07-01 00:00:00 +0000 - title: 'Fast Projection Onto Convex Smooth Constraints' abstract: 'The Euclidean projection onto a convex set is an important problem that arises in numerous constrained optimization tasks. Unfortunately, in many cases, computing projections is computationally demanding. In this work, we focus on projection problems where the constraints are smooth and the number of constraints is significantly smaller than the dimension. The runtime of existing approaches to solving such problems is either cubic in the dimension or polynomial in the inverse of the target accuracy. Conversely, we propose a simple and efficient primal-dual approach, with a runtime that scales only linearly with the dimension, and only logarithmically in the inverse of the target accuracy. We empirically demonstrate its performance, and compare it with standard baselines.' 
volume: 139 URL: https://proceedings.mlr.press/v139/usmanova21a.html PDF: http://proceedings.mlr.press/v139/usmanova21a/usmanova21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-usmanova21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ilnura family: Usmanova - given: Maryam family: Kamgarpour - given: Andreas family: Krause - given: Kfir family: Levy editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10476-10486 id: usmanova21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10476 lastpage: 10486 published: 2021-07-01 00:00:00 +0000 - title: 'SGLB: Stochastic Gradient Langevin Boosting' abstract: 'This paper introduces Stochastic Gradient Langevin Boosting (SGLB) - a powerful and efficient machine learning framework that may deal with a wide range of loss functions and has provable generalization guarantees. The method is based on a special form of the Langevin diffusion equation specifically designed for gradient boosting. This allows us to theoretically guarantee the global convergence even for multimodal loss functions, while standard gradient boosting algorithms can guarantee only local optimum. We also empirically show that SGLB outperforms classic gradient boosting when applied to classification tasks with 0-1 loss function, which is known to be multimodal.' volume: 139 URL: https://proceedings.mlr.press/v139/ustimenko21a.html PDF: http://proceedings.mlr.press/v139/ustimenko21a/ustimenko21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-ustimenko21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Aleksei family: Ustimenko - given: Liudmila family: Prokhorenkova editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10487-10496 id: ustimenko21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10487 lastpage: 10496 published: 2021-07-01 00:00:00 +0000 - title: 'LTL2Action: Generalizing LTL Instructions for Multi-Task RL' abstract: 'We address the problem of teaching a deep reinforcement learning (RL) agent to follow instructions in multi-task environments. Instructions are expressed in a well-known formal language – linear temporal logic (LTL) – and can specify a diversity of complex, temporally extended behaviours, including conditionals and alternative realizations. Our proposed learning approach exploits the compositional syntax and the semantics of LTL, enabling our RL agent to learn task-conditioned policies that generalize to new instructions, not observed during training. To reduce the overhead of learning LTL semantics, we introduce an environment-agnostic LTL pretraining scheme which improves sample-efficiency in downstream environments. Experiments on discrete and continuous domains target combinatorial task sets of up to $\sim10^{39}$ unique tasks and demonstrate the strength of our approach in learning to solve (unseen) tasks, given LTL instructions.' 
volume: 139 URL: https://proceedings.mlr.press/v139/vaezipoor21a.html PDF: http://proceedings.mlr.press/v139/vaezipoor21a/vaezipoor21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-vaezipoor21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Pashootan family: Vaezipoor - given: Andrew C family: Li - given: Rodrigo A Toro family: Icarte - given: Sheila A. family: Mcilraith editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10497-10508 id: vaezipoor21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10497 lastpage: 10508 published: 2021-07-01 00:00:00 +0000 - title: 'Active Deep Probabilistic Subsampling' abstract: 'Subsampling a signal of interest can reduce costly data transfer, battery drain, radiation exposure and acquisition time in a wide range of problems. The recently proposed Deep Probabilistic Subsampling (DPS) method effectively integrates subsampling in an end-to-end deep learning model, but learns a static pattern for all datapoints. We generalize DPS to a sequential method that actively picks the next sample based on the information acquired so far; dubbed Active-DPS (A-DPS). We validate that A-DPS improves over DPS for MNIST classification at high subsampling rates. Moreover, we demonstrate strong performance in active acquisition Magnetic Resonance Image (MRI) reconstruction, outperforming DPS and other deep learning methods.' volume: 139 URL: https://proceedings.mlr.press/v139/van-gorp21a.html PDF: http://proceedings.mlr.press/v139/van-gorp21a/van-gorp21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-van-gorp21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hans family: Van Gorp - given: Iris family: Huijben - given: Bastiaan S family: Veeling - given: Nicola family: Pezzotti - given: Ruud J. G. family: Van Sloun editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10509-10518 id: van-gorp21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10509 lastpage: 10518 published: 2021-07-01 00:00:00 +0000 - title: 'CURI: A Benchmark for Productive Concept Learning Under Uncertainty' abstract: 'Humans can learn and reason under substantial uncertainty in a space of infinitely many compositional, productive concepts. For example, if a scene with two blue spheres qualifies as “daxy,” one can reason that the underlying concept may require scenes to have “only blue spheres” or “only spheres” or “only two objects.” In contrast, standard benchmarks for compositional reasoning do not explicitly capture a notion of reasoning under uncertainty or evaluate compositional concept acquisition. We introduce a new benchmark, Compositional Reasoning Under Uncertainty (CURI) that instantiates a series of few-shot, meta-learning tasks in a productive concept space to evaluate different aspects of systematic generalization under uncertainty, including splits that test abstract understandings of disentangling, productive generalization, learning boolean operations, variable binding, etc. Importantly, we also contribute a model-independent “compositionality gap” to evaluate the difficulty of generalizing out-of-distribution along each of these axes, allowing objective comparison of the difficulty of each compositional split. 
Evaluations across a range of modeling choices and splits reveal substantial room for improvement on the proposed benchmark.' volume: 139 URL: https://proceedings.mlr.press/v139/vedantam21a.html PDF: http://proceedings.mlr.press/v139/vedantam21a/vedantam21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-vedantam21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ramakrishna family: Vedantam - given: Arthur family: Szlam - given: Maximillian family: Nickel - given: Ari family: Morcos - given: Brenden M family: Lake editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10519-10529 id: vedantam21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10519 lastpage: 10529 published: 2021-07-01 00:00:00 +0000 - title: 'Towards Domain-Agnostic Contrastive Learning' abstract: 'Despite recent successes, most contrastive self-supervised learning methods are domain-specific, relying heavily on data augmentation techniques that require knowledge about a particular domain, such as image cropping and rotation. To overcome such limitation, we propose a domain-agnostic approach to contrastive learning, named DACL, that is applicable to problems where domain-specific data augmentations are not readily available. Key to our approach is the use of Mixup noise to create similar and dissimilar examples by mixing data samples differently either at the input or hidden-state levels. We theoretically analyze our method and show advantages over the Gaussian-noise based contrastive learning approach. To demonstrate the effectiveness of DACL, we conduct experiments across various domains such as tabular data, images, and graphs. Our results show that DACL not only outperforms other domain-agnostic noising methods, such as Gaussian-noise, but also combines well with domain-specific methods, such as SimCLR, to improve self-supervised visual representation learning.' volume: 139 URL: https://proceedings.mlr.press/v139/verma21a.html PDF: http://proceedings.mlr.press/v139/verma21a/verma21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-verma21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Vikas family: Verma - given: Thang family: Luong - given: Kenji family: Kawaguchi - given: Hieu family: Pham - given: Quoc family: Le editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10530-10541 id: verma21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10530 lastpage: 10541 published: 2021-07-01 00:00:00 +0000 - title: 'Sparsifying Networks via Subdifferential Inclusion' abstract: 'Sparsifying deep neural networks is of paramount interest in many areas, especially when those networks have to be implemented on low-memory devices. In this article, we propose a new formulation of the problem of generating sparse weights for a pre-trained neural network. By leveraging the properties of standard nonlinear activation functions, we show that the problem is equivalent to an approximate subdifferential inclusion problem. The accuracy of the approximation controls the sparsity. We show that the proposed approach is valid for a broad class of activation functions (ReLU, sigmoid, softmax). We propose an iterative optimization algorithm to induce sparsity whose convergence is guaranteed. 
Because of the algorithm flexibility, the sparsity can be ensured from partial training data in a minibatch manner. To demonstrate the effectiveness of our method, we perform experiments on various networks in different applicative contexts: image classification, speech recognition, natural language processing, and time-series forecasting.' volume: 139 URL: https://proceedings.mlr.press/v139/verma21b.html PDF: http://proceedings.mlr.press/v139/verma21b/verma21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-verma21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sagar family: Verma - given: Jean-Christophe family: Pesquet editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10542-10552 id: verma21b issued: date-parts: - 2021 - 7 - 1 firstpage: 10542 lastpage: 10552 published: 2021-07-01 00:00:00 +0000 - title: 'Unbiased Gradient Estimation in Unrolled Computation Graphs with Persistent Evolution Strategies' abstract: 'Unrolled computation graphs arise in many scenarios, including training RNNs, tuning hyperparameters through unrolled optimization, and training learned optimizers. Current approaches to optimizing parameters in such computation graphs suffer from high variance gradients, bias, slow updates, or large memory usage. We introduce a method called Persistent Evolution Strategies (PES), which divides the computation graph into a series of truncated unrolls, and performs an evolution strategies-based update step after each unroll. PES eliminates bias from these truncations by accumulating correction terms over the entire sequence of unrolls. PES allows for rapid parameter updates, has low memory usage, is unbiased, and has reasonable variance characteristics. We experimentally demonstrate the advantages of PES compared to several other methods for gradient estimation on synthetic tasks, and show its applicability to training learned optimizers and tuning hyperparameters.' volume: 139 URL: https://proceedings.mlr.press/v139/vicol21a.html PDF: http://proceedings.mlr.press/v139/vicol21a/vicol21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-vicol21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Paul family: Vicol - given: Luke family: Metz - given: Jascha family: Sohl-Dickstein editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10553-10563 id: vicol21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10553 lastpage: 10563 published: 2021-07-01 00:00:00 +0000 - title: 'Online Graph Dictionary Learning' abstract: 'Dictionary learning is a key tool for representation learning, that explains the data as linear combination of few basic elements. Yet, this analysis is not amenable in the context of graph learning, as graphs usually belong to different metric spaces. We fill this gap by proposing a new online Graph Dictionary Learning approach, which uses the Gromov Wasserstein divergence for the data fitting term. In our work, graphs are encoded through their nodes’ pairwise relations and modeled as convex combination of graph atoms, i.e. dictionary elements, estimated thanks to an online stochastic algorithm, which operates on a dataset of unregistered graphs with potentially different number of nodes. 
Our approach naturally extends to labeled graphs, and is completed by a novel upper bound that can be used as a fast approximation of Gromov Wasserstein in the embedding space. We provide numerical evidences showing the interest of our approach for unsupervised embedding of graph datasets and for online graph subspace estimation and tracking.' volume: 139 URL: https://proceedings.mlr.press/v139/vincent-cuaz21a.html PDF: http://proceedings.mlr.press/v139/vincent-cuaz21a/vincent-cuaz21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-vincent-cuaz21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Cédric family: Vincent-Cuaz - given: Titouan family: Vayer - given: Rémi family: Flamary - given: Marco family: Corneli - given: Nicolas family: Courty editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10564-10574 id: vincent-cuaz21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10564 lastpage: 10574 published: 2021-07-01 00:00:00 +0000 - title: 'Neuro-algorithmic Policies Enable Fast Combinatorial Generalization' abstract: 'Although model-based and model-free approaches to learning the control of systems have achieved impressive results on standard benchmarks, generalization to task variations is still lacking. Recent results suggest that generalization for standard architectures improves only after obtaining exhaustive amounts of data. We give evidence that generalization capabilities are in many cases bottlenecked by the inability to generalize on the combinatorial aspects of the problem. We show that, for a certain subclass of the MDP framework, this can be alleviated by a neuro-algorithmic policy architecture that embeds a time-dependent shortest path solver in a deep neural network. Trained end-to-end via blackbox-differentiation, this method leads to considerable improvement in generalization capabilities in the low-data regime.' volume: 139 URL: https://proceedings.mlr.press/v139/vlastelica21a.html PDF: http://proceedings.mlr.press/v139/vlastelica21a/vlastelica21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-vlastelica21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Marin family: Vlastelica - given: Michal family: Rolinek - given: Georg family: Martius editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10575-10585 id: vlastelica21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10575 lastpage: 10585 published: 2021-07-01 00:00:00 +0000 - title: 'Efficient Training of Robust Decision Trees Against Adversarial Examples' abstract: 'Current state-of-the-art algorithms for training robust decision trees have high runtime costs and require hours to run. We present GROOT, an efficient algorithm for training robust decision trees and random forests that runs in a matter of seconds to minutes. Where before the worst-case Gini impurity was computed iteratively, we find that we can solve this function analytically to improve time complexity from O(n) to O(1) in terms of n samples. 
Our results on both single trees and ensembles on 14 structured datasets as well as on MNIST and Fashion-MNIST demonstrate that GROOT runs several orders of magnitude faster than the state-of-the-art works and also shows better performance in terms of adversarial accuracy on structured data.' volume: 139 URL: https://proceedings.mlr.press/v139/vos21a.html PDF: http://proceedings.mlr.press/v139/vos21a/vos21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-vos21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Daniël family: Vos - given: Sicco family: Verwer editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10586-10595 id: vos21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10586 lastpage: 10595 published: 2021-07-01 00:00:00 +0000 - title: 'Object Segmentation Without Labels with Large-Scale Generative Models' abstract: 'The recent rise of unsupervised and self-supervised learning has dramatically reduced the dependency on labeled data, providing high-quality representations for transfer on downstream tasks. Furthermore, recent works also employed these representations in a fully unsupervised setup for image classification, reducing the need for human labels on the fine-tuning stage as well. This work demonstrates that large-scale unsupervised models can also perform a more challenging object segmentation task, requiring neither pixel-level nor image-level labeling. Namely, we show that recent unsupervised GANs allow to differentiate between foreground/background pixels, providing high-quality saliency masks. By extensive comparison on common benchmarks, we outperform existing unsupervised alternatives for object segmentation, achieving new state-of-the-art.' volume: 139 URL: https://proceedings.mlr.press/v139/voynov21a.html PDF: http://proceedings.mlr.press/v139/voynov21a/voynov21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-voynov21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Andrey family: Voynov - given: Stanislav family: Morozov - given: Artem family: Babenko editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10596-10606 id: voynov21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10596 lastpage: 10606 published: 2021-07-01 00:00:00 +0000 - title: 'Principal Component Hierarchy for Sparse Quadratic Programs' abstract: 'We propose a novel approximation hierarchy for cardinality-constrained, convex quadratic programs that exploits the rank-dominating eigenvectors of the quadratic matrix. Each level of approximation admits a min-max characterization whose objective function can be optimized over the binary variables analytically, while preserving convexity in the continuous variables. Exploiting this property, we propose two scalable optimization algorithms, coined as the “best response" and the “dual program", that can efficiently screen the potential indices of the nonzero elements of the original program. We show that the proposed methods are competitive with the existing screening methods in the current sparse regression literature, and it is particularly fast on instances with high number of measurements in experiments with both synthetic and real datasets.' 
volume: 139 URL: https://proceedings.mlr.press/v139/vreugdenhil21a.html PDF: http://proceedings.mlr.press/v139/vreugdenhil21a/vreugdenhil21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-vreugdenhil21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Robbie family: Vreugdenhil - given: Viet Anh family: Nguyen - given: Armin family: Eftekhari - given: Peyman Mohajerin family: Esfahani editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10607-10616 id: vreugdenhil21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10607 lastpage: 10616 published: 2021-07-01 00:00:00 +0000 - title: 'Whitening and Second Order Optimization Both Make Information in the Dataset Unusable During Training, and Can Reduce or Prevent Generalization' abstract: 'Machine learning is predicated on the concept of generalization: a model achieving low error on a sufficiently large training set should also perform well on novel samples from the same distribution. We show that both data whitening and second order optimization can harm or entirely prevent generalization. In general, model training harnesses information contained in the sample-sample second moment matrix of a dataset. For a general class of models, namely models with a fully connected first layer, we prove that the information contained in this matrix is the only information which can be used to generalize. Models trained using whitened data, or with certain second order optimization schemes, have less access to this information, resulting in reduced or nonexistent generalization ability. We experimentally verify these predictions for several architectures, and further demonstrate that generalization continues to be harmed even when theoretical requirements are relaxed. However, we also show experimentally that regularized second order optimization can provide a practical tradeoff, where training is accelerated but less information is lost, and generalization can in some circumstances even improve.' volume: 139 URL: https://proceedings.mlr.press/v139/wadia21a.html PDF: http://proceedings.mlr.press/v139/wadia21a/wadia21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wadia21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Neha family: Wadia - given: Daniel family: Duckworth - given: Samuel S family: Schoenholz - given: Ethan family: Dyer - given: Jascha family: Sohl-Dickstein editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10617-10629 id: wadia21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10617 lastpage: 10629 published: 2021-07-01 00:00:00 +0000 - title: 'Safe Reinforcement Learning Using Advantage-Based Intervention' abstract: 'Many sequential decision problems involve finding a policy that maximizes total reward while obeying safety constraints. Although much recent research has focused on the development of safe reinforcement learning (RL) algorithms that produce a safe policy after training, ensuring safety during training as well remains an open problem. A fundamental challenge is performing exploration while still satisfying constraints in an unknown Markov decision process (MDP). 
In this work, we address this problem for the chance-constrained setting. We propose a new algorithm, SAILR, that uses an intervention mechanism based on advantage functions to keep the agent safe throughout training and optimizes the agent’s policy using off-the-shelf RL algorithms designed for unconstrained MDPs. Our method comes with strong guarantees on safety during "both" training and deployment (i.e., after training and without the intervention mechanism) and policy performance compared to the optimal safety-constrained policy. In our experiments, we show that SAILR violates constraints far less during training than standard safe RL and constrained MDP approaches and converges to a well-performing policy that can be deployed safely without intervention. Our code is available at https://github.com/nolanwagener/safe_rl.' volume: 139 URL: https://proceedings.mlr.press/v139/wagener21a.html PDF: http://proceedings.mlr.press/v139/wagener21a/wagener21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wagener21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Nolan C family: Wagener - given: Byron family: Boots - given: Ching-An family: Cheng editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10630-10640 id: wagener21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10630 lastpage: 10640 published: 2021-07-01 00:00:00 +0000 - title: 'Task-Optimal Exploration in Linear Dynamical Systems' abstract: 'Exploration in unknown environments is a fundamental problem in reinforcement learning and control. In this work, we study task-guided exploration and determine what precisely an agent must learn about their environment in order to complete a particular task. Formally, we study a broad class of decision-making problems in the setting of linear dynamical systems, a class that includes the linear quadratic regulator problem. We provide instance- and task-dependent lower bounds which explicitly quantify the difficulty of completing a task of interest. Motivated by our lower bound, we propose a computationally efficient experiment-design based exploration algorithm. We show that it optimally explores the environment, collecting precisely the information needed to complete the task, and provide finite-time bounds guaranteeing that it achieves the instance- and task-optimal sample complexity, up to constant factors. Through several examples of the linear quadratic regulator problem, we show that performing task-guided exploration provably improves on exploration schemes which do not take into account the task of interest. Along the way, we establish that certainty equivalence decision making is instance- and task-optimal, and obtain the first algorithm for the linear quadratic regulator problem which is instance-optimal. We conclude with several experiments illustrating the effectiveness of our approach in practice.' 
volume: 139 URL: https://proceedings.mlr.press/v139/wagenmaker21a.html PDF: http://proceedings.mlr.press/v139/wagenmaker21a/wagenmaker21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wagenmaker21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Andrew J family: Wagenmaker - given: Max family: Simchowitz - given: Kevin family: Jamieson editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10641-10652 id: wagenmaker21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10641 lastpage: 10652 published: 2021-07-01 00:00:00 +0000 - title: 'Learning and Planning in Average-Reward Markov Decision Processes' abstract: 'We introduce learning and planning algorithms for average-reward MDPs, including 1) the first general proven-convergent off-policy model-free control algorithm without reference states, 2) the first proven-convergent off-policy model-free prediction algorithm, and 3) the first off-policy learning algorithm that converges to the actual value function rather than to the value function plus an offset. All of our algorithms are based on using the temporal-difference error rather than the conventional error when updating the estimate of the average reward. Our proof techniques are a slight generalization of those by Abounadi, Bertsekas, and Borkar (2001). In experiments with an Access-Control Queuing Task, we show some of the difficulties that can arise when using methods that rely on reference states and argue that our new algorithms are significantly easier to use.' volume: 139 URL: https://proceedings.mlr.press/v139/wan21a.html PDF: http://proceedings.mlr.press/v139/wan21a/wan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yi family: Wan - given: Abhishek family: Naik - given: Richard S family: Sutton editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10653-10662 id: wan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10653 lastpage: 10662 published: 2021-07-01 00:00:00 +0000 - title: 'Think Global and Act Local: Bayesian Optimisation over High-Dimensional Categorical and Mixed Search Spaces' abstract: 'High-dimensional black-box optimisation remains an important yet notoriously challenging problem. Despite the success of Bayesian optimisation methods on continuous domains, domains that are categorical, or that mix continuous and categorical variables, remain challenging. We propose a novel solution—we combine local optimisation with a tailored kernel design, effectively handling high-dimensional categorical and mixed search spaces, whilst retaining sample efficiency. We further derive convergence guarantee for the proposed approach. Finally, we demonstrate empirically that our method outperforms the current baselines on a variety of synthetic and real-world tasks in terms of performance, computational costs, or both.' 
volume: 139 URL: https://proceedings.mlr.press/v139/wan21b.html PDF: http://proceedings.mlr.press/v139/wan21b/wan21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wan21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xingchen family: Wan - given: Vu family: Nguyen - given: Huong family: Ha - given: Binxin family: Ru - given: Cong family: Lu - given: Michael A. family: Osborne editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10663-10674 id: wan21b issued: date-parts: - 2021 - 7 - 1 firstpage: 10663 lastpage: 10674 published: 2021-07-01 00:00:00 +0000 - title: 'Zero-Shot Knowledge Distillation from a Decision-Based Black-Box Model' abstract: 'Knowledge distillation (KD) is a successful approach for deep neural network acceleration, with which a compact network (student) is trained by mimicking the softmax output of a pre-trained high-capacity network (teacher). In tradition, KD usually relies on access to the training samples and the parameters of the white-box teacher to acquire the transferred knowledge. However, these prerequisites are not always realistic due to storage costs or privacy issues in real-world applications. Here we propose the concept of decision-based black-box (DB3) knowledge distillation, with which the student is trained by distilling the knowledge from a black-box teacher (parameters are not accessible) that only returns classes rather than softmax outputs. We start with the scenario when the training set is accessible. We represent a sample’s robustness against other classes by computing its distances to the teacher’s decision boundaries and use it to construct the soft label for each training sample. After that, the student can be trained via standard KD. We then extend this approach to a more challenging scenario in which even accessing the training data is not feasible. We propose to generate pseudo samples that are distinguished by the decision boundaries of the DB3 teacher to the largest extent and construct soft labels for these samples, which are used as the transfer set. We evaluate our approaches on various benchmark networks and datasets and experiment results demonstrate their effectiveness.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21a.html PDF: http://proceedings.mlr.press/v139/wang21a/wang21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zi family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10675-10685 id: wang21a issued: date-parts: - 2021 - 7 - 1 firstpage: 10675 lastpage: 10685 published: 2021-07-01 00:00:00 +0000 - title: 'Fairness of Exposure in Stochastic Bandits' abstract: 'Contextual bandit algorithms have become widely used for recommendation in online systems (e.g. marketplaces, music streaming, news), where they now wield substantial influence on which items get shown to users. This raises questions of fairness to the items — and to the sellers, artists, and writers that benefit from this exposure. We argue that the conventional bandit formulation can lead to an undesirable and unfair winner-takes-all allocation of exposure. 
To remedy this problem, we propose a new bandit objective that guarantees merit-based fairness of exposure to the items while optimizing utility to the users. We formulate fairness regret and reward regret in this setting and present algorithms for both stochastic multi-armed bandits and stochastic linear bandits. We prove that the algorithms achieve sublinear fairness regret and reward regret. Beyond the theoretical analysis, we also provide empirical evidence that these algorithms can allocate exposure to different arms effectively.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21b.html PDF: http://proceedings.mlr.press/v139/wang21b/wang21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Lequn family: Wang - given: Yiwei family: Bai - given: Wen family: Sun - given: Thorsten family: Joachims editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10686-10696 id: wang21b issued: date-parts: - 2021 - 7 - 1 firstpage: 10686 lastpage: 10696 published: 2021-07-01 00:00:00 +0000 - title: 'A Proxy Variable View of Shared Confounding' abstract: 'Causal inference from observational data can be biased by unobserved confounders. Confounders – the variables that affect both the treatments and the outcome – induce spurious non-causal correlations between the two. Without additional conditions, unobserved confounders generally make causal quantities hard to identify. In this paper, we focus on the setting where there are many treatments with shared confounding, and we study under what conditions causal identification is possible. The key observation is that we can view subsets of treatments as proxies of the unobserved confounder and identify the intervention distributions of the rest. Moreover, while existing identification formulas for proxy variables involve solving integral equations, we show that one can circumvent the need for such solutions by directly modeling the data. Finally, we extend these results to an expanded class of causal graphs, those with other confounders and selection variables.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21c.html PDF: http://proceedings.mlr.press/v139/wang21c/wang21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yixin family: Wang - given: David family: Blei editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10697-10707 id: wang21c issued: date-parts: - 2021 - 7 - 1 firstpage: 10697 lastpage: 10707 published: 2021-07-01 00:00:00 +0000 - title: 'Fast Algorithms for Stackelberg Prediction Game with Least Squares Loss' abstract: 'The Stackelberg prediction game (SPG) has been extensively used to model the interactions between the learner and data provider in the training process of various machine learning algorithms. Particularly, SPGs played prominent roles in cybersecurity applications, such as intrusion detection, banking fraud detection, spam filtering, and malware detection. Often formulated as NP-hard bi-level optimization problems, it is generally computationally intractable to find global solutions to SPGs. 
As an interesting progress in this area, a special class of SPGs with the least squares loss (SPG-LS) have recently been shown polynomially solvable by a bisection method. However, in each iteration of this method, a semidefinite program (SDP) needs to be solved. The resulted high computational costs prevent its applications for large-scale problems. In contrast, we propose a novel approach that reformulates a SPG-LS as a single SDP of a similar form and the same dimension as those solved in the bisection method. Our SDP reformulation is, evidenced by our numerical experiments, orders of magnitude faster than the existing bisection method. We further show that the obtained SDP can be reduced to a second order cone program (SOCP). This allows us to provide real-time response to large-scale SPG-LS problems. Numerical results on both synthetic and real world datasets indicate that the proposed SOCP method is up to 20,000+ times faster than the state of the art.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21d.html PDF: http://proceedings.mlr.press/v139/wang21d/wang21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jiali family: Wang - given: He family: Chen - given: Rujun family: Jiang - given: Xudong family: Li - given: Zihao family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10708-10716 id: wang21d issued: date-parts: - 2021 - 7 - 1 firstpage: 10708 lastpage: 10716 published: 2021-07-01 00:00:00 +0000 - title: 'Accelerate CNNs from Three Dimensions: A Comprehensive Pruning Framework' abstract: 'Most neural network pruning methods, such as filter-level and layer-level prunings, prune the network model along one dimension (depth, width, or resolution) solely to meet a computational budget. However, such a pruning policy often leads to excessive reduction of that dimension, thus inducing a huge accuracy loss. To alleviate this issue, we argue that pruning should be conducted along three dimensions comprehensively. For this purpose, our pruning framework formulates pruning as an optimization problem. Specifically, it first casts the relationships between a certain model’s accuracy and depth/width/resolution into a polynomial regression and then maximizes the polynomial to acquire the optimal values for the three dimensions. Finally, the model is pruned along the three optimal dimensions accordingly. In this framework, since collecting too much data for training the regression is very time-costly, we propose two approaches to lower the cost: 1) specializing the polynomial to ensure an accurate regression even with less training data; 2) employing iterative pruning and fine-tuning to collect the data faster. Extensive experiments show that our proposed algorithm surpasses state-of-the-art pruning algorithms and even neural architecture search-based algorithms.' 
volume: 139 URL: https://proceedings.mlr.press/v139/wang21e.html PDF: http://proceedings.mlr.press/v139/wang21e/wang21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Wenxiao family: Wang - given: Minghao family: Chen - given: Shuai family: Zhao - given: Long family: Chen - given: Jinming family: Hu - given: Haifeng family: Liu - given: Deng family: Cai - given: Xiaofei family: He - given: Wei family: Liu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10717-10726 id: wang21e issued: date-parts: - 2021 - 7 - 1 firstpage: 10717 lastpage: 10726 published: 2021-07-01 00:00:00 +0000 - title: 'Explainable Automated Graph Representation Learning with Hyperparameter Importance' abstract: 'Current graph representation (GR) algorithms require huge demand of human experts in hyperparameter tuning, which significantly limits their practical applications, leading to an urge for automated graph representation without human intervention. Although automated machine learning (AutoML) serves as a good candidate for automatic hyperparameter tuning, little literature has been reported on automated graph representation learning and the only existing work employs a black-box strategy, lacking insights into explaining the relative importance of different hyperparameters. To address this issue, we study explainable automated graph representation with hyperparameter importance in this paper. We propose an explainable AutoML approach for graph representation (e-AutoGR) which utilizes explainable graph features during performance estimation and learns decorrelated importance weights for different hyperparameters in affecting the model performance through a non-linear decorrelated weighting regression. These learned importance weights can in turn help to provide more insights in the hyperparameter search procedure. We theoretically prove the soundness of the decorrelated weighting algorithm. Extensive experiments on real-world datasets demonstrate the superiority of our proposed e-AutoGR model against state-of-the-art methods in terms of both model performance and hyperparameter importance explainability.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21f.html PDF: http://proceedings.mlr.press/v139/wang21f/wang21f.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21f.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xin family: Wang - given: Shuyi family: Fan - given: Kun family: Kuang - given: Wenwu family: Zhu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10727-10737 id: wang21f issued: date-parts: - 2021 - 7 - 1 firstpage: 10727 lastpage: 10737 published: 2021-07-01 00:00:00 +0000 - title: 'Self-Tuning for Data-Efficient Deep Learning' abstract: 'Deep learning has made revolutionary advances to diverse applications in the presence of large-scale labeled datasets. However, it is prohibitively time-costly and labor-expensive to collect sufficient labeled data in most realistic scenarios. 
To mitigate the requirement for labeled data, semi-supervised learning (SSL) focuses on simultaneously exploring both labeled and unlabeled data, while transfer learning (TL) popularizes a favorable practice of fine-tuning a pre-trained model to the target data. A dilemma is thus encountered: Without a decent pre-trained model to provide an implicit regularization, SSL through self-training from scratch will be easily misled by inaccurate pseudo-labels, especially in large-sized label space; Without exploring the intrinsic structure of unlabeled data, TL through fine-tuning from limited labeled data is at risk of under-transfer caused by model shift. To escape from this dilemma, we present Self-Tuning to enable data-efficient deep learning by unifying the exploration of labeled and unlabeled data and the transfer of a pre-trained model, as well as a Pseudo Group Contrast (PGC) mechanism to mitigate the reliance on pseudo-labels and boost the tolerance to false labels. Self-Tuning outperforms its SSL and TL counterparts on five tasks by sharp margins, e.g. it doubles the accuracy of fine-tuning on Cars with $15\%$ labels.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21g.html PDF: http://proceedings.mlr.press/v139/wang21g/wang21g.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21g.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ximei family: Wang - given: Jinghan family: Gao - given: Mingsheng family: Long - given: Jianmin family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10738-10748 id: wang21g issued: date-parts: - 2021 - 7 - 1 firstpage: 10738 lastpage: 10748 published: 2021-07-01 00:00:00 +0000 - title: 'Label Distribution Learning Machine' abstract: 'Although Label Distribution Learning (LDL) has witnessed extensive classification applications, it faces the challenge of objective mismatch – the objective of LDL mismatches that of classification, which has seldom been noticed in existing studies. Our goal is to solve the objective mismatch and improve the classification performance of LDL. Specifically, we extend the margin theory to LDL and propose a new LDL method called \textbf{L}abel \textbf{D}istribution \textbf{L}earning \textbf{M}achine (LDLM). First, we define the label distribution margin and propose the \textbf{S}upport \textbf{V}ector \textbf{R}egression \textbf{M}achine (SVRM) to learn the optimal label. Second, we propose the adaptive margin loss to learn label description degrees. In theoretical analysis, we develop a generalization theory for the SVRM and analyze the generalization of LDLM. Experimental results validate the better classification performance of LDLM.' 
volume: 139 URL: https://proceedings.mlr.press/v139/wang21h.html PDF: http://proceedings.mlr.press/v139/wang21h/wang21h.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21h.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jing family: Wang - given: Xin family: Geng editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10749-10759 id: wang21h issued: date-parts: - 2021 - 7 - 1 firstpage: 10749 lastpage: 10759 published: 2021-07-01 00:00:00 +0000 - title: 'AlphaNet: Improved Training of Supernets with Alpha-Divergence' abstract: 'Weight-sharing neural architecture search (NAS) is an effective technique for automating efficient neural architecture design. Weight-sharing NAS builds a supernet that assembles all the architectures as its sub-networks and jointly trains the supernet with the sub-networks. The success of weight-sharing NAS heavily relies on distilling the knowledge of the supernet to the sub-networks. However, we find that the widely used distillation divergence, i.e., KL divergence, may lead to student sub-networks that over-estimate or under-estimate the uncertainty of the teacher supernet, leading to inferior performance of the sub-networks. In this work, we propose to improve the supernet training with a more generalized alpha-divergence. By adaptively selecting the alpha-divergence, we simultaneously prevent the over-estimation or under-estimation of the uncertainty of the teacher model. We apply the proposed alpha-divergence based supernets training to both slimmable neural networks and weight-sharing NAS, and demonstrate significant improvements. Specifically, our discovered model family, AlphaNet, outperforms prior-art models on a wide range of FLOPs regimes, including BigNAS, Once-for-All networks, and AttentiveNAS. We achieve ImageNet top-1 accuracy of 80.0% with only 444M FLOPs. Our code and pretrained models are available at https://github.com/facebookresearch/AlphaNet.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21i.html PDF: http://proceedings.mlr.press/v139/wang21i/wang21i.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21i.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dilin family: Wang - given: Chengyue family: Gong - given: Meng family: Li - given: Qiang family: Liu - given: Vikas family: Chandra editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10760-10771 id: wang21i issued: date-parts: - 2021 - 7 - 1 firstpage: 10760 lastpage: 10771 published: 2021-07-01 00:00:00 +0000 - title: 'Global Convergence of Policy Gradient for Linear-Quadratic Mean-Field Control/Game in Continuous Time' abstract: 'Recent years have witnessed the success of multi-agent reinforcement learning, which has motivated new research directions for mean-field control (MFC) and mean-field game (MFG), as the multi-agent system can be well approximated by a mean-field problem when the number of agents grows to be very large. In this paper, we study the policy gradient (PG) method for the linear-quadratic mean-field control and game, where we assume each agent has identical linear state transitions and quadratic cost functions. 
While most recent works on policy gradient for MFC and MFG are based on discrete-time models, we focus on a continuous-time model where some of our analyzing techniques could be valuable to the interested readers. For both the MFC and the MFG, we provide PG update and show that it converges to the optimal solution at a linear rate, which is verified by a synthetic simulation. For the MFG, we also provide sufficient conditions for the existence and uniqueness of the Nash equilibrium.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21j.html PDF: http://proceedings.mlr.press/v139/wang21j/wang21j.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21j.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Weichen family: Wang - given: Jiequn family: Han - given: Zhuoran family: Yang - given: Zhaoran family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10772-10782 id: wang21j issued: date-parts: - 2021 - 7 - 1 firstpage: 10772 lastpage: 10782 published: 2021-07-01 00:00:00 +0000 - title: 'SG-PALM: a Fast Physically Interpretable Tensor Graphical Model' abstract: 'We propose a new graphical model inference procedure, called SG-PALM, for learning conditional dependency structure of high-dimensional tensor-variate data. Unlike most other tensor graphical models the proposed model is interpretable and computationally scalable to high dimension. Physical interpretability follows from the Sylvester generative (SG) model on which SG-PALM is based: the model is exact for any observation process that is a solution of a partial differential equation of Poisson type. Scalability follows from the fast proximal alternating linearized minimization (PALM) procedure that SG-PALM uses during training. We establish that SG-PALM converges linearly (i.e., geometric convergence rate) to a global optimum of its objective function. We demonstrate scalability and accuracy of SG-PALM for an important but challenging climate prediction problem: spatio-temporal forecasting of solar flares from multimodal imaging data.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21k.html PDF: http://proceedings.mlr.press/v139/wang21k/wang21k.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21k.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yu family: Wang - given: Alfred family: Hero editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10783-10793 id: wang21k issued: date-parts: - 2021 - 7 - 1 firstpage: 10783 lastpage: 10793 published: 2021-07-01 00:00:00 +0000 - title: 'Deep Generative Learning via Schrödinger Bridge' abstract: 'We propose to learn a generative model via entropy interpolation with a Schrödinger Bridge. The generative learning task can be formulated as interpolating between a reference distribution and a target distribution based on the Kullback-Leibler divergence. At the population level, this entropy interpolation is characterized via an SDE on [0,1] with a time-varying drift term. At the sample level, we derive our Schrödinger Bridge algorithm by plugging the drift term estimated by a deep score estimator and a deep density ratio estimator into the Euler-Maruyama method. 
Under some mild smoothness assumptions of the target distribution, we prove the consistency of both the score estimator and the density ratio estimator, and then establish the consistency of the proposed Schrödinger Bridge approach. Our theoretical results guarantee that the distribution learned by our approach converges to the target distribution. Experimental results on multimodal synthetic data and benchmark data support our theoretical findings and indicate that the generative model via Schrödinger Bridge is comparable with state-of-the-art GANs, suggesting a new formulation of generative learning. We demonstrate its usefulness in image interpolation and image inpainting.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21l.html PDF: http://proceedings.mlr.press/v139/wang21l/wang21l.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21l.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Gefei family: Wang - given: Yuling family: Jiao - given: Qian family: Xu - given: Yang family: Wang - given: Can family: Yang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10794-10804 id: wang21l issued: date-parts: - 2021 - 7 - 1 firstpage: 10794 lastpage: 10804 published: 2021-07-01 00:00:00 +0000 - title: 'Robust Inference for High-Dimensional Linear Models via Residual Randomization' abstract: 'We propose a residual randomization procedure designed for robust inference using Lasso estimates in the high-dimensional setting. Compared to earlier work that focuses on sub-Gaussian errors, the proposed procedure is designed to work robustly in settings that also include heavy-tailed covariates and errors. Moreover, our procedure can be valid under clustered errors, which is important in practice, but has been largely overlooked by earlier work. Through extensive simulations, we illustrate our method’s wider range of applicability as suggested by theory. In particular, we show that our method outperforms state-of-the-art methods in challenging, yet more realistic, settings where the distribution of covariates is heavy-tailed or the sample size is small, while it remains competitive in standard, “well behaved” settings previously studied in the literature.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21m.html PDF: http://proceedings.mlr.press/v139/wang21m/wang21m.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21m.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Y. Samuel family: Wang - given: Si Kai family: Lee - given: Panos family: Toulis - given: Mladen family: Kolar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10805-10815 id: wang21m issued: date-parts: - 2021 - 7 - 1 firstpage: 10805 lastpage: 10815 published: 2021-07-01 00:00:00 +0000 - title: 'A Modular Analysis of Provable Acceleration via Polyak’s Momentum: Training a Wide ReLU Network and a Deep Linear Network' abstract: 'Incorporating a so-called “momentum” dynamic in gradient descent methods is widely used in neural net training as it has been broadly observed that, at least empirically, it often leads to significantly faster convergence. At the same time, there are very few theoretical guarantees in the literature to explain this apparent acceleration effect. 
Even for the classical strongly convex quadratic problems, several existing results only show Polyak’s momentum has an accelerated linear rate asymptotically. In this paper, we first revisit the quadratic problems and show a non-asymptotic accelerated linear rate of Polyak’s momentum. Then, we provably show that Polyak’s momentum achieves acceleration for training a one-layer wide ReLU network and a deep linear network, which are perhaps the two most popular canonical models for studying optimization and deep learning in the literature. Prior works (Du et al. 2019) and (Wu et al. 2019) showed that using vanilla gradient descent, and with the use of over-parameterization, the error decays as $(1-\Theta(\frac{1}{\kappa'}))^t$ after $t$ iterations, where $\kappa'$ is the condition number of a Gram matrix. Our result shows that with the appropriate choice of parameters Polyak’s momentum has a rate of $(1-\Theta(\frac{1}{\sqrt{\kappa'}}))^t$. For the deep linear network, prior work (Hu et al. 2020) showed that vanilla gradient descent has a rate of $(1-\Theta(\frac{1}{\kappa}))^t$, where $\kappa$ is the condition number of a data matrix. Our result shows an acceleration rate $(1-\Theta(\frac{1}{\sqrt{\kappa}}))^t$ is achievable by Polyak’s momentum. This work establishes that momentum does indeed speed up neural net training.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21n.html PDF: http://proceedings.mlr.press/v139/wang21n/wang21n.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21n.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jun-Kun family: Wang - given: Chi-Heng family: Lin - given: Jacob D family: Abernethy editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10816-10827 id: wang21n issued: date-parts: - 2021 - 7 - 1 firstpage: 10816 lastpage: 10827 published: 2021-07-01 00:00:00 +0000 - title: 'Optimal Non-Convex Exact Recovery in Stochastic Block Model via Projected Power Method' abstract: 'In this paper, we study the problem of exact community recovery in the symmetric stochastic block model, where a graph of $n$ vertices is randomly generated by partitioning the vertices into $K \ge 2$ equal-sized communities and then connecting each pair of vertices with probability that depends on their community memberships. Although the maximum-likelihood formulation of this problem is discrete and non-convex, we propose to tackle it directly using projected power iterations with an initialization that satisfies a partial recovery condition. Such an initialization can be obtained by a host of existing methods. We show that in the logarithmic degree regime of the considered problem, the proposed method can exactly recover the underlying communities at the information-theoretic limit. Moreover, with a qualified initialization, it runs in $\mathcal{O}(n\log^2 n/\log\log n)$ time, which is competitive with existing state-of-the-art methods. We also present numerical results of the proposed method to support and complement our theoretical development.'
volume: 139 URL: https://proceedings.mlr.press/v139/wang21o.html PDF: http://proceedings.mlr.press/v139/wang21o/wang21o.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21o.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Peng family: Wang - given: Huikang family: Liu - given: Zirui family: Zhou - given: Anthony Man-Cho family: So editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10828-10838 id: wang21o issued: date-parts: - 2021 - 7 - 1 firstpage: 10828 lastpage: 10838 published: 2021-07-01 00:00:00 +0000 - title: 'ConvexVST: A Convex Optimization Approach to Variance-stabilizing Transformation' abstract: 'The variance-stabilizing transformation (VST) problem is to transform heteroscedastic data to homoscedastic data so that they are more tractable for subsequent analysis. However, most of the existing approaches focus on finding an analytical solution for a certain parametric distribution, which severely limits the applications, because simple distributions cannot faithfully describe the real data while more complicated distributions cannot be analytically solved. In this paper, we converted the VST problem into a convex optimization problem, which can always be efficiently solved, identified the specific structure of the convex problem, which further improved the efficiency of the proposed algorithm, and showed that any finite discrete distributions and the discretized version of any continuous distributions from real data can be variance-stabilized in an easy and nonparametric way. We demonstrated the new approach on bioimaging data and achieved superior performance compared to peer algorithms in terms of not only the variance homoscedasticity but also the impact on subsequent analysis such as denoising. Source codes are available at https://github.com/yu-lab-vt/ConvexVST.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21p.html PDF: http://proceedings.mlr.press/v139/wang21p/wang21p.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21p.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mengfan family: Wang - given: Boyu family: Lyu - given: Guoqiang family: Yu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10839-10848 id: wang21p issued: date-parts: - 2021 - 7 - 1 firstpage: 10839 lastpage: 10848 published: 2021-07-01 00:00:00 +0000 - title: 'The Implicit Bias for Adaptive Optimization Algorithms on Homogeneous Neural Networks' abstract: 'Despite their overwhelming capacity to overfit, deep neural networks trained by specific optimization algorithms tend to generalize relatively well to unseen data. Recently, researchers have explained this by investigating the implicit bias of optimization algorithms. A remarkable advance is the work (Lyu & Li, 2019), which proves that gradient descent (GD) maximizes the margin of homogeneous deep neural networks. Besides first-order optimization algorithms like GD, adaptive algorithms such as AdaGrad, RMSProp and Adam are popular owing to their rapid training process. Meanwhile, numerous works have provided empirical evidence that adaptive methods may suffer from poor generalization performance.
However, a theoretical explanation for the generalization of adaptive optimization algorithms is still lacking. In this paper, we study the implicit bias of adaptive optimization algorithms on homogeneous neural networks. In particular, we study the convergent direction of parameters when they are optimizing the logistic loss. We prove that the convergent direction of Adam and RMSProp is the same as GD, while for AdaGrad, the convergent direction depends on the adaptive conditioner. Technically, we provide a unified framework to analyze the convergent direction of adaptive optimization algorithms by constructing a novel and nontrivial adaptive gradient flow and surrogate margin. The theoretical findings explain the superior generalization of the exponential moving average strategy adopted by RMSProp and Adam. To the best of our knowledge, this is the first work to study the convergent direction of adaptive optimization algorithms on non-linear deep neural networks.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21q.html PDF: http://proceedings.mlr.press/v139/wang21q/wang21q.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21q.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Bohan family: Wang - given: Qi family: Meng - given: Wei family: Chen - given: Tie-Yan family: Liu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10849-10858 id: wang21q issued: date-parts: - 2021 - 7 - 1 firstpage: 10849 lastpage: 10858 published: 2021-07-01 00:00:00 +0000 - title: 'Robust Learning for Data Poisoning Attacks' abstract: 'We investigate the robustness of stochastic approximation approaches against data poisoning attacks. We focus on two-layer neural networks with ReLU activation and show that under a specific notion of separability in the RKHS induced by the infinite-width network, training (finite-width) networks with stochastic gradient descent is robust against data poisoning attacks. Interestingly, we find that in addition to a lower bound on the width of the network, which is standard in the literature, we also require a distribution-dependent upper bound on the width for robust generalization. We provide extensive empirical evaluations that support and validate our theoretical results.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21r.html PDF: http://proceedings.mlr.press/v139/wang21r/wang21r.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21r.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yunjuan family: Wang - given: Poorya family: Mianjy - given: Raman family: Arora editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10859-10869 id: wang21r issued: date-parts: - 2021 - 7 - 1 firstpage: 10859 lastpage: 10869 published: 2021-07-01 00:00:00 +0000 - title: 'SketchEmbedNet: Learning Novel Concepts by Imitating Drawings' abstract: 'Sketch drawings capture the salient information of visual concepts. Previous work has shown that neural networks are capable of producing sketches of natural objects drawn from a small number of classes. While earlier approaches focus on generation quality or retrieval, we explore properties of image representations learned by training a model to produce sketches of images.
We show that this generative, class-agnostic model produces informative embeddings of images from novel examples, classes, and even novel datasets in a few-shot setting. Additionally, we find that these learned representations exhibit interesting structure and compositionality.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21s.html PDF: http://proceedings.mlr.press/v139/wang21s/wang21s.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21s.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Alexander family: Wang - given: Mengye family: Ren - given: Richard family: Zemel editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10870-10881 id: wang21s issued: date-parts: - 2021 - 7 - 1 firstpage: 10870 lastpage: 10881 published: 2021-07-01 00:00:00 +0000 - title: 'Directional Bias Amplification' abstract: 'Mitigating bias in machine learning systems requires refining our understanding of bias propagation pathways: from societal structures to large-scale data to trained models to impact on society. In this work, we focus on one aspect of the problem, namely bias amplification: the tendency of models to amplify the biases present in the data they are trained on. A metric for measuring bias amplification was introduced in the seminal work by Zhao et al. (2017); however, as we demonstrate, this metric suffers from a number of shortcomings including conflating different types of bias amplification and failing to account for varying base rates of protected attributes. We introduce and analyze a new, decoupled metric for measuring bias amplification, $BiasAmp_{\rightarrow}$ (Directional Bias Amplification). We thoroughly analyze and discuss both the technical assumptions and normative implications of this metric. We provide suggestions about its measurement by cautioning against predicting sensitive attributes, encouraging the use of confidence intervals due to fluctuations in the fairness of models across runs, and discussing the limitations of what this metric captures. Throughout this paper, we work to provide an interrogative look at the technical measurement of bias amplification, guided by our normative ideas of what we want it to encompass. Code is located at https://github.com/princetonvisualai/directional-bias-amp.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21t.html PDF: http://proceedings.mlr.press/v139/wang21t/wang21t.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21t.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Angelina family: Wang - given: Olga family: Russakovsky editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10882-10893 id: wang21t issued: date-parts: - 2021 - 7 - 1 firstpage: 10882 lastpage: 10893 published: 2021-07-01 00:00:00 +0000 - title: 'An exact solver for the Weston-Watkins SVM subproblem' abstract: 'Recent empirical evidence suggests that the Weston-Watkins support vector machine is among the best performing multiclass extensions of the binary SVM. Current state-of-the-art solvers repeatedly solve a particular subproblem approximately using an iterative strategy. 
In this work, we propose an algorithm that solves the subproblem exactly using a novel reparametrization of the Weston-Watkins dual problem. For linear WW-SVMs, our solver shows significant speed-up over the state-of-the-art solver when the number of classes is large. Our exact subproblem solver also allows us to prove linear convergence of the overall solver.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21u.html PDF: http://proceedings.mlr.press/v139/wang21u/wang21u.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21u.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yutong family: Wang - given: Clayton family: Scott editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10894-10904 id: wang21u issued: date-parts: - 2021 - 7 - 1 firstpage: 10894 lastpage: 10904 published: 2021-07-01 00:00:00 +0000 - title: 'SCC: an efficient deep reinforcement learning agent mastering the game of StarCraft II' abstract: 'AlphaStar, the AI that reaches GrandMaster level in StarCraft II, is a remarkable milestone demonstrating what deep reinforcement learning can achieve in complex Real-Time Strategy (RTS) games. However, the complexities of the game, algorithms and systems, and especially the tremendous amount of computation needed are big obstacles for the community to conduct further research in this direction. We propose a deep reinforcement learning agent, StarCraft Commander (SCC). With an order of magnitude less computation, it demonstrates top human performance, defeating GrandMaster players in test matches and top professional players in a live event. Moreover, it shows strong robustness to various human strategies and discovers novel strategies unseen in human play. In this paper, we will share the key insights and optimizations on efficient imitation learning and reinforcement learning for the StarCraft II full game.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21v.html PDF: http://proceedings.mlr.press/v139/wang21v/wang21v.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21v.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xiangjun family: Wang - given: Junxiao family: Song - given: Penghui family: Qi - given: Peng family: Peng - given: Zhenkun family: Tang - given: Wei family: Zhang - given: Weimin family: Li - given: Xiongjun family: Pi - given: Jujie family: He - given: Chao family: Gao - given: Haitao family: Long - given: Quan family: Yuan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10905-10915 id: wang21v issued: date-parts: - 2021 - 7 - 1 firstpage: 10905 lastpage: 10915 published: 2021-07-01 00:00:00 +0000 - title: 'Quantum algorithms for reinforcement learning with a generative model' abstract: 'Reinforcement learning studies how an agent should interact with an environment to maximize its cumulative reward. A standard way to study this question abstractly is to ask how many samples an agent needs from the environment to learn an optimal policy for a $\gamma$-discounted Markov decision process (MDP).
For such an MDP, we design quantum algorithms that approximate an optimal policy ($\pi^*$), the optimal value function ($v^*$), and the optimal $Q$-function ($q^*$), assuming the algorithms can access samples from the environment in quantum superposition. This assumption is justified whenever there exists a simulator for the environment; for example, if the environment is a video game or some other program. Our quantum algorithms, inspired by value iteration, achieve quadratic speedups over the best-possible classical sample complexities in the approximation accuracy ($\epsilon$) and two main parameters of the MDP: the effective time horizon ($\frac{1}{1-\gamma}$) and the size of the action space ($A$). Moreover, we show that our quantum algorithm for computing $q^*$ is optimal by proving a matching quantum lower bound.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21w.html PDF: http://proceedings.mlr.press/v139/wang21w/wang21w.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21w.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Daochen family: Wang - given: Aarthi family: Sundaram - given: Robin family: Kothari - given: Ashish family: Kapoor - given: Martin family: Roetteler editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10916-10926 id: wang21w issued: date-parts: - 2021 - 7 - 1 firstpage: 10916 lastpage: 10926 published: 2021-07-01 00:00:00 +0000 - title: 'Matrix Completion with Model-free Weighting' abstract: 'In this paper, we propose a novel method for matrix completion under general non-uniform missing structures. By controlling an upper bound of a novel balancing error, we construct weights that can actively adjust for the non-uniformity in the empirical risk without explicitly modeling the observation probabilities, and can be computed efficiently via convex optimization. The recovered matrix based on the proposed weighted empirical risk enjoys appealing theoretical guarantees. In particular, the proposed method achieves stronger guarantee than existing work in terms of the scaling with respect to the observation probabilities, under asymptotically heterogeneous missing settings (where entry-wise observation probabilities can be of different orders). These settings can be regarded as a better theoretical model of missing patterns with highly varying probabilities. We also provide a new minimax lower bound under a class of heterogeneous settings. Numerical experiments are also provided to demonstrate the effectiveness of the proposed method.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21x.html PDF: http://proceedings.mlr.press/v139/wang21x/wang21x.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21x.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jiayi family: Wang - given: Raymond K. W. 
family: Wong - given: Xiaojun family: Mao - given: Kwun Chuen Gary family: Chan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10927-10936 id: wang21x issued: date-parts: - 2021 - 7 - 1 firstpage: 10927 lastpage: 10936 published: 2021-07-01 00:00:00 +0000 - title: 'UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data' abstract: 'In this paper, we propose a unified pre-training approach called UniSpeech to learn speech representations with both labeled and unlabeled data, in which supervised phonetic CTC learning and phonetically-aware contrastive self-supervised learning are conducted in a multi-task learning manner. The resultant representations can capture information more correlated with phonetic structures and improve the generalization across languages and domains. We evaluate the effectiveness of UniSpeech for cross-lingual representation learning on public CommonVoice corpus. The results show that UniSpeech outperforms self-supervised pretraining and supervised transfer learning for speech recognition by a maximum of 13.4% and 26.9% relative phone error rate reductions respectively (averaged over all testing languages). The transferability of UniSpeech is also verified on a domain-shift speech recognition task, i.e., a relative word error rate reduction of 6% against the previous approach.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21y.html PDF: http://proceedings.mlr.press/v139/wang21y/wang21y.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21y.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chengyi family: Wang - given: Yu family: Wu - given: Yao family: Qian - given: Kenichi family: Kumatani - given: Shujie family: Liu - given: Furu family: Wei - given: Michael family: Zeng - given: Xuedong family: Huang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10937-10947 id: wang21y issued: date-parts: - 2021 - 7 - 1 firstpage: 10937 lastpage: 10947 published: 2021-07-01 00:00:00 +0000 - title: 'Instabilities of Offline RL with Pre-Trained Neural Representation' abstract: 'In offline reinforcement learning (RL), we seek to utilize offline data to evaluate (or learn) policies in scenarios where the data are collected from a distribution that substantially differs from that of the target policy to be evaluated. Recent theoretical advances have shown that such sample-efficient offline RL is indeed possible provided certain strong representational conditions hold, else there are lower bounds exhibiting exponential error amplification (in the problem horizon) unless the data collection distribution has only a mild distribution shift relative to the target policy. This work studies these issues from an empirical perspective to gauge how stable offline RL methods are. In particular, our methodology explores these ideas when using features from pre-trained neural networks, in the hope that these representations are powerful enough to permit sample efficient offline RL. Through extensive experiments on a range of tasks, we see that substantial error amplification does occur even when using such pre-trained representations (trained on the same task itself); we find offline RL is stable only under extremely mild distribution shift. 
The implications of these results, both from a theoretical and an empirical perspective, are that successful offline RL (where we seek to go beyond the low distribution shift regime) requires substantially stronger conditions beyond those which suffice for successful supervised learning.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21z.html PDF: http://proceedings.mlr.press/v139/wang21z/wang21z.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21z.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ruosong family: Wang - given: Yifan family: Wu - given: Ruslan family: Salakhutdinov - given: Sham family: Kakade editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10948-10960 id: wang21z issued: date-parts: - 2021 - 7 - 1 firstpage: 10948 lastpage: 10960 published: 2021-07-01 00:00:00 +0000 - title: 'Learning to Weight Imperfect Demonstrations' abstract: 'This paper investigates how to weight imperfect expert demonstrations for generative adversarial imitation learning (GAIL). The agent is expected to perform behaviors demonstrated by experts. But in many applications, experts could also make mistakes and their demonstrations would mislead or slow the learning process of the agent. Recently, methods for imitation learning from imperfect demonstrations have mostly focused on using preference or confidence scores to distinguish imperfect demonstrations. However, this auxiliary information needs to be collected with the help of an oracle, which is usually hard and expensive to afford in practice. In contrast, this paper proposes a method of learning to weight imperfect demonstrations in GAIL without imposing extensive prior information. We provide a rigorous mathematical analysis, showing that the weights of demonstrations can be exactly determined by combining the discriminator and agent policy in GAIL. Theoretical analysis suggests that with the estimated weights the agent can learn a better policy beyond those plain expert demonstrations. Experiments in the Mujoco and Atari environments demonstrate that the proposed algorithm outperforms baseline methods in handling imperfect expert demonstrations.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21aa.html PDF: http://proceedings.mlr.press/v139/wang21aa/wang21aa.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21aa.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yunke family: Wang - given: Chang family: Xu - given: Bo family: Du - given: Honglak family: Lee editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10961-10970 id: wang21aa issued: date-parts: - 2021 - 7 - 1 firstpage: 10961 lastpage: 10970 published: 2021-07-01 00:00:00 +0000 - title: 'Evolving Attention with Residual Convolutions' abstract: 'The Transformer is a ubiquitous model for natural language processing and has attracted wide attention in computer vision. The attention maps are indispensable for a transformer model to encode the dependencies among input tokens. However, they are learned independently in each layer and sometimes fail to capture precise patterns. In this paper, we propose a novel and generic mechanism based on evolving attention to improve the performance of transformers.
On one hand, the attention maps in different layers share common knowledge, thus the ones in preceding layers can instruct the attention in succeeding layers through residual connections. On the other hand, low-level and high-level attentions vary in the level of abstraction, so we adopt convolutional layers to model the evolutionary process of attention maps. The proposed evolving attention mechanism achieves significant performance improvement over various state-of-the-art models for multiple tasks, including image classification, natural language understanding and machine translation.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21ab.html PDF: http://proceedings.mlr.press/v139/wang21ab/wang21ab.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21ab.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yujing family: Wang - given: Yaming family: Yang - given: Jiangang family: Bai - given: Mingliang family: Zhang - given: Jing family: Bai - given: Jing family: Yu - given: Ce family: Zhang - given: Gao family: Huang - given: Yunhai family: Tong editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10971-10980 id: wang21ab issued: date-parts: - 2021 - 7 - 1 firstpage: 10971 lastpage: 10980 published: 2021-07-01 00:00:00 +0000 - title: 'Guarantees for Tuning the Step Size using a Learning-to-Learn Approach' abstract: 'Choosing the right parameters for optimization algorithms is often the key to their success in practice. Solving this problem using a learning-to-learn approach—using meta-gradient descent on a meta-objective based on the trajectory that the optimizer generates—was recently shown to be effective. However, the meta-optimization problem is difficult. In particular, the meta-gradient can often explode/vanish, and the learned optimizer may not have good generalization performance if the meta-objective is not chosen carefully. In this paper we give meta-optimization guarantees for the learning-to-learn approach on a simple problem of tuning the step size for quadratic loss. Our results show that the naïve objective suffers from meta-gradient explosion/vanishing problem. Although there is a way to design the meta-objective so that the meta-gradient remains polynomially bounded, computing the meta-gradient directly using backpropagation leads to numerical issues. We also characterize when it is necessary to compute the meta-objective on a separate validation set to ensure the generalization performance of the learned optimizer. Finally, we verify our results empirically and show that a similar phenomenon appears even for more complicated learned optimizers parametrized by neural networks.' 
volume: 139 URL: https://proceedings.mlr.press/v139/wang21ac.html PDF: http://proceedings.mlr.press/v139/wang21ac/wang21ac.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21ac.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xiang family: Wang - given: Shuai family: Yuan - given: Chenwei family: Wu - given: Rong family: Ge editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10981-10990 id: wang21ac issued: date-parts: - 2021 - 7 - 1 firstpage: 10981 lastpage: 10990 published: 2021-07-01 00:00:00 +0000 - title: 'Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation' abstract: 'Multi-task learning (MTL) aims to improve the generalization of several related tasks by learning them jointly. As a comparison, in addition to the joint training scheme, modern meta-learning allows unseen tasks with limited labels during the test phase, in the hope of fast adaptation over them. Despite the subtle difference between MTL and meta-learning in the problem formulation, both learning paradigms share the same insight that the shared structure between existing training tasks could lead to better generalization and adaptation. In this paper, we take one important step further to understand the close connection between these two learning paradigms, through both theoretical analysis and empirical investigation. Theoretically, we first demonstrate that MTL shares the same optimization formulation with a class of gradient-based meta-learning (GBML) algorithms. We then prove that for over-parameterized neural networks with sufficient depth, the learned predictive functions of MTL and GBML are close. In particular, this result implies that the predictions given by these two models are similar over the same unseen task. Empirically, we corroborate our theoretical findings by showing that, with proper implementation, MTL is competitive against state-of-the-art GBML algorithms on a set of few-shot image classification benchmarks. Since existing GBML algorithms often involve costly second-order bi-level optimization, our first-order MTL method is an order of magnitude faster on large-scale datasets such as mini-ImageNet. We believe this work could help bridge the gap between these two learning paradigms, and provide a computationally efficient alternative to GBML that also supports fast task adaptation.' 
volume: 139 URL: https://proceedings.mlr.press/v139/wang21ad.html PDF: http://proceedings.mlr.press/v139/wang21ad/wang21ad.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21ad.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Haoxiang family: Wang - given: Han family: Zhao - given: Bo family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 10991-11002 id: wang21ad issued: date-parts: - 2021 - 7 - 1 firstpage: 10991 lastpage: 11002 published: 2021-07-01 00:00:00 +0000 - title: 'Towards Better Laplacian Representation in Reinforcement Learning with Generalized Graph Drawing' abstract: 'The Laplacian representation has recently gained increasing attention in reinforcement learning as it provides a succinct and informative representation for states, by taking the eigenvectors of the Laplacian matrix of the state-transition graph as state embeddings. Such representation captures the geometry of the underlying state space and is beneficial to RL tasks such as option discovery and reward shaping. To approximate the Laplacian representation in large (or even continuous) state spaces, recent works propose to minimize a spectral graph drawing objective, which, however, has infinitely many global minimizers other than the eigenvectors. As a result, their learned Laplacian representation may differ from the ground truth. To solve this problem, we reformulate the graph drawing objective into a generalized form and derive a new learning objective, which is proved to have eigenvectors as its unique global minimizer. It enables learning high-quality Laplacian representations that faithfully approximate the ground truth. We validate this via comprehensive experiments on a set of gridworld and continuous control environments. Moreover, we show that our learned Laplacian representations lead to more exploratory options and better reward shaping.' volume: 139 URL: https://proceedings.mlr.press/v139/wang21ae.html PDF: http://proceedings.mlr.press/v139/wang21ae/wang21ae.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wang21ae.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kaixin family: Wang - given: Kuangqi family: Zhou - given: Qixin family: Zhang - given: Jie family: Shao - given: Bryan family: Hooi - given: Jiashi family: Feng editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11003-11012 id: wang21ae issued: date-parts: - 2021 - 7 - 1 firstpage: 11003 lastpage: 11012 published: 2021-07-01 00:00:00 +0000 - title: 'Robust Asymmetric Learning in POMDPs' abstract: 'Policies for partially observed Markov decision processes can be efficiently learned by imitating expert policies generated using asymmetric information. Unfortunately, existing approaches for this kind of imitation learning have a serious flaw: the expert does not know what the trainee cannot see, and as a result may encourage actions that are sub-optimal or unsafe under partial information. To address this issue, we derive an update which, when applied iteratively to an expert, maximizes the expected reward of the trainee’s policy. Using this update, we construct a computationally efficient algorithm, adaptive asymmetric DAgger (A2D), that jointly trains the expert and trainee policies.
We then show that A2D allows the trainee to safely imitate the modified expert, and outperforms policies learned either by imitating a fixed expert or through direct reinforcement learning.' volume: 139 URL: https://proceedings.mlr.press/v139/warrington21a.html PDF: http://proceedings.mlr.press/v139/warrington21a/warrington21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-warrington21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Andrew family: Warrington - given: Jonathan W family: Lavington - given: Adam family: Scibior - given: Mark family: Schmidt - given: Frank family: Wood editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11013-11023 id: warrington21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11013 lastpage: 11023 published: 2021-07-01 00:00:00 +0000 - title: 'A Unified Generative Adversarial Network Training via Self-Labeling and Self-Attention' abstract: 'We propose a novel GAN training scheme that can handle any level of labeling in a unified manner. Our scheme introduces a form of artificial labeling that can incorporate manually defined labels, when available, and induce an alignment between them. To define the artificial labels, we exploit the assumption that neural network generators can be trained more easily to map nearby latent vectors to data with semantic similarities, than across separate categories. We use generated data samples and their corresponding artificial conditioning labels to train a classifier. The classifier is then used to self-label real data. To boost the accuracy of the self-labeling, we also use the exponential moving average of the classifier. However, because the classifier might still make mistakes, especially at the beginning of the training, we also refine the labels through self-attention, by using the labeling of real data samples only when the classifier outputs a high classification probability score. We evaluate our approach on CIFAR-10, STL-10 and SVHN, and show that both self-labeling and self-attention consistently improve the quality of generated data. More surprisingly, we find that the proposed scheme can even outperform class-conditional GANs.' volume: 139 URL: https://proceedings.mlr.press/v139/watanabe21a.html PDF: http://proceedings.mlr.press/v139/watanabe21a/watanabe21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-watanabe21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tomoki family: Watanabe - given: Paolo family: Favaro editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11024-11034 id: watanabe21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11024 lastpage: 11034 published: 2021-07-01 00:00:00 +0000 - title: 'Decision-Making Under Selective Labels: Optimal Finite-Domain Policies and Beyond' abstract: 'Selective labels are a common feature of high-stakes decision-making applications, referring to the lack of observed outcomes under one of the possible decisions. This paper studies the learning of decision policies in the face of selective labels, in an online setting that balances learning costs against future utility. In the homogeneous case in which individuals’ features are disregarded, the optimal decision policy is shown to be a threshold policy. 
The threshold becomes more stringent as more labels are collected; the rate at which this occurs is characterized. In the case of features drawn from a finite domain, the optimal policy consists of multiple homogeneous policies in parallel. For the general infinite-domain case, the homogeneous policy is extended by using a probabilistic classifier and bootstrapping to provide its inputs. In experiments on synthetic and real data, the proposed policies achieve consistently superior utility with no parameter tuning in the finite-domain case and lower parameter sensitivity in the general case.' volume: 139 URL: https://proceedings.mlr.press/v139/wei21a.html PDF: http://proceedings.mlr.press/v139/wei21a/wei21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wei21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dennis family: Wei editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11035-11046 id: wei21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11035 lastpage: 11046 published: 2021-07-01 00:00:00 +0000 - title: 'Inferring serial correlation with dynamic backgrounds' abstract: 'Sequential data with serial correlation and an unknown, unstructured, and dynamic background is ubiquitous in neuroscience, psychology, and econometrics. Inferring serial correlation for such data is a fundamental challenge in statistics. We propose a Total Variation (TV) constrained least square estimator coupled with hypothesis tests to infer the serial correlation in the presence of unknown and unstructured dynamic background. The TV constraint on the dynamic background encourages a piecewise constant structure, which can approximate a wide range of dynamic backgrounds. The tuning parameter is selected via the Ljung-Box test to control the bias-variance trade-off. We establish a non-asymptotic upper bound for the estimation error through variational inequalities. We also derive a lower error bound via Fano’s method and show the proposed method is near-optimal. Numerical simulation and a real study in psychology demonstrate the excellent performance of our proposed method compared with the state-of-the-art.' volume: 139 URL: https://proceedings.mlr.press/v139/wei21b.html PDF: http://proceedings.mlr.press/v139/wei21b/wei21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wei21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Song family: Wei - given: Yao family: Xie - given: Dobromir family: Rahnev editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11047-11057 id: wei21b issued: date-parts: - 2021 - 7 - 1 firstpage: 11047 lastpage: 11057 published: 2021-07-01 00:00:00 +0000 - title: 'Meta-learning Hyperparameter Performance Prediction with Neural Processes' abstract: 'The surrogate that predicts the performance of hyperparameters has been a key component for sequential model-based hyperparameter optimization. In practical applications, a trial of a hyper-parameter configuration may be so costly that a surrogate is expected to return an optimal configuration with as few trials as possible. 
Observing that human experts draw on their expertise in a machine learning model by trying configurations that once performed well on other datasets, we are inspired to build a trial-efficient surrogate by transferring the meta-knowledge learned from historical trials on other datasets. We propose an end-to-end surrogate named Transfer Neural Processes (TNP) that learns a comprehensive set of meta-knowledge, including the parameters of historical surrogates, historical trials, and initial configurations for other datasets. Experiments on extensive OpenML datasets and three computer vision datasets demonstrate that the proposed algorithm achieves state-of-the-art performance with at least an order of magnitude fewer trials.' volume: 139 URL: https://proceedings.mlr.press/v139/wei21c.html PDF: http://proceedings.mlr.press/v139/wei21c/wei21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wei21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ying family: Wei - given: Peilin family: Zhao - given: Junzhou family: Huang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11058-11067 id: wei21c issued: date-parts: - 2021 - 7 - 1 firstpage: 11058 lastpage: 11067 published: 2021-07-01 00:00:00 +0000 - title: 'A Structured Observation Distribution for Generative Biological Sequence Prediction and Forecasting' abstract: 'Generative probabilistic modeling of biological sequences has widespread existing and potential application across biology and biomedicine, from evolutionary biology to epidemiology to protein design. Many standard sequence analysis methods preprocess data using a multiple sequence alignment (MSA) algorithm, one of the most widely used computational methods in all of science. However, as we show in this article, training generative probabilistic models with MSA preprocessing leads to statistical pathologies in the context of sequence prediction and forecasting. To address these problems, we propose a principled drop-in alternative to MSA preprocessing in the form of a structured observation distribution (the "MuE" distribution). We prove theoretically that the MuE distribution comprehensively generalizes popular methods for inferring biological sequence alignments, and provide a precise characterization of how such biological models have differed from natural language latent alignment models. We show empirically that models that use the MuE as an observation distribution outperform comparable methods across a variety of datasets, and apply MuE models to a novel problem for generative probabilistic sequence models: forecasting pathogen evolution.' volume: 139 URL: https://proceedings.mlr.press/v139/weinstein21a.html PDF: http://proceedings.mlr.press/v139/weinstein21a/weinstein21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-weinstein21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Eli N family: Weinstein - given: Debora family: Marks editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11068-11079 id: weinstein21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11068 lastpage: 11079 published: 2021-07-01 00:00:00 +0000 - title: 'Thinking Like Transformers' abstract: 'What is the computational model behind a Transformer?
Where recurrent neural networks have direct parallels in finite state machines, allowing clear discussion and thought around architecture variants or trained models, Transformers have no such familiar parallel. In this paper we aim to change that, proposing a computational model for the transformer-encoder in the form of a programming language. We map the basic components of a transformer-encoder—attention and feed-forward computation—into simple primitives, around which we form a programming language: the Restricted Access Sequence Processing Language (RASP). We show how RASP can be used to program solutions to tasks that could conceivably be learned by a Transformer, and how a Transformer can be trained to mimic a RASP solution. In particular, we provide RASP programs for histograms, sorting, and Dyck-languages. We further use our model to relate their difficulty in terms of the number of required layers and attention heads: analyzing a RASP program implies a maximum number of heads and layers necessary to encode a task in a transformer. Finally, we see how insights gained from our abstraction might be used to explain phenomena seen in recent works.' volume: 139 URL: https://proceedings.mlr.press/v139/weiss21a.html PDF: http://proceedings.mlr.press/v139/weiss21a/weiss21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-weiss21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Gail family: Weiss - given: Yoav family: Goldberg - given: Eran family: Yahav editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11080-11090 id: weiss21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11080 lastpage: 11090 published: 2021-07-01 00:00:00 +0000 - title: 'Leveraged Weighted Loss for Partial Label Learning' abstract: 'As an important branch of weakly supervised learning, partial label learning deals with data where each instance is assigned a set of candidate labels, whereas only one of them is true. Despite many methodological studies on learning from partial labels, there is still a lack of theoretical understanding of their risk-consistency properties under relatively weak assumptions, especially regarding the link between theoretical results and the empirical choice of parameters. In this paper, we propose a family of loss functions named \textit{Leveraged Weighted} (LW) loss, which for the first time introduces the leverage parameter $\beta$ to consider the trade-off between losses on partial labels and non-partial ones. From the theoretical side, we derive a generalized result of risk consistency for the LW loss in learning from partial labels, based on which we provide guidance for the choice of the leverage parameter $\beta$. In experiments, we verify the theoretical guidance, and show the high effectiveness of our proposed LW loss on both benchmark and real datasets compared with other state-of-the-art partial label learning algorithms.'
volume: 139 URL: https://proceedings.mlr.press/v139/wen21a.html PDF: http://proceedings.mlr.press/v139/wen21a/wen21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wen21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hongwei family: Wen - given: Jingyi family: Cui - given: Hanyuan family: Hang - given: Jiabin family: Liu - given: Yisen family: Wang - given: Zhouchen family: Lin editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11091-11100 id: wen21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11091 lastpage: 11100 published: 2021-07-01 00:00:00 +0000 - title: 'Characterizing the Gap Between Actor-Critic and Policy Gradient' abstract: 'Actor-critic (AC) methods are ubiquitous in reinforcement learning. Although it is understood that AC methods are closely related to policy gradient (PG), their precise connection has not been fully characterized previously. In this paper, we explain the gap between AC and PG methods by identifying the exact adjustment to the AC objective/gradient that recovers the true policy gradient of the cumulative reward objective (PG). Furthermore, by viewing the AC method as a two-player Stackelberg game between the actor and critic, we show that the Stackelberg policy gradient can be recovered as a special case of our more general analysis. Based on these results, we develop practical algorithms, Residual Actor-Critic and Stackelberg Actor-Critic, for estimating the correction between AC and PG and use these to modify the standard AC algorithm. Experiments on popular tabular and continuous environments show the proposed corrections can improve both the sample efficiency and final performance of existing AC methods.' volume: 139 URL: https://proceedings.mlr.press/v139/wen21b.html PDF: http://proceedings.mlr.press/v139/wen21b/wen21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wen21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Junfeng family: Wen - given: Saurabh family: Kumar - given: Ramki family: Gummadi - given: Dale family: Schuurmans editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11101-11111 id: wen21b issued: date-parts: - 2021 - 7 - 1 firstpage: 11101 lastpage: 11111 published: 2021-07-01 00:00:00 +0000 - title: 'Toward Understanding the Feature Learning Process of Self-supervised Contrastive Learning' abstract: 'We formally study how contrastive learning learns the feature representations for neural networks by investigating its feature learning process. We consider the case where our data are comprised of two types of features: the sparse features which we want to learn from, and the dense features we want to get rid of. Theoretically, we prove that contrastive learning using ReLU networks provably learns the desired features if proper augmentations are adopted. We present an underlying principle called feature decoupling to explain the effects of augmentations, where we theoretically characterize how augmentations can reduce the correlations of dense features between positive samples while keeping the correlations of sparse features intact, thereby forcing the neural networks to learn from the self-supervision of sparse features. 
Empirically, we verified that the feature decoupling principle matches the underlying mechanism of contrastive learning in practice.' volume: 139 URL: https://proceedings.mlr.press/v139/wen21c.html PDF: http://proceedings.mlr.press/v139/wen21c/wen21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wen21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zixin family: Wen - given: Yuanzhi family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11112-11122 id: wen21c issued: date-parts: - 2021 - 7 - 1 firstpage: 11112 lastpage: 11122 published: 2021-07-01 00:00:00 +0000 - title: 'Keyframe-Focused Visual Imitation Learning' abstract: 'Imitation learning trains control policies by mimicking pre-recorded expert demonstrations. In partially observable settings, imitation policies must rely on observation histories, but many seemingly paradoxical results show better performance for policies that only access the most recent observation. Recent solutions ranging from causal graph learning to deep information bottlenecks have shown promising results, but failed to scale to realistic settings such as visual imitation. We propose a solution that outperforms these prior approaches by upweighting demonstration keyframes corresponding to expert action changepoints. This simple approach easily scales to complex visual imitation settings. Our experimental results demonstrate consistent performance improvements over all baselines on image-based Gym MuJoCo continuous control tasks. Finally, on the CARLA photorealistic vision-based urban driving simulator, we resolve a long-standing issue in behavioral cloning for driving by demonstrating effective imitation from observation histories. Supplementary materials and code at: \url{https://tinyurl.com/imitation-keyframes}.' volume: 139 URL: https://proceedings.mlr.press/v139/wen21d.html PDF: http://proceedings.mlr.press/v139/wen21d/wen21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wen21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chuan family: Wen - given: Jierui family: Lin - given: Jianing family: Qian - given: Yang family: Gao - given: Dinesh family: Jayaraman editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11123-11133 id: wen21d issued: date-parts: - 2021 - 7 - 1 firstpage: 11123 lastpage: 11133 published: 2021-07-01 00:00:00 +0000 - title: 'Learning de-identified representations of prosody from raw audio' abstract: 'We propose a method for learning de-identified prosody representations from raw audio using a contrastive self-supervised signal. Whereas prior work has relied on conditioning models with bottlenecks, we introduce a set of inductive biases that exploit the natural structure of prosody to minimize timbral information and decouple prosody from speaker representations. Despite aggressive downsampling of the input and having no access to linguistic information, our model performs comparably to state-of-the-art speech representations on DAMMP, a new benchmark we introduce for spoken language understanding. 
We use minimum description length probing to show that our representations have selectively learned the subcomponents of non-timbral prosody, and that the product quantizer naturally disentangles them without using bottlenecks. We derive an information-theoretic definition of speech de-identifiability and use it to demonstrate that our prosody representations are less identifiable than the other speech representations.' volume: 139 URL: https://proceedings.mlr.press/v139/weston21a.html PDF: http://proceedings.mlr.press/v139/weston21a/weston21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-weston21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jack family: Weston - given: Raphael family: Lenain - given: Udeepa family: Meepegama - given: Emil family: Fristed editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11134-11145 id: weston21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11134 lastpage: 11145 published: 2021-07-01 00:00:00 +0000 - title: 'Solving Inverse Problems with a Flow-based Noise Model' abstract: 'We study image inverse problems with a normalizing flow prior. Our formulation views the solution as the maximum a posteriori estimate of the image conditioned on the measurements. This formulation allows us to use noise models with arbitrary dependencies as well as non-linear forward operators. We empirically validate the efficacy of our method on various inverse problems, including compressed sensing with quantized measurements and denoising with highly structured noise patterns. We also present initial theoretical recovery guarantees for solving inverse problems with a flow prior.' volume: 139 URL: https://proceedings.mlr.press/v139/whang21a.html PDF: http://proceedings.mlr.press/v139/whang21a/whang21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-whang21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jay family: Whang - given: Qi family: Lei - given: Alex family: Dimakis editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11146-11157 id: whang21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11146 lastpage: 11157 published: 2021-07-01 00:00:00 +0000 - title: 'Composing Normalizing Flows for Inverse Problems' abstract: 'Given an inverse problem with a normalizing flow prior, we wish to estimate the distribution of the underlying signal conditioned on the observations. We approach this problem as a task of conditional inference on the pre-trained unconditional flow model. We first establish that this is computationally hard for a large class of flow models. Motivated by this, we propose a framework for approximate inference that estimates the target conditional as a composition of two flow models. This formulation leads to a stable variational inference training procedure that avoids adversarial training. Our method is evaluated on a variety of inverse problems and is shown to produce high-quality samples with uncertainty quantification. We further demonstrate that our approach can be amortized for zero-shot inference.' 
volume: 139 URL: https://proceedings.mlr.press/v139/whang21b.html PDF: http://proceedings.mlr.press/v139/whang21b/whang21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-whang21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jay family: Whang - given: Erik family: Lindgren - given: Alex family: Dimakis editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11158-11169 id: whang21b issued: date-parts: - 2021 - 7 - 1 firstpage: 11158 lastpage: 11169 published: 2021-07-01 00:00:00 +0000 - title: 'Which transformer architecture fits my data? A vocabulary bottleneck in self-attention' abstract: 'After their successful debut in natural language processing, Transformer architectures are now becoming the de-facto standard in many domains. An obstacle for their deployment over new modalities is the architectural configuration: the optimal depth-to-width ratio has been shown to dramatically vary across data types (i.e., 10x larger over images than over language). We theoretically predict the existence of an embedding rank bottleneck that limits the contribution of self-attention width to the Transformer expressivity. We thus directly tie the input vocabulary size and rank to the optimal depth-to-width ratio, since a small vocabulary size or rank dictates an added advantage of depth over width. We empirically demonstrate the existence of this bottleneck and its implications on the depth-to-width interplay of Transformer architectures, linking the architecture variability across domains to the often glossed-over usage of different vocabulary sizes or embedding ranks in different domains. As an additional benefit, our rank bottlenecking framework allows us to identify size redundancies of 25%-50% in leading NLP models such as ALBERT and T5.' volume: 139 URL: https://proceedings.mlr.press/v139/wies21a.html PDF: http://proceedings.mlr.press/v139/wies21a/wies21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wies21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Noam family: Wies - given: Yoav family: Levine - given: Daniel family: Jannai - given: Amnon family: Shashua editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11170-11181 id: wies21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11170 lastpage: 11181 published: 2021-07-01 00:00:00 +0000 - title: 'Prediction-Centric Learning of Independent Cascade Dynamics from Partial Observations' abstract: 'Spreading processes play an increasingly important role in modeling for diffusion networks, information propagation, marketing and opinion setting. We address the problem of learning of a spreading model such that the predictions generated from this model are accurate and could be subsequently used for the optimization, and control of diffusion dynamics. We focus on a challenging setting where full observations of the dynamics are not available, and standard approaches such as maximum likelihood quickly become intractable for large network instances. We introduce a computationally efficient algorithm, based on a scalable dynamic message-passing approach, which is able to learn parameters of the effective spreading model given only limited information on the activation times of nodes in the network. 
The popular Independent Cascade model is used to illustrate our approach. We show that tractable inference from the learned model generates a better prediction of marginal probabilities compared to the original model. We develop a systematic procedure for learning a mixture of models, which further improves the prediction quality.' volume: 139 URL: https://proceedings.mlr.press/v139/wilinski21a.html PDF: http://proceedings.mlr.press/v139/wilinski21a/wilinski21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wilinski21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mateusz family: Wilinski - given: Andrey family: Lokhov editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11182-11192 id: wilinski21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11182 lastpage: 11192 published: 2021-07-01 00:00:00 +0000 - title: 'Leveraging Language to Learn Program Abstractions and Search Heuristics' abstract: 'Inductive program synthesis, or inferring programs from examples of desired behavior, offers a general paradigm for building interpretable, robust, and generalizable machine learning systems. Effective program synthesis depends on two key ingredients: a strong library of functions from which to build programs, and an efficient search strategy for finding programs that solve a given task. We introduce LAPS (Language for Abstraction and Program Search), a technique for using natural language annotations to guide joint learning of libraries and neurally-guided search models for synthesis. When integrated into a state-of-the-art library learning system (DreamCoder), LAPS produces higher-quality libraries and improves search efficiency and generalization on three domains {–} string editing, image composition, and abstract reasoning about scenes {–} even when no natural language hints are available at test time.' volume: 139 URL: https://proceedings.mlr.press/v139/wong21a.html PDF: http://proceedings.mlr.press/v139/wong21a/wong21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wong21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Catherine family: Wong - given: Kevin M family: Ellis - given: Joshua family: Tenenbaum - given: Jacob family: Andreas editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11193-11204 id: wong21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11193 lastpage: 11204 published: 2021-07-01 00:00:00 +0000 - title: 'Leveraging Sparse Linear Layers for Debuggable Deep Networks' abstract: 'We show how fitting sparse linear models over learned deep feature representations can lead to more debuggable neural networks. These networks remain highly accurate while also being more amenable to human interpretation, as we demonstrate quantitatively and via human experiments. We further illustrate how the resulting sparse explanations can help to identify spurious correlations, explain misclassifications, and diagnose model biases in vision and language tasks.'
volume: 139 URL: https://proceedings.mlr.press/v139/wong21b.html PDF: http://proceedings.mlr.press/v139/wong21b/wong21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wong21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Eric family: Wong - given: Shibani family: Santurkar - given: Aleksander family: Madry editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11205-11216 id: wong21b issued: date-parts: - 2021 - 7 - 1 firstpage: 11205 lastpage: 11216 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Neural Network Subspaces' abstract: 'Recent observations have advanced our understanding of the neural network optimization landscape, revealing the existence of (1) paths of high accuracy containing diverse solutions and (2) wider minima offering improved performance. Previous methods observing diverse paths require multiple training runs. In contrast we aim to leverage both property (1) and (2) with a single method and in a single training run. With a similar computational cost as training one model, we learn lines, curves, and simplexes of high-accuracy neural networks. These neural network subspaces contain diverse solutions that can be ensembled, approaching the ensemble performance of independently trained networks without the training cost. Moreover, using the subspace midpoint boosts accuracy, calibration, and robustness to label noise, outperforming Stochastic Weight Averaging.' volume: 139 URL: https://proceedings.mlr.press/v139/wortsman21a.html PDF: http://proceedings.mlr.press/v139/wortsman21a/wortsman21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wortsman21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mitchell family: Wortsman - given: Maxwell C family: Horton - given: Carlos family: Guestrin - given: Ali family: Farhadi - given: Mohammad family: Rastegari editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11217-11227 id: wortsman21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11217 lastpage: 11227 published: 2021-07-01 00:00:00 +0000 - title: 'Conjugate Energy-Based Models' abstract: 'In this paper, we propose conjugate energy-based models (CEBMs), a new class of energy-based models that define a joint density over data and latent variables. The joint density of a CEBM decomposes into an intractable distribution over data and a tractable posterior over latent variables. CEBMs have similar use cases as variational autoencoders, in the sense that they learn an unsupervised mapping from data to latent variables. However, these models omit a generator network, which allows them to learn more flexible notions of similarity between data points. Our experiments demonstrate that conjugate EBMs achieve competitive results in terms of image modelling, predictive power of latent space, and out-of-domain detection on a variety of datasets.' 
volume: 139 URL: https://proceedings.mlr.press/v139/wu21a.html PDF: http://proceedings.mlr.press/v139/wu21a/wu21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wu21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hao family: Wu - given: Babak family: Esmaeili - given: Michael family: Wick - given: Jean-Baptiste family: Tristan - given: Jan-Willem family: Van De Meent editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11228-11239 id: wu21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11228 lastpage: 11239 published: 2021-07-01 00:00:00 +0000 - title: 'Making Paper Reviewing Robust to Bid Manipulation Attacks' abstract: 'Most computer science conferences rely on paper bidding to assign reviewers to papers. Although paper bidding enables high-quality assignments in days of unprecedented submission numbers, it also opens the door for dishonest reviewers to adversarially influence paper reviewing assignments. Anecdotal evidence suggests that some reviewers bid on papers by "friends" or colluding authors, even though these papers are outside their area of expertise, and recommend them for acceptance without considering the merit of the work. In this paper, we study the efficacy of such bid manipulation attacks and find that, indeed, they can jeopardize the integrity of the review process. We develop a novel approach for paper bidding and assignment that is much more robust against such attacks. We show empirically that our approach provides robustness even when dishonest reviewers collude, have full knowledge of the assignment system’s internal workings, and have access to the system’s inputs. In addition to being more robust, the quality of our paper review assignments is comparable to that of current, non-robust assignment approaches.' volume: 139 URL: https://proceedings.mlr.press/v139/wu21b.html PDF: http://proceedings.mlr.press/v139/wu21b/wu21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wu21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ruihan family: Wu - given: Chuan family: Guo - given: Felix family: Wu - given: Rahul family: Kidambi - given: Laurens family: Van Der Maaten - given: Kilian family: Weinberger editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11240-11250 id: wu21b issued: date-parts: - 2021 - 7 - 1 firstpage: 11240 lastpage: 11250 published: 2021-07-01 00:00:00 +0000 - title: 'LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning' abstract: 'While designing inductive bias in neural architectures has been widely studied, we hypothesize that transformer networks are flexible enough to learn inductive bias from suitable generic tasks. Here, we replace architecture engineering by encoding inductive bias in the form of datasets. Inspired by Peirce’s view that deduction, induction, and abduction are the primitives of reasoning, we design three synthetic tasks that are intended to require the model to have these three abilities. We specifically design these tasks to be synthetic and devoid of mathematical knowledge to ensure that only the fundamental reasoning biases can be learned from these tasks. 
This defines a new pre-training methodology called "LIME" (Learning Inductive bias for Mathematical rEasoning). Models trained with LIME significantly outperform vanilla transformers on four very different large mathematical reasoning benchmarks. Unlike traditional pre-training approaches, which dominate the computation cost, LIME requires only a small fraction of the computation cost of the typical downstream task. The code for generating LIME tasks is available at https://github.com/tonywu95/LIME.' volume: 139 URL: https://proceedings.mlr.press/v139/wu21c.html PDF: http://proceedings.mlr.press/v139/wu21c/wu21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wu21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yuhuai family: Wu - given: Markus N family: Rabe - given: Wenda family: Li - given: Jimmy family: Ba - given: Roger B family: Grosse - given: Christian family: Szegedy editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11251-11262 id: wu21c issued: date-parts: - 2021 - 7 - 1 firstpage: 11251 lastpage: 11262 published: 2021-07-01 00:00:00 +0000 - title: 'ChaCha for Online AutoML' abstract: 'We propose the ChaCha (Champion-Challengers) algorithm for making an online choice of hyperparameters in online learning settings. ChaCha handles the process of determining a champion and scheduling a set of ‘live’ challengers over time based on sample complexity bounds. It is guaranteed to have sublinear regret after the optimal configuration is added into consideration by an application-dependent oracle based on the champions. Empirically, we show that ChaCha provides good performance across a wide array of datasets when optimizing over featurization and hyperparameter decisions.' volume: 139 URL: https://proceedings.mlr.press/v139/wu21d.html PDF: http://proceedings.mlr.press/v139/wu21d/wu21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wu21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Qingyun family: Wu - given: Chi family: Wang - given: John family: Langford - given: Paul family: Mineiro - given: Marco family: Rossi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11263-11273 id: wu21d issued: date-parts: - 2021 - 7 - 1 firstpage: 11263 lastpage: 11273 published: 2021-07-01 00:00:00 +0000 - title: 'Temporally Correlated Task Scheduling for Sequence Learning' abstract: 'Sequence learning has attracted much research attention from the machine learning community in recent years. In many applications, a sequence learning task is usually associated with multiple temporally correlated auxiliary tasks, which are different in terms of how much input information to use or which future step to predict. For example, (i) in simultaneous machine translation, one can conduct translation under different latency (i.e., how many input words to read/wait before translation); (ii) in stock trend forecasting, one can predict the price of a stock in different future days (e.g., tomorrow, the day after tomorrow). While it is clear that those temporally correlated tasks can help each other, there has been very limited exploration of how to better leverage multiple auxiliary tasks to boost the performance of the main task. 
In this work, we introduce a learnable scheduler to sequence learning, which can adaptively select auxiliary tasks for training depending on the model status and the current training data. The scheduler and the model for the main task are jointly trained through bi-level optimization. Experiments show that our method significantly improves the performance of simultaneous machine translation and stock trend forecasting.' volume: 139 URL: https://proceedings.mlr.press/v139/wu21e.html PDF: http://proceedings.mlr.press/v139/wu21e/wu21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wu21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xueqing family: Wu - given: Lewen family: Wang - given: Yingce family: Xia - given: Weiqing family: Liu - given: Lijun family: Wu - given: Shufang family: Xie - given: Tao family: Qin - given: Tie-Yan family: Liu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11274-11284 id: wu21e issued: date-parts: - 2021 - 7 - 1 firstpage: 11274 lastpage: 11284 published: 2021-07-01 00:00:00 +0000 - title: 'Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels' abstract: 'Learning with noisy labels has attracted a lot of attention in recent years, where the mainstream approaches operate in a \emph{pointwise} manner. Meanwhile, \emph{pairwise} approaches have shown great potential in supervised metric learning and unsupervised contrastive learning. Thus, a natural question is raised: does learning in a pairwise manner \emph{mitigate} label noise? To give an affirmative answer, in this paper, we propose a framework called \emph{Class2Simi}: it transforms data points with noisy \emph{class labels} to data pairs with noisy \emph{similarity labels}, where a similarity label denotes whether a pair shares the class label or not. Through this transformation, the \emph{reduction of the noise rate} is theoretically guaranteed, and hence it is in principle easier to handle noisy similarity labels. Amazingly, DNNs that predict the \emph{clean} class labels can be trained from noisy data pairs if they are first pretrained from noisy data points. Class2Simi is \emph{computationally efficient} because not only is this transformation performed on-the-fly in mini-batches, but it also only changes the loss computation on top of the model prediction to a pairwise manner. Its effectiveness is verified by extensive experiments.' 
volume: 139 URL: https://proceedings.mlr.press/v139/wu21f.html PDF: http://proceedings.mlr.press/v139/wu21f/wu21f.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wu21f.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Songhua family: Wu - given: Xiaobo family: Xia - given: Tongliang family: Liu - given: Bo family: Han - given: Mingming family: Gong - given: Nannan family: Wang - given: Haifeng family: Liu - given: Gang family: Niu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11285-11295 id: wu21f issued: date-parts: - 2021 - 7 - 1 firstpage: 11285 lastpage: 11295 published: 2021-07-01 00:00:00 +0000 - title: 'On Reinforcement Learning with Adversarial Corruption and Its Application to Block MDP' abstract: 'We study reinforcement learning (RL) in episodic tabular MDPs with adversarial corruptions, where some episodes can be adversarially corrupted. When the total number of corrupted episodes is known, we propose an algorithm, Corruption Robust Monotonic Value Propagation (\textsf{CR-MVP}), which achieves a regret bound of $\tilde{O}\left((\sqrt{SAK}+S^2A+CSA)\,\mathrm{polylog}(H)\right)$, where $S$ is the number of states, $A$ is the number of actions, $H$ is the planning horizon, $K$ is the number of episodes, and $C$ is the corruption level. We also provide a corresponding lower bound, which indicates that our upper bound is tight. Finally, as an application, we study RL with rich observations in the block MDP model. We provide the first algorithm that achieves a $\sqrt{K}$-type regret in this setting and is computationally efficient.' volume: 139 URL: https://proceedings.mlr.press/v139/wu21g.html PDF: http://proceedings.mlr.press/v139/wu21g/wu21g.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wu21g.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tianhao family: Wu - given: Yunchang family: Yang - given: Simon family: Du - given: Liwei family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11296-11306 id: wu21g issued: date-parts: - 2021 - 7 - 1 firstpage: 11296 lastpage: 11306 published: 2021-07-01 00:00:00 +0000 - title: 'Generative Video Transformer: Can Objects be the Words?' abstract: 'Transformers have been successful for many natural language processing tasks. However, applying transformers to the video domain for tasks such as long-term video generation and scene understanding has remained elusive due to the high computational complexity and the lack of natural tokenization. In this paper, we propose the Object-Centric Video Transformer (OCVT) which utilizes an object-centric approach for decomposing scenes into tokens suitable for use in a generative video transformer. By factoring the video into objects, our fully unsupervised model is able to learn complex spatio-temporal dynamics of multiple interacting objects in a scene and generate future frames of the video. Our model is also significantly more memory-efficient than pixel-based models and thus able to train on videos of length up to 70 frames with a single 48GB GPU. We compare our model with previous RNN-based approaches as well as other possible video transformer baselines. 
We demonstrate that OCVT performs well compared to baselines in generating future frames. OCVT also develops useful representations for video reasoning, achieving state-of-the-art performance on the CATER task.' volume: 139 URL: https://proceedings.mlr.press/v139/wu21h.html PDF: http://proceedings.mlr.press/v139/wu21h/wu21h.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wu21h.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yi-Fu family: Wu - given: Jaesik family: Yoon - given: Sungjin family: Ahn editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11307-11318 id: wu21h issued: date-parts: - 2021 - 7 - 1 firstpage: 11307 lastpage: 11318 published: 2021-07-01 00:00:00 +0000 - title: 'Uncertainty Weighted Actor-Critic for Offline Reinforcement Learning' abstract: 'Offline Reinforcement Learning promises to learn effective policies from previously-collected, static datasets without the need for exploration. However, existing Q-learning and actor-critic based off-policy RL algorithms fail when bootstrapping from out-of-distribution (OOD) actions or states. We hypothesize that a key missing ingredient from the existing methods is a proper treatment of uncertainty in the offline setting. We propose Uncertainty Weighted Actor-Critic (UWAC), an algorithm that detects OOD state-action pairs and down-weights their contribution in the training objectives accordingly. Implementation-wise, we adopt a practical and effective dropout-based uncertainty estimation method that introduces very little overhead over existing RL algorithms. Empirically, we observe that UWAC substantially improves model stability during training. In addition, UWAC outperforms existing offline RL methods on a variety of competitive tasks, and achieves significant performance gains over the state-of-the-art baseline on datasets with sparse demonstrations collected from human experts.' volume: 139 URL: https://proceedings.mlr.press/v139/wu21i.html PDF: http://proceedings.mlr.press/v139/wu21i/wu21i.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wu21i.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yue family: Wu - given: Shuangfei family: Zhai - given: Nitish family: Srivastava - given: Joshua M family: Susskind - given: Jian family: Zhang - given: Ruslan family: Salakhutdinov - given: Hanlin family: Goh editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11319-11328 id: wu21i issued: date-parts: - 2021 - 7 - 1 firstpage: 11319 lastpage: 11328 published: 2021-07-01 00:00:00 +0000 - title: 'Towards Open-World Recommendation: An Inductive Model-based Collaborative Filtering Approach' abstract: 'Recommendation models can effectively estimate underlying user interests and predict one’s future behaviors by factorizing an observed user-item rating matrix into products of two sets of latent factors. However, the user-specific embedding factors can only be learned in a transductive way, making it difficult to handle new users on-the-fly. In this paper, we propose an inductive collaborative filtering framework that contains two representation models. The first model follows conventional matrix factorization, which factorizes a group of key users’ rating matrix to obtain meta latents. 
The second model resorts to attention-based structure learning that estimates hidden relations from query to key users and learns to leverage meta latents to inductively compute embeddings for query users via neural message passing. Our model enables inductive representation learning for users and meanwhile guarantees equivalent representation capacity as matrix factorization. Experiments demonstrate that our model achieves promising results for recommendation on few-shot users with limited training ratings and new unseen users which are commonly encountered in open-world recommender systems.' volume: 139 URL: https://proceedings.mlr.press/v139/wu21j.html PDF: http://proceedings.mlr.press/v139/wu21j/wu21j.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wu21j.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Qitian family: Wu - given: Hengrui family: Zhang - given: Xiaofeng family: Gao - given: Junchi family: Yan - given: Hongyuan family: Zha editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11329-11339 id: wu21j issued: date-parts: - 2021 - 7 - 1 firstpage: 11329 lastpage: 11339 published: 2021-07-01 00:00:00 +0000 - title: 'Data-efficient Hindsight Off-policy Option Learning' abstract: 'We introduce Hindsight Off-policy Options (HO2), a data-efficient option learning algorithm. Given any trajectory, HO2 infers likely option choices and backpropagates through the dynamic programming inference procedure to robustly train all policy components off-policy and end-to-end. The approach outperforms existing option learning methods on common benchmarks. To better understand the option framework and disentangle benefits from both temporal and action abstraction, we evaluate ablations with flat policies and mixture policies with comparable optimization. The results highlight the importance of both types of abstraction as well as off-policy training and trust-region constraints, particularly in challenging, simulated 3D robot manipulation tasks from raw pixel inputs. Finally, we intuitively adapt the inference step to investigate the effect of increased temporal abstraction on training with pre-trained options and from scratch.' volume: 139 URL: https://proceedings.mlr.press/v139/wulfmeier21a.html PDF: http://proceedings.mlr.press/v139/wulfmeier21a/wulfmeier21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-wulfmeier21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Markus family: Wulfmeier - given: Dushyant family: Rao - given: Roland family: Hafner - given: Thomas family: Lampe - given: Abbas family: Abdolmaleki - given: Tim family: Hertweck - given: Michael family: Neunert - given: Dhruva family: Tirumala - given: Noah family: Siegel - given: Nicolas family: Heess - given: Martin family: Riedmiller editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11340-11350 id: wulfmeier21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11340 lastpage: 11350 published: 2021-07-01 00:00:00 +0000 - title: 'A Bit More Bayesian: Domain-Invariant Learning with Uncertainty' abstract: 'Domain generalization is challenging due to the domain shift and the uncertainty caused by the inaccessibility of target domain data. 
In this paper, we address both challenges with a probabilistic framework based on variational Bayesian inference, by incorporating uncertainty into neural network weights. We couple domain invariance in a probabilistic formula with the variational Bayesian inference. This enables us to explore domain-invariant learning in a principled way. Specifically, we derive domain-invariant representations and classifiers, which are jointly established in a two-layer Bayesian neural network. We empirically demonstrate the effectiveness of our proposal on four widely used cross-domain visual recognition benchmarks. Ablation studies validate the synergistic benefits of our Bayesian treatment when jointly learning domain-invariant representations and classifiers for domain generalization. Further, our method consistently delivers state-of-the-art mean accuracy on all benchmarks.' volume: 139 URL: https://proceedings.mlr.press/v139/xiao21a.html PDF: http://proceedings.mlr.press/v139/xiao21a/xiao21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xiao21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zehao family: Xiao - given: Jiayi family: Shen - given: Xiantong family: Zhen - given: Ling family: Shao - given: Cees family: Snoek editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11351-11361 id: xiao21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11351 lastpage: 11361 published: 2021-07-01 00:00:00 +0000 - title: 'On the Optimality of Batch Policy Optimization Algorithms' abstract: 'Batch policy optimization considers leveraging existing data for policy construction before interacting with an environment. Although interest in this problem has grown significantly in recent years, its theoretical foundations remain under-developed. To advance the understanding of this problem, we provide three results that characterize the limits and possibilities of batch policy optimization in the finite-armed stochastic bandit setting. First, we introduce a class of confidence-adjusted index algorithms that unifies optimistic and pessimistic principles in a common framework, which enables a general analysis. For this family, we show that any confidence-adjusted index algorithm is minimax optimal, whether it be optimistic, pessimistic or neutral. Our analysis reveals that instance-dependent optimality, commonly used to establish optimality of on-line stochastic bandit algorithms, cannot be achieved by any algorithm in the batch setting. In particular, for any algorithm that performs optimally in some environment, there exists another environment where the same algorithm suffers arbitrarily larger regret. Therefore, to establish a framework for distinguishing algorithms, we introduce a new weighted-minimax criterion that considers the inherent difficulty of optimal value prediction. We demonstrate how this criterion can be used to justify commonly used pessimistic principles for batch policy optimization.' 
volume: 139 URL: https://proceedings.mlr.press/v139/xiao21b.html PDF: http://proceedings.mlr.press/v139/xiao21b/xiao21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xiao21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chenjun family: Xiao - given: Yifan family: Wu - given: Jincheng family: Mei - given: Bo family: Dai - given: Tor family: Lattimore - given: Lihong family: Li - given: Csaba family: Szepesvari - given: Dale family: Schuurmans editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11362-11371 id: xiao21b issued: date-parts: - 2021 - 7 - 1 firstpage: 11362 lastpage: 11371 published: 2021-07-01 00:00:00 +0000 - title: 'CRFL: Certifiably Robust Federated Learning against Backdoor Attacks' abstract: 'Federated Learning (FL), as a distributed learning paradigm that aggregates information from diverse clients to train a shared global model, has demonstrated great success. However, malicious clients can perform poisoning attacks and model replacement to introduce backdoors into the trained global model. Although there have been intensive studies designing robust aggregation methods and empirical robust federated training protocols against backdoors, existing approaches lack robustness certification. This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), to train certifiably robust FL models against backdoors. Our method exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification on backdoors with limited magnitude. Our certification also specifies the relation to federated learning parameters, such as the poisoning ratio at the instance level, the number of attackers, and the number of training iterations. Practically, we conduct comprehensive experiments across a range of federated datasets, and provide the first benchmark for certified robustness against backdoor attacks in federated learning. Our code is publicly available at https://github.com/AI-secure/CRFL.' volume: 139 URL: https://proceedings.mlr.press/v139/xie21a.html PDF: http://proceedings.mlr.press/v139/xie21a/xie21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xie21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chulin family: Xie - given: Minghao family: Chen - given: Pin-Yu family: Chen - given: Bo family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11372-11382 id: xie21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11372 lastpage: 11382 published: 2021-07-01 00:00:00 +0000 - title: 'RNNRepair: Automatic RNN Repair via Model-based Analysis' abstract: 'Deep neural networks are vulnerable to adversarial attacks. Due to their black-box nature, it is rather challenging to interpret and properly repair these incorrect behaviors. This paper focuses on interpreting and repairing the incorrect behaviors of Recurrent Neural Networks (RNNs). We propose a lightweight model-based approach (RNNRepair) to help understand and repair incorrect behaviors of an RNN. Specifically, we build an influence model to characterize the stateful and statistical behaviors of an RNN over all the training data and to perform the influence analysis for the errors. 
Compared with the existing techniques on influence function, our method can efficiently estimate the influence of existing or newly added training samples for a given prediction at both sample level and segmentation level. Our empirical evaluation shows that the proposed influence model is able to extract accurate and understandable features. Based on the influence model, our proposed technique could effectively infer the influential instances from not only an entire testing sequence but also a segment within that sequence. Moreover, with the sample-level and segment-level influence relations, RNNRepair could further remediate two types of incorrect predictions at the sample level and segment level.' volume: 139 URL: https://proceedings.mlr.press/v139/xie21b.html PDF: http://proceedings.mlr.press/v139/xie21b/xie21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xie21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xiaofei family: Xie - given: Wenbo family: Guo - given: Lei family: Ma - given: Wei family: Le - given: Jian family: Wang - given: Lingjun family: Zhou - given: Yang family: Liu - given: Xinyu family: Xing editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11383-11392 id: xie21b issued: date-parts: - 2021 - 7 - 1 firstpage: 11383 lastpage: 11392 published: 2021-07-01 00:00:00 +0000 - title: 'Deep Reinforcement Learning amidst Continual Structured Non-Stationarity' abstract: 'As humans, our goals and our environment are persistently changing throughout our lifetime based on our experiences, actions, and internal and external drives. In contrast, typical reinforcement learning problem set-ups consider decision processes that are stationary across episodes. Can we develop reinforcement learning algorithms that can cope with the persistent change in the former, more realistic problem settings? While on-policy algorithms such as policy gradients in principle can be extended to non-stationary settings, the same cannot be said for more efficient off-policy algorithms that replay past experiences when learning. In this work, we formalize this problem setting, and draw upon ideas from the online learning and probabilistic inference literature to derive an off-policy RL algorithm that can reason about and tackle such lifelong non-stationarity. Our method leverages latent variable models to learn a representation of the environment from current and past experiences, and performs off-policy RL with this representation. We further introduce several simulation environments that exhibit lifelong non-stationarity, and empirically find that our approach substantially outperforms approaches that do not reason about environment shift.' 
volume: 139 URL: https://proceedings.mlr.press/v139/xie21c.html PDF: http://proceedings.mlr.press/v139/xie21c/xie21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xie21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Annie family: Xie - given: James family: Harrison - given: Chelsea family: Finn editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11393-11403 id: xie21c issued: date-parts: - 2021 - 7 - 1 firstpage: 11393 lastpage: 11403 published: 2021-07-01 00:00:00 +0000 - title: 'Batch Value-function Approximation with Only Realizability' abstract: 'We make progress in a long-standing problem of batch reinforcement learning (RL): learning Q* from an exploratory and polynomial-sized dataset, using a realizable and otherwise arbitrary function class. In fact, all existing algorithms demand function-approximation assumptions stronger than realizability, and the mounting negative evidence has led to a conjecture that sample-efficient learning is impossible in this setting (Chen & Jiang, 2019). Our algorithm, BVFT, breaks the hardness conjecture (albeit under a stronger notion of exploratory data) via a tournament procedure that reduces the learning problem to pairwise comparison, and solves the latter with the help of a state-action-space partition constructed from the compared functions. We also discuss how BVFT can be applied to model selection among other extensions and open problems.' volume: 139 URL: https://proceedings.mlr.press/v139/xie21d.html PDF: http://proceedings.mlr.press/v139/xie21d/xie21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xie21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tengyang family: Xie - given: Nan family: Jiang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11404-11413 id: xie21d issued: date-parts: - 2021 - 7 - 1 firstpage: 11404 lastpage: 11413 published: 2021-07-01 00:00:00 +0000 - title: 'Interaction-Grounded Learning' abstract: 'Consider a prosthetic arm, learning to adapt to its user’s control signals. We propose \emph{Interaction-Grounded Learning} for this novel setting, in which a learner’s goal is to interact with the environment with no grounding or explicit reward to optimize its policies. Such a problem evades common RL solutions which require an explicit reward. The learning agent observes a multidimensional \emph{context vector}, takes an \emph{action}, and then observes a multidimensional \emph{feedback vector}. This multidimensional feedback vector has \emph{no} explicit reward information. In order to succeed, the algorithm must learn how to evaluate the feedback vector to discover a latent reward signal, with which it can ground its policies without supervision. We show that in an Interaction-Grounded Learning setting, with certain natural assumptions, a learner can discover the latent reward and ground its policy for successful interaction. We provide theoretical guarantees and a proof-of-concept empirical evaluation to demonstrate the effectiveness of our proposed approach.' 
volume: 139 URL: https://proceedings.mlr.press/v139/xie21e.html PDF: http://proceedings.mlr.press/v139/xie21e/xie21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xie21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tengyang family: Xie - given: John family: Langford - given: Paul family: Mineiro - given: Ida family: Momennejad editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11414-11423 id: xie21e issued: date-parts: - 2021 - 7 - 1 firstpage: 11414 lastpage: 11423 published: 2021-07-01 00:00:00 +0000 - title: 'Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization' abstract: 'We focus on prediction problems with structured outputs that are subject to output validity constraints, e.g. pseudocode-to-code translation where the code must compile. While labeled input-output pairs are expensive to obtain, "unlabeled" outputs, i.e. outputs without corresponding inputs, are freely available (e.g. code on GitHub) and provide information about output validity. Pre-training captures this structure by training a denoiser to denoise corrupted versions of unlabeled outputs. We first show that standard fine-tuning after pre-training destroys some of this structure. We then propose composed fine-tuning, which trains a predictor composed with the pre-trained denoiser. Importantly, the denoiser is fixed to preserve output structure. Like standard fine-tuning, the predictor is also initialized with the pre-trained denoiser. We prove for two-layer ReLU networks that composed fine-tuning significantly reduces the complexity of the predictor, thus improving generalization. Empirically, we show that composed fine-tuning improves over standard fine-tuning on two pseudocode-to-code translation datasets (3% and 6% relative). The improvement is magnified on out-of-distribution (OOD) examples (4% and 25% relative), suggesting that reducing predictor complexity improves OOD extrapolation.' volume: 139 URL: https://proceedings.mlr.press/v139/xie21f.html PDF: http://proceedings.mlr.press/v139/xie21f/xie21f.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xie21f.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sang Michael family: Xie - given: Tengyu family: Ma - given: Percy family: Liang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11424-11435 id: xie21f issued: date-parts: - 2021 - 7 - 1 firstpage: 11424 lastpage: 11435 published: 2021-07-01 00:00:00 +0000 - title: 'Learning While Playing in Mean-Field Games: Convergence and Optimality' abstract: 'We study reinforcement learning in mean-field games. To achieve the Nash equilibrium, which consists of a policy and a mean-field state, existing algorithms require obtaining the optimal policy while fixing any mean-field state. In practice, however, the policy and the mean-field state evolve simultaneously, as each agent is learning while playing. To bridge such a gap, we propose a fictitious play algorithm, which alternatively updates the policy (learning) and the mean-field state (playing) by one step of policy optimization and gradient descent, respectively. 
Despite the nonstationarity induced by such an alternating scheme, we prove that the proposed algorithm converges to the Nash equilibrium with an explicit convergence rate. To the best of our knowledge, it is the first provably efficient algorithm that achieves learning while playing via alternating updates.' volume: 139 URL: https://proceedings.mlr.press/v139/xie21g.html PDF: http://proceedings.mlr.press/v139/xie21g/xie21g.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xie21g.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Qiaomin family: Xie - given: Zhuoran family: Yang - given: Zhaoran family: Wang - given: Andreea family: Minca editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11436-11447 id: xie21g issued: date-parts: - 2021 - 7 - 1 firstpage: 11436 lastpage: 11447 published: 2021-07-01 00:00:00 +0000 - title: 'Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization' abstract: 'It is well-known that stochastic gradient noise (SGN) acts as implicit regularization for deep learning and is essentially important for both optimization and generalization of deep networks. Some works attempted to artificially simulate SGN by injecting random noise to improve deep learning. However, it turned out that the injected simple random noise cannot work as well as SGN, which is anisotropic and parameter-dependent. For simulating SGN at low computational costs and without changing the learning rate or batch size, we propose the Positive-Negative Momentum (PNM) approach that is a powerful alternative to conventional Momentum in classic optimizers. The introduced PNM method maintains two approximate independent momentum terms. Then, we can control the magnitude of SGN explicitly by adjusting the momentum difference. We theoretically prove the convergence guarantee and the generalization advantage of PNM over Stochastic Gradient Descent (SGD). By incorporating PNM into the two conventional optimizers, SGD with Momentum and Adam, our extensive experiments empirically verified the significant advantage of the PNM-based variants over the corresponding conventional Momentum-based optimizers. Code: \url{https://github.com/zeke-xie/Positive-Negative-Momentum}.' volume: 139 URL: https://proceedings.mlr.press/v139/xie21h.html PDF: http://proceedings.mlr.press/v139/xie21h/xie21h.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xie21h.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zeke family: Xie - given: Li family: Yuan - given: Zhanxing family: Zhu - given: Masashi family: Sugiyama editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11448-11458 id: xie21h issued: date-parts: - 2021 - 7 - 1 firstpage: 11448 lastpage: 11458 published: 2021-07-01 00:00:00 +0000 - title: 'A Hybrid Variance-Reduced Method for Decentralized Stochastic Non-Convex Optimization' abstract: 'This paper considers decentralized stochastic optimization over a network of $n$ nodes, where each node possesses a smooth non-convex local cost function and the goal of the networked nodes is to find an $\epsilon$-accurate first-order stationary point of the sum of the local costs. 
We focus on an online setting, where each node accesses its local cost only by means of a stochastic first-order oracle that returns a noisy version of the exact gradient. In this context, we propose a novel single-loop decentralized hybrid variance-reduced stochastic gradient method, called GT-HSGD, that outperforms the existing approaches in terms of both the oracle complexity and practical implementation. The GT-HSGD algorithm implements specialized local hybrid stochastic gradient estimators that are fused over the network to track the global gradient. Remarkably, GT-HSGD achieves a network topology-independent oracle complexity of $O(n^{-1}\epsilon^{-3})$ when the required error tolerance $\epsilon$ is small enough, leading to a linear speedup with respect to the centralized optimal online variance-reduced approaches that operate on a single node. Numerical experiments are provided to illustrate our main technical results.' volume: 139 URL: https://proceedings.mlr.press/v139/xin21a.html PDF: http://proceedings.mlr.press/v139/xin21a/xin21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xin21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ran family: Xin - given: Usman family: Khan - given: Soummya family: Kar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11459-11469 id: xin21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11459 lastpage: 11469 published: 2021-07-01 00:00:00 +0000 - title: 'Explore Visual Concept Formation for Image Classification' abstract: 'Human beings acquire the ability of image classification through visual concept learning, in which the process of concept formation involves intertwined searches of common properties and concept descriptions. However, in most image classification algorithms using deep convolutional neural networks (ConvNets), the representation space is constructed under the premise that concept descriptions are fixed as one-hot codes, which limits the mining of properties and the ability to identify unseen samples. Inspired by this, we propose a learning strategy of visual concept formation (LSOVCF) based on the ConvNet, in which the two intertwined parts of concept formation, i.e. feature extraction and concept description, are learned together. First, LSOVCF uses the sample responses in the last layer of the ConvNet to induce the concept description, which is assumed to follow a Gaussian distribution and is learned as part of the training process. Second, an exploration-and-experience loss is designed for optimization, which adopts an experience cache pool to speed up convergence. Experiments show that LSOVCF improves the ability to identify unseen samples on CIFAR-10, STL10, Flower17 and ImageNet based on several backbones, from the classic VGG to the SOTA GhostNet. The code is available at \url{https://github.com/elvintanhust/LSOVCF}.' 
volume: 139 URL: https://proceedings.mlr.press/v139/xiong21a.html PDF: http://proceedings.mlr.press/v139/xiong21a/xiong21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xiong21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shengzhou family: Xiong - given: Yihua family: Tan - given: Guoyou family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11470-11479 id: xiong21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11470 lastpage: 11479 published: 2021-07-01 00:00:00 +0000 - title: 'CRPO: A New Approach for Safe Reinforcement Learning with Convergence Guarantee' abstract: 'In safe reinforcement learning (SRL) problems, an agent explores the environment to maximize an expected total reward and meanwhile avoids violation of certain constraints on a number of expected total costs. In general, such SRL problems have nonconvex objective functions subject to multiple nonconvex constraints, and hence are very challenging to solve, particularly to provide a globally optimal policy. Many popular SRL algorithms adopt a primal-dual structure which utilizes the updating of dual variables for satisfying the constraints. In contrast, we propose a primal approach, called constraint-rectified policy optimization (CRPO), which updates the policy alternatingly between objective improvement and constraint satisfaction. CRPO provides a primal-type algorithmic framework to solve SRL problems, where each policy update can take any variant of policy optimization step. To demonstrate the theoretical performance of CRPO, we adopt natural policy gradient (NPG) for each policy update step and show that CRPO achieves an $\mathcal{O}(1/\sqrt{T})$ convergence rate to the global optimal policy in the constrained policy set and an $\mathcal{O}(1/\sqrt{T})$ error bound on constraint satisfaction. This is the first finite-time analysis of primal SRL algorithms with global optimality guarantee. Our empirical results demonstrate that CRPO can outperform the existing primal-dual baseline algorithms significantly.' volume: 139 URL: https://proceedings.mlr.press/v139/xu21a.html PDF: http://proceedings.mlr.press/v139/xu21a/xu21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xu21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tengyu family: Xu - given: Yingbin family: Liang - given: Guanghui family: Lan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11480-11491 id: xu21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11480 lastpage: 11491 published: 2021-07-01 00:00:00 +0000 - title: 'To be Robust or to be Fair: Towards Fairness in Adversarial Training' abstract: 'Adversarial training algorithms have been proven to be reliable for improving machine learning models’ robustness against adversarial examples. However, we find that adversarial training algorithms tend to introduce severe disparity of accuracy and robustness between different groups of data. For instance, a PGD adversarially trained ResNet18 model on CIFAR-10 has 93% clean accuracy and 67% PGD $\ell_\infty$-8 adversarial accuracy on the class "automobile" but only 65% and 17% on the class "cat". 
This phenomenon happens in balanced datasets and does not exist in naturally trained models when only using clean samples. In this work, we empirically and theoretically show that this phenomenon can generally happen under adversarial training algorithms which minimize DNN models’ robust errors. Motivated by these findings, we propose a Fair-Robust-Learning (FRL) framework to mitigate this unfairness problem when performing adversarial defense, and experimental results validate the effectiveness of FRL.' volume: 139 URL: https://proceedings.mlr.press/v139/xu21b.html PDF: http://proceedings.mlr.press/v139/xu21b/xu21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xu21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Han family: Xu - given: Xiaorui family: Liu - given: Yaxin family: Li - given: Anil family: Jain - given: Jiliang family: Tang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11492-11501 id: xu21b issued: date-parts: - 2021 - 7 - 1 firstpage: 11492 lastpage: 11501 published: 2021-07-01 00:00:00 +0000 - title: 'Interpretable Stein Goodness-of-fit Tests on Riemannian Manifold' abstract: 'In many applications, we encounter data on Riemannian manifolds such as torus and rotation groups. Standard statistical procedures for multivariate data are not applicable to such data. In this study, we develop goodness-of-fit testing and interpretable model criticism methods for general distributions on Riemannian manifolds, including those with an intractable normalization constant. The proposed methods are based on extensions of kernel Stein discrepancy, which are derived from Stein operators on Riemannian manifolds. We discuss the connections between the proposed tests and existing ones and provide a theoretical analysis of their asymptotic Bahadur efficiency. Simulation results and real data applications show the validity and usefulness of the proposed methods.' volume: 139 URL: https://proceedings.mlr.press/v139/xu21c.html PDF: http://proceedings.mlr.press/v139/xu21c/xu21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xu21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Wenkai family: Xu - given: Takeru family: Matsuda editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11502-11513 id: xu21c issued: date-parts: - 2021 - 7 - 1 firstpage: 11502 lastpage: 11513 published: 2021-07-01 00:00:00 +0000 - title: 'Rethinking Neural vs. Matrix-Factorization Collaborative Filtering: the Theoretical Perspectives' abstract: 'The recent work by Rendle et al. (2020), based on empirical observations, argues that matrix-factorization collaborative filtering (MCF) compares favorably to neural collaborative filtering (NCF), and conjectures the dot product’s superiority over the feed-forward neural network as similarity function. In this paper, we address the comparison rigorously by answering the following questions: 1. what is the limiting expressivity of each model; 2. under practical gradient descent, to which solution does each optimization path converge; 3. how would the models generalize under the inductive and transductive learning setting. 
Our results highlight the similar expressivity of the overparameterized NCF and MCF as kernelized predictors, and reveal the relation between their optimization paths. We further show their different generalization behaviors, where MCF and NCF exhibit distinct tradeoffs in the transductive and inductive collaborative filtering settings. Lastly, by showing a novel generalization result, we reveal the critical role of correcting exposure bias for model evaluation in the inductive setting. Our results explain some of the previously observed conflicts, and we provide synthetic and real-data experiments to shed further light on this topic.' volume: 139 URL: https://proceedings.mlr.press/v139/xu21d.html PDF: http://proceedings.mlr.press/v139/xu21d/xu21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xu21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Da family: Xu - given: Chuanwei family: Ruan - given: Evren family: Korpeoglu - given: Sushant family: Kumar - given: Kannan family: Achan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11514-11524 id: xu21d issued: date-parts: - 2021 - 7 - 1 firstpage: 11514 lastpage: 11524 published: 2021-07-01 00:00:00 +0000 - title: 'Dash: Semi-Supervised Learning with Dynamic Thresholding' abstract: 'While semi-supervised learning (SSL) has received tremendous attention in many machine learning tasks due to its successful use of unlabeled data, existing SSL algorithms use either all unlabeled examples or the unlabeled examples with a fixed high-confidence prediction during the training process. However, it is possible that too many correct/wrong pseudo-labeled examples are eliminated/selected. In this work, we develop a simple yet powerful framework, whose key idea is to select a subset of training examples from the unlabeled data when performing existing SSL methods so that only the unlabeled examples with pseudo labels related to the labeled data will be used to train models. The selection is performed at each updating iteration by only keeping the examples whose losses are smaller than a given threshold that is dynamically adjusted over the iterations. Our proposed approach, Dash, enjoys adaptivity in terms of unlabeled data selection and comes with a theoretical guarantee. Specifically, we theoretically establish the convergence rate of Dash from the view of non-convex optimization. Finally, we empirically demonstrate the effectiveness of the proposed method in comparison with state-of-the-art methods on benchmarks.'
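As a rough illustration of the selection rule described in the Dash abstract above, the following sketch keeps only the unlabeled examples whose pseudo-label loss falls below a threshold that shrinks over iterations; the function names, the multiplicative decay schedule, and the parameter values are illustrative assumptions rather than the paper's exact procedure.

```python
# Illustrative sketch (not the authors' code): dynamic-threshold selection of
# pseudo-labeled examples, in the spirit of Dash. The decay schedule and the
# parameter names below are assumptions made for this example.
import numpy as np

def dynamic_threshold(rho0: float, decay: float, t: int) -> float:
    # Assumed schedule: the threshold shrinks multiplicatively each iteration.
    return rho0 * (decay ** t)

def select_unlabeled(losses: np.ndarray, rho0: float, decay: float, t: int) -> np.ndarray:
    # Keep only the unlabeled examples whose current pseudo-label loss is
    # below the dynamically adjusted threshold.
    return np.flatnonzero(losses < dynamic_threshold(rho0, decay, t))

# Toy usage: losses of six pseudo-labeled examples at iteration t = 5.
losses = np.array([0.05, 0.40, 0.80, 0.20, 1.30, 0.55])
print(select_unlabeled(losses, rho0=1.0, decay=0.9, t=5))  # -> [0 1 3 5]
```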
volume: 139 URL: https://proceedings.mlr.press/v139/xu21e.html PDF: http://proceedings.mlr.press/v139/xu21e/xu21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xu21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yi family: Xu - given: Lei family: Shang - given: Jinxing family: Ye - given: Qi family: Qian - given: Yu-Feng family: Li - given: Baigui family: Sun - given: Hao family: Li - given: Rong family: Jin editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11525-11536 id: xu21e issued: date-parts: - 2021 - 7 - 1 firstpage: 11525 lastpage: 11536 published: 2021-07-01 00:00:00 +0000 - title: 'An End-to-End Framework for Molecular Conformation Generation via Bilevel Programming' abstract: 'Predicting molecular conformations (or 3D structures) from molecular graphs is a fundamental problem in many applications. Most existing approaches are usually divided into two steps by first predicting the distances between atoms and then generating a 3D structure through optimizing a distance geometry problem. However, the distances predicted with such two-stage approaches may not be able to consistently preserve the geometry of local atomic neighborhoods, making the generated structures unsatisfying. In this paper, we propose an end-to-end solution for molecular conformation prediction called ConfVAE based on the conditional variational autoencoder framework. Specifically, the molecular graph is first encoded in a latent space, and then the 3D structures are generated by solving a principled bilevel optimization program. Extensive experiments on several benchmark data sets prove the effectiveness of our proposed approach over existing state-of-the-art approaches. Code is available at \url{https://github.com/MinkaiXu/ConfVAE-ICML21}.' volume: 139 URL: https://proceedings.mlr.press/v139/xu21f.html PDF: http://proceedings.mlr.press/v139/xu21f/xu21f.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xu21f.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Minkai family: Xu - given: Wujie family: Wang - given: Shitong family: Luo - given: Chence family: Shi - given: Yoshua family: Bengio - given: Rafael family: Gomez-Bombarelli - given: Jian family: Tang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11537-11547 id: xu21f issued: date-parts: - 2021 - 7 - 1 firstpage: 11537 lastpage: 11547 published: 2021-07-01 00:00:00 +0000 - title: 'Self-supervised Graph-level Representation Learning with Local and Global Structure' abstract: 'This paper studies unsupervised/self-supervised whole-graph representation learning, which is critical in many tasks such as molecule properties prediction in drug and material discovery. Existing methods mainly focus on preserving the local similarity structure between different graph instances but fail to discover the global semantic structure of the entire data set. In this paper, we propose a unified framework called Local-instance and Global-semantic Learning (GraphLoG) for self-supervised whole-graph representation learning. Specifically, besides preserving the local similarities, GraphLoG introduces the hierarchical prototypes to capture the global semantic clusters. 
An efficient online expectation-maximization (EM) algorithm is further developed for learning the model. We evaluate GraphLoG by pre-training it on massive unlabeled graphs followed by fine-tuning on downstream tasks. Extensive experiments on both chemical and biological benchmark data sets demonstrate the effectiveness of the proposed approach.' volume: 139 URL: https://proceedings.mlr.press/v139/xu21g.html PDF: http://proceedings.mlr.press/v139/xu21g/xu21g.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xu21g.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Minghao family: Xu - given: Hang family: Wang - given: Bingbing family: Ni - given: Hongyu family: Guo - given: Jian family: Tang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11548-11558 id: xu21g issued: date-parts: - 2021 - 7 - 1 firstpage: 11548 lastpage: 11558 published: 2021-07-01 00:00:00 +0000 - title: 'Conformal prediction interval for dynamic time-series' abstract: 'We develop a method to construct distribution-free prediction intervals for dynamic time-series, called \Verb|EnbPI| that wraps around any bootstrap ensemble estimator to construct sequential prediction intervals. \Verb|EnbPI| is closely related to the conformal prediction (CP) framework but does not require data exchangeability. Theoretically, these intervals attain finite-sample, \textit{approximately valid} marginal coverage for broad classes of regression functions and time-series with strongly mixing stochastic errors. Computationally, \Verb|EnbPI| avoids overfitting and requires neither data-splitting nor training multiple ensemble estimators; it efficiently aggregates bootstrap estimators that have been trained. In general, \Verb|EnbPI| is easy to implement, scalable to producing arbitrarily many prediction intervals sequentially, and well-suited to a wide range of regression functions. We perform extensive real-data analyses to demonstrate its effectiveness.' volume: 139 URL: https://proceedings.mlr.press/v139/xu21h.html PDF: http://proceedings.mlr.press/v139/xu21h/xu21h.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xu21h.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chen family: Xu - given: Yao family: Xie editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11559-11569 id: xu21h issued: date-parts: - 2021 - 7 - 1 firstpage: 11559 lastpage: 11569 published: 2021-07-01 00:00:00 +0000 - title: 'Learner-Private Convex Optimization' abstract: 'Convex optimization with feedback is a framework where a learner relies on iterative queries and feedback to arrive at the minimizer of a convex function. The paradigm has gained significant popularity recently thanks to its scalability in large-scale optimization and machine learning. The repeated interactions, however, expose the learner to privacy risks from eavesdropping adversaries that observe the submitted queries. In this paper, we study how to optimally obfuscate the learner’s queries in convex optimization with first-order feedback, so that their learned optimal value is provably difficult to estimate for the eavesdropping adversary. 
We consider two formulations of learner privacy: a Bayesian formulation in which the convex function is drawn randomly, and a minimax formulation in which the function is fixed and the adversary’s probability of error is measured with respect to a minimax criterion. We show that, if the learner wants to ensure the probability of the adversary estimating accurately be kept below 1/L, then the overhead in query complexity is additive in L in the minimax formulation, but multiplicative in L in the Bayesian formulation. Compared to existing learner-private sequential learning models with binary feedback, our results apply to the significantly richer family of general convex functions with full-gradient feedback. Our proofs are largely enabled by tools from the theory of Dirichlet processes, as well as more sophisticated lines of analysis aimed at measuring the amount of information leakage under a full-gradient oracle.' volume: 139 URL: https://proceedings.mlr.press/v139/xu21i.html PDF: http://proceedings.mlr.press/v139/xu21i/xu21i.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xu21i.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jiaming family: Xu - given: Kuang family: Xu - given: Dana family: Yang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11570-11580 id: xu21i issued: date-parts: - 2021 - 7 - 1 firstpage: 11570 lastpage: 11580 published: 2021-07-01 00:00:00 +0000 - title: 'Doubly Robust Off-Policy Actor-Critic: Convergence and Optimality' abstract: 'Designing off-policy reinforcement learning algorithms is typically a very challenging task, because a desirable iteration update often involves an expectation over an on-policy distribution. Prior off-policy actor-critic (AC) algorithms have introduced a new critic that uses the density ratio for adjusting the distribution mismatch in order to stabilize the convergence, but at the cost of potentially introducing high biases due to the estimation errors of both the density ratio and value function. In this paper, we develop a doubly robust off-policy AC (DR-Off-PAC) for discounted MDP, which can take advantage of learned nuisance functions to reduce estimation errors. Moreover, DR-Off-PAC adopts a single timescale structure, in which both actor and critics are updated simultaneously with constant stepsize, and is thus more sample efficient than prior algorithms that adopt either two timescale or nested-loop structure. We study the finite-time convergence rate and characterize the sample complexity for DR-Off-PAC to attain an $\epsilon$-accurate optimal policy. We also show that the overall convergence of DR-Off-PAC is doubly robust to the approximation errors that depend only on the expressive power of approximation functions. To the best of our knowledge, our study establishes the first overall sample complexity analysis for single time-scale off-policy AC algorithm.' 
volume: 139 URL: https://proceedings.mlr.press/v139/xu21j.html PDF: http://proceedings.mlr.press/v139/xu21j/xu21j.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xu21j.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tengyu family: Xu - given: Zhuoran family: Yang - given: Zhaoran family: Wang - given: Yingbin family: Liang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11581-11591 id: xu21j issued: date-parts: - 2021 - 7 - 1 firstpage: 11581 lastpage: 11591 published: 2021-07-01 00:00:00 +0000 - title: 'Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth' abstract: 'Graph Neural Networks (GNNs) have been studied through the lens of expressive power and generalization. However, their optimization properties are less well understood. We take the first step towards analyzing GNN training by studying the gradient dynamics of GNNs. First, we analyze linearized GNNs and prove that despite the non-convexity of training, convergence to a global minimum at a linear rate is guaranteed under mild assumptions that we validate on real-world graphs. Second, we study what may affect the GNNs’ training speed. Our results show that the training of GNNs is implicitly accelerated by skip connections, more depth, and/or a good label distribution. Empirical results confirm that our theoretical results for linearized GNNs align with the training behavior of nonlinear GNNs. Our results provide the first theoretical support for the success of GNNs with skip connections in terms of optimization, and suggest that deep GNNs with skip connections would be promising in practice.' volume: 139 URL: https://proceedings.mlr.press/v139/xu21k.html PDF: http://proceedings.mlr.press/v139/xu21k/xu21k.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xu21k.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Keyulu family: Xu - given: Mozhi family: Zhang - given: Stefanie family: Jegelka - given: Kenji family: Kawaguchi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11592-11602 id: xu21k issued: date-parts: - 2021 - 7 - 1 firstpage: 11592 lastpage: 11602 published: 2021-07-01 00:00:00 +0000 - title: 'Group-Sparse Matrix Factorization for Transfer Learning of Word Embeddings' abstract: 'Sparse regression has recently been applied to enable transfer learning from very limited data. We study an extension of this approach to unsupervised learning—in particular, learning word embeddings from unstructured text corpora using low-rank matrix factorization. Intuitively, when transferring word embeddings to a new domain, we expect that the embeddings change for only a small number of words—e.g., the ones with novel meanings in that domain. We propose a novel group-sparse penalty that exploits this sparsity to perform transfer learning when there is very little text data available in the target domain—e.g., a single article of text. We prove generalization bounds for our algorithm. Furthermore, we empirically evaluate its effectiveness, both in terms of prediction accuracy in downstream tasks as well as in terms of interpretability of the results.' 
volume: 139 URL: https://proceedings.mlr.press/v139/xu21l.html PDF: http://proceedings.mlr.press/v139/xu21l/xu21l.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xu21l.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kan family: Xu - given: Xuanyi family: Zhao - given: Hamsa family: Bastani - given: Osbert family: Bastani editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11603-11612 id: xu21l issued: date-parts: - 2021 - 7 - 1 firstpage: 11603 lastpage: 11612 published: 2021-07-01 00:00:00 +0000 - title: 'KNAS: Green Neural Architecture Search' abstract: 'Many existing neural architecture search (NAS) solutions rely on downstream training for architecture evaluation, which requires enormous computation. Considering that these computations bring a large carbon footprint, this paper aims to explore a green (namely, environmentally friendly) NAS solution that evaluates architectures without training. Intuitively, gradients, induced by the architecture itself, directly decide the convergence and generalization results. This motivates us to propose the gradient kernel hypothesis: Gradients can be used as a coarse-grained proxy of downstream training to evaluate randomly initialized networks. To support the hypothesis, we conduct a theoretical analysis and find a practical gradient kernel that has good correlations with training loss and validation performance. According to this hypothesis, we propose a new kernel-based architecture search approach, KNAS. Experiments show that KNAS achieves competitive results while being orders of magnitude faster than “train-then-test” paradigms on image classification tasks. Furthermore, the extremely low search cost enables its wide application. The searched network also outperforms the strong baseline RoBERTa-large on two text classification tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/xu21m.html PDF: http://proceedings.mlr.press/v139/xu21m/xu21m.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-xu21m.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jingjing family: Xu - given: Liang family: Zhao - given: Junyang family: Lin - given: Rundong family: Gao - given: Xu family: Sun - given: Hongxia family: Yang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11613-11625 id: xu21m issued: date-parts: - 2021 - 7 - 1 firstpage: 11613 lastpage: 11625 published: 2021-07-01 00:00:00 +0000 - title: 'Structured Convolutional Kernel Networks for Airline Crew Scheduling' abstract: 'Motivated by the needs of an airline crew scheduling application, we introduce structured convolutional kernel networks (Struct-CKN), which combine CKNs from Mairal et al. (2014) in a structured prediction framework that supports constraints on the outputs. CKNs are a particular kind of convolutional neural network that approximates a kernel feature map on training data, thus combining properties of deep learning with the non-parametric flexibility of kernel methods. Extending CKNs to structured outputs allows us to obtain useful initial solutions on a flight-connection dataset that can be further refined by an airline crew scheduling solver.
More specifically, we use a flight-based network modeled as a general conditional random field capable of incorporating local constraints in the learning process. Our experiments demonstrate that this approach yields significant improvements for the large-scale crew pairing problem (50,000 flights per month) over standard approaches, reducing the solution cost by 17% (a gain of millions of dollars) and the cost of global constraints by 97%.' volume: 139 URL: https://proceedings.mlr.press/v139/yaakoubi21a.html PDF: http://proceedings.mlr.press/v139/yaakoubi21a/yaakoubi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yaakoubi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yassine family: Yaakoubi - given: Francois family: Soumis - given: Simon family: Lacoste-Julien editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11626-11636 id: yaakoubi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11626 lastpage: 11636 published: 2021-07-01 00:00:00 +0000 - title: 'Mediated Uncoupled Learning: Learning Functions without Direct Input-output Correspondences' abstract: 'Ordinary supervised learning is useful when we have paired training data of input $X$ and output $Y$. However, such paired data can be difficult to collect in practice. In this paper, we consider the task of predicting $Y$ from $X$ when we have no paired data of them, but we have two separate, independent datasets of $X$ and $Y$ each observed with some mediating variable $U$, that is, we have two datasets $S_X = \{(X_i, U_i)\}$ and $S_Y = \{(U'_j, Y'_j)\}$. A naive approach is to predict $U$ from $X$ using $S_X$ and then $Y$ from $U$ using $S_Y$, but we show that this is not statistically consistent. Moreover, predicting $U$ can be more difficult than predicting $Y$ in practice, e.g., when $U$ has higher dimensionality. To circumvent the difficulty, we propose a new method that avoids predicting $U$ but directly learns $Y = f(X)$ by training $f(X)$ with $S_{X}$ to predict $h(U)$ which is trained with $S_{Y}$ to approximate $Y$. We prove statistical consistency and error bounds of our method and experimentally confirm its practical usefulness.' volume: 139 URL: https://proceedings.mlr.press/v139/yamane21a.html PDF: http://proceedings.mlr.press/v139/yamane21a/yamane21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yamane21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ikko family: Yamane - given: Junya family: Honda - given: Florian family: Yger - given: Masashi family: Sugiyama editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11637-11647 id: yamane21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11637 lastpage: 11647 published: 2021-07-01 00:00:00 +0000 - title: 'EL-Attention: Memory Efficient Lossless Attention for Generation' abstract: 'The Transformer model with multi-head attention requires caching intermediate results for efficient inference in generation tasks. However, the cache brings new memory-related costs and prevents leveraging larger batch sizes for faster inference. We propose memory-efficient lossless attention (called EL-attention) to address this issue. It avoids the heavy operations for building multi-head keys and values, so no cache is needed for them.
EL-attention constructs an ensemble of attention results by expanding query while keeping key and value shared. It produces the same result as multi-head attention with less GPU memory and faster inference speed. We conduct extensive experiments on Transformer, BART, and GPT-2 for summarization and question generation tasks. The results show EL-attention speeds up existing models by 1.6x to 5.3x without accuracy loss.' volume: 139 URL: https://proceedings.mlr.press/v139/yan21a.html PDF: http://proceedings.mlr.press/v139/yan21a/yan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yu family: Yan - given: Jiusheng family: Chen - given: Weizhen family: Qi - given: Nikhil family: Bhendawade - given: Yeyun family: Gong - given: Nan family: Duan - given: Ruofei family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11648-11658 id: yan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11648 lastpage: 11658 published: 2021-07-01 00:00:00 +0000 - title: 'Link Prediction with Persistent Homology: An Interactive View' abstract: 'Link prediction is an important learning task for graph-structured data. In this paper, we propose a novel topological approach to characterize interactions between two nodes. Our topological feature, based on the extended persistent homology, encodes rich structural information regarding the multi-hop paths connecting nodes. Based on this feature, we propose a graph neural network method that outperforms state-of-the-arts on different benchmarks. As another contribution, we propose a novel algorithm to more efficiently compute the extended persistence diagrams for graphs. This algorithm can be generally applied to accelerate many other topological methods for graph learning tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/yan21b.html PDF: http://proceedings.mlr.press/v139/yan21b/yan21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yan21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zuoyu family: Yan - given: Tengfei family: Ma - given: Liangcai family: Gao - given: Zhi family: Tang - given: Chao family: Chen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11659-11669 id: yan21b issued: date-parts: - 2021 - 7 - 1 firstpage: 11659 lastpage: 11669 published: 2021-07-01 00:00:00 +0000 - title: 'CATE: Computation-aware Neural Architecture Encoding with Transformers' abstract: 'Recent works (White et al., 2020a; Yan et al., 2020) demonstrate the importance of architecture encodings in Neural Architecture Search (NAS). These encodings encode either structure or computation information of the neural architectures. Compared to structure-aware encodings, computation-aware encodings map architectures with similar accuracies to the same region, which improves the downstream architecture search performance (Zhang et al., 2019; White et al., 2020a). In this work, we introduce a Computation-Aware Transformer-based Encoding method called CATE. Different from existing computation-aware encodings based on fixed transformation (e.g. 
path encoding), CATE employs a pairwise pre-training scheme to learn computation-aware encodings using Transformers with cross-attention. Such learned encodings contain dense and contextualized computation information of neural architectures. We compare CATE with eleven encodings under three major encoding-dependent NAS subroutines in both small and large search spaces. Our experiments show that CATE is beneficial to the downstream search, especially in the large search space. Moreover, the outside search space experiment demonstrates its superior generalization ability beyond the search space on which it was trained. Our code is available at: https://github.com/MSU-MLSys-Lab/CATE.' volume: 139 URL: https://proceedings.mlr.press/v139/yan21c.html PDF: http://proceedings.mlr.press/v139/yan21c/yan21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yan21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shen family: Yan - given: Kaiqiang family: Song - given: Fei family: Liu - given: Mi family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11670-11681 id: yan21c issued: date-parts: - 2021 - 7 - 1 firstpage: 11670 lastpage: 11681 published: 2021-07-01 00:00:00 +0000 - title: 'On Perceptual Lossy Compression: The Cost of Perceptual Reconstruction and An Optimal Training Framework' abstract: 'Lossy compression algorithms are typically designed to achieve the lowest possible distortion at a given bit rate. However, recent studies show that pursuing high perceptual quality would lead to increase of the lowest achievable distortion (e.g., MSE). This paper provides nontrivial results theoretically revealing that, 1) the cost of achieving perfect perception quality is exactly a doubling of the lowest achievable MSE distortion, 2) an optimal encoder for the “classic” rate-distortion problem is also optimal for the perceptual compression problem, 3) distortion loss is unnecessary for training a perceptual decoder. Further, we propose a novel training framework to achieve the lowest MSE distortion under perfect perception constraint at a given bit rate. This framework uses a GAN with discriminator conditioned on an MSE-optimized encoder, which is superior over the traditional framework using distortion plus adversarial loss. Experiments are provided to verify the theoretical finding and demonstrate the superiority of the proposed training framework.' volume: 139 URL: https://proceedings.mlr.press/v139/yan21d.html PDF: http://proceedings.mlr.press/v139/yan21d/yan21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yan21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zeyu family: Yan - given: Fei family: Wen - given: Rendong family: Ying - given: Chao family: Ma - given: Peilin family: Liu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11682-11692 id: yan21d issued: date-parts: - 2021 - 7 - 1 firstpage: 11682 lastpage: 11692 published: 2021-07-01 00:00:00 +0000 - title: 'CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection' abstract: 'We investigate the adversarial robustness of CNNs from the perspective of channel-wise activations. 
By comparing normally trained and adversarially trained models, we observe that adversarial training (AT) robustifies CNNs by aligning the channel-wise activations of adversarial data with those of their natural counterparts. However, the channels that are \textit{negatively-relevant} (NR) to predictions are still over-activated when processing adversarial data. Besides, we also observe that AT does not result in similar robustness for all classes. For the robust classes, channels with larger activation magnitudes are usually more \textit{positively-relevant} (PR) to predictions, but this alignment does not hold for the non-robust classes. Given these observations, we hypothesize that suppressing NR channels and aligning PR ones with their relevances further enhances the robustness of CNNs under AT. To examine this hypothesis, we introduce a novel mechanism, \textit{i.e.}, \underline{C}hannel-wise \underline{I}mportance-based \underline{F}eature \underline{S}election (CIFS). The CIFS manipulates channels’ activations of certain layers by generating non-negative multipliers to these channels based on their relevances to predictions. Extensive experiments on benchmark datasets including CIFAR10 and SVHN clearly verify the hypothesis and CIFS’s effectiveness of robustifying CNNs.' volume: 139 URL: https://proceedings.mlr.press/v139/yan21e.html PDF: http://proceedings.mlr.press/v139/yan21e/yan21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yan21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hanshu family: Yan - given: Jingfeng family: Zhang - given: Gang family: Niu - given: Jiashi family: Feng - given: Vincent family: Tan - given: Masashi family: Sugiyama editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11693-11703 id: yan21e issued: date-parts: - 2021 - 7 - 1 firstpage: 11693 lastpage: 11703 published: 2021-07-01 00:00:00 +0000 - title: 'Exact Gap between Generalization Error and Uniform Convergence in Random Feature Models' abstract: 'Recent work showed that there could be a large gap between the classical uniform convergence bound and the actual test error of zero-training-error predictors (interpolators) such as deep neural networks. To better understand this gap, we study the uniform convergence in the nonlinear random feature model and perform a precise theoretical analysis on how uniform convergence depends on the sample size and the number of parameters. We derive and prove analytical expressions for three quantities in this model: 1) classical uniform convergence over norm balls, 2) uniform convergence over interpolators in the norm ball (recently proposed by \citet{zhou2021uniform}), and 3) the risk of minimum norm interpolator. We show that, in the setting where the classical uniform convergence bound is vacuous (diverges to $\infty$), uniform convergence over the interpolators still gives a non-trivial bound of the test error of interpolating solutions. We also showcase a different setting where classical uniform convergence bound is non-vacuous, but uniform convergence over interpolators can give an improved sample complexity guarantee. Our result provides a first exact comparison between the test errors and uniform convergence bounds for interpolators beyond simple linear models.' 
volume: 139 URL: https://proceedings.mlr.press/v139/yang21a.html PDF: http://proceedings.mlr.press/v139/yang21a/yang21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yang21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zitong family: Yang - given: Yu family: Bai - given: Song family: Mei editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11704-11715 id: yang21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11704 lastpage: 11715 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Optimal Auctions with Correlated Valuations from Samples' abstract: 'In single-item auction design, it is well known due to Cremer and McLean that when bidders’ valuations are drawn from a correlated prior distribution, the auctioneer can extract full social surplus as revenue. However, in most real-world applications, the prior is usually unknown and can only be learned from historical data. In this work, we investigate the robustness of the optimal auction with correlated valuations via sample complexity analysis. We prove upper and lower bounds on the number of samples from the unknown prior required to learn a (1-epsilon)-approximately optimal auction. Our results reinforce the common belief that optimal correlated auctions are sensitive to the distribution parameters and hard to learn unless the prior distribution is well-behaved.' volume: 139 URL: https://proceedings.mlr.press/v139/yang21b.html PDF: http://proceedings.mlr.press/v139/yang21b/yang21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yang21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chunxue family: Yang - given: Xiaohui family: Bei editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11716-11726 id: yang21b issued: date-parts: - 2021 - 7 - 1 firstpage: 11716 lastpage: 11726 published: 2021-07-01 00:00:00 +0000 - title: 'Tensor Programs IV: Feature Learning in Infinite-Width Neural Networks' abstract: 'As its width tends to infinity, a deep neural network’s behavior under gradient descent can become simplified and predictable (e.g. given by the Neural Tangent Kernel (NTK)), if it is parametrized appropriately (e.g. the NTK parametrization). However, we show that the standard and NTK parametrizations of a neural network do not admit infinite-width limits that can *learn* features, which is crucial for pretraining and transfer learning such as with BERT. We propose simple modifications to the standard parametrization to allow for feature learning in the limit. Using the *Tensor Programs* technique, we derive explicit formulas for such limits. On Word2Vec and few-shot learning on Omniglot via MAML, two canonical tasks that rely crucially on feature learning, we compute these limits exactly. We find that they outperform both NTK baselines and finite-width networks, with the latter approaching the infinite-width feature learning performance as width increases.' 
volume: 139 URL: https://proceedings.mlr.press/v139/yang21c.html PDF: http://proceedings.mlr.press/v139/yang21c/yang21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yang21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Greg family: Yang - given: Edward J. family: Hu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11727-11737 id: yang21c issued: date-parts: - 2021 - 7 - 1 firstpage: 11727 lastpage: 11737 published: 2021-07-01 00:00:00 +0000 - title: 'LARNet: Lie Algebra Residual Network for Face Recognition' abstract: 'Face recognition is an important yet challenging problem in computer vision. A major challenge in practical face recognition applications lies in significant variations between profile and frontal faces. Traditional techniques address this challenge either by synthesizing frontal faces or by pose invariant learning. In this paper, we propose a novel method with Lie algebra theory to explore how face rotation in the 3D space affects the deep feature generation process of convolutional neural networks (CNNs). We prove that face rotation in the image space is equivalent to an additive residual component in the feature space of CNNs, which is determined solely by the rotation. Based on this theoretical finding, we further design a Lie Algebraic Residual Network (LARNet) for tackling pose robust face recognition. Our LARNet consists of a residual subnet for decoding rotation information from input face images, and a gating subnet to learn rotation magnitude for controlling the strength of the residual component contributing to the feature learning process. Comprehensive experimental evaluations on both frontal-profile face datasets and general face recognition datasets convincingly demonstrate that our method consistently outperforms the state-of-the-art ones.' volume: 139 URL: https://proceedings.mlr.press/v139/yang21d.html PDF: http://proceedings.mlr.press/v139/yang21d/yang21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yang21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xiaolong family: Yang - given: Xiaohong family: Jia - given: Dihong family: Gong - given: Dong-Ming family: Yan - given: Zhifeng family: Li - given: Wei family: Liu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11738-11750 id: yang21d issued: date-parts: - 2021 - 7 - 1 firstpage: 11738 lastpage: 11750 published: 2021-07-01 00:00:00 +0000 - title: 'BASGD: Buffered Asynchronous SGD for Byzantine Learning' abstract: 'Distributed learning has become a hot research topic due to its wide application in cluster-based large-scale learning, federated learning, edge computing and so on. Most traditional distributed learning methods typically assume no failure or attack. However, many unexpected cases, such as communication failure and even malicious attack, may happen in real applications. Hence, Byzantine learning (BL), which refers to distributed learning with failure or attack, has recently attracted much attention. Most existing BL methods are synchronous, which are impractical in some applications due to heterogeneous or offline workers. In these cases, asynchronous BL (ABL) is usually preferred. 
In this paper, we propose a novel method, called buffered asynchronous stochastic gradient descent (BASGD), for ABL. To the best of our knowledge, BASGD is the first ABL method that can resist malicious attack without storing any instances on server. Compared with those methods which need to store instances on server, BASGD has a wider scope of application. BASGD is proved to be convergent, and be able to resist failure or attack. Empirical results show that BASGD significantly outperforms vanilla asynchronous stochastic gradient descent (ASGD) and other ABL baselines when there exists failure or attack on workers.' volume: 139 URL: https://proceedings.mlr.press/v139/yang21e.html PDF: http://proceedings.mlr.press/v139/yang21e/yang21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yang21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yi-Rui family: Yang - given: Wu-Jun family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11751-11761 id: yang21e issued: date-parts: - 2021 - 7 - 1 firstpage: 11751 lastpage: 11761 published: 2021-07-01 00:00:00 +0000 - title: 'Tensor Programs IIb: Architectural Universality Of Neural Tangent Kernel Training Dynamics' abstract: 'Yang (2020) recently showed that the Neural Tangent Kernel (NTK) at initialization has an infinite-width limit for a large class of architectures including modern staples such as ResNet and Transformers. However, their analysis does not apply to training. Here, we show the same neural networks (in the so-called NTK parametrization) during training follow a kernel gradient descent dynamics in function space, where the kernel is the infinite-width NTK. This completes the proof of the architectural universality of NTK behavior. To achieve this result, we apply the Tensor Programs technique: Write the entire SGD dynamics inside a Tensor Program and analyze it via the Master Theorem. To facilitate this proof, we develop a graphical notation for Tensor Programs, which we believe is also an important contribution toward the pedagogy and exposition of the Tensor Programs technique.' volume: 139 URL: https://proceedings.mlr.press/v139/yang21f.html PDF: http://proceedings.mlr.press/v139/yang21f/yang21f.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yang21f.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Greg family: Yang - given: Etai family: Littwin editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11762-11772 id: yang21f issued: date-parts: - 2021 - 7 - 1 firstpage: 11762 lastpage: 11772 published: 2021-07-01 00:00:00 +0000 - title: 'Graph Neural Networks Inspired by Classical Iterative Algorithms' abstract: 'Despite the recent success of graph neural networks (GNN), common architectures often exhibit significant limitations, including sensitivity to oversmoothing, long-range dependencies, and spurious edges, e.g., as can occur as a result of graph heterophily or adversarial attacks. To at least partially address these issues within a simple transparent framework, we consider a new family of GNN layers designed to mimic and integrate the update rules of two classical iterative algorithms, namely, proximal gradient descent and iterative reweighted least squares (IRLS). 
The former defines an extensible base GNN architecture that is immune to oversmoothing while nonetheless capturing long-range dependencies by allowing arbitrary propagation steps. In contrast, the latter produces a novel attention mechanism that is explicitly anchored to an underlying end-to-end energy function, contributing stability with respect to edge uncertainty. When combined, we obtain an extremely simple yet robust model that we evaluate across disparate scenarios including standardized benchmarks, adversarially perturbed graphs, graphs with heterophily, and graphs involving long-range dependencies. In doing so, we compare against SOTA GNN approaches that have been explicitly designed for the respective task, achieving competitive or superior node classification accuracy. Our code is available at https://github.com/FFTYYY/TWIRLS; for an extended version of this work, please see https://arxiv.org/abs/2103.06064.' volume: 139 URL: https://proceedings.mlr.press/v139/yang21g.html PDF: http://proceedings.mlr.press/v139/yang21g/yang21g.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yang21g.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yongyi family: Yang - given: Tang family: Liu - given: Yangkun family: Wang - given: Jinjing family: Zhou - given: Quan family: Gan - given: Zhewei family: Wei - given: Zheng family: Zhang - given: Zengfeng family: Huang - given: David family: Wipf editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11773-11783 id: yang21g issued: date-parts: - 2021 - 7 - 1 firstpage: 11773 lastpage: 11783 published: 2021-07-01 00:00:00 +0000 - title: 'Representation Matters: Offline Pretraining for Sequential Decision Making' abstract: 'The recent success of supervised learning methods on ever larger offline datasets has spurred interest in the reinforcement learning (RL) field to investigate whether the same paradigms can be translated to RL algorithms. This research area, known as offline RL, has largely focused on offline policy optimization, aiming to find a return-maximizing policy exclusively from offline data. In this paper, we consider a slightly different approach to incorporating offline data into sequential decision-making. We aim to answer the question, what unsupervised objectives applied to offline datasets are able to learn state representations which elevate performance on downstream tasks, whether those downstream tasks be online RL, imitation learning from expert demonstrations, or even offline policy optimization based on the same offline dataset? Through a variety of experiments utilizing standard offline RL datasets, we find that the use of pretraining with unsupervised learning objectives can dramatically improve the performance of policy learning algorithms that otherwise yield mediocre performance on their own. Extensive ablations further provide insights into what components of these unsupervised objectives – e.g., reward prediction, continuous or discrete representations, pretraining or finetuning – are most important and in which settings.'
volume: 139 URL: https://proceedings.mlr.press/v139/yang21h.html PDF: http://proceedings.mlr.press/v139/yang21h/yang21h.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yang21h.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mengjiao family: Yang - given: Ofir family: Nachum editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11784-11794 id: yang21h issued: date-parts: - 2021 - 7 - 1 firstpage: 11784 lastpage: 11794 published: 2021-07-01 00:00:00 +0000 - title: 'Accelerating Safe Reinforcement Learning with Constraint-mismatched Baseline Policies' abstract: 'We consider the problem of reinforcement learning when provided with (1) a baseline control policy and (2) a set of constraints that the learner must satisfy. The baseline policy can arise from demonstration data or a teacher agent and may provide useful cues for learning, but it might also be sub-optimal for the task at hand, and is not guaranteed to satisfy the specified constraints, which might encode safety, fairness or other application-specific requirements. In order to safely learn from baseline policies, we propose an iterative policy optimization algorithm that alternates between maximizing expected return on the task, minimizing distance to the baseline policy, and projecting the policy onto the constraint-satisfying set. We analyze our algorithm theoretically and provide a finite-time convergence guarantee. In our experiments on five different control tasks, our algorithm consistently outperforms several state-of-the-art baselines, achieving 10 times fewer constraint violations and 40% higher reward on average.' volume: 139 URL: https://proceedings.mlr.press/v139/yang21i.html PDF: http://proceedings.mlr.press/v139/yang21i/yang21i.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yang21i.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tsung-Yen family: Yang - given: Justinian family: Rosca - given: Karthik family: Narasimhan - given: Peter J family: Ramadge editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11795-11807 id: yang21i issued: date-parts: - 2021 - 7 - 1 firstpage: 11795 lastpage: 11807 published: 2021-07-01 00:00:00 +0000 - title: 'Voice2Series: Reprogramming Acoustic Models for Time Series Classification' abstract: 'Learning to classify time series with limited data is a practical yet challenging problem. Current methods are primarily based on hand-designed feature extraction rules or domain-specific data augmentation. Motivated by the advances in deep speech processing models and the fact that voice data are univariate temporal signals, in this paper we propose Voice2Series (V2S), a novel end-to-end approach that reprograms acoustic models for time series classification, through input transformation learning and output label mapping. Leveraging the representation learning power of a large-scale pre-trained speech processing model, on 31 different time series tasks we show that V2S outperforms or is on par with state-of-the-art methods on 22 tasks, and improves their average accuracy by 1.72%.
We further provide theoretical justification of V2S by proving its population risk is upper bounded by the source risk and a Wasserstein distance accounting for feature alignment via reprogramming. Our results offer new and effective means to time series classification.' volume: 139 URL: https://proceedings.mlr.press/v139/yang21j.html PDF: http://proceedings.mlr.press/v139/yang21j/yang21j.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yang21j.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chao-Han Huck family: Yang - given: Yun-Yun family: Tsai - given: Pin-Yu family: Chen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11808-11819 id: yang21j issued: date-parts: - 2021 - 7 - 1 firstpage: 11808 lastpage: 11819 published: 2021-07-01 00:00:00 +0000 - title: 'When All We Need is a Piece of the Pie: A Generic Framework for Optimizing Two-way Partial AUC' abstract: 'The Area Under the ROC Curve (AUC) is a crucial metric for machine learning, which evaluates the average performance over all possible True Positive Rates (TPRs) and False Positive Rates (FPRs). Based on the knowledge that a skillful classifier should simultaneously embrace a high TPR and a low FPR, we turn to study a more general variant called Two-way Partial AUC (TPAUC), where only the region with $\mathsf{TPR} \ge \alpha, \mathsf{FPR} \le \beta$ is included in the area. Moreover, a recent work shows that the TPAUC is essentially inconsistent with the existing Partial AUC metrics where only the FPR range is restricted, opening a new problem to seek solutions to leverage high TPAUC. Motivated by this, we present the first trial in this paper to optimize this new metric. The critical challenge along this course lies in the difficulty of performing gradient-based optimization with end-to-end stochastic training, even with a proper choice of surrogate loss. To address this issue, we propose a generic framework to construct surrogate optimization problems, which supports efficient end-to-end training with deep-learning. Moreover, our theoretical analyses show that: 1) the objective function of the surrogate problems will achieve an upper bound of the original problem under mild conditions, and 2) optimizing the surrogate problems leads to good generalization performance in terms of TPAUC with a high probability. Finally, empirical studies over several benchmark datasets speak to the efficacy of our framework.' 
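As a concrete reading of the TPAUC metric described in the abstract above (the area under the ROC curve restricted to TPR ≥ α and FPR ≤ β), the sketch below estimates it empirically from classifier scores; the helper name, the raw unnormalised-area convention, and the toy data are assumptions for illustration only, not the authors' estimator.

```python
# Illustrative sketch (not the authors' code): an empirical estimate of the
# two-way partial AUC, i.e. the ROC area restricted to TPR >= alpha and
# FPR <= beta. The unnormalised-area convention here is an assumption.
import numpy as np

def two_way_partial_auc(y_true, scores, alpha=0.5, beta=0.5):
    y_true = np.asarray(y_true, dtype=float)
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(-scores)                     # rank examples by score, descending
    y = y_true[order]
    tpr = np.concatenate(([0.0], np.cumsum(y) / y.sum()))
    fpr = np.concatenate(([0.0], np.cumsum(1 - y) / (1 - y).sum()))
    keep = fpr <= beta                              # restrict to the low-FPR region
    height = np.maximum(tpr[keep] - alpha, 0.0)     # count only TPR above alpha
    x = fpr[keep]
    return float(np.sum(np.diff(x) * (height[1:] + height[:-1]) / 2))  # trapezoid rule

# Toy usage: eight examples, four positives.
y = [1, 1, 0, 1, 0, 0, 1, 0]
s = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
print(two_way_partial_auc(y, s, alpha=0.25, beta=0.5))  # -> 0.1875
```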
volume: 139 URL: https://proceedings.mlr.press/v139/yang21k.html PDF: http://proceedings.mlr.press/v139/yang21k/yang21k.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yang21k.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zhiyong family: Yang - given: Qianqian family: Xu - given: Shilong family: Bao - given: Yuan family: He - given: Xiaochun family: Cao - given: Qingming family: Huang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11820-11829 id: yang21k issued: date-parts: - 2021 - 7 - 1 firstpage: 11820 lastpage: 11829 published: 2021-07-01 00:00:00 +0000 - title: 'Rethinking Rotated Object Detection with Gaussian Wasserstein Distance Loss' abstract: 'Boundary discontinuity and its inconsistency with the final detection metric have been the bottleneck in designing regression losses for rotated object detection. In this paper, we propose a novel regression loss based on the Gaussian Wasserstein distance as a fundamental approach to solve the problem. Specifically, the rotated bounding box is converted to a 2-D Gaussian distribution, which enables the non-differentiable rotational IoU-induced loss to be approximated by the Gaussian Wasserstein distance (GWD), which can be learned efficiently by gradient back-propagation. GWD remains informative for learning even when there is no overlap between two rotated bounding boxes, which is often the case for small object detection. Thanks to its three unique properties, GWD can also elegantly solve the boundary discontinuity and square-like problem regardless of how the bounding box is defined. Experiments on five datasets using different detectors show the effectiveness of our approach, and code is available at https://github.com/yangxue0827/RotationDetection.' volume: 139 URL: https://proceedings.mlr.press/v139/yang21l.html PDF: http://proceedings.mlr.press/v139/yang21l/yang21l.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yang21l.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xue family: Yang - given: Junchi family: Yan - given: Qi family: Ming - given: Wentao family: Wang - given: Xiaopeng family: Zhang - given: Qi family: Tian editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11830-11841 id: yang21l issued: date-parts: - 2021 - 7 - 1 firstpage: 11830 lastpage: 11841 published: 2021-07-01 00:00:00 +0000 - title: 'Delving into Deep Imbalanced Regression' abstract: 'Real-world data often exhibit imbalanced distributions, where certain target values have significantly fewer observations. Existing techniques for dealing with imbalanced data focus on targets with categorical indices, i.e., different classes. However, many tasks involve continuous targets, where hard boundaries between classes do not exist. We define Deep Imbalanced Regression (DIR) as learning from such imbalanced data with continuous targets, dealing with potential missing data for certain target values, and generalizing to the entire target range. Motivated by the intrinsic difference between categorical and continuous label space, we propose distribution smoothing for both labels and features, which explicitly acknowledges the effects of nearby targets, and calibrates both label and learned feature distributions.
We curate and benchmark large-scale DIR datasets from common real-world tasks in computer vision, natural language processing, and healthcare domains. Extensive experiments verify the superior performance of our strategies. Our work fills the gap in benchmarks and techniques for practical imbalanced regression problems. Code and data are available at: https://github.com/YyzHarry/imbalanced-regression.' volume: 139 URL: https://proceedings.mlr.press/v139/yang21m.html PDF: http://proceedings.mlr.press/v139/yang21m/yang21m.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yang21m.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yuzhe family: Yang - given: Kaiwen family: Zha - given: Yingcong family: Chen - given: Hao family: Wang - given: Dina family: Katabi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11842-11851 id: yang21m issued: date-parts: - 2021 - 7 - 1 firstpage: 11842 lastpage: 11851 published: 2021-07-01 00:00:00 +0000 - title: 'Backpropagated Neighborhood Aggregation for Accurate Training of Spiking Neural Networks' abstract: 'While Backpropagation (BP) has been applied to spiking neural networks (SNNs) achieving encouraging results, a key challenge involved is to backpropagate a differentiable continuous-valued loss over layers of spiking neurons exhibiting discontinuous all-or-none firing activities. Existing methods deal with this difficulty by introducing compromises that come with their own limitations, leading to potential performance degradation. We propose a novel BP-like method, called neighborhood aggregation (NA), which computes accurate error gradients guiding weight updates that may lead to discontinuous modifications of firing activities. NA achieves this goal by aggregating the error gradient over multiple spike trains in the neighborhood of the present spike train of each neuron. The employed aggregation is based on a generalized finite difference approximation with a proposed distance metric quantifying the similarity between a given pair of spike trains. Our experiments show that the proposed NA algorithm delivers state-of-the-art performance for SNN training on several datasets including CIFAR10.' volume: 139 URL: https://proceedings.mlr.press/v139/yang21n.html PDF: http://proceedings.mlr.press/v139/yang21n/yang21n.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yang21n.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yukun family: Yang - given: Wenrui family: Zhang - given: Peng family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11852-11862 id: yang21n issued: date-parts: - 2021 - 7 - 1 firstpage: 11852 lastpage: 11862 published: 2021-07-01 00:00:00 +0000 - title: 'SimAM: A Simple, Parameter-Free Attention Module for Convolutional Neural Networks' abstract: 'In this paper, we propose a conceptually simple but very effective attention module for Convolutional Neural Networks (ConvNets). In contrast to existing channel-wise and spatial-wise attention modules, our module instead infers 3-D attention weights for the feature map in a layer without adding parameters to the original networks. 
Specifically, we draw on well-known neuroscience theories and propose to optimize an energy function to find the importance of each neuron. We further derive a fast closed-form solution for the energy function, and show that the solution can be implemented in less than ten lines of code. Another advantage of the module is that most of the operators are selected based on the solution to the defined energy function, avoiding extensive effort on structure tuning. Quantitative evaluations on various visual tasks demonstrate that the proposed module is flexible and effective in improving the representation ability of many ConvNets. Our code is available at Pytorch-SimAM.' volume: 139 URL: https://proceedings.mlr.press/v139/yang21o.html PDF: http://proceedings.mlr.press/v139/yang21o/yang21o.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yang21o.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Lingxiao family: Yang - given: Ru-Yuan family: Zhang - given: Lida family: Li - given: Xiaohua family: Xie editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11863-11874 id: yang21o issued: date-parts: - 2021 - 7 - 1 firstpage: 11863 lastpage: 11874 published: 2021-07-01 00:00:00 +0000 - title: 'HAWQ-V3: Dyadic Neural Network Quantization' abstract: 'Current low-precision quantization algorithms often have the hidden cost of conversion back and forth from floating point to quantized integer values. This hidden cost limits the latency improvement realized by quantizing Neural Networks. To address this, we present HAWQ-V3, a novel mixed-precision integer-only quantization framework. The contributions of HAWQ-V3 are the following: (i) An integer-only inference where the entire computational graph is performed only with integer multiplication, addition, and bit shifting, without any floating point operations or even integer division; (ii) A novel hardware-aware mixed-precision quantization method where the bit-precision is calculated by solving an integer linear programming problem that balances the trade-off between model perturbation and other constraints, e.g., memory footprint and latency; (iii) Direct hardware deployment and open source contribution for 4-bit uniform/mixed-precision quantization in TVM, achieving an average speed up of 1.45x for uniform 4-bit, as compared to uniform 8-bit for ResNet50 on T4 GPUs; and (iv) extensive evaluation of the proposed methods on ResNet18/50 and InceptionV3, for various model compression levels with/without mixed precision. For ResNet50, our INT8 quantization achieves an accuracy of 77.58%, which is 2.68% higher than prior integer-only work, and our mixed-precision INT4/8 quantization can reduce INT8 latency by 23% and still achieve 76.73% accuracy. Our framework and the TVM implementation have been open sourced (HAWQ, 2020).'
volume: 139 URL: https://proceedings.mlr.press/v139/yao21a.html PDF: http://proceedings.mlr.press/v139/yao21a/yao21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yao21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zhewei family: Yao - given: Zhen family: Dong - given: Zhangcheng family: Zheng - given: Amir family: Gholami - given: Jiali family: Yu - given: Eric family: Tan - given: Leyuan family: Wang - given: Qijing family: Huang - given: Yida family: Wang - given: Michael family: Mahoney - given: Kurt family: Keutzer editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11875-11886 id: yao21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11875 lastpage: 11886 published: 2021-07-01 00:00:00 +0000 - title: 'Improving Generalization in Meta-learning via Task Augmentation' abstract: 'Meta-learning has proven to be a powerful paradigm for transferring the knowledge from previous tasks to facilitate the learning of a novel task. Current dominant algorithms train a well-generalized model initialization which is adapted to each task via the support set. The crux lies in optimizing the generalization capability of the initialization, which is measured by the performance of the adapted model on the query set of each task. Unfortunately, this generalization measure, evidenced by empirical results, pushes the initialization to overfit the meta-training tasks, which significantly impairs the generalization and adaptation to novel tasks. To address this issue, we actively augment a meta-training task with “more data” when evaluating the generalization. Concretely, we propose two task augmentation methods, including MetaMix and Channel Shuffle. MetaMix linearly combines features and labels of samples from both the support and query sets. For each class of samples, Channel Shuffle randomly replaces a subset of their channels with the corresponding ones from a different class. Theoretical studies show how task augmentation improves the generalization of meta-learning. Moreover, both MetaMix and Channel Shuffle outperform state-of-the-art results by a large margin across many datasets and are compatible with existing meta-learning algorithms.' volume: 139 URL: https://proceedings.mlr.press/v139/yao21b.html PDF: http://proceedings.mlr.press/v139/yao21b/yao21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yao21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Huaxiu family: Yao - given: Long-Kai family: Huang - given: Linjun family: Zhang - given: Ying family: Wei - given: Li family: Tian - given: James family: Zou - given: Junzhou family: Huang - given: Zhenhui () family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11887-11897 id: yao21b issued: date-parts: - 2021 - 7 - 1 firstpage: 11887 lastpage: 11897 published: 2021-07-01 00:00:00 +0000 - title: 'Deep Learning for Functional Data Analysis with Adaptive Basis Layers' abstract: 'Despite their widespread success, the application of deep neural networks to functional data remains scarce today. The infinite dimensionality of functional data means standard learning algorithms can be applied only after appropriate dimension reduction, typically achieved via basis expansions. 
Currently, these bases are chosen a priori without the information for the task at hand and thus may not be effective for the designated task. We instead propose to adaptively learn these bases in an end-to-end fashion. We introduce neural networks that employ a new Basis Layer whose hidden units are each basis functions themselves implemented as a micro neural network. Our architecture learns to apply parsimonious dimension reduction to functional inputs that focuses only on information relevant to the target rather than irrelevant variation in the input function. Across numerous classification/regression tasks with functional data, our method empirically outperforms other types of neural networks, and we prove that our approach is statistically consistent with low generalization error.' volume: 139 URL: https://proceedings.mlr.press/v139/yao21c.html PDF: http://proceedings.mlr.press/v139/yao21c/yao21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yao21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Junwen family: Yao - given: Jonas family: Mueller - given: Jane-Ling family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11898-11908 id: yao21c issued: date-parts: - 2021 - 7 - 1 firstpage: 11898 lastpage: 11908 published: 2021-07-01 00:00:00 +0000 - title: 'Addressing Catastrophic Forgetting in Few-Shot Problems' abstract: 'Neural networks are known to suffer from catastrophic forgetting when trained on sequential datasets. While there have been numerous attempts to solve this problem in large-scale supervised classification, little has been done to overcome catastrophic forgetting in few-shot classification problems. We demonstrate that the popular gradient-based model-agnostic meta-learning algorithm (MAML) indeed suffers from catastrophic forgetting and introduce a Bayesian online meta-learning framework that tackles this problem. Our framework utilises Bayesian online learning and meta-learning along with Laplace approximation and variational inference to overcome catastrophic forgetting in few-shot classification problems. The experimental evaluations demonstrate that our framework can effectively achieve this goal in comparison with various baselines. As an additional utility, we also demonstrate empirically that our framework is capable of meta-learning on sequentially arriving few-shot tasks from a stationary task distribution.' volume: 139 URL: https://proceedings.mlr.press/v139/yap21a.html PDF: http://proceedings.mlr.press/v139/yap21a/yap21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yap21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Pauching family: Yap - given: Hippolyt family: Ritter - given: David family: Barber editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11909-11919 id: yap21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11909 lastpage: 11919 published: 2021-07-01 00:00:00 +0000 - title: 'Reinforcement Learning with Prototypical Representations' abstract: 'Learning effective representations in image-based environments is crucial for sample efficient Reinforcement Learning (RL). 
Unfortunately, in RL, representation learning is confounded with the exploratory experience of the agent – learning a useful representation requires diverse data, while effective exploration is only possible with coherent representations. Furthermore, we would like to learn representations that not only generalize across tasks but also accelerate downstream exploration for efficient task-specific training. To address these challenges we propose Proto-RL, a self-supervised framework that ties representation learning with exploration through prototypical representations. These prototypes simultaneously serve as a summarization of the exploratory experience of an agent as well as a basis for representing observations. We pre-train these task-agnostic representations and prototypes on environments without downstream task information. This enables state-of-the-art downstream policy learning on a set of difficult continuous control tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/yarats21a.html PDF: http://proceedings.mlr.press/v139/yarats21a/yarats21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yarats21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Denis family: Yarats - given: Rob family: Fergus - given: Alessandro family: Lazaric - given: Lerrel family: Pinto editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11920-11931 id: yarats21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11920 lastpage: 11931 published: 2021-07-01 00:00:00 +0000 - title: 'Elementary superexpressive activations' abstract: 'We call a finite family of activation functions \emph{superexpressive} if any multivariate continuous function can be approximated by a neural network that uses these activations and has a fixed architecture only depending on the number of input variables (i.e., to achieve any accuracy we only need to adjust the weights, without increasing the number of neurons). Previously, it was known that superexpressive activations exist, but their form was quite complex. We give examples of very simple superexpressive families: for example, we prove that the family $\{sin, arcsin\}$ is superexpressive. We also show that most practical activations (not involving periodic functions) are not superexpressive.' volume: 139 URL: https://proceedings.mlr.press/v139/yarotsky21a.html PDF: http://proceedings.mlr.press/v139/yarotsky21a/yarotsky21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yarotsky21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dmitry family: Yarotsky editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11932-11940 id: yarotsky21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11932 lastpage: 11940 published: 2021-07-01 00:00:00 +0000 - title: 'Break-It-Fix-It: Unsupervised Learning for Program Repair' abstract: 'We consider repair tasks: given a critic (e.g., compiler) that assesses the quality of an input, the goal is to train a fixer that converts a bad example (e.g., code with syntax errors) into a good one (e.g., code with no errors). Existing works create training data consisting of (bad, good) pairs by corrupting good examples using heuristics (e.g., dropping tokens). 
However, fixers trained on this synthetically-generated data do not extrapolate well to the real distribution of bad inputs. To bridge this gap, we propose a new training approach, Break-It-Fix-It (BIFI), which has two key ideas: (i) we use the critic to check a fixer’s output on real bad inputs and add good (fixed) outputs to the training data, and (ii) we train a breaker to generate realistic bad code from good code. Based on these ideas, we iteratively update the breaker and the fixer while using them in conjunction to generate more paired data. We evaluate BIFI on two code repair datasets: GitHub-Python, a new dataset we introduce where the goal is to repair Python code with AST parse errors; and DeepFix, where the goal is to repair C code with compiler errors. BIFI outperforms existing methods, obtaining 90.5% repair accuracy on GitHub-Python (+28.5%) and 71.7% on DeepFix (+5.6%). Notably, BIFI does not require any labeled data; we hope it will be a strong starting point for unsupervised learning of various repair tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/yasunaga21a.html PDF: http://proceedings.mlr.press/v139/yasunaga21a/yasunaga21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yasunaga21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Michihiro family: Yasunaga - given: Percy family: Liang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11941-11952 id: yasunaga21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11941 lastpage: 11952 published: 2021-07-01 00:00:00 +0000 - title: 'Improving Gradient Regularization using Complex-Valued Neural Networks' abstract: 'Gradient regularization is a neural network defense technique that requires no prior knowledge of an adversarial attack and that brings only limited increase in training computational complexity. A form of complex-valued neural network (CVNN) is proposed to improve the performance of gradient regularization on classification tasks of real-valued input in adversarial settings. The activation derivatives of each layer of the CVNN are dependent on the combination of inputs to the layer, and locally stable representations can be learned for inputs the network is trained on. Furthermore, the properties of the CVNN parameter derivatives resist decrease of performance on the standard objective that is caused by competition with the gradient regularization objective. Experimental results show that the performance of gradient regularized CVNN surpasses that of real-valued neural networks with comparable storage and computational complexity. Moreover, gradient regularized complex-valued networks exhibit robust performance approaching that of real-valued networks trained with multi-step adversarial training.' 
volume: 139 URL: https://proceedings.mlr.press/v139/yeats21a.html PDF: http://proceedings.mlr.press/v139/yeats21a/yeats21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yeats21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Eric C family: Yeats - given: Yiran family: Chen - given: Hai family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11953-11963 id: yeats21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11953 lastpage: 11963 published: 2021-07-01 00:00:00 +0000 - title: 'Neighborhood Contrastive Learning Applied to Online Patient Monitoring' abstract: 'Intensive care units (ICU) are increasingly looking towards machine learning for methods to provide online monitoring of critically ill patients. In machine learning, online monitoring is often formulated as a supervised learning problem. Recently, contrastive learning approaches have demonstrated promising improvements over competitive supervised benchmarks. These methods rely on well-understood data augmentation techniques developed for image data which do not apply to online monitoring. In this work, we overcome this limitation by supplementing time-series data augmentation techniques with a novel contrastive learning objective which we call neighborhood contrastive learning (NCL). Our objective explicitly groups together contiguous time segments from each patient while maintaining state-specific information. Our experiments demonstrate a marked improvement over existing work applying contrastive methods to medical time-series.' volume: 139 URL: https://proceedings.mlr.press/v139/yeche21a.html PDF: http://proceedings.mlr.press/v139/yeche21a/yeche21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yeche21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hugo family: Yèche - given: Gideon family: Dresdner - given: Francesco family: Locatello - given: Matthias family: Hüser - given: Gunnar family: Rätsch editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11964-11974 id: yeche21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11964 lastpage: 11974 published: 2021-07-01 00:00:00 +0000 - title: 'From Local Structures to Size Generalization in Graph Neural Networks' abstract: 'Graph neural networks (GNNs) can process graphs of different sizes, but their ability to generalize across sizes, specifically from small to large graphs, is still not well understood. In this paper, we identify an important type of data where generalization from small to large graphs is challenging: graph distributions for which the local structure depends on the graph size. This effect occurs in multiple important graph learning domains, including social and biological networks. We first prove that when there is a difference between the local structures, GNNs are not guaranteed to generalize across sizes: there are "bad" global minima that do well on small graphs but fail on large graphs. We then study the size-generalization problem empirically and demonstrate that when there is a discrepancy in local structure, GNNs tend to converge to non-generalizing solutions. Finally, we suggest two approaches for improving size generalization, motivated by our findings. 
Notably, we propose a novel Self-Supervised Learning (SSL) task aimed at learning meaningful representations of local structures that appear in large graphs. Our SSL task improves classification accuracy on several popular datasets.' volume: 139 URL: https://proceedings.mlr.press/v139/yehudai21a.html PDF: http://proceedings.mlr.press/v139/yehudai21a/yehudai21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yehudai21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Gilad family: Yehudai - given: Ethan family: Fetaya - given: Eli family: Meirom - given: Gal family: Chechik - given: Haggai family: Maron editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11975-11986 id: yehudai21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11975 lastpage: 11986 published: 2021-07-01 00:00:00 +0000 - title: 'Improved OOD Generalization via Adversarial Training and Pretraing' abstract: 'Recently, learning a model that generalizes well on out-of-distribution (OOD) data has attracted great attention in the machine learning community. In this paper, after defining OOD generalization by Wasserstein distance, we theoretically justify that a model robust to input perturbation also generalizes well on OOD data. Inspired by previous findings that adversarial training helps improve robustness, we show that models trained by adversarial training have converged excess risk on OOD data. Besides, in the paradigm of pre-training then fine-tuning, we theoretically justify that the input perturbation robust model in the pre-training stage provides an initialization that generalizes well on downstream OOD data. Finally, various experiments conducted on image classification and natural language understanding tasks verify our theoretical findings.' volume: 139 URL: https://proceedings.mlr.press/v139/yi21a.html PDF: http://proceedings.mlr.press/v139/yi21a/yi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mingyang family: Yi - given: Lu family: Hou - given: Jiacheng family: Sun - given: Lifeng family: Shang - given: Xin family: Jiang - given: Qun family: Liu - given: Zhiming family: Ma editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11987-11997 id: yi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 11987 lastpage: 11997 published: 2021-07-01 00:00:00 +0000 - title: 'Regret and Cumulative Constraint Violation Analysis for Online Convex Optimization with Long Term Constraints' abstract: 'This paper considers online convex optimization with long term constraints, where constraints can be violated in intermediate rounds, but need to be satisfied in the long run. The cumulative constraint violation is used as the metric to measure constraint violations, which excludes the situation that strictly feasible constraints can compensate the effects of violated constraints. A novel algorithm is first proposed and it achieves an $\mathcal{O}(T^{\max\{c,1-c\}})$ bound for static regret and an $\mathcal{O}(T^{(1-c)/2})$ bound for cumulative constraint violation, where $c\in(0,1)$ is a user-defined trade-off parameter, and thus has improved performance compared with existing results. 
Both static regret and cumulative constraint violation bounds are reduced to $\mathcal{O}(\log(T))$ when the loss functions are strongly convex, which also improves existing results. In order to achieve the optimal regret with respect to any comparator sequence, another algorithm is then proposed and it achieves the optimal $\mathcal{O}(\sqrt{T(1+P_T)})$ regret and an $\mathcal{O}(\sqrt{T})$ cumulative constraint violation, where $P_T$ is the path-length of the comparator sequence. Finally, numerical simulations are provided to illustrate the effectiveness of the theoretical results.' volume: 139 URL: https://proceedings.mlr.press/v139/yi21b.html PDF: http://proceedings.mlr.press/v139/yi21b/yi21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yi21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xinlei family: Yi - given: Xiuxian family: Li - given: Tao family: Yang - given: Lihua family: Xie - given: Tianyou family: Chai - given: Karl family: Johansson editor: - given: Marina family: Meila - given: Tong family: Zhang page: 11998-12008 id: yi21b issued: date-parts: - 2021 - 7 - 1 firstpage: 11998 lastpage: 12008 published: 2021-07-01 00:00:00 +0000 - title: 'Continuous-time Model-based Reinforcement Learning' abstract: 'Model-based reinforcement learning (MBRL) approaches rely on discrete-time state transition models whereas physical systems and the vast majority of control tasks operate in continuous-time. To avoid time-discretization approximation of the underlying process, we propose a continuous-time MBRL framework based on a novel actor-critic method. Our approach also infers the unknown state evolution differentials with Bayesian neural ordinary differential equations (ODE) to account for epistemic uncertainty. We implement and test our method on a new ODE-RL suite that explicitly solves continuous-time control systems. Our experiments illustrate that the model is robust against irregular and noisy data, and can solve classic control problems in a sample-efficient manner.' volume: 139 URL: https://proceedings.mlr.press/v139/yildiz21a.html PDF: http://proceedings.mlr.press/v139/yildiz21a/yildiz21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yildiz21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Cagatay family: Yildiz - given: Markus family: Heinonen - given: Harri family: Lähdesmäki editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12009-12018 id: yildiz21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12009 lastpage: 12018 published: 2021-07-01 00:00:00 +0000 - title: 'Distributed Nyström Kernel Learning with Communications' abstract: 'We study the statistical performance of distributed kernel ridge regression with Nyström (DKRR-NY) and with Nyström and iterative solvers (DKRR-NY-PCG) and successfully derive the optimal learning rates, which can improve the ranges of the number of local processors $p$ to the optimal in existing state-of-the-art bounds.
More precisely, our theoretical analysis shows that DKRR-NY and DKRR-NY-PCG achieve, in expectation, the same learning rates as the exact KRR while requiring essentially $\mathcal{O}(|D|^{1.5})$ time and $\mathcal{O}(|D|)$ memory and relaxing the restriction on $p$, where $|D|$ is the number of data points; the in-expectation results reflect the average effectiveness over multiple trials. Furthermore, to show the generalization performance in a single trial, we derive the learning rates for DKRR-NY and DKRR-NY-PCG in probability. Finally, we propose a novel algorithm, DKRR-NY-CM, based on DKRR-NY, which employs a communication strategy to further improve the learning performance; the effectiveness of these communications is validated both theoretically and experimentally.' volume: 139 URL: https://proceedings.mlr.press/v139/yin21a.html PDF: http://proceedings.mlr.press/v139/yin21a/yin21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yin21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Rong family: Yin - given: Weiping family: Wang - given: Dan family: Meng editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12019-12028 id: yin21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12019 lastpage: 12028 published: 2021-07-01 00:00:00 +0000 - title: 'Path Planning using Neural A* Search' abstract: 'We present Neural A*, a novel data-driven search method for path planning problems. Despite the recent increasing attention to data-driven path planning, machine learning approaches to search-based planning are still challenging due to the discrete nature of search algorithms. In this work, we reformulate a canonical A* search algorithm to be differentiable and couple it with a convolutional encoder to form an end-to-end trainable neural network planner. Neural A* solves a path planning problem by encoding a problem instance to a guidance map and then performing the differentiable A* search with the guidance map. By learning to match the search results with ground-truth paths provided by experts, Neural A* can produce a path consistent with the ground truth accurately and efficiently. Our extensive experiments confirmed that Neural A* outperformed state-of-the-art data-driven planners in terms of the search optimality and efficiency trade-off. Furthermore, Neural A* successfully predicted realistic human trajectories by directly performing search-based planning on natural image inputs.'
volume: 139 URL: https://proceedings.mlr.press/v139/yonetani21a.html PDF: http://proceedings.mlr.press/v139/yonetani21a/yonetani21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yonetani21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Ryo family: Yonetani - given: Tatsunori family: Taniai - given: Mohammadamin family: Barekatain - given: Mai family: Nishimura - given: Asako family: Kanezaki editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12029-12039 id: yonetani21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12029 lastpage: 12039 published: 2021-07-01 00:00:00 +0000 - title: 'SinIR: Efficient General Image Manipulation with Single Image Reconstruction' abstract: 'We propose SinIR, an efficient reconstruction-based framework trained on a single natural image for general image manipulation, including super-resolution, editing, harmonization, paint-to-image, photo-realistic style transfer, and artistic style transfer. We train our model on a single image with cascaded multi-scale learning, where each network at each scale is responsible for image reconstruction. This reconstruction objective greatly reduces the complexity and running time of training, compared to the GAN objective. However, the reconstruction objective also degrades the output quality. Therefore, to solve this problem, we further utilize simple random pixel shuffling, which also gives control over manipulation, inspired by the Denoising Autoencoder. With quantitative evaluation, we show that SinIR has competitive performance on various image manipulation tasks. Moreover, with a much simpler training objective (i.e., reconstruction), SinIR is trained 33.5 times faster than SinGAN (for 500x500 images), which solves similar tasks. Our code is publicly available at github.com/YooJiHyeong/SinIR.' volume: 139 URL: https://proceedings.mlr.press/v139/yoo21a.html PDF: http://proceedings.mlr.press/v139/yoo21a/yoo21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yoo21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jihyeong family: Yoo - given: Qifeng family: Chen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12040-12050 id: yoo21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12040 lastpage: 12050 published: 2021-07-01 00:00:00 +0000 - title: 'Conditional Temporal Neural Processes with Covariance Loss' abstract: 'We introduce a novel loss function, Covariance Loss, which is conceptually equivalent to conditional neural processes and has a form of regularization so that it is applicable to many kinds of neural networks. With the proposed loss, mappings from input variables to target variables are highly affected by dependencies of target variables as well as mean activation and mean dependencies of input and target variables. This nature enables the resulting neural networks to become more robust to noisy observations and recapture missing dependencies from prior information. In order to show the validity of the proposed loss, we conduct extensive sets of experiments on real-world datasets with state-of-the-art models and discuss the benefits and drawbacks of the proposed Covariance Loss.'
volume: 139 URL: https://proceedings.mlr.press/v139/yoo21b.html PDF: http://proceedings.mlr.press/v139/yoo21b/yoo21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yoo21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Boseon family: Yoo - given: Jiwoo family: Lee - given: Janghoon family: Ju - given: Seijun family: Chung - given: Soyeon family: Kim - given: Jaesik family: Choi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12051-12061 id: yoo21b issued: date-parts: - 2021 - 7 - 1 firstpage: 12051 lastpage: 12061 published: 2021-07-01 00:00:00 +0000 - title: 'Adversarial Purification with Score-based Generative Models' abstract: 'While adversarial training is considered a standard defense method against adversarial attacks for image classifiers, adversarial purification, which purifies attacked images into clean images with a standalone purification model, has shown promise as an alternative defense method. Recently, an EBM trained with MCMC has been highlighted as a purification model, where an attacked image is purified by running a long Markov chain using the gradients of the EBM. Yet, the practicality of adversarial purification using an EBM remains questionable because the number of MCMC steps required for such purification is too large. In this paper, we propose a novel adversarial purification method based on an EBM trained with DSM. We show that an EBM trained with DSM can quickly purify attacked images within a few steps. We further introduce a simple yet effective randomized purification scheme that injects random noise into images before purification. This process screens the adversarial perturbations imposed on images by the random noise and brings the images to the regime where the EBM can denoise well. We show that our purification method is robust against various attacks and demonstrate its state-of-the-art performance.' volume: 139 URL: https://proceedings.mlr.press/v139/yoon21a.html PDF: http://proceedings.mlr.press/v139/yoon21a/yoon21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yoon21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jongmin family: Yoon - given: Sung Ju family: Hwang - given: Juho family: Lee editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12062-12072 id: yoon21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12062 lastpage: 12072 published: 2021-07-01 00:00:00 +0000 - title: 'Federated Continual Learning with Weighted Inter-client Transfer' abstract: 'There has been a surge of interest in continual learning and federated learning, both of which are important for deep neural networks in real-world scenarios. Yet little research has been done regarding the scenario where each client learns on a sequence of tasks from a private local data stream. This problem of federated continual learning poses new challenges to continual learning, such as utilizing knowledge from other clients, while preventing interference from irrelevant knowledge.
To resolve these issues, we propose a novel federated continual learning framework, Federated Weighted Inter-client Transfer (FedWeIT), which decomposes the network weights into global federated parameters and sparse task-specific parameters, and each client receives selective knowledge from other clients by taking a weighted combination of their task-specific parameters. FedWeIT minimizes interference between incompatible tasks, and also allows positive knowledge transfer across clients during learning. We validate our FedWeIT against existing federated learning and continual learning methods under varying degrees of task similarity across clients, and our model significantly outperforms them with a large reduction in the communication cost.' volume: 139 URL: https://proceedings.mlr.press/v139/yoon21b.html PDF: http://proceedings.mlr.press/v139/yoon21b/yoon21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yoon21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jaehong family: Yoon - given: Wonyong family: Jeong - given: Giwoong family: Lee - given: Eunho family: Yang - given: Sung Ju family: Hwang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12073-12086 id: yoon21b issued: date-parts: - 2021 - 7 - 1 firstpage: 12073 lastpage: 12086 published: 2021-07-01 00:00:00 +0000 - title: 'Autoencoding Under Normalization Constraints' abstract: 'Likelihood is a standard estimate for outlier detection. The specific role of the normalization constraint is to ensure that the out-of-distribution (OOD) regime has a small likelihood when samples are learned using maximum likelihood. Because autoencoders do not possess such a process of normalization, they often fail to recognize outliers even when they are obviously OOD. We propose the Normalized Autoencoder (NAE), a normalized probabilistic model constructed from an autoencoder. The probability density of NAE is defined using the reconstruction error of an autoencoder, which is differently defined in the conventional energy-based model. In our model, normalization is enforced by suppressing the reconstruction of negative samples, significantly improving the outlier detection performance. Our experimental results confirm the efficacy of NAE, both in detecting outliers and in generating in-distribution samples.' volume: 139 URL: https://proceedings.mlr.press/v139/yoon21c.html PDF: http://proceedings.mlr.press/v139/yoon21c/yoon21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yoon21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Sangwoong family: Yoon - given: Yung-Kyun family: Noh - given: Frank family: Park editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12087-12097 id: yoon21c issued: date-parts: - 2021 - 7 - 1 firstpage: 12087 lastpage: 12097 published: 2021-07-01 00:00:00 +0000 - title: 'Accelerated Algorithms for Smooth Convex-Concave Minimax Problems with O(1/k^2) Rate on Squared Gradient Norm' abstract: 'In this work, we study the computational complexity of reducing the squared gradient magnitude for smooth minimax optimization problems. 
First, we present algorithms with accelerated $\mathcal{O}(1/k^2)$ last-iterate rates, faster than the existing $\mathcal{O}(1/k)$ or slower rates for extragradient, Popov, and gradient descent with anchoring. The acceleration mechanism combines extragradient steps with anchoring and is distinct from Nesterov’s acceleration. We then establish optimality of the $\mathcal{O}(1/k^2)$ rate through a matching lower bound.' volume: 139 URL: https://proceedings.mlr.press/v139/yoon21d.html PDF: http://proceedings.mlr.press/v139/yoon21d/yoon21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yoon21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Taeho family: Yoon - given: Ernest K family: Ryu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12098-12109 id: yoon21d issued: date-parts: - 2021 - 7 - 1 firstpage: 12098 lastpage: 12109 published: 2021-07-01 00:00:00 +0000 - title: 'Lower-Bounded Proper Losses for Weakly Supervised Classification' abstract: 'This paper discusses the problem of weakly supervised classification, in which instances are given weak labels that are produced by some label-corruption process. The goal is to derive conditions under which loss functions for weak-label learning are proper and lower-bounded—two essential requirements for the losses used in class-probability estimation. To this end, we derive a representation theorem for proper losses in supervised learning, which dualizes the Savage representation. We use this theorem to characterize proper weak-label losses and find a condition for them to be lower-bounded. From these theoretical findings, we derive a novel regularization scheme called generalized logit squeezing, which makes any proper weak-label loss bounded from below, without losing properness. Furthermore, we experimentally demonstrate the effectiveness of our proposed approach, as compared to improper or unbounded losses. The results highlight the importance of properness and lower-boundedness.' volume: 139 URL: https://proceedings.mlr.press/v139/yoshida21a.html PDF: http://proceedings.mlr.press/v139/yoshida21a/yoshida21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yoshida21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shuhei M family: Yoshida - given: Takashi family: Takenouchi - given: Masashi family: Sugiyama editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12110-12120 id: yoshida21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12110 lastpage: 12120 published: 2021-07-01 00:00:00 +0000 - title: 'Graph Contrastive Learning Automated' abstract: 'Self-supervised learning on graph-structured data has drawn recent interest for learning generalizable, transferable and robust representations from unlabeled graphs. Among many, graph contrastive learning (GraphCL) has emerged with promising representation learning performance. Unfortunately, unlike its counterpart on image data, the effectiveness of GraphCL hinges on ad-hoc data augmentations, which have to be manually picked per dataset, by either rules of thumb or trial-and-errors, owing to the diverse nature of graph data. That significantly limits the more general applicability of GraphCL. 
Aiming to fill in this crucial gap, this paper proposes a unified bi-level optimization framework to automatically, adaptively and dynamically select data augmentations when performing GraphCL on specific graph data. The general framework, dubbed JOint Augmentation Optimization (JOAO), is instantiated as min-max optimization. The selections of augmentations made by JOAO are shown to be in general aligned with previous "best practices" observed from handcrafted tuning, yet now automated, more flexible, and versatile. Moreover, we propose a new augmentation-aware projection head mechanism, which will route output features through different projection heads corresponding to different augmentations chosen at each training step. Extensive experiments demonstrate that JOAO performs on par with or sometimes better than the state-of-the-art competitors including GraphCL, on multiple graph datasets of various scales and types, yet without resorting to any laborious dataset-specific tuning on augmentation selection. We release the code at https://github.com/Shen-Lab/GraphCL_Automated.' volume: 139 URL: https://proceedings.mlr.press/v139/you21a.html PDF: http://proceedings.mlr.press/v139/you21a/you21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-you21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yuning family: You - given: Tianlong family: Chen - given: Yang family: Shen - given: Zhangyang family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12121-12132 id: you21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12121 lastpage: 12132 published: 2021-07-01 00:00:00 +0000 - title: 'LogME: Practical Assessment of Pre-trained Models for Transfer Learning' abstract: 'This paper studies task adaptive pre-trained model selection, an underexplored problem of assessing pre-trained models for the target task and selecting the best ones from the model zoo \emph{without fine-tuning}. A few pilot works addressed the problem in transferring supervised pre-trained models to classification tasks, but they cannot handle emerging unsupervised pre-trained models or regression tasks. In pursuit of a practical assessment method, we propose to estimate the maximum value of label evidence given features extracted by pre-trained models. Unlike the maximum likelihood, the maximum evidence is \emph{immune to over-fitting}, while its expensive computation can be dramatically reduced by our carefully designed algorithm. The Logarithm of Maximum Evidence (LogME) can be used to assess pre-trained models for transfer learning: a pre-trained model with a high LogME value is likely to have good transfer performance. LogME is \emph{fast, accurate, and general}, characterizing itself as the first practical method for assessing pre-trained models. Compared with brute-force fine-tuning, LogME brings at most $3000\times$ speedup in wall-clock time and requires only $1\%$ of the memory footprint. It outperforms prior methods by a large margin in their setting and is applicable to new settings. It is general enough for diverse pre-trained models (supervised pre-trained and unsupervised pre-trained), downstream tasks (classification and regression), and modalities (vision and language). Code is available at this repository: \href{https://github.com/thuml/LogME}{https://github.com/thuml/LogME}.'
volume: 139 URL: https://proceedings.mlr.press/v139/you21b.html PDF: http://proceedings.mlr.press/v139/you21b/you21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-you21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Kaichao family: You - given: Yong family: Liu - given: Jianmin family: Wang - given: Mingsheng family: Long editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12133-12143 id: you21b issued: date-parts: - 2021 - 7 - 1 firstpage: 12133 lastpage: 12143 published: 2021-07-01 00:00:00 +0000 - title: 'Exponentially Many Local Minima in Quantum Neural Networks' abstract: 'Quantum Neural Networks (QNNs), or the so-called variational quantum circuits, are important quantum applications both because of their similar promises as classical neural networks and because of the feasibility of their implementation on near-term intermediate-size noisy quantum machines (NISQ). However, the training task of QNNs is challenging and much less understood. We conduct a quantitative investigation on the landscape of loss functions of QNNs and identify a class of simple yet extremely hard QNN instances for training. Specifically, we show for typical under-parameterized QNNs, there exists a dataset that induces a loss function with the number of spurious local minima depending exponentially on the number of parameters. Moreover, we show the optimality of our construction by providing an almost matching upper bound on such dependence. While local minima in classical neural networks are due to non-linear activations, in quantum neural networks local minima appear as a result of the quantum interference phenomenon. Finally, we empirically confirm that our constructions can indeed be hard instances in practice with typical gradient-based optimizers, which demonstrates the practical value of our findings.' volume: 139 URL: https://proceedings.mlr.press/v139/you21c.html PDF: http://proceedings.mlr.press/v139/you21c/you21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-you21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xuchen family: You - given: Xiaodi family: Wu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12144-12155 id: you21c issued: date-parts: - 2021 - 7 - 1 firstpage: 12144 lastpage: 12155 published: 2021-07-01 00:00:00 +0000 - title: 'DAGs with No Curl: An Efficient DAG Structure Learning Approach' abstract: 'Recently directed acyclic graph (DAG) structure learning is formulated as a constrained continuous optimization problem with continuous acyclicity constraints and was solved iteratively through subproblem optimization. To further improve efficiency, we propose a novel learning framework to model and learn the weighted adjacency matrices in the DAG space directly. Specifically, we first show that the set of weighted adjacency matrices of DAGs are equivalent to the set of weighted gradients of graph potential functions, and one may perform structure learning by searching in this equivalent set of DAGs. 
To instantiate this idea, we propose a new algorithm, DAG-NoCurl, which solves the optimization problem efficiently with a two-step procedure: $1)$ first we find an initial non-acyclic solution to the optimization problem, and $2)$ then we employ the Hodge decomposition of graphs and learn an acyclic graph by projecting the non-acyclic graph to the gradient of a potential function. Experimental studies on benchmark datasets demonstrate that our method provides comparable accuracy but better efficiency than baseline DAG structure learning methods on both linear and generalized structural equation models, often by more than one order of magnitude.' volume: 139 URL: https://proceedings.mlr.press/v139/yu21a.html PDF: http://proceedings.mlr.press/v139/yu21a/yu21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yu21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yue family: Yu - given: Tian family: Gao - given: Naiyu family: Yin - given: Qiang family: Ji editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12156-12166 id: yu21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12156 lastpage: 12166 published: 2021-07-01 00:00:00 +0000 - title: 'Provably Efficient Algorithms for Multi-Objective Competitive RL' abstract: 'We study multi-objective reinforcement learning (RL) where an agent’s reward is represented as a vector. In settings where an agent competes against opponents, its performance is measured by the distance of its average return vector to a target set. We develop statistically and computationally efficient algorithms to approach the associated target set. Our results extend Blackwell’s approachability theorem \citep{blackwell1956analog} to tabular RL, where strategic exploration becomes essential. The algorithms presented are adaptive; their guarantees hold even without Blackwell’s approachability condition. If the opponents use fixed policies, we give an improved rate of approaching the target set while also tackling the more ambitious goal of simultaneously minimizing a scalar cost function. We discuss our analysis for this special case by relating our results to previous works on constrained RL. To our knowledge, this work provides the first provably efficient algorithms for vector-valued Markov games and our theoretical guarantees are near-optimal.' volume: 139 URL: https://proceedings.mlr.press/v139/yu21b.html PDF: http://proceedings.mlr.press/v139/yu21b/yu21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yu21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tiancheng family: Yu - given: Yi family: Tian - given: Jingzhao family: Zhang - given: Suvrit family: Sra editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12167-12176 id: yu21b issued: date-parts: - 2021 - 7 - 1 firstpage: 12167 lastpage: 12176 published: 2021-07-01 00:00:00 +0000 - title: 'Whittle Networks: A Deep Likelihood Model for Time Series' abstract: 'While probabilistic circuits have been extensively explored for tabular data, less attention has been paid to time series. Here, the goal is to estimate joint densities among the entire time series and, in turn, determining, for instance, conditional independence relations between them. 
To this end, we propose the first probabilistic circuits (PCs) approach for modeling the joint distribution of multivariate time series, called Whittle sum-product networks (WSPNs). WSPNs leverage the Whittle approximation, casting the likelihood in the frequency domain, and place a complex-valued sum-product network, the most prominent PC, over the frequencies. The conditional independence relations among the time series can then be determined efficiently in the spectral domain. Moreover, WSPNs can naturally be placed into the deep neural learning stack for time series, resulting in Whittle Networks, opening the likelihood toolbox for training deep neural models and inspecting their behaviour. Our experiments show that Whittle Networks can indeed capture complex dependencies between time series and provide a useful measure of uncertainty for neural networks.' volume: 139 URL: https://proceedings.mlr.press/v139/yu21c.html PDF: http://proceedings.mlr.press/v139/yu21c/yu21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yu21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zhongjie family: Yu - given: Fabrizio G family: Ventola - given: Kristian family: Kersting editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12177-12186 id: yu21c issued: date-parts: - 2021 - 7 - 1 firstpage: 12177 lastpage: 12186 published: 2021-07-01 00:00:00 +0000 - title: 'Deep Latent Graph Matching' abstract: 'Deep learning for graph matching (GM) has emerged as an important research topic due to its superior performance over traditional methods and insights it provides for solving other combinatorial problems on graph. While recent deep methods for GM extensively investigated effective node/edge feature learning or downstream GM solvers given such learned features, there is little existing work questioning if the fixed connectivity/topology typically constructed using heuristics (e.g., Delaunay or k-nearest) is indeed suitable for GM. From a learning perspective, we argue that the fixed topology may restrict the model capacity and thus potentially hinder the performance. To address this, we propose to learn the (distribution of) latent topology, which can better support the downstream GM task. We devise two latent graph generation procedures, one deterministic and one generative. Particularly, the generative procedure emphasizes the across-graph consistency and thus can be viewed as a matching-guided co-generative model. Our methods deliver superior performance over previous state-of-the-arts on public benchmarks, hence supporting our hypothesis.' 
volume: 139 URL: https://proceedings.mlr.press/v139/yu21d.html PDF: http://proceedings.mlr.press/v139/yu21d/yu21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yu21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tianshu family: Yu - given: Runzhong family: Wang - given: Junchi family: Yan - given: Baoxin family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12187-12197 id: yu21d issued: date-parts: - 2021 - 7 - 1 firstpage: 12187 lastpage: 12197 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Generalized Intersection Over Union for Dense Pixelwise Prediction' abstract: 'The intersection over union (IoU) score, also known as the Jaccard Index, is one of the most fundamental evaluation methods in machine learning. The original IoU computation cannot provide non-zero gradients and thus cannot be directly optimized by current deep learning methods. Several recent works generalized IoU for bounding box regression, but they are not straightforward to adapt for pixelwise prediction. In particular, the original IoU fails to provide effective gradients for the non-overlapping and location-deviation cases, which results in a performance plateau. In this paper, we propose PixIoU, a generalized IoU for pixelwise prediction that is sensitive to the distance for non-overlapping cases and to the locations in prediction. We prove that PixIoU retains many of the nice properties of the original IoU. To optimize PixIoU, we also propose a loss function that is proved to be submodular; hence we can apply the Lovász functions, efficient surrogates for submodular functions, to learn this loss. Experimental results show consistent performance improvements by learning PixIoU over the original IoU for several different pixelwise prediction tasks on Pascal VOC, VOT-2020 and Cityscapes.' volume: 139 URL: https://proceedings.mlr.press/v139/yu21e.html PDF: http://proceedings.mlr.press/v139/yu21e/yu21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yu21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jiaqian family: Yu - given: Jingtao family: Xu - given: Yiwei family: Chen - given: Weiming family: Li - given: Qiang family: Wang - given: Byungin family: Yoo - given: Jae-Joon family: Han editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12198-12207 id: yu21e issued: date-parts: - 2021 - 7 - 1 firstpage: 12198 lastpage: 12207 published: 2021-07-01 00:00:00 +0000 - title: 'Large Scale Private Learning via Low-rank Reparametrization' abstract: 'We propose a reparametrization scheme to address the challenges of applying differentially private SGD on large neural networks, which are 1) the huge memory cost of storing individual gradients and 2) the added noise suffering from notorious dimensional dependence. Specifically, we reparametrize each weight matrix with two \emph{gradient-carrier} matrices of small dimension and a \emph{residual weight} matrix. We argue that such reparametrization keeps the forward/backward process unchanged while enabling us to compute the projected gradient without computing the gradient itself.
To learn with differential privacy, we design \emph{reparametrized gradient perturbation (RGP)} that perturbs the gradients on gradient-carrier matrices and reconstructs an update for the original weight from the noisy gradients. Importantly, we use historical updates to find the gradient-carrier matrices, whose optimality is rigorously justified under linear regression and empirically verified with deep learning tasks. RGP significantly reduces the memory cost and improves the utility. For example, we are the first to apply differential privacy to the BERT model and achieve an average accuracy of $83.9\%$ on four downstream tasks with $\epsilon=8$, which is within $5\%$ loss compared to the non-private baseline but enjoys much lower privacy leakage risk.' volume: 139 URL: https://proceedings.mlr.press/v139/yu21f.html PDF: http://proceedings.mlr.press/v139/yu21f/yu21f.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yu21f.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Da family: Yu - given: Huishuai family: Zhang - given: Wei family: Chen - given: Jian family: Yin - given: Tie-Yan family: Liu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12208-12218 id: yu21f issued: date-parts: - 2021 - 7 - 1 firstpage: 12208 lastpage: 12218 published: 2021-07-01 00:00:00 +0000 - title: 'Federated Deep AUC Maximization for Heterogeneous Data with a Constant Communication Complexity' abstract: 'Deep AUC (area under the ROC curve) Maximization (DAM) has attracted much attention recently due to its great potential for imbalanced data classification. However, the research on Federated Deep AUC Maximization (FDAM) is still limited. Compared with standard federated learning (FL) approaches that focus on decomposable minimization objectives, FDAM is more complicated because its minimization objective is non-decomposable over individual examples. In this paper, we propose improved FDAM algorithms for heterogeneous data by solving the popular non-convex strongly-concave min-max formulation of DAM in a distributed fashion, which can also be applied to a class of non-convex strongly-concave min-max problems. A striking result of this paper is that the communication complexity of the proposed algorithm is a constant independent of the number of machines and also independent of the accuracy level, which improves an existing result by orders of magnitude. The experiments have demonstrated the effectiveness of our FDAM algorithm on benchmark datasets, and on medical chest X-ray images from different organizations. Our experiments show that training FDAM with data from multiple hospitals can improve the AUC score on testing data from a single hospital for detecting life-threatening diseases based on chest radiographs.' 
volume: 139 URL: https://proceedings.mlr.press/v139/yuan21a.html PDF: http://proceedings.mlr.press/v139/yuan21a/yuan21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yuan21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zhuoning family: Yuan - given: Zhishuai family: Guo - given: Yi family: Xu - given: Yiming family: Ying - given: Tianbao family: Yang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12219-12229 id: yuan21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12219 lastpage: 12229 published: 2021-07-01 00:00:00 +0000 - title: 'Neural Tangent Generalization Attacks' abstract: 'The remarkable performance achieved by Deep Neural Networks (DNNs) in many applications is followed by the rising concern about data privacy and security. Since DNNs usually require large datasets to train, many practitioners scrape data from external sources such as the Internet. However, an external data owner may not be willing to let this happen, causing legal or ethical issues. In this paper, we study the generalization attacks against DNNs, where an attacker aims to slightly modify training data in order to spoil the training process such that a trained network lacks generalizability. These attacks can be performed by data owners and protect data from unexpected use. However, there is currently no efficient generalization attack against DNNs due to the complexity of a bilevel optimization involved. We propose the Neural Tangent Generalization Attack (NTGA) that, to the best of our knowledge, is the first work enabling clean-label, black-box generalization attack against DNNs. We conduct extensive experiments, and the empirical results demonstrate the effectiveness of NTGA. Our code and perturbed datasets are available at: https://github.com/lionelmessi6410/ntga.' volume: 139 URL: https://proceedings.mlr.press/v139/yuan21b.html PDF: http://proceedings.mlr.press/v139/yuan21b/yuan21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yuan21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chia-Hung family: Yuan - given: Shan-Hung family: Wu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12230-12240 id: yuan21b issued: date-parts: - 2021 - 7 - 1 firstpage: 12230 lastpage: 12240 published: 2021-07-01 00:00:00 +0000 - title: 'On Explainability of Graph Neural Networks via Subgraph Explorations' abstract: 'We consider the problem of explaining the predictions of graph neural networks (GNNs), which otherwise are considered as black boxes. Existing methods invariably focus on explaining the importance of graph nodes or edges but ignore the substructures of graphs, which are more intuitive and human-intelligible. In this work, we propose a novel method, known as SubgraphX, to explain GNNs by identifying important subgraphs. Given a trained GNN model and an input graph, our SubgraphX explains its predictions by efficiently exploring different subgraphs with Monte Carlo tree search. To make the tree search more effective, we propose to use Shapley values as a measure of subgraph importance, which can also capture the interactions among different subgraphs. 
To expedite computations, we propose efficient approximation schemes to compute Shapley values for graph data. Our work represents the first attempt to explain GNNs via identifying subgraphs explicitly and directly. Experimental results show that our SubgraphX achieves significantly improved explanations, while keeping computations at a reasonable level.' volume: 139 URL: https://proceedings.mlr.press/v139/yuan21c.html PDF: http://proceedings.mlr.press/v139/yuan21c/yuan21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yuan21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hao family: Yuan - given: Haiyang family: Yu - given: Jie family: Wang - given: Kang family: Li - given: Shuiwang family: Ji editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12241-12252 id: yuan21c issued: date-parts: - 2021 - 7 - 1 firstpage: 12241 lastpage: 12252 published: 2021-07-01 00:00:00 +0000 - title: 'Federated Composite Optimization' abstract: 'Federated Learning (FL) is a distributed learning paradigm that scales on-device learning collaboratively and privately. Standard FL algorithms such as FEDAVG are primarily geared towards smooth unconstrained settings. In this paper, we study the Federated Composite Optimization (FCO) problem, in which the loss function contains a non-smooth regularizer. Such problems arise naturally in FL applications that involve sparsity, low-rank, monotonicity, or more general constraints. We first show that straightforward extensions of primal algorithms such as FedAvg are not well-suited for FCO since they suffer from the "curse of primal averaging," resulting in poor convergence. As a solution, we propose a new primal-dual algorithm, Federated Dual Averaging (FedDualAvg), which by employing a novel server dual averaging procedure circumvents the curse of primal averaging. Our theoretical analysis and empirical experiments demonstrate that FedDualAvg outperforms the other baselines.' volume: 139 URL: https://proceedings.mlr.press/v139/yuan21d.html PDF: http://proceedings.mlr.press/v139/yuan21d/yuan21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yuan21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Honglin family: Yuan - given: Manzil family: Zaheer - given: Sashank family: Reddi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12253-12266 id: yuan21d issued: date-parts: - 2021 - 7 - 1 firstpage: 12253 lastpage: 12266 published: 2021-07-01 00:00:00 +0000 - title: 'Three Operator Splitting with a Nonconvex Loss Function' abstract: 'We consider the problem of minimizing the sum of three functions, one of which is nonconvex but differentiable, and the other two are convex but possibly nondifferentiable. We investigate the Three Operator Splitting method (TOS) of Davis & Yin (2017) with an aim to extend its theoretical guarantees for this nonconvex problem template. In particular, we prove convergence of TOS with nonasymptotic bounds on its nonstationarity and infeasibility errors. 
In contrast with the existing work on nonconvex TOS, our guarantees do not require additional smoothness assumptions on the terms comprising the objective; hence they cover instances of particular interest where the nondifferentiable terms are indicator functions. We also extend our results to a stochastic setting where we have access only to an unbiased estimator of the gradient. Finally, we illustrate the effectiveness of the proposed method through numerical experiments on quadratic assignment problems.' volume: 139 URL: https://proceedings.mlr.press/v139/yurtsever21a.html PDF: http://proceedings.mlr.press/v139/yurtsever21a/yurtsever21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-yurtsever21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Alp family: Yurtsever - given: Varun family: Mangalick - given: Suvrit family: Sra editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12267-12277 id: yurtsever21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12267 lastpage: 12277 published: 2021-07-01 00:00:00 +0000 - title: 'Grey-box Extraction of Natural Language Models' abstract: 'Model extraction attacks attempt to replicate a target machine learning model by querying its inference API. State-of-the-art attacks are learning-based and construct replicas by supervised training on the target model’s predictions, but an emerging class of attacks exploit algebraic properties to obtain high-fidelity replicas using orders of magnitude fewer queries. So far, these algebraic attacks have been limited to neural networks with few hidden layers and ReLU activations. In this paper we present algebraic and hybrid algebraic/learning-based attacks on large-scale natural language models. We consider a grey-box setting, targeting models with a pre-trained (public) encoder followed by a single (private) classification layer. Our key findings are that (i) with a frozen encoder, high-fidelity extraction is possible with a small number of in-distribution queries, making extraction attacks indistinguishable from legitimate use; (ii) when the encoder is fine-tuned, a hybrid learning-based/algebraic attack improves over the learning-based state-of-the-art without requiring additional queries.' volume: 139 URL: https://proceedings.mlr.press/v139/zanella-beguelin21a.html PDF: http://proceedings.mlr.press/v139/zanella-beguelin21a/zanella-beguelin21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zanella-beguelin21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Santiago family: Zanella-Beguelin - given: Shruti family: Tople - given: Andrew family: Paverd - given: Boris family: Köpf editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12278-12286 id: zanella-beguelin21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12278 lastpage: 12286 published: 2021-07-01 00:00:00 +0000 - title: 'Exponential Lower Bounds for Batch Reinforcement Learning: Batch RL can be Exponentially Harder than Online RL' abstract: 'Several practical applications of reinforcement learning involve an agent learning from past data without the possibility of further exploration. Often these applications require us to 1) identify a near optimal policy or to 2) estimate the value of a target policy. 
For both tasks we derive exponential information-theoretic lower bounds in discounted infinite horizon MDPs with a linear function representation for the action value function even if 1) realizability holds, 2) the batch algorithm observes the exact reward and transition functions, and 3) the batch algorithm is given the best a priori data distribution for the problem class. Our work introduces a new ‘oracle + batch algorithm’ framework to prove lower bounds that hold for every distribution. The work shows an exponential separation between batch and online reinforcement learning.' volume: 139 URL: https://proceedings.mlr.press/v139/zanette21a.html PDF: http://proceedings.mlr.press/v139/zanette21a/zanette21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zanette21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Andrea family: Zanette editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12287-12297 id: zanette21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12287 lastpage: 12297 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Binary Decision Trees by Argmin Differentiation' abstract: 'We address the problem of learning binary decision trees that partition data for some downstream task. We propose to learn discrete parameters (i.e., for tree traversals and node pruning) and continuous parameters (i.e., for tree split functions and prediction functions) simultaneously using argmin differentiation. We do so by sparsely relaxing a mixed-integer program for the discrete parameters, to allow gradients to pass through the program to continuous parameters. We derive customized algorithms to efficiently compute the forward and backward passes. This means that our tree learning procedure can be used as an (implicit) layer in arbitrary deep networks, and can be optimized with arbitrary loss functions. We demonstrate that our approach produces binary trees that are competitive with existing single tree and ensemble approaches, in both supervised and unsupervised settings. Further, apart from greedy approaches (which do not have competitive accuracies), our method is faster to train than all other tree-learning baselines we compare with.' volume: 139 URL: https://proceedings.mlr.press/v139/zantedeschi21a.html PDF: http://proceedings.mlr.press/v139/zantedeschi21a/zantedeschi21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zantedeschi21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Valentina family: Zantedeschi - given: Matt family: Kusner - given: Vlad family: Niculae editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12298-12309 id: zantedeschi21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12298 lastpage: 12309 published: 2021-07-01 00:00:00 +0000 - title: 'Barlow Twins: Self-Supervised Learning via Redundancy Reduction' abstract: 'Self-supervised learning (SSL) is rapidly closing the gap with supervised methods on large computer vision benchmarks. A successful approach to SSL is to learn embeddings which are invariant to distortions of the input sample. However, a recurring issue with this approach is the existence of trivial constant solutions. Most current methods avoid such solutions by careful implementation details. 
We propose an objective function that naturally avoids collapse by measuring the cross-correlation matrix between the outputs of two identical networks fed with distorted versions of a sample, and making it as close to the identity matrix as possible. This causes the embedding vectors of distorted versions of a sample to be similar, while minimizing the redundancy between the components of these vectors. The method is called Barlow Twins, owing to neuroscientist H. Barlow’s redundancy-reduction principle applied to a pair of identical networks. Barlow Twins does not require large batches or asymmetry between the network twins such as a predictor network, gradient stopping, or a moving average on the weight updates. Intriguingly, it benefits from very high-dimensional output vectors. Barlow Twins outperforms previous methods on ImageNet for semi-supervised classification in the low-data regime, and is on par with current state of the art for ImageNet classification with a linear classifier head, and for transfer tasks of classification and object detection.' volume: 139 URL: https://proceedings.mlr.press/v139/zbontar21a.html PDF: http://proceedings.mlr.press/v139/zbontar21a/zbontar21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zbontar21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jure family: Zbontar - given: Li family: Jing - given: Ishan family: Misra - given: Yann family: LeCun - given: Stephane family: Deny editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12310-12320 id: zbontar21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12310 lastpage: 12320 published: 2021-07-01 00:00:00 +0000 - title: 'You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling' abstract: 'Transformer-based models are widely used in natural language processing (NLP). Central to the transformer model is the self-attention mechanism, which captures the interactions of token pairs in the input sequences and depends quadratically on the sequence length. Training such models on longer sequences is expensive. In this paper, we show that a Bernoulli sampling attention mechanism based on Locality Sensitive Hashing (LSH) decreases the quadratic complexity of such models to linear. We bypass the quadratic cost by considering self-attention as a sum of individual tokens associated with Bernoulli random variables that can, in principle, be sampled at once by a single hash (although in practice, this number may be a small constant). This leads to an efficient sampling scheme to estimate self-attention that relies on specific modifications of LSH (to enable deployment on GPU architectures). We evaluate our algorithm on the GLUE benchmark with the standard 512 sequence length, where we see favorable performance relative to a standard pretrained Transformer. On the Long Range Arena (LRA) benchmark, for evaluating performance on long sequences, our method achieves results consistent with softmax self-attention but with sizable speed-ups and memory savings and often outperforms other efficient self-attention methods. Our code is available at https://github.com/mlpen/YOSO.' 
volume: 139 URL: https://proceedings.mlr.press/v139/zeng21a.html PDF: http://proceedings.mlr.press/v139/zeng21a/zeng21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zeng21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zhanpeng family: Zeng - given: Yunyang family: Xiong - given: Sathya family: Ravi - given: Shailesh family: Acharya - given: Glenn M family: Fung - given: Vikas family: Singh editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12321-12332 id: zeng21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12321 lastpage: 12332 published: 2021-07-01 00:00:00 +0000 - title: 'DouZero: Mastering DouDizhu with Self-Play Deep Reinforcement Learning' abstract: 'Games are abstractions of the real world, where artificial agents learn to compete and cooperate with other agents. While significant achievements have been made in various perfect- and imperfect-information games, DouDizhu (a.k.a. Fighting the Landlord), a three-player card game, is still unsolved. DouDizhu is a very challenging domain with competition, collaboration, imperfect information, large state space, and particularly a massive set of possible actions where the legal actions vary significantly from turn to turn. Unfortunately, modern reinforcement learning algorithms mainly focus on simple and small action spaces, and, not surprisingly, have been shown not to make satisfactory progress in DouDizhu. In this work, we propose a conceptually simple yet effective DouDizhu AI system, namely DouZero, which enhances traditional Monte-Carlo methods with deep neural networks, action encoding, and parallel actors. Starting from scratch in a single server with four GPUs, DouZero outperformed all the existing DouDizhu AI programs in days of training and was ranked first on the Botzone leaderboard among 344 AI agents. Through building DouZero, we show that classic Monte-Carlo methods can be made to deliver strong results in a hard domain with a complex action space. The code and an online demo are released at https://github.com/kwai/DouZero with the hope that this insight could motivate future work.' volume: 139 URL: https://proceedings.mlr.press/v139/zha21a.html PDF: http://proceedings.mlr.press/v139/zha21a/zha21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zha21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Daochen family: Zha - given: Jingru family: Xie - given: Wenye family: Ma - given: Sheng family: Zhang - given: Xiangru family: Lian - given: Xia family: Hu - given: Ji family: Liu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12333-12344 id: zha21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12333 lastpage: 12344 published: 2021-07-01 00:00:00 +0000 - title: 'DORO: Distributional and Outlier Robust Optimization' abstract: 'Many machine learning tasks involve subpopulation shift where the testing data distribution is a subpopulation of the training distribution. For such settings, a line of recent work has proposed the use of a variant of empirical risk minimization (ERM) known as distributionally robust optimization (DRO). 
In this work, we apply DRO to real, large-scale tasks with subpopulation shift, and observe that DRO performs relatively poorly, and moreover has severe instability. We identify one direct cause of this phenomenon: sensitivity of DRO to outliers in the datasets. To resolve this issue, we propose the framework of DORO, for Distributional and Outlier Robust Optimization. At the core of this approach is a refined risk function which prevents DRO from overfitting to potential outliers. We instantiate DORO for the Cressie-Read family of Rényi divergence, and delve into two specific instances of this family: CVaR and $\chi^2$-DRO. We theoretically prove the effectiveness of the proposed method, and empirically show that DORO improves the performance and stability of DRO with experiments on large modern datasets, thereby positively addressing the open question raised by Hashimoto et al., 2018. Codes are available at https://github.com/RuntianZ/doro.' volume: 139 URL: https://proceedings.mlr.press/v139/zhai21a.html PDF: http://proceedings.mlr.press/v139/zhai21a/zhai21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhai21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Runtian family: Zhai - given: Chen family: Dan - given: Zico family: Kolter - given: Pradeep family: Ravikumar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12345-12355 id: zhai21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12345 lastpage: 12355 published: 2021-07-01 00:00:00 +0000 - title: 'Can Subnetwork Structure Be the Key to Out-of-Distribution Generalization?' abstract: 'Can models with particular structure avoid being biased towards spurious correlation in out-of-distribution (OOD) generalization? Peters et al. (2016) provides a positive answer for linear cases. In this paper, we use a functional modular probing method to analyze deep model structures under OOD setting. We demonstrate that even in biased models (which focus on spurious correlation) there still exist unbiased functional subnetworks. Furthermore, we articulate and confirm the functional lottery ticket hypothesis: the full network contains a subnetwork with proper structure that can achieve better OOD performance. We then propose Modular Risk Minimization to solve the subnetwork selection problem. Our algorithm learns the functional structure from a given dataset, and can be combined with any other OOD regularization methods. Experiments on various OOD generalization tasks corroborate the effectiveness of our method.' 
volume: 139 URL: https://proceedings.mlr.press/v139/zhang21a.html PDF: http://proceedings.mlr.press/v139/zhang21a/zhang21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dinghuai family: Zhang - given: Kartik family: Ahuja - given: Yilun family: Xu - given: Yisen family: Wang - given: Aaron family: Courville editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12356-12367 id: zhang21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12356 lastpage: 12367 published: 2021-07-01 00:00:00 +0000 - title: 'Towards Certifying L-infinity Robustness using Neural Networks with L-inf-dist Neurons' abstract: 'It is well-known that standard neural networks, even with a high classification accuracy, are vulnerable to small $\ell_\infty$-norm bounded adversarial perturbations. Although many attempts have been made, most previous works either can only provide empirical verification of the defense against a particular attack method, or can only develop a certified guarantee of the model robustness in limited scenarios. In this paper, we seek a new approach to develop a theoretically principled neural network that inherently resists $\ell_\infty$ perturbations. In particular, we design a novel neuron that uses $\ell_\infty$-distance as its basic operation (which we call $\ell_\infty$-dist neuron), and show that any neural network constructed with $\ell_\infty$-dist neurons (called $\ell_{\infty}$-dist net) is naturally a 1-Lipschitz function with respect to $\ell_\infty$-norm. This directly provides a rigorous guarantee of the certified robustness based on the margin of prediction outputs. We then prove that such networks have enough expressive power to approximate any 1-Lipschitz function with a robust generalization guarantee. We further provide a holistic training strategy that can greatly alleviate optimization difficulties. Experimental results show that using $\ell_{\infty}$-dist nets as basic building blocks, we consistently achieve state-of-the-art performance on commonly used datasets: 93.09% certified accuracy on MNIST ($\epsilon=0.3$), 35.42% on CIFAR-10 ($\epsilon=8/255$) and 16.31% on TinyImageNet ($\epsilon=1/255$).' volume: 139 URL: https://proceedings.mlr.press/v139/zhang21b.html PDF: http://proceedings.mlr.press/v139/zhang21b/zhang21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Bohang family: Zhang - given: Tianle family: Cai - given: Zhou family: Lu - given: Di family: He - given: Liwei family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12368-12379 id: zhang21b issued: date-parts: - 2021 - 7 - 1 firstpage: 12368 lastpage: 12379 published: 2021-07-01 00:00:00 +0000 - title: 'Efficient Lottery Ticket Finding: Less Data is More' abstract: 'The lottery ticket hypothesis (LTH) reveals the existence of winning tickets (sparse but critical subnetworks) for dense networks that can be trained in isolation from random initialization to match the latter’s accuracies. 
However, finding winning tickets requires burdensome computations in the train-prune-retrain process, especially on large-scale datasets (e.g., ImageNet), restricting their practical benefits. This paper explores a new perspective on finding lottery tickets more efficiently, by doing so only with a specially selected subset of data, called the Pruning-Aware Critical set (PrAC set), rather than using the full training set. The concept of a PrAC set was inspired by the recent observation that deep networks have samples that are either hard to memorize during training or easy to forget during pruning. A PrAC set is thus hypothesized to capture those most challenging and informative examples for the dense model. We observe that a high-quality winning ticket can be found by training and pruning the dense network on the very compact PrAC set, which can substantially save training iterations for the ticket finding process. Extensive experiments validate our proposal across diverse datasets and network architectures. Specifically, on CIFAR-10, CIFAR-100, and Tiny ImageNet, we locate effective PrAC sets at 35.32%–78.19% of their training set sizes. On top of them, we can obtain the same competitive winning tickets for the corresponding dense networks, yet saving up to 82.85%–92.77%, 63.54%–74.92%, and 76.14%–86.56% of training iterations, respectively. Crucially, we show that a PrAC set, once found, is reusable across different network architectures, which can amortize the extra cost of finding PrAC sets, yielding a practical regime for efficient lottery ticket finding.' volume: 139 URL: https://proceedings.mlr.press/v139/zhang21c.html PDF: http://proceedings.mlr.press/v139/zhang21c/zhang21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zhenyu family: Zhang - given: Xuxi family: Chen - given: Tianlong family: Chen - given: Zhangyang family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12380-12390 id: zhang21c issued: date-parts: - 2021 - 7 - 1 firstpage: 12380 lastpage: 12390 published: 2021-07-01 00:00:00 +0000 - title: 'Robust Policy Gradient against Strong Data Corruption' abstract: 'We study the problem of robust reinforcement learning under adversarial corruption on both rewards and transitions. Our attack model assumes an \textit{adaptive} adversary who can arbitrarily corrupt the reward and transition at every step within an episode, for at most an $\epsilon$-fraction of the learning episodes. Our attack model is strictly stronger than those considered in prior works. Our first result shows that no algorithm can find a better than $O(\epsilon)$-optimal policy under our attack model. Next, we show that surprisingly the natural policy gradient (NPG) method retains a natural robustness property if the reward corruption is bounded, and can find an $O(\sqrt{\epsilon})$-optimal policy. Consequently, we develop a Filtered Policy Gradient (FPG) algorithm that can tolerate even unbounded reward corruption and can find an $O(\epsilon^{1/4})$-optimal policy. We emphasize that FPG is the first algorithm that can achieve a meaningful learning guarantee when a constant fraction of episodes are corrupted. Complementary to the theoretical results, we show that a neural implementation of FPG achieves strong robust learning performance on the MuJoCo continuous control benchmarks.' 
volume: 139 URL: https://proceedings.mlr.press/v139/zhang21d.html PDF: http://proceedings.mlr.press/v139/zhang21d/zhang21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xuezhou family: Zhang - given: Yiding family: Chen - given: Xiaojin family: Zhu - given: Wen family: Sun editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12391-12401 id: zhang21d issued: date-parts: - 2021 - 7 - 1 firstpage: 12391 lastpage: 12401 published: 2021-07-01 00:00:00 +0000 - title: 'Near Optimal Reward-Free Reinforcement Learning' abstract: 'We study the reward-free reinforcement learning framework, which is particularly suitable for batch reinforcement learning and scenarios where one needs policies for multiple reward functions. This framework has two phases: in the exploration phase, the agent collects trajectories by interacting with the environment without using any reward signal; in the planning phase, the agent needs to return a near-optimal policy for arbitrary reward functions. We give a new efficient algorithm, \textbf{S}taged \textbf{S}ampling + \textbf{T}runcated \textbf{P}lanning (SSTP), which interacts with the environment for at most $O\left( \frac{S^2A}{\epsilon^2}\mathrm{poly}\log\left(\frac{SAH}{\epsilon}\right) \right)$ episodes in the exploration phase, and guarantees to output a near-optimal policy for arbitrary reward functions in the planning phase, where $S$ is the size of the state space, $A$ is the size of the action space, $H$ is the planning horizon, and $\epsilon$ is the target accuracy relative to the total reward. Notably, our sample complexity scales only \emph{logarithmically} with $H$, in contrast to all existing results which scale \emph{polynomially} with $H$. Furthermore, this bound matches the minimax lower bound $\Omega\left(\frac{S^2A}{\epsilon^2}\right)$ up to logarithmic factors. Our results rely on three new techniques: 1) a new sufficient condition for the dataset to plan for an $\epsilon$-suboptimal policy; 2) a new way to plan efficiently under the proposed condition using soft-truncated planning; 3) constructing an extended MDP to maximize the truncated accumulative rewards efficiently.' volume: 139 URL: https://proceedings.mlr.press/v139/zhang21e.html PDF: http://proceedings.mlr.press/v139/zhang21e/zhang21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zihan family: Zhang - given: Simon family: Du - given: Xiangyang family: Ji editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12402-12412 id: zhang21e issued: date-parts: - 2021 - 7 - 1 firstpage: 12402 lastpage: 12412 published: 2021-07-01 00:00:00 +0000 - title: 'Bayesian Attention Belief Networks' abstract: 'Attention-based neural networks have achieved state-of-the-art results on a wide range of tasks. Most such models use deterministic attention, while stochastic attention is less explored due to optimization difficulties or complicated model design. 
This paper introduces Bayesian attention belief networks, which construct a decoder network by modeling unnormalized attention weights with a hierarchy of gamma distributions, and an encoder network by stacking Weibull distributions with a deterministic-upward-stochastic-downward structure to approximate the posterior. The resulting auto-encoding networks can be optimized in a differentiable way with a variational lower bound. It is simple to convert any models with deterministic attention, including pretrained ones, to the proposed Bayesian attention belief networks. On a variety of language understanding tasks, we show that our method outperforms deterministic attention and state-of-the-art stochastic attention in accuracy, uncertainty estimation, generalization across domains, and robustness to adversarial attacks. We further demonstrate the general applicability of our method on neural machine translation and visual question answering, showing great potential of incorporating our method into various attention-related tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/zhang21f.html PDF: http://proceedings.mlr.press/v139/zhang21f/zhang21f.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21f.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shujian family: Zhang - given: Xinjie family: Fan - given: Bo family: Chen - given: Mingyuan family: Zhou editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12413-12426 id: zhang21f issued: date-parts: - 2021 - 7 - 1 firstpage: 12413 lastpage: 12426 published: 2021-07-01 00:00:00 +0000 - title: 'Understanding Failures in Out-of-Distribution Detection with Deep Generative Models' abstract: 'Deep generative models (DGMs) seem a natural fit for detecting out-of-distribution (OOD) inputs, but such models have been shown to assign higher probabilities or densities to OOD images than images from the training distribution. In this work, we explain why this behavior should be attributed to model misestimation. We first prove that no method can guarantee performance beyond random chance without assumptions on which out-distributions are relevant. We then interrogate the typical set hypothesis, the claim that relevant out-distributions can lie in high likelihood regions of the data distribution, and that OOD detection should be defined based on the data distribution’s typical set. We highlight the consequences implied by assuming support overlap between in- and out-distributions, as well as the arbitrariness of the typical set for OOD detection. Our results suggest that estimation error is a more plausible explanation than the misalignment between likelihood-based OOD detection and out-distributions of interest, and we illustrate how even minimal estimation error can lead to OOD detection failures, yielding implications for future work in deep generative modeling and OOD detection.' 
volume: 139 URL: https://proceedings.mlr.press/v139/zhang21g.html PDF: http://proceedings.mlr.press/v139/zhang21g/zhang21g.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21g.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Lily family: Zhang - given: Mark family: Goldstein - given: Rajesh family: Ranganath editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12427-12436 id: zhang21g issued: date-parts: - 2021 - 7 - 1 firstpage: 12427 lastpage: 12436 published: 2021-07-01 00:00:00 +0000 - title: 'Poolingformer: Long Document Modeling with Pooling Attention' abstract: 'In this paper, we introduce a two-level attention schema, Poolingformer, for long document modeling. Its first level uses a smaller sliding window pattern to aggregate information from neighbors. Its second level employs a larger window to increase receptive fields with pooling attention to reduce both computational cost and memory consumption. We first evaluate Poolingformer on two long sequence QA tasks: the monolingual NQ and the multilingual TyDi QA. Experimental results show that Poolingformer sits atop three official leaderboards measured by F1, outperforming previous state-of-the-art models by 1.9 points (79.8 vs. 77.9) on NQ long answer, 1.9 points (79.5 vs. 77.6) on TyDi QA passage answer, and 1.6 points (67.6 vs. 66.0) on TyDi QA minimal answer. We further evaluate Poolingformer on a long sequence summarization task. Experimental results on the arXiv benchmark continue to demonstrate its superior performance.' volume: 139 URL: https://proceedings.mlr.press/v139/zhang21h.html PDF: http://proceedings.mlr.press/v139/zhang21h/zhang21h.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21h.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hang family: Zhang - given: Yeyun family: Gong - given: Yelong family: Shen - given: Weisheng family: Li - given: Jiancheng family: Lv - given: Nan family: Duan - given: Weizhu family: Chen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12437-12446 id: zhang21h issued: date-parts: - 2021 - 7 - 1 firstpage: 12437 lastpage: 12446 published: 2021-07-01 00:00:00 +0000 - title: 'Probabilistic Generating Circuits' abstract: 'Generating functions, which are widely used in combinatorics and probability theory, encode function values into the coefficients of a polynomial. In this paper, we explore their use as a tractable probabilistic model, and propose probabilistic generating circuits (PGCs) for their efficient representation. PGCs are strictly more expressive efficient than many existing tractable probabilistic models, including determinantal point processes (DPPs), probabilistic circuits (PCs) such as sum-product networks, and tractable graphical models. We contend that PGCs are not just a theoretical framework that unifies vastly different existing models, but also show great potential in modeling realistic data. We exhibit a simple class of PGCs that are not trivially subsumed by simple combinations of PCs and DPPs, and obtain competitive performance on a suite of density estimation benchmarks. We also highlight PGCs’ connection to the theory of strongly Rayleigh distributions.' 
volume: 139 URL: https://proceedings.mlr.press/v139/zhang21i.html PDF: http://proceedings.mlr.press/v139/zhang21i/zhang21i.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21i.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Honghua family: Zhang - given: Brendan family: Juba - given: Guy family: Van Den Broeck editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12447-12457 id: zhang21i issued: date-parts: - 2021 - 7 - 1 firstpage: 12447 lastpage: 12457 published: 2021-07-01 00:00:00 +0000 - title: 'PAPRIKA: Private Online False Discovery Rate Control' abstract: 'In hypothesis testing, a \emph{false discovery} occurs when a hypothesis is incorrectly rejected due to noise in the sample. When adaptively testing multiple hypotheses, the probability of a false discovery increases as more tests are performed. Thus the problem of \emph{False Discovery Rate (FDR) control} is to find a procedure for testing multiple hypotheses that accounts for this effect in determining the set of hypotheses to reject. The goal is to minimize the number (or fraction) of false discoveries, while maintaining a high true positive rate (i.e., correct discoveries). In this work, we study False Discovery Rate (FDR) control in multiple hypothesis testing under the constraint of differential privacy for the sample. Unlike previous work in this direction, we focus on the \emph{online setting}, meaning that a decision about each hypothesis must be made immediately after the test is performed, rather than waiting for the output of all tests as in the offline setting. We provide new private algorithms based on state-of-the-art results in non-private online FDR control. Our algorithms have strong provable guarantees for privacy and statistical performance as measured by FDR and power. We also provide experimental results to demonstrate the efficacy of our algorithms in a variety of data environments.' volume: 139 URL: https://proceedings.mlr.press/v139/zhang21j.html PDF: http://proceedings.mlr.press/v139/zhang21j/zhang21j.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21j.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Wanrong family: Zhang - given: Gautam family: Kamath - given: Rachel family: Cummings editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12458-12467 id: zhang21j issued: date-parts: - 2021 - 7 - 1 firstpage: 12458 lastpage: 12467 published: 2021-07-01 00:00:00 +0000 - title: 'Learning from Noisy Labels with No Change to the Training Process' abstract: 'There has been much interest in recent years in developing learning algorithms that can learn accurate classifiers from data with noisy labels. A widely-studied noise model is that of \emph{class-conditional noise} (CCN), wherein a label $y$ is flipped to a label $\tilde{y}$ with some associated noise probability that depends on both $y$ and $\tilde{y}$. In the multiclass setting, all previously proposed algorithms under the CCN model involve changing the training process, by introducing a ‘noise-correction’ to the surrogate loss to be minimized over the noisy training examples. 
In this paper, we show that this is in fact unnecessary: one can simply perform class probability estimation (CPE) on the noisy examples, e.g. using a standard (multiclass) logistic regression algorithm, and then apply noise-correction only in the final prediction step. This means that the training algorithm itself does not need any change, and one can simply use standard off-the-shelf implementations with no modification to the code for training. Our approach can handle general multiclass loss matrices, including the usual 0-1 loss but also other losses such as those used for ordinal regression problems. We also provide a quantitative regret transfer bound, which bounds the target regret on the true distribution in terms of the CPE regret on the noisy distribution; in doing so, we extend the notion of strong properness introduced for binary losses by Agarwal (2014) to the multiclass case. Our bound suggests that the sample complexity of learning under CCN increases as the noise matrix approaches singularity. We also provide fixes and potential improvements for noise estimation methods that involve computing anchor points. Our experiments confirm our theoretical findings.' volume: 139 URL: https://proceedings.mlr.press/v139/zhang21k.html PDF: http://proceedings.mlr.press/v139/zhang21k/zhang21k.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21k.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mingyuan family: Zhang - given: Jane family: Lee - given: Shivani family: Agarwal editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12468-12478 id: zhang21k issued: date-parts: - 2021 - 7 - 1 firstpage: 12468 lastpage: 12478 published: 2021-07-01 00:00:00 +0000 - title: 'Progressive-Scale Boundary Blackbox Attack via Projective Gradient Estimation' abstract: 'Boundary-based blackbox attacks have been recognized as practical and effective, given that an attacker only needs access to the final model prediction. However, such attacks generally require a large number of queries, especially for high-dimensional image data. In this paper, we show that query efficiency highly depends on the scale at which the attack is applied, and that attacking at the optimal scale significantly improves efficiency. In particular, we propose a theoretical framework to analyze the attack and show three key characteristics that improve query efficiency. We prove that there exists an optimal scale for projective gradient estimation. Our framework also explains the satisfactory performance achieved by existing boundary black-box attacks. Based on our theoretical framework, we propose Progressive-Scale enabled projective Boundary Attack (PSBA) to improve the query efficiency via progressive scaling techniques. In particular, we employ Progressive-GAN to optimize the scale of projections, which we call PSBA-PGAN. We evaluate our approach on both spatial and frequency scales. Extensive experiments on MNIST, CIFAR-10, CelebA, and ImageNet against different models including a real-world face recognition API show that PSBA-PGAN significantly outperforms existing baseline attacks in terms of query efficiency and attack success rate. We also observe relatively stable optimal scales for different models and datasets. The code is publicly available at https://github.com/AI-secure/PSBA.' 
volume: 139 URL: https://proceedings.mlr.press/v139/zhang21l.html PDF: http://proceedings.mlr.press/v139/zhang21l/zhang21l.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21l.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jiawei family: Zhang - given: Linyi family: Li - given: Huichen family: Li - given: Xiaolu family: Zhang - given: Shuang family: Yang - given: Bo family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12479-12490 id: zhang21l issued: date-parts: - 2021 - 7 - 1 firstpage: 12479 lastpage: 12490 published: 2021-07-01 00:00:00 +0000 - title: 'FOP: Factorizing Optimal Joint Policy of Maximum-Entropy Multi-Agent Reinforcement Learning' abstract: 'Value decomposition has recently injected new vitality into multi-agent actor-critic methods. However, existing decomposed actor-critic methods cannot guarantee convergence to the global optimum. In this paper, we present a novel multi-agent actor-critic method, FOP, which can factorize the optimal joint policy induced by maximum-entropy multi-agent reinforcement learning (MARL) into individual policies. Theoretically, we prove that factorized individual policies of FOP converge to the global optimum. Empirically, in the well-known matrix game and differential game, we verify that FOP can converge to the global optimum for both discrete and continuous action spaces. We also evaluate FOP on a set of StarCraft II micromanagement tasks, and demonstrate that FOP substantially outperforms state-of-the-art decomposed value-based and actor-critic methods.' volume: 139 URL: https://proceedings.mlr.press/v139/zhang21m.html PDF: http://proceedings.mlr.press/v139/zhang21m/zhang21m.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21m.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tianhao family: Zhang - given: Yueheng family: Li - given: Chen family: Wang - given: Guangming family: Xie - given: Zongqing family: Lu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12491-12500 id: zhang21m issued: date-parts: - 2021 - 7 - 1 firstpage: 12491 lastpage: 12500 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization' abstract: 'Many weakly supervised classification methods employ a noise transition matrix to capture the class-conditional label corruption. To estimate the transition matrix from noisy data, existing methods often need to estimate the noisy class-posterior, which could be unreliable due to the overconfidence of neural networks. In this work, we propose a theoretically grounded method that can estimate the noise transition matrix and learn a classifier simultaneously, without relying on the error-prone noisy class-posterior estimation. Concretely, inspired by the characteristics of the stochastic label corruption process, we propose total variation regularization, which encourages the predicted probabilities to be more distinguishable from each other. Under mild assumptions, the proposed method yields a consistent estimator of the transition matrix. We show the effectiveness of the proposed method through experiments on benchmark and real-world datasets.' 
volume: 139 URL: https://proceedings.mlr.press/v139/zhang21n.html PDF: http://proceedings.mlr.press/v139/zhang21n/zhang21n.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21n.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yivan family: Zhang - given: Gang family: Niu - given: Masashi family: Sugiyama editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12501-12512 id: zhang21n issued: date-parts: - 2021 - 7 - 1 firstpage: 12501 lastpage: 12512 published: 2021-07-01 00:00:00 +0000 - title: 'Quantile Bandits for Best Arms Identification' abstract: 'We consider a variant of the best arm identification task in stochastic multi-armed bandits. Motivated by risk-averse decision-making problems, our goal is to identify a set of $m$ arms with the highest $\tau$-quantile values within a fixed budget. We prove asymmetric two-sided concentration inequalities for order statistics and quantiles of random variables that have non-decreasing hazard rate, which may be of independent interest. With these inequalities, we analyse a quantile version of Successive Accepts and Rejects (Q-SAR). We derive an upper bound for the probability of arm misidentification, the first justification of a quantile-based algorithm for fixed-budget multiple best arms identification. We show illustrative experiments for best arm identification.' volume: 139 URL: https://proceedings.mlr.press/v139/zhang21o.html PDF: http://proceedings.mlr.press/v139/zhang21o/zhang21o.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21o.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mengyan family: Zhang - given: Cheng Soon family: Ong editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12513-12523 id: zhang21o issued: date-parts: - 2021 - 7 - 1 firstpage: 12513 lastpage: 12523 published: 2021-07-01 00:00:00 +0000 - title: 'Towards Better Robust Generalization with Shift Consistency Regularization' abstract: 'While adversarial training has become one of the most promising defense approaches against adversarial attacks on deep neural networks, the conventional approach of robust optimization may not guarantee good generalization of robustness. Concerning robust generalization over unseen adversarial data, this paper investigates adversarial training from a novel perspective of shift consistency in latent space. We argue that the poor robust generalization of adversarial training is due to the significantly dispersed latent representations generated by training and test adversarial data, as the adversarial perturbations push the latent features of natural examples in the same class towards diverse directions. This is underpinned by the theoretical analysis of the robust generalization gap, which is upper-bounded by the standard one over the natural data and a term of inconsistent feature shift caused by adversarial perturbation, a measure of latent dispersion. Towards better robust generalization, we propose a new regularization method, shift consistency regularization (SCR), to steer the same-class latent features of both natural and adversarial data into a common direction during adversarial training. 
The effectiveness of SCR in adversarial training is evaluated through extensive experiments over different datasets, such as CIFAR-10, CIFAR-100, and SVHN, against several competitive methods.' volume: 139 URL: https://proceedings.mlr.press/v139/zhang21p.html PDF: http://proceedings.mlr.press/v139/zhang21p/zhang21p.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21p.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shufei family: Zhang - given: Zhuang family: Qian - given: Kaizhu family: Huang - given: Qiufeng family: Wang - given: Rui family: Zhang - given: Xinping family: Yi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12524-12534 id: zhang21p issued: date-parts: - 2021 - 7 - 1 firstpage: 12524 lastpage: 12534 published: 2021-07-01 00:00:00 +0000 - title: 'On-Policy Deep Reinforcement Learning for the Average-Reward Criterion' abstract: 'We develop theory and algorithms for average-reward on-policy Reinforcement Learning (RL). We first consider bounding the difference of the long-term average reward for two policies. We show that previous work based on the discounted return (Schulman et al. 2015, Achiam et al. 2017) results in a non-meaningful lower bound in the average reward setting. By addressing the average-reward criterion directly, we then derive a novel bound which depends on the average divergence between the policies and on Kemeny’s constant. Based on this bound, we develop an iterative procedure which produces a sequence of monotonically improved policies for the average reward criterion. This iterative procedure can then be combined with classic Deep Reinforcement Learning (DRL) methods, resulting in practical DRL algorithms that target the long-run average reward criterion. In particular, we demonstrate that Average-Reward TRPO (ATRPO), which adapts the on-policy TRPO algorithm to the average-reward criterion, significantly outperforms TRPO in the most challenging MuJoCo environments.' volume: 139 URL: https://proceedings.mlr.press/v139/zhang21q.html PDF: http://proceedings.mlr.press/v139/zhang21q/zhang21q.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21q.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yiming family: Zhang - given: Keith W family: Ross editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12535-12545 id: zhang21q issued: date-parts: - 2021 - 7 - 1 firstpage: 12535 lastpage: 12545 published: 2021-07-01 00:00:00 +0000 - title: 'Differentiable Dynamic Quantization with Mixed Precision and Adaptive Resolution' abstract: 'Model quantization is challenging due to many tedious hyper-parameters such as precision (bitwidth), dynamic range (minimum and maximum discrete values) and stepsize (interval between discrete values). Unlike prior arts that carefully tune these values, we present a fully differentiable approach to learn all of them, named Differentiable Dynamic Quantization (DDQ), which has several benefits. (1) DDQ is able to quantize challenging lightweight architectures like MobileNets, where different layers prefer different quantization parameters. 
(2) DDQ is hardware-friendly and can be easily implemented using low-precision matrix-vector multiplication, making it deployable on many hardware platforms such as ARM. (3) Extensive experiments show that DDQ outperforms prior arts on many networks and benchmarks, especially when models are already efficient and compact; e.g., DDQ is the first approach that achieves lossless 4-bit quantization for MobileNetV2 on ImageNet.' volume: 139 URL: https://proceedings.mlr.press/v139/zhang21r.html PDF: http://proceedings.mlr.press/v139/zhang21r/zhang21r.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21r.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zhaoyang family: Zhang - given: Wenqi family: Shao - given: Jinwei family: Gu - given: Xiaogang family: Wang - given: Ping family: Luo editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12546-12556 id: zhang21r issued: date-parts: - 2021 - 7 - 1 firstpage: 12546 lastpage: 12556 published: 2021-07-01 00:00:00 +0000 - title: 'iDARTS: Differentiable Architecture Search with Stochastic Implicit Gradients' abstract: 'Differentiable ARchiTecture Search (DARTS) has recently become the mainstream in neural architecture search (NAS) due to its efficiency and simplicity. With a gradient-based bi-level optimization, DARTS alternately optimizes the inner model weights and the outer architecture parameter in a weight-sharing supernet. A key challenge to the scalability and quality of the learned architectures is the need for differentiating through the inner-loop optimisation. While much has been discussed about several potentially fatal factors in DARTS, the architecture gradient, a.k.a. hypergradient, has received less attention. In this paper, we tackle the hypergradient computation in DARTS based on the implicit function theorem, making it depend only on the obtained solution to the inner-loop optimization and agnostic to the optimization path. To further reduce the computational requirements, we formulate a stochastic hypergradient approximation for differentiable NAS, and theoretically show that the architecture optimization with the proposed method is expected to converge to a stationary point. Comprehensive experiments on two NAS benchmark search spaces and the common NAS search space verify the effectiveness of our proposed method. It leads to architectures outperforming, with large margins, those learned by the baseline methods.' volume: 139 URL: https://proceedings.mlr.press/v139/zhang21s.html PDF: http://proceedings.mlr.press/v139/zhang21s/zhang21s.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21s.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Miao family: Zhang - given: Steven W. 
family: Su - given: Shirui family: Pan - given: Xiaojun family: Chang - given: Ehsan M family: Abbasnejad - given: Reza family: Haffari editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12557-12566 id: zhang21s issued: date-parts: - 2021 - 7 - 1 firstpage: 12557 lastpage: 12566 published: 2021-07-01 00:00:00 +0000 - title: 'Deep Coherent Exploration for Continuous Control' abstract: 'In policy search methods for reinforcement learning (RL), exploration is often performed by injecting noise either in action space at each step independently or in parameter space over each full trajectory. In prior work, it has been shown that with linear policies, a more balanced trade-off between these two exploration strategies is beneficial. However, that method did not scale to policies using deep neural networks. In this paper, we introduce deep coherent exploration, a general and scalable exploration framework for deep RL algorithms for continuous control, that generalizes step-based and trajectory-based exploration. This framework models the last layer parameters of the policy network as latent variables and uses a recursive inference step within the policy update to handle these latent variables in a scalable manner. We find that deep coherent exploration improves the speed and stability of learning of A2C, PPO, and SAC on several continuous control tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/zhang21t.html PDF: http://proceedings.mlr.press/v139/zhang21t/zhang21t.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21t.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yijie family: Zhang - given: Herke family: Van Hoof editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12567-12577 id: zhang21t issued: date-parts: - 2021 - 7 - 1 firstpage: 12567 lastpage: 12577 published: 2021-07-01 00:00:00 +0000 - title: 'Average-Reward Off-Policy Policy Evaluation with Function Approximation' abstract: 'We consider off-policy policy evaluation with function approximation (FA) in average-reward MDPs, where the goal is to estimate both the reward rate and the differential value function. For this problem, bootstrapping is necessary and, along with off-policy learning and FA, results in the deadly triad (Sutton & Barto, 2018). To address the deadly triad, we propose two novel algorithms, reproducing the celebrated success of Gradient TD algorithms in the average-reward setting. In terms of estimating the differential value function, the algorithms are the first convergent off-policy linear function approximation algorithms. In terms of estimating the reward rate, the algorithms are the first convergent off-policy linear function approximation algorithms that do not require estimating the density ratio. We demonstrate empirically the advantage of the proposed algorithms, as well as their nonlinear variants, over a competitive density-ratio-based approach, in a simple domain as well as challenging robot simulation tasks.' 
volume: 139 URL: https://proceedings.mlr.press/v139/zhang21u.html PDF: http://proceedings.mlr.press/v139/zhang21u/zhang21u.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21u.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shangtong family: Zhang - given: Yi family: Wan - given: Richard S family: Sutton - given: Shimon family: Whiteson editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12578-12588 id: zhang21u issued: date-parts: - 2021 - 7 - 1 firstpage: 12578 lastpage: 12588 published: 2021-07-01 00:00:00 +0000 - title: 'Matrix Sketching for Secure Collaborative Machine Learning' abstract: 'Collaborative learning allows participants to jointly train a model without data sharing. To update the model parameters, the central server broadcasts model parameters to the clients, and the clients send updating directions such as gradients to the server. While data do not leave a client device, the communicated gradients and parameters will leak a client’s privacy. Attacks that infer clients’ privacy from gradients and parameters have been developed by prior work. Simple defenses such as dropout and differential privacy either fail to defend the attacks or seriously hurt test accuracy. We propose a practical defense which we call Double-Blind Collaborative Learning (DBCL). The high-level idea is to apply random matrix sketching to the parameters (aka weights) and re-generate random sketching after each iteration. DBCL prevents clients from conducting gradient-based privacy inferences which are the most effective attacks. DBCL works because from the attacker’s perspective, sketching is effectively random noise that outweighs the signal. Notably, DBCL does not much increase computation and communication costs and does not hurt test accuracy at all.' volume: 139 URL: https://proceedings.mlr.press/v139/zhang21v.html PDF: http://proceedings.mlr.press/v139/zhang21v/zhang21v.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21v.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mengjiao family: Zhang - given: Shusen family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12589-12599 id: zhang21v issued: date-parts: - 2021 - 7 - 1 firstpage: 12589 lastpage: 12599 published: 2021-07-01 00:00:00 +0000 - title: 'MetaCURE: Meta Reinforcement Learning with Empowerment-Driven Exploration' abstract: 'Meta reinforcement learning (meta-RL) extracts knowledge from previous tasks and achieves fast adaptation to new tasks. Despite recent progress, efficient exploration in meta-RL remains a key challenge in sparse-reward tasks, as it requires quickly finding informative task-relevant experiences in both meta-training and adaptation. To address this challenge, we explicitly model an exploration policy learning problem for meta-RL, which is separated from exploitation policy learning, and introduce a novel empowerment-driven exploration objective, which aims to maximize information gain for task identification. We derive a corresponding intrinsic reward and develop a new off-policy meta-RL framework, which efficiently learns separate context-aware exploration and exploitation policies by sharing the knowledge of task inference. 
Experimental evaluation shows that our meta-RL method significantly outperforms state-of-the-art baselines on various sparse-reward MuJoCo locomotion tasks and more complex sparse-reward Meta-World tasks.' volume: 139 URL: https://proceedings.mlr.press/v139/zhang21w.html PDF: http://proceedings.mlr.press/v139/zhang21w/zhang21w.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21w.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Jin family: Zhang - given: Jianhao family: Wang - given: Hao family: Hu - given: Tong family: Chen - given: Yingfeng family: Chen - given: Changjie family: Fan - given: Chongjie family: Zhang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12600-12610 id: zhang21w issued: date-parts: - 2021 - 7 - 1 firstpage: 12600 lastpage: 12610 published: 2021-07-01 00:00:00 +0000 - title: 'World Model as a Graph: Learning Latent Landmarks for Planning' abstract: 'Planning, the ability to analyze the structure of a problem in the large and decompose it into interrelated subproblems, is a hallmark of human intelligence. While deep reinforcement learning (RL) has shown great promise for solving relatively straightforward control tasks, it remains an open problem how to best incorporate planning into existing deep RL paradigms to handle increasingly complex environments. One prominent framework, Model-Based RL, learns a world model and plans using step-by-step virtual rollouts. This type of world model quickly diverges from reality when the planning horizon increases, thus struggling at long-horizon planning. How can we learn world models that endow agents with the ability to do temporally extended reasoning? In this work, we propose to learn graph-structured world models composed of sparse, multi-step transitions. We devise a novel algorithm to learn latent landmarks that are scattered (in terms of reachability) across the goal space as the nodes on the graph. In this same graph, the edges are the reachability estimates distilled from Q-functions. On a variety of high-dimensional continuous control tasks ranging from robotic manipulation to navigation, we demonstrate that our method, named L3P, significantly outperforms prior work, and is oftentimes the only method capable of leveraging both the robustness of model-free RL and generalization of graph-search algorithms. We believe our work is an important step towards scalable planning in reinforcement learning.' volume: 139 URL: https://proceedings.mlr.press/v139/zhang21x.html PDF: http://proceedings.mlr.press/v139/zhang21x/zhang21x.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21x.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Lunjun family: Zhang - given: Ge family: Yang - given: Bradly C family: Stadie editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12611-12620 id: zhang21x issued: date-parts: - 2021 - 7 - 1 firstpage: 12611 lastpage: 12620 published: 2021-07-01 00:00:00 +0000 - title: 'Breaking the Deadly Triad with a Target Network' abstract: 'The deadly triad refers to the instability of a reinforcement learning algorithm when it employs off-policy learning, function approximation, and bootstrapping simultaneously. 
In this paper, we investigate the target network as a tool for breaking the deadly triad, providing theoretical support for the conventional wisdom that a target network stabilizes training. We first propose and analyze a novel target network update rule which augments the commonly used Polyak-averaging style update with two projections. We then apply the target network and ridge regularization in several divergent algorithms and show their convergence to regularized TD fixed points. Those algorithms are off-policy with linear function approximation and bootstrapping, spanning both policy evaluation and control, as well as both discounted and average-reward settings. In particular, we provide the first convergent linear $Q$-learning algorithms under nonrestrictive and changing behavior policies without bi-level optimization.' volume: 139 URL: https://proceedings.mlr.press/v139/zhang21y.html PDF: http://proceedings.mlr.press/v139/zhang21y/zhang21y.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21y.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shangtong family: Zhang - given: Hengshuai family: Yao - given: Shimon family: Whiteson editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12621-12631 id: zhang21y issued: date-parts: - 2021 - 7 - 1 firstpage: 12621 lastpage: 12631 published: 2021-07-01 00:00:00 +0000 - title: 'Multiscale Invertible Generative Networks for High-Dimensional Bayesian Inference' abstract: 'We propose a Multiscale Invertible Generative Network (MsIGN) and associated training algorithm that leverages multiscale structure to solve high-dimensional Bayesian inference. To address the curse of dimensionality, MsIGN exploits the low-dimensional nature of the posterior, and generates samples from coarse to fine scale (low to high dimension) by iteratively upsampling and refining samples. MsIGN is trained in a multi-stage manner to minimize the Jeffreys divergence, which avoids mode dropping in high-dimensional cases. On two high-dimensional Bayesian inverse problems, we show superior performance of MsIGN over previous approaches in posterior approximation and multiple mode capture. On the natural image synthesis task, MsIGN achieves superior performance in bits-per-dimension over baseline models and yields great interpretability of its neurons in intermediate layers.' volume: 139 URL: https://proceedings.mlr.press/v139/zhang21z.html PDF: http://proceedings.mlr.press/v139/zhang21z/zhang21z.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21z.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Shumao family: Zhang - given: Pengchuan family: Zhang - given: Thomas Y family: Hou editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12632-12641 id: zhang21z issued: date-parts: - 2021 - 7 - 1 firstpage: 12632 lastpage: 12641 published: 2021-07-01 00:00:00 +0000 - title: 'Meta Learning for Support Recovery in High-dimensional Precision Matrix Estimation' abstract: 'In this paper, we study meta learning for support (i.e., the set of non-zero entries) recovery in high-dimensional precision matrix estimation where we reduce the sufficient sample complexity in a novel task with the information learned from other auxiliary tasks. 
In our setup, each task has a different random true precision matrix, each with a possibly different support. We assume that the union of the supports of all the true precision matrices (i.e., the true support union) is small in size. We propose to pool all the samples from different tasks, and \emph{improperly} estimate a single precision matrix by minimizing the $\ell_1$-regularized log-determinant Bregman divergence. We show that with high probability, the support of the \emph{improperly} estimated single precision matrix is equal to the true support union, provided a sufficient number of samples per task $n \in O((\log N)/K)$, for $N$-dimensional vectors and $K$ tasks. That is, one requires less samples per task when more tasks are available. We prove a matching information-theoretic lower bound for the necessary number of samples, which is $n \in \Omega((\log N)/K)$, and thus, our algorithm is minimax optimal. Then for the novel task, we prove that the minimization of the $\ell_1$-regularized log-determinant Bregman divergence with the additional constraint that the support is a subset of the estimated support union could reduce the sufficient sample complexity of successful support recovery to $O(\log(|S_{\text{off}}|))$ where $|S_{\text{off}}|$ is the number of off-diagonal elements in the support union and is much less than $N$ for sparse matrices. We also prove a matching information-theoretic lower bound of $\Omega(\log(|S_{\text{off}}|))$ for the necessary number of samples.' volume: 139 URL: https://proceedings.mlr.press/v139/zhang21aa.html PDF: http://proceedings.mlr.press/v139/zhang21aa/zhang21aa.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21aa.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Qian family: Zhang - given: Yilin family: Zheng - given: Jean family: Honorio editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12642-12652 id: zhang21aa issued: date-parts: - 2021 - 7 - 1 firstpage: 12642 lastpage: 12652 published: 2021-07-01 00:00:00 +0000 - title: 'Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample Complexity' abstract: 'In this paper we consider the problem of learning an $\epsilon$-optimal policy for a discounted Markov Decision Process (MDP). Given an MDP with $S$ states, $A$ actions, the discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy with sample complexity $\tilde{O}(\frac{SA\ln(1/p)}{\epsilon^2(1-\gamma)^{5.5}})$ \footnote{In this work, the notation $\tilde{O}(\cdot)$ hides poly-logarithmic factors of $S,A,1/(1-\gamma)$, and $1/\epsilon$.} and success probability $(1-p)$. For small enough $\epsilon$, we show an improved algorithm with sample complexity $\tilde{O}(\frac{SA\ln(1/p)}{\epsilon^2(1-\gamma)^{3}})$. While the first bound improves upon all known model-free algorithms and model-based ones with tight dependence on $S$, our second algorithm beats all known sample complexity bounds and matches the information theoretic lower bound up to logarithmic factors.' 
volume: 139 URL: https://proceedings.mlr.press/v139/zhang21ab.html PDF: http://proceedings.mlr.press/v139/zhang21ab/zhang21ab.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21ab.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zihan family: Zhang - given: Yuan family: Zhou - given: Xiangyang family: Ji editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12653-12662 id: zhang21ab issued: date-parts: - 2021 - 7 - 1 firstpage: 12653 lastpage: 12662 published: 2021-07-01 00:00:00 +0000 - title: 'Learning to Rehearse in Long Sequence Memorization' abstract: 'Existing reasoning tasks often have an important assumption that the input contents can be always accessed while reasoning, requiring unlimited storage resources and suffering from severe time delay on long sequences. To achieve efficient reasoning on long sequences with limited storage resources, memory augmented neural networks introduce a human-like write-read memory to compress and memorize the long input sequence in one pass, trying to answer subsequent queries only based on the memory. But they have two serious drawbacks: 1) they continually update the memory from current information and inevitably forget the early contents; 2) they do not distinguish what information is important and treat all contents equally. In this paper, we propose the Rehearsal Memory (RM) to enhance long-sequence memorization by self-supervised rehearsal with a history sampler. To alleviate the gradual forgetting of early information, we design self-supervised rehearsal training with recollection and familiarity tasks. Further, we design a history sampler to select informative fragments for rehearsal training, making the memory focus on the crucial information. We evaluate the performance of our rehearsal memory by the synthetic bAbI task and several downstream tasks, including text/video question answering and recommendation on long sequences.' volume: 139 URL: https://proceedings.mlr.press/v139/zhang21ac.html PDF: http://proceedings.mlr.press/v139/zhang21ac/zhang21ac.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhang21ac.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zhu family: Zhang - given: Chang family: Zhou - given: Jianxin family: Ma - given: Zhijie family: Lin - given: Jingren family: Zhou - given: Hongxia family: Yang - given: Zhou family: Zhao editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12663-12673 id: zhang21ac issued: date-parts: - 2021 - 7 - 1 firstpage: 12663 lastpage: 12673 published: 2021-07-01 00:00:00 +0000 - title: 'Dataset Condensation with Differentiable Siamese Augmentation' abstract: 'In many machine learning problems, large-scale datasets have become the de-facto standard to train state-of-the-art deep networks at the price of heavy computation load. In this paper, we focus on condensing large training sets into significantly smaller synthetic sets which can be used to train deep neural networks from scratch with minimum drop in performance. 
Inspired from the recent training set synthesis methods, we propose Differentiable Siamese Augmentation that enables effective use of data augmentation to synthesize more informative synthetic images and thus achieves better performance when training networks with augmentations. Experiments on multiple image classification benchmarks demonstrate that the proposed method obtains substantial gains over the state-of-the-art, 7% improvements on CIFAR10 and CIFAR100 datasets. We show with only less than 1% data that our method achieves 99.6%, 94.9%, 88.5%, 71.5% relative performance on MNIST, FashionMNIST, SVHN, CIFAR10 respectively. We also explore the use of our method in continual learning and neural architecture search, and show promising results.' volume: 139 URL: https://proceedings.mlr.press/v139/zhao21a.html PDF: http://proceedings.mlr.press/v139/zhao21a/zhao21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhao21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Bo family: Zhao - given: Hakan family: Bilen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12674-12685 id: zhao21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12674 lastpage: 12685 published: 2021-07-01 00:00:00 +0000 - title: 'Joining datasets via data augmentation in the label space for neural networks' abstract: 'Most, if not all, modern deep learning systems restrict themselves to a single dataset for neural network training and inference. In this article, we are interested in systematic ways to join datasets that are made of similar purposes. Unlike previous published works that ubiquitously conduct the dataset joining in the uninterpretable latent vectorial space, the core to our method is an augmentation procedure in the label space. The primary challenge to address the label space for dataset joining is the discrepancy between labels: non-overlapping label annotation sets, different labeling granularity or hierarchy and etc. Notably we propose a new technique leveraging artificially created knowledge graph, recurrent neural networks and policy gradient that successfully achieve the dataset joining in the label space. Empirical results on both image and text classification justify the validity of our approach.' volume: 139 URL: https://proceedings.mlr.press/v139/zhao21b.html PDF: http://proceedings.mlr.press/v139/zhao21b/zhao21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhao21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Junbo family: Zhao - given: Mingfeng family: Ou - given: Linji family: Xue - given: Yunkai family: Cui - given: Sai family: Wu - given: Gang family: Chen editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12686-12696 id: zhao21b issued: date-parts: - 2021 - 7 - 1 firstpage: 12686 lastpage: 12696 published: 2021-07-01 00:00:00 +0000 - title: 'Calibrate Before Use: Improving Few-shot Performance of Language Models' abstract: 'GPT-3 can perform numerous tasks when provided a natural language prompt that contains a few training examples. 
We show that this type of few-shot learning can be unstable: the choice of prompt format, training examples, and even the order of the examples can cause accuracy to vary from near chance to near state-of-the-art. We demonstrate that this instability arises from the bias of language models towards predicting certain answers, e.g., those that are placed near the end of the prompt or are common in the pre-training data. To mitigate this, we first estimate the model’s bias towards each answer by asking for its prediction when given a training prompt and a content-free test input such as "N/A". We then fit calibration parameters that cause the prediction for this input to be uniform across answers. On a diverse set of tasks, this contextual calibration procedure substantially improves GPT-3 and GPT-2’s accuracy (up to 30.0% absolute) across different choices of the prompt, while also making learning considerably more stable.' volume: 139 URL: https://proceedings.mlr.press/v139/zhao21c.html PDF: http://proceedings.mlr.press/v139/zhao21c/zhao21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhao21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zihao family: Zhao - given: Eric family: Wallace - given: Shi family: Feng - given: Dan family: Klein - given: Sameer family: Singh editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12697-12706 id: zhao21c issued: date-parts: - 2021 - 7 - 1 firstpage: 12697 lastpage: 12706 published: 2021-07-01 00:00:00 +0000 - title: 'Few-Shot Neural Architecture Search' abstract: 'Efficient evaluation of a network architecture drawn from a large search space remains a key challenge in Neural Architecture Search (NAS). Vanilla NAS evaluates each architecture by training from scratch, which gives the true performance but is extremely time-consuming. Recently, one-shot NAS substantially reduces the computation cost by training only one supernetwork, a.k.a. supernet, to approximate the performance of every architecture in the search space via weight-sharing. However, the performance estimation can be very inaccurate due to the co-adaption among operations. In this paper, we propose few-shot NAS that uses multiple supernetworks, called sub-supernet, each covering different regions of the search space to alleviate the undesired co-adaption. Compared to one-shot NAS, few-shot NAS improves the accuracy of architecture evaluation with a small increase of evaluation cost. With only up to 7 sub-supernets, few-shot NAS establishes new SoTAs: on ImageNet, it finds models that reach 80.5% top-1 accuracy at 600 MB FLOPS and 77.5% top-1 accuracy at 238 MFLOPS; on CIFAR10, it reaches 98.72% top-1 accuracy without using extra data or transfer learning. In Auto-GAN, few-shot NAS outperforms the previously published results by up to 20%. Extensive experiments show that few-shot NAS significantly improves various one-shot methods, including 4 gradient-based and 6 search-based methods on 3 different tasks in NasBench-201 and NasBench1-shot-1.' 
volume: 139 URL: https://proceedings.mlr.press/v139/zhao21d.html PDF: http://proceedings.mlr.press/v139/zhao21d/zhao21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhao21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Yiyang family: Zhao - given: Linnan family: Wang - given: Yuandong family: Tian - given: Rodrigo family: Fonseca - given: Tian family: Guo editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12707-12718 id: zhao21d issued: date-parts: - 2021 - 7 - 1 firstpage: 12707 lastpage: 12718 published: 2021-07-01 00:00:00 +0000 - title: 'Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks' abstract: 'Recent findings have shown multiple graph learning models, such as graph classification and graph matching, are highly vulnerable to adversarial attacks, i.e. small input perturbations in graph structures and node attributes can cause the model failures. Existing defense techniques often defend specific attacks on particular multiple graph learning tasks. This paper proposes an attack-agnostic graph-adaptive 1-Lipschitz neural network, ERNN, for improving the robustness of deep multiple graph learning while achieving remarkable expressive power. A K_l-Lipschitz Weibull activation function is designed to enforce the gradient norm as K_l at layer l. The nearest matrix orthogonalization and polar decomposition techniques are utilized to constraint the weight norm as 1/K_l and make the norm-constrained weight close to the original weight. The theoretical analysis is conducted to derive lower and upper bounds of feasible K_l under the 1-Lipschitz constraint. The combination of norm-constrained weight and activation function leads to the 1-Lipschitz neural network for expressive and robust multiple graph learning.' volume: 139 URL: https://proceedings.mlr.press/v139/zhao21e.html PDF: http://proceedings.mlr.press/v139/zhao21e/zhao21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhao21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xin family: Zhao - given: Zeru family: Zhang - given: Zijie family: Zhang - given: Lingfei family: Wu - given: Jiayin family: Jin - given: Yang family: Zhou - given: Ruoming family: Jin - given: Dejing family: Dou - given: Da family: Yan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12719-12735 id: zhao21e issued: date-parts: - 2021 - 7 - 1 firstpage: 12719 lastpage: 12735 published: 2021-07-01 00:00:00 +0000 - title: 'Fused Acoustic and Text Encoding for Multimodal Bilingual Pretraining and Speech Translation' abstract: 'Recently, representation learning for text and speech has successfully improved many language related tasks. However, all existing methods suffer from two limitations: (a) they only learn from one input modality, while a unified representation for both speech and text is needed by tasks such as end-to-end speech translation, and as a result, (b) they can not exploit various large-scale text and speech data and their performance is limited by the scarcity of parallel speech translation data. 
To address these problems, we propose a Fused Acoustic and Text Masked Language Model (FAT-MLM) which jointly learns a unified representation for both acoustic and text input from various types of corpora including parallel data for speech recognition and machine translation, and even pure speech and text data. Within this cross-modal representation learning framework, we further present an end-to-end model for Fused Acoustic and Text Speech Translation (FAT-ST). Experiments on three translation directions show that by fine-tuning from FAT-MLM, our proposed speech translation models substantially improve translation quality by up to +5.9 BLEU.' volume: 139 URL: https://proceedings.mlr.press/v139/zheng21a.html PDF: http://proceedings.mlr.press/v139/zheng21a/zheng21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zheng21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Renjie family: Zheng - given: Junkun family: Chen - given: Mingbo family: Ma - given: Liang family: Huang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12736-12746 id: zheng21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12736 lastpage: 12746 published: 2021-07-01 00:00:00 +0000 - title: 'Two Heads are Better Than One: Hypergraph-Enhanced Graph Reasoning for Visual Event Ratiocination' abstract: 'Even with a still image, humans can ratiocinate various visual cause-and-effect descriptions before, at present, and after, as well as beyond the given image. However, it is challenging for models to achieve such task–the visual event ratiocination, owing to the limitations of time and space. To this end, we propose a novel multi-modal model, Hypergraph-Enhanced Graph Reasoning. First it represents the contents from the same modality as a semantic graph and mines the intra-modality relationship, therefore breaking the limitations in the spatial domain. Then, we introduce the Graph Self-Attention Enhancement. On the one hand, this enables semantic graph representations from different modalities to enhance each other and captures the inter-modality relationship along the line. On the other hand, it utilizes our built multi-modal hypergraphs in different moments to boost individual semantic graph representations, and breaks the limitations in the temporal domain. Our method illustrates the case of "two heads are better than one" in the sense that semantic graph representations with the help of the proposed enhancement mechanism are more robust than those without. Finally, we re-project these representations and leverage their outcomes to generate textual cause-and-effect descriptions. Experimental results show that our model achieves significantly higher performance in comparison with other state-of-the-arts.' 
volume: 139 URL: https://proceedings.mlr.press/v139/zheng21b.html PDF: http://proceedings.mlr.press/v139/zheng21b/zheng21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zheng21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Wenbo family: Zheng - given: Lan family: Yan - given: Chao family: Gou - given: Fei-Yue family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12747-12760 id: zheng21b issued: date-parts: - 2021 - 7 - 1 firstpage: 12747 lastpage: 12760 published: 2021-07-01 00:00:00 +0000 - title: 'How Framelets Enhance Graph Neural Networks' abstract: 'This paper presents a new approach for assembling graph neural networks based on framelet transforms. The latter provides a multi-scale representation for graph-structured data. We decompose an input graph into low-pass and high-pass frequency coefficients for network training, which then defines a framelet-based graph convolution. The framelet decomposition naturally induces a graph pooling strategy by aggregating the graph feature into low-pass and high-pass spectra, which considers both the feature values and geometry of the graph data and conserves the total information. The graph neural networks with the proposed framelet convolution and pooling achieve state-of-the-art performance in many node and graph prediction tasks. Moreover, we propose shrinkage as a new activation for the framelet convolution, which thresholds high-frequency information at different scales. Compared to ReLU, shrinkage activation improves model performance on denoising and signal compression: noises in both node and structure can be significantly reduced by accurately cutting off the high-pass coefficients from framelet decomposition, and the signal can be compressed to less than half its original size with well-preserved prediction performance.' volume: 139 URL: https://proceedings.mlr.press/v139/zheng21c.html PDF: http://proceedings.mlr.press/v139/zheng21c/zheng21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zheng21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xuebin family: Zheng - given: Bingxin family: Zhou - given: Junbin family: Gao - given: Yuguang family: Wang - given: Pietro family: Lió - given: Ming family: Li - given: Guido family: Montufar editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12761-12771 id: zheng21c issued: date-parts: - 2021 - 7 - 1 firstpage: 12761 lastpage: 12771 published: 2021-07-01 00:00:00 +0000 - title: 'Probabilistic Sequential Shrinking: A Best Arm Identification Algorithm for Stochastic Bandits with Corruptions' abstract: 'We consider a best arm identification (BAI) problem for stochastic bandits with adversarial corruptions in the fixed-budget setting of $T$ steps. We design a novel randomized algorithm, Probabilistic Sequential Shrinking(u) (PSS(u)), which is agnostic to the amount of corruptions. When the amount of corruptions per step (CPS) is below a threshold, PSS(u) identifies the best arm or item with probability tending to 1 as $T \rightarrow \infty$. Otherwise, the optimality gap of the identified item degrades gracefully with the CPS. We argue that such a bifurcation is necessary. 
In PSS(u), the parameter u serves to balance between the optimality gap and success probability. The injection of randomization is shown to be essential to mitigate the impact of corruptions. To demonstrate this, we design two attack strategies that are applicable to any algorithm. We apply one of them to a deterministic analogue of PSS(u) known as Successive Halving (SH) by Karnin et al. (2013). The attack strategy results in a high failure probability for SH, but PSS(u) remains robust. In the absence of corruptions, PSS(2)’s performance guarantee matches SH’s. We show that when the CPS is sufficiently large, no algorithm can achieve a BAI probability tending to 1 as $T \rightarrow \infty$. Numerical experiments corroborate our theoretical findings.' volume: 139 URL: https://proceedings.mlr.press/v139/zhong21a.html PDF: http://proceedings.mlr.press/v139/zhong21a/zhong21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhong21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zixin family: Zhong - given: Wang Chi family: Cheung - given: Vincent family: Tan editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12772-12781 id: zhong21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12772 lastpage: 12781 published: 2021-07-01 00:00:00 +0000 - title: 'Towards Distraction-Robust Active Visual Tracking' abstract: 'In active visual tracking, it is notoriously difficult when distracting objects appear, as distractors often mislead the tracker by occluding the target or bringing a confusing appearance. To address this issue, we propose a mixed cooperative-competitive multi-agent game, where a target and multiple distractors form a collaborative team to play against a tracker and make it fail to follow. Through learning in our game, diverse distracting behaviors of the distractors naturally emerge, thereby exposing the tracker’s weakness, which helps enhance the distraction-robustness of the tracker. For effective learning, we then present a bunch of practical methods, including a reward function for distractors, a cross-modal teacher-student learning strategy, and a recurrent attention mechanism for the tracker. The experimental results show that our tracker performs desired distraction-robust active visual tracking and can be well generalized to unseen environments. We also show that the multi-agent game can be used to adversarially test the robustness of trackers.' volume: 139 URL: https://proceedings.mlr.press/v139/zhong21b.html PDF: http://proceedings.mlr.press/v139/zhong21b/zhong21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhong21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Fangwei family: Zhong - given: Peng family: Sun - given: Wenhan family: Luo - given: Tingyun family: Yan - given: Yizhou family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12782-12792 id: zhong21b issued: date-parts: - 2021 - 7 - 1 firstpage: 12782 lastpage: 12792 published: 2021-07-01 00:00:00 +0000 - title: 'Provably Efficient Reinforcement Learning for Discounted MDPs with Feature Mapping' abstract: 'Modern tasks in reinforcement learning have large state and action spaces. 
To deal with them efficiently, one often uses predefined feature mapping to represent states and actions in a low-dimensional space. In this paper, we study reinforcement learning for discounted Markov Decision Processes (MDPs), where the transition kernel can be parameterized as a linear function of a certain feature mapping. We propose a novel algorithm which makes use of the feature mapping and obtains a $\tilde O(d\sqrt{T}/(1-\gamma)^2)$ regret, where $d$ is the dimension of the feature space, $T$ is the time horizon and $\gamma$ is the discount factor of the MDP. To the best of our knowledge, this is the first polynomial regret bound without accessing a generative model or making strong assumptions such as ergodicity of the MDP. By constructing a special class of MDPs, we also show that for any algorithm, the regret is lower bounded by $\Omega(d\sqrt{T}/(1-\gamma)^{1.5})$. Our upper and lower bound results together suggest that the proposed reinforcement learning algorithm is near-optimal up to a $(1-\gamma)^{-0.5}$ factor.' volume: 139 URL: https://proceedings.mlr.press/v139/zhou21a.html PDF: http://proceedings.mlr.press/v139/zhou21a/zhou21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhou21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dongruo family: Zhou - given: Jiafan family: He - given: Quanquan family: Gu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12793-12802 id: zhou21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12793 lastpage: 12802 published: 2021-07-01 00:00:00 +0000 - title: 'Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation' abstract: 'While deep neural networks provide good performance for a range of challenging tasks, calibration and uncertainty estimation remain major challenges, especially under distribution shift. In this paper, we propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation, calibration, and out-of-distribution robustness with deep networks. Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle, but is computationally intractable to evaluate exactly for all but the simplest of model classes. We propose to use approximate Bayesian inference techniques to produce a tractable approximation to the CNML distribution. Our approach can be combined with any approximate inference algorithm that provides tractable posterior densities over model parameters. We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration when faced with distribution shift.' 
volume: 139 URL: https://proceedings.mlr.press/v139/zhou21b.html PDF: http://proceedings.mlr.press/v139/zhou21b/zhou21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhou21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Aurick family: Zhou - given: Sergey family: Levine editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12803-12812 id: zhou21b issued: date-parts: - 2021 - 7 - 1 firstpage: 12803 lastpage: 12812 published: 2021-07-01 00:00:00 +0000 - title: 'Optimal Estimation of High Dimensional Smooth Additive Function Based on Noisy Observations' abstract: 'Given $\mathbf{x}_j = \boldsymbol{\theta} + \boldsymbol{\epsilon}_j$, $j=1,...,n$ where $\boldsymbol{\theta} \in \mathbb{R}^d$ is an unknown parameter and $\boldsymbol{\epsilon}_j$ are i.i.d. Gaussian noise vectors, we study the estimation of $f(\boldsymbol{\theta})$ for a given smooth function $f:\mathbb{R}^d \rightarrow \mathbb{R}$ equipped with an additive structure. We inherit the idea from a recent work which introduced an effective bias reduction technique through iterative bootstrap and derive a bias-reducing estimator. By establishing its normal approximation results, we show that the proposed estimator can achieve asymptotic normality with a looser constraint on smoothness compared with general smooth functions, due to the additive structure. Such results further imply that the proposed estimator is asymptotically efficient. Both upper and lower bounds on the mean squared error are proved, which show that the proposed estimator is minimax optimal for the smooth class considered. Numerical simulation results are presented to validate our analysis and show the superior performance of the proposed estimator over the plug-in approach in terms of bias reduction and building confidence intervals.' volume: 139 URL: https://proceedings.mlr.press/v139/zhou21c.html PDF: http://proceedings.mlr.press/v139/zhou21c/zhou21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhou21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Fan family: Zhou - given: Ping family: Li editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12813-12823 id: zhou21c issued: date-parts: - 2021 - 7 - 1 firstpage: 12813 lastpage: 12823 published: 2021-07-01 00:00:00 +0000 - title: 'Incentivized Bandit Learning with Self-Reinforcing User Preferences' abstract: 'In this paper, we investigate a new multi-armed bandit (MAB) online learning model that considers real-world phenomena in many recommender systems: (i) the learning agent cannot pull the arms by itself and thus has to offer rewards to users to incentivize arm-pulling indirectly; and (ii) if users with specific arm preferences are well rewarded, they induce a "self-reinforcing" effect in the sense that they will attract more users of similar arm preferences. Besides addressing the tradeoff of exploration and exploitation, another key feature of this new MAB model is to balance reward and incentivizing payment. The goal of the agent is to maximize the total reward over a fixed time horizon $T$ with a low total payment. 
Our contributions in this paper are two-fold: (i) We propose a new MAB model with random arm selection that considers the relationship of users’ self-reinforcing preferences and incentives; and (ii) We leverage the properties of a multi-color Polya urn with nonlinear feedback model to propose two MAB policies termed "At-Least-$n$ Explore-Then-Commit" and "UCB-List". We prove that both policies achieve $O(\log T)$ expected regret with $O(\log T)$ expected payment over a time horizon $T$. We conduct numerical simulations to demonstrate and verify the performances of these two policies and study their robustness under various settings.' volume: 139 URL: https://proceedings.mlr.press/v139/zhou21d.html PDF: http://proceedings.mlr.press/v139/zhou21d/zhou21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhou21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Tianchen family: Zhou - given: Jia family: Liu - given: Chaosheng family: Dong - given: Jingyuan family: Deng editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12824-12834 id: zhou21d issued: date-parts: - 2021 - 7 - 1 firstpage: 12824 lastpage: 12834 published: 2021-07-01 00:00:00 +0000 - title: 'Towards Defending against Adversarial Examples via Attack-Invariant Features' abstract: 'Deep neural networks (DNNs) are vulnerable to adversarial noise. Their adversarial robustness can be improved by exploiting adversarial examples. However, given the continuously evolving attacks, models trained on seen types of adversarial examples generally cannot generalize well to unseen types of adversarial examples. To solve this problem, in this paper, we propose to remove adversarial noise by learning generalizable invariant features across attacks which maintain semantic classification information. Specifically, we introduce an adversarial feature learning mechanism to disentangle invariant features from adversarial noise. A normalization term has been proposed in the encoded space of the attack-invariant features to address the bias issue between the seen and unseen types of attacks. Empirical evaluations demonstrate that our method could provide better protection in comparison to previous state-of-the-art approaches, especially against unseen types of attacks and adaptive attacks.' volume: 139 URL: https://proceedings.mlr.press/v139/zhou21e.html PDF: http://proceedings.mlr.press/v139/zhou21e/zhou21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhou21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dawei family: Zhou - given: Tongliang family: Liu - given: Bo family: Han - given: Nannan family: Wang - given: Chunlei family: Peng - given: Xinbo family: Gao editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12835-12845 id: zhou21e issued: date-parts: - 2021 - 7 - 1 firstpage: 12835 lastpage: 12845 published: 2021-07-01 00:00:00 +0000 - title: 'Asymmetric Loss Functions for Learning with Noisy Labels' abstract: 'Robust loss functions are essential for training deep neural networks with better generalization power in the presence of noisy labels. Symmetric loss functions are confirmed to be robust to label noise. However, the symmetric condition is overly restrictive. 
In this work, we propose a new class of loss functions, namely asymmetric loss functions, which are robust to learning from noisy labels for arbitrary noise type. Subsequently, we investigate general theoretical properties of asymmetric loss functions, including classification-calibration, excess risk bound, and noise-tolerance. Meanwhile, we introduce the asymmetry ratio to measure the asymmetry of a loss function, and the empirical results show that a higher ratio will provide better robustness. Moreover, we modify several common loss functions, and establish the necessary and sufficient conditions for them to be asymmetric. Experiments on benchmark datasets demonstrate that asymmetric loss functions can outperform state-of-the-art methods.' volume: 139 URL: https://proceedings.mlr.press/v139/zhou21f.html PDF: http://proceedings.mlr.press/v139/zhou21f/zhou21f.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhou21f.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xiong family: Zhou - given: Xianming family: Liu - given: Junjun family: Jiang - given: Xin family: Gao - given: Xiangyang family: Ji editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12846-12856 id: zhou21f issued: date-parts: - 2021 - 7 - 1 firstpage: 12846 lastpage: 12856 published: 2021-07-01 00:00:00 +0000 - title: 'Examining and Combating Spurious Features under Distribution Shift' abstract: 'A central goal of machine learning is to learn robust representations that capture the fundamental relationship between inputs and output labels. However, minimizing training errors over finite or biased datasets results in models latching on to spurious correlations between the training input/output pairs that are not fundamental to the problem at hand. In this paper, we define and analyze robust and spurious representations using the information-theoretic concept of minimal sufficient statistics. We prove that even when there is only bias of the input distribution (i.e. covariate shift), models can still pick up spurious features from their training data. Group distributionally robust optimization (DRO) provides an effective tool to alleviate covariate shift by minimizing the worst-case training losses over a set of pre-defined groups. Inspired by our analysis, we demonstrate that group DRO can fail when groups do not directly account for various spurious correlations that occur in the data. To address this, we further propose to minimize the worst-case losses over a more flexible set of distributions that are defined on the joint distribution of groups and instances, instead of treating each group as a whole at optimization time. Through extensive experiments on one image and two language tasks, we show that our model is significantly more robust than comparable baselines under various partitions.' 
volume: 139 URL: https://proceedings.mlr.press/v139/zhou21g.html PDF: http://proceedings.mlr.press/v139/zhou21g/zhou21g.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhou21g.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chunting family: Zhou - given: Xuezhe family: Ma - given: Paul family: Michel - given: Graham family: Neubig editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12857-12867 id: zhou21g issued: date-parts: - 2021 - 7 - 1 firstpage: 12857 lastpage: 12867 published: 2021-07-01 00:00:00 +0000 - title: 'Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm' abstract: 'Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels (regularized by the $\ell_0$ norm). Recent efforts combine this with an $\ell_\infty$ imperceptibility constraint on the perturbation magnitudes. The resultant sparse and imperceptible attacks are practically relevant, and indicate an even higher vulnerability of DNNs than we usually imagined. However, such attacks are more challenging to generate due to the optimization difficulty of coupling the $\ell_0$ regularizer and box constraints with a non-convex objective. In this paper, we address this challenge by proposing a homotopy algorithm to jointly tackle the sparsity and the perturbation bound in one unified framework. In each iteration, the main step of our algorithm is to optimize an $\ell_0$-regularized adversarial loss by leveraging the nonmonotone Accelerated Proximal Gradient Method (nmAPG) for nonconvex programming; it is followed by an $\ell_0$ change control step and an optional post-attack step designed to escape bad local minima. We also extend the algorithm to handle the structural sparsity regularizer. We extensively examine the effectiveness of our proposed \textbf{homotopy attack} for both targeted and non-targeted attack scenarios, on CIFAR-10 and ImageNet datasets. Compared to state-of-the-art methods, our homotopy attack leads to significantly fewer perturbations, e.g., 42.91% fewer on CIFAR-10 and 75.03% fewer on ImageNet (average case, targeted attack), at similar maximal perturbation magnitudes, while still achieving 100% attack success rates. Our code is available at: {\small\url{https://github.com/VITA-Group/SparseADV_Homotopy}}.' volume: 139 URL: https://proceedings.mlr.press/v139/zhu21a.html PDF: http://proceedings.mlr.press/v139/zhu21a/zhu21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhu21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Mingkang family: Zhu - given: Tianlong family: Chen - given: Zhangyang family: Wang editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12868-12877 id: zhu21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12868 lastpage: 12877 published: 2021-07-01 00:00:00 +0000 - title: 'Data-Free Knowledge Distillation for Heterogeneous Federated Learning' abstract: 'Federated Learning (FL) is a decentralized machine-learning paradigm, in which a global server iteratively averages the model parameters of local users without accessing their data. User heterogeneity has imposed significant challenges to FL, which can incur drifted global models that are slow to converge. 
Knowledge Distillation has recently emerged to tackle this issue, by refining the server model using aggregated knowledge from heterogeneous users, other than directly averaging their model parameters. This approach, however, depends on a proxy dataset, making it impractical unless such a prerequisite is satisfied. Moreover, the ensemble knowledge is not fully utilized to guide local model learning, which may in turn affect the quality of the aggregated model. Inspired by the prior art, we propose a data-free knowledge distillation approach to address heterogeneous FL, where the server learns a lightweight generator to ensemble user information in a data-free manner, which is then broadcasted to users, regulating local training using the learned knowledge as an inductive bias. Empirical studies powered by theoretical implications show that our approach facilitates FL with better generalization performance using fewer communication rounds, compared with the state-of-the-art.' volume: 139 URL: https://proceedings.mlr.press/v139/zhu21b.html PDF: http://proceedings.mlr.press/v139/zhu21b/zhu21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhu21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zhuangdi family: Zhu - given: Junyuan family: Hong - given: Jiayu family: Zhou editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12878-12889 id: zhu21b issued: date-parts: - 2021 - 7 - 1 firstpage: 12878 lastpage: 12889 published: 2021-07-01 00:00:00 +0000 - title: 'Spectral vertex sparsifiers and pair-wise spanners over distributed graphs' abstract: 'Graph sparsification is a powerful tool to approximate an arbitrary graph and has been used in machine learning over graphs. As real-world networks are becoming very large and naturally distributed, distributed graph sparsification has drawn considerable attention. In this work, we design communication-efficient distributed algorithms for constructing spectral vertex sparsifiers, which closely preserve effective resistance distances on a subset of vertices of interest in the original graphs, under the well-established message passing communication model. We prove that the communication cost approximates the lower bound with only a small gap. We further provide algorithms for constructing pair-wise spanners which approximate the shortest distances between each pair of vertices in a target set, instead of all pairs, and incur communication costs that are much smaller than those of existing algorithms in the message passing model. Experiments are performed to validate the communication efficiency of the proposed algorithms under the guarantee that the constructed sparsifiers have a good approximation quality.' 
volume: 139 URL: https://proceedings.mlr.press/v139/zhu21c.html PDF: http://proceedings.mlr.press/v139/zhu21c/zhu21c.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhu21c.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Chunjiang family: Zhu - given: Qinqing family: Liu - given: Jinbo family: Bi editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12890-12900 id: zhu21c issued: date-parts: - 2021 - 7 - 1 firstpage: 12890 lastpage: 12900 published: 2021-07-01 00:00:00 +0000 - title: 'Few-shot Language Coordination by Modeling Theory of Mind' abstract: 'No man is an island. Humans develop the ability to communicate with a large community by coordinating with different interlocutors within short conversations. This ability is largely understudied in research on building neural communicative language agents. We study the task of few-shot language coordination: agents quickly adapting to their conversational partners’ language abilities. Different from current communicative agents trained with self-play, we investigate this more general paradigm by requiring the lead agent to coordinate with a population of agents, each of whom has different linguistic abilities. This leads to a general agent able to quickly adapt to communicating with unseen agents in the population. Unlike prior work, success here requires the ability to model the partner’s beliefs, a vital component of human communication. Drawing inspiration from the study of theory-of-mind (ToM; Premack & Woodruff (1978)), we study the effect of the speaker explicitly modeling the listener’s mental state. By learning to communicate with a population, the speakers, as shown in our experiments, acquire the ability to predict the reactions of their partner to various messages on the fly. The speaker’s predictions of future actions help it generate the best instructions to maximize the communicative goal while accounting for message costs. To examine our hypothesis that the instructions generated with ToM modeling yield better communication performance, we employ our agents in both a referential game and a language navigation task. Positive results from our experiments also hint at the importance of explicitly modeling language acquisition as a socio-pragmatic process.' volume: 139 URL: https://proceedings.mlr.press/v139/zhu21d.html PDF: http://proceedings.mlr.press/v139/zhu21d/zhu21d.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhu21d.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Hao family: Zhu - given: Graham family: Neubig - given: Yonatan family: Bisk editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12901-12911 id: zhu21d issued: date-parts: - 2021 - 7 - 1 firstpage: 12901 lastpage: 12911 published: 2021-07-01 00:00:00 +0000 - title: 'Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels' abstract: 'The label noise transition matrix, characterizing the probabilities of a training instance being wrongly annotated, is crucial to designing popular solutions to learning with noisy labels. Existing works heavily rely on finding “anchor points” or their approximations, defined as instances belonging to a particular class almost surely. 
Nonetheless, finding anchor points remains a non-trivial task, and the estimation accuracy is also often throttled by the number of available anchor points. In this paper, we propose an alternative approach to this task. Our main contribution is the discovery of an efficient estimation procedure based on a clusterability condition. We prove that with clusterable representations of features, using up to third-order consensuses of noisy labels among neighbor representations is sufficient to estimate a unique transition matrix. Compared with methods using anchor points, our approach uses substantially more instances and benefits from a much better sample complexity. We demonstrate the estimation accuracy and advantages of our estimates using both synthetic noisy labels (on CIFAR-10/100) and real human-level noisy labels (on Clothing1M and our self-collected human-annotated CIFAR-10). Our code and human-level noisy CIFAR-10 labels are available at https://github.com/UCSC-REAL/HOC.' volume: 139 URL: https://proceedings.mlr.press/v139/zhu21e.html PDF: http://proceedings.mlr.press/v139/zhu21e/zhu21e.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhu21e.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Zhaowei family: Zhu - given: Yiwen family: Song - given: Yang family: Liu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12912-12923 id: zhu21e issued: date-parts: - 2021 - 7 - 1 firstpage: 12912 lastpage: 12923 published: 2021-07-01 00:00:00 +0000 - title: 'Commutative Lie Group VAE for Disentanglement Learning' abstract: 'We view disentanglement learning as discovering an underlying structure that equivariantly reflects the factorized variations shown in data. Traditionally, such a structure is fixed to be a vector space with data variations represented by translations along individual latent dimensions. We argue that this simple structure is suboptimal since it requires the model to learn to discard the properties (e.g. different scales of changes, different levels of abstractness) of data variations, which is extra work beyond equivariance learning. Instead, we propose to encode the data variations with groups, a structure that not only can equivariantly represent variations, but can also be adaptively optimized to preserve the properties of data variations. Since it is hard to conduct training on group structures, we focus on Lie groups and adopt a parameterization using the Lie algebra. Based on the parameterization, some disentanglement learning constraints are naturally derived. A simple model named Commutative Lie Group VAE is introduced to realize group-based disentanglement learning. Experiments show that our model can effectively learn disentangled representations without supervision, and can achieve state-of-the-art performance without extra constraints.' 
volume: 139 URL: https://proceedings.mlr.press/v139/zhu21f.html PDF: http://proceedings.mlr.press/v139/zhu21f/zhu21f.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhu21f.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Xinqi family: Zhu - given: Chang family: Xu - given: Dacheng family: Tao editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12924-12934 id: zhu21f issued: date-parts: - 2021 - 7 - 1 firstpage: 12924 lastpage: 12934 published: 2021-07-01 00:00:00 +0000 - title: 'Accumulated Decoupled Learning with Gradient Staleness Mitigation for Convolutional Neural Networks' abstract: 'Gradient staleness is a major side effect in decoupled learning when training convolutional neural networks asynchronously. Existing methods that ignore this effect might result in reduced generalization and even divergence. In this paper, we propose an accumulated decoupled learning (ADL), which includes a module-wise gradient accumulation in order to mitigate the gradient staleness. Unlike prior arts ignoring the gradient staleness, we quantify the staleness in such a way that its mitigation can be quantitatively visualized. As a new learning scheme, the proposed ADL is theoretically shown to converge to critical points in spite of its asynchronism. Extensive experiments on CIFAR-10 and ImageNet datasets are conducted, demonstrating that ADL gives promising generalization results while the state-of-the-art methods experience reduced generalization and divergence. In addition, our ADL is shown to have the fastest training speed among the compared methods.' volume: 139 URL: https://proceedings.mlr.press/v139/zhuang21a.html PDF: http://proceedings.mlr.press/v139/zhuang21a/zhuang21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zhuang21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Huiping family: Zhuang - given: Zhenyu family: Weng - given: Fulin family: Luo - given: Toh family: Kar-Ann - given: Haizhou family: Li - given: Zhiping family: Lin editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12935-12944 id: zhuang21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12935 lastpage: 12944 published: 2021-07-01 00:00:00 +0000 - title: 'Demystifying Inductive Biases for (Beta-)VAE Based Architectures' abstract: 'The performance of Beta-Variational-Autoencoders and their variants on learning semantically meaningful, disentangled representations is unparalleled. On the other hand, there are theoretical arguments suggesting the impossibility of unsupervised disentanglement. In this work, we shed light on the inductive bias responsible for the success of VAE-based architectures. We show that in classical datasets the structure of variance, induced by the generating factors, is conveniently aligned with the latent directions fostered by the VAE objective. This builds the pivotal bias on which the disentangling abilities of VAEs rely. By small, elaborate perturbations of existing datasets, we hide the convenient correlation structure that is easily exploited by a variety of architectures. 
To demonstrate this, we construct modified versions of standard datasets in which (i) the generative factors are perfectly preserved; (ii) each image undergoes a mild transformation causing a small change of variance; (iii) the leading VAE-based disentanglement architectures fail to produce disentangled representations whilst the performance of a non-variational method remains unchanged.' volume: 139 URL: https://proceedings.mlr.press/v139/zietlow21a.html PDF: http://proceedings.mlr.press/v139/zietlow21a/zietlow21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zietlow21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Dominik family: Zietlow - given: Michal family: Rolinek - given: Georg family: Martius editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12945-12954 id: zietlow21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12945 lastpage: 12954 published: 2021-07-01 00:00:00 +0000 - title: 'Recovering AES Keys with a Deep Cold Boot Attack' abstract: 'Cold boot attacks inspect the corrupted random access memory soon after the power has been shut down. While most of the bits have been corrupted, many bits, at random locations, have not. Since the keys in many encryption schemes are being expanded in memory into longer keys with fixed redundancies, the keys can often be restored. In this work we combine a deep error correcting code technique with a modified SAT solver scheme in order to apply the attack to AES keys. Even though AES consists of Rijndael SBOX elements that are specifically designed to be resistant to linear and differential cryptanalysis, our method provides a novel formalization of the AES key scheduling as a computational graph, which is implemented by a neural message passing network. Our results show that our method outperforms state-of-the-art attack methods by a very large margin.' volume: 139 URL: https://proceedings.mlr.press/v139/zimerman21a.html PDF: http://proceedings.mlr.press/v139/zimerman21a/zimerman21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zimerman21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Itamar family: Zimerman - given: Eliya family: Nachmani - given: Lior family: Wolf editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12955-12966 id: zimerman21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12955 lastpage: 12966 published: 2021-07-01 00:00:00 +0000 - title: 'Learning Fair Policies in Decentralized Cooperative Multi-Agent Reinforcement Learning' abstract: 'We consider the problem of learning fair policies in (deep) cooperative multi-agent reinforcement learning (MARL). We formalize it in a principled way as the problem of optimizing a welfare function that explicitly encodes two important aspects of fairness: efficiency and equity. We provide a theoretical analysis of the convergence of policy gradient for this problem. As a solution method, we propose a novel neural network architecture, which is composed of two sub-networks specifically designed to take into account these two aspects of fairness. In experiments, we demonstrate the importance of the two sub-networks for fair optimization. 
Our overall approach is general as it can accommodate any (sub)differentiable welfare function. Therefore, it is compatible with various notions of fairness that have been proposed in the literature (e.g., lexicographic maximin, generalized Gini social welfare function, proportional fairness). Our method is generic and can be implemented in various MARL settings: centralized training and decentralized execution, or fully decentralized. Finally, we experimentally validate our approach in various domains and show that it can perform much better than previous methods, both in terms of efficiency and equity.' volume: 139 URL: https://proceedings.mlr.press/v139/zimmer21a.html PDF: http://proceedings.mlr.press/v139/zimmer21a/zimmer21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zimmer21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Matthieu family: Zimmer - given: Claire family: Glanois - given: Umer family: Siddique - given: Paul family: Weng editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12967-12978 id: zimmer21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12967 lastpage: 12978 published: 2021-07-01 00:00:00 +0000 - title: 'Contrastive Learning Inverts the Data Generating Process' abstract: 'Contrastive learning has recently seen tremendous success in self-supervised learning. So far, however, it is largely unclear why the learned representations generalize so effectively to a large variety of downstream tasks. We here prove that feedforward models trained with objectives belonging to the commonly used InfoNCE family learn to implicitly invert the underlying generative model of the observed data. While the proofs make certain statistical assumptions about the generative model, we observe empirically that our findings hold even if these assumptions are severely violated. Our theory highlights a fundamental connection between contrastive learning, generative modeling, and nonlinear independent component analysis, thereby furthering our understanding of the learned representations as well as providing a theoretical foundation to derive more effective contrastive losses.' volume: 139 URL: https://proceedings.mlr.press/v139/zimmermann21a.html PDF: http://proceedings.mlr.press/v139/zimmermann21a/zimmermann21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zimmermann21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Roland S. family: Zimmermann - given: Yash family: Sharma - given: Steffen family: Schneider - given: Matthias family: Bethge - given: Wieland family: Brendel editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12979-12990 id: zimmermann21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12979 lastpage: 12990 published: 2021-07-01 00:00:00 +0000 - title: 'Exploration in Approximate Hyper-State Space for Meta Reinforcement Learning' abstract: 'To rapidly learn a new task, it is often essential for agents to explore efficiently - especially when performance matters from the first timestep. One way to learn such behaviour is via meta-learning. Many existing methods however rely on dense rewards for meta-training, and can fail catastrophically if the rewards are sparse. 
Without a suitable reward signal, the need for exploration during meta-training is exacerbated. To address this, we propose HyperX, which uses novel reward bonuses for meta-training to explore in approximate hyper-state space (where hyper-states represent the environment state and the agent’s task belief). We show empirically that HyperX meta-learns better task-exploration and adapts more successfully to new tasks than existing methods.' volume: 139 URL: https://proceedings.mlr.press/v139/zintgraf21a.html PDF: http://proceedings.mlr.press/v139/zintgraf21a/zintgraf21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zintgraf21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Luisa M family: Zintgraf - given: Leo family: Feng - given: Cong family: Lu - given: Maximilian family: Igl - given: Kristian family: Hartikainen - given: Katja family: Hofmann - given: Shimon family: Whiteson editor: - given: Marina family: Meila - given: Tong family: Zhang page: 12991-13001 id: zintgraf21a issued: date-parts: - 2021 - 7 - 1 firstpage: 12991 lastpage: 13001 published: 2021-07-01 00:00:00 +0000 - title: 'Provable Robustness of Adversarial Training for Learning Halfspaces with Noise' abstract: 'We analyze the properties of adversarial training for learning adversarially robust halfspaces in the presence of agnostic label noise. Denoting $\mathsf{OPT}_{p,r}$ as the best classification error achieved by a halfspace that is robust to perturbations of $\ell^{p}$ balls of radius $r$, we show that adversarial training on the standard binary cross-entropy loss yields adversarially robust halfspaces up to classification error $\tilde O(\sqrt{\mathsf{OPT}_{2,r}})$ for $p=2$, and $\tilde O(d^{1/4} \sqrt{\mathsf{OPT}_{\infty, r}})$ when $p=\infty$. Our results hold for distributions satisfying anti-concentration properties enjoyed by log-concave isotropic distributions, among others. We additionally show that if one instead uses a non-convex sigmoidal loss, adversarial training yields halfspaces with an improved robust classification error of $O(\mathsf{OPT}_{2,r})$ for $p=2$, and $O(d^{1/4} \mathsf{OPT}_{\infty, r})$ when $p=\infty$. To the best of our knowledge, this is the first work showing that adversarial training provably yields robust classifiers in the presence of noise.' volume: 139 URL: https://proceedings.mlr.press/v139/zou21a.html PDF: http://proceedings.mlr.press/v139/zou21a/zou21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zou21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Difan family: Zou - given: Spencer family: Frei - given: Quanquan family: Gu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 13002-13011 id: zou21a issued: date-parts: - 2021 - 7 - 1 firstpage: 13002 lastpage: 13011 published: 2021-07-01 00:00:00 +0000 - title: 'On the Convergence of Hamiltonian Monte Carlo with Stochastic Gradients' abstract: 'Hamiltonian Monte Carlo (HMC), built on Hamilton’s equations, has seen great success in sampling from high-dimensional posterior distributions. However, it also suffers from computational inefficiency, especially for large training datasets. 
One common idea to overcome this computational bottleneck is to use stochastic gradients, which only query a mini-batch of training data in each iteration. However, unlike the extensive studies on the convergence analysis of HMC using full gradients, few works focus on establishing the convergence guarantees of stochastic gradient HMC algorithms. In this paper, we propose a general framework for proving the convergence rate of HMC with stochastic gradient estimators, for sampling from strongly log-concave and log-smooth target distributions. We show that the convergence to the target distribution in $2$-Wasserstein distance can be guaranteed as long as the stochastic gradient estimator is unbiased and its variance is upper bounded along the algorithm trajectory. We further apply the proposed framework to analyze the convergence rates of HMC with four standard stochastic gradient estimators: mini-batch stochastic gradient (SG), stochastic variance reduced gradient (SVRG), stochastic average gradient (SAGA), and control variate gradient (CVG). Theoretical results explain the inefficiency of mini-batch SG, and suggest that SVRG and SAGA perform better in tasks with high-precision requirements, while CVG performs better for large datasets. Experimental results verify our theoretical findings.' volume: 139 URL: https://proceedings.mlr.press/v139/zou21b.html PDF: http://proceedings.mlr.press/v139/zou21b/zou21b.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zou21b.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Difan family: Zou - given: Quanquan family: Gu editor: - given: Marina family: Meila - given: Tong family: Zhang page: 13012-13022 id: zou21b issued: date-parts: - 2021 - 7 - 1 firstpage: 13012 lastpage: 13022 published: 2021-07-01 00:00:00 +0000 - title: 'A Functional Perspective on Learning Symmetric Functions with Neural Networks' abstract: 'Symmetric functions, which take as input an unordered, fixed-size set, are known to be universally representable by neural networks that enforce permutation invariance. These architectures only give guarantees for fixed input sizes, yet in many practical applications, including point clouds and particle physics, a relevant notion of generalization should include varying the input size. In this work we treat symmetric functions (of any size) as functions over probability measures, and study the learning and representation of neural networks defined on measures. By focusing on shallow architectures, we establish approximation and generalization bounds under different choices of regularization (such as RKHS and variation norms) that capture a hierarchy of functional spaces with increasing degree of non-linear learning. The resulting models can be learned efficiently and enjoy generalization guarantees that extend across input sizes, as we verify empirically.' 
volume: 139 URL: https://proceedings.mlr.press/v139/zweig21a.html PDF: http://proceedings.mlr.press/v139/zweig21a/zweig21a.pdf edit: https://github.com/mlresearch//v139/edit/gh-pages/_posts/2021-07-01-zweig21a.md series: 'Proceedings of Machine Learning Research' container-title: 'Proceedings of the 38th International Conference on Machine Learning' publisher: 'PMLR' author: - given: Aaron family: Zweig - given: Joan family: Bruna editor: - given: Marina family: Meila - given: Tong family: Zhang page: 13023-13032 id: zweig21a issued: date-parts: - 2021 - 7 - 1 firstpage: 13023 lastpage: 13032 published: 2021-07-01 00:00:00 +0000
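As a purely illustrative aside to the final abstract above (zweig21a): the permutation-invariant architectures it refers to are commonly written in a pooled, per-element form, with a feature map applied to each set element and averaged over the set, so the output depends neither on the ordering nor on the size of the input. The display below is a generic sketch of that standard form and is not claimed to be the exact parameterization studied in the paper; the symbols $\rho$ and $\phi$ are generic readout and per-element maps introduced only for illustration.
% Generic permutation-invariant sketch (illustrative only, not the paper's parameterization).
\[
  f(\{x_1,\dots,x_n\})
  \;=\; \rho\!\Bigl(\tfrac{1}{n}\textstyle\sum_{i=1}^{n}\phi(x_i)\Bigr)
  \;=\; \rho\bigl(\mathbb{E}_{x\sim\mu_n}[\phi(x)]\bigr),
  \qquad
  \mu_n \;=\; \tfrac{1}{n}\textstyle\sum_{i=1}^{n}\delta_{x_i}.
\]
Writing the pooled term as an expectation under the empirical measure $\mu_n$ is what lets the same network accept sets of any size, which matches the "functions over probability measures" view taken in the abstract.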