Generalized Multi-view Shared Subspace Learning using View Bootstrapping

Published on Aug 5, 2021 in IEEE Transactions on Signal Processing (impact factor 5.028)
DOI: 10.1109/TSP.2021.3102751
Krishna Somandepalli (University of Southern California), Estimated H-index: 10
Shrikanth S. Narayanan (University of Southern California), Estimated H-index: 91
Abstract
A key objective in multiview learning is to model the information common to multiple parallel views of a class of objects/events to improve downstream tasks such as classification and clustering. In this context, two open research challenges remain: achieving scalability (how can we incorporate information from hundreds of views per event into a model?) and being view-agnostic (how can we learn robust multiview representations without knowledge of how these views are acquired?). In this work, we study a neural method based on multiview correlation that captures the information shared across a large number of views by subsampling them in a view-agnostic manner during training. We analyze the error of this bootstrapped multiview correlation objective using matrix concentration theory to provide an upper bound on the number of views to subsample for a given embedding dimension. Our experiments on a diverse set of audio and visual tasks—multi-channel acoustic activity classification, spoken word recognition, 3D object classification, and pose-invariant face recognition—demonstrate the robustness of view bootstrapping to model a large number of views. Results and analysis underscore the applicability of our method for a view-agnostic learning setting.
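To make the view-bootstrapping idea concrete, the sketch below illustrates one training-style step under stated assumptions: a set of views is subsampled uniformly at random (view-agnostic), each subsampled view is passed through a shared embedding, and a simple inter-view correlation score is computed. The function names (`multiview_correlation`, `bootstrapped_step`), the linear embedding, and the ratio-of-covariances score are illustrative choices, not the exact objective or architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def center(z):
    """Column-center an (n_samples, d) embedding matrix."""
    return z - z.mean(axis=0, keepdims=True)

def multiview_correlation(embeddings, eps=1e-6):
    """Toy multiview-correlation score: summed between-view covariance
    divided by summed within-view covariance. Higher values mean the
    subsampled views agree more in the shared subspace. This is an
    illustrative surrogate for the paper's objective."""
    zs = [center(z) for z in embeddings]
    within = sum(np.trace(z.T @ z) for z in zs)
    between = 0.0
    for i in range(len(zs)):
        for j in range(len(zs)):
            if i != j:
                between += np.trace(zs[i].T @ zs[j])
    return between / (within + eps)

def bootstrapped_step(views, embed, num_subsample=4):
    """One view-bootstrapping step: subsample a few views uniformly at
    random (no knowledge of how views were acquired), embed each with
    the shared map, and score their correlation."""
    idx = rng.choice(len(views), size=num_subsample, replace=False)
    embeddings = [embed(views[i]) for i in idx]
    return multiview_correlation(embeddings)

# Toy usage: 100 hypothetical views of 64 samples x 32 features,
# embedded to 8 dimensions with a shared linear map.
views = [rng.standard_normal((64, 32)) for _ in range(100)]
W = rng.standard_normal((32, 8))
score = bootstrapped_step(views, embed=lambda x: x @ W, num_subsample=4)
print(f"bootstrapped multiview correlation score: {score:.4f}")
```

In an actual training loop, the embedding would be a neural network and the score would be maximized by gradient ascent over many bootstrapped batches; the paper's concentration-theory bound speaks to how many views to subsample per step for a given embedding dimension.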