The Perception-Action Loop in a Predictive Agent.

Published on Jan 1, 2020 in Cognitive Science
Murchana Baruah, Bonny Banerjee
References (23)
Monitoring using sensors is ubiquitous in our environment. In this paper, a state estimation model is proposed for continuous activity monitoring from multimodal and heterogeneous sensor data. Each sensor is modeled as an independent agent in the predictive coding framework. It can sample its environment, communicate with other agents, and adapt its internal model to its environment in an unsupervised manner. Using controlled experiments, we show that limitations of each sensor, such as inference...
3 Citations
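
The snippet above describes each sensor as a predictive-coding agent that samples its environment and adapts its internal model in an unsupervised manner, but gives no equations. A minimal sketch of that general idea, assuming a linear generative model and gradient-based inference (both assumptions for illustration, not the paper's exact formulation):

```python
import numpy as np

class PredictiveCodingAgent:
    """Toy single-sensor agent: infers a latent state by minimizing its
    sensory prediction error, then slowly adapts its generative model.
    Illustrative only; the linear model and learning rates are assumptions."""

    def __init__(self, obs_dim, latent_dim, lr_state=0.1, lr_model=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(obs_dim, latent_dim))  # generative weights
        self.z = np.zeros(latent_dim)                               # internal state
        self.lr_state, self.lr_model = lr_state, lr_model

    def step(self, x, n_inference_iters=20):
        for _ in range(n_inference_iters):
            error = x - self.W @ self.z                  # sensory prediction error
            self.z += self.lr_state * self.W.T @ error   # perception: update state
        error = x - self.W @ self.z
        self.W += self.lr_model * np.outer(error, self.z)  # learning: adapt model
        return error                                        # residual, e.g. for monitoring

# Two agents monitoring different (synthetic) modalities of different dimensions
agents = [PredictiveCodingAgent(obs_dim=8, latent_dim=3),
          PredictiveCodingAgent(obs_dim=4, latent_dim=3)]
for t in range(100):
    for agent, dim in zip(agents, (8, 4)):
        agent.step(np.sin(t / 10.0) * np.ones(dim))
```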
Jun 1, 2020 in CVPR (Computer Vision and Pattern Recognition)
Murchana Baruah, Bonny Banerjee (University of Memphis)
2 Citations
Feb 1, 2018 in NeurIPS (Neural Information Processing Systems)
Mike Wu (Stanford University), Noah D. Goodman (Stanford University)
Multiple modalities often co-occur when describing natural phenomena. Learning a joint representation of these modalities should yield deeper and more useful representations. Previous generative approaches to multi-modal input either do not learn a joint distribution or require additional computation to handle missing data. Here, we introduce a multimodal variational autoencoder (MVAE) that uses a product-of-experts inference network and a sub-sampled training paradigm to solve the multi-modal i...
88 Citations
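
The product-of-experts inference network is the mechanism the MVAE abstract credits for combining modalities and handling missing ones. A small sketch of how Gaussian experts are typically multiplied in such a posterior; the unit-Gaussian prior expert and variable names follow the standard formulation and are assumptions, not taken verbatim from the paper:

```python
import numpy as np

def product_of_experts(mus, logvars):
    """Combine Gaussian experts q_i(z|x_i) = N(mu_i, var_i) into one Gaussian:
    precisions add, and the mean is the precision-weighted average of means.
    A unit-Gaussian 'prior expert' is always included, so a missing modality
    is handled by simply omitting its expert."""
    mus = np.vstack([np.zeros_like(mus[0])] + list(mus))          # prior expert: N(0, I)
    logvars = np.vstack([np.zeros_like(logvars[0])] + list(logvars))
    precisions = np.exp(-logvars)                                 # 1 / var_i
    var = 1.0 / precisions.sum(axis=0)
    mu = var * (precisions * mus).sum(axis=0)
    return mu, np.log(var)

# Two observed modalities, each providing a 4-dimensional Gaussian posterior
mu_joint, logvar_joint = product_of_experts(
    mus=[np.array([0.5, 0.0, 1.0, -1.0]), np.array([0.3, 0.1, 0.8, -0.9])],
    logvars=[np.zeros(4), np.zeros(4) - 1.0],
)
print(mu_joint, logvar_joint)
```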
Shamima Najnin (University of Memphis), Bonny Banerjee (University of Memphis)
Predictive coding has been hypothesized as a universal principle guiding the operation in different brain areas. In this paper, a predictive coding framework for a developmental agent with perception (audio), action (vocalization), and learning capabilities is proposed. The agent learns concurrently to plan optimally and the associations between sensory and motor parameters, by minimizing the sensory prediction error in an unsupervised manner. The proposed agent is solely driven by sens...
11 Citations
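
The developmental agent above learns sensory-motor associations and plans by minimizing sensory prediction error. A toy sketch of that loop, with a hypothetical linear motor-to-audio mapping standing in for vocalization; everything below is illustrative and far simpler than the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mapping from a 2-D motor (vocalization) parameter to a 3-D
# sensory (audio) feature; the agent never sees this function directly.
def environment(motor):
    return np.array([0.8 * motor[0], -0.5 * motor[1], motor[0] + motor[1]])

# 1) Babbling: try random motor commands, record the resulting sounds, and fit
#    a linear sensory-motor association by least squares (no labels, only the
#    agent's own observations).
motors = rng.uniform(-1.0, 1.0, size=(200, 2))
sounds = np.array([environment(m) for m in motors])
W, *_ = np.linalg.lstsq(motors, sounds, rcond=None)     # forward model: motor -> sound

# 2) Planning as prediction-error minimization: to imitate a heard target sound,
#    pick the candidate motor command whose predicted sound is closest to it.
target = environment(np.array([0.4, -0.2]))
candidates = rng.uniform(-1.0, 1.0, size=(500, 2))
errors = np.linalg.norm(candidates @ W - target, axis=1)
best = candidates[np.argmin(errors)]
print("chosen motor command:", best)   # a command whose predicted sound matches the target
```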
Feb 17, 2017 in CVPR (Computer Vision and Pattern Recognition)
Gregory Cohen (University of Sydney), Saeed Afshar (University of Sydney), …, André van Schaik (University of Sydney) (4 authors)
The MNIST dataset has become a standard benchmark for learning, classification and computer vision systems. Contributing to its widespread adoption are the understandable and intuitive nature of the task, its relatively small size and storage requirements and the accessibility and ease-of-use of the database itself. The MNIST database was derived from a larger dataset known as the NIST Special Database 19 which contains digits, uppercase and lowercase handwritten letters. This paper introduces a...
291 Citations
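
For readers who want to try the dataset this reference introduces (EMNIST, derived from NIST Special Database 19), one common way to load it is through torchvision; the split names below follow the dataset's published splits, and the torchvision dependency is an assumption of this sketch:

```python
# Requires: pip install torch torchvision
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

# EMNIST ships several splits derived from NIST Special Database 19;
# 'letters' has 26 letter classes, 'byclass' keeps digits plus upper/lowercase.
train_set = datasets.EMNIST(
    root="./data", split="letters", train=True, download=True,
    transform=transforms.ToTensor(),
)
loader = DataLoader(train_set, batch_size=64, shuffle=True)
images, labels = next(iter(loader))
print(images.shape, labels.shape)   # e.g. torch.Size([64, 1, 28, 28]) torch.Size([64])
```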
Jia Han (Shanghai University of Sport), Gordon Waddington (University of Canberra), …, Yu Liu (Shanghai University of Sport) (5 authors)
To control movement, the brain has to integrate proprioceptive information from a variety of mechanoreceptors. The role of proprioception in daily activities, exercise, and sports has been extensively investigated, using different techniques, yet the proprioceptive mechanisms underlying human movement control are still unclear. In the current work we have reviewed understanding of proprioception and the three testing methods: threshold to detection of passive motion, joint position repr...
176 Citations
Jayanta K. Dutta (University of Memphis), Bonny Banerjee, Chandan K. Reddy (Wayne State University) (3 authors)
Outlier detection has been an active area of research for a few decades. We propose a new definition of outlier that is useful for high-dimensional data. According to this definition, given a dictionary of atoms learned using the sparse coding objective, the outlierness of a data point depends jointly on two factors: the frequency of each atom in reconstructing all data points (or its negative log activity ratio, NLAR) and the strength by which it is used in reconstructing the current point. A R...
12 Citations
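
The outlier definition quoted above combines each atom's negative log activity ratio (NLAR) with how strongly that atom is used to reconstruct the current point. One plausible reading of that score, given precomputed sparse codes (the paper's exact weighting and its dictionary-learning step may differ):

```python
import numpy as np

def nlar_outlier_scores(codes, eps=1e-12):
    """codes: (n_atoms, n_samples) sparse coefficients from a learned dictionary.
    An atom used rarely across the whole dataset has a high negative log activity
    ratio (NLAR); a point that leans heavily on rare atoms gets a high score.
    This is one plausible reading of the definition, not the paper's exact formula."""
    usage = np.abs(codes)
    activity_ratio = usage.sum(axis=1) / (usage.sum() + eps)   # per-atom share of total activity
    nlar = -np.log(activity_ratio + eps)                       # rare atoms -> large NLAR
    return (usage * nlar[:, None]).sum(axis=0)                 # per-sample outlier score

# Toy example: 5 atoms, 100 samples; the last sample relies on a rarely used atom
rng = np.random.default_rng(0)
codes = np.abs(rng.normal(size=(5, 100))) * (rng.random((5, 100)) < 0.3)
codes[:, -1] = 0.0
codes[4, :] = 0.0
codes[4, -1] = 2.0            # atom 4 fires only for the last sample
scores = nlar_outlier_scores(codes)
print(scores[-1] > np.median(scores))   # True: the last sample looks anomalous
```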
Dec 7, 2015 in NeurIPS (Neural Information Processing Systems)
Junyoung Chung (Université de Montréal), Kyle Kastner (Université de Montréal), …, Yoshua Bengio (Université de Montréal) (6 authors)
In this paper, we explore the inclusion of latent random variables into the hidden state of a recurrent neural network (RNN) by combining the elements of the variational autoencoder. We argue that through the use of high-level latent random variables, the variational RNN (VRNN) can model the kind of variability observed in highly structured sequential data such as natural speech. We empirically evaluate the proposed model against other related sequential models on four speech datasets and one h...
469 Citations
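
A VRNN inserts a latent random variable into every recurrent step. A minimal single-step sketch in PyTorch, with placeholder layer sizes and without the feature-extractor networks of the full model:

```python
import torch
import torch.nn as nn

class VRNNCell(nn.Module):
    """One step of a variational RNN: prior p(z_t|h_{t-1}), encoder q(z_t|x_t,h_{t-1}),
    decoder p(x_t|z_t,h_{t-1}), then a deterministic recurrence on (x_t, z_t)."""
    def __init__(self, x_dim, z_dim, h_dim):
        super().__init__()
        self.prior = nn.Linear(h_dim, 2 * z_dim)
        self.encoder = nn.Linear(x_dim + h_dim, 2 * z_dim)
        self.decoder = nn.Linear(z_dim + h_dim, x_dim)
        self.rnn = nn.GRUCell(x_dim + z_dim, h_dim)

    def forward(self, x_t, h):
        prior_mu, prior_logvar = self.prior(h).chunk(2, dim=-1)
        enc_mu, enc_logvar = self.encoder(torch.cat([x_t, h], -1)).chunk(2, dim=-1)
        z_t = enc_mu + torch.randn_like(enc_mu) * (0.5 * enc_logvar).exp()  # reparameterize
        x_recon = self.decoder(torch.cat([z_t, h], -1))
        h_next = self.rnn(torch.cat([x_t, z_t], -1), h)
        # KL(q || prior) between two diagonal Gaussians, summed over latent dims
        kl = 0.5 * (prior_logvar - enc_logvar
                    + (enc_logvar.exp() + (enc_mu - prior_mu) ** 2) / prior_logvar.exp()
                    - 1).sum(-1)
        return x_recon, h_next, kl

cell = VRNNCell(x_dim=16, z_dim=4, h_dim=32)
h = torch.zeros(8, 32)                      # batch of 8 sequences
x = torch.randn(10, 8, 16)                  # 10 time steps of toy data
loss = 0.0
for t in range(10):
    x_recon, h, kl = cell(x[t], h)
    loss = loss + ((x_recon - x[t]) ** 2).sum(-1) + kl
print(loss.mean())
```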
Jul 6, 2015 in ICML (International Conference on Machine Learning)
Tim Salimans, Diederik P. Kingma (University of Amsterdam), Max Welling (University of Amsterdam) (3 authors)
Recent advances in stochastic gradient variational inference have made it possible to perform variational Bayesian inference with posterior approximations containing auxiliary random variables. This enables us to explore a new synthesis of variational inference and Monte Carlo methods where we incorporate one or more steps of MCMC into our variational approximation. By doing so we obtain a rich class of inference algorithms bridging the gap between variational methods and MCMC, and offering the ...
283 Citations
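
The core idea here is to fold MCMC transitions into the variational approximation. The toy sketch below only illustrates the "refine a variational sample with a few Markov steps" part on a 1-D Gaussian model; it omits the auxiliary-variable lower bound and learned reverse model that make the full method a proper variational algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: p(z) = N(0, 1), p(x | z) = N(z, 0.5^2), one observation x = 1.2.
# The exact posterior is Gaussian with mean x / (1 + 0.25) = 0.96 and variance 0.2.
x = 1.2

def log_joint_grad(z):
    """d/dz log p(x, z) for the toy Gaussian model above."""
    return -z + (x - z) / 0.25

# Crude initial variational approximation q0(z) = N(0, 1) (deliberately poor).
z = rng.normal(loc=0.0, scale=1.0, size=5000)

# A few unadjusted Langevin steps targeting log p(x, z) move the samples toward
# the true posterior; in the paper such transitions become part of the variational
# approximation itself, together with a learned reverse model.
step = 0.05
for _ in range(50):
    z = z + 0.5 * step * log_joint_grad(z) + np.sqrt(step) * rng.normal(size=z.shape)

print(z.mean(), z.var())   # close to the true posterior mean 0.96 and variance 0.2
```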
Karol Gregor (Google), Ivo Danihelka (Google), …, Daan Wierstra (Google) (5 authors)
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates imag...
987 Citations
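
DRAW builds an image over several steps by letting a recurrent variational auto-encoder write additive updates onto a canvas. The sketch below keeps that iterative canvas but drops the spatial (foveal) attention that is the paper's other key ingredient; all layer sizes are placeholders:

```python
import torch
import torch.nn as nn

class TinyDRAW(nn.Module):
    """DRAW-style model without spatial attention: at every step the decoder RNN
    writes an additive update onto a canvas; the final canvas is the image."""
    def __init__(self, x_dim=784, z_dim=10, h_dim=256, steps=8):
        super().__init__()
        self.steps = steps
        self.encoder_rnn = nn.GRUCell(2 * x_dim + h_dim, h_dim)
        self.decoder_rnn = nn.GRUCell(z_dim, h_dim)
        self.to_z = nn.Linear(h_dim, 2 * z_dim)
        self.write = nn.Linear(h_dim, x_dim)

    def forward(self, x):
        batch = x.size(0)
        canvas = torch.zeros_like(x)
        h_enc = x.new_zeros(batch, self.encoder_rnn.hidden_size)
        h_dec = x.new_zeros(batch, self.decoder_rnn.hidden_size)
        kl = x.new_zeros(batch)
        for _ in range(self.steps):
            error = x - torch.sigmoid(canvas)                     # what is still missing
            h_enc = self.encoder_rnn(torch.cat([x, error, h_dec], -1), h_enc)
            mu, logvar = self.to_z(h_enc).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterized sample
            h_dec = self.decoder_rnn(z, h_dec)
            canvas = canvas + self.write(h_dec)                   # iterative construction
            kl = kl + 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(-1)
        return torch.sigmoid(canvas), kl

model = TinyDRAW()
x = torch.rand(4, 784)                                   # e.g. flattened 28x28 images
recon, kl = model(x)
loss = nn.functional.binary_cross_entropy(recon, x, reduction="none").sum(-1) + kl
print(loss.mean())
```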
Cited By (1)
Round-the-clock monitoring of human behavior and emotions is required in many healthcare applications; such monitoring is very expensive but can be automated using machine learning (ML) and sensor technologies. Unfortunately, the lack of infrastructure for collection and sharing of such data is a bottleneck for ML research applied to healthcare. Our goal is to circumvent this bottleneck by simulating a human body in a virtual environment. This will allow generation of potentially infinite amounts of shareabl...