Masoumeh Heidari Kapourchali
University of Memphis
Modular design, Human–computer interaction, Machine learning, Distributed computing, Activity recognition, Benchmark (computing), Business, Virtual actor, Wearable computer, Intelligibility (communication), Artificial intelligence, Psychology, Linear model, Set (psychology), Intelligent sensor, Pattern recognition, Acoustics, Virtual machine, Perception, Principal component analysis, Knowledge transfer, Auditory system, Equal-loudness contour, Hearing loss, Flexibility (engineering), Unobservable, Internal model, Speech recognition, Communication policies, Distributed decision, Hearing impaired, Testbed, Gait (human), Computer science, Multi-agent system, Embedding, Matrix decomposition, Speech production, State (computer science), Feature extraction, Audiology, Sparse matrix, Knowledge management, Feature learning, Computational model, Cluster analysis, Function (engineering), Real-time computing, Statistical hypothesis testing, Speech corpus
9 Publications · 5 H-index · 12 Citations
Publications
Round-the-clock monitoring of human behavior and emotions is required in many healthcare applications; it is very expensive when done manually, but can be automated using machine learning (ML) and sensor technologies. Unfortunately, the lack of infrastructure for collecting and sharing such data is a bottleneck for ML research applied to healthcare. Our goal is to circumvent this bottleneck by simulating a human body in a virtual environment. This will allow generation of potentially infinite amounts of shareable...
Monitoring using sensors is ubiquitous in our environment. In this paper, a state estimation model is proposed for continuous activity monitoring from multimodal and heterogeneous sensor data. Each sensor is modeled as an independent agent in the predictive coding framework. It can sample its environment, communicate with other agents, and adapt its internal model to its environment in an unsupervised manner. Using controlled experiments, we show that limitations of each sensor, such as inference...
3 Citations
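The paper above models each sensor as a predictive-coding agent that infers a latent state and adapts its internal model without supervision. The following is a minimal sketch of that general idea, not the authors' implementation: a single agent with a linear generative model that reduces its prediction error on each new observation, with all dimensions and learning rates chosen for illustration only.

```python
import numpy as np

# Hypothetical sketch of one predictive-coding sensor agent (illustrative
# parameters; not the model from the paper).
class SensorAgent:
    def __init__(self, dim_obs, dim_state, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # Generative weights: predicted observation = W @ state
        self.W = rng.normal(scale=0.1, size=(dim_obs, dim_state))
        self.state = np.zeros(dim_state)
        self.lr = lr

    def step(self, obs, n_iters=20):
        # Infer the latent state that best explains the observation
        for _ in range(n_iters):
            err = obs - self.W @ self.state          # prediction error
            self.state += self.lr * self.W.T @ err   # gradient step on state
        err = obs - self.W @ self.state
        # Slowly adapt the internal model (unsupervised Hebbian-style update)
        self.W += 0.01 * np.outer(err, self.state)
        return err

agent = SensorAgent(dim_obs=4, dim_state=2)
x = np.array([1.0, 0.5, -0.5, 0.2])
e0 = np.linalg.norm(agent.step(x))
e1 = np.linalg.norm(agent.step(x))
# Repeated exposure to the same observation shrinks the prediction error
```

With more agents, the residual error of each agent could serve as the signal it communicates, which is the kind of inter-agent message passing the abstract alludes to.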
Apr 3, 2020 in AAAI (National Conference on Artificial Intelligence)
#1 Masoumeh Heidari Kapourchali (U of M: University of Memphis) · H-Index: 5
#2 Bonny Banerjee (U of M: University of Memphis) · H-Index: 11
We propose an agent model capable of actively and selectively communicating with other agents to predict its environmental state efficiently. Selecting whom to communicate with is a challenge when the internal model of other agents is unobservable. Our agent learns a communication policy as a mapping from its belief state to whom to communicate with, in an online and unsupervised manner, without any reinforcement. Human activity recognition from multimodal, multisource and heterogeneous sensor data...
1 Citation
#1 Masoumeh Heidari Kapourchali (U of M: University of Memphis) · H-Index: 5
#2 Bonny Banerjee (U of M: University of Memphis) · H-Index: 11
In the Internet of Things (IoT), heterogeneous sensors generate time-series data with different properties. The problem of unsupervised feature learning from a time-series dataset poses two challenges. First, it is known that centroids obtained by clustering time-series with high overlap do not reflect their patterns, i.e., subsequence time-series clustering is meaningless. In this paper, we show that principal component analysis, sparse coding, and non-negative matrix factorization are also meaningless...
3 Citations
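The "meaninglessness" of subsequence time-series clustering mentioned above can be demonstrated in a few lines: k-means centroids of overlapping sliding windows come out smooth and nearly data-independent, because averaging many shifted subsequences washes out the series' structure. This sketch uses a plain NumPy k-means on a random walk; window length, k, and iteration counts are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=2000))          # a random walk

w = 64                                             # window length (illustrative)
windows = np.stack([series[i:i + w] for i in range(len(series) - w)])
windows -= windows.mean(axis=1, keepdims=True)     # center each window

# Plain Lloyd's-algorithm k-means on the overlapping windows
k = 3
centroids = windows[rng.choice(len(windows), k, replace=False)]
for _ in range(20):
    d = ((windows[:, None, :] - centroids[None]) ** 2).sum(-1)
    labels = d.argmin(1)
    for j in range(k):
        if (labels == j).any():
            centroids[j] = windows[labels == j].mean(0)

# Each centroid is far smoother than any individual window: the cluster
# averages retain almost none of the random walk's local structure.
roughness = lambda a: np.abs(np.diff(a, axis=-1)).mean()
print(roughness(centroids), roughness(windows))
```

The abstract's contribution is to show that the same pathology extends beyond clustering to PCA, sparse coding, and non-negative matrix factorization applied to overlapping windows.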
A corpus of recordings of deaf speech is introduced. Adults who were pre- or post-lingually deafened, as well as those with normal hearing, read standardized speech passages totaling 11 hours of .wav recordings. Preliminary acoustic analyses are included to provide a glimpse of the kinds of analyses that can be conducted with this corpus of recordings. Long-term average speech spectra as well as spectral moment analyses provide considerable insight into differences observed in the speech of talkers ju...
2 Citations
Dec 1, 2016 in ICMLA (International Conference on Machine Learning and Applications)
#1 Bonny Banerjee (U of M: University of Memphis) · H-Index: 11
#2 Masoumeh Heidari Kapourchali (U of M: University of Memphis) · H-Index: 5
Last. Monique Pousson · H-Index: 6
Does a hearing-impaired individual's speech reflect his or her hearing loss, and if it does, can the nature of the hearing loss be inferred from that speech? To investigate these questions, at least four hours of speech data were recorded from each of 37 adult individuals, both male and female, belonging to four classes: 7 normal, and 30 severely-to-profoundly hearing impaired with high, medium or low speech intelligibility. Acoustic kernels were learned for each individual by capturing the distribution of ...
Sep 8, 2016 in INTERSPEECH (Conference of the International Speech Communication Association)
#1 Shamima Najnin (U of M: University of Memphis) · H-Index: 7
#2 Bonny Banerjee (U of M: University of Memphis) · H-Index: 11
Last. Monique Pousson · H-Index: 6
2 Citations