SampleRNN: An Unconditional End-to-End Neural Audio Generation Model

Published: Nov 4, 2016
Abstract
In this paper we propose a novel model for unconditional audio generation that generates one audio sample at a time. We show that our model, which combines memory-less modules (autoregressive multilayer perceptrons) with stateful recurrent neural networks in a hierarchical structure, is able to capture the underlying sources of variation in temporal sequences over very long time spans, on three datasets of different...
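The hierarchy the abstract describes can be sketched as a two-tier loop: a stateful recurrent module runs at a slow "frame" rate and summarizes past samples, while a memory-less MLP runs at the sample rate and autoregressively emits one sample at a time conditioned on that summary. The sketch below is a toy illustration of this structure only; the layer sizes, weights, and tier count are hypothetical and not the paper's actual architecture.

```python
import math
import random

random.seed(0)
FRAME = 4   # samples per frame (hypothetical)
H = 8       # hidden size (hypothetical)

def rand_vec(n): return [random.uniform(-0.1, 0.1) for _ in range(n)]
def rand_mat(r, c): return [rand_vec(c) for _ in range(r)]

W_in = rand_mat(H, FRAME)   # previous frame -> hidden
W_hh = rand_mat(H, H)       # hidden -> hidden (recurrence: the stateful part)
W_mlp = rand_vec(H + 1)     # MLP input: hidden summary + previous sample

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def rnn_step(h, frame):
    # Slow tier: stateful RNN updates its summary once per frame.
    a = matvec(W_in, frame)
    b = matvec(W_hh, h)
    return [math.tanh(x + y) for x, y in zip(a, b)]

def mlp(h, prev_sample):
    # Fast tier: memory-less module; all context comes from h and prev_sample.
    feats = h + [prev_sample]
    return math.tanh(sum(w * f for w, f in zip(W_mlp, feats)))

def generate(n_frames):
    h = [0.0] * H
    frame = [0.0] * FRAME
    out = []
    for _ in range(n_frames):
        h = rnn_step(h, frame)        # update the long-range summary
        frame = []
        prev = out[-1] if out else 0.0
        for _ in range(FRAME):        # emit samples one at a time
            prev = mlp(h, prev)
            frame.append(prev)
        out.extend(frame)
    return out

samples = generate(8)
print(len(samples))  # 8 frames x 4 samples per frame = 32 samples
```

Because the RNN advances only once per frame while the MLP fires per sample, each tier operates at its own timescale, which is how the model can track structure over very long spans without running an expensive recurrence at the raw sample rate.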