Recurrent neural networks, and long short-term memory networks (LSTMs) in particular, are remarkably effective tools for sequence processing that learn dense, black-box hidden representations of their sequential input. Researchers interested in better understanding these models have studied how the hidden state representations change over time and noticed some interpretable patterns, but also significant noise.

We present LSTMVis, a visual analysis tool for recurrent neural networks with a focus on understanding these hidden state dynamics. The tool allows a user to select a hypothesis input range to focus on local state changes, to match these state changes to similar patterns in a large dataset, and to align the results with structural annotations from their domain. We provide data for the tool to analyze specific hidden state properties on datasets containing nesting, phrase structure, and chord progressions, and demonstrate how the tool can be used to isolate patterns for further statistical analysis.
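The core matching idea can be sketched in a few lines: pick the hidden cells that stay active over the selected input range, then scan the rest of the dataset for other ranges where the same cells stay active. This is a minimal illustration, not the tool's actual implementation; the threshold value and the "all cells above threshold" criterion are simplifying assumptions.

```python
import numpy as np

def select_cells(states, start, end, threshold=0.3):
    """Indices of hidden cells whose activation stays above `threshold`
    for every timestep in the selected range [start, end)."""
    window = states[start:end]                        # (range_len, num_cells)
    return np.where((window > threshold).all(axis=0))[0]

def match_ranges(states, cells, min_len, threshold=0.3):
    """All maximal ranges of length >= min_len where every selected cell
    stays above `threshold` (a simplified stand-in for LSTMVis matching)."""
    on = (states[:, cells] > threshold).all(axis=1)   # (timesteps,)
    matches, start = [], None
    for t, flag in enumerate(on):
        if flag and start is None:
            start = t
        elif not flag and start is not None:
            if t - start >= min_len:
                matches.append((start, t))
            start = None
    if start is not None and len(on) - start >= min_len:
        matches.append((start, len(on)))
    return matches

# Toy example: one cell turns on for two separate spans.
states = np.zeros((10, 3))
states[2:5, 0] = 0.8
states[7:10, 0] = 0.9
cells = select_cells(states, 2, 5)        # -> array([0])
print(match_ranges(states, cells, 2))     # -> [(2, 5), (7, 10)]
```

On real data, `states` would be the recorded hidden states of a trained LSTM over a corpus, and the matched ranges are what a user would then align against structural annotations.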

See, for example, the pattern described for the parens dataset or the patterns in the children's book dataset. Start a match by using the 'fast' or 'precise' button.

Please let us know what you think by commenting below or contacting @harvardnlp on Twitter.
