Please read the detailed intro about GLTR.

Each word in the text is analyzed by how likely it would be as the predicted word given the context to its left. If the actual word is among the top 10 predicted words, its background is colored green; among the top 100, yellow; among the top 1000, red; otherwise violet. Try some of the sample texts below and see for yourself whether you can spot the difference between machine-generated and human-written text, or try your own. (Tip: hover over the words for more detail.)
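The rank-to-color mapping above can be sketched as follows. This is a minimal illustration, not GLTR's actual code: the tiny probability table and the helper names `color_for_rank` and `rank_of` are assumptions for the example, standing in for a real language model's next-word distribution.

```python
def color_for_rank(rank: int) -> str:
    """Map the rank of the actual word among the model's predictions
    to GLTR's color buckets."""
    if rank <= 10:
        return "green"    # actual word is in the top 10 predictions
    if rank <= 100:
        return "yellow"   # in the top 100
    if rank <= 1000:
        return "red"      # in the top 1000
    return "violet"       # outside the top 1000

def rank_of(word: str, probs: dict) -> int:
    """1-based rank of `word` when candidates are sorted by probability."""
    ranked = sorted(probs, key=probs.get, reverse=True)
    return ranked.index(word) + 1

# Toy next-word distribution (a stand-in for a real model's output).
probs = {"cat": 0.5, "dog": 0.3, "garden": 0.15, "xylophone": 0.05}
print(color_for_rank(rank_of("cat", probs)))  # → green
```

In the real tool, the distribution comes from a language model such as GPT-2 evaluated at each position; only the ranking step differs from this sketch.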

The histograms show summary statistics about the text: frac(p) is the probability of the actual word divided by the maximum probability of any word at that position. The top-10 entropy is the entropy over the top 10 predictions at each position.
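The two statistics can be sketched as below. This is an illustrative reading, not GLTR's source: in particular, renormalizing the top 10 probabilities before taking the entropy is an assumption made here so the value is a proper entropy.

```python
import math

def frac_p(p_actual: float, p_max: float) -> float:
    """frac(p): probability of the actual word divided by the maximum
    probability of any word at this position."""
    return p_actual / p_max

def top10_entropy(probs: list) -> float:
    """Entropy (in bits) over the top 10 predictions, renormalized to
    sum to 1 -- an assumption about how the statistic is computed."""
    top = sorted(probs, reverse=True)[:10]
    z = sum(top)
    return -sum((p / z) * math.log2(p / z) for p in top)

# Toy distribution for one position (an assumption for illustration).
probs = [0.5, 0.3, 0.15, 0.05]
print(frac_p(0.3, 0.5))                # actual word had p=0.3, best word p=0.5
print(round(top10_entropy(probs), 3))  # low entropy = confident prediction
```

A frac(p) near 1 and a low entropy both indicate the model found the word highly predictable, which is the pattern GLTR highlights in machine-generated text.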



A collaboration of the MIT-IBM Watson AI Lab and Harvard NLP.



