It’s impressive that generative models like OpenAI’s GPT-2 can produce fluent text from limited input. Controlling the attributes of that text (topic, context, sentiment), however, usually requires an extra layer of work, such as architectural modifications or fine-tuning on attribute-specific data. A team of researchers from Uber, Caltech, and the Hong Kong University of Science and Technology tackled this problem and created the Plug and Play Language Model (PPLM), which combines one or more lightweight attribute classifiers with a pre-trained language model, without retraining the language model itself.
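The core idea can be illustrated with a toy sketch. This is not the PPLM implementation (which perturbs the transformer's hidden activations over GPT-2's full vocabulary); it is a minimal NumPy illustration of the same principle under simplified assumptions: take a base next-token distribution, then nudge it by gradient ascent on the log-probability of a bag-of-words attribute, with a KL penalty that keeps the steered distribution close to the original model. The vocabulary, logits, and attribute mask below are all invented for illustration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy vocabulary and a toy base "language model" distribution over the next token.
vocab = ["cat", "dog", "war", "peace", "tree"]
base_logits = np.array([2.0, 1.5, 0.1, 0.2, 1.0])

# Bag-of-words attribute classifier: p(attribute) = probability mass on these words.
attr_mask = np.array([0.0, 0.0, 1.0, 1.0, 0.0])  # "war", "peace"

def pplm_step(logits, stepsize=0.5, kl_scale=0.1, num_iterations=10):
    """Gradient-ascend log p(attribute) in logit space, minus a KL term
    that keeps the perturbed distribution close to the base one."""
    base_p = softmax(logits)
    pert = logits.copy()
    for _ in range(num_iterations):
        p = softmax(pert)
        attr_p = (p * attr_mask).sum()
        # d log p(attribute) / d logits (softmax Jacobian applied analytically)
        grad_attr = (attr_mask * p - attr_p * p) / attr_p
        # d KL(p || base_p) / d logits
        log_ratio = np.log(p / base_p)
        grad_kl = p * (log_ratio - (p * log_ratio).sum())
        pert += stepsize * (grad_attr - kl_scale * grad_kl)
    return pert

pert_logits = pplm_step(base_logits)
# After steering, the attribute words carry more probability mass,
# while the KL penalty keeps the rest of the distribution recognizable.
```

In the actual model the same trade-off is controlled by hyperparameters such as the step size and a KL-divergence scale, but the gradients flow through the language model's latent states rather than through a 5-word logit vector.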

Paper with Initial Results: https://arxiv.org/pdf/1912.02164.pdf

Github: https://github.com/uber-research/PPLM

Colab Notebook: https://colab.research.google.com/drive/1Ux0Z4-ruiVtJ6jUk98uk6FqfvGHCOYL3

Demo: https://transformer.huggingface.co/model/pplm

Installation

Setup
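A typical setup, assuming you follow the linked GitHub repository's layout (clone, then install the Python dependencies it lists), looks like this:

```shell
# Clone the PPLM repository linked above
git clone https://github.com/uber-research/PPLM.git
cd PPLM

# Install the dependencies the repository declares
# (assumes a requirements file as is conventional; check the repo's README)
pip install -r requirements.txt
```

If you prefer not to install anything locally, the Colab notebook and the Hugging Face demo linked above let you try PPLM directly in the browser.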