
TabNine is a language-agnostic autocompleter that leverages machine learning to provide responsive, reliable, and relevant code suggestions. In a blog post shared last week, Jacob Jackson, TabNine’s creator, introduced Deep TabNine, which uses deep learning to significantly improve suggestion quality.

What is Deep TabNine?

Deep TabNine is based on OpenAI’s GPT-2 model, which uses the Transformer architecture. Although this architecture was originally designed for solving problems in natural language processing, Deep TabNine uses it to understand the English embedded in code. For instance, the model can infer that a word should be negated inside an if/else statement. During training, the model’s goal is to predict the next token given the tokens that come before it.
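To make the next-token objective concrete, here is a minimal sketch in Python. It uses a simple bigram frequency table as a stand-in for the Transformer: like GPT-2, it is trained only to predict which token follows a given context, just with a context of one token instead of a learned attention mechanism.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each token, which tokens have followed it."""
    following = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, token):
    """Return the token most frequently seen after `token`."""
    candidates = model.get(token)
    return candidates.most_common(1)[0][0] if candidates else None

# A tiny "training corpus" of code tokens.
code = "for i in range ( n ) : total += i".split()
model = train_bigram(code)
print(predict_next(model, "in"))  # -> range
```

GPT-2 replaces the frequency table with a deep network that conditions on the entire preceding context, but the training signal is the same: predict the next token.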

Trained on nearly 2 million files from GitHub, Deep TabNine comes with pre-existing knowledge instead of learning only from a user’s current project. The model also draws on documentation written in natural language to infer function names, parameters, and return types, and it can pick up small clues that are difficult for a traditional tool to access. For instance, it understands that the return type of app.get_user() is likely an object with setter methods, while the return type of app.get_users() is likely a list.
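A hand-written heuristic can illustrate the kind of naming cue involved, though Deep TabNine learns such regularities statistically rather than from rules like this. The function names below mirror the article’s get_user/get_users example; the rule itself is purely illustrative.

```python
def guess_return_kind(func_name):
    """Toy heuristic: a plural noun in a getter's name hints that it
    returns a collection; a singular noun hints at a single object.
    (Illustrative only -- not how Deep TabNine actually works.)"""
    noun = func_name.replace("get_", "", 1)
    return "list" if noun.endswith("s") else "object"

print(guess_return_kind("get_user"))   # -> object
print(guess_return_kind("get_users"))  # -> list
```

The point of a learned model is precisely that it does not need such brittle rules: it picks up these conventions, and many subtler ones, from the millions of files it was trained on.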

How can you access Deep TabNine?

Although integrating a deep learning model comes with several benefits, using it demands a lot of computing power. Jackson clearly mentioned that running it on a laptop will not deliver the low latency that TabNine’s users are accustomed to. As a solution, the team is offering TabNine Cloud (beta), a service that lets users run GPU-accelerated autocompletion on TabNine’s servers. To get access to TabNine Cloud, you can sign up here.

However, many developers prefer to keep their code on their own machines. To ensure the privacy and security of your code, the TabNine team is working on the following options:

They are promising a reduced-size model in the future that can run on a laptop with reasonable latency for individual developers.

Enterprises will have an option to license the model and run it on their own hardware. The team is also offering to train a custom model that understands the unique patterns and style specific to an enterprise’s codebase.

Developers have already started beta testing it and are quite impressed:

Autocompletion with deep learning https://t.co/WenacHVj7z very cool! I tried related ideas a long while ago in days of char-rnn but it wasn't very useful at the time. With new toys (GPT-2) and more focus this may start to work quite well. pic.twitter.com/XSV9O7yxpf — Andrej Karpathy (@karpathy) July 18, 2019

Deep TabNine from @TabNineInc: Absolutely mind-blowing autocompletion with a GPT-2-based model trained on around 2 million files from GitHub and supporting Python, C++, Objective-C, Rust, Scala, Kotlin etc etc. https://t.co/nyMTtmyqcj pic.twitter.com/hGxmvb6hGi — Ruslan Abdikeev (@aruslan) July 18, 2019

You can check out the official announcement by TabNine to learn more.
