A month ago, we launched Deep TabNine, which uses deep learning to provide code completion suggestions. Deep learning models require a lot of computing power to run, so we launched it as a cloud service. This required you to upload your code to our servers, which is a drawback for many developers.

We’re excited to announce TabNine Local, which lifts this restriction by allowing you to run Deep TabNine on your own machine. Here’s a video of Deep TabNine running on a laptop:

You can try it for yourself by installing TabNine:

Performance

Although the model has 358 million parameters, it runs on a dual-core laptop with excellent performance: around 30 milliseconds per token. A single completion usually consists of around 5 tokens, so a typical suggestion arrives in roughly 150 milliseconds, though a completion can occasionally contain up to 20 tokens. We originally thought we would need to switch to a smaller model in order to run on a laptop, but this turned out not to be necessary.

Since TabNine Cloud and TabNine Local use the same model, they have similar suggestion quality. TabNine Cloud can use more beams and a longer context, so its suggestions are slightly better. If you enable both TabNine Local and TabNine Cloud, each query is sent to both endpoints, and you receive suggestions from whichever one responds first (preferring TabNine Cloud if both have responded). This lets you use TabNine Cloud in most cases, while keeping completions seamless if you lose internet access or hit high network latency.
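The race between the two endpoints can be sketched with Python's asyncio. This is a minimal illustration of the "first responder wins, cloud preferred on a tie" policy, not TabNine's actual client code; the two query functions are hypothetical stand-ins with simulated latencies.

```python
import asyncio

# Hypothetical stand-ins for the real endpoint calls.
async def query_cloud(prompt):
    await asyncio.sleep(0.08)  # simulated network round trip
    return "cloud-suggestions"

async def query_local(prompt):
    await asyncio.sleep(0.03)  # simulated local inference time
    return "local-suggestions"

async def complete(prompt):
    cloud = asyncio.ensure_future(query_cloud(prompt))
    local = asyncio.ensure_future(query_local(prompt))
    # Wake up as soon as either endpoint has responded.
    done, pending = await asyncio.wait(
        {cloud, local}, return_when=asyncio.FIRST_COMPLETED
    )
    # Prefer the cloud result if both have already responded.
    winner = cloud if cloud in done else next(iter(done))
    for task in pending:
        task.cancel()
    return await winner

print(asyncio.run(complete("def add(a, b):")))  # prints "local-suggestions"
```

Here the local endpoint wins because its simulated 30 ms beats the cloud's 80 ms; with a fast network the cloud would respond first and its (slightly better) suggestions would be used instead.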

System requirements

TabNine Local supports Windows, macOS, and Linux. It uses FMA3 instructions, which have been supported by Intel since 2013 and AMD since 2012.
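On Linux you can check whether your CPU advertises FMA3 before installing. A rough sketch (this reads the kernel's flags list from /proc/cpuinfo, where the "fma" flag denotes FMA3 on both Intel and AMD; it is not an official TabNine utility):

```python
def has_fma3(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the CPU advertises FMA3 (Linux only)."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                # The kernel lists instruction-set extensions on the
                # "flags" line; "fma" is the FMA3 feature flag.
                if line.startswith("flags"):
                    return "fma" in line.split(":", 1)[1].split()
    except OSError:
        pass
    return False

print(has_fma3())
```

On macOS the equivalent information is available via `sysctl machdep.cpu`, and on Windows via CPUID-based tools.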

There are no hard requirements for processor speed. TabNine will benchmark the system upon startup and adapt its hyperparameters to the system’s capabilities.
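One way such a startup benchmark could work is to time a warm-up inference at increasing beam widths and keep the largest width that fits a latency budget. This is a hypothetical sketch of the idea, not TabNine's actual benchmarking code; the inference callback and the latency target are invented for illustration.

```python
import time

def pick_beam_width(run_inference, target_ms=100, beam_options=(1, 2, 4, 8)):
    """Hypothetical startup benchmark: time one warm-up inference per beam
    width and keep the largest width that stays under the latency budget."""
    best = beam_options[0]
    for beams in beam_options:
        start = time.perf_counter()
        run_inference(beams)  # one timed warm-up call
        elapsed_ms = (time.perf_counter() - start) * 1000
        if elapsed_ms <= target_ms:
            best = beams
        else:
            break  # wider beams would only be slower
    return best

# Stand-in model call whose cost grows linearly with beam width.
print(pick_beam_width(lambda beams: time.sleep(0.02 * beams)))
```

The same pattern extends to other hyperparameters such as context length: measure once at startup, then trade suggestion quality for latency based on what the machine can actually deliver.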

The model consumes 692 MB of disk space.