Experiment tracking tool for machine learning projects

Track and organize your entire experimentation process, from exploratory analysis to model training runs and hyperparameter sweeps, and everything in between.



Get Started, It’s Free!






Track experiments

Log metrics, hyperparameters, data versions, hardware usage, and more. Work on any infrastructure, in any language, from scripts or notebooks.

Record data exploration

Experiments don’t have to stop at training scripts. Version your exploratory data analysis and share it with your team.

Organize teamwork

Manage your team with organizations, projects, and user roles. Organize experiments with tags and custom views.

Quick and simple setup

Start tracking experiments in minutes. Work like you used to… just log it.

Insert a few lines of code into your standard training and validation scripts and start logging your experiment data.

Run on your laptop, in the cloud, on Google Colab or wherever you want.

Use it in scripts or in Jupyter notebooks. Run experiments your way; just let us track them.

pip install neptune-client

import neptune

neptune.init('awesome-project')
neptune.create_experiment('great-idea')

# any training or validation code you want

neptune.log_metric('auc', score)
neptune.log_image('diagnostics', 'roc_auc.png')
neptune.log_artifact('model_weights.h5')

python train.py




Get Started, It’s Free!






Great UI

A UI that scales, is highly customizable, and is designed for teams

Log and organize millions of experiment runs.

Create custom views for data scientists or managers, and save them for later.

Search through experiments quickly with a powerful query language.

Comparison tools

Intelligent table that shows you diffs and more

When you compare multiple experiment runs, it is sometimes difficult to figure out what is different and what you should look for.

We’ve created a table that automatically finds the columns and values that are different and displays them for you!

Data Versioning

Track versions of your datasets, group results by dataset

Datasets change over a project’s lifetime. You can log dataset signatures as you run experiments and group the results by dataset in the UI.

# versioning helpers from the neptune-contrib package
from neptunecontrib.versioning.data import log_data_version, log_s3_data_version

log_data_version('/path/to/my_data.csv')        # local file
log_s3_data_version('my-bucket', 'train_dir/')  # S3 bucket


User Management

Organize your projects, give different roles to different people

You can assign people to different organizations and projects.

You can choose whether they should be able to edit experiment data or simply view what is happening and comment on it.

Notebook Autosnapshots

Experiment in notebooks and let us autosave your .ipynb code

When you are running quick-and-dirty experiments in notebooks, parameters or code changes can easily be lost.

So we created an extension to snapshot your .ipynb code whenever you run your experiment!
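Getting the extension set up is typically a two-step install (a sketch, assuming a standard Jupyter setup; check the Neptune docs if your environment differs):

pip install neptune-notebooks
jupyter nbextension enable --py neptune-notebooks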

Notebook Versioning and Diffing

Version your exploratory analysis, get notebook diffs for free

Experimentation doesn’t stop at the training script, so we created an extension to track the exploratory data analysis or results exploration that you do in your Jupyter notebooks.

With it you can save, share, and diff your entire team’s analyses!

Query API

Fetch experiment data from the app, visualize results in notebooks

Do you want to access experiment data like metrics, hyperparameters, or model binaries programmatically?

Neptune lets you fetch everything you or your teammates logged, directly into your scripts or notebooks!
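For example, with the neptune-client package you can pull a project’s experiment table into a pandas DataFrame (a minimal sketch; the project name below is a placeholder):

import neptune

# connect to your project (replace with your own workspace/project name)
project = neptune.init('my-workspace/awesome-project')

# fetch the experiment table as a pandas DataFrame and inspect it
df = project.get_leaderboard()
print(df.head())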

Integrations

Connect with the tools you use, start tracking in minutes

Neptune provides loggers for all the major machine learning frameworks and hyperparameter optimization libraries so that you don’t have to implement them yourself.

Is something missing? Tell us and we will add it for you!

# PyTorch Lightning
trainer = Trainer(logger=NeptuneLogger(...))

# Catalyst
runner = SupervisedRunner()
runner.train(callbacks=[NeptuneLogger(...)])

# fastai
learn.callbacks.append(NeptuneMonitor())
learn.fit_one_cycle(...)

# Optuna
study.optimize(..., callbacks=[NeptuneMonitor()])










We integrate with your favourite frameworks and tools
User stories

Lets me see progress anytime

“Neptune allows us to keep all of our experiments organized in a single space. Being able to see my team’s results any time I need makes it effortless to track progress and enables easier coordination.”



Gives us the flexibility we need

“For me the most important thing about Neptune is its flexibility. Even if I’m training with Keras or TensorFlow on my local laptop, and my colleagues are using fast.ai on a virtual machine, we can share our results in a common environment.”



Hooks into multiple frameworks

“What we like about Neptune is that it easily hooks into multiple frameworks. Keeping track of machine learning experiments systematically over time and visualizing the output adds a lot of value for us.”



Manage your ML experiments. Create a Free Account.



Get Started, It’s Free!





