mlpack 3.0.0 released!

It is my pleasure to announce the release of mlpack 3. This is the culmination of more than a decade's worth of development by more than 100 contributors from around the world. This release includes, among other things, Python bindings, a generic optimization infrastructure, support for deep learning, and improved implementations of machine learning algorithms.

Community-Led Growth

I'm proud to say that over the years we have grown the project into a community-led effort for fast machine learning implementations in C++.

In 2007, mlpack was just a small project at a single lab in Georgia Tech focusing only on nearest neighbor search and related techniques. Now, in 2018, it's developed and used all around the world (and even in space!), it's a regular part of Google Summer of Code, and it implements all manner of general and specialized machine learning techniques.

The list of contributors is too long to include here, but everyone played a part in making this release happen. The About page on the mlpack website lists all of the contributors, and you can also look at GitHub's contributor list. Thank you to each and every contributor!

Interfaces to Python and Other Languages

For the mlpack 3 release, we have created a system to provide bindings to Python that have the same interface as our command-line bindings. In addition, we are planning to generate bindings for other languages, such as MATLAB, Java, Scala, and C#.

Here are some links to quickstart guides for those bindings:

And you can download the new source package here:

New And Improved Functionality

Since the last release (mlpack 2.2.5), a lot has been added and changed. Much of this is due to projects from Google Summer of Code. A short list of new and improved functionality:

Optimization infrastructure (more here and here)

Deep learning infrastructure with support for FNNs, CNNs, and RNNs, as well as lots of existing layer types and support for custom layers.

New optimizers added: AdaGrad, CMAES, CNE, FrankWolfe, GradientDescent, GridSearch, IQN, Katyusha, LineSearch, ParallelSGD, SARAH, SCD, SGDR, SMORMS3, SPALeRA, and SVRG.

Fast random forest implementation added to the set of classifiers that mlpack implements.

Added a hyperparameter tuning and cross-validation infrastructure.

Modular By Design

Since mlpack is designed in a modular way, you can drop in custom functionality for a specific task. For instance, if you want to use a custom metric for nearest neighbor search or if you want to use a custom criterion for splitting your decision trees, you simply need to write the code and it plugs in with no runtime overhead.

In addition, because mlpack is built on Armadillo, you can plug in any BLAS you like—OpenBLAS is a good, fast choice with built-in parallelization. You could even use NVBLAS, which will outsource heavy-duty matrix computations to the GPU, if you have GPUs available.

Read, Download, Explore

So, head on over to http://www.mlpack.org/ and check out the new release! And if you're interested in following the development or contributing, check out the GitHub project.