The answer by @unutbu works very nicely for applying any function to the rows of an array. In this particular case, there are some mathematical symmetries you can use that will speed things up considerably if you are working with large arrays.

Here is a modified version of your function:

    import numpy as np

    def mahalanobis_sqdist3(x, mean, Sigma):
        Sigma_inv = np.linalg.inv(Sigma)
        xdiff = x - mean
        # Row-wise quadratic form (x - mean) Sigma^-1 (x - mean)^T, one value per row.
        return (xdiff.dot(Sigma_inv) * xdiff).sum(axis=-1)

If you end up using a large Sigma, I would recommend caching Sigma_inv and passing it in as an argument to your function instead. Since Sigma is 4x4 in this example, this doesn't matter, but I'll show how to deal with a large Sigma anyway for anyone else who comes across this.
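For example, a minimal sketch of that cached-inverse variant (the name mahalanobis_sqdist3_pre is mine, not from the original post):

    def mahalanobis_sqdist3_pre(x, mean, Sigma_inv):
        # Sigma_inv is computed once by the caller and reused across calls.
        xdiff = x - mean
        return (xdiff.dot(Sigma_inv) * xdiff).sum(axis=-1)

    # Sigma_inv = np.linalg.inv(Sigma)   # pay the inversion cost once, up front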

If you aren't going to be using the same Sigma repeatedly, you won't be able to cache its inverse, so, instead of inverting the matrix, you could use a different method to solve the linear system. Here I'll use the LU decomposition built into SciPy. This only improves the time if the number of columns of x is large relative to its number of rows.

Here is a function that shows that approach:

    from scipy.linalg import lu_factor, lu_solve

    def mahalanobis_sqdist4(x, mean, Sigma):
        xdiff = x - mean
        # lu_factor returns the LU factorization of Sigma (not its inverse).
        Sigma_inv = lu_factor(Sigma)
        return (xdiff.T * lu_solve(Sigma_inv, xdiff.T)).sum(axis=0)

Here are some timings. I'll include the version with einsum as mentioned in the other answer.
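For reference, here is my reading of the two functions those timings refer to: mahalanobis_sqdist is the per-row function from the question and mahalanobis_sqdist2 is the einsum version from @unutbu's answer. Treat both as sketches of those posts rather than exact copies:

    def mahalanobis_sqdist(x, mean, Sigma):
        # Single-row version, applied row-by-row via np.apply_along_axis.
        Sigma_inv = np.linalg.inv(Sigma)
        xdiff = x - mean
        return xdiff.dot(Sigma_inv).dot(xdiff)

    def mahalanobis_sqdist2(x, mean, Sigma):
        # Whole-array version: sums xdiff[i,j] * Sigma_inv[j,k] * xdiff[i,k] over j, k.
        Sigma_inv = np.linalg.inv(Sigma)
        xdiff = x - mean
        return np.einsum('ij,jk,ik->i', xdiff, Sigma_inv, xdiff)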

    import numpy as np

    Sig1 = np.array([[ 0.16043333,  0.11808333,  0.02408333,  0.01943333],
                     [ 0.11808333,  0.13583333,  0.00625   ,  0.02225   ],
                     [ 0.02408333,  0.00625   ,  0.03916667,  0.00658333],
                     [ 0.01943333,  0.02225   ,  0.00658333,  0.01093333]])
    mean1 = np.array([ 5.028,  3.48 ,  1.46 ,  0.248])
    x = np.random.rand(25, 4)

    %timeit np.apply_along_axis(mahalanobis_sqdist, 1, x, mean1, Sig1)
    %timeit mahalanobis_sqdist2(x, mean1, Sig1)
    %timeit mahalanobis_sqdist3(x, mean1, Sig1)
    %timeit mahalanobis_sqdist4(x, mean1, Sig1)

giving:

    1000 loops, best of 3: 973 µs per loop
    10000 loops, best of 3: 36.2 µs per loop
    10000 loops, best of 3: 40.8 µs per loop
    10000 loops, best of 3: 83.2 µs per loop

However, changing the sizes of the arrays involved changes the timing results. For example, letting x = np.random.rand(2500, 4), the timings are:

    10 loops, best of 3: 95 ms per loop
    1000 loops, best of 3: 355 µs per loop
    10000 loops, best of 3: 131 µs per loop
    1000 loops, best of 3: 337 µs per loop

And letting x = np.random.rand(1000, 1000), Sigma1 = np.random.rand(1000, 1000), and mean1 = np.random.rand(1000), the timings are:

    1 loops, best of 3: 1min 24s per loop
    1 loops, best of 3: 2.39 s per loop
    10 loops, best of 3: 155 ms per loop
    10 loops, best of 3: 99.9 ms per loop
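One caveat on that last setup: a matrix from np.random.rand is essentially never symmetric positive-definite, which matters for the SPD-specific routines in the edit below. A minimal sketch of one way to build an SPD test matrix of the same size (the diagonal shift is just a convenient way to guarantee positive-definiteness):

    n = 1000
    A = np.random.rand(n, n)
    # A.dot(A.T) is symmetric positive semi-definite; adding n * I
    # pushes the eigenvalues safely above zero.
    Sigma1 = A.dot(A.T) + n * np.eye(n)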

Edit: I noticed that one of the other answers used the Cholesky decomposition. Given that Sigma is symmetric and positive definite, we can actually do better than the results above. There are some good routines from BLAS and LAPACK, available through SciPy, that work with symmetric positive-definite matrices. Here are two faster versions.

    import scipy.linalg as la
    from scipy.linalg.fblas import dsymm   # in newer SciPy: scipy.linalg.blas.dsymm

    def mahalanobis_sqdist5(x, mean, Sigma):
        xdiff = x - mean
        Sigma_inv = la.inv(Sigma)
        # dsymm is the BLAS symmetric matrix-matrix product.
        return np.einsum('...i,...i->...', dsymm(1., Sigma_inv, xdiff.T).T, xdiff)

    from scipy.linalg.flapack import dposv   # in newer SciPy: scipy.linalg.lapack.dposv

    def mahalanobis_sqdist6(x, mean, Sigma):
        xdiff = x - mean
        # dposv solves Sigma @ y = xdiff.T via a Cholesky factorization;
        # [1] picks the solution out of the (factor, solution, info) result.
        return np.einsum('...i,...i->...', xdiff, dposv(Sigma, xdiff.T)[1].T)
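For comparison, the Cholesky-based approach mentioned above could look something like the following. This is a minimal sketch using SciPy's cho_factor/cho_solve; the name mahalanobis_sqdist_chol is mine, not from the other answer:

    from scipy.linalg import cho_factor, cho_solve

    def mahalanobis_sqdist_chol(x, mean, Sigma):
        xdiff = x - mean
        # Factor Sigma once (Cholesky), then solve for all rows in one call.
        factor = cho_factor(Sigma)
        return np.einsum('...i,...i->...', xdiff, cho_solve(factor, xdiff.T).T)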

The first of these, mahalanobis_sqdist5, still inverts Sigma. If you pre-compute the inverse and reuse it, it is much faster (the 1000x1000 case takes 35.6 ms on my machine with the pre-computed inverse); a sketch of that variant is below. I also used einsum to take the product and then sum along the last axis. This ended up being marginally faster than something like (A * B).sum(axis=-1).
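A minimal sketch of that pre-computed variant (mahalanobis_sqdist5_pre is a hypothetical name; it is just mahalanobis_sqdist5 with the inversion hoisted out to the caller):

    def mahalanobis_sqdist5_pre(x, mean, Sigma_inv):
        xdiff = x - mean
        return np.einsum('...i,...i->...', dsymm(1., Sigma_inv, xdiff.T).T, xdiff)

    # Sigma_inv = la.inv(Sigma)   # compute once, reuse for every batch of rows

These two functions, mahalanobis_sqdist5 and mahalanobis_sqdist6, give the following timings: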

First test case:

    10000 loops, best of 3: 55.3 µs per loop
    100000 loops, best of 3: 14.2 µs per loop

Second test case:

    10000 loops, best of 3: 121 µs per loop
    10000 loops, best of 3: 79 µs per loop

Third test case: