I need to implement a Gaussian log-likelihood loss function in TensorFlow, but I am not sure whether what I wrote is correct. I think this is the correct definition of the loss function.
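For reference, the density I am basing this on is the standard univariate Gaussian pdf:

```latex
p(x \mid \mu, \sigma) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)
```

so the per-element negative log-likelihood would be $-\log p(x \mid \mu, \sigma) = \tfrac{1}{2}\log(2\pi\sigma^2) + \tfrac{(x-\mu)^2}{2\sigma^2}$.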

I went around implementing it like this:

```python
import numpy as np
import tensorflow as tf

two_pi = 2 * np.pi

def gaussian_density_function(x, mean, stddev):
    stddev2 = tf.pow(stddev, 2)
    z = tf.multiply(two_pi, stddev2)
    z = tf.pow(z, 0.5)
    arg = -0.5 * (x - mean)
    arg = tf.pow(arg, 2)
    arg = tf.div(arg, stddev2)
    return tf.divide(tf.exp(arg), z)

mean_x, var_x = tf.nn.moments(dae_output_tensor, [0])
stddev_x = tf.sqrt(var_x)
loss_op_AE = -gaussian_density_function(inputs, mean_x, stddev_x)
loss_op_AE = tf.reduce_mean(loss_op_AE)
```

I want to use this as the loss function for an autoencoder; however, I am not sure this implementation is correct, since I get NaN out of loss_op_AE.
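For sanity-checking, here is a minimal plain-NumPy version of the density I am trying to compute (just the textbook univariate Gaussian pdf, evaluated element-wise), which I can use to compare against the TensorFlow values on small inputs:

```python
import numpy as np

def gaussian_pdf(x, mean, stddev):
    # Textbook univariate Gaussian density.
    # Normalizing constant: sqrt(2 * pi * sigma^2).
    z = np.sqrt(2 * np.pi * stddev ** 2)
    # Exponent: -0.5 * (x - mean)^2 / sigma^2
    # (the -0.5 multiplies the squared difference; it is not
    # itself inside the square).
    arg = -0.5 * (x - mean) ** 2 / stddev ** 2
    return np.exp(arg) / z
```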

EDIT: I also tried using:

```python
mean_x, var_x = tf.nn.moments(autoencoder_output, axes=[1, 2])
stddev_x = tf.sqrt(var_x)
dist = tf.contrib.distributions.Normal(mean_x, stddev_x)
loss_op_AE = -dist.pdf(inputs)
```

and I get the same NaN values.