Most interpretations of p-values are sinful! The conventional usage of p-values is badly flawed, a fact that, in my opinion, calls into question the standard approaches to teaching hypothesis tests and tests of significance.

Haller and Krauss found that statistics instructors are almost as likely as students to misinterpret p-values. (Take the test in their paper and see how you do.) Steve Goodman makes a good case for discarding the conventional (mis-)use of the p-value in favor of likelihoods. The Hubbard paper is also worth a look.

Haller and Krauss. Misinterpretations of significance: A problem students share with their teachers. Methods of Psychological Research (2002) vol. 7 (1) pp. 1-20

Hubbard and Bayarri. Confusion over Measures of Evidence (p's) versus Errors (α's) in Classical Statistical Testing. The American Statistician (2003) vol. 57 (3)

Goodman. Toward evidence-based medical statistics. 1: The P value fallacy. Ann Intern Med (1999) vol. 130 (12) pp. 995-1004

Also see:

Wagenmakers, E.-J. A practical solution to the pervasive problems of p values. Psychonomic Bulletin & Review (2007) vol. 14 (5) pp. 779-804

for some clear-cut cases where even the nominally "correct" interpretation of a p-value is rendered incorrect by choices the experimenter made, such as peeking at the data and stopping collection once significance is reached (optional stopping).
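The optional-stopping problem Wagenmakers discusses is easy to demonstrate by simulation. The sketch below (my own illustration, not from any of the cited papers; the sample sizes and look schedule are arbitrary choices) draws pure-noise data, runs a one-sample t-test after every few new observations, and stops as soon as p < .05. Even though the null hypothesis is true by construction, the "test until significant" strategy rejects far more often than the nominal 5%, whereas a single test at a pre-specified sample size does not:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def optional_stopping_fpr(n_sims=1000, start_n=10, step=5, max_n=100, alpha=0.05):
    """False-positive rate when the experimenter tests after every batch of
    `step` new observations and stops as soon as p < alpha.
    The data are pure noise, so every rejection is a false positive."""
    rejections = 0
    for _ in range(n_sims):
        data = rng.standard_normal(start_n)  # H0 is true: the mean really is 0
        while True:
            if stats.ttest_1samp(data, 0.0).pvalue < alpha:
                rejections += 1
                break
            if data.size >= max_n:
                break  # give up after max_n observations
            data = np.concatenate([data, rng.standard_normal(step)])
    return rejections / n_sims

def fixed_n_fpr(n_sims=1000, n=100, alpha=0.05):
    """False-positive rate for a single test at a pre-specified sample size."""
    count = sum(
        stats.ttest_1samp(rng.standard_normal(n), 0.0).pvalue < alpha
        for _ in range(n_sims)
    )
    return count / n_sims

if __name__ == "__main__":
    print(f"fixed-n FPR:           {fixed_n_fpr():.3f}")            # near 0.05
    print(f"optional-stopping FPR: {optional_stopping_fpr():.3f}")  # well above 0.05
```

The point is that the p-value's sampling-distribution guarantee holds only under the sampling plan it was computed for; with enough interim looks, the probability of crossing p < .05 at least once under the null climbs toward 1.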

Update (2016): The American Statistical Association has issued a statement on p-values, see here. This was, in a way, a response to the "ban on p-values" issued by a psychology journal (Basic and Applied Social Psychology) about a year earlier.