What happens when a study produces evidence that doesn't support a scientific hypothesis?

Scientists have a few different ways of describing this event. Sometimes the results of such a study are called 'null results'; they may also be called 'negative results'. In my opinion, both terms are useful, though I slightly prefer 'null', on the grounds that 'negative' tends to draw an unfavorable contrast with 'positive' results, whereas 'null' makes clear that these are results in their own right: evidence consistent with the null hypothesis.

Yet there's another way of talking about evidence inconsistent with a hypothesis - such results are sometimes treated as not being results at all. In this way of speaking, to "get a result" in a study means to find a positive result, while to "get no results" or "find nothing" means to find only null results - which, on this view, have no value of their own and serve only to mark the absence of (positive) findings. This 'non-result' idiom is common usage in science - at least in my experience - but in my view it's misleading and harmful.

A null result is still a result, and it contributes just as much to our knowledge of the world as a positive result does. We may be disappointed by null results, and we may feel they are less exciting than the results we hoped to find, but these are really nothing more than our own subjective responses. The view that null results aren't really results is at the root of much publication bias, and it motivates p-hacking. The only true "non"-results are results of such low quality that they are uninformative, whether due to poor experimental design or errors in data collection. These failed results may be, on the face of it, either positive or negative.
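To make the p-hacking connection concrete, here's a minimal sketch (in Python; the function name is my own invention, not a standard one) of the basic arithmetic behind it. If null results are treated as worthless, a researcher has an incentive to keep running tests until something "comes up" - but even when every hypothesis is truly null, the chance of at least one spurious "significant" finding grows rapidly with the number of tests.

```python
import random

# Probability of at least one false positive when running k independent
# tests at significance level alpha, where every null hypothesis is true.
# (Each test is "significant" by chance with probability alpha.)
def family_wise_error(alpha: float, k: int) -> float:
    return 1 - (1 - alpha) ** k

# One test at alpha = 0.05: a 5% false-positive rate, as advertised.
print(round(family_wise_error(0.05, 1), 3))   # → 0.05

# Twenty tests on purely null data: ~64% chance of a spurious "result".
print(round(family_wise_error(0.05, 20), 3))  # → 0.642

# Monte Carlo check of the same quantity: simulate many researchers,
# each running 20 null tests, and count how often at least one "hits".
random.seed(0)
trials = 100_000
hits = sum(
    any(random.random() < 0.05 for _ in range(20))
    for _ in range(trials)
)
print(round(hits / trials, 3))  # close to 0.642
```

The point of the sketch is just that discarding null results doesn't merely hide information; combined with repeated testing, it actively manufactures false positives.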