What shows up and what doesn't show up

Reading this article in The Economist about some of the problems facing scientific research, I stumbled across this intriguing paragraph:

"There also seems to be a bias towards publishing positive results. For instance, a study earlier this year found that among the studies submitted to America’s Food and Drug Administration about the effectiveness of antidepressants, almost all of those with positive results were published, whereas very few of those with negative results were. But negative results are potentially just as informative as positive results, if not as exciting."

Why do I blog this? That's certainly a problem I ran into when writing up my PhD work. It's striking to see this bias confirmed, especially when you know the value of negative results. There is clearly a tendency to prefer selling "positive" experimental results over negative ones... which, as a consequence, are described far less often.

Being interested in UX research, I also draw a parallel between this example and the sort of things we look for in user research: most of the work focuses on usage (what people do with or without technology), but it's just as important to consider what people don't do. There are surely relevant questions to ask about what shows up and what doesn't show up in field research.