Bad [data visualization] science?

I’m an avid reader of Ben Goldacre’s Bad Science blog, and equally enthusiastic for Stephen Few’s Perceptual Edge, so it’s interesting when they pick up on the same theme from different perspectives.

Ben has recently published Bad Pharma (on my reading list, but not yet digested, so I can’t comment on its efficacy… but past performance suggests an enlightening and entertaining, if alarming, read), and has an insightful explanation of why and how he wrote the book on his blog.

The [extremely summarised] gist is the need for full disclosure in medical trials; Ben was one of the founders of the AllTrials campaign.

Stephen’s latest blog entry addresses the lack of scientific rigour in the field of information visualization. His suggestion (assertion?) that many students and professors of the subject lack the basic skills necessary for scientific research is worrying, and several of the symptoms and root causes he proposes correlate with Ben’s assessment of medical trials.

I recognised the feeling of resonance when I hit point 7 of Stephen’s article:

Researchers should be encouraged to publish their work even when it fails to establish what they expected. The only failure in research is research done poorly. Findings that conflict with expectations are still valuable findings. Even poorly done research is valuable if the authors admit their mistakes and learn from them.

This is the precise point that Ben makes – all research should be published, irrespective of outcome; scientific method relies on disclosure of all results, not just selected results that favour a preferred bias. Indeed, point 1 on Stephen’s list is a beautifully succinct summary of Karl Popper’s empirical falsification.
