
Scientific Reliability from an Informatics Perspective

It is an old saw that science is founded on reproducibility… However, the truth is that reproducibility has always been more difficult than generally assumed, even where the underlying phenomena are robust. Since Ioannidis’s PLOS article in 2005, there has been increasing attention in medical research to the issue of reproducibility, and that attention has been unprecedented in the last two years, with even the New York Times commenting on “jarring” instances of irreproducible, unreliable, or fraudulent research results.
Scientific reproducibility is most often viewed through a methodological or statistical lens and, increasingly, through a computational lens. (See, for example, our book on reproducible statistical computation.) Over the last several years, I’ve taken part in collaborations that approach reproducibility from the perspective of informatics: as a flow of information across a lifecycle that spans collection, analysis, publication, and reuse.
I had the opportunity to present a sketch of this approach at a recent workshop on reproducibility at the National Academy of Sciences, and at one of our Program on Information Science brown bag talks.
The slides from the brown bag talk discuss some definitions of reproducibility and outline a model for understanding reproducibility as an information flow:
 
(Also see these videos from the workshop on informatics approaches, and other definitions of reproducibility.)
The talk shows how reproducibility claims, as generally discussed in science, are not crisply defined, and how the same reproducibility terminology is used to refer to very different sorts of assertions about the world, experiments, and systems. I outline an approach that takes each type of reproducibility claim and asks: What are the use cases involving this claim? What does each type of reproducibility claim imply for information properties, flows, and systems? What are the proposed or potential interventions in information systems that would strengthen the claims?
For example, a set of reproducibility issues is associated with validation of results. There are several distinct use cases and claims embedded in this — one of which I label as “fact-checking” because of its similarities to the eponymous journalistic use case:
  • Use Case: A post-publication reviewer wants to establish that the published claims correspond to the analysis actually performed.
  • Reproducibility claim: Given public data identifier & analysis algorithm, an independent application of the algorithm yields a new estimate that is within the originally reported uncertainty.
  • Some potential supporting informatics claims:
    1. The instance of data retrieved via the identifier is semantically equivalent to the instance of data used to support the published claim.
    2. The analysis algorithm is robust to the choice of a reasonable alternative implementation.
    3. The implementation of the algorithm is robust to reasonable choices of execution details and context.
    4. The published direct claims about the data are semantically equivalent to a subset of the claims produced by the authors’ previous application of the analysis.
  • Some potential informatics interventions:
    • In support of claim 1:
      • Detailed provenance history for data from collection through analysis and deposition.
      • Automatic replication of direct data claims from deposited source
      • Cryptographic evidence
        (e.g., a cryptographically signed {analysis output, including a cryptographic hash of the data} checked against the {cryptographic hash of the data retrieved via the identifier}; a minimal verification sketch follows this list)
    • In support of claim 2:
      • Standard implementation, subject to community review
      • Report of results of application of implementation on standard testbed
      • Availability of implementation for inspection
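To make the “fact-checking” use case concrete, below is a minimal sketch in Python of how such a check might proceed. All of the names are hypothetical (the run_analysis function, the published hash, and the reported estimate and uncertainty would be supplied by the reviewer); the sketch is not drawn from any existing system, and only illustrates how a fixity check (claim 1) and re-estimation within the reported uncertainty fit together.

    # Minimal sketch (hypothetical names throughout) of the fact-checking
    # workflow: verify fixity of the retrieved data, re-run the analysis,
    # and compare the new estimate against the reported uncertainty.
    import hashlib

    def sha256_of(path):
        # Fixity hash of the deposited data file, computed in chunks.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def fact_check(data_path, published_hash, run_analysis,
                   reported_estimate, reported_uncertainty):
        # Claim 1: the data instance retrieved via the identifier matches the
        # instance the authors deposited (approximated here by a bit-level
        # hash comparison).
        if sha256_of(data_path) != published_hash:
            return "data instance does not match the published fixity hash"
        # Reproducibility claim: an independent application of the analysis
        # yields an estimate within the originally reported uncertainty.
        new_estimate = run_analysis(data_path)
        if abs(new_estimate - reported_estimate) <= reported_uncertainty:
            return "reproduced within the reported uncertainty"
        return "new estimate falls outside the reported uncertainty"

Note that a bit-level hash comparison is stricter than the semantic equivalence named in claim 1: two different serializations of the same data would fail the hash check while still being semantically equivalent, which is one reason detailed provenance evidence is listed alongside it.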
Overall, my conjecture is that if we wish to support reproducibility broadly in information systems, there are a number of properties and design principles that will enhance it. Within information systems, I conjecture that we should design to maintain transparency, auditability, provenance, fixity, identification, durability, integrity, repeatability, non-repudiation, and self-documentation. When designing the policies, incentives, and human interactions with these systems, we should consider barriers to entry, ease of use, support for intellectual communities of practice, personalization, credit and attribution, security, performance, sustainability, cost, and trust engineering.
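As a rough illustration of how a few of these properties might surface in a system’s data model, the sketch below (again in Python, with invented field names rather than any standard schema) records identification, fixity, non-repudiation, and provenance for a single deposit.

    # Illustrative sketch only: invented field names, not a standard schema.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ProvenanceEvent:
        timestamp: str   # e.g. an ISO 8601 timestamp
        agent: str       # who performed the step
        activity: str    # e.g. "collection", "analysis", "deposition"

    @dataclass
    class DepositRecord:
        identifier: str  # persistent identifier (identification)
        sha256: str      # hash of the deposited bytes (fixity, integrity)
        signature: str   # depositor's signature over the hash (non-repudiation)
        provenance: List[ProvenanceEvent] = field(default_factory=list)  # auditability

The remaining properties, such as durability and repeatability, along with the human-facing concerns listed above, are characteristics of the surrounding systems and policies rather than fields in any single record.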