
Archive for the ‘Uncategorized’ Category

Guest Post: DataRescue-Boston@MIT Wrap up

March 2, 2017

Alex Chassanoff, a Postdoctoral Fellow in the Program on Information Science, contributes this detailed wrap-up of the recent Data Rescue Boston event that she helped organize.

 

Data Rescue Boston@MIT Wrap up

Written by event organizers:

Alexandra Chassanoff

Jeffrey Liu

Helen Bailey

Renee Ball

Chi Feng

 

On Saturday, February 18th, the MIT Libraries and the Association of Computational Science and Engineering co-hosted a day-long Data Rescue Boston hackathon at Morss Hall in the Walker Memorial Building.  Jeffrey Liu, a Civil and Environmental Engineering graduate student at MIT, organized the event as part of an emerging North American movement to engage communities locally in safeguarding potentially vulnerable federal research information.  Since January, Data Rescue events have been springing up at libraries across the country, largely through the combined organizing efforts of Data Refuge and the Environmental Data and Governance Initiative.

 

The event was sponsored by MIT Center for Computational Engineering, MIT Department of Civil and Environmental Engineering, MIT Environmental Solutions Initiative, MIT Libraries, MIT Graduate Student Council Initiatives Fund, and the Environmental Data and Governance Initiative.

Here are some snapshot metrics from our event:

# of Organizers: 8
# of Volunteers: ~15
# of Guides: 9
# of Participants: ~130
# URLs researched: 200
# URLs harvested: 53
# GiB harvested: 35
# URLs seeded: 3300 at event (~76000 from attendees finishing after event)
# Agency Primers started: 19
# Cups of Coffee: 300
# Burritos: 120
# Bagels: 450
# Pizzas: 105

Goal 1. Process data

MIT’s data rescuers processed an amount of data comparable to other similarly sized events through the seeding and harvesting phases of data rescue.  For reference, Data Rescue San Francisco researched 101 URLs and harvested 25 GB of data at their event.  Data Rescue DC, a two-day event that also included a bagging/describing track (which we did not have), harvested 20 GB of data, seeded 4,776 URLs, bagged 15 datasets, and described 40 datasets.
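For readers unfamiliar with the mechanics of “seeding,” the sketch below shows roughly what nominating a URL for web archiving can look like in code, using the Internet Archive’s public Save Page Now endpoint. This is an illustrative sketch only, not the nomination workflow used at the event, and the example URL is a placeholder.

    import requests

    def seed_url(url):
        """Ask the Internet Archive's Save Page Now service to capture a page."""
        resp = requests.get("https://web.archive.org/save/" + url, timeout=120)
        resp.raise_for_status()
        # The snapshot location is usually reported in the Content-Location header.
        return resp.headers.get("Content-Location", resp.url)

    # Illustrative URL only; real events worked from curated nomination lists.
    print(seed_url("https://www.epa.gov/"))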

Goal 2. Expand scope

Another goal of our event was to explore new workflows for expanding efforts beyond the existing focus on federal agency environmental and climate data.  Toward that end, we piloted a new track called Surveying, which we used to identify and describe programs, datasets, and documents at federal agencies still in need of agency primers.  We were lucky enough to have domain experts on hand to assist with these efforts.  In total, we were able to begin expansion efforts for the Department of Justice, the Department of Labor, Health and Human Services, and the Federal Communications Commission.

Goal 3: Engage and build community

Attendees at our event spanned age groups, occupations, and technical abilities.  Participants included research librarians, concerned scientists, and expert undergraduate hackers; according to the national developers of the Data Rescue archiving application, MIT had the largest “tech-tel” of any event thus far.  As part of the Storytelling aspect of Data Rescue events, we captured profiles for twenty-seven of our attendees.  Additionally, we created Data Use Stories that describe how researchers use specific data sets from the National Water Information System (USGS), the Alternative Fuels Data Center (DOE), and the Global Historical Climate Network (NOAA).  These stories let us communicate how these data sets are used to better understand our world, as well as to make decisions that affect our everyday lives.

The hackathon at MIT was the second event hosted by Data Rescue Boston, which now hosts weekly working groups every Thursday at MIT to compile tools and documentation that improve workflows, identify vulnerable data sets, and create resources to further these efforts.

Future Work

Data rescue events continue to gather steam, with eight major national events planned over the next month.  The next DataRescue Boston event will be held at Northeastern on March 24th. A dozen volunteers and attendees from the MIT event have already signed up to help organize workshops and efforts at the Northeastern event.

Press Coverage of our Event:

http://gizmodo.com/rescuing-government-data-from-trump-has-become-a-nation-1792582499

https://thetech.com/2017/02/22/datarescue-students-collaborate-vital

https://medium.com/binj-reports/saving-science-one-dataset-at-a-time-389c7014199c#.lgrlkca9f



Guest Post: Alex Chassanoff on Building A Model for Software Curation

January 21, 2017

Alex Chassanoff, a Postdoctoral Fellow in the Program on Information Science, introduces a series of posts on software curation.


Building A Model for Software Curation:

An Introductory Post

 

In October 2016, I began working at the MIT Libraries as a CLIR/DLF Postdoctoral Fellow in Software Curation. CLIR began offering postdoctoral fellowships in data curation in 2012; however, three others and I were part of the first cohort conducting research in the area of software curation.  At our fellowship seminar and training this summer, the four of us joked about not having any idea what we would be doing (and Google wasn’t much help). Indeed, despite years of involvement in digital curation, I was unsure of what it might mean to curate software. As has been well documented in the library and archival science community, curation of data means many different things to many different people.  Add in the term “software” and you increase the complexity.

At MIT Libraries, I have had the good fortune of working with two distinguished experts in library research: Nancy McGovern, the Director of the Digital Preservation Program, and Micah Altman, the Director of Research.  This blog post describes the first phase of our work together in defining a research agenda for software curation as an institutional asset.

Defining Scope

As we began to suss out possible research objectives and assorted activities, we found ourselves circling back to four central questions, each of which split into associated sub-questions.

  • What is software? What is the purpose and function of software? What does it mean to curate software? How do these practices differ from preservation?
  • When do we curate software? Is it at the time of creation? Or when it is acquired by an institution?
  • Why do institutions and researchers curate software?
  • Who is institutionally responsible for curating software and for whom are we curating software?

Developing Focus and Purpose

We also began to outline the types of exploratory research questions we might ask depending on the specific purpose and entities we were creating a model for (see Table 1 below). Of course, these are only some of the entities that we could focus on; we could also broaden our scope to include research questions of interest to software publishers, software journals, or funders interested in software curation.

 

Entity: Research library
  • Purpose (libraries/archives): What does a library need to safeguard and preserve software as an asset? How are other institutions handling this? How are funding agencies considering research on software curation?
  • Purpose (MIT-specific): What are the MIT Libraries’ existing and future needs related to software curation?

Entity: Software creator
  • Purpose (libraries/archives): What are the best practices software creators should adopt when creating software? How are software creators depositing their software, and how are journals recommending they do this?
  • Purpose (MIT-specific): What are the individual needs and existing practices of software creators served by the MIT Libraries?

Entity: Software user
  • Purpose (libraries/archives): What are the different kinds of reasons why people may use software? What are the conditions for use? What are the specific curation practices we should implement to make software usable for this community?
  • Purpose (MIT-specific): What do individuals served by the MIT Libraries need to be able to reuse software?

Table 1: Potential purpose(s) of research by entity

Importantly, we wanted to adopt an agile research approach that considered software as an artifact, rather than (simply) as an outcome to be preserved and made accessible.  Curation in this sense might seek to answer ontological questions about software as an entity with significant characteristics at different levels of representation.  Certainly, digital object management approaches that emphasize documentation of significant properties or characteristics are long-standing in the literature.  At the same time, we wanted our approach to address essential curatorial activities (what Abby Smith termed “interventions”) that help ensure digital files remain accessible and usable. [1]  We returned to our shared research vision: to devise a model for software curation strategies that support research relying on the creation, use, reuse, and study of software.

Statement of Research Objectives and Working Definitions

Given the preponderance of definitions for curation and the wide-ranging implications of curating for different purposes and audiences, we thought it would be essential for us to identify and make clear our particular interests.  We developed the following statement to best describe our goals and objectives:

Libraries and archives are increasingly tasked with responsibilities related to the effective long-term preservation and curation of software.  The purpose of our work is to investigate and make recommendations for strategies that institutions can adopt for managing software as complex digital objects across generations of technology.

We also developed the following working definition of software curation for use in our research:

“Software curation encompasses the active practices related to the creation, acquisition, appraisal and selection, description, transformation, preservation, storage, and dissemination/access/reuse of software over short and long periods of time.”
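To make a couple of the activities in this definition (description and preservation) a bit more concrete, here is a minimal sketch of the kind of descriptive and fixity metadata a curator might capture for a software artifact. The fields and the file name are illustrative assumptions, not a standard schema and not the model we are developing.

    import datetime
    import hashlib
    import json
    import platform

    def describe_software(path, name, version):
        """Capture a minimal curation record: a fixity checksum plus basic context."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return {
            "name": name,                      # human-readable name of the software
            "version": version,                # version string reported by the creator
            "sha256": digest.hexdigest(),      # fixity value for later integrity checks
            "captured": datetime.datetime.utcnow().isoformat() + "Z",
            "capture_environment": platform.platform(),
        }

    # Illustrative use with a hypothetical file name.
    print(json.dumps(describe_software("analysis.py", "analysis-scripts", "0.1"), indent=2))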

What’s Next

The next phase of our research involves formalizing our research approach through the evaluation, selection, and application of relevant models (such as the OAIS Reference Model) and ontologies (such as the SWO). We are also developing different scenarios to establish the roles and responsibilities bound up in software creation, use, and reuse. In addition to reporting on the status of our project, you can expect to read blog posts about both the philosophical and practical implications of curating software in an academic research library setting.

Notes

[1] In the seminal collection Authenticity in a Digital Environment, Abby Smith noted that “We have to intervene continually to keep digital files alive. We cannot put a digital file on a shelf and decide later about preservation intervention. Storage means active intervention.” See: Abby Smith (2000), “Authenticity in Perspective,” in Authenticity in a Digital Environment. Washington, DC: Council on Library and Information Resources.


Guest Post: Zachary Lizee on Digital Literacy and Standards

December 13, 2016

Zachary Lizee, a Graduate Research Intern in the Program on Information Science, reflects on his investigations into information standards and suggests how libraries can reach beyond local instruction in digital literacy to scalable instruction in digital citizenship.

21st-Century Libraries, Standards Education, and Socially Responsible Information-Seeking Behavior

Standards and standards development frame, guide, and normalize almost all areas of our lives.  Standards in IT govern interoperability between a variety of devices and platforms, standardized production of machine parts allows uniform repair and reproduction, and standardization in fields like accounting, health care, and agriculture promotes best industry practices that emphasize safety and quality control.  Information standards like OpenDocument allow digital information to be stored and processed by most types of software, ensuring that the data remains recoverable in the future.[1]  Standards reflect the shared values, aspirations, and responsibilities we as a society project upon each other and our world.

Engineering and other innovative entrepreneurial fields need an awareness of information standards and standards development to ensure that the results of research, design, and development in these areas have the most positive net outcome for our world at large, as illustrated by this analysis of healthcare information standards from HIMSS, a professional organization that works to shape information standards in the healthcare IT field:

In healthcare, standards provide a common language and set of expectations that enable interoperability between systems and/or devices. Ideally, data exchange schema and standards should permit data to be shared between clinician, lab, hospital, pharmacy, and patient regardless of application or application vendor in order to improve healthcare delivery. [2]

As critical issues regarding information privacy multiply, standards development organizations and interested stakeholders are taking an active interest in creating and maintaining standards that regulate how personal data is stored, transferred, and used, with both the public interest and existing legal frameworks in mind.[3]

Libraries have traditionally been centers of expertise and access for information collection, curation, dissemination, and instruction, and the standards governing how digital information is produced, used, governed, and transmitted are evolving rapidly with new technologies.[4]  Libraries are participating in the processes of generating information standards to ensure that patrons can freely and safely access information.  For instance, the National Information Standards Organization is developing information standards to address patron privacy issues in library data management systems:

The NISO Privacy Principles, available at http://www.niso.org/topics/tl/patron_privacy/, set forth a core set of guidelines by which libraries, systems providers and publishers can foster respect for patron privacy throughout their operations.  The Preamble of the Principles notes that, ‘Certain personal data are often required in order for digital systems to deliver information, particularly subscribed content.’ Additionally, user activity data can provide useful insights on how to improve collections and services. However, the gathering, storage, and use of these data must respect the trust users place in libraries and their partners. There are ways to address these operational needs while also respecting the user’s rights and expectations of privacy.[5]

This effort by NISO (which has librarians on the steering committee) illustrates how libraries engage in outreach and advocacy that is also in concert with the ALA’s Code of Ethics, which states that libraries have a duty to protect patrons’ rights to privacy and confidentiality regarding their information-seeking behavior.  Libraries and librarians have a long tradition of engaging in social responsibility for their patrons and the community at large.

Although libraries are sometimes involved, most information standards are created by engineers working in corporate settings, or are considerably influenced by the development of products that become de facto models.  Most students leave the university without understanding what standards are, how they are developed, and what potential social and political ramifications advancements in the engineering field can have on our world.[6]

There is a trend in the academic and professional communities to foster greater understanding about what standards are, why they are important, and how they relate to influencing and shaping our world.[7]  Understanding the relevance of standards will be an asset that employers in the engineering fields will value and look for.  Keeping informed about the most current standards can drive innovation and increase the market value of an engineer’s research and design efforts.[8]

As informational hubs, libraries have a unique opportunity to participate in developing information literacy regarding standards and standards development.  Using standards instruction as a vehicle for ideas about socially responsible research and innovation, librarians can emphasize the net positive effect of standards and ethics awareness for the individual student and the world at large.

The emergence of MOOCs creates an opportunity for librarians to reach a large audience to instruct patrons in information literacy in a variety of subjects. MOOCs can have a number of advantages when it comes to being able to inform and instruct a large number of people from a variety of geographic locations and across a range of subject areas.[9]

For example, a subject-specialist librarian for an engineering department at a university could work with engineering faculty to develop a MOOC that outlines the issues, facts, and procedures surrounding standards and standards development, aiding the faculty in teaching standards education.  Together, librarians and subject experts could develop instruction on the roles that standards and socially responsible behavior play in the field of engineering.

Students who learn early in their careers why standards are an integral element of engineering and related fields have the potential to produce influential ideas, products, and programs that could have positive and constructive effects for society.  Engineering endeavors to design products, methodologies, and other technologies that can have a positive impact on our world.  Standards education in engineering fields can produce students with a keen understanding of human dignity, human justice, overall human welfare, and a sense of global responsibility.

Our world has a number of challenges: poverty, oppression, political and economic strife, environmental issues, and a host of other dilemmas that socially responsible engineers and innovators could address.  Educating engineers and innovators about standards and socially responsible behavior can shape future corporate responsibility, ethical and humanitarian conduct, and altruistic technical research and development, which in turn yields a net positive result for the individual, society, and the world.


Notes:

[1] OASIS, “OASIS Open Document Format for Office Applications TC,” <https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=office>

[2] HIMSS, “Why do we need standards?,” <http://www.himss.org/library/interoperability-standards/why-do-we-need-standards>

[3] Murphy, Craig N. and JoAnne Yates, The International Organization for Standardization (ISO): Global governance through voluntary consensus, London and New York: Routledge, 2009.

[4] See Opening Standards: The Global Politics of Interoperability, edited by Laura DeNardis, Cambridge, Massachusetts: MIT Press, 2011.

[5] “NISO Releases a Set of Principles to Address Privacy of User Data in Library, Content-Provider, and Software-Supplier Systems,” NISO,  <http://www.niso.org/news/pr/view?item_key=678c44da628619119213955b867838b40b6a7d96>

[6] “IEEE Position Paper on the Role of Technical Standards in the Curriculum of Academic Programs in Engineering, Technology and Computing,” IEEE,  <https://www.ieee.org/education_careers/education/eab/position_statements.html>

[7] Northwestern Strategic Standards Management, <http://www.northwestern.edu/standards-management/>

[8] “Education about standards,” ISO, <http://www.iso.org/iso/home/about/training-technical-assistance/standards-in-education.htm>

[9] “MOOC Design and Delivery: Opportunities and Challenges,” Current Issues in Emerging eLearning, Vol. 3, Issue 1 (2016) <http://scholarworks.umb.edu/ciee/?utm_source=scholarworks.umb.edu%2Fciee%2Fvol2%2Fiss1%2F6&utm_medium=PDF&utm_campaign=PDFCoverPages>


Making Decisions in a World Awash in Data: We’re going to need a different boat: Comments on Anthony Scriffignano’s Talk

December 8, 2016

Dr. Anthony Scriffignano, who is SVP/Chief Data Scientist at Dun and Bradstreet, gave this talk on Making Decisions in a World Awash in Data: We’re going to need a different boat as part of the Program on Information Science Brown Bag Series.

In the talk, illustrated by the slides below, Scriffignano argues that the massive collection of ‘unstructured’ data enables a wide set of potential inferences about complex, changing relationships.  At the same time, he notes that it is increasingly easy to gather enough information to take action while lacking enough information to form good judgments, and that understanding the context in which data is collected and flows is essential to developing such judgments.

Scriffignano summarizes his talk in the following abstract:

I explore some of the ways in which the massive availability of data is changing, and the types of questions we must ask in the context of making business decisions.  Truth be told, nearly all organizations struggle to make sense out of the mounting data already within the enterprise.  At the same time, businesses, individuals, and governments continue to try to outpace one another, often in ways that are informed by newly-available data and technology, but just as often using that data and technology in alarmingly inappropriate or incomplete ways.  Multiple “solutions” exist to take data that is poorly understood, promising to derive meaning that is often transient at best.  A tremendous amount of “dark” innovation continues in the space of fraud and other bad behavior (e.g. cyber crime, cyber terrorism), highlighting that there are very real risks to taking a fast-follower strategy in making sense out of the ever-increasing amount of data available.  Tools and technologies can be very helpful or, as Scriffignano puts it, “they can accelerate the speed with which we hit the wall.”  Drawing on unstructured, highly dynamic sources of data, fascinating inference can be derived if we ask the right questions (and maybe use a bit of different math!).  This session will cover three main themes: The new normal (how the data around us continues to change), how are we reacting (bringing data science into the room), and the path ahead (creating a mindset in the organization that evolves).  Ultimately, what we learn is governed as much by the data available as by the questions we ask.  This talk, both relevant and occasionally irreverent, will explore some of the new ways data is being used to expose risk and opportunity and the skills we need to take advantage of a world awash in data.

This covers a broad scope, and Dr. Scriffignano expands extensively on these and other issues in his blog, which is well worth reading.

Dr. Scriffignano’s talk raised a number of interesting provocations. The talk claims, for example, that:

On data.

  • No data is real-time — there are always latencies in measurement, transmission, or analysis.
  • Most data is worthless — but there remains a tremendous number of useful signals in data that we don’t understand.
  • Eighty-five percent of data collected today is ‘unstructured’. And ‘unstructured’ data is really data that has structure that we do not yet understand.

On using data.

  • Unstructured data has the potential to support many unanticipated inferences. An example (which Scriffignano calls a “data-bubble”) is a set of crowd-sourced photos of recurring events: one can find photos taken at different times but which show the same location from the same perspective. Despite being convenience samples, they permit new longitudinal comparisons from which one could extract signals of fashion, attention, technology use, attitude, etc., and big data collection has created qualitatively new opportunities for inference.
  • When collecting and curating data we need to pay close attention to decision-elasticity: how different would our information have to be to change our optimal action?  In designing a data curation strategy, one needs to weigh the opportunity costs of obtaining and curating data against the potential to affect decisions (the toy calculation after this list illustrates the idea).
  • Increasingly, big data analysis raises ethical questions.  Some of these questions arise directly: what are the ethical expectations on use of ‘new’ signals we discover can be extracted from unstructured data?  Others arise through the algorithms we choose, how they introduce biases, and how we even understand what algorithms do, especially as the use of artificial intelligence grows. Scriffignano’s talk gives as an example recent AI research in which two algorithms developed their own private encryption scheme.
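The toy calculation below (with made-up numbers) illustrates the decision-elasticity point: better data is only worth acquiring if it could plausibly change the chosen action, and its value is bounded by the expected value of perfect information.

    # Toy value-of-information calculation; all numbers are invented for illustration.
    # Decision: act (A) or hold (B); payoffs depend on whether an unknown state is "high" or "low".
    p_high = 0.6  # prior belief that the state is "high"
    payoff = {
        "A": {"high": 100, "low": -40},
        "B": {"high": 10, "low": 10},
    }

    def expected_value(action, p):
        return p * payoff[action]["high"] + (1 - p) * payoff[action]["low"]

    best_without_data = max(expected_value(a, p_high) for a in payoff)

    # With perfect information we could pick the best action in each state.
    ev_with_perfect_info = (
        p_high * max(payoff[a]["high"] for a in payoff)
        + (1 - p_high) * max(payoff[a]["low"] for a in payoff)
    )

    value_of_information = ev_with_perfect_info - best_without_data
    print(best_without_data, ev_with_perfect_info, value_of_information)  # 44.0 64.0 20.0
    # If curating the additional data costs more than this bound, it cannot pay off.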

This is directly relevant to the future of research and the future of research libraries.  Research will increasingly rely on evidence sources of these types, and will increasingly need to access, discover, and curate this evidence.  Our society will increasingly be shaped by this information, and by how we choose to engineer and govern its collection and use.  The private sector is pushing ahead fast in this area, and will no doubt generate many innovative data collections and algorithms.  Engagement from university scholars, researchers, and librarians is vital to ensure that society understands these new creations; is able to evaluate their reliability and bias; and has durable and equitable access to them, both to provide accountability and to support important discoveries that are not easily monetized.  For those interested in this topic, the Program on Information Science has published reports and articles on big data inference and ethics.


The Open Access Network: Comments on Rebecca Kennison’s Talk

October 29, 2016

Rebecca Kennison, who is the Principal of K|N Consultants, the co-founder of the Open Access Network, and was the founding director of the Center for Digital Research and Scholarship, gave this talk on Come Together Right Now: An Introduction To The Open Access Network as part of the Program on Information Science Brown Bag Series.

In the talk, illustrated by the slides below, Kennison argues that current models of OA publishing based on cost-per-unit are neither scalable nor sustainable.  She further argues that a sustainable model must be based on continuing regular voluntary contributions from research universities.  

In her abstract, Kennison summarizes as follows:

Officially launched just over a year ago, the Open Access Network (OAN) offers a transformative, sustainable, and scalable model of open access (OA) publishing and preservation that encourages partnerships among scholarly societies, research libraries, and other partners (e.g., academic publishers, university presses, collaborative e-archives) who share a common mission to support the creation and distribution of open research and scholarship and to encourage more affordable education, which can be a direct outcome of OA publishing. Our ultimate goal is to develop a collective funding approach that is fair and open and that fully sustains the infrastructure needed to support the full life-cycle for communication of the scholarly record, including new and evolving forms of research output. Simply put, we intend to Make Knowledge Public.

Kennison’s talk summarizes the argument in her 2014 paper with Lisa Norberg: A Scalable and Sustainable Approach to Open Access Publishing and Archiving for Humanities and Social Sciences. Those intrigued by these arguments may find a wealth of detail in the full paper.

Kennison argues that this form of network would offer value to three groups of stakeholders:

  • For institutions and libraries: advance research and scholarship, lower the cost of education, and support lifelong learning.
  • For scholarly societies and university presses: ensure revenue, sustain operations, and support innovation.
  • For individuals, foundations, and corporations: provide wide access to research and scholarship to address societal challenges, support education, and grow the economy.

The Program on Information Science has previously written on the information economics of the commons in Information Wants Someone Else to Pay For It.  Two critical questions posed by an economic analysis of the OAN are, first: what is the added value to the individual contributor that they would not obtain unless they individually contribute? (Note that this is different from the group value above, since any stakeholder gets those values if the OAN exists, whether or not they contribute to it.)  Second, under what conditions does the approach lead to the right amount of information being produced?  For example, both market-based solutions and purely altruistic solutions to producing knowledge outputs yield something; they just don’t yield anything close to the socially optimal level of knowledge production and use. What reasons do we have to believe that the fee structure of the OAN comes closer?

In addition, Kennison discussed the field of linguistics as a prototype. It is a comparatively small discipline (thousands of researchers), and its output is concentrated in approximately 60 journals.  Notably, a number of high-profile departments recently changed their tenure and promotion policies to recognize the OA journal Glossa as the equivalent of the top journal Lingua, after the latter’s editorial board departed in protest to found it.

This is a particularly interesting example because successful management of knowledge commons is often built around coherent communities. For commons management to work, as Ostrom’s work shows, behavior must be reliably observable within a community, and the community must be able to impose its own effective and graduated sanctions and determine its own rules for doing so.  I conjecture that particular technical and/or policy-based solutions to knowledge commons management (call these “knowledge infrastructure”) have the potential to scale when three conditions hold: (1) the knowledge infrastructure addresses a vertical community that includes an interdependent set of producers and consumers of knowledge; (2) the approach provides substantial incentives for individuals in that vertical community to contribute, while providing public goods both to that community and to a larger community; and (3) the approach is built upon community-specific extensions of more general-purpose infrastructure.


Issues in Curating the Open Web at Scale: Comments on Gary Price’s Talk

October 24, 2016

Gary Price, who is chief editor of InfoDocket, contributing editor of Search Engine Land, co-founder of Full Text Reports and who has worked with internet search firms and library systems developers alike, gave this talk on Issues in Curating the Open Web at Scale as part of the Program on Information Science Brown Bag Series.

In the talk, illustrated by the slides below, Price argued that libraries should engage more aggressively with content on the open web (i.e., material found through Google). He further argued that the traditional methods and knowledge librarians use to curate print collections may be usefully applied to open web content.

Price has been a leader in developing and observing web discovery, as a director at Ask.com and as author of the first book to cogently summarize the limitations of web search: The Invisible Web: Uncovering Information Sources Search Engines Can’t See.  The talk gave a whirlwind tour of the history of curation of the open web, and noted the many early efforts aimed at curating resource directories that withered away with the ascent of Google.

In his abstract, Price summarizes as follows:

Much of the web remains invisible: resources are undescribed, unindexed, or simply buried (as many people rarely look past the first page of Google searches) or are unavailable from traditional library resources. At the same time many traditional library databases pay little attention to quality content from credible sources accessible on the open web.

How do we build collections of quality open-web resources (i.e. documents, specialty databases, and multimedia) and make them accessible to individuals and user groups when and where they need it?

This talk reflects on the emerging tools for systematic programmatic curation; the legal challenges to open-web curation; long term access issues, and the historical challenges to building sustainable communities of curation.

Across his talk, Price stressed three arguments.

First, much of the web remains invisible: many databases and structured information sources are not indexed by Google. And although increasing amounts of structured information are indexed, most of it is behaviorally invisible, since the vast majority of people do not look beyond the first page of Google results.  Further, this behavioral invisibility is exacerbated by the decreasing support for complex search operators in open web search engines.

Second, Price argued that library curation of the open web would add value: curation would make the invisible web visible, counteract gaming of results, and identify credible sources.

Third, Price argued that a machine-assisted approach can be an effective strategy. He described how tools such as WebSite-Watcher, Archive-It, RSS aggregators, social media monitoring services, and content alerting services can be brought together by a trained curator to develop continually updated collections of content that are of interest to targeted communities of practice. He argued that familiarity with these tools and approaches should be part of the librarian’s toolkit, especially for those in liaison roles.
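As a rough illustration of how such machine-assisted curation can be assembled from generic parts, the sketch below polls a few RSS/Atom feeds and surfaces items a curator has not yet reviewed. It assumes the third-party feedparser library; the feed URLs are placeholders standing in for whatever sources a liaison librarian chooses to track.

    import feedparser  # third-party: pip install feedparser

    # Placeholder feeds; a curator would substitute sources relevant to their community.
    FEEDS = [
        "https://example.org/agency-announcements.rss",
        "https://example.org/new-datasets.atom",
    ]

    seen_ids = set()  # in practice, persist this between runs (file or small database)

    def poll_feeds():
        """Return (title, link) pairs for entries not previously seen."""
        new_items = []
        for feed_url in FEEDS:
            feed = feedparser.parse(feed_url)
            for entry in feed.entries:
                entry_id = entry.get("id") or entry.get("link")
                if entry_id and entry_id not in seen_ids:
                    seen_ids.add(entry_id)
                    new_items.append((entry.get("title", ""), entry.get("link", "")))
        return new_items

    for title, link in poll_feeds():
        print(title, link)  # hand off to review, alerting, or an archiving queue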

Similar tools are discussed in the courses we teach on professional reputation management, and I’ve found a number of them (particularly the latter three) useful as an individual professional.  More generally, I speculate that curation of the open web will become a larger part of the library mission: as we have argued in the 2015 National Agenda for Digital Stewardship, organizations rely on more information than they can directly steward.  The central problem is coordinating stakeholders around stewarding collections from which they derive common value.  This remains a deep and unsolved problem; however, efforts such as the Keepers Registry and collaborations such as the International Internet Preservation Consortium (IIPC) and the National Digital Stewardship Alliance (NDSA) are making progress in this area.


The Role of Research Funding and Policy Community in Data Citation — Rewards, Incentives, and Infrastructure

August 25, 2016

Infrastructure and practices for data citation have made substantial progress over the last decade. This increases the potential rewards for data publication and reproducible science; however, overall incentives remain relatively weak for many researchers.

This blog post summarizes a presentation given at the National Academies of Sciences as part of the Data Citation Workshop: Developing Policy And Practice.  The slides from the talk are embedded below:

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.


Principles

Academic researchers as a class are drawn to research and scholarship through an interest in puzzle-solving, but they are also substantially incented by recognition and money.  Typically, these incentives are shaped and channeled through the processes and institutions of tenure and review; publication; grants, awards, and prizes; industry consulting; and professional collaboration and mentoring. [1]

Citations have been described as academic “currency”, and while this is not literally true, they are a particularly visible form of recognition in the academy, and increasingly tied to monetary incentives as well. [2] Thus rules, norms, and institutions that affect citation practices have a substantial potential to change incentives.

When effort is invisible, it is apt to be undervalued. Data has been the “dark matter” of the scholarly ecosystem; data citation aims to make the role of data visible.  While the citation of data is not entirely novel, there has been a concerted effort across researchers, funders, and publishers over approximately the last decade to reengineer data citation standards and tools to create more rational incentives for reusable and reproducible research. [3]  In more formal terms, the proximate aim of the current data citation movement is to make transparent the linkages between research claims, the evidence base on which those claims rest, and the contributors who are responsible for that evidence base. The longer-term aim is to shift the equilibrium of incentives so that building the common scientific evidence base is rewarded in proportion to its benefit to the overall scientific community.

Progress

There has been notable progress in standards, policies, and tools for data citation since the ‘bad old days’ of 2007, which Gary King and I grimly characterized at the time [4]:

How much slower would scientific progress be if the near universal standards for scholarly citation of articles and books had never been developed? Suppose shortly after publication only some printed works could be reliably found by other scholars; or if researchers were only permitted to read an article if they first committed not to criticize it, or were required to coauthor with the original author any work that built on the original. How many discoveries would never have been made if the titles of books and articles in libraries changed unpredictably, with no link back to the old title; if printed works existed in different libraries under different titles; if researchers routinely redistributed modified versions of other authors’ works without changing the title or author listed; or if publishing new editions of books meant that earlier editions were destroyed? …

Unfortunately, no such universal standards exist for citing quantitative data, and so all the problems listed above exist now. Practices vary from field to field, archive to archive, and often from article to article. The data cited may no longer exist, may not be available publicly, or may have never been held by anyone but the investigator. Data listed as available from the author are unlikely to be available for long and will not be available after the author retires or dies. Sometimes URLs are given, but they often do not persist. In recent years, a major archive renumbered all its acquisitions, rendering all citations to data it held invalid; identical data was distributed in different archives with different identifiers; data sets have been expanded or corrected and the old data, on which prior literature is based, was destroyed or renumbered and so is inaccessible; and modified versions of data are routinely distributed under the same name, without any standard for versioning. Copyeditors have no fixed rules, and often no rules whatsoever. Data are sometimes listed in the bibliography, sometimes in the text, sometimes not at all, and rarely with enough information to guarantee future access to the identical data set. Replicating published tables and figures even without having to rerun the original experiment, is often difficult or impossible.

A decade ago, while some publishers had data transparency policies, they were routinely honored in the breach. Now, a number of high-profile journals both require that authors cite or include the data on which their publications rest and have mechanisms to enforce this. PLOS is a notable example: its Data Availability policy [5] states not only that data should be shared, but that articles should provide the persistent identifiers of shared data, and that these should resolve to well-known repositories.

A decade ago, the only major funder with an organization-wide data sharing policy was NIH [6], and that policy had notable limitations: it applied only to large grants, and the resource sharing statements it required were brief, not peer reviewed, and not monitored. Today, as Jerry Sheehan noted in his presentation on Increasing Access to the Results of Federally Funded Scientific Research: Data Management and Citation, almost all federal support for research now complies with the Holdren memo, which requires policies and data management plans “describing how they will provide for long-term preservation of, and access to, scientific data”. [7]  A number of foundation funders have adopted similar policies. Furthermore, as panelist Patricia Knezek noted, data management plans are now part of the peer review process at the National Science Foundation, and datasets may be included in the biosketches that are part of the funding application process.

A decade ago, few journals published replication data, and no high-profile journals existed that published data.  Over the last several years, the number of data journals has increased, and Nature Research launched Scientific Data — which has substantially raised the visibility of data publications.

A decade ago, tools for data citation were non-existent, and the infrastructure for general open data sharing outside of specific community collections was essentially limited to ICPSR’s publication-related archive [8] and Harvard’s Virtual Data Center [9] (which later became the Dataverse Network). Today, as panelists throughout the day noted [10], infrastructure such as CKAN, Figshare, and close to a dozen Dataverse-based archives accept open data from any field; [11] there are rich public directories of archives such as re3data; and large data citation indices, such as DataCite and the TR Data Citation Index, enable data citations to be discovered and evaluated. [12]

These new tools are critical for creating data sharing incentives and rewards.  They allow data to be shared and discovered for reuse, reuse to be attributed, and that attribution to be incorporated into metrics of scholarly productivity and impact. Moreover, much of this infrastructure exists in large part because it received substantial startup support from the research funding community.
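As one concrete example of discovery over this infrastructure, the sketch below queries DataCite’s public REST API for dataset records matching a free-text search. The endpoint and response fields follow DataCite’s published API, but treat the exact parameters and field names as assumptions to verify against current documentation.

    import requests

    def search_datacite(query, page_size=5):
        """Search DataCite's DOI registry and print DOI and title for each match."""
        resp = requests.get(
            "https://api.datacite.org/dois",
            params={"query": query, "page[size]": page_size},
            timeout=30,
        )
        resp.raise_for_status()
        for record in resp.json().get("data", []):
            attrs = record.get("attributes", {})
            titles = attrs.get("titles") or [{}]
            print(attrs.get("doi"), "-", titles[0].get("title", ""))

    search_datacite("global historical climatology network")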

Perforations

While open repositories and data citation indices enable researchers to more effectively get credit for data that is cited directly, and there is limited evidence that sharing research data is associated with higher citation rates, data sharing and citation remain quite limited. [13] As Marcia McNutt notes in her talk on Data Sharing: Some Cultural Perspectives, progress likely depends at least as much on cultural and organizational change as on technical advances.

Formally, the indexing of data citations enables citations to data to contribute to a researcher’s h-index and other measures of scholarly productivity. As speaker Dianne Martin noted in the panel on Reward/Incentive Structures, her institution (George Washington University) has begun to recognize data sharing and citation in the tenure and review process.
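For readers unfamiliar with the mechanics, the sketch below shows how an h-index is computed from a list of citation counts; once data citations are indexed, a researcher’s published datasets simply contribute additional counts to that list. The numbers are invented for illustration.

    def h_index(citation_counts):
        """Largest h such that at least h items have h or more citations each."""
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Five articles plus two published datasets (hypothetical citation counts).
    print(h_index([25, 12, 7, 6, 5, 2, 1]))  # -> 5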

Despite the substantial progress over the last decade, there is little evidence that the incorporation of data citation and publication into tenure and review is yet either systematic or widespread.  Overall, positive incentives for citing data still appear to remain relatively weak:

  1. It remains the case that data is often used without being cited.[14]
  2. Even where data is cited, most individual data publications (with notable exceptions, primarily large community databases) are neither in high-impact venues nor highly cited.  Since scientists achieve significant recognition most often through publication in “high-impact” journals, and increasingly through publishing articles that are highly cited, devoting effort to data publishing carries a high opportunity cost.
  3. Even when cited, publishing one’s data is often perceived as increasing the likelihood that others will “leapfrog” your research, and publish high-impact articles with priority. Since scientific recognition relies strongly on priority of publication, this risk is a disincentive.
  4. While data citation and publication likely strengthens reproducibility, it also makes it easier for others to criticize published work. In the absence of strong positive rewards for reproducible research, this risk may be a disincentive overall.  

 

Funders and policy-makers have the potential to do more to strengthen positive incentives. Funders should support mechanisms to assess and assign “transitive” credit, which would provide some share of the credit for publications to the other data and publications on which they rely. [15] And funders and policy-makers should support strong positive incentives for reproducible research, such as funding and explicit recognition. [16]

Thus far, much of the effort by funders, who are key stakeholders, has focused on compliance. And in general, compliance has substantial limits as a design principle:

  • Compliance generates incentives to follow a stated rule, but not generally to go beyond it and promote the values that motivated the rule.
  • Actors still need resources to comply, and as Chandler and other speakers noted on the panel on Supporting Organizations Facing the Challenges of Data Citation, compliance with data sharing is often viewed as an unfunded mandate.
  • Compliance-based incentives are prone to failure where the standards for compliance are ambiguous or conflicting.
  • Further, actors have incentives to comply with rules only when they expect that behavior can be monitored, that the rule-maker will monitor it, and that violations of the rules will be penalized.
  • Moreover, external incentives, such as compliance, can displace existing internal motivations and social norms [17], yielding a reduction in the desired behavior. Compliance regimes should therefore be designed to reinforce, rather than displace, the values the rules are meant to promote.

Journals have increased the monitoring of data transparency and sharing, primarily through policies like PLOS’s that require the author to supply, before publication, a replication data set and/or an explicit data citation or persistent identifier that resolves to data in a well-known repository. This appears to be substantially increasing compliance with journal policies that had been on the books for over a decade.

However, neither universities nor funders routinely audit or monitor compliance with data management plans.  As panelist Patricia Knezek emphasized, there are many questions about how funders will monitor compliance, how to incent compliance after the award is complete, and how responsibility for compliance is divided between the funded institution and the funded investigator.  Further, as noted in the panelists’ discussion with the workshop audience, data management plans for funded research are not made available to the public along with the abstracts, which creates a barrier to community-based monitoring and norms; scientists in the federal government are not currently subject to the same data sharing and management requirements as scientists in academia; and there is a need to support ‘convening’ organizations such as FORCE11 and the Research Data Alliance to bring multiple stakeholders to the table to align strategies on incentives and compliance.

Finally, as Cliff Lynch noted in the final discussion session of the workshop, compliance with data sharing requirements often comes into conflict with confidentiality requirements for the protection of data obtained from individuals and businesses, especially in the social, behavioral, and health sciences.  This is not a fundamental conflict  — it is possible to enable access to data without any intellectual property restrictions while still maintaining privacy. [18] However, absent common policies and legal instruments for intellectually-open but personally-confidential data, confidentiality requirements are a barrier (or sometimes an excuse) to open data.

References

[1] See for a review: Stephan PE. How economics shapes science. Cambridge, MA: Harvard University Press; 2012 Jan 15.

[2] Cronin B. The citation process. The role and significance of citations in scientific communication. London: Taylor Graham, 1984. 1984;1.

[3] Altman M, Crosas M. The evolution of data citation: from principles to implementation. IAssist Quarterly. 2013 Mar 1;37(1-4):62-70.

[4] Altman, Micah, and Gary King. “A proposed standard for the scholarly citation of quantitative data.” D-lib Magazine 13.3/4 (2007).

[5] See: http://journals.plos.org/plosone/s/data-availability

[6] See Final NIH Statement on Sharing Research Data, 2003, NOT-OD-03-032. Available from: https://grants.nih.gov/grants/guide/notice-files/NOT-OD-03-032.html

[7]  Holdren, J.P. 2013, “Increasing Access to the Results of Federally Funded Scientific Research “, OSTP. Available from: https://www.whitehouse.gov/sites/default/files/microsites/ostp/ostp_public_access_memo_2013.pdf

[8]  King, Gary. “Replication, replication.” PS: Political Science & Politics 28.03 (1995): 444-452.

[9]  Altman M. Open source software for Libraries: from Greenstone to the Virtual Data Center and beyond. IASSIST Quarterly. 2002;25.

[10]  See particularly the presentation and discussion on Tools and Connections; Supporting Organizations Facing The Challenges of Data Citation, and Reward/Incentive Structures.  

[11] See http://dataverse.org/ , http://ckan.org/ , https://figshare.com/

[12] See http://www.re3data.org/, https://www.datacite.org/, http://wokinfo.com/products_tools/multidisciplinary/dci/

[13]  Borgman CL. The conundrum of sharing research data. Journal of the American Society for Information Science and Technology. 2012 Jun 1;63(6):1059-78.

[14]  Read, Kevin B., Jerry R. Sheehan, Michael F. Huerta, Lou S. Knecht, James G. Mork, and Betsy L. Humphreys. “Sizing the Problem of Improving Discovery and Access to NIH-Funded Data: A Preliminary Study.” PloS one10, no. 7 (2015): e0132735.

[15] See Katz, D.S., Choi, S.C.T., Wilkins-Diehr, N., Hong, N.C., Venters, C.C., Howison, J., Seinstra, F., Jones, M., Cranston, K., Clune, T.L. and de Val-Borro, M., 2015. Report on the second workshop on sustainable software for science: Practice and experiences (WSSSPE2). arXiv preprint arXiv:1507.01715.

[16]  See Nosek BA, Spies JR, Motyl M. Scientific utopia II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science. 2012 Nov 1;7(6):615-31., Brandon, Alec, and John A. List. “Markets for replication.” Proceedings of the National Academy of Sciences 112.50 (2015): 15267-15268.

[17] Gneezy, U. and Rustichini, A., 2000. A fine is a price. Journal of Legal Studies, 29(1): 1-17.
