Guest Post: Curation as Context: Software in the Stacks

April 21, 2017

Alex Chassanoff, who is a Postdoctoral Fellow in the Program on Information Science, continues a series of posts on software curation.

“Curation as Context: Software in the Stacks”

As scholarly landscapes shift, differing definitions for similar activities may emerge from different communities of practice.   As I mentioned in my previous blog post, there are many distinct terms for (and perspectives on) curating digital content depending on the setting and whom you ask [1].  Documenting and discussing these semantic differences can play an important role in crystallizing shared, meaningful understandings.  

In the academic research library world, the so-called data deluge has presented library and information professionals with an opportunity to assist scholars in the active management of their digital content [2].  Curating research output as institutional content is a relatively young, though growing, phenomenon.  Research data management (RDM) groups and services are increasingly common in research libraries, partially fueled by changes in federal grant application requirements that encourage data management planning.  In fact, according to a recent content analysis of academic library websites, 185 libraries now offer RDM services [3].  The charge for RDM groups can vary widely; tasks can range from advising faculty on issues related to privacy and confidentiality, to instructing students on potential avenues for publishing open-access research data.

As these types of services increase, many research libraries are looking to life cycle models as foundations for crafting curation strategies for digital content [4].  On the one hand, life cycle models recognize the importance of continuous care and necessary interventions that managing such content requires.  Life cycle models also provide a simplified view of essential stages and practices, focusing attention on how data flows through a continuum.  At the same time, the data flow perspective can obscure both the messiness of the research process and the complexities of managing dynamic digital content [5,6].  What strategies for curation can best address scenarios where digital content is touched at multiple times by multiple entities for multiple purposes?  

Christine Borgman notes the multifaceted role that data can play in the digital scholarship ecosystem, serving a variety of functions and purposes for different audiences.  Describing the most salient characteristics of that data may or may not serve the needs of future use and/or reuse. She writes:

These technical descriptions of “data” obscure the social context in which data exist, however. Observations that are research findings for one scientist may be background context to another. Data that are adequate evidence for one purpose (e.g., determining whether water quality is safe for surfing) are inadequate for others (e.g., government standards for testing drinking water). Similarly, data that are synthesized for one purpose may be “raw” for another. [7]

Particular data sets may be used and then reused for entirely different purposes.  In fact, enabling reuse is a hallmark objective for many current initiatives in libraries/archives.  While forecasting future use is beyond our scope, understanding more about how digital content is created and used in the wider scholarly ecosystem can prove useful for anticipating future needs.  As Henry Lowood argues, “How researchers will actually put their hands and eyes on historical software and data collections generally has been bracketed out of data curation models focused on preservation” [8].

As an example, consider the research practices and output of faculty member Alice, who produces research tools and methodologies for data analysis. If we were to document the components used and/or created by Alice for this particular research project, it might include the following:

 

  • Software program(s) for computing published results
  • Dependencies for software program(s) for replicating published results
  • Primary data collected and used in analysis
  • Secondary data collected and used in analysis
  • Data result(s) produced by analysis
  • Published journal article

 

We can envision at least two uses of this particular instantiation of scholarly output.  First, the statistical results can be verified by replicating the conditions of the analysis.  Second, the statistical approach implemented by the software program can be executed on a new input data set.  In this way, software can simultaneously serve both as an outcome to be preserved and as a methodological means to a (new) end.
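To make this concrete, the sketch below records the components listed above as a simple machine-readable manifest that supports both uses: checking that everything needed for replication is present, and re-running the same program on a new input data set. All file names, paths, and the manifest format are hypothetical illustrations rather than an MIT Libraries tool or an established packaging standard (real packaging would more likely build on something like BagIt or Research Object bundles).

```python
# Hypothetical sketch: Alice's research object recorded as a simple manifest.
# Paths and names are invented for illustration only.
import json
import subprocess
from pathlib import Path

manifest = {
    "title": "Alice's data analysis project",
    "software": ["analysis.py"],                    # program(s) computing published results
    "dependencies": ["requirements.txt"],           # pinned libraries needed for replication
    "primary_data": ["data/survey_2016.csv"],       # data collected for this study
    "secondary_data": ["data/census_extract.csv"],  # reused data from another source
    "results": ["results/tables.csv"],              # outputs reported in the article
    "publication": ["article_preprint.pdf"],        # the published journal article
}

def missing_components(m: dict) -> list:
    """List any component files named in the manifest that are not present."""
    return [path
            for value in m.values() if isinstance(value, list)
            for path in value if not Path(path).exists()]

def rerun(m: dict, new_input: str) -> None:
    """Use 1: replicate the published results; use 2: apply the method to new data."""
    subprocess.run(["python", m["software"][0], new_input], check=True)

if __name__ == "__main__":
    Path("manifest.json").write_text(json.dumps(manifest, indent=2))
    print("Missing components:", missing_components(manifest))
```

Even a minimal record like this captures the relationships among software, dependencies, data, and results that curation-as-context emphasizes, rather than treating each file as an isolated object.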

There are certain affordances in thinking about strategies for curation-as-context, outside the life cycle perspective.  Rather than emphasizing content as an outcome to be made accessible and preserved through a particular workflow, curation could instead aim to encompass the characterization of well-formed research objects, with an emphasis on understanding the conditions of their creation, production, use, and reuse.   Recalling our description of Alice above, we can see how each component of the process can be brought together to represent an instantiation of a contextually-rich research object.

Curation-as-context approaches can help us map the always-already in flux terrain of dynamic digital content.  In thinking about curating software as a complex object for access, use, and future use, we can imagine how mapping the existing functions, purposes, relationships, and content flows of software within the larger digital scholarship ecosystem may help us anticipate future use, while documenting contemporary use.  As Cal Lee writes:

Relationships to other digital objects can dramatically affect the ways in which digital objects have been perceived and experienced. In order for a future user to make sense of a digital object, it could be useful for that user to know precisely what set of surrogate representations – e.g. titles, tags, captions, annotations, image thumbnails, video keyframes – were associated with a digital object at a given point in time. It can also be important for a future user to know the constraints and requirements for creation of such surrogates within a given system (e.g. whether tagging was required, allowed, or unsupported; how thumbnails and keyframes were generated), in order to understand the expression, use and perception of an object at a given point in time [9].

Going back to our previous blog post, we can see how questions like “How are researchers creating and managing their digital content?” are essential counterparts to questions like “What do individuals served by the MIT Libraries need to be able to reuse software?”  Our project aims to produce software curation strategies at MIT Libraries that embrace Reagan Moore’s theoretical view of digital preservation, whereby “information generated in the past is sent into the future” [10].  In other words, what can we learn about software today that makes an essential contribution to meaningful access and use tomorrow?

Works Cited

[1] Palmer, C., Weber, N., Muñoz, T., and Renear, A. (2013), “Foundations of data curation: The pedagogy and practice of ‘purposeful work’ with research data”, Archives Journal, Vol. 3.

[2] Hey, T. and Trefethen, A. (2008), “E-science, cyberinfrastructure, and scholarly communication”, in Olson, G.M., Zimmerman, A., and Bos, N. (Eds), Scientific Collaboration on the Internet, MIT Press, Cambridge, MA.

[3] Yoon, A. and Schultz, T. (2017), “Research data management services in academic libraries in the US: A content analysis of libraries’ websites” (in press), College and Research Libraries.

[4] Ray, J. (2014), Research Data Management: Practical Strategies for Information Professionals, Purdue University Press, West Lafayette, IN.

[5] Carlson, J. (2014), “The use of lifecycle models in developing and supporting data services”, in Ray, J. (Ed),  Research Data Management: Practical Strategies for Information Professionals, Purdue University Press, West Lafayette, IN.

[6] Ball, A. (2010), “Review of the state of the art of the digital curation of research data”, University of Bath.

[7] Borgman, C., Wallis, J., and Enyedy, N. (2006), “Little science confronts the data deluge: Habitat ecology, embedded sensor networks, and digital libraries”, 7(1–2), 17–30, doi: 10.1007/s00799-007-0022-9. UCLA: Center for Embedded Network Sensing.

[8] Lowood, H. (2013), “The lures of software preservation”, Preserving.exe: Towards a national strategy for software preservation, National Digital Information Infrastructure and Preservation Program of the Library of Congress.

[9] Lee, C. (2011), “A framework for contextual information in digital collections”, Journal of Documentation, 67(1).

[10] Moore, R. (2008), “Towards a theory of digital preservation”, International Journal of Digital Curation, 3(1).

 


Guest Post: DataRescue-Boston@MIT Wrap up

March 2, 2017

Alex Chassanoff, who is a Postdoctoral Fellow in the Program on Information Science, contributes to this detailed wrap-up of the recent Data Rescue Boston event that she helped organize.

 

Data Rescue Boston@MIT Wrap up

Written by event organizers: Alexandra Chassanoff, Jeffrey Liu, Helen Bailey, Renee Ball, and Chi Feng.

 

On Saturday, February 18th, the MIT Libraries and the Association of Computational Science and Engineering co-hosted a day-long Data Rescue Boston hackathon at Morss Hall in the Walker Memorial Building.  Jeffrey Liu, a Civil and Environmental Engineering graduate student at MIT, organized the event as part of an emerging North American movement to engage communities locally in the safeguarding of potentially vulnerable federal research information.  Since January, Data Rescue events have been springing up at libraries across the country, largely through the combined organizing efforts of Data Refuge and the Environmental Data and Governance Initiative.

 

The event was sponsored by MIT Center for Computational Engineering, MIT Department of Civil and Environmental Engineering, MIT Environmental Solutions Initiative, MIT Libraries, MIT Graduate Student Council Initiatives Fund, and the Environmental Data and Governance Initiative.

Here are some snapshot metrics from our event:

# of Organizers: 8
# of Volunteers: ~15
# of Guides: 9
# of Participants: ~130
# URLs researched: 200
# URLs harvested: 53
# GiB harvested: 35
# URLs seeded: 3300 at event (~76000 from attendees finishing after event)
# Agency Primers started: 19
# Cups of Coffee: 300
# Burritos: 120
# Bagels: 450
# Pizzas: 105

Goal 1: Process data

MIT’s data rescuers processed an amount of data through the seeding and harvesting phases of data rescue comparable to other similarly sized events.  For reference, Data Rescue San Francisco researched 101 URLs and harvested 25 GB of data at their event.  Data Rescue DC, a two-day event that also included a bagging/describing track (which we did not have), harvested 20 GB of data, seeded 4,776 URLs, bagged 15 data sets, and described 40 data sets.

Goal 2: Expand scope

Another goal of our event was to explore creating new workflows for expanding efforts beyond the existing focus on federal agency environmental and climate data.  Toward that end, we piloted a new track called Surveying, which we used to identify and describe programs, datasets, and documents at federal agencies still in need of agency primers.  We were lucky enough to have domain experts on hand who assisted us with these efforts.  In total, we were able to begin expansion efforts for the Department of Justice, the Department of Labor, Health and Human Services, and the Federal Communications Commission.

Goal 3: Engage and build community

Attendees at our event spanned age groups, occupations, and technical abilities.  Participants included research librarians, concerned scientists, and expert undergraduate hackers; according to the national developers of the Data Rescue archiving application, MIT had the largest number of “tech-tel” participants of any event thus far.  As part of the Storytelling aspect of Data Rescue events, we captured profiles for twenty-seven of our attendees.  Additionally, we created Data Use Stories that describe how some researchers use specific data sets from the National Water Information System (USGS), the Alternative Fuels Data Center (DOE), and the Global Historical Climate Network (NOAA).  These stories let us communicate how these data sets are used to better understand our world, as well as to make decisions that impact our everyday lives.

The hackathon at MIT was the second event hosted by Data Rescue Boston, which now hosts weekly working groups every Thursday at MIT for continued engagement: compiling tools and documentation to improve workflows, identifying vulnerable data sets, and creating resources to help further efforts.

Future Work

Data rescue events continue to gather steam, with eight major national events planned over the next month.  The next DataRescue Boston event will be held at Northeastern on March 24th. A dozen volunteers and attendees from the MIT event have already signed up to help organize workshops and efforts at the Northeastern event.

Press Coverage of our Event:

http://gizmodo.com/rescuing-government-data-from-trump-has-become-a-nation-1792582499

https://thetech.com/2017/02/22/datarescue-students-collaborate-vital

https://medium.com/binj-reports/saving-science-one-dataset-at-a-time-389c7014199c#.lgrlkca9f



Guest Post: Alex Chassanoff on Building A Model for Software Curation

January 21, 2017

Alex Chassanoff, who is a Postdoctoral Fellow in the Program on Information Science, introduces a series of posts on software curation.


Building a Model for Software Curation: An Introductory Post

 

In October 2016, I began working at the MIT Libraries as a CLIR/DLF Postdoctoral Fellow in Software Curation.  CLIR began offering postdoctoral fellowships in data curation in 2012; however, three others and I were part of the first cohort conducting research in the area of software curation.  At our fellowship seminar and training this summer, the four of us joked about not having any idea what we would be doing (and Google wasn’t much help).  Indeed, despite years of involvement in digital curation, I was unsure of what it might mean to curate software.  As has been well documented in the library/archival science community, curation of data means many different things to many different people.  Add in the term “software” and you increase the complexities.

At the MIT Libraries, I have had the good fortune of working with two distinguished experts in library research: Nancy McGovern, the Director of the Digital Preservation Program, and Micah Altman, the Director of Research.  This blog post describes the first phase of our work together in defining a research agenda for software curation as an institutional asset.

Defining Scope

As we began to suss out possible research objectives and assorted activities, we found ourselves circling back to four central questions – which themselves split into associated sub-questions.

  • What is software? What is the purpose and function of software? What does it mean to curate software? How do these practices differ from preservation?
  • When do we curate software? Is it at the time of creation? Or when it becomes acquired by an institution?
  • Why do institutions and researchers curate software?
  • Who is institutionally responsible for curating software and for whom are we curating software?

Developing Focus and Purpose

We also began to outline the types of exploratory research questions we might ask depending on the specific purpose and entities we were creating a model for (see Table 1 below). Of course, these are only some of the entities that we could focus on; we could also broaden our scope to include research questions of interest to software publishers, software journals, or funders interested in software curation.

 

Entity: Research library
Purpose (Libraries/Archives): What does a library need to safeguard and preserve software as an asset? How are other institutions handling this? How are funding agencies considering research on software curation?
Purpose (MIT specific): What are the MIT Libraries’ existing and future needs related to software curation?

Entity: Software creator
Purpose (Libraries/Archives): What are the best practices software creators should adopt when creating software? How are software creators depositing their software, and how are journals recommending they do this?
Purpose (MIT specific): What are the individual needs and existing practices of software creators served by the MIT Libraries?

Entity: Software user
Purpose (Libraries/Archives): What are the different kinds of reasons why people may use software? What are the conditions for use? What are the specific curation practices we should implement to make software usable for this community?
Purpose (MIT specific): What do individuals served by the MIT Libraries need to be able to reuse software?

Table 1: Potential purpose(s) of research by entity

Importantly, we wanted to adopt an agile research approach that considered software as an artifact, rather than (simply) as an outcome to be preserved and made accessible.  Curation in this sense might seek to answer ontological questions about software as an entity with significant characteristics at different levels of representation.  Certainly, digital object management approaches that emphasize documentation of significant properties or characteristics are long-standing in the literature.  At the same time, we wanted our approach to address essential curatorial activities (what Abby Smith termed “interventions”) that help ensure digital files remain accessible and usable [1].  We returned to our shared research vision: to devise a model for software curation strategies to assist research outcomes that rely on the creation, use, reuse, and study of software.

Statement of Research Objectives and Working Definitions

Given the preponderance of definitions for curation and the wide-ranging implications of curating for different purposes and audiences, we thought it would be essential for us to identify and make clear our particular interests.  We developed the following statement to best describe our goals and objectives:

Libraries and archives are increasingly tasked with responsibilities related to the effective long-term preservation and curation of software.  The purpose of our work is to investigate and make recommendations for strategies that institutions can adopt for managing software as complex digital objects across generations of technology.

We also developed the following working definition of software curation for use in our research:

“Software curation encompasses the active practices related to the creation, acquisition, appraisal and selection, description, transformation, preservation, storage, and dissemination/access/reuse of software over short- and long- periods of time.”

What’s Next

The next phase of our research involves formalizing our research approach through the evaluation, selection, and application of relevant models (such as the OAIS Reference Model) and ontologies (such as the Software Ontology, SWO). We are also developing different scenarios to establish the roles and responsibilities bound up in software creation, use, and reuse. In addition to reporting on the status of our project, you can expect to read blog posts about both the philosophical and practical implications of curating software in an academic research library setting.

Notes

[1] In the seminal collection Authenticity in a Digital Environment, Abby Smith noted that “We have to intervene continually to keep digital files alive. We cannot put a digital file on a shelf and decide later about preservation intervention. Storage means active intervention.”  See: Abby Smith (2000), “Authenticity in Perspective”, in Authenticity in a Digital Environment, Washington, D.C.: Council on Library and Information Resources.


Guest Post: Zachary Lizee on Digital Literacy and Standards

December 13, 2016

Zachary Lizee, who is a Graduate Research Intern in the Program on Information Science, reflects on his investigations into information standards, and suggests how libraries can reach beyond local instruction on digital literacy to scalable instruction on digital citizenship.

21st Century Libraries, Standards Education, and Socially Responsible Information-Seeking Behavior

Standards and standards development frame, guide, and normalize almost all areas of our lives.  Standards in IT govern interoperability between a variety of devices and platforms, standardized production of machine parts allows uniform repair and reproduction, and standardization in fields like accounting, health care, or agriculture promotes best industry practices that emphasize safety and quality control.  Informational standards like OpenDocument allow digital information to be stored and processed by most types of software, ensuring that the data is recoverable in the future.[1]  Standards reflect the shared values, aspirations, and responsibilities we as a society project upon each other and our world.

Engineering and other innovative entrepreneurial fields need to have awareness about information standards and standards development to ensure that the results of research, design, and development in these areas have the most positive net outcome for our world at large, as illustrated by this analysis of healthcare information standards by HIMSS, a professional organization that works to affect informational standards in the healthcare IT field:

In healthcare, standards provide a common language and set of expectations that enable interoperability between systems and/or devices. Ideally, data exchange schema and standards should permit data to be shared between clinician, lab, hospital, pharmacy, and patient regardless of application or application vendor in order to improve healthcare delivery. [2]

As critical issues regarding information privacy quickly multiply, standards development organizations and interested stakeholders take an active interest in creating and maintaining standards to regulate how personal data is stored, transferred, and used, with both public interests and legal frameworks in mind.[3]

Libraries have traditionally been centers of expertise in, and access points for, information collection, curation, dissemination, and instruction.  The standards governing how digital information is produced, used, governed, and transmitted are rapidly evolving with new technologies.[4]  Libraries are participating in the processes of generating information standards to ensure that patrons can freely and safely access information.  For instance, the National Information Standards Organization is developing informational standards to address patron privacy issues in library data management systems:

The NISO Privacy Principles, available at http://www.niso.org/topics/tl/patron_privacy/, set forth a core set of guidelines by which libraries, systems providers and publishers can foster respect for patron privacy throughout their operations.  The Preamble of the Principles notes that, ‘Certain personal data are often required in order for digital systems to deliver information, particularly subscribed content.’ Additionally, user activity data can provide useful insights on how to improve collections and services. However, the gathering, storage, and use of these data must respect the trust users place in libraries and their partners. There are ways to address these operational needs while also respecting the user’s rights and expectations of privacy.[5]

This effort by NISO (which has librarians on the steering committee) illustrates how libraries engage in outreach and advocacy in concert with the ALA’s Code of Ethics, which states that libraries have a duty to protect patrons’ rights to privacy and confidentiality regarding their information-seeking behavior.  Libraries and librarians have a long tradition of engaging in social responsibility for their patrons and community at large.

Although libraries are sometimes involved, most information standards are created by engineers working in corporate settings, or are considerably influenced by the development of products that become the model.  Most students leave the university without understanding what standards are, how they are developed, and what potential social and political ramifications advancements in the engineering field can have on our world.[6]

There is a trend in the academic and professional communities to foster greater understanding about what standards are, why they are important, and how they relate to influencing and shaping our world.[7]  Understanding the relevance of standards will be an asset that employers in the engineering fields will value and look for.  Keeping informed about the most current standards can drive innovation and increase the market value of an engineer’s research and design efforts.[8]

As informational hubs, libraries have a unique opportunity to participate in developing information literacy regarding standards and standards development.  By infusing philosophies regarding socially responsible research and innovation, using standards instruction as a vehicle, librarians can emphasize the net positive effect of standards and ethics awareness for the individual student and the world at large.

The emergence of MOOCs creates an opportunity for librarians to reach a large audience to instruct patrons in information literacy in a variety of subjects. MOOCs can have a number of advantages when it comes to being able to inform and instruct a large number of people from a variety of geographic locations and across a range of subject areas.[9]

For example, a subject-specialist librarian for an engineering department at a university could work with engineering faculty to develop a MOOC that outlines the relevant issues, facts, and procedures surrounding standards and standards development, aiding the faculty in teaching standards education.  Together, librarians and subject experts could develop education on the roles that standards and socially responsible behavior play in the field of engineering.

Students that learn early in their career why standards are an integral element in engineering and related fields have the potential to produce influential ideas, products, and programs that undoubtedly could have positive and constructive effects for society.  Engineering endeavors to design products, methodologies, and other technologies that can have a positive impact on our world.  Standards education in engineering fields can produce students who have a keen understanding of social awareness about human dignity, human justice, overall human welfare, and a sense of global responsibility.

Our world has a number of challenges: poverty, oppression, political and economic strife, environmental issues, and a host of many other dilemmas socially responsible engineers and innovators could address.  The impact of educating engineers and innovators about standards and socially responsible behavior can affect future corporate responsibility, ethical and humanitarian behavior, altruistic technical research and development, which in turn yields a net positive result for the individual, society, and the world.


Notes:

[1] OASIS, “OASIS Open Document Format for Office Applications TC,” <https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=office>

[2] HIMSS, “Why do we need standards?,” <http://www.himss.org/library/interoperability-standards/why-do-we-need-standards>

[3] Murphy, Craig N. and JoAnne Yates, The International Organization for Standardization (ISO): Global governance through voluntary consensus, London and New York: Routledge, 2009.

[4] See Opening Standards: The Global Politics of Interoperability, edited by Laura DeNardis, Cambridge, Massachusetts: MIT Press, 2011.

[5] “NISO Releases a Set of Principles to Address Privacy of User Data in Library, Content-Provider, and Software-Supplier Systems,” NISO,  <http://www.niso.org/news/pr/view?item_key=678c44da628619119213955b867838b40b6a7d96>

[6] “IEEE Position Paper on the Role of Technical Standards in the Curriculum of Academic Programs in Engineering, Technology and Computing,” IEEE,  <https://www.ieee.org/education_careers/education/eab/position_statements.html>

[7] Northwestern Strategic Standards Management, <http://www.northwestern.edu/standards-management/>

[8] “Education about standards,” ISO, <http://www.iso.org/iso/home/about/training-technical-assistance/standards-in-education.htm>

[9] “MOOC Design and Delivery: Opportunities and Challenges,” Current Issues in Emerging eLearning, Vol. 3, Issue 1 (2016), <http://scholarworks.umb.edu/ciee/?utm_source=scholarworks.umb.edu%2Fciee%2Fvol2%2Fiss1%2F6&utm_medium=PDF&utm_campaign=PDFCoverPages>


Making Decisions in a World Awash in Data: We’re going to need a different boat: Comments on Anthony Scriffignano’s Talk

December 8, 2016

Dr. Anthony Scriffignano, who is SVP/Chief Data Scientist at Dun & Bradstreet, gave this talk on Making Decisions in a World Awash in Data: We’re going to need a different boat as part of the Program on Information Science Brown Bag Series.

In the talk, illustrated by the slides below, Scriffignano argues that the massive collection of ‘unstructured’ data enables a wide set of potential inferences about complex, changing relationships.  At the same time, his talk notes that it is increasingly easy to gather enough information to take action while lacking enough information to form good judgment; a further understanding of the context in which data is collected and flows is essential to developing such judgment.

Scriffignano summarizes his talk in the following abstract:

I explore some of the ways in which the massive availability of data is changing, and the types of questions we must ask, in the context of making business decisions.  Truth be told, nearly all organizations struggle to make sense out of the mounting data already within the enterprise.  At the same time, businesses, individuals, and governments continue to try to outpace one another, often in ways that are informed by newly-available data and technology, but just as often using that data and technology in alarmingly inappropriate or incomplete ways.  Multiple “solutions” exist to take data that is poorly understood, promising to derive meaning that is often transient at best.  A tremendous amount of “dark” innovation continues in the space of fraud and other bad behavior (e.g. cyber crime, cyber terrorism), highlighting that there are very real risks to taking a fast-follower strategy in making sense out of the ever-increasing amount of data available.  Tools and technologies can be very helpful or, as Scriffignano puts it, “they can accelerate the speed with which we hit the wall.”  Drawing on unstructured, highly dynamic sources of data, fascinating inference can be derived if we ask the right questions (and maybe use a bit of different math!).  This session will cover three main themes: the new normal (how the data around us continues to change), how we are reacting (bringing data science into the room), and the path ahead (creating a mindset in the organization that evolves).  Ultimately, what we learn is governed as much by the data available as by the questions we ask.  This talk, both relevant and occasionally irreverent, will explore some of the new ways data is being used to expose risk and opportunity and the skills we need to take advantage of a world awash in data.

This covers a broad scope, and Dr. Scriffignano expands extensively on these and other issues in his blog — which is well worth reading.

Dr. Scriffignano’s talk raised a number of interesting provocations. The talk claims, for example, that:

On data.

  • No data is real-time — there are always latencies in measurement, transmission, or analysis.
  • Most data is worthless — but there remains a tremendous number of useful signals in data that we don’t understand.
  • Eighty-five percent of data collected today is ‘unstructured’. And ‘unstructured’ data is really data that has structure we do not yet understand.

On using data.

  • Unstructured data has the potential to support many unanticipated inferences. An example (which Scriffignano calls a “data-bubble”) is a set of crowd-sourced photographs of recurring events — one can find photos taken at different times that show the same location from the same perspective. Despite being convenience samples, they permit new longitudinal comparisons from which one could extract signals of fashion, attention, technology use, attitude, etc. — and big data collection has created qualitatively new opportunities for inference.
  • When collecting and curating data we need to pay close attention to decision-elasticity — how different would our information have to be to change our optimal action?  In designing a data curation strategy, one needs to weigh the opportunity costs of obtaining and curating data against the potential to affect decisions (see the toy sketch after this list).
  • Increasingly, big data analysis raises ethical questions.  Some of these questions arise directly: what are the ethical expectations on use of ‘new’ signals that we discover can be extracted from unstructured data?  Others arise through the algorithms we choose — how they introduce biases — and how we even understand what algorithms do, especially as use of artificial intelligence grows. Scriffignano’s talk gives as an example recent AI research in which two algorithms develop their own private encryption scheme.
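The decision-elasticity point lends itself to a small worked example. The sketch below is my own toy illustration, not from the talk, with invented payoffs: it computes the expected value of two candidate actions under a current belief, then finds the belief threshold at which the optimal action would flip — new data is worth acquiring and curating only if it could plausibly move the decision across that threshold.

```python
# Toy illustration of decision-elasticity (invented numbers, not from the talk).

def expected_payoff(p_good: float, payoffs: dict) -> dict:
    """Expected payoff of each action given P(state is good) = p_good."""
    return {action: p_good * good + (1 - p_good) * bad
            for action, (good, bad) in payoffs.items()}

# Payoffs as (value if state is good, value if state is bad) for two actions.
payoffs = {"launch": (100.0, -60.0), "wait": (20.0, 5.0)}

p = 0.40  # current belief that the state is good
print(expected_payoff(p, payoffs))  # {'launch': 4.0, 'wait': 11.0} -> "wait" is optimal

# Find the belief at which the optimal action flips; additional data matters
# only if it could plausibly move our belief across this threshold.
for i in range(1001):
    threshold = i / 1000
    ev = expected_payoff(threshold, payoffs)
    if ev["launch"] > ev["wait"]:
        print(f"'launch' becomes optimal once P(good) exceeds about {threshold:.3f}")
        break
```

If no realistic amount of new evidence could shift the belief from 0.40 past roughly 0.45, then under these toy numbers the data is not worth the cost of collection and curation, however interesting it might be.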

This is directly relevant to the future of research and the future of research libraries.  Research will increasingly rely on evidence sources of these types — and will increasingly need to access, discover, and curate this evidence.  And our society will increasingly be shaped by this information, and by how we choose to engineer and govern its collection and use.  The private sector is pushing ahead fast in this area, and will no doubt generate many innovative data collections and algorithms.  Engagement from university scholars, researchers, and librarians is vital to ensure that society understands these new creations; is able to evaluate their reliability and bias; and has durable and equitable access to them to provide accountability and to support important discoveries that are not easily monetized.  For those interested in this topic, the Program on Information Science has published reports and articles on big data inference and ethics.


The Open Access Network: Comments on Rebecca Kennison’s Talk

October 29, 2016

Rebecca Kennison, who is the Principal of K|N Consultants and co-founder of the Open Access Network, and was the founding director of the Center for Digital Research and Scholarship, gave this talk on Come Together Right Now: An Introduction To The Open Access Network as part of the Program on Information Science Brown Bag Series.

In the talk, illustrated by the slides below, Kennison argues that current models of OA publishing based on cost-per-unit are neither scalable nor sustainable.  She further argues that a sustainable model must be based on continuing regular voluntary contributions from research universities.  

In her abstract, Kennison summarizes as follows:

Officially launched just over a year ago, the Open Access Network (OAN) offers a transformative, sustainable, and scalable model of open access (OA) publishing and preservation that encourages partnerships among scholarly societies, research libraries, and other partners (e.g., academic publishers, university presses, collaborative e-archives) who share a common mission to support the creation and distribution of open research and scholarship and to encourage more affordable education, which can be a direct outcome of OA publishing. Our ultimate goal is to develop a collective funding approach that is fair and open and that fully sustains the infrastructure needed to support the full life-cycle for communication of the scholarly record, including new and evolving forms of research output. Simply put, we intend to Make Knowledge Public.

Kennison’s talk summarizes the argument in her 2014 paper with Lisa Norberg: A Scalable and Sustainable Approach to Open Access Publishing and Archiving for Humanities and Social Sciences. Those intrigued by these arguments may find a wealth of detail in the full paper.

Kennison argues that this form of network would offer value to three groups of stakeholders in general:

  • For institutions and libraries: advance research and scholarship, lower the cost of education, and support lifelong learning.
  • For scholarly societies and university presses: ensure revenue, sustain operations, and support innovation.
  • For individuals, foundations, and corporations: provide wide access to research and scholarship to address societal challenges, support education, and grow the economy.

The Program on Information Science has previously written on the information economics of the commons in Information Wants Someone Else to Pay For It.  Two critical questions posed by an economic analysis of the OAN are, first: What is the added value to the individual contributor that they would not obtain unless they individually contribute? (Note that this is different from the group value above — since any stakeholder gets these values if the OAN exists, whether or not they contribute to it.)  Second: Under what conditions does the approach lead to the right amount of information being produced?  For example, both market-based solutions and purely altruistic solutions to producing knowledge outputs yield something — they just don’t yield anything close to the socially optimal level of knowledge production and use. What reasons do we have to believe that the fee structure of the OAN comes closer?

In addition, Kennison discussed the field of linguistics as a prototype. It is a comparatively small discipline (thousands of researchers) and its output is concentrated in approximately 60 journals.  Notably, a number of high-profile departments recently changed their tenure and promotion policies to recognize the OA journal Glossa as the equivalent of the top journal Lingua, after the latter’s editorial board departed in protest.

This is a particularly interesting example because successful management of knowledge commons is often built around coherent communities. For commons management to work — as Ostrom’s work shows — behavior must be reliably observable within a community, and the community must be able to impose its own effective and graduated sanctions and determine its own rules for doing so.  I conjecture that particular technical and/or policy-based solutions to knowledge commons management (let’s call these “knowledge infrastructure”) have the potential to scale when three conditions hold: (1) the knowledge infrastructure addresses a vertical community that includes an interdependent set of producers and consumers of knowledge; (2) the approach provides substantial incentives for individuals in that vertical community to contribute, while providing public goods both (a) to that community and (b) to a larger community; and (3) the approach is built upon community-specific extensions of more general-purpose infrastructure.


Issues in Curating the Open Web at Scale: Comments on Gary Price’s Talk

October 24, 2016

Gary Price, who is chief editor of InfoDocket, contributing editor of Search Engine Land, co-founder of Full Text Reports, and who has worked with internet search firms and library systems developers alike, gave this talk on Issues in Curating the Open Web at Scale as part of the Program on Information Science Brown Bag Series.

In the talk, illustrated by the slides below, Price argued that libraries should be more aggressively engaging with content on the open web (i.e. stuff you find through Google). He further argued that the traditional methods and knowledge used by librarians to curate print collections may be usefully applied to open web content.

Price has been a leader in developing and observing web discovery — as a director of ask.com, and as author of the first book to cogently summarize the limitations of web search: The Invisible Web: Uncovering Information Sources Search Engines Can’t See.  The talk gave a whirlwind tour of the history of curation of the open web, and noted the many early efforts aimed at curating resource directories that withered away with the ascent of Google.

In his abstract, Price summarizes as follows:

Much of the web remains invisible: resources are undescribed, unindexed, or simply buried — as many people rarely look past the first page of Google searches — or are unavailable from traditional library resources. At the same time many traditional library databases pay little attention to quality content from credible sources accessible on the open web.

How do we build collections of quality open-web resources (i.e. documents, specialty databases, and multimedia) and make them accessible to individuals and user groups when and where they need it?

This talk reflects on the emerging tools for systematic programmatic curation; the legal challenges to open-web curation; long-term access issues; and the historical challenges to building sustainable communities of curation.

Across his talk, Price stressed three arguments.

First, that much of the web remains invisible: many databases and structured information sources are not indexed by Google. And although increasing amounts of structured information are indexed, most remain behaviorally invisible, since the vast majority of people do not look beyond the first page of Google results.  Further, the behavioral invisibility of information is exacerbated by the decreasing support for complex search operators in open web search engines.

Second, Price argued that library curation of the open web would add value: curation would make the invisible web visible, counteract gaming of results, and identify credible sources.

Third, Price argued that a machine-assisted approach can be an effective strategy. He described how tools such as WebSite-Watcher, Archive-It, RSS aggregators, social media monitoring services, and content alerting services can be brought together by a trained curator to develop continually updated collections of content that are of interest to targeted communities of practice. He argued that familiarity with these tools and approaches should be part of the librarian’s toolkit – especially for those in liaison roles.
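As a small concrete illustration of this kind of machine-assisted workflow, the sketch below uses the feedparser Python library to poll a few RSS feeds and flag items matching curator-chosen keywords for later review. The feed URLs and keywords are placeholders, and this is only one narrow slice of the toolchain Price describes; a fuller workflow would add scheduling, change detection, deduplication across runs, and archiving.

```python
# Minimal sketch of machine-assisted curation: poll RSS feeds and collect
# items matching a curator-defined topic for later human review.
# Feed URLs and keywords are placeholders, not real sources.
import json
import feedparser  # pip install feedparser

FEEDS = [
    "https://example.org/agency-reports/rss",
    "https://example.org/new-datasets/rss",
]
KEYWORDS = {"water quality", "air quality", "climate"}  # curator-chosen topics

def collect(feeds, keywords):
    """Return feed entries whose title or summary mentions any keyword."""
    picks = []
    for url in feeds:
        for entry in feedparser.parse(url).entries:
            text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
            if any(k in text for k in keywords):
                picks.append({
                    "title": entry.get("title"),
                    "link": entry.get("link"),
                    "published": entry.get("published", "unknown"),
                    "source": url,
                })
    return picks

if __name__ == "__main__":
    # Hand the candidate list to a curator rather than publishing automatically.
    print(json.dumps(collect(FEEDS, KEYWORDS), indent=2))
```

The point of the sketch is the division of labor Price emphasizes: the machinery does the continuous watching, while the trained curator decides what is credible and worth adding to the collection.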

Similar tools are discussed in the courses we teach on professional reputation management — and I’ve found a number of them (particularly the latter three) useful as an individual professional.  More generally, I speculate that curation of the open web will become a larger part of the library mission — as we have argued in the 2015 National Agenda for Digital Stewardship, organizations rely on more information than they can directly steward.  The central problem is coordinating stakeholders around stewarding collections from which they derive common value.  This remains a deep and unsolved problem; however, efforts such as the Keepers Registry and collaborations such as the International Internet Preservation Consortium (IIPC) and the National Digital Stewardship Alliance (NDSA) are making progress in this area.
