Creative Data Literacy: Commentary on Catherine D’Ignazio’s Program on Information Science Talk

June 6, 2017

Catherine D’Ignazio is an Assistant Professor of Civic Media and Data Visualization at Emerson College, a principal investigator at the Engagement Lab, and a research affiliate at the MIT Media Lab/Center for Civic Media. She presented this talk, entitled Creative Data Literacy: Bridging the Gap Between Data-Haves and Have-Nots, as part of the Program on Information Science Brown Bag Series.

In her talk, illustrated by the slides below, D’Ignazio points to the gap between those who collect and use data and those who are the subjects of data collection.

D’Ignazio abstracted her talk as follows:

Communities, governments, libraries and organizations are swimming in data—demographic data, participation data, government data, social media data—but very few understand what to do with it. Though governments and foundations are creating open data portals and corporations are creating APIs, these rarely focus on use, usability, building community or creating impact. So although there is an explosion of data, there is a significant lag in data literacy at the scale of communities and citizens. This creates a situation of data-haves and have-nots which is troubling for an open data movement that seeks to empower people with data. But there are emerging technocultural practices that combine participation, creativity, and context to connect data to everyday life. These include data journalism, citizen science, emerging forms for documenting and publishing metadata, novel public engagement in government processes, and participatory data art. This talk surveys these practices both lovingly and critically, including their aspirations and the challenges they face in creating citizens that are truly empowered with data.

In her talk, D’Ignazio makes five recommendations on how to help people learn data literacy:

  • Many data tutorials use abstract or standardized examples about cars (or widgets), which do not connect with most audiences. Ground your curriculum in community-centered problems and examples.
  • Frequently, people encounter data “in the wild” without the metadata or other context needed to construct meaning from it. To address this, have learners create data biographies, which explain who collected the data, how it was collected and used, and its purposes, impacts, and limitations. (A sketch of one such record follows this list.)
  • Data is messy, and learners should not always be introduced to it through a clean, static data set but through encountering the complex process of collection.
  • Design tools that are learner-centric: focused, guided, inviting, and expandable.
  • People like monsters better than they like bar charts — so favor creative community-centered outputs over abstract purity.
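
To make the data-biography recommendation concrete, here is a minimal sketch of how a learner might capture one as a structured record. This is our illustration rather than a schema from D’Ignazio’s talk: the field names and example values are invented.

```python
# Illustrative "data biography" record. The field names and example values
# are assumptions for demonstration, not a published schema.

from dataclasses import dataclass, field

@dataclass
class DataBiography:
    """Context a learner reconstructs about a dataset found 'in the wild'."""
    dataset: str
    collected_by: str          # who collected the data
    collection_method: str     # how it was collected
    purpose: str               # why it was collected
    used_by: str               # who has used it, and for what
    impacts: str               # effects on the people the data describes
    limitations: list = field(default_factory=list)

bio = DataBiography(
    dataset="311 service requests, 2016",
    collected_by="City constituent-services office",
    collection_method="Phone, web, and mobile-app reports from residents",
    purpose="Routing service complaints to city departments",
    used_by="City agencies; journalists analyzing response times",
    impacts="Shapes where city services are deployed",
    limitations=["Undercounts neighborhoods with low reporting rates",
                 "No documentation of how duplicate reports are resolved"],
)
print(bio.dataset, "-", bio.limitations[0])
```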

Much more detail on these recommendations can be found in D’Ignazio’s professional writings.

D’Ignazio’s talk illustrated two more general tensions. One is between a narrow conception of data literacy as coding, spreadsheets, and statistics, and a broader conception that is not yet crisply defined but is distinct from statistical, information, IT, media, and visual literacies. This resonates with work done by our program’s research intern Zach Lizee on Digital Literacy and Digital Citizenship, in which he argues for a form of literacy that prepares learners to engage with the evolving role of information in the world, and to use that engagement to advocate for policy and standards that enact their values.

D’Ignazio’s talk also highlights a broader tension that currently exists between the aspiration of open data and data journalism to empower the broader public, and the structural inequalities in our systems of data collection, sharing, analysis, and meaning-making. This tension is very much in play in libraries’ and universities’ approaches to open access.

Much of academia, and many policy-makers, have embraced the potential value of open access to content. The MIT Libraries’ vision also embraces the challenge of building an open source platform to enable global discovery and access to this content. Following the themes of D’Ignazio’s talk and based on our research, I conjecture that library open platforms could be of tremendous worth — but not for the reasons one usually expects.

The worth of software, and of information and communication technology systems and platforms generally, is typically measured by how much it is used, what functions it provides, and what content and data it enables one to use. However, the importance of library participation in the development of open information platforms goes beyond this. Libraries have not distinguished themselves from the Googles, Twitters, and Facebooks of the world in making open content discoverable, or in the functionality their platforms provide to create, annotate, share, and make meaning from this content; the commercial sector has both the capacity and the incentives to do this, as it is profitable.

The worth of a library open platform is in the core library values that it enacts: broad inclusion and participation, long-term (intergenerational) persistence, transparency, and privacy. These are not values that current commercial platforms support, because the commercial sector lacks incentives to create them. To go beyond open access to equity in participation in the creation and understanding of knowledge, libraries, museums, archives, and others that share these values must lead in creating open platforms.

Reflecting the themes of D’Ignazio’s talk, the research we conduct here, in the Program on Information Science, engages with the expanding scope of information literacy, and with inequalities in access to information. For those interested in these and other projects, we have published blog posts and reports in these areas.


Becoming a Practitioner Scholar in Technology for Development (And Involving Students!): Commentary on Laura Hosman’s Talk

Professor Laura Hosman, who is Assistant Professor at Arizona State University (with a joint appointment in the School for the Future of Innovation in Society and in The Polytechnic School), gave this talk, Becoming a Practitioner Scholar in Technology for Development, as part of the Program on Information Science Brown Bag Series.

In her talk, illustrated by the slides below, Hosman argues that, for a large part of the world “the library of the future” will be based on cellphones, intranets, and digital-but-offline content.

Hosman abstracted her talk as follows:

Access to high-quality, relevant information is absolutely foundational for a quality education. Yet, so many schools across the developing world lack fundamental resources, like textbooks, libraries, electricity, and Internet connectivity. The SolarSPELL (Solar Powered Educational Learning Library) is designed specifically to address these infrastructural challenges, by bringing relevant, digital educational content to offline, off-grid locations. SolarSPELL is a portable, ruggedized, solar-powered digital library that broadcasts a webpage with open-access educational content over an offline WiFi hotspot, content that is curated for a particular audience in a specified locality—in this case, for schoolchildren and teachers in remote locations. It is a hands-on, iteratively developed project that has involved undergraduate students in all facets and at every stage of development. This talk will examine the design, development, and deployment of a for-the-field technology that looks simple but has a quite complex background.
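
As a rough illustration of the pattern the abstract describes (a curated, static library served over a local, offline WiFi network), the following sketch uses only Python’s standard library. It is a simplified, assumption-based example of the "offline web library" idea, not the SolarSPELL’s actual software stack.

```python
# Sketch: serve a directory of curated, offline content over a local
# (hotspot) network using only the Python standard library. This illustrates
# the general pattern; it is not the SolarSPELL implementation.

import functools
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

CONTENT_DIR = "solar_library"  # hypothetical folder of curated HTML/PDF/video
BIND_ADDR = ("0.0.0.0", 8080)  # listen on all interfaces, e.g. a WiFi hotspot

handler = functools.partial(SimpleHTTPRequestHandler, directory=CONTENT_DIR)

if __name__ == "__main__":
    with ThreadingHTTPServer(BIND_ADDR, handler) as server:
        # Devices that join the hotspot browse to http://<server-ip>:8080/
        # entirely offline; no upstream Internet connection is required.
        server.serve_forever()
```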

In her talk, Hosman describes how the inspiration for her current line of research and practice started when she received a request to aid deployment of the One Laptop Per Child project in Haiti. The original project had allocated twenty-five million dollars to laptop purchasing, but failed to note that electric power was not available in many of the areas they needed to reach — so they asked for Professor Hosman’s help in finding an alternative power source. Over the course of her work, the focus of her interventions has shifted from solar power systems, to portable computer labs, to portable libraries — and she noted that every successful approach involved evolution and iteration.

Hosman observes that for much of the world’s population, electricity is a missing prerequisite to computing and to connectivity. She also notes that access to computing for most of the world comes through cell phones, not laptops. (And she recalls even finding that the inhabitants of remote islands occasionally had better cellphones than she carried.) Her talk notes that there are over seven billion cell phones in the world — which is over three times the number of computers worldwide, and many thousands of times the number of libraries.

Hosman originally titled her talk The Solar Powered Educational Learning Library – Experiential Learning And Iterative Development. The talk’s new title reflects one of three core themes that ran through the talk — the importance of people. Hosman argues that technology is never by itself sufficient (there is no “magic bullet”) — to improve people’s lives, we need to understand and engineer for people’s engagement with technology.

The SolarSPELL project has engaged with people in surprising ways. Not only is it designed around the needs of the target clients, but it has continuously involved Hosman’s engineering students in its design and improvement, and has further involved high-school students in construction. Under Hosman’s direction, university and high-school students worked together to construct a hundred SolarSPELLs using mainly parts ordered from Amazon. Moreover, Peace Corps volunteers are a critical part of the project: they provide the grass-roots connections that spark people to initially try the SolarSPELL, and the persistent human connection that supports continuing engagement.

A second theme of the talk is the importance of open and curated content. Simply making a collection freely available online is not enough, when we want most people in the world to be able to access it. For collections to be meaningfully accessible they need to be available for bulk download; they need to be usable under an open license; they need to be selected for a community of use that does not have the option of seeking more content online; and they need to contain all of the context needed for that community to understand them.

A final theme that Hosman stresses is that any individual (scholar, practitioner, actor) will never have all the skills needed to address complex problems in the complex real world — solving real world problems requires a multidisciplinary approach. SolarSPELL demonstrates this through combining expertise in electrical engineering, content curation, libraries, software development, education, and in the sociology and politics of the region. Notably, the ASU libraries have been a valuable partner in the SolarSPELL project, and have even participated in fieldwork. Much more information about this work and its impact can be found in Hosman’s scholarly papers.

The MIT libraries have embraced a vision of serving a global community of scholars and learners. Hosman’s work demonstrates the existence of large communities of learners that would benefit from open educational and research materials — but whose technology needs are not met by most current information platforms (even open ones). Our aim is that future platforms not only enable research and educational content to reach such communities, but also that local communities worldwide can contribute their local knowledge, perspective, and commentary to the world’s library.

Surprisingly, the digital preservation research conducted at the libraries is of particular relevance to tackling these challenges. The goal of digital preservation can be thought of as communicating with the future — and in order to accomplish this, we need to be able to capture both content and context, steward it over time (managing provenance, versions, and authenticity), and prepare it to be accessed through communication systems and technologies that do not yet exist. A corollary is that properly curated content should be readily capable of being stored and delivered offline — which is currently a major challenge for access by the broader community.
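
One concrete embodiment of capturing content together with its context, and preparing it for offline delivery, is checksummed, self-describing packaging. Below is a minimal sketch assuming the bagit-python library (a Library of Congress tool implementing the BagIt packaging format); the directory name and metadata values are invented for illustration.

```python
# Sketch: package curated content for offline storage and delivery in the
# BagIt format, assuming the bagit-python library (pip install bagit).
# The directory name and metadata values below are illustrative only.

import bagit

# make_bag() restructures the directory in place: the payload moves under
# data/, and checksum manifests plus bag-info.txt metadata are written.
bag = bagit.make_bag(
    "curated_collection",
    {
        "Source-Organization": "Example University Libraries",
        "External-Description": "Curated open educational content, v1",
    },
)

# Later -- including on the receiving end of an offline transfer -- fixity
# and completeness can be verified without any network connection.
assert bag.is_valid()
```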

Reflecting the themes of Hosman’s talk, the research we conduct here, in the Program on Information Science, is fundamentally interdisciplinary: for example, our research in information privacy has involved librarians, computer scientists, statisticians, legal scholars, and many others. Our program also aims to bridge research and practice and to support translational and applied research, which often requires sustained engagement with grassroots stakeholders. For example, the success of the DIY redistricting (a.k.a. “participative GIS”) efforts in which we’ve collaborated relied on sustained engagement with grassroots good-government organizations (such as Common Cause and the League of Women Voters), students, and the media. For those interested in these and other projects, we have published reports and articles describing them.


Guest Post: Curation as Context: Software in the Stacks

April 21, 2017

Alex Chassanoff, who is a Postdoctoral Fellow in the Program on Information Science, continues a series of posts on software curation.

“Curation as Context: Software in the Stacks”

As scholarly landscapes shift, differing definitions for similar activities may emerge from different communities of practice.   As I mentioned in my previous blog post, there are many distinct terms for (and perspectives on) curating digital content depending on the setting and whom you ask [1].  Documenting and discussing these semantic differences can play an important role in crystallizing shared, meaningful understandings.  

In the academic research library world,  the so-called data deluge has presented library and information professionals with an opportunity to assist scholars in the active management of their digital content [2].  Curating research output as institutional content is a relatively young, though growing phenomenon.  Research data management (RDM) groups and services are increasingly common in research libraries, partially fueled by changes in federal funding grant application requirements to encourage data management planning.  In fact, according to a recent content analysis of academic library websites, 185 libraries are now offering RDM services [3].  The charge for RDM groups can vary widely; tasks can range from advising faculty on issues related to privacy and confidentiality, to instructing students on potential avenues for publishing open-access research data.

As these types of services increase, many research libraries are looking to life cycle models as foundations for crafting curation strategies for digital content [4]. On the one hand, life cycle models recognize the importance of continuous care and the necessary interventions that managing such content requires. Life cycle models also provide a simplified view of essential stages and practices, focusing attention on how data flows through a continuum. At the same time, the data flow perspective can obscure both the messiness of the research process and the complexities of managing dynamic digital content [5,6]. What strategies for curation can best address scenarios where digital content is touched multiple times by multiple entities for multiple purposes?

Christine Borgman notes the multifaceted role that data can play in the digital scholarship ecosystem, serving a variety of functions and purposes for different audiences.  Describing the most salient characteristics of that data may or may not serve the needs of future use and/or reuse. She writes:

These technical descriptions of “data” obscure the social context in which data exist, however. Observations that are research findings for one scientist may be background context to another. Data that are adequate evidence for one purpose (e.g., determining whether water quality is safe for surfing) are inadequate for others (e.g., government standards for testing drinking water). Similarly, data that are synthesized for one purpose may be “raw” for another. [7]

Particular data sets may be used and then reused for entirely different intentions.  In fact, enabling reuse is a hallmark objective for many current initiatives in libraries/archives.  While forecasting future use is beyond our scope, understanding more about how digital content is created and used in the wider scholarly ecosystem can prove useful for anticipating future needs.  As Henry Lowood argues, “How researchers will actually put their hands and eyes on historical software and data collections generally has been bracketed out of data curation models focused on preservation”[8].  

As an example, consider the research practices and output of faculty member Alice, who produces research tools and methodologies for data analysis. If we were to document the components used and/or created by Alice for this particular research project, it might include the following:

  • Software program(s) for computing published results
  • Dependencies for software program(s) for replicating published results
  • Primary data collected and used in analysis
  • Secondary data collected and used in analysis
  • Data result(s) produced by analysis
  • Published journal article
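
One lightweight way to bring these components together as a contextually rich research object is a machine-readable manifest that records each component, its role, and its provenance. The sketch below is an invented illustration (the field names, paths, and identifier are placeholders), not a scheme our project prescribes.

```python
# Illustrative manifest tying together the components of Alice's
# (hypothetical) research object. Field names, paths, and the identifier
# are invented placeholders for demonstration.

import json

research_object = {
    "title": "Analysis accompanying Alice's published article",
    "components": [
        {"role": "software", "path": "code/analysis.py"},
        {"role": "dependencies", "path": "code/requirements.txt"},
        {"role": "primary-data", "path": "data/survey_raw.csv"},
        {"role": "secondary-data", "path": "data/census_extract.csv"},
        {"role": "results", "path": "results/tables.csv"},
        {"role": "publication", "identifier": "doi:10.XXXX/placeholder"},
    ],
    # Conditions of creation, production, use, and reuse:
    "provenance": {
        "created-by": "Alice",
        "intended-uses": [
            "verify published results by replication",
            "apply the method to a new input data set",
        ],
    },
}

print(json.dumps(research_object, indent=2))
```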

We can envision at least two uses of this particular instantiation of scholarly output. First, the statistical results can be verified by replicating the conditions of the analysis. Second, the statistical approach executed by the software program can be applied to a new input data set. In this way, software can simultaneously serve as both an outcome to be preserved and a methodological means to a (new) end.

There are certain affordances in thinking about strategies for curation-as-context, outside the life cycle perspective.  Rather than emphasizing content as an outcome to be made accessible and preserved through a particular workflow, curation could instead aim to encompass the characterization of well-formed research objects, with an emphasis on understanding the conditions of their creation, production, use, and reuse.   Recalling our description of Alice above, we can see how each component of the process can be brought together to represent an instantiation of a contextually-rich research object.

Curation-as-context approaches can help us map the always-already in flux terrain of dynamic digital content.  In thinking about curating software as a complex object for access, use, and future use, we can imagine how mapping the existing functions, purposes, relationships, and content flows of software within the larger digital scholarship ecosystem may help us anticipate future use, while documenting contemporary use.  As Cal Lee writes:

Relationships to other digital objects can dramatically affect the ways in which digital objects have been perceived and experienced. In order for a future user to make sense of a digital object, it could be useful for that user to know precisely what set of surrogate representations – e.g. titles, tags, captions, annotations, image thumbnails, video keyframes – were associated with a digital object at a given point in time. It can also be important for a future user to know the constraints and requirements for creation of such surrogates within a given system (e.g. whether tagging was required, allowed, or unsupported; how thumbnails and keyframes were generated), in order to understand the expression, use and perception of an object at a given point in time [9].

Going back to our previous blog post, we can see how questions like “How are researchers creating and managing their digital content?” are essential counterparts to questions like “What do individuals served by the MIT Libraries need to be able to reuse software?” Our project aims to produce software curation strategies at MIT Libraries that embrace Reagan Moore’s theoretical view of digital preservation, whereby “information generated in the past is sent into the future” [10]. In other words, what can we learn about software today that makes an essential contribution to meaningful access and use tomorrow?

Works Cited

[1] Palmer, C., Weber, N., Muñoz, T., and Renear, A. (2013), “Foundations of data curation: The pedagogy and practice of ‘purposeful work’ with research data”, Archive Journal, Vol. 3.

[2] Hey, T.  and Trefethen, A. (2008), “E-science, cyberinfrastructure, and scholarly communication”, in Olson, G.M. Zimmerman, A., and Bos, N. (Eds), Scientific Collaboration on the Internet, MIT Press, Cambridge, MA.

[3] Yoon, A. and Schultz, T. (2017), “Research data management services in academic libraries in the US: A content analysis of libraries’ websites”, College and Research Libraries (in press).

[4] Ray, J. (2014), Research Data Management: Practical Strategies for Information Professionals, Purdue University Press, West Lafayette, IN.

[5] Carlson, J. (2014), “The use of lifecycle models in developing and supporting data services”, in Ray, J. (Ed),  Research Data Management: Practical Strategies for Information Professionals, Purdue University Press, West Lafayette, IN.

[6] Ball, A. (2010), “Review of the state of the art of the digital curation of research data”, University of Bath.

[7] Borgman, C., Wallis, J. and Enyedy, N. (2006), “Little science confronts the data deluge: Habitat ecology, embedded sensor networks, and digital libraries”, International Journal on Digital Libraries, 7(1–2), 17–30. doi: 10.1007/s00799-007-0022-9.

[8] Lowood, H. (2013), “The lures of software preservation”, Preserving.exe: Towards a national strategy for software preservation, National Digital Information Infrastructure and Preservation Program of the Library of Congress.

[9] Lee, C. (2011), “A framework for contextual information in digital collections”, Journal of Documentation 67(1).

[10] Moore, R. (2008), “Towards a theory of digital preservation”, International Journal of Digital Curation 3(1).


Guest Post: DataRescue-Boston@MIT Wrap-up

March 2, 2017

Alex Chassanoff, who is a Postdoctoral Fellow in the Program on Information Science, contributes to this detailed wrap-up of the recent Data Rescue Boston event that she helped organize.

Data Rescue Boston@MIT Wrap-up

Written by event organizers:

Alexandra Chassanoff

Jeffrey Liu

Helen Bailey

Renee Ball

Chi Feng

On Saturday, February 18th, the MIT Libraries and the Association of Computational Science and Engineering co-hosted a day-long Data Rescue Boston hackathon at Morss Hall in the Walker Memorial Building. Jeffrey Liu, a Civil and Environmental Engineering graduate student at MIT, organized the event as part of an emerging North American movement to engage local communities in safeguarding potentially vulnerable federal research information. Since January, Data Rescue events have been springing up at libraries across the country, largely through the combined organizing efforts of Data Refuge and the Environmental Data and Governance Initiative.

The event was sponsored by MIT Center for Computational Engineering, MIT Department of Civil and Environmental Engineering, MIT Environmental Solutions Initiative, MIT Libraries, MIT Graduate Student Council Initiatives Fund, and the Environmental Data and Governance Initiative.

Here are some snapshot metrics from our event:

# of Organizers: 8
# of Volunteers: ~15
# of Guides: 9
# of Participants: ~130
# URLs researched: 200
# URLs harvested: 53
# GiB harvested: 35
# URLs seeded: 3300 at event (~76000 from attendees finishing after event)
# Agency Primers started: 19
# Cups of Coffee: 300
# Burritos: 120
# Bagels: 450
# Pizzas: 105

Goal 1. Process data

MIT’s data rescuers processed an amount of data through the seeding and harvesting phases comparable to other similarly sized events. For reference, Data Rescue San Francisco researched 101 URLs and harvested 25 GB of data at their event. Data Rescue DC, a two-day event that also included a bagging/describing track (which we did not have), harvested 20 GB of data, seeded 4,776 URLs, bagged 15 datasets, and described 40 datasets.

Goal 2. Expand scope

Another goal of our event was to explore creating new workflows to expand efforts beyond the existing focus on federal agency environmental and climate data. Toward that end, we decided to pilot a new track called Surveying, which we used to identify and describe programs, datasets, and documents at federal agencies still in need of agency primers. We were lucky enough to have domain experts on hand who assisted us with these efforts. In total, we were able to begin expansion efforts for the Department of Justice, the Department of Labor, Health and Human Services, and the Federal Communications Commission.

Goal 3: Engage and build community

Attendees at our event spanned age groups, occupations, and technical abilities. Participants included research librarians, concerned scientists, and expert undergraduate hackers; according to national developers of the Data Rescue archiving application, MIT had the largest “tech-tel” of any event thus far. As part of the Storytelling aspect of Data Rescue events, we captured profiles for twenty-seven of our attendees. Additionally, we created Data Use Stories that describe how some researchers use specific data sets from the National Water Information System (USGS), the Alternative Fuels Data Center (DOE), and the Global Historical Climate Network (NOAA). These stories let us communicate how these data sets are used to better understand our world, as well as to make decisions that impact our everyday lives.

The hackathon at MIT was the second event hosted by Data Rescue Boston, which now hosts weekly working groups every Thursday at MIT for continuing engagement: compiling tools and documentation to improve workflows, identifying vulnerable data sets, and creating resources to help further these efforts.

Future Work

Data rescue events continue to gather steam, with eight major national events planned over the next month.  The next DataRescue Boston event will be held at Northeastern on March 24th. A dozen volunteers and attendees from the MIT event have already signed up to help organize workshops and efforts at the Northeastern event.

Press Coverage of our Event:

http://gizmodo.com/rescuing-government-data-from-trump-has-become-a-nation-1792582499

https://thetech.com/2017/02/22/datarescue-students-collaborate-vital

https://medium.com/binj-reports/saving-science-one-dataset-at-a-time-389c7014199c#.lgrlkca9f



Guest Post: Alex Chassanoff on Building A Model for Software Curation

January 21, 2017

Alex Chassanoff, who is a Postdoctoral Fellow in the Program on Information Science, introduces a series of posts on software curation.


Building A Model for Software Curation: An Introductory Post

In October 2016, I began working at the MIT Libraries as a CLIR/DLF Postdoctoral Fellow in Software Curation. CLIR began offering postdoctoral fellowships in data curation in 2012; however, I and three others were part of the first cohort conducting research in the area of software curation. At our fellowship seminar and training this summer, the four of us joked about not having any idea what we would be doing (and Google wasn’t much help). Indeed, despite years of involvement in digital curation, I was unsure of what it might mean to curate software. As has been well documented in the library/archival science community, curation of data means many different things to many different people. Add in the term “software” and you increase the complexities.

At MIT Libraries, I was given the good fortune of working with two distinguished experts in library research: Nancy McGovern, the Director of the Digital Preservation Program, and Micah Altman, the Director of Research. This blog post describes the first phase of our work together in defining a research agenda for software curation as an institutional asset.

Defining Scope

As we began to suss out possible research objectives and assorted activities, we found ourselves circling back to four central questions – which themselves split into associated sub-questions.

  • What is software? What is the purpose and function of software? What does it mean to curate software? How do these practices differ from preservation?
  • When do we curate software? Is it at the time of creation? Or when it becomes acquired by an institution?
  • Why do institutions and researchers curate software?
  • Who is institutionally responsible for curating software and for whom are we curating software?

Developing Focus and Purpose

We also began to outline the types of exploratory research questions we might ask depending on the specific purpose and entities we were creating a model for (see Table 1 below). Of course, these are only some of the entities that we could focus on; we could also broaden our scope to include research questions of interest to software publishers, software journals, or funders interested in software curation.

Research library
  Purpose (libraries/archives): What does a library need to safeguard and preserve software as an asset? How are other institutions handling this? How are funding agencies considering research on software curation?
  Purpose (MIT-specific): What are the MIT Libraries’ existing and future needs related to software curation?

Software creator
  Purpose (libraries/archives): What are the best practices software creators should adopt when creating software? How are software creators depositing their software, and how are journals recommending they do this?
  Purpose (MIT-specific): What are the individual needs and existing practices of software creators served by the MIT Libraries?

Software user
  Purpose (libraries/archives): What are the different kinds of reasons why people may use software? What are the conditions for use? What are the specific curation practices we should implement to make software usable for this community?
  Purpose (MIT-specific): What do individuals served by the MIT Libraries need to be able to reuse software?

Table 1: Potential purpose(s) of research by entity

Importantly, we wanted to adopt an agile research approach that considered software as an artifact, rather than (simply) as an outcome to be preserved and made accessible.  Curation in this sense might seek to answer ontological questions about software as an entity with significant characteristics at different levels of representation.   Certainly, digital object management approaches that emphasize documentation of significant properties or characteristics are long-standing in the literature.  At the same time, we wanted our approach to address essential curatorial activities (what Abby Smith termed “interventions”) that help ensure digital files remain accessible and usable. [1]  We returned to our shared research vision: to devise a model for software curation strategies to assist research outcomes that rely on the creation, use, reuse, and study of software.

Statement of Research Objectives and Working Definitions

Given the preponderance of definitions for curation and the wide-ranging implications of curating for different purposes and audiences, we thought it would be essential for us to identify and make clear our particular interests.  We developed the following statement to best describe our goals and objectives:

Libraries and archives are increasingly tasked with responsibilities related to the effective long-term preservation and curation of software.  The purpose of our work is to investigate and make recommendations for strategies that institutions can adopt for managing software as complex digital objects across generations of technology.

We also developed the following working definition of software curation for use in our research:

“Software curation encompasses the active practices related to the creation, acquisition, appraisal and selection, description, transformation, preservation, storage, and dissemination/access/reuse of software over short- and long- periods of time.”

What’s Next

The next phase of our research involves formalizing our research approach through the evaluation, selection, and application of relevant models (such as the OAIS Reference Model) and ontologies (such as the Software Ontology, SWO). We are also developing different scenarios to establish the roles and responsibilities bound up in software creation, use, and reuse. In addition to reporting on the status of our project, you can expect to read blog posts about both the philosophical and practical implications of curating software in an academic research library setting.

Notes

[1] In the seminal collection Authenticity in a Digital Environment, Abby Smith noted that “We have to intervene continually to keep digital files alive. We cannot put a digital file on a shelf and decide later about preservation intervention. Storage means active intervention.” See: Abby Smith (2000), “Authenticity in Perspective”, Authenticity in a Digital Environment, Washington, D.C.: Council on Library and Information Resources.


Guest Post: Zachary Lizee on Digital Literacy and Standards

December 13, 2016

Zachary Lizee, who is a Graduate Research Intern in the Program on Information Science, reflects on his investigations into information standards and suggests how libraries can reach beyond local instruction on digital literacy to scalable education that catalyzes information citizenship.

21st Century Libraries, Standards Education, and Socially Responsible Information-Seeking Behavior

Standards and standards development frame, guide, and normalize almost all areas of our lives. Standards in IT govern interoperability between a variety of devices and platforms, standardized production of various machine parts allows uniform repair and reproduction, and standardization in fields like accounting, health care, or agriculture promotes best industry practices that emphasize safety and quality control. Informational standards like OpenDocument allow storage and processing of digital information to be accessible by most types of software, ensuring that the data is recoverable in the future.[1] Standards reflect the shared values, aspirations, and responsibilities we as a society project upon each other and our world.

Engineering and other innovative entrepreneurial fields need awareness of information standards and standards development to ensure that the results of research, design, and development in these areas have the most positive net outcome for our world at large, as illustrated by this analysis of healthcare information standards by HIMSS, a professional organization that works to shape informational standards in the healthcare IT field:

In healthcare, standards provide a common language and set of expectations that enable interoperability between systems and/or devices. Ideally, data exchange schema and standards should permit data to be shared between clinician, lab, hospital, pharmacy, and patient regardless of application or application vendor in order to improve healthcare delivery. [2]

As critical issues regarding information privacy quickly increase, standards development organizations and interested stakeholders take an active interest in creating and maintaining standards to regulate how personal data is stored, transferred, and used, with both public interests and legal and regulatory frameworks in mind.[3]

Libraries have traditionally been centers of expertise and access for information collection, curation, dissemination, and instruction. And the standards around how digital information is produced, used, governed, and transmitted are rapidly evolving with new technologies.[4] Libraries are participating in the processes of generating information standards to ensure that patrons can freely and safely access information. For instance, the National Information Standards Organization is developing informational standards to address patron privacy issues in library data management systems:

The NISO Privacy Principles, available at http://www.niso.org/topics/tl/patron_privacy/, set forth a core set of guidelines by which libraries, systems providers and publishers can foster respect for patron privacy throughout their operations. The Preamble of the Principles notes that, ‘Certain personal data are often required in order for digital systems to deliver information, particularly subscribed content.’ Additionally, user activity data can provide useful insights on how to improve collections and services. However, the gathering, storage, and use of these data must respect the trust users place in libraries and their partners. There are ways to address these operational needs while also respecting the user’s rights and expectations of privacy.[5]

This effort by NISO (which has librarians on the steering committee) illustrates how libraries engage in outreach and advocacy that is also in concert with the ALA’s Code of Ethics, which states that libraries have a duty to protect patrons’ rights to privacy and confidentiality regarding information-seeking behavior. Libraries and librarians have a long tradition of engaging in social responsibility for their patrons and community at large.

Although libraries are sometimes involved, most information standards are created by engineers working in corporate settings, or are considerably influenced by the development of products that become the model.  Most students leave the university without understanding what standards are, how they are developed, and what potential social and political ramifications advancements in the engineering field can have on our world.[6]

There is a trend in the academic and professional communities to foster greater understanding about what standards are, why they are important, and how they relate to influencing and shaping our world.[7]  Understanding the relevance of standards will be an asset that employers in the engineering fields will value and look for.  Keeping informed about the most current standards can drive innovation and increase the market value of an engineer’s research and design efforts.[8]

As informational hubs, libraries have a unique opportunity to participate in developing information literacy regarding standards and standards development.  By infusing philosophies regarding socially responsible research and innovation, using standards instruction as a vehicle, librarians can emphasize the net positive effect of standards and ethics awareness for the individual student and the world at large.

The emergence of MOOCs creates an opportunity for librarians to reach a large audience and to instruct patrons in information literacy in a variety of subjects. MOOCs are well suited to informing and instructing large numbers of people from a variety of geographic locations and across a range of subject areas.[9]

For example, a subject-specific librarian for an engineering department at a university could participate with engineering faculty in developing a MOOC that outlines the relevant issues, facts, and procedures surrounding standards and standards development to aid the engineering faculty in instructing standards education. Together, librarians and subject experts could develop education on the roles that standards and socially responsible behavior play in the field of engineering.

Students that learn early in their career why standards are an integral element in engineering and related fields have the potential to produce influential ideas, products, and programs that undoubtedly could have positive and constructive effects for society.  Engineering endeavors to design products, methodologies, and other technologies that can have a positive impact on our world.  Standards education in engineering fields can produce students who have a keen understanding of social awareness about human dignity, human justice, overall human welfare, and a sense of global responsibility.

Our world has a number of challenges: poverty, oppression, political and economic strife, environmental issues, and a host of many other dilemmas socially responsible engineers and innovators could address.  The impact of educating engineers and innovators about standards and socially responsible behavior can affect future corporate responsibility, ethical and humanitarian behavior, altruistic technical research and development, which in turn yields a net positive result for the individual, society, and the world.

Recommended Resources:

Notes:

[1] OASIS, “OASIS Open Document Format for Office Applications TC,” <https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=office>

[2] HIMSS, “Why do we need standards?,” <http://www.himss.org/library/interoperability-standards/why-do-we-need-standards>

[3] Murphy, Craig N. and JoAnne Yates, The International Organization for Standardization (ISO): Global governance through voluntary consensus, London and New York: Routledge, 2009.

[4] See Opening Standards: The Global Politics of Interoperability, edited by Laura DeNardis, Cambridge, Massachusetts: MIT Press, 2011.

[5] “NISO Releases a Set of Principles to Address Privacy of User Data in Library, Content-Provider, and Software-Supplier Systems,” NISO,  <http://www.niso.org/news/pr/view?item_key=678c44da628619119213955b867838b40b6a7d96>

[6] “IEEE Position Paper on the Role of Technical Standards in the Curriculum of Academic Programs in Engineering, Technology and Computing,” IEEE,  <https://www.ieee.org/education_careers/education/eab/position_statements.html>

[7] Northwestern Strategic Standards Management, <http://www.northwestern.edu/standards-management/>

[8] “Education about standards,” ISO, <http://www.iso.org/iso/home/about/training-technical-assistance/standards-in-education.htm>

[9] “MOOC Design and Delivery: Opportunities and Challenges,” Current Issues in Emerging ELearning, Vol. 3, Issue 1 (2016) <http://scholarworks.umb.edu/ciee/?utm_source=scholarworks.umb.edu%2Fciee%2Fvol2%2Fiss1%2F6&utm_medium=PDF&utm_campaign=PDFCoverPages>


Making Decisions in a World Awash in Data: We’re going to need a different boat: Comments on Anthony Scriffignano’s Talk

December 8, 2016

Dr. Anthony Scriffignano, who is SVP/Chief Data Scientist at Dun and Bradstreet, gave this talk on Making Decisions in a World Awash in Data: We’re going to need a different boat as part of the Program on Information Science Brown Bag Series.

In the talk, illustrated by the slides below, Scriffignano argues that the massive collection of ‘unstructured’ data enables a wide set of potential inferences about complex changing relationships. At the same time, he notes that it is increasingly easy to gather enough information to take action while lacking enough information to form good judgment, and that understanding the context in which data is collected and flows is essential to developing such judgment.

Scriffignano summarizes his talk in the following abstract:

I explore some of the ways in which the massive availability of data is changing, and the types of questions we must ask, in the context of making business decisions. Truth be told, nearly all organizations struggle to make sense out of the mounting data already within the enterprise. At the same time, businesses, individuals, and governments continue to try to outpace one another, often in ways that are informed by newly-available data and technology, but just as often using that data and technology in alarmingly inappropriate or incomplete ways. Multiple “solutions” exist to take data that is poorly understood, promising to derive meaning that is often transient at best. A tremendous amount of “dark” innovation continues in the space of fraud and other bad behavior (e.g. cyber crime, cyber terrorism), highlighting that there are very real risks to taking a fast-follower strategy in making sense out of the ever-increasing amount of data available. Tools and technologies can be very helpful or, as Scriffignano puts it, “they can accelerate the speed with which we hit the wall.” Drawing on unstructured, highly dynamic sources of data, fascinating inference can be derived if we ask the right questions (and maybe use a bit of different math!). This session will cover three main themes: The new normal (how the data around us continues to change), how are we reacting (bringing data science into the room), and the path ahead (creating a mindset in the organization that evolves). Ultimately, what we learn is governed as much by the data available as by the questions we ask. This talk, both relevant and occasionally irreverent, will explore some of the new ways data is being used to expose risk and opportunity and the skills we need to take advantage of a world awash in data.

This covers a broad scope, and Dr. Scriffignano expands extensively on these and other issues in his blog — which is well worth reading.

Dr. Scriffignano’s talk raised a number of interesting provocations. The talk claims, for example, that:

On data.

  • No data is real-time — there are always latencies in measurement, transmission, or analysis.
  • Most data is worthless — but there remains a tremendous number of useful signals in data that we don’t understand.
  • Eighty-five percent of data collected today is ‘unstructured’. And ‘unstructured’ data is really data that has structure we do not yet understand.

On using data.

  • Unstructured data has the potential to support many unanticipated inferences. An example (which Scriffignano calls a “data-bubble”) is a collection of crowd-sourced photos of recurring events — one can find photos that are taken at different times but which show the same location from the same perspective. Despite being convenience samples, they permit new longitudinal comparisons from which one could extract signals of fashion, attention, technology use, attitude, etc. — big data collection has created qualitatively new opportunities for inference.
  • When collecting and curating data we need to pay close attention to decision-elasticity: how different would our information have to be to change our optimal action? In designing a data curation strategy, one needs to weigh the opportunity costs of obtaining and curating data against the potential to affect decisions. (A toy numeric sketch of decision-elasticity follows this list.)
  • Increasingly, big data analysis raises ethical questions. Some of these questions arise directly: what are the ethical expectations on use of ‘new’ signals we discover can be extracted from unstructured data? Others arise through the algorithms we choose: how do they introduce biases, and how do we even understand what algorithms do, especially as use of artificial intelligence grows? Scriffignano’s talk gives as an example recent AI research in which two algorithms developed their own private encryption scheme.
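
To make decision-elasticity concrete, here is a toy numeric sketch (our own illustration, not an example from the talk). It computes the expected value of two candidate actions under a belief about a risk event and locates the threshold belief at which the optimal action flips; data that cannot plausibly move our belief across that threshold cannot change the decision.

```python
# Toy illustration of decision-elasticity (invented example, not from the
# talk). Two actions have payoffs that depend on whether a risk event occurs;
# the optimal action flips at a threshold belief. Data that cannot move our
# belief across that threshold has no value for this decision.

PAYOFFS = {
    # action: (payoff if event occurs, payoff if it does not)
    "extend_credit": (-100.0, 20.0),
    "decline":       (0.0,    0.0),
}

def expected_value(action: str, p_event: float) -> float:
    if_event, if_not = PAYOFFS[action]
    return p_event * if_event + (1.0 - p_event) * if_not

def best_action(p_event: float) -> str:
    return max(PAYOFFS, key=lambda a: expected_value(a, p_event))

# Scan beliefs from 0 to 1 for the point where the decision flips.
threshold = next(p / 1000.0 for p in range(1001)
                 if best_action(p / 1000.0) != best_action(0.0))

print(f"Optimal action flips near p = {threshold:.3f}")
# With these payoffs the flip is near p = 1/6: if our current belief is far
# from that threshold and affordable new data cannot move it across, that
# data cannot change the optimal action.
```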

This is directly relevant to the future of research, and the future of research libraries. Research will increasingly rely on evidence sources of these types — and will increasingly need to access, discover, and curate this evidence. And our society will increasingly be shaped by this information, and by how we choose to engineer and govern its collection and use. The private sector is pushing ahead fast in this area, and will no doubt generate many innovative data collections and algorithms. Engagement from university scholars, researchers, and librarians is vital to ensure that society understands these new creations, is able to evaluate their reliability and bias, and has durable and equitable access to them to provide accountability and to support important discoveries that are not easily monetized. For those interested in this topic, the Program on Information Science has published reports and articles on big data inference and ethics.
