Brown Bag: New Models of Scholarly Communication for Digital Scholarship, by Stephen Griffin, University of Pittsburgh

October 30, 2014

My colleague, Stephen Griffin, who is Visiting Professor and Mellon Cyberscholar at the University of Pittsburgh School of Information Sciences, presented this talk as part of the Program on Information Science Brown Bag Series. Steve is an expert in digital libraries and has a broad perspective on the evolution of library and information science, having had a 32-year career at the National Science Foundation (NSF) as a Program Director in the Division of Information and Intelligent Systems. Steve led the Interagency Digital Libraries Initiatives and International Digital Libraries Collaborative Research Programs, which supported many notable digital library projects (including my first large research project).

In his talk, below, Steve discusses how research libraries can play a key and expanded role in enabling digital scholarship and creating the supporting activities that sustain it.

In his abstract, Steve describes his talk as follows:

Contemporary research and scholarship is increasingly characterized by the use of large-scale datasets and computationally intensive tasks. A broad range of scholarly activities relies upon many kinds of information objects, located in distant geographical locations, expressed in different formats, and stored on a variety of media. These data can be dynamic in nature, constantly enriched by other users and automated processes.

Accompanying data-centered approaches to inquiry have been cultural shifts in the scholarly community that challenge long-standing assumptions underpinning the structure and mores of academic institutions, and that call into question the efficacy and fairness of traditional models of scholarly communication. Scholars are now demanding new models of scholarly communication that capture a comprehensive record of workflows and accommodate the complex and accretive nature of digital scholarship. Computation- and data-intensive digital scholarship presents special challenges in this regard, as reproducing results may not be possible based solely on the descriptive information presented in traditional journal publications. Scholars are also calling for greater authority over the publication of their works and rights management.

Agreement is growing on how best to manage and share massive amounts of diverse and complex information objects. Open standards and technologies allow interoperability across institutional repositories. Content-level interoperability based on semantic web and linked open data standards is becoming more common. Information research objects are increasingly thought of as social as well as data objects — promoting knowledge creation and sharing, and possessing qualities that promote new forms of scholarly arrangement and collaboration. These developments are important to advance the conduct and communication of contemporary research. At the same time, the scope of problem domains can expand, disciplinary boundaries fade, and interdisciplinary research can thrive.

This talk will present alternative paths for expanding the scope and reach of digital scholarship and robust models of scholarly communication necessary for full reporting.  Academic research libraries will play a key and expanded role in enabling digital scholarship and creating the supporting activities that sustain it.  The overall goals are to increase research productivity and impact, and to give scholars a new type of intellectual freedom of expression.

From my point of view, a number of themes ran through Steve’s presentation:

  • Grand challenges in computing have shifted focus from computing capacity to managing and understanding information; and repositories have shifted from simple discovery towards data integration.
  • As more information has become available, the space of problems we can examine expands from the natural sciences to other areas — especially to a large array of problems in social science and the humanities; but research funding is shifting further away from the social sciences and humanities.
  • Reproducibility has become a crisis in the sciences; and reproducibility requires a comprehensive record of the research process and scholarly workflow.
  • Data sharing and support for replication still occur primarily at the end of the scientific workflow; accelerating the research cycle requires integrating the sharing of data and analysis into much earlier stages of the workflow, towards a continually open research process.

Steve’s talk includes a number of recommendations for libraries. First and foremost, in my view, is that libraries will need to act as partners with scientists in their research, in order to support open science, accelerated science, and the integration of information management and sharing workflows into earlier stages of the research process. I agree with this wholeheartedly and have made it a part of the mission of our Program.

The talk suggests a set of specific priorities for libraries. I don’t think one set of priorities will fit all research libraries — because the pursuit of projects is necessarily, and appropriately, opportunistic, and depends on the competitive advantage of the institutions involved and the needs of local stakeholders. However, I would recommend adding rapid fabrication, scholarly evaluation, crowdsourcing, library publishing, and long-term access generally to the list of priorities in the talk.



Redistricting and Technology

October 28, 2014

This talk, presented as a guest lecture in Ron Rivest’s and Charles Stewart’s class on Elections and Technology, reflects on the use of technology in redistricting, and on lessons learned about open data, public participation, technology, and data management from conducting crowd-sourced election mapping efforts.

Some observations:

  • On technical implementation: There is still a substantial gap between the models and methods used in the technology stack and those used in mapping and elections. The domain of electoral geography deals with census and administrative units, legally defined relationships among units, and randomized interventions — where GIS deals with polygons, layers, and geospatial relationships. These concepts often map onto one another — with some exceptions — and in political geography one can run into a lot of problems if one doesn’t pay attention to the exceptions. For example, spatial contiguity is often the same as legal contiguity, but not always — and implementing the “not always” part implies a whole set of separate data structures, algorithms, and interfaces (a minimal sketch appears after this list).
  • On policy & transparency: We often assume that transparency is satisfied by making the rules (the algorithm) clear and the inputs to the rules (the data) publicly available. In election technology, however, code matters too — it’s impossible to verify or correct the implementation of an algorithm without the code; and the form of the data matters — transparent data contains complete information, in accessible formats, available through a standard API, accompanied by documentation and evidence of authenticity.
  • On policy & participation: Redistricting plans are a form of policy proposal. Technology is necessary to enable richer participation in redistricting — it enables individuals to make complete, coherent alternative proposals to those offered by the legislature. Technology is not sufficient, though: although the courts sometimes pay attention to these publicly submitted maps, legislatures have strong incentives to act in self-interested ways. Institutional changes are needed before fully participative redistricting becomes a reality.
  • On policy implementation: Engagement with existing grass-roots organizations and the media was critical for participation. Don’t assume that if you build it, anyone will come…
  • On methodology: Crowd-sourcing enables us to sample from plans that are discoverable by humans — this is really useful, as unbiased random sampling of legal redistricting plans is not feasible. By crowd-sourcing large sets of plans, we can examine the achievable trade-offs among redistricting criteria and conduct a “revealed preference” analysis to determine legislative intent.
  • Ad-hoc, miscellaneous, preliminary observations: Field experiments in this area are hard — there are a lot of moving parts to manage: creating the state of the practice, while meeting the timeline of politics, while working to keep the methodology (etc.) clean enough to analyze later. And always remember Kranzberg’s first law: technology is neither good nor bad — nor is it neutral.
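
To make the contiguity example concrete: below is a minimal Python sketch of treating legal contiguity as spatial contiguity plus an exceptions table, then checking a district with a graph search. The unit IDs and the exceptions list are hypothetical illustrations, not data or code from any actual redistricting system.

    from collections import defaultdict, deque

    # Spatial adjacency: which units physically touch (as a GIS overlay would report).
    # The unit IDs here are hypothetical.
    spatial_adjacency = {
        "A": {"B"},
        "B": {"A", "C"},
        "C": {"B"},
        "D": set(),  # an island: touches nothing spatially
    }

    # Legal exceptions: pairs treated as contiguous by statute (e.g., an island
    # assigned to a mainland unit, or units joined across a bridge).
    legal_exceptions = [("C", "D")]

    def legal_adjacency(spatial, exceptions):
        """Combine spatial adjacency with the statutory exceptions."""
        adj = defaultdict(set)
        for unit, neighbors in spatial.items():
            adj[unit] |= neighbors
        for u, v in exceptions:
            adj[u].add(v)
            adj[v].add(u)
        return adj

    def is_contiguous(district_units, adjacency):
        """Check that the district forms one connected component (BFS)."""
        units = set(district_units)
        if not units:
            return True
        start = next(iter(units))
        seen, queue = {start}, deque([start])
        while queue:
            current = queue.popleft()
            for neighbor in adjacency[current] & units:
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
        return seen == units

    adj = legal_adjacency(spatial_adjacency, legal_exceptions)
    print(is_contiguous({"A", "B", "C", "D"}, spatial_adjacency))  # False: D is a spatial island
    print(is_contiguous({"A", "B", "C", "D"}, adj))                # True under the legal rules

The point of the sketch is that the exceptions live in a separate data structure with its own maintenance, validation, and interface burden — exactly the “not always” cost noted above.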

We’ve also written quite a few articles, book chapters, and other pieces that expand on many of these topics.

 


Examples of Big Data and Privacy Problems

October 3, 2014

Personal information continues to become more available, increasingly easy to link to individuals, and increasingly important for research. New laws, regulations, and policies governing information privacy continue to emerge, increasing the complexity of management. Trends in information collection and management — cloud storage, “big” data, and debates about the right to limit access to published but personal information — complicate data management and make traditional approaches to managing confidential data decreasingly effective.

The slides below provide an overview of the changing landscape of information privacy, with a focus on the possible consequences of these changes for researchers and research institutions. This talk was originally presented as part of the Program on Information Science Brown Bag Series.

Across the emerging examples of big data and privacy problems, a number of different challenges recur that appear to be novel to big data, and which drew the attention of the attending experts. In our privacy research collaborations we have started to assign names to these privacy problems for easy reference:

  1. The “data density” problem — many forms of “big” data used in computational social science measure more attributes, contain more granularity and provide richer and more complex structure than traditional data sources. This creates a number of challenges for traditional confidentiality protections including:
    1. Since big data often has quite different distributional properties from “traditional data”, traditional methods of generalization and suppression cannot be used without sacrificing large amounts of utility.
    2. Traditional methods concentrate on protecting tabular data. However, computational social science increasingly makes use of text, spatial traces, networks, images, and data in a wide variety of heterogeneous structures.
  2. The “data exhaust” problem – traditional studies of humans focused on data collected explicitly for that purpose. Computational social science increasingly uses data that is collected for other purposes. This creates a number of challenges, including:
    1. Access to “data exhaust” cannot easily be limited by the researcher – although a researcher may limit access to their own copy, the exhaust may be available from commercial sources; or similar measurements may be available from other exhaust streams. This increases the risk that any sensitive information linked with the exhaust streams can be reassociated with an individual.
    2. Data exhaust often produces fine-grained observations of individuals over time. Because of regularities in human behavior, patterns in data exhaust can be used to ‘fingerprint’ an individual — enabling potential reidentification even in the absence of explicit identifiers or quasi-identifiers (a toy illustration follows this list).
  3. The “it’s only ice cream” problem – traditional approaches to protecting confidential data focus on protecting “sensitive” attributes, such as measures of disfavored behavior, or “identifying” attributes, such as gender or weight. Attributes such as “favorite flavor of ice cream” or “favorite foreign movie” would not traditionally be protected – and could even be disclosed in an identified form. However, the richness, variety, and coverage of big data used in computational social science substantially increases the risk that any ‘nonsensitive’ attribute could, in combination with other publicly available, nonsensitive information, be used to identify an individual. This makes it increasingly difficult to predict and ameliorate the risks to confidentiality associated with release of the data.
  4. The “doesn’t stay in Vegas” problem – in traditional social science research, most of the information used was obtained and used within approximately the same context – accessing information outside of its original context was often quite costly.  Increasingly, computational social science uses information that was shared in a local context for a small audience, but is available in a global context, and to a world audience.  This creates a number of challenges, including:
    1. The scope of the consent, whether implied or express, of the individuals being studied using new data sources may be unclear. And the terms of service and privacy policies of commercial services may not clearly disclose third-party research uses.
    2. Data may be collected over a long period of time, under evolving terms of service and expectations.
    3. Data may be collected across a broad variety of locations – each of which may have different expectations and legal rules regarding confidentiality.
    4. Future uses of the data and concomitant risks are not apparent at the time of collection, when notice and consent may be given.
  5. The “algorithmic discrimination” problem – in traditional social science, models for analysis and decision-making were human-mediated. The use of big data with many measures, and/or complex models (e.g. machine-learning models) or models lacking formal inferential definitions (e.g. many clustering models), can lead to algorithmic discrimination that is neither intended by nor immediately discernible to the researcher.
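
As a toy illustration of the “data exhaust” fingerprinting problem above: the sketch below (in Python, with entirely invented subjects, devices, and event streams) links records across two unrelated sources by matching each stream’s most frequent behaviors, with no explicit identifiers involved.

    from collections import Counter

    # Hypothetical "exhaust" streams: (person or device) -> observed locations,
    # from two independent sources that share no identifiers.
    study_data = {
        "subject_17": ["cafe", "gym", "office", "cafe", "office", "cafe"],
        "subject_42": ["park", "office", "park", "library", "park", "park"],
    }
    commercial_data = {
        "device_a": ["cafe", "office", "cafe", "gym", "cafe", "office"],
        "device_b": ["library", "park", "park", "office", "park"],
    }

    def fingerprint(events, k=3):
        """Reduce a behavior stream to its k most frequent items -- a crude fingerprint."""
        return tuple(sorted(loc for loc, _ in Counter(events).most_common(k)))

    # Link records across sources by matching fingerprints -- no names needed.
    study_prints = {fingerprint(events): person for person, events in study_data.items()}
    for device, events in commercial_data.items():
        match = study_prints.get(fingerprint(events))
        if match:
            print(f"{device} likely corresponds to {match}")

Real reidentification attacks are far more sophisticated, but the structure is the same: regular behavior is itself an identifier.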

Our forthcoming working papers from the Privacy Tools for Sharing Research Data project explore these issues in more detail.


Worldmap: A Spatial Infrastructure to Support Teaching and Research (Summary of the September Brown Bag Talk by Ben Lewis)

September 23, 2014

My colleague, Ben Lewis, who is system architect and project manager for WorldMap, created at the Center for Geographic Analysis at Harvard, presented this talk as part of the Program on Information Science Brown Bag Series. Ben is an expert in GIS systems and platforms and has developed many interesting tools in this area.

In his talk, below, Ben discusses the WorldMap platform (http://worldmap.harvard.edu), which is claimed to be the largest open-source collaborative mapping system in the world, with over 13,000 map layers contributed by thousands of users from around the world. Researchers may upload large spatial datasets to the system, create data-driven visualizations, edit data, and control access. Users may keep their data private, share it in groups, or publish to the world. Ben discussed current work to create and maintain a global registry of map services and take us a step closer to one-stop access for public geospatial data.

A number of themes ran through Ben’s presentation:

  • Space-time coordinates are an organizing facet for a huge variety of human and natural information — everything that happens, happens at a particular time and place.
  • Most of the geospatial web cannot be discovered through standard search engines. A major goal of Ben’s projects is to expose this “dark geoweb”, which he estimates to comprise millions of map layers.
  • Libraries need to be increasingly savvy about space in choosing and developing platforms for discovery and analysis, so that their clients can benefit from advances in GIS services and platforms and geospatial collections.

10 Simple Steps to Building a Reputation as a Researcher, in Your Early Career

September 17, 2014

This talk was sponsored by the MIT Postdoctoral Association with support from the Office of the Vice President for Research.

In the rapidly changing world of research and scholarly communication, researchers face a growing range of options to publicly disseminate, review, and discuss research — options which will affect their long-term reputation. Junior scholars must be especially thoughtful in choosing how much effort to invest in dissemination and communication, and what strategies to use.

In this talk, I briefly review a number of bibliometric and scientometric studies of quantitative research impact, a sampling of influential qualitative writings offering advice in this area, and an environmental scan of emerging researcher profile systems. Based on this review, and on professional experience on dozens of review panels, I suggest some steps junior researchers may consider when disseminating their research and participating in public review and discussion.

My somewhat idiosyncratic recommendations fall into three categories: the tactical, the strategic, and “next steps”:

Tactical Recommendations

  • Identify and use opportunities to communicate:
    • Accept invited talks, where practical
    • Announce when you will be speaking, teaching
    • Share your presentations, writings, and data
  • Create a scholarly identity
    • Obtain an ORCID, domain name, Twitter handle, LinkedIn profile, and Google Scholar profile
    • Create a short bio and longer CV
    • Develop a research theme, and signature idea
  • Communicate broadly
    • Publish writings as Open Access when possible
    • Publish data and software as open data and open source
    • Use social media (LinkedIn, Twitter) to announce new publications, teaching, and speaking
  • Develop communications skills early
    • Take writing lessons early
    • Take public speaking lessons early
  • Monitor your impact
    • Monitor news, citation, social media metrics, and altmetrics that reflect the impact of your work
    • Keep records
    • Do this systematically, regularly, but not reactively or obsessively
  • Focus on Clarity and Significance
    • Do research that is important to you and that you think is important to the world
    • When writing about your research, work to maximize clarity – including in abstracts, titles, and citations
  • Give credit generously
    • Cite software you use
    • Cite data on which your analyses rely
    • Don’t be afraid to cite your own work
    • Discuss authorship early, and document contributions publicly

Unordered Strategic Recommendations

  • Do research that is important to you and that you think is important to the world
  • Manage your research program – find a core theme, a signature idea, and regularly review comparative strengths, comparative weaknesses, timely opportunities and future threats
  • Collaborate with people you respect and like working with; start with small steps
  • Take a positive and sustained interest in the work and career of others; this is the foundation of professional networking
  • Make a moderate but systematic effort to understand and monitor the institutions within which your work is embedded.
  • Identify your core strengths. Build a career around those.
  • Identify the weaknesses that are continual stumbling blocks. Make them good enough.
  • Pay attention to your world: exercise, sleep, diet, stress, relationships
  • Don’t manage your time – manage your life: know your values, choose your priorities, monitor your progress
  • Align your career with your core values

Ten Things to try right now…

Identify yourself 

1.  Register for an ORCID identifier

2. Register for information hubs: LinkedIn, SlideShare, and a domain name of your own

3. Register for Twitter

Describe yourself …
write these and post them to your LinkedIn and ORCID profiles

4. Write and share a 1-paragraph bio

5. Describe your research program in 2 paragraphs

6. Create a CV

Share…

7. Share (on Twitter & LinkedIn) news about something you did or published; an upcoming event in which you will participate; or interesting news and publications in your field

8. Make writing, data, publications, and software available as Open Access (through your institutional repository, SlideShare, Dataverse, FigShare)

Monitor…
check and record these things regularly, but not too frequently (once a month) — and no need to react or adjust immediately

9. Set up tracking of your citations, mentions, and topics you are interested in, using Google Scholar and Google Alerts.

10. Find your Klout score and H-index.

In the full presentation, I show how to gather impact data, review findings from bibliometric research on how to increase impact by choosing titles, venues, and the like; and consider the advice for success given by the scores of books I’ve scanned on this topic.

The full presentation is available here:

 


Developing good scholarly (alt)metrics.

July 22, 2014

To summarize, altmetrics should build on existing statistical and social science methods for developing reliable measures. The draft white paper from the NISO altmetrics project suggests many interesting potential action items, but does not yet incorporate, suggest, or reference a framework for the systematic definition or evaluation of metrics.

NISO offered a recent opportunity to comment on the draft recommendation from their ‘Altmetrics Standards Project’. MIT is a non-voting NISO member, and I am the current ‘representative’ to NISO. The following is my commentary on the draft recommendation. You may also be interested in reading the other commentaries on this draft.

Response to request for public comments on ‘NISO Altmetrics Standards Project White Paper’

Scholarly metrics should be broadly understood as measurement constructs applied to the domain of scholarship/research (broadly, any form of rigorous enquiry) — its outputs, actors, impacts (i.e., broader consequences), and the relationships among them. Most traditional formal scholarly metrics, such as the H-index, Journal Impact Factor, and citation count, are relatively simple summary statistics applied to the attributes of a corpus of bibliographic citations extracted from a selection of peer-reviewed journals. The altmetrics movement aims to develop more sophisticated measures, based on a broader set of attributes and covering a deeper corpus of outputs.
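
For concreteness, the H-index illustrates just how simple these summary statistics are: it is the largest h such that at least h of an author’s papers have h or more citations each. A minimal sketch in Python, with made-up citation counts:

    def h_index(citations):
        """Largest h such that at least h papers have h or more citations each."""
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, c in enumerate(counts, start=1):
            if c >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([25, 8, 5, 3, 3, 1]))  # 3: only three papers have >= 4 citations, so h stops at 3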

As the Draft aptly notes, in general our current scholarly metrics, and the decision systems around them are far from rigorous: “Unfortunately, the scientific rigor applied to using these numbers for evaluation is often far below the rigor scholars use in their own scholarship.” [1]

The Draft takes a step towards a more rigorous understanding of altmetrics. Its primary contribution is to suggest a set of potential action items to increase clarity and understanding.

However, the Draft does not yet identify the key elements of a rigorous (or systematic) foundation for defining scholarly metrics, their properties, and their quality. Nor does the Draft identify key research in evaluation and measurement that could provide such a foundation. The aim of these comments is to start to fill this structural gap.

Informally speaking, good scholarly metrics are fit for use in a scholarly incentive system. More formally, most scholarly metrics are parts of larger evaluation and incentive systems, where the metric is used to support descriptive and predictive/causal inference, in support of some decision.

Defining metrics formally in this way also helps to clarify what characteristics of metrics are important for determining their quality and usefulness.

- Characteristics supporting any inference. Classical test theory is well developed in this area. [2] A useful metric supports some form of inference, and reliable inference requires reliability. [3] Informally, good metrics should yield similar results across repeated measurements of the same purported phenomenon (a toy illustration follows this list).
- Characteristics supporting descriptive inference. Since an objective of most incentive systems is description, good measures must have appropriate measurement validity. [4] In informal terms, all measures should be internally consistent, and the metric should be related to the concept being measured.
- Characteristics supporting prediction or intervention. Since the objective of most incentive systems is both descriptive and predictive/causal inference, good measures must aid accurate and unbiased inference. [5] In informal terms, the metric should demonstrably be able to increase the accuracy of predicting something relevant to scholarly evaluation.
- Characteristics supporting decisions. Decision theory is well developed in this area. [6] The usefulness of a metric depends on the cost of computing the metric and the value of the information that the metric produces. The value of the information depends on the expected value of the optimal decisions that would be produced with and without that information. In informal terms, good metrics provide information that helps one avoid costly mistakes, and good metrics cost less than the expected cost of the mistakes one avoids by using them.
- Characteristics supporting evaluation systems. This is a more complex area, but the fields of game theory and mechanism design are most relevant. [7] Measures that are used in a strategic context must be resistant to manipulation — either (a) requiring extensive resources to manipulate, (b) requiring extensive coordination across independent actors to manipulate, or (c) incentivizing truthful revelation. Trust engineering is another relevant area — characteristics such as transparency, monitoring, and punishment of bad behavior, among other systems factors, may have substantial effects. [8]
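
As a toy illustration of the reliability point above: one simple check is test-retest correlation — compute the same metric twice for the same items and correlate the results. The scores below are invented; statistics.correlation requires Python 3.10+ (numpy.corrcoef is an equivalent alternative).

    import statistics

    # Hypothetical altmetric scores for the same ten articles, measured a week apart.
    week_1 = [12, 45, 3, 88, 20, 7, 54, 31, 9, 66]
    week_2 = [15, 41, 5, 90, 18, 9, 50, 35, 8, 70]

    # Test-retest reliability: correlation across repeated measurements of the
    # same purported phenomenon. Values near 1.0 suggest a reliable measure.
    r = statistics.correlation(week_1, week_2)
    print(f"test-retest correlation: {r:.3f}")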

The above characteristics comprise a large part of the scientific basis for assessing the quality and usefulness of scholarly metrics. They are necessarily abstract, but closely related to the categories of action items already in the report — in particular to Definitions; Research Evaluation; Data Quality; and Grouping. Specifically, we recommend adding the following action items respectively:

- [Definitions] Develop specific definitions of altmetrics that are consistent with best practice in the social sciences on the development of measures.
- [Research evaluation] Promote evaluation of the construct and predictive validity of individual scholarly metrics, compared to the best available evaluations of scholarly impact.
- [Data Quality and Gaming] Promote the evaluation and documentation of the reliability of measures, their predictive validity, cost of computing, potential value of information, and susceptibility to manipulation based on the resources available, incentives, or collaboration among parties.

[1] NISO Altmetrics Standards Project White Paper, Draft 4, June 6, 2014; page 8.
[2] See chapters 5–7 in Raykov, Tenko, and George A. Marcoulides. Introduction to Psychometric Theory. Taylor & Francis, 2010.
[3] See chapter 6 in Raykov, Tenko, and George A. Marcoulides. Introduction to Psychometric Theory. Taylor & Francis, 2010.
[4] See chapter 7 in Raykov, Tenko, and George A. Marcoulides. Introduction to Psychometric Theory. Taylor & Francis, 2010.
[5] See Morgan, Stephen L., and Christopher Winship. Counterfactuals and Causal Inference: Methods and Principles for Social Research. Cambridge University Press, 2007.
[6] See Pratt, John Winsor, Howard Raiffa, and Robert Schlaifer. Introduction to Statistical Decision Theory. MIT Press, 1995.
[7] See chapter 7 in Fudenberg, Drew, and Jean Tirole. Game Theory. MIT Press, 1991.
[8] Schneier, Bruce. Liars and Outliers: Enabling the Trust that Society Needs to Thrive. John Wiley & Sons, 2012.


New Discovery Tools for Digital Humanities and Spatial Data (Summary of the July, Brown Bag Talk by Lex Berman)

July 17, 2014

My colleague, (Merrick) Lex Berman, who is Web Service Manager & GIS Specialist at the Center for Geographic Analysis at Harvard, presented this talk as part of the Program on Information Science Brown Bag Series. Lex is an expert in applications related to digital humanities, GIS, and Chinese history — and has developed many interesting tools in this area.

In his talk, Lex notes how the library catalog has evolved from the description of items in physical collections into a wide-reaching net of services and tools for managing both physical collections and networked resources: the line between descriptive metadata and actual content is becoming blurred. Librarians and catalogers are now in the position of being not only docents of collections, but innovators in digital research, and this opens up a number of opportunities for retooling library discovery tools. His presentation surveyed methods and projects that have extended traditional catalogs of libraries and museums into online collections of digital objects in the humanities — focusing on projects that use historical place names and geographic identifiers for linked open data.

A number of themes ran through Lex’s presentation. One theme is the unbinding of information — how collections are split into pieces that can be repurposed, but which also need to be linked to their context to remain understandable. Another theme is that knowledge is no longer bounded: footnotes and references are no longer stopping points; from the point of view of the user, all collections are unbounded, and the line between references to information and the information itself has become increasingly blurred. A third theme was the pervasiveness of information about place and space — all human activity takes place within a specific context of time and space, and implicit references to places exist in many parts of the library catalog, such as in the titles and descriptions of works. A fourth theme is that user expectations are changing — they expect instant, machine-readable information, geospatial information, mapping, and faceting as a matter of course.

Lex suggested a number of entry points for Libraries to investigate and pilot spatial discovery:

  • Build connections to existing catalogs, which already have implicit reference to space and place
  • Expose information through simple APIs and formats, like GeoRSS (a minimal sketch follows this list)
  • Use and contribute to open services like gazetteers
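
As a sketch of the API entry point above: the snippet below generates a minimal GeoRSS-Simple feed from catalog records, using only the Python standard library. The record fields are invented, and a production feed would carry more channel and item metadata.

    import xml.etree.ElementTree as ET

    GEORSS_NS = "http://www.georss.org/georss"
    ET.register_namespace("georss", GEORSS_NS)

    # Hypothetical catalog records with point coordinates.
    records = [
        {"title": "1854 Map of Boston Harbor", "lat": 42.34, "lon": -70.96},
        {"title": "Survey of the Charles River", "lat": 42.37, "lon": -71.11},
    ]

    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Spatially enabled catalog records"

    for rec in records:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = rec["title"]
        # GeoRSS-Simple encodes a point as "latitude longitude".
        point = ET.SubElement(item, f"{{{GEORSS_NS}}}point")
        point.text = f"{rec['lat']} {rec['lon']}"

    print(ET.tostring(rss, encoding="unicode"))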

 
