Artificial Intelligence + Libraries

The impact of artificial intelligence on libraries and archives, and in turn, the direct bearing of librarianship on the development of intelligent information systems, have become topics of great interest in recent years. On the one hand, artificial intelligence is increasingly being deployed to improve library collections and services, from machine-learning-generated metadata to advanced methods for search and discovery. On the other hand, values central to contemporary academic libraries, and to knowledge and memory institutions more generally (for instance, open access, information literacy, accessibility, and intellectual freedom), can and should inform larger conversations about the development of artificial intelligence, and in particular the collection and stewardship of the data that powers AI.

During the 2019-2020 academic year, the Ahmanson Lab will host a series of talks exploring these and other intersections between artificial intelligence and libraries.


The Library Age

Nicole Coleman
Digital Research Architect
Research Director, Humanities + Design
Stanford Libraries
October 7, 2019 | 2:00PM
Ahmanson Lab | LVL 301 (Map)

Advances in technology present opportunities for optimization that can benefit humanity. But how do we know when, and what, to optimize? With our most powerful optimizing engines today fueled by information from the past, we need curators and subject experts to help ensure that we are not feeding our systems garbage. Stanford Libraries' AI studio convenes subject experts and technologists to find solutions to the runaway information engineering that characterizes so many of our encounters with AI today.

Nicole is the Digital Research Architect at Stanford University Libraries, working within the Digital Library Systems and Services group. With Dan Edelstein, she co-directs Humanities + Design, a research lab based at the Center for Spatial and Textual Analysis (CESTA).

RSVP for this talk.


Mass Metadata Generation at Getty Digital

Nathaniel Deines
Project Manager
Getty Digital
Getty Research Institute
February 24, 2020 | 2:00PM
Ahmanson Lab | LVL 301 (Map)

Nathaniel Deines, a project manager focusing on large-scale digital projects at Getty Digital, will provide an overview of Getty Digital's methods for mass metadata generation, including artificial intelligence (specifically, computer vision), GIS tools, and crowdsourcing. Getty Digital at the J. Paul Getty Trust, through engagement and collaboration, enables the programs of The Getty and the broader cultural heritage community to create, manage, and publish high-quality, interoperable, and easy-to-use digital content on a coherent and sustainable platform. Major new areas of concentration are digitization, linked open data, enabling computational art history research, and building a shared, distributed data environment.
 
Nathaniel Deines is an IT Project Manager in Getty Digital at the J. Paul Getty Trust. Prior to joining Getty Digital, he was a project manager in the Digital Art History department of the Getty Research Institute. Nathaniel received his bachelor's degree in the Comparative History of Ideas from the University of Washington and his M.A. in Aesthetics and Politics from CalArts, where he focused on a critical theory of technology.

RSVP for this talk.