September 11, 2024

Hybrid - Research Ethics and Policy Series (REPS)/Bronstein Lecture: "AI is Nothing Without Us: A Call for a Human Rights Approach to the Future of Trustworthy Science (?)", led by Mary L. Gray

12:00pm - 1:00pm • Hybrid: RCH B102AB, Richards Bldg., 3700 Hamilton Walk (and virtual via Zoom)

AI is Nothing Without Us:

A Call for a Human Rights Approach to the Future of Trustworthy Science (?)

Mary L. Gray
Senior Principal Researcher at Microsoft Research
Faculty Associate, Berkman Klein Center for Internet and Society, Harvard University

Every corner of science hopes to plug in artificial intelligence (AI) to accelerate its efforts. But developing AI—using examples of prior decisions to create computational models of 'typical output'—often involves studying how people move, think, feel, and interact with each other and their environments, erasing clear lines between conducting social research and building useful computer software. This talk argues that AI, dependent on sampling and experimenting with our social worlds, will require all AI-driven scientific inquiry to center its commitments to human rights if we are to hold onto the public's trust in science. It recounts the controversial 2014 Facebook-Cornell study on emotional contagion and contrasts it with a more participatory approach to AI innovation, the ASL Citizen study, which prioritizes participants' autonomy and dignity. From that comparison, the talk maps out a human rights approach to ethical research that picks up where the Belmont Report left off.

I will argue for reforms in data de-identification techniques and emphasize the importance of shifting from informed consent to meaningful contribution and community involvement in AI research. By building on principles from the Belmont Report and integrating perspectives such as mutuality, care ethics, and dwelling, researchers can foster public trust and ensure AI development benefits society. I will end the talk with a call for immediate action to establish responsible AI governance, at both the institutional and federal levels, to maintain public confidence and support the trustworthy advancement of AI not only as a scientific tool but as a research practice.

Mary L. Gray is Senior Principal Researcher at Microsoft Research and Faculty Associate at Harvard University’s Berkman Klein Center for Internet and Society. She maintains a faculty position in the Luddy School of Informatics, Computing, and Engineering with affiliations in Anthropology and Gender Studies at Indiana University. Mary, an anthropologist and media scholar by training, focuses on how people’s everyday uses of technologies transform labor, identity, and human rights. Mary earned her PhD in Communication from the University of California at San Diego in 2004, under the direction of Susan Leigh Star. In 2020, Mary was named a MacArthur Fellow for her contributions to anthropology and the study of technology, digital economies, and society.

Lunch Provided. Streaming Available via Zoom.
