Large Language Models

May 2021 - Current

Summary

Large language models (LLMs), machine learning algorithms that can recognize, predict, and generate human language on the basis of very large text-based datasets, have captured the imagination of scientists, entrepreneurs, and tech-watchers. While the technology could improve the effectiveness and efficiency of automated question answering, machine translation, and text summarization systems, and perhaps even enable superintelligent machines, early studies already suggest that the same shortcomings found in other types of artificial intelligence (AI)-based decision-making systems and digital technologies may also plague LLMs.

This project explores the social, ethical, and equity dimensions of LLMs to develop a nuanced understanding of the technology and its implications.

The key question is: How might we develop and regulate LLMs to maximize their societal benefits while minimizing their harm?

Shobita Parthasarathy

Funding partners

This project is funded by a generous grant from the Alfred P. Sloan Foundation.

STPP's Technology Assessment Project (TAP) is a research-intensive think tank dedicated to anticipating the implications of emerging technologies and using these insights to develop better technology policies. It uses an analogical case study approach to analyze the social, economic, ethical, equity, and political dimensions of emerging technologies. 

This case study analysis and its recommendations will provide concrete steps that scientists, engineers, companies, governments, and civil society advocates can take to manage the technology and its various applications, and will inform future areas of engagement by the Sloan Foundation and its grantees.

Josh Greenberg, Sloan Foundation Program Director
