Large language models (LLMs)—machine learning algorithms that can recognize, summarize, translate, predict, and generate human language on the basis of very large text-based datasets—are likely to provide the most convincing computer-generated imitation of human language yet. Because LLMs generate more sophisticated, human-like language than their predecessors, and because they perform well on tasks for which they have not been explicitly trained, we expect that they will be widely used. Policymakers might use them to assess public sentiment about pending legislation, patients could use them to summarize and evaluate the state of biomedical knowledge and thereby empower their interactions with healthcare professionals, and scientists could use them to translate research findings across languages. In sum, LLMs have the potential to transform how and with whom we communicate.
However, LLMs have already generated serious concerns. Because they are trained on text from old books and web pages, LLMs reproduce historical biases and hateful speech toward marginalized communities. They also require enormous amounts of energy and computing power, and are thus likely to accelerate climate change and other forms of environmental degradation. In this report, we analyze the implications of LLM development and adoption using what we call the analogical case study (ACS) method. This method examines the history of past technologies that are similar in form, function, and projected implications in order to anticipate the implications of emerging technologies.
The report first summarizes the LLM landscape and the technology’s basic features. We then outline the implications identified through our ACS approach. We conclude that LLMs will produce enormous social change, including: 1) exacerbating environmental injustice; 2) accelerating our thirst for data; 3) becoming quickly integrated into existing infrastructure; 4) reinforcing inequality; 5) reorganizing labor and expertise; and 6) increasing social fragmentation. LLMs will transform a range of sectors, but the final section of the report focuses on how these changes could unfold in one specific area: scientific research. Finally, using these insights, we provide informed guidance on how to develop, manage, and govern LLMs.
The key question is: How might we develop and regulate LLMs to maximize their societal benefits while minimizing their harm?
About the Technology Assessment Project
STPP's Technology Assessment Project (TAP) anticipates the implications of emerging technologies and uses these insights to develop better technology policies.
We use an analogical case study approach to analyze the social, economic, ethical, equity, and political dimensions of emerging technologies, such as facial recognition, autonomous vehicles, CRISPR therapies in humans, and COVID-19 contact-tracing apps. Our distinctive evaluation approach can be applied to technologies in a range of areas.