Facial recognition (FR) technology was long considered science fiction, but it is now part of everyday life for people all over the world. FR systems identify or verify an individual’s identity based on a digitized image alone, and are commonly used for identity verification, security, and surveillance in a variety of settings, including law enforcement, commerce, and transportation. Schools have also begun to use it to track students and visitors, for purposes ranging from automated attendance-taking to school security. FR can be used to identify people in photos, in videos, and in real time, and is usually framed as more efficient and accurate than other forms of identity verification. However, a growing body of evidence suggests that it erodes individual privacy and disproportionately burdens people of color, women, people with disabilities, and trans and gender non-conforming people.
In this project, we focused on the use of facial recognition in schools because its adoption was not yet widespread and because it would affect particularly vulnerable populations. Through an iterative process, we developed historical case studies of similar technologies, analyzed their social, economic, and political impacts, and produced recommendations for policymakers and stakeholders.
We strongly recommend that use of FR be banned in schools.
Recent Impact: In its 2023 report on the benefits and harms of facial recognition technology in K-12 schools, the New York State Office of Information Technology Services cited STPP's report on the topic. Weighing not only accuracy but also the likelihood that the technology would exacerbate bias and harm against already marginalized communities, the report drew on our finding that the use of facial recognition in schools would normalize a constant state of surveillance, much as closed-circuit television has in the UK. In response to the report, the state legislature banned this use of the technology.
STPP's Technology Assessment Project (TAP) is a research-intensive think tank dedicated to anticipating the implications of emerging technologies and using these insights to develop better technology policies. It uses an analogical case study approach to analyze the social, economic, ethical, equity, and political dimensions of emerging technologies.
- Facial Recognition Heads to Class. Will Students Benefit?
- Police use of facial recognition technology soars in Minnesota
- Schools Adopt Face Recognition in the Name of Fighting Covid
- Fixing FERPA to Protect Marginalized Students, The Regulatory Review
- Does Michigan Proposal 2 Go Far Enough To Protect Your Digital Data?
- Predicting Technology Futures by Examining Our Past, by Hannah Rosenfeld, The Commons
- Facial recognition cameras in schools erode privacy and normalize surveillance by Jurgita Lapienytė, Cybernews
- U-M study finds facial recognition technology in schools presents many problems, recommends ban
- Knox Schools Contract Provides Little Protection Over Student Data in Some Virtual Classrooms
- Why Biden nominee Miguel Cardona should make sure facial recognition stays out of schools
- Modern policing: The controversy over facial recognition
- Modern policing: Use of facial recognition in Detroit, Detroit Public Television
- Millions of faces scanned without approval. We need rules for facial recognition, Los Angeles Times
- New York State to Suspend Facial Recognition in Schools, Privacy Hub
- Schools face mounting pressure to drop e-proctoring software, Business Insider
- How police monitor social media to find crime and track suspects, MLive
- Those Facebook selfies you’re posting could end up in a police database, Penn Live Patriot-News