Large Language Models (LLMs) are artificial intelligence tools that can read, summarize, and translate text and predict the next words in a sentence, allowing them to generate language much as humans speak and write. Shobita Parthasarathy, professor of public policy and director of the Science, Technology, and Public Policy Program, recently released a report on how LLMs could exacerbate existing inequalities.
"Big companies are all doing it because they assume that there is a very large lucrative market out there. History is often full of racism, sexism, colonialism and various forms of injustice. So the technology can actually reinforce and may even exacerbate those issues," Parthasarathy told Asian Scientist. "They’re all privately driven and privately tested, and companies get to decide what they think a good large language model is. We really need broader public scrutiny for large language model regulation because they are likely to have enormous societal impact."