ChatGPT, OpenAI’s artificial-intelligence (AI) chatbot, has been subject to intense scrutiny since its launch in November, becoming instantly popular because it is free and easily accessible. Proponents are amazed at its ability to synthesize information into readable, accurate reports. Education advocates are wary of its misuse by students and even scholars. Others note that the underlying technology, known as a large language model (LLM), can be inaccurate and is hampered by inherent biases. LLMs are AI tools that can read, summarize and translate text and predict the next words in a sequence, letting them generate sentences and paragraphs that resemble how humans talk and write. Now other tech companies, such as Google, are releasing competing LLMs.
Through the Technology Assessment Project, Shobita Parthasarathy, professor of public policy and director of the Science, Technology, and Public Policy (STPP) Program, and her research team released a report about the social, environmental, equity, and political implications of LLMs. One of their major findings is that even if LLM developers filter out overtly objectionable text, these technologies are likely to exacerbate existing biases and inequities.
“Because of the concentrated development landscape and the nature of LLM datasets,” the report states, “the new technologies will not represent marginalized communities adequately. They are likely to systematically minimize and misrepresent these voices while amplifying the perspectives of the already powerful.”
The report also observes that because LLMs are trained largely on older English-language texts and have no sense of time, they are likely to reflect outdated understandings of cultures, particularly those that have been historically marginalized.
This work has gained increasing recognition as part of the ongoing debate about ChatGPT and its rivals.
A recent article in Nature cited Parthasarathy’s work, noting that “besides directly producing toxic content, there are concerns that AI chatbots will embed historical biases or ideas about the world from their training data, such as the superiority of particular cultures. Because the firms that are creating big LLMs are mostly in, and from, these cultures, they might make little attempt to overcome such biases, which are systemic and hard to rectify, she adds.”
Parthasarathy previously told Asian Scientist: “Big companies are all doing it because they assume that there is a very large lucrative market out there. History is often full of racism, sexism, colonialism and various forms of injustice. So the technology can actually reinforce and may even exacerbate those issues. They’re all privately driven and privately tested, and companies get to decide what they think a good large language model is. We really need broader public scrutiny for large language model regulation because they are likely to have enormous societal impact.”