Technology companies have been at the center of many recent public controversies, from hacking threats and data security to fake news and algorithmic manipulation. Do the ethics of technological advancement—"tech ethics"—have any influence on these companies' behavior?
Ben Green, Ford School assistant professor and postdoctoral scholar in the Michigan Society of Fellows, seeks to answer this question in an upcoming paper.
"Tech ethics is vague and toothless," Green writes in a soon-to-be-published paper that was quoted in Fortune magazine. "It is subsumed into corporate logics and incentives, and has a myopic focus on individual engineers and technology design rather than on the structures and cultures of technology production."
Green also recently published an essay with Amba Kak in Slate discussing the increasing presence of human oversight in A.I. policy.
"And although placing humans back in the 'loop' of A.I. seems reassuring, this approach is instead 'loopy' in a different sense: It rests on circular logic that offers false comfort and distracts from inherently harmful uses of automated systems," the authors argue.
Even though A.I. was created to counter human subjectivity, it is humans who are then tasked with catching the errors that A.I. systems make, Green and Kak explain.
"Policymakers and companies eager to find a 'regulatory fix' to harmful uses of technology must acknowledge and engage with the limits of human oversight rather than presenting human involvement—even 'meaningful' human involvement—as an antidote to algorithmic harms," the authors demand.
Read the news items featuring Green below:
- Eye on A.I., Fortune, June 8, 2021
- The False Comfort of Human Oversight as an Antidote to A.I. Harm, Slate, June 15, 2021