Interview with Johanna Okerlund, STPP Postdoctoral Fellow

March 4, 2021
In recent years, it has become clear to me that technology and AI systems are not neutral, and despite there not being a clear path forward, it is still crucial to take social considerations as the starting point when working with AI.

Johanna Okerlund

Ph.D.
Postdoctoral Fellow

Johanna Okerlund is an STPP postdoctoral fellow working on the Rethinking Computer Science Education: Bringing Public Interest Technology into Undergraduate and Postdoctoral Training project. Dr. Okerlund, whose background is in computer science, will receive training in the equity, justice, and policy dimensions of data and technology, and then help rework the University of Michigan's undergraduate computer science curriculum to include sustained attention to the social, moral, equity, and policy dimensions of data and technology.

How has your role as a teacher and educator influenced your work?
We are facing really complex challenges such as political polarization, lack of a common understanding of truth or reality, racism, and income inequality. Many of these challenges relate to or are exacerbated by technology. I may not see solutions to these challenges in my lifetime, and as a technologist, I'm not sure how much progress I will be able to make towards them myself. While this may seem discouraging, I do see hope in future generations, and I get a glimpse of that hope with the students I mentor and teach.

I don't consider my role to be preparing them for specific kinds of jobs or to contribute to a particular technological landscape; rather, I am helping them prepare to shape the types of jobs that exist in the future, or to shape the technological landscape itself. Working as a teacher and educator allows me to think further into the future, not in terms of specifics about what the future will look like, but rather in terms of what kinds of skills and mindsets are needed for radical and creative envisioning of a more equitable and just world.

How did you become interested in AI and Human-Computer Interaction?
To be honest, I was initially interested in both AI and Human-Computer Interaction because I thought they were fun. My background is in computing and I thought it was really neat how computers could be programmed to recognize objects, generate music, or recommend content for humans to consume. I was drawn to the creative potential of Human-Computer Interaction. In computing, we are often limited in our conceptualization of a computer as a screen, a mouse, and a keyboard. Human-Computer Interaction asks how interaction with digital information could be more embodied, tangible, and embedded in our environment or communities in an intuitive way.

I was initially interested in interactive systems for novel forms of creative expression through sound, visuals, or something we had not yet thought of. Both AI and Human-Computer Interaction seemed to offer the possibility of unlocking untapped human potential and I was interested to discover what that looked like. I have continued to think about ways AI or Human-Computer Interaction can offer cathartic experiences for humans or humanity. Now, however, those ideas are coupled with mindfulness of the fine line between technology that emancipates and technology that reinforces problematic norms.

How did you become interested in the social and ethical issues related to AI?
When I first learned about AI, I considered it to be separate from social or ethical issues. I had a separate interest in justice and ethics, such as through reading about feminist theory and income inequality, but my takeaway from conversations or readings on these topics was that there is not yet a clear path forward. Most of my effort related to social and ethical issues went towards articulating the problems, or discussing why possible solutions would not work. I did not feel a sense of agency for being able to make any progress towards solving the problems, so I kept these interests separate from what I thought about relative to AI or computing. In recent years, however, it has become clear to me that technology and AI systems are not neutral, and despite there not being a clear path forward, it is still crucial to take social considerations as the starting point when working with AI.

What are you excited to work on or learn during your postdoc with STPP?
I am interested to learn more about societal and political perspectives on technology. Most of my work as a technologist and as a Human-Computer Interaction researcher is centered around the technology itself, focusing on the design and evaluation of interactive systems. My understanding of the implications of a particular technology is usually grounded in the way people use it and the interactions that immediately surround it. STPP focuses on technology from many different perspectives: politics, funding, social construction, and history, which are all important considerations. I am excited to step outside of my discipline and understand how others are approaching these topics.

Part of my postdoc position involves figuring out how to integrate social and ethical issues into the technical practice of computing or Computer Science courses. When students are learning to code, for example, what social considerations should be part of that experience? One of the challenges is that the low-level technical practice of coding is often separated from thinking about real-world applications, and even more separated from thinking about the implications of those applications. I am wondering how a broad critical understanding of science and technology can inform even the lowest-level technical endeavors, and whether a critical mass of technologists engaging in such critical practice can help drive a shift in the culture around technology relative to social and ethical issues.