Fireside Chat with Dr. Ayanna Howard on Bias and Intersectionality in AI

One Concern
Nov 22, 2021


In October, One Concern held a fireside chat with Dr. Ayanna Howard, Dean of The Ohio State University College of Engineering and founder of Zyrobotics, on AI bias and intersectionality. As part of our company’s ongoing effort to uncover our blind spots, we hold open dialogues with leading industry experts who help us advance and challenge our AI modeling for climate-related financial risk.

Below we’ve condensed the takeaways from the discussion. Thank you to Dr. Howard and Anand Sampat, Director of Solutions at One Concern, for sharing their insights.

What is Intersectionality, and How is it Relevant to Machine Learning?

At its core, intersectionality explains how human attributes such as race, gender, and disability can overlap to create systems of discrimination that are greater than the sum of their individual parts. Scholar Kimberlé Crenshaw first popularized the concept of intersectionality in 1989 to describe the employment discrimination that Black women face on the basis of both race and gender. Unfortunately, because AI algorithms reflect the real world, intersectional bias shows up in them as blind spots.

In her research, Dr. Howard found that facial recognition algorithms that work well at identifying individuals by gender alone plummeted in accuracy when attributes such as gender and age were combined. “Intersectionality in AI is when you have multiple attributes, and you’re evaluating the accuracy of an algorithm, and the accuracy is different when you have the combined attributes together as opposed to separate. So that really could impact your algorithm and its impact,” said Dr. Howard.

AI algorithms can be biased toward specific groups and struggle to produce equitable outcomes for intersectional groups. Examples of bias in AI abound, from facial recognition software struggling to recognize darker skin tones to autocorrect and hate-speech algorithms failing to recognize local vernacular in different areas of the world. Earlier natural language models were trained on newspaper text from The New York Times and The Wall Street Journal, which doesn’t holistically represent the rest of the world, according to Dr. Howard.

When models are trained on incomplete or biased data, they can reproduce many societal divides. However, data scientists can mitigate the biases in their models once they are aware of them.

How Can Bias in AI and ML Models Be Mitigated?

AI/ML can reproduce societal biases, especially when models are trained on data that doesn’t reflect the experiences of diverse groups of people.

The first step for One Concern data scientists is to seek out datasets that encompass many demographics and use them to make their models as complete as possible. For example, Dr. Howard suggests using creative indicators like rates of Wi-Fi connectivity or home insurance coverage in a community to increase accuracy for intersectional groups when demographic data falls short. However, when diverse datasets aren’t available, we can still take further steps to minimize bias.
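As a rough illustration of the proxy-indicator idea (not One Concern’s actual pipeline), the sketch below joins hypothetical community-level proxies, such as Wi-Fi connectivity and insurance coverage rates, onto household-level training data. All column names and values are made up.

```python
import pandas as pd

# Hypothetical household-level training records.
households = pd.DataFrame({
    "community_id": [101, 101, 102, 103],
    "building_age": [42, 15, 67, 8],
    "flood_zone":   [1, 1, 0, 1],
})

# Hypothetical proxy indicators that can stand in for missing demographic detail.
proxies = pd.DataFrame({
    "community_id":       [101, 102, 103],
    "wifi_connectivity":  [0.62, 0.91, 0.48],
    "insurance_coverage": [0.55, 0.83, 0.37],
})

# Join the proxies onto the training data so the model can pick up
# signals correlated with under-represented groups.
training_data = households.merge(proxies, on="community_id", how="left")
print(training_data)
```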

Dr. Howard suggested teams make a practice of checking their models with inputs across intersectional groups, such as race and gender, to ensure the accuracy of the models’ outputs doesn’t change. In fact, her team is developing a diversity scorecard for AI/ML models, allowing teams such as ours to see how our algorithms perform against social indicators.
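One way to put this practice into code is to disaggregate a standard accuracy metric by each attribute and by their intersection. The minimal sketch below assumes a classifier whose predictions are already attached to an evaluation DataFrame; the column names (`race`, `gender`, `y_true`, `y_pred`) and data are hypothetical.

```python
import pandas as pd
from sklearn.metrics import accuracy_score

def accuracy_by_group(df: pd.DataFrame, group_cols: list) -> pd.Series:
    """Accuracy of predictions, disaggregated by every combination of group_cols."""
    return (
        df.groupby(group_cols)[["y_true", "y_pred"]]
          .apply(lambda g: accuracy_score(g["y_true"], g["y_pred"]))
          .rename("accuracy")
    )

# Hypothetical evaluation set with model predictions already attached.
eval_df = pd.DataFrame({
    "race":   ["A", "A", "B", "B", "A", "B"],
    "gender": ["F", "M", "F", "M", "F", "M"],
    "y_true": [1, 0, 1, 1, 0, 0],
    "y_pred": [1, 0, 0, 1, 0, 1],
})

print(accuracy_by_group(eval_df, ["race"]))            # single attribute
print(accuracy_by_group(eval_df, ["race", "gender"]))  # intersectional slices
```

If accuracy on the single attribute looks fine but drops for a combined slice, that is exactly the intersectional blind spot Dr. Howard describes.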

Second, once biases are found, data scientists can retrain their models with synthetic datasets to correct them. In her research, Dr. Howard has found that retraining models to account for bias often improves their accuracy for all groups, not just marginalized communities. Of course, oversampling and synthetic data aren’t always 100% accurate, but they are far more effective than having no intersectional inputs at all. “By creating models that can simulate [the data we need], it is not perfect, but it allows us to increase our sample size,” said Dr. Howard.
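A simple form of this idea is oversampling: upsampling an under-represented intersectional slice before retraining. The sketch below is only an illustration under that assumption, not One Concern’s or Dr. Howard’s actual method, and all data and column names are hypothetical.

```python
import pandas as pd
from sklearn.utils import resample

def oversample_group(df, mask, target_size, seed=0):
    """Upsample the rows selected by `mask` (with replacement) to `target_size`
    and return the rebalanced training frame."""
    upsampled = resample(df[mask], replace=True, n_samples=target_size, random_state=seed)
    return pd.concat([df[~mask], upsampled], ignore_index=True)

# Hypothetical training frame: the intersectional slice (race "B", gender "F")
# is under-represented relative to the rest of the data.
train = pd.DataFrame({
    "race":   ["A"] * 8 + ["B"] * 2,
    "gender": ["F", "M"] * 4 + ["F", "F"],
    "label":  [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
})

mask = (train["race"] == "B") & (train["gender"] == "F")
balanced = oversample_group(train, mask, target_size=8)
print(balanced.groupby(["race", "gender"]).size())
```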

One Concern uses synthetic data and data interpolation to close gaps in the available ground-truth data, and our digital twin can do the same for intersectional groups when we test our resilience intelligence products across demographics.
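The details of that interpolation aren’t covered in the chat; as a generic illustration only, the sketch below fills spatial gaps in sparse ground-truth observations with linear interpolation. The coordinates and downtime values are invented.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical sparse ground-truth observations: (longitude, latitude) -> downtime in days.
observed_points = np.array([
    [-122.40, 37.78],
    [-122.30, 37.80],
    [-122.45, 37.70],
    [-122.35, 37.74],
])
observed_downtime = np.array([3.0, 1.5, 6.0, 4.2])

# Regular grid of locations where no ground truth was collected.
lon, lat = np.meshgrid(
    np.linspace(-122.45, -122.30, 50),
    np.linspace(37.70, 37.80, 50),
)

# Linearly interpolate the sparse observations onto the grid to fill the gaps.
estimated_downtime = griddata(observed_points, observed_downtime, (lon, lat), method="linear")
print(np.nanmean(estimated_downtime))
```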

Relevance of Intersectionality to One Concern

We are building the world’s first digital twin of the built and natural environments and everything that connects the two, whether the power grid, ports, or airports that serve a particular location. For example, the U.S. digital twin encompasses 57 billion recovery curves, 7 trillion downtime data points, and 14 trillion damage recovery data points that consider demographic data to identify resilience divides.

Our research has found a resilience divide exists in the U.S.: there are vast differences in how demographic groups within the same community experience a disaster.

Low-income communities experience disparate impacts relative to the general population during extreme events. They face longer downtime and more severe disruption. Disasters exacerbate the existing inequalities in our communities, leaving already vulnerable individuals even more exposed to the impacts of a hazard.

However, developers and commercial real estate firms can help close the resilience divide, and further their own ESG goals, by investing in resilient, affordable housing. Through mitigation efforts such as seismic retrofitting, firms can free up public resources to protect networks such as the power grid during a disaster. Intersectionality is a critical element of effective resilience action.

Ensuring our models are unbiased is a key priority for One Concern as we continue to uncover the resilience divide and work to close it through education and disaster mitigation.
