Brian Kenny:
Revolutions often have humble origins, a small group with big ideas gathering to plant seeds of disruption. So it was in the dog days of summer in 1956, when 10 academics gathered on the campus of Dartmouth College to discuss how to make machines use language and form abstractions and concepts to solve the kinds of problems now reserved for humans. The conference led to the founding of a new field of study: artificial intelligence. Six decades later, we are in the midst of an AI revolution that is already dramatically changing entire sectors like healthcare, transportation, education, banking, and retail. But AI is not without its critics. Elon Musk famously said, “With artificial intelligence, we’re summoning the demon,” while Stephen Hawking believed the development of full artificial intelligence could spell the end of the human race. So, whose job is it to make sure that such a vision never comes to pass? Today on Cold Call, we’ve invited Professor Tsedal Neeley to discuss her case entitled, “Timnit Gebru: Silenced No More on AI Bias and The Harms of Large Language Models.” I’m your host, Brian Kenny, and you’re listening to Cold Call on the HBR Presents Network.
Brian Kenny:
Tsedal Neeley’s work focuses on how leaders can scale their organizations by developing and implementing global and digital strategies. She’s also the co-author of the book, The Digital Mindset: What it Really Takes to Thrive in The Age of Data, Algorithms and AI. Thank you for joining me today, Tsedal.
Tsedal Neeley:
I’m so happy to be back.
Brian Kenny:
It’s great to have you on again. We’ve had you on the show, I think two or three times at this point. So, you must be doing something right if you keep coming back.
Tsedal Neeley:
I’m just always waiting for the opportunity to come back. And this case that we’re going to discuss is one of my favorites this year.
Brian Kenny:
Yes. So, you have not discussed this with the class yet, is that right?
Tsedal Neeley:
No, I haven’t.
Brian Kenny:
Okay, all right, well, this will be good, because we’re going to ask some questions that we think may surface in the classroom. But I also want to take our listeners kind of between the lines of the case, and get a better understanding of why you wrote it in the first place, how it matches the kind of research that you like to do, and particularly the ideas in your new book. So, let’s just dig in. Can you set the stage for us? What’s the central issue in the case? And when you do discuss it in the classroom, what’s your cold call going to be?
Tsedal Neeley:
So the central issue in the Timnit Gebru story is that here you have an AI expert, a computer scientist, who looks at the harms and the risks that come from artificial intelligence. She was working at Google at the time, and raised some concerns to the company about their large language models. The company didn’t like it. And, ultimately, she claims to have been fired. Google claims that she resigned. But the bottom line is they parted ways. The cold call for this case is, was this situation doomed from the start? Can you have an AI ethics and AI bias expert assessing the technology inside of a company? Or do you need an outsider to ensure that biases are not embedded in your systems and your training mechanisms?
Brian Kenny:
Yes. And this raises issues. We’ve heard a lot about Google. I think Google is one of those companies that everybody knows, but it’s such a behemoth organization that you’re not necessarily aware of all the things that they’re up to. So, for me, this brought in a whole different dimension of what Google is doing. And we’ve heard about cultural issues at Google too. So, I think this case brings some of that to the surface as well. So, lots of interesting things come out of this. How did you hear about this story, and what made you decide to write a case about it?
Tsedal Neeley:
So, I’ve known Timnit Gebru since she was an undergraduate at Stanford University. I met her when she was a freshman, and I was a first-year doctoral student. And you knew that this woman was going to be special. At that time, it wasn’t clear that she would be one of the pioneering voices when it comes to computer vision in AI and, ultimately, AI ethics and bias. She took the tech world by storm. In 2018, working with Joy Buolamwini, Timnit analyzed facial recognition software made by three companies, one of which she was working at at the time. Their work became a landmark study called “Gender Shades.” It showed that the darker a person’s skin tone, the less likely it was that their face would be accurately recognized by the AI. They were the first to bring this to the forefront and show the extent of the inaccuracies that ultimately hurt populations of color through the AI systems at play. Timnit is one of those people who sees things clearly. Everyone is talking about AI today, and AI ethics, and AI bias. She was thinking about this over a decade ago.
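To make the kind of analysis behind “Gender Shades” concrete, here is a minimal sketch in Python, using entirely hypothetical data rather than the study’s actual results. It illustrates the disaggregated evaluation the study popularized: instead of reporting one overall accuracy number, a model’s error rate is computed separately for each demographic subgroup.

```python
# Minimal sketch of a disaggregated accuracy audit (hypothetical data only;
# the real Gender Shades study evaluated commercial gender-classification
# systems across subgroups defined by skin type and gender).
from collections import defaultdict

# Each entry: (subgroup label, whether the model's prediction was correct).
results = [
    ("lighter-skinned women", True), ("lighter-skinned women", True),
    ("lighter-skinned women", True), ("darker-skinned women", True),
    ("darker-skinned women", False), ("darker-skinned women", False),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, was_correct in results:
    totals[group] += 1
    correct[group] += int(was_correct)

# A single overall accuracy can hide large gaps between subgroups.
for group, n in totals.items():
    print(f"{group}: {correct[group] / n:.0%} accurate over {n} examples")
```

The point of the sketch is the reporting convention, not the numbers: averaged together, these hypothetical predictions look respectable, while the per-group breakdown exposes exactly the kind of gap the study documented.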
Brian Kenny:
So, the case title refers to large language models. And I think it would be helpful for people just to understand what that means in the context of AI before we go further into the case details.
Tsedal Neeley:
Many might recognize “large language models” as a term today, and many more will hear about them, because they’re a major type of AI. In 2018, Google unveiled its large language model named BERT. What BERT does is take data, and when I say data, we’re talking about millions, billions, half a trillion words. Those models, eventually, are used to make predictions and contribute to Google’s work on probabilities. So these large language models take text, anything, words, and ultimately they become intelligent through training, and are used to focus, to personalize, to customize, et cetera. The problem with these large language models that Timnit was very worried about is that the larger the models, the less able those who use them are to identify the biases that may be embedded in them, and the closer to impossible it becomes to sanitize them. So, along with co-authors, she was trying to slow down the production of these large language models in order to say, “Wait a minute, do we need them to be this big? And if so, how do we make sure that there aren’t harms being perpetuated through these models?”
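As a concrete illustration of what “becoming intelligent through training” and “making predictions” mean here, below is a minimal sketch, not drawn from the case, that queries the publicly released BERT model through the Hugging Face transformers library (the tooling is an assumption made for illustration, not something discussed in the episode). BERT fills in a masked word by assigning a probability to each candidate, based entirely on patterns in the text it was trained on.

```python
# Minimal sketch (assumes the Hugging Face `transformers` library and the
# public `bert-base-uncased` checkpoint): a masked-language model predicting
# a hidden word from learned probabilities.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# BERT scores candidate words for the [MASK] position using probabilities
# learned from its training text.
for prediction in unmasker("The nurse said [MASK] would be right back."):
    print(f"{prediction['token_str']:>8}  p={prediction['score']:.3f}")
```

The same mechanism is where the concern about bias enters: the probabilities the model returns, for prompts like the one above, can only reflect the associations, stereotyped or otherwise, present in its training text, which is the worry Timnit and her co-authors raised about ever-larger training corpora.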
Brian Kenny:
Yes. What kind of harm are you talking about? Like what would be some of the ways that this would manifest itself?
Tsedal Neeley:
One of the ways it would manifest is that biases get replicated, duplicated, and scaled exponentially when it comes to communities that are being policed, when it comes to Black and brown people, when it comes to refugees. These models are not capable of filtering out the biases in the text they are built on because, by definition, humans are biased, and the text that humans generate and produce will have biases in it. So, people like Timnit are saying these large language models can harm people who are not involved in their design, who are powerless, because biases will be embedded in them. So, let’s slow down, let’s understand them.
Brian Kenny:
So, if you have a homogeneous group that’s sort of designing the model, and feeding the information into the model, all of the bias that goes along with that group is going to be there because there wasn’t any input from underrepresented groups. Is that…?
Tsedal Neeley:
That’s exactly right. And the thing about AI, AI scales exponentially. And so, that’s exactly what they were worried about.
Brian Kenny:
So, let’s talk a little bit about Timnit’s background. She’s a local person, grew up not far from Harvard Business School, is that right?
Tsedal Neeley:
She went to high school in Somerville, Massachusetts, but she landed there because she is a product of East Africa. She was born in Addis Ababa, Ethiopia. Her family comes from both Ethiopia and Eritrea, and there was a border dispute in the ’90s between Eritrea and Ethiopia, which made her and her family vulnerable to conscription to fight in the war. To avoid it, her family left. And when I say her family, it’s her mother and her two sisters. Her sisters are in Ireland, and her mother, along with Timnit, eventually landed in Massachusetts. But Timnit comes from a highly technology-driven family. Her mother is an economist, but her father is an electrical engineer, a PhD, and both sisters are electrical engineers. And she says she grew up loving math, and science, and physics, and never imagined a life that didn’t involve technology and engineering. So, it was a natural place for her to land as a gifted math and science person.
Brian Kenny:
How did she get involved then, in sort of being an AI ethicist? What exactly does that job description look like, if you’re an AI ethicist?
Tsedal Neeley:
It’s interesting, because she would call herself an AI researcher. And part of being an AI researcher is ensuring that ethics issues and biases are not creeping into models. How do you become one? Timnit got her PhD from Stanford University in computer science. She was trained in one of the early labs that was trying to use images from the internet, asking, can we use images as input into AI systems? She discovered this whole area and, ultimately, recognized the problem of AI bias. The thing that’s interesting about her is that she saw these issues so sharply, and she talks about this, because of who she is, because of her background, because of seeing how people get negatively affected when they’re not part of a system or process. And the clarity with which she saw AI bias issues early on, to me, it just blows my mind, because everyone talks about it today. Timnit was one of the first to see it and document it.
Brian Kenny:
And she has sort of a philosophy of the way that she thinks about DEI, and advocacy on this front. Can you talk a little bit about that?
Tsedal Neeley:
Absolutely. What I learned, Brian, with this case, and from talking to her and reading things that she has published, is that AI bias, or AI in general, is inextricably tied to DEI. You cannot separate them. And this is one of the lessons that I learned, because what she says is that the recipients of, the subjects of, those who will suffer the consequences of AI will be communities with limited power. And they’re the ones who are least present to help influence the technology, the models that are getting built. So, one of the things that Timnit did early on was co-found a group called “Black in AI.” She would go to these conferences, AI conferences, with 5,000, 6,000, 7,000 people, and she would say that there would literally be four or five Black people-
Brian Kenny:
Wow.
Tsedal Neeley:
Out of that huge number. And so, as a problem solver, she saw the issue and early on co-founded this group. It’s a very global group, meant to bring the numbers up, to create a platform for people to collaborate, to work together, and also to build the capabilities that could support future technology development.
Brian Kenny:
I found that whole notion of connecting DEI and AI really revelatory, because it sort of reinforces… We talk a lot about systemic issues, systemic justice issues, systemic issues in banking and retail. And it sounded like we were building that sort of systemic bias into the AI world, which would then sort of perpetuate all this stuff, right?
Tsedal Neeley:
At scale.
Brian Kenny:
At scale, yes.
Tsedal Neeley:
That’s exactly right. That’s the biggest lesson that I learned, honestly, from this work and this case: any company, any organization, any group interested in digital transformation, in bringing AI into their work, and in using data to create algorithms and models, cannot ignore the DEI component. And, in fact, they need to make sure that they have the right people looking at the work, helping design the work, and developing the work, because otherwise, flawed humans will create flawed systems.
Brian Kenny:
And this is exactly what Timnit was doing. So here she is, she’s at Google. She, obviously, feels like it’s important to shine a light on what she sees as issues in this area. How was she received by her colleagues at Google?
Tsedal Neeley:
It’s interesting, because what she would say is that there are people who really appreciated her because she was vocal. She would make sure that she supported colleagues. If she saw someone, a minority person, getting interrupted systematically, she would speak up. She would try to improve the culture for women and for people of color at Google. She is fearless. That’s one of the questions I asked her: where does it come from, this fearlessness? You speak out, you are just unafraid, in ways that are unfamiliar to me. But she just has this fire within. And if she sees truth, if she sees something, she’s unafraid to speak up. Now to your question, how was she perceived? Having someone who’s always speaking up fearlessly doesn’t give you peace. Right?
Brian Kenny:
Right.
Tsedal Neeley:
It challenges you. It challenges your culture, challenges the status quo. So, you can imagine how that could be difficult for some portion of an organization, particularly leaders. We don’t like people who agitate.
Brian Kenny:
It seemed like Google was actively involved in sort of coaching the people who were looking at AI ethics within the company, to the extent that they were almost giving them guidelines about, well, here’s how you should communicate about this. And Timnit didn’t follow suit on some of that. Can you talk a little bit about that?
Tsedal Neeley:
Well, the biggest problem this case documents is that her firing or resignation, depending on which side you’re on, came about because Google asked her to retract her paper, the large language models paper, or remove the names of any Google affiliates. The idea was that they didn’t want insiders criticizing or critiquing any of their technology systems. And Timnit’s response, and this was very public, if you’re on Twitter you can see this, was, “No, I can’t retract this paper, because I have co-authors and collaborators, and they’re counting on this publication. But what are the issues? Tell me what the issues are, tell me how to revise this work.” And she didn’t get a lot of details on how to revise the paper, but, citing some procedural issues, they eventually ousted her.
Brian Kenny:
Yes, and she made that public. She let everybody know about that through this platform that she has, so she wasn’t going away quietly.
Tsedal Neeley:
No.
Brian Kenny:
No.
Tsedal Neeley:
She wasn’t going away quietly. And, in fact, she took to Twitter, as she always does to discuss AI ethics or AI bias issues, and recounted her firing in detail, blow by blow, which is how I first saw it. And I said, “Whoa, Timnit, is everything okay there?” But she wanted to make sure that she wouldn’t quietly be fired and tucked away. She says, “No, I want the world to know.” And this is one of the things she says in the case that really made me think: “If everyone takes a little bit of a risk in speaking out, in even naming names, then, over time, the aggregate will be able to protect people in the future.” In other words, if I take a little bit of a risk and speak out, it’ll help everyone in the future, because many of us would be sharing the risk. But the reality, Brian, is that I know few people who are as bold and as courageous as she is.
Brian Kenny:
Yes, because the stakes are super high here. And this wasn’t even a whistleblower situation in the sense that she wasn’t trying to report the company or call them out. She was just trying to shine a light on what she saw as problems with the AI research they were doing, and do it in a responsible way, as a researcher would. So, I found this to be a difficult one to parse, because it doesn’t really map to the whistleblower thing. Now, Google did an after-action review on this. What did they find? And what did they do as a result of…?
Tsedal Neeley:
Not much. The CEO, Sundar Pichai, apologized, acknowledged very publicly what happened, and talked about his regret at losing one of the top AI experts in the world, who happens to be a Black woman. At the same time, nine US members of Congress wrote to him saying that what happened to Timnit was unprecedented censorship, and asking whether he’s really committed to AI ethics. Thousands of people at Google and outside of Google signed a petition. I’ll tell you this, the attention that this garnered, and, of course, the media loved the story because it’s an unusual story as well, so it was featured practically everywhere. And Timnit, who is actually a soft-spoken, self-described lighthearted woman, felt that she had to control and manage the narrative about her and the situation.
Brian Kenny:
Of course.
Tsedal Neeley:
Otherwise, the 800-pound gorilla would.
Brian Kenny:
Yes. So, this raises serious issues about whether or not Google can sort of self-police on this front. And then, if you elevate that beyond Google, can the industry self-police? Nobody wants government oversight of these kinds of things, but is it possible for firms to be as circumspect as they need to be to prevent the kind of future that Stephen Hawking described, which I mentioned in the opening? This is sort of the dynamic that we’re thinking about.
Tsedal Neeley:
This is the exact discussion that we would have in a classroom, this idea of self-policing. Can an organization have internal members self-police without losing favor? Or do you need an outsider of sorts, an outside group, to police your work? Because the damage, given the scale that comes through these models, can be huge. So, was this doomed from the start, having an AI ethics team, or research team, assess, appraise, and critique the technology built internally? Was this ever going to work? And, moving forward, for every company, not just tech companies, because every company is thinking about AI and needing to safeguard against bias, what is the best way to ensure that bias doesn’t creep in? We know that the diversity piece, DEI, is a big deal. The second question is: how are you going to self-police? Those are the fundamental questions that I think this case will prompt us to discuss.
Brian Kenny:
So what’s Timnit’s next move? She is not standing still. She’s not licking her wounds. She’s got big plans.
Tsedal Neeley:
A year after the firing or resignation, depending on which side you’re on, though Timnit says she got fired, she launched her own institute called DAIR, D-A-I-R. DAIR stands for Distributed AI Research Institute, and it’s a space for independent, community-rooted AI research free of big tech’s pervasive influence. She clearly believes that she has to do her work outside of a company so that it can be independent, and so she can develop research, develop insights, and even help other companies with their own reviews without the influence of a given company. The distributed part of DAIR is that her institute’s team members are physically distributed. She has been funded by terrific organizations like the MacArthur Foundation to get started. She’s still figuring out a long-term, sustainable revenue model, but some of her Google colleagues have joined her at DAIR.
Brian Kenny:
Now, that’s interesting, because when I saw that in the case, I thought, “Well, will Google ever accept the findings of an organization outside of their own?” Or Microsoft, or pick your organization that’s steeped in AI, will they find this to be an acceptable source of criticism for what they’re doing?
Tsedal Neeley:
I don’t think they’ll be able to truly know whether their own technology, their own models, their own algorithms can be critiqued in this way. But what Timnit can do, much like universities or an institution like ours, is develop insights that can be generalized, or extrapolated, to better understand some of these emerging technologies. And I think that’s part of what Timnit can do. But I also think that Timnit and her team will be very helpful for many organizations in terms of ensuring that AI ethics aren’t being violated and AI biases aren’t creeping in, so they can help so many companies. I mean, I don’t know if Timnit would completely agree with this view. But when I talk to companies who are trying to build their digital capabilities, who are bringing AI into their systems, and who are building algorithms, I think of Timnit often.
Brian Kenny:
Yes. Well, it sounds like something like DAIR is overdue and badly needed. This has been a great conversation. I have to ask one more question before I let you go. And that is, if you want our listeners to remember one thing about Timnit, and about this case, what would it be?
Tsedal Neeley:
That if you’re interested in artificial intelligence, you must figure out how you will drive AI bias, or AI ethical issues, out of your AI systems. Because it’s not a matter of if, it’s a matter of when biases become harmful. So, you have to think about bias and ethics when you’re thinking about bringing AI into your organization.
Brian Kenny:
Tsedal, thank you for joining me on Cold Call. Until next time, it’s been great to talk to you about this case. Thanks.
Tsedal Neeley:
Thank you so much. I can’t wait.
Brian Kenny:
If you enjoy Cold Call you might also like our other podcasts: After Hours, Climate Rising, Skydeck, and Managing the Future of Work. Find them on Apple Podcasts or wherever you listen. Be sure to rate and review us on any podcast platform where you listen. If you have any suggestions or just want to say hello, we want to hear from you. Email us at coldcall@hbs.edu. Thanks again for joining us. I’m your host, Brian Kenny, and you’ve been listening to Cold Call, an official podcast of Harvard Business School, brought to you by the HBR Presents network.