
Technology

How a quest to build ethical AI detonated a battle inside Google

  • Since Dr. Timnit Gebru announced she had been pushed out of Google, tensions inside the company have flared.
  • CEO Sundar Pichai wants AI to be regulated, but it remains unclear whether Google can police its own work.
  • Some employees are questioning their future at the company, but big tech largely funds the field of ethical AI research.

In 2018, while doing her postdoc at Microsoft Research, Dr. Timnit Gebru co-authored what has become a widely cited study revealing how facial recognition technology was less accurate at identifying women and people of color.

A few months later, she landed a job at Google.

There, her mandate was to help lead a new department devoted to studying the potentially harmful effects of artificial intelligence, a field of research known as ethical AI. Artificial intelligence has become increasingly vital to Google’s business; the company has around 200 employees working on what it calls “responsible AI” for its own products.

CEO Sundar Pichai has repeatedly doubled down on AI’s importance to Google, once calling it “more profound than fire or electricity.” The “ethical AI” team, which currently has around a dozen employees and sits within Google’s Research group, was created to study the longer-term risks of artificial intelligence that go beyond Google’s own walls, and how to prevent them.

One day in early December 2020, Gebru was emailing Abhishek Gupta, the founder of the Montreal AI Ethics Institute and a machine-learning engineer at Microsoft. This wasn’t unusual: the field of AI research is tightly knit and largely funded by big tech companies, and this kind of exchange of ideas is common. They were discussing ideas for a public system that would let people report vulnerabilities in artificial intelligence systems and protect the whistleblowers who found them.

Suddenly, the thread ran cold. The very next day, Gebru tweeted that she had been fired following a dispute over a research paper and comments she had made criticizing Google’s diversity efforts.

The events detonated a battle inside Google’s artificial intelligence division that has reverberated into the wider industry, drawing support for Gebru and her colleagues and concerns over how Google is pursuing research in ethical AI.

Some of Google’s AI employees have demanded that management better protect employees and make bolder commitments to building responsible AI. At the time of publication, almost 2,700 Google employees had signed a petition in solidarity with Gebru.

The mess has highlighted a growing problem for Google, which has tried to spearhead work in responsible AI while also benefiting from artificial intelligence to run its advertising, YouTube, and other critical businesses.

Pichai has made repeated public calls to build ethical guardrails around artificial intelligence, but he has also argued that tech companies should, to some extent, regulate themselves. That tension has become a lightning rod, highlighting Google’s challenge in allowing employees the freedom to criticize the mechanisms that underpin its own profit machine.

A list of demands

Google AI lead Jeffrey Dean. Thomas Samson/Getty Images

Last week, when asked about the ongoing turmoil inside Google’s artificial intelligence division, Pichai said the events seemed more intense because Google is “a lot more transparent” than other companies.

But Google perhaps underestimated the blowback it would receive when it pushed Gebru out.

“Some people don’t know the research ecosystem at all, so they don’t understand how doing this to me would reverberate,” Gebru told Business Insider in an interview.

Google, for its part, says it didn’t fire Gebru but accepted her resignation. Before her exit, the company had asked Gebru to retract a research paper she had co-authored with several other employees and outside academics. The paper drew on previous research to conclude that the unchecked pursuit of large language models could perpetuate harmful biases and cause environmental damage.

Google’s AI chief, Jeffrey Dean, later said in a statement that he deemed the research paper insufficient.

Gebru pushed back at the time, telling management that if she did not get a full explanation for the retraction request, she would discuss setting a date for leaving Google. Just before her exit, she also sent an email to an internal resource group criticizing Google’s efforts in diversity. Megan Kacholia, vice president of engineering at Google Research, then emailed Gebru telling her that Google had accepted her resignation. Kacholia also said that Gebru’s email to the resource group was grounds to expedite her termination.

“I think if it was just this paper they wouldn’t have fired me,” Gebru told Insider. “It was a confluence of things. It was constantly speaking up, and they wanted to take this as an opportunity.”

Gebru had been an outspoken critic of the tech industry’s lack of diversity, including Google’s own, and her exit has also raised questions about Google’s diversity efforts.

Following her unplanned exit, Pichai announced that Google would conduct an investigation into what happened, which employees say is ongoing. One employee told Business Insider that Google has been reaching out to members of the research group about their communications with Gebru in “very sudden meetings.”

Adding to the turmoil, Gebru’s former colleague and ethical AI co-lead Margaret Mitchell was locked out of Google’s corporate systems last month, after the company said she had shared sensitive files with external accounts. An employee familiar with the situation told Insider that Mitchell had been using a script to automatically search for emails relating to Gebru’s case and forward them to accounts outside the company, confirming an earlier report by Axios. In doing so, Mitchell tripped one of Google’s security systems.

Mitchell did not respond to Insider’s request for comment.

In response to Gebru’s ousting, members of the ethical AI team sent demands to management that included a public commitment to academic integrity.

“Google’s short-sighted decision to fire and retaliate against a core member of the Ethical AI team makes it clear that we need swift and structural changes if this work is to continue, and if the legitimacy of the field as a whole is to persevere,” read a draft of the letter, viewed by Insider.

One of the demands in the initial list was for company vice president Megan Kacholia to be removed from the ethical AI team’s reporting chain. 

Last week, Google placed Zoubin Ghahramani, a senior research director, into a new leadership position between Kacholia and her previous direct reports, including ethical AI lead Samy Bengio, as Business Insider first reported. Google said the change had been planned prior to Gebru’s exit.

Some of the ethical AI team’s initial demands were later deemed unactionable by management, according to sources. Members of the team have since held regular meetings with Marian Croak, a vice president of engineering in the Research division, to work out a set of demands more agreeable to both sides.

The final list, which will be sent to CEO Sundar Pichai, will also include other previously unreported demands made by Black Google employees in the group, a source familiar with the matter told Insider.

Google’s commitment to ethical AI is in question

Timnit Gebru at TechCrunch Disrupt. Kimberly White/Getty Images

This isn’t the first time Google’s work in AI has created tensions between employees and management. In 2018, Google caved to pressure from its workforce to end a contract with the Pentagon, after employees discovered the deal involved Google building artificial intelligence to analyze military drone footage.

But while Google showed it could back down with enough pressure, there is now a question mark over whether it can properly oversee its own work in AI, which it profits from immensely. Following Gebru’s exit, Reuters reported that Google had introduced a “sensitive topics” review process earlier that year, requiring employees investigating subjects such as race and gender to consult with legal, policy, and public relations teams. 

In at least three cases, employees had been asked not to portray Google’s technologies in a negative light, according to the report. Margaret Mitchell and other employees told Reuters that they believed Google was starting to intervene in research into the potential harms of its own technologies.

Gebru told Insider she had seen the guidelines on sensitive topics in an email, but that the paper that she had argued over with management did not directly call out any Google products.

“When you mentioned Google products you’re supposed to be careful,” she said. “In our paper we didn’t mention any Google products whatsoever. We’re mentioning an underlying technology that is being used by everybody, every company.”

Gebru says, however, that others on the team did come up against rules around sensitive topics. “People in my team had their research watered down,” she said.

Maria Axente, responsible AI lead at PricewaterhouseCoopers, which consults with Google, also believes there need to be more ways to fund ethical AI research outside of big corporations.

“Governments need to step in to fund research institutions on AI and societal implication,” she told Insider. She also called for a “higher degree of organizational transparency” within companies, to ensure that any unintended harmful consequences of AI are surfaced when they emerge.

“The biggest challenge that exists is that AI that triggers this type of outcome exists primarily in big tech companies,” she said.

Sources inside Google say employees on the ethical AI team are pushing ahead with their work, but some have privately discussed moving to other companies, including Microsoft and Facebook, according to two current employees familiar with those discussions.

One of them emphasized that there was no concrete plan for an exit among employees that they knew of, but said “it’s on people’s minds.” 

The other source said that Facebook has been poaching employees from Google’s AI unit since Jacqueline Pan, the previous lead on machine fairness, left Google to join Facebook’s AI team last January.

“Some people are slowly returning to our usual work and acknowledging how weird it feels, some are still focused on the ongoing turmoil,” said one Google employee who works on ethical AI. “I think most of us have pretty fully come to terms with the potential of leaving Google if we have to.”

Google has been seen as a honeypot for academic talent, attracting recruits with the promise of unlimited resources and freedom to work on some of the biggest problems in AI and other fields. As well as publishing research in this field, Google is one of the biggest contributors to AI research conferences.

That isn’t unusual. Big tech has largely propped up research in this field. One study published last year found that 58% of computer science faculty working on AI across four prominent universities had at one point been funded by big tech companies, Wired reported.

But it remains to be seen if Google can police itself the way Pichai believes big tech companies should.

In 2019, the company assembled an AI ethics board. A little over a week later, it was dismantled due to controversies around several of the board members, Vox reported at the time. The company has not created anything similar since. 

Abhishek Gupta, of the Montreal AI Ethics Institute and Microsoft, said the events send a “terrible signal” to the rest of the AI ethics community, “especially for young researchers who are looking to perhaps take on some of these roles at corporations.”

“There aren’t any pretenses that Google was ever anything other than a profit-driven megacorp,” said one employee currently working on ethical AI. “I think we all knew going in that there was some amount of compromise and risk there.”

Immediately following Gebru’s exit, VentureBeat reported that computer scientists were refusing to review Google’s AI research.

“I’m not sure how to cite or talk about anything that comes out of Google after this year given that I know the authors of any future paper would have been working in the shadow of one of the most prominent researchers in the world being fired for coming to a conclusion that didn’t endorse Google’s current business model,” Ali Alkhatib, a research fellow at the University of San Francisco’s Center for Applied Data Ethics, told Business Insider.

“I’ll always be asking myself how that gun on the table might have motivated them to come to a certain conclusion, or even just to leave out something important in their analysis, for fear of reprisal.”

Gebru told Insider that she believes there needs to be outside oversight of big tech companies, such as Google, that are researching this field.

“If you’re going to have people working on responsible AI, there should be some sort of parameters that are set from the outside on researchers there,” she said. 

“There should also be protection for researchers inside Google or other companies working in this space.”