TECHNOLOGY

RE:WIRED 2021: Timnit Gebru Says AI Needs to Slow Down


Artificial intelligence researchers are facing a problem of accountability: How do you try to ensure decisions are accountable when the decision maker is not a responsible person, but an algorithm? Right now, only a handful of people and organizations have the power, and the resources, to automate decision-making.

Organizations rely on algorithms to approve a loan or shape a defendant’s sentence. But the foundations on which these intelligent systems are built are susceptible to bias. Bias from the data, from the programmer, and from a powerful company’s bottom line can snowball into unintended consequences. That’s the reality AI researcher Timnit Gebru warned of in an interview with RE:WIRED on Tuesday.

“There were companies claiming [to assess] someone’s likelihood of committing a crime again,” Gebru said. “That was terrifying to me.”

Gebru was a senior engineer at Google who specialized in the ethics of artificial intelligence. She co-led a team tasked with guarding against algorithmic racism, sexism, and other bias. Gebru also co-founded the nonprofit Black in AI, which seeks to improve the inclusion, visibility, and well-being of Black people in her field.

Last year, Google forced her out. But she hasn’t given up her fight to prevent unintended harm from machine learning algorithms.

On Tuesday, Gebru spoke with senior WIRED writer Tom Simonite about incentives in AI research, the role of worker protections, and her vision for a planned independent institute for AI ethics and accountability. Her central point: AI needs to slow down.

“We haven’t had the time to think about how it should even be built, because we’re always just putting out fires,” she said.

As an Ethiopian refugee attending public school in the Boston suburbs, Gebru was quick to pick up on America’s racial dissonance. Lessons referred to racism in the past tense, but that didn’t square with what she saw, Gebru told Simonite earlier this year. She has found similar misalignments again and again in her tech career.

Gebru’s career began in hardware. But she changed course when she saw barriers to diversity, and began to suspect that most AI research had the potential to harm already marginalized groups.

“The confluence of that made me go in a different direction, which is to try to understand and try to reduce the negative societal impacts of AI,” she said.

For two years, Gebru co-led Google’s Ethical AI team with computer scientist Margaret Mitchell. The team built tools to protect Google’s product teams from AI mishaps. Over time, though, Gebru and Mitchell realized they were being left out of meetings and email threads.

In June 2020, the GPT-3 language model debuted, showing an ability to sometimes craft coherent prose. But Gebru’s team worried about the excitement surrounding it.

“Let’s build bigger, bigger, bigger language models,” Gebru said, referring to the popular sentiment. “We had to be like, ‘Let’s just please stop and calm down for a second so we can think about the pros and cons, and maybe alternative ways of doing that.’”

Her team helped write a research paper on the ethical implications of language models, titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”

Others at Google were not happy. Gebru was asked to retract the paper or remove the names of Google employees. She countered with a request for transparency: Who had demanded such a harsh step, and why? Neither side budged. Gebru found out from one of her direct reports that she had “resigned.”


