
From whistleblower laws to unions: How Google’s AI ethics meltdown could shape policy

It’s been two weeks since Google fired Timnit Gebru, a decision that still seems incomprehensible. Gebru is one of the most highly regarded AI ethics researchers in the world, a pioneer whose work has highlighted the ways tech fails marginalized communities when it comes to facial recognition and more recently large language models.

Of course, this incident didn’t happen in a vacuum. It’s part of an ongoing series of events at the intersection of AI ethics, power, and Big Tech. Case in point: Gebru was fired the same day the National Labor Relations Board (NLRB) filed a complaint against Google for illegally spying on employees and retaliating against workers interested in unionizing by firing them. Gebru’s dismissal also raises questions about corporate influence in research, demonstrates the shortcomings of self-regulation, and highlights the poor treatment of Black people and women in tech in a year when Black Lives Matter sparked the largest protest movement in U.S. history.

In an interview with VentureBeat last week, Gebru called the way she was fired disrespectful and described a companywide memo sent by CEO Sundar Pichai as “dehumanizing.” To delve further into possible outcomes following Google’s AI ethics meltdown, VentureBeat spoke with five experts in the field about Gebru’s dismissal and the issues it raises. They also shared thoughts on policy changes needed across governments, corporations, and academia. The people I spoke with agree Google’s decision to fire Gebru was a mistake with far-reaching policy implications.

Rumman Chowdhury is CEO of Parity, a startup auditing algorithms for enterprise customers. She previously worked as global lead for responsible AI at Accenture, where she advised governments and corporations.

“I think just the collateral damage to literally everybody: Google, the industry of AI, of responsible AI … I don’t think they really understand what they’ve done. Otherwise, they wouldn’t have done it,” Chowdhury told VentureBeat.

Independent external algorithm audits

Christina Colclough is director of the Why Not Lab and a member of the steering committee of the Global Partnership on AI (GPAI). GPAI launched in June with 15 members, including the EU and the U.S.; Brazil and three additional countries joined earlier this month.

After asking “Who the hell is advising Google?” Colclough suggested independent external audits for assessing algorithms.

“You can say for any new technology being developed we need an impact of risk assessment, a human rights assessment, we need to be able to go in and audit that and check for legal compliance,” Colclough continued.

The idea of independent audits is in line with the environmental impact reports that construction projects must submit today. A paper published earlier this year on how businesses can turn ethics principles into practice suggested creating a third-party market for auditing algorithms, along with bias bounties akin to the bug bounties paid out by cybersecurity firms. That paper had 60 authors from dozens of influential organizations across academia and industry.
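To make the audit idea concrete, here is a minimal sketch in Python of one check an outside auditor might run on a model’s decisions: the disparate impact ratio behind the “four-fifths rule” used in U.S. employment law. The function names, toy data, and 0.8 threshold are illustrative assumptions, not part of any standard auditing API or of the paper’s proposal.

```python
# Illustrative sketch of one check an independent algorithm audit might run:
# the disparate impact ratio (selection rate of a protected group divided by
# the selection rate of a reference group). Names, data, and the 0.8 threshold
# are hypothetical examples following the common "four-fifths rule".
from typing import Sequence


def selection_rate(decisions: Sequence[int]) -> float:
    """Fraction of positive outcomes (1 = selected/approved) in a group."""
    return sum(decisions) / len(decisions) if decisions else 0.0


def disparate_impact_ratio(protected: Sequence[int], reference: Sequence[int]) -> float:
    """Selection rate of the protected group divided by that of the reference group."""
    ref_rate = selection_rate(reference)
    return selection_rate(protected) / ref_rate if ref_rate else float("inf")


if __name__ == "__main__":
    # Toy outcomes: 1 = approved, 0 = denied
    group_a = [1, 0, 0, 1, 0, 0, 0, 1]  # protected group: 3 of 8 approved
    group_b = [1, 1, 0, 1, 1, 0, 1, 1]  # reference group: 6 of 8 approved
    ratio = disparate_impact_ratio(group_a, group_b)
    print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 in this toy example
    print("Flag for review" if ratio < 0.8 else "Within four-fifths threshold")
```

A real audit would go much further (significance testing, intersectional groups, access to training data and documentation), but even this simple ratio illustrates the kind of quantitative evidence a third-party auditor or a bias bounty hunter could surface.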

Had California voters passed Prop 25 last month, the bill would have required independent external audits of risk assessment algorithms. In another development in public accountability for AI, the cities of Amsterdam and Helsinki have adopted algorithm registries.
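For a sense of what such registries record, the sketch below mocks up a registry entry as a plain Python dictionary. The field names and example values are hypothetical; they paraphrase the kind of plain-language metadata the Amsterdam and Helsinki registries publish rather than their actual schemas.

```python
# Hypothetical sketch of the metadata a municipal algorithm registry might
# record for one deployed system. Field names and values are illustrative,
# not the actual Amsterdam or Helsinki schema.
registry_entry = {
    "name": "Parking permit fraud triage",
    "purpose": "Rank citizen reports for manual review by city inspectors",
    "data_sources": ["permit database", "citizen reports"],
    "human_oversight": "An inspector makes the final decision on every case",
    "non_discrimination": "Protected attributes are excluded from model inputs",
    "contact": "algorithm-register@example-city.example",
}

for field, value in registry_entry.items():
    print(f"{field}: {value}")
```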

Scrap self-regulation

Chowdhury said it’s now going to be tough for people to believe any ethics team within a Big Tech company is more than just an ethics-washing operation. She also suggested Gebru’s firing introduces a new level of fear when dealing with corporate entities: What are you building? What questions aren’t you asking?

What happened to Gebru, Chowdhury said, should also lead to higher levels of scrutiny or concern about industry interference in academic research. And she warned that Google’s decision to fire Gebru dealt a credibility hit to the broader AI ethics community.

If you’re a close follower of this space, you might have already reached the conclusion that self-regulation at Big Tech companies isn’t possible. You may have arrived at that point in the past few years, or maybe even a decade ago when European Union regulators first launched antitrust actions against Google.

Colclough agrees that the current situation is untenable and asserts that Big Tech companies are using participation in AI ethics research as a way to avoid actual regulation.

“A lot of governments have let this self-regulation take place because it got them off the hook, because they are being lobbied big-time by Big Tech and they don’t want to take responsibility for putting new types of regulation in place,” Colclough said.

She has no doubt that firing Gebru was an act of censorship.

“What is it that she has flagged that Google didn’t want to hear, and therefore silenced her?” Colclough asked. “I don’t know if they’ll ever silence her or her colleagues, but they have definitely shown to the world — and I think that’s a point that needs to be made a lot stronger — that self-regulation can’t be trusted.”

U.S. lawmakers and regulators were slow to challenge Big Tech, but there are now several ongoing antitrust actions in the U.S. and other countries. Prior to the Facebook antitrust lawsuit filed last week, Google faced a lawsuit from the Department of Justice and state attorneys general last month, the first U.S. antitrust case against a major tech company since the 1990s. Alongside anticompetitive business practices, the 60-page complaint alleges that Google uses artificial intelligence and user data to maintain its dominance. This fall, a congressional investigation into Big Tech companies concluded that antitrust law reform is needed to protect competitive markets and democracy.

Collective action or tech worker unionization

J. Khadijah Abdurahman runs the public technology project We Be Imaging at Columbia University and recently helped organize the Resistance AI workshop at NeurIPS 2020. Not long after Google fired Gebru, Abdurahman penned a piece asserting the moral collapse of the AI ethics field. She called Gebru’s firing a public display of institutional resistance immobilized. In the piece, she talks about ideas like the need for a social justice war room. She stresses the importance of radically shifting the AI ethics conversation away from the idea of the lone researcher versus Goliath in order to facilitate a broader movement. She also believes collective action is required to address violence found in the tech supply chain, ranging from birth defects experienced by cobalt miners in central Africa to algorithmic bias and misinformation in social media.

What’s needed, she said, is a movement that cuts across class and defines tech workers more broadly — including researchers and engineers, but also Uber drivers, Amazon warehouse workers, and content moderators.

Elaborating on those comments in an interview with VentureBeat, she said, “There should not be some lone martyr going toe-to-toe with [Big Tech]. You need a broader coalition of people who are funding and working together to do the work.”

The idea of collective action through unionizing came up at NeurIPS in a panel conversation on Friday that included Gebru.

At the Resistance AI workshop for practitioners and researchers interested in AI that gives power to marginalized people, Gebru talked about why she still supports people working as researchers at corporations. She also likened the way she was treated to what happened to 2018 Google walkout organizers Meredith Whittaker and Claire Stapleton. On the panel, Gebru was asked whether she thinks unionization would protect ethical AI researchers.

“There’s two things we need to do: We need to look at the momentum that’s happening and figure out what we can achieve based on this momentum, what kind of change we can achieve,” she said in response. “But then we also need to take the time to think through what kinds of things we really need to change so that we don’t rush to have some sort of policy changes. But my short answer is yes, I think some sort of union has to happen, and I do believe there is a lot of hope.”

In an interview this fall, Whittaker called collective employee action and whistleblowing by departing Facebook employees part of a toolkit for tech workers.

Whistleblower protections for AI researchers

In the days before Google fired her, Gebru’s tweets indicated that all was not well. In one tweet, she asked whether regulation to protect AI ethics researchers, similar to that afforded whistleblowers, is in the works.

Is there anyone working on regulation protecting Ethical AI researchers, similar to whistleblower protection? Because with the amount of censorship & intimidation that goes on towards people in specific groups, how does anyone trust any real research in this area can take place?

— Timnit Gebru (@timnitGebru) December 1, 2020

The Omidyar Network is working with former Pinterest employee Ifeoma Ozoma on a report about what’s needed for tech whistleblowers, a spokesperson told VentureBeat. That report is due out next month. Like Gebru at Google, Ozoma has described experiencing disrespect, gaslighting, and racism at Pinterest.

UC Berkeley Center for Law and Technology codirector Sonia Katyal supports strengthening existing whistleblower laws for ethics researchers.

“I would say very strongly that existing law is totally insufficient,” she told VentureBeat. “What we should be concerned about is the world where all of the most talented researchers like [Gebru] get hired at these places and then effectively muzzled from speaking. And when that happens, whistleblower protections become essential.”

In a paper published in the UCLA Law Review last year, Katyal wrote about whistleblower protections as part of a toolkit needed to address issues at the intersection of AI and civil rights. She argues that such protections may be particularly important where companies rely on self-regulation, and as a means of combating algorithmic bias.

We know about some malicious uses of big data and AI — like the Cambridge Analytica scandal at Facebook — because of whistleblowers like Christopher Wylie. At the time, Katyal called accounts like Wylie’s the “tip of the iceberg regarding the potential impact of algorithmic bias on today’s society.”

“Given the issues of opacity, inscrutability, and the potential role of both trade secrecy and copyright law in serving as obstacles to disclosure, whistleblowing might be an appropriate avenue to consider in AI,” the paper reads.

One of the central obstacles to greater accountability and transparency in the age of big data is the claim by corporations that their algorithms are proprietary.

Katyal is concerned about a clash between a business’s right to withhold information about an algorithm and an individual’s civil right to live in a world free of discrimination. This will increasingly become a problem, Katyal warned, as government agencies contract with private companies for data and AI services.

Other researchers have also found that private companies are generally less likely to share code with papers at research conferences, in court, or with regulators.

There are a variety of existing whistleblower laws in the U.S., including the Whistleblower Protection Act, which offers workers some protection against retaliation. There’s also the Defend Trade Secrets Act (DTSA), passed in 2016, which includes a provision that protects whistleblowers against trade secret misappropriation claims made by an employer. But Katyal called that protection limited and said the DTSA provision is a small tool in a big, unregulated world of AI.

“The great concern that every company wields to any kind of employee that wants to come forward or share their information or concerns with the public — they know that using the explanation that this is confidential proprietary information is a very powerful way of silencing the employee,” she told VentureBeat.

Plenty of events in recent memory demonstrate why some form of whistleblower protection might be a good idea. A fall 2019 study in Nature found that an algorithm used in hospitals may have discriminated against millions of Black people in the United States. A more recent story revealed how an algorithm prevented Black people from receiving kidney transplants.

For a variety of reasons, sources cited for this article cautiously supported additional whistleblower protections. Colclough supports some form of special protections like whistleblower laws but believes it should be part of a broader plan. Such laws may be particularly helpful when it comes to the potential deployment of AI likely to harm lives in areas where bias has already been found, like hiring, health care, and financial lending.

Another option raised by Colclough: give citizens the right to file grievances with government regulators. Under GDPR, EU citizens can report a company they believe is out of compliance with the law to a national data authority, which is then obliged to investigate. Freedom from bias and a path toward redress are part of an algorithmic bill of rights proposed last year.

Chowdhury said she supports additional protections, but she cautioned that whistleblowing should be a last resort. She expressed reservations on the grounds that whistleblowers who go public may be painted by conservatives or white supremacists as “SJW lefties trying to get a dunk.”

Before whistleblowing is considered, she believes companies should establish avenues for employees wishing to express constructive dissent. Googlers are given an internal way to share complaints or concerns about a model, employees told VentureBeat and other news outlets during a press event this fall. A Google spokesperson subsequently declined to share which particular use cases or models had attracted the most criticism internally.

But Abdurahman questioned what kind of workers such a law would protect and said, “I think that line of inquiry is more defensive than what is required at this moment.”

Eliminate corporate funding of AI ethics research

In the days after Gebru was fired, more than 2,000 Googlers signed an open letter alleging “unprecedented research censorship.” In the following days, some AI researchers said they would refuse to review Google AI papers until the company addresses the grievances raised by the incident. More broadly, what happened at Google calls into question the actual and perceived influence of industry over academic research.

At the NeurIPS Resistance AI workshop, Rediet Abebe, who joins UC Berkeley as an assistant professor next year, explained why she will not accept research funding from Google. She also said she thinks senior faculty in academia should speak up about Big Tech research funding.

“Maybe a single person can do a good job separating out funding sources from what they’re doing, but you have to admit that in aggregate there’s going to be an influence. If a bunch of us are taking money from the same source, there’s going to be a communal shift toward work that is serving that funding institution,” she said.

Jasmine McNealy is an attorney, associate professor of journalism at the University of Florida, and faculty associate with the Berkman Klein Center for Internet and Society at Harvard University.

She recently accepted funding from Google for AI ethics research. She expressed skepticism about the idea that in the present economic environment public universities will be in a position to turn down funding from tech or virtually any other source.

“Unless state legislators and governors say ‘We don’t necessarily like money coming from these kinds of organizations or people,’ I don’t think universities — particularly public universities — are going to stop taking money from organizations,” she said.

More public research funding could be on the way. The incoming Biden administration’s platform commits to a $300 billion investment in research and development across a number of areas, including artificial intelligence.

Accusations of research censorship at Google come at a time when AI researchers are calling into question corporate influence and drawing comparisons to Big Tobacco funding health research in decades past. Other AI researchers point to a compute divide and growing inequality in the age of deep learning between Big Tech, elite universities, and everybody else.

Google employs more tenure track academic AI talent than any other company and is the most prolific producer of AI research.

Tax Big Tech

Abdurahman, Colclough, and McNealy strongly support raising taxes on tech companies. Such taxes could fund academic research and enforcement agencies with regulatory oversight, like the Federal Trade Commission (FTC), and support the public infrastructure and schools that companies rely on.

“One of the reasons why it has been accepted that big companies paid all this money into research was otherwise there’d be no research, and there’d be no research because there was no money. Now I think we should go back to basics and say you pay into a general fund here, and we will make sure that universities get that money, but without you having influence over the conclusions made,” Colclough said, adding that taxation allows for enforcement of existing anti-discrimination laws.

Enforcement of existing laws like the Civil Rights Act, particularly in matters involving public funding, was encouraged in an open letter signed by a group of Black professionals in AI and computing in June.

Taxation that funds enforcement can also draw some attention toward up-and-coming startups, which can, McNealy said, sometimes do things with “just as bad impacts or implications.”

Biden promised during his campaign to make Amazon pay more in income taxes, and in the European Union, proposed legislation would levy a 10% tax on gatekeeper tech companies.

Taxation can also fund technology that does not rely on profitability as a measure of value. Abdurahman thinks that the world needs public tools and people need to broaden their imagination beyond a handful of companies supplying all the technology we use.

Though AI in the public sector is often talked about as an austerity measure, Abdurahman defines public interest technology as non-commercial, designed for the social good, and made with a coalition representative of society. She believes that coalition should include not just researchers but also the people most impacted by the technology.

“Public Interest tech opens up a whole new world of possibilities, and that’s the line of inquiry that we need to pursue rather than figuring out ‘How do we fix this really screwed up calculus around the edges?’” Abdurahman said. “I think that if we are relying on private tech to police itself we are doomed, and I think that lawmakers and policy developers have a responsibility to open up and fund a space for public interest technology.”

Some of that work might not be profitable, Chowdhury said, but profitability cannot be the only value by which AI is considered.

Require AI researchers to disclose financial ties

J. Khadijah Abdurahman suggests that disclosure of financial ties become standard for AI researchers.

“In any other field, like in pharmaceuticals, you would have to disclose that your research is being funded by those companies because that obviously affects what you’re willing to say and what you can say and what kind of information is available to you,” she said. 

For the first time this year, organizers of the NeurIPS AI research conference required authors to state potential conflicts of interest and the broader impact of their work on society.

Separate AI ethics from computer science

A recent research paper comparing Big Tech and Big Tobacco suggests that academics consider making ethics research into a separate field, akin to the way bioethics is separated from medicine and biology.

But Abdurahman expressed skepticism about that approach since industry and academia are already siloed and separated.

“We need more critical ethical practice, not just this division of those who create and those who say what you created was bad,” she said.

Ethicists and researchers in some machine learning fields have encouraged the creation of interdisciplinary teams that pair AI with fields like social work, climate science, and oceanography, among others.

In fact, Gebru was part of an effort to bring the first sociologists to the Google Research team, introducing frameworks like critical race theory when considering fairness.

Final thoughts

What Googlers called a retaliatory attack against Gebru follows a string of major AI ethics flashpoints within Google’s ranks in recent years. When word got out in 2018 that Google was working with the Pentagon on Project Maven to develop computer vision for military drone footage, employees voiced their dissent in an open letter signed by thousands. Later that year, in a protest against Project Maven, sexual harassment, and other issues, tens of thousands of Google employees participated in a walkout at company offices around the world. Then there was Google’s troubled AI ethics board, which survived only a few days.

Two weeks after Gebru’s firing, things still appear to be percolating at the company. On Monday, Business Insider obtained a leaked memo that revealed Google AI chief Jeff Dean had canceled an all-hands end-of-year call. Since VentureBeat interviewed Gebru last week, she has spoken at length with BBC, Slate, and MIT Tech Review.

Earlier this year, I wrote about a fight for the soul of machine learning. I talked about AI companies associated with surveillance, oppression, and white supremacy, while others work to address harm caused by AI and build a more equitable world. Since then, we have seen multiple documented instances of, as AI Now Institute put it today, reasons to give us pause.

Gebru’s treatment highlights how a lack of investment in diversity can create a toxic work environment. It also leads to rational questions like how employees should alert the public to AI that harms human lives if company leadership refuses to address those concerns. And it casts a spotlight on the company’s failure to employ a diverse engineering workforce, despite the fact that such diversity is widely considered essential to minimizing algorithmic bias.

The people I spoke with for this article seem to agree we need to regulate tech that forms and shapes human lives. They also call for stronger accountability and enforcement mechanisms and for changes to institutional and government policy.

Measures to address the cross-section of issues the Timnit Gebru episode has brought to light span a range of policy solutions, from steps to ensure the independence of academic research to unionization and the building of a larger movement among tech workers.

By VentureBeat
