AI experts warn Facebook’s anti-bias tool is ‘completely insufficient’

Facebook today published a blog post detailing Fairness Flow, an in-house toolkit the company claims enables its teams to analyze how some types of AI models perform across different groups. Developed in 2018 by Facebook’s interdisciplinary Responsible AI (RAI) team in consultation with Stanford University, the Center for Social Media Responsibility, the Brookings Institution, and the Better Business Bureau Institute for Marketplace Trust, Fairness Flow is intended to help engineers determine how the models powering Facebook’s products perform across groups of people.

The post pushes back against the notion that the RAI team is “essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization [on Facebook’s platform],” as MIT Tech Review’s Karen Hao wrote in an investigative report earlier this month. Hao alleges that the RAI team’s work on mitigating bias in AI helps Facebook avoid proposed regulation that might hamper its growth. The piece also claims that the company’s leadership has repeatedly weakened or halted initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.

According to Facebook, Fairness Flow detects forms of statistical bias in some models and data labels commonly used at Facebook. Here Facebook defines “bias” as the systematic application of different standards to different groups of people, as when Facebook’s Instagram system disabled the accounts of Black U.S. users 50% more often than the accounts of white users.

Given a dataset of predictions, labels, group membership (e.g., gender or age), and other information, Fairness Flow can subdivide the data a model receives and estimate its performance on each subset. For example, the tool can determine whether a model accurately ranks content for people from a particular group, or whether it systematically underpredicts for some groups relative to others. Fairness Flow can also compare labels provided by annotators against expert labels, yielding metrics that show how difficult content from each group is to label and what criteria the original labelers applied.
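Facebook hasn’t released Fairness Flow’s code, so the following is only a minimal sketch of the kind of group-sliced evaluation the post describes; the function name, data fields, and numbers are illustrative assumptions, not Facebook’s API.

```python
# Illustrative sketch only: Fairness Flow's actual implementation isn't public.
# Slices a binary classifier's scores by group and compares performance per slice.
from collections import defaultdict

def slice_by_group(scores, labels, groups):
    """scores: model probabilities in [0, 1]; labels: 0/1; groups: group name per example."""
    slices = defaultdict(lambda: {"scores": [], "labels": []})
    for score, label, group in zip(scores, labels, groups):
        slices[group]["scores"].append(score)
        slices[group]["labels"].append(label)

    report = {}
    for group, s in slices.items():
        n = len(s["labels"])
        accuracy = sum((sc >= 0.5) == bool(lb) for sc, lb in zip(s["scores"], s["labels"])) / n
        # A mean score well below the mean label suggests the model is
        # underpredicting for this group relative to the ground truth.
        calibration_gap = sum(s["scores"]) / n - sum(s["labels"]) / n
        report[group] = {"n": n, "accuracy": accuracy, "calibration_gap": round(calibration_gap, 2)}
    return report

# Hypothetical data: identical labels for both groups, but lower scores for group B.
scores = [0.9, 0.2, 0.8, 0.7, 0.6, 0.1, 0.4, 0.3]
labels = [1,   0,   1,   1,   1,   0,   1,   1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(slice_by_group(scores, labels, groups))
# Group B's lower accuracy and larger negative calibration gap flag it for closer review.
```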

According to Facebook, the Equity Team, a product group within Instagram focused on countering bias, uses “model cards” that draw on Fairness Flow to provide information that may prevent models from being used “inappropriately.” The cards include a bias rating that could be applied to all Instagram models by the end of next year, although Facebook notes that the use of Fairness Flow is currently optional.

Mike Cook, an AI researcher at Queen Mary University of London, told VentureBeat via email that Facebook’s blog post contains “very little information” about what Fairness Flow actually does. “While the main goal of the tool seems to be matching the expectations of Facebook’s engineers with the output of the model … the old adage ‘garbage in, garbage out’ still applies. The tool only confirms that the garbage you got out matches the garbage you put in,” he said. “To fix these bigger issues, Facebook needs to address the garbage part.”

Cook pointed to language in the post suggesting that bias may not necessarily be present when groups have different positive rates in the underlying data (or “ground truth”). In machine learning, a false positive is an outcome where a model wrongly predicts the positive class, while the true positive rate measures the proportion of actual positives the model correctly identifies.
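As a quick refresher on those terms, here is a worked example using made-up confusion-matrix counts rather than any real Facebook data:

```python
# Made-up confusion-matrix counts for a single group of users.
tp, fp, fn, tn = 40, 10, 20, 130

true_positive_rate  = tp / (tp + fn)   # share of actual positives the model catches
false_positive_rate = fp / (fp + tn)   # share of actual negatives wrongly flagged
precision           = tp / (tp + fp)   # share of flagged items that really are positive

print(f"TPR={true_positive_rate:.2f}, FPR={false_positive_rate:.2f}, precision={precision:.2f}")
# TPR=0.67, FPR=0.07, precision=0.80
```

Cook’s concern is that these per-group numbers can all look acceptable even when the underlying positive rates differ sharply between groups, which is exactly the advertising scenario he describes next.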

“One interpretation of this is that Facebook is okay with bias as long as it is sufficiently systemic,” Cook said. “For example, maybe it makes sense to advertise tech jobs mostly to men if Facebook finds that mostly men click on them? That, in my view, is consistent with the fairness standards set out here, since the system doesn’t need to take into account who wrote the ad, what its tone or message is, the state of the company it is promoting, or the inherent problems of the industry that company operates in. It simply reacts to the ‘ground truth’ observable in the world.”

A Carnegie Mellon University study published last August found evidence that Facebook’s ad platform discriminates against certain demographic groups. The company says its written policies prohibit discrimination and that it uses automated controls, introduced as part of a 2019 settlement, to limit when and how advertisers target ads based on age, gender, and other attributes. But numerous earlier studies have found Facebook’s ad practices to be problematic at best.

According to Facebook, Fairness Flow is available to all product teams at the company and can also be applied to models after they’re deployed to production. However, Facebook concedes that Fairness Flow, whose use remains optional, can only analyze certain types of models: specifically, supervised models that learn from a “sufficient volume” of labeled data. Facebook chief scientist Yann LeCun said in a recent interview that removing bias from self-supervised systems, which learn from unlabeled data, may require training the model with an additional dataset curated to remove specific biases. “It’s a complicated problem,” he told Fortune.

University of Washington AI researcher Os Keyes described Fairness Flow as “a very standard process” rather than a novel way of addressing bias in models. They pointed out that Facebook’s post indicates the tool compares accuracy against a single version of “ground truth,” rather than assessing what “accuracy” might mean to labelers in, say, Dubai versus Germany or Kosovo.

“In other words, it’s nice that [Facebook is] assessing the accuracy of their ground truths … [but] I’d love to know where their subject matter experts come from, or why they’re subject matter experts,” Keyes told VentureBeat via email. “It’s notable that [the company’s] solution to the fundamental shortcomings of building monolithic technologies is a new monolithic technology. To fix code, write more code. Any awareness of the fundamentally limited nature of fairness is missing … It isn’t even clear whether their system can recognize the intersection of multiple group identities.”

Exposés about Facebook’s approach to fairness have done little to instill trust in the AI community. A New York University study published in July 2020 estimated that Facebook’s machine learning systems make about 300,000 content moderation mistakes per day, and problematic posts continue to slip through Facebook’s filters. In one Facebook group created last November that rapidly grew to nearly 400,000 members, people calling for a nationwide recount of the 2020 U.S. presidential election swapped unsubstantiated claims of alleged election fraud and state vote counts every few seconds.

Separately, a May 2020 Wall Street Journal article surfaced an internal Facebook study that found the majority of people who join extremist groups do so because of the company’s recommendation algorithms. And a review of the human rights impact assessments (HRIAs) Facebook commissioned on its product and presence in Myanmar following the Rohingya genocide there, coauthored by Harvard University’s Carr Center, concluded that the third-party HRIA largely omitted mention of the Rohingya and didn’t evaluate whether algorithms played a role.

Allegations that it fuels political polarization and social division prompted Facebook to create a “playbook” to help its employees rebut criticism, BuzzFeed News reported in early March. In one example, Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg sought to deflect blame for the Capitol Hill insurrection in the U.S., with Sandberg pointing to the role of smaller, right-leaning platforms despite the spread of hashtags on Facebook promoting the pro-Trump rally in the days and weeks beforehand.

Facebook still isn’t running systematic audits of its algorithms today, despite the fact that the step was recommended by the civil rights audit of Facebook completed last summer.

“The whole [Fairness Flow] toolkit basically boils down to: ‘We did the thing people suggested three years ago, we don’t even make everyone do the thing, and the whole world knows the thing is completely insufficient,’” Keyes said. “If [the blog post] is an attempt to respond to [recent criticism], it’s more of an effort to pretend it never happened than to actually address it.”
