AI plays an important role across our apps, from enabling AR effects to helping keep harmful content off our platforms and better support our communities through our COVID-19 Community Help hub. As AI-powered services become more prevalent in everyday life, it's increasingly important to understand how AI systems may affect people around the world and how we can strive to ensure the best possible outcomes for everyone.
Several years ago, we created an interdisciplinary Responsible AI (RAI) team to help advance the emerging field of Responsible AI and spread the impact of that work throughout Facebook. The Fairness team, part of RAI, works with product teams across the company to foster informed, context-specific decisions about how to measure and define fairness in AI-powered products.
Designing an AI system to be fair and inclusive isn't a one-size-fits-all task. It requires understanding what it means for a product or system to perform well for all people, while carefully balancing any tensions between stakeholders' interests. One important step in addressing fairness concerns in products and services is surfacing measurements of potential statistical bias early and systematically. To help do that, Facebook AI developed a tool called Fairness Flow.
Using Fairness Flow, our teams can analyze how some common types of AI models and labels perform across different groups. It’s important to look at fairness group by group because an AI system can perform poorly for some groups even when it appears to perform well for everyone on average.
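To make that idea concrete, here is a minimal sketch of how a per-group breakdown can reveal a gap that an aggregate metric hides. This is only an illustration: the data, column names, and choice of AUC as the metric are assumptions for the example, not Fairness Flow's internal API or methodology.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# Toy evaluation set: model scores, true labels, and a group attribute.
df = pd.DataFrame({
    "group": ["a"] * 6 + ["b"] * 6,
    "label": [0, 1, 0, 1, 1, 0,
              0, 1, 0, 1, 1, 0],
    "score": [0.1, 0.9, 0.2, 0.8, 0.7, 0.3,   # well separated for group "a"
              0.5, 0.6, 0.6, 0.5, 0.6, 0.5],  # weakly separated for group "b"
})

# Aggregate metric: looks strong on average (about 0.92 here).
print("overall AUC:", roc_auc_score(df["label"], df["score"]))

# Per-group metrics: reveal that the model does much worse for group "b".
for name, grp in df.groupby("group"):
    print(f"group {name} AUC:", roc_auc_score(grp["label"], grp["score"]))
```

In this toy sample the overall score looks healthy even though performance for one group is markedly weaker, which is exactly why group-by-group measurement matters.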
Specifically, Fairness Flow helps machine learning engineers detect certain forms of potential statistical bias in certain types of AI models and labels. It measures whether models or human-labeled training data perform better or worse for different groups of people, so that engineers can determine whether they need to take steps to improve their models' comparative performance. Changes they can consider include broadening or improving representation within their training or test data set, examining the importance of certain features, and exploring more complex or less complex models.
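As a companion sketch, here is one simple way to check whether human-provided labels themselves are less reliable for some groups, assuming a small expert-reviewed "gold" sample is available. Again, the column names and data are hypothetical, and this is not Fairness Flow's actual label-measurement methodology.

```python
import pandas as pd

# Hypothetical sample of examples whose human labels were re-checked by experts.
reviewed = pd.DataFrame({
    "group":       ["a", "a", "a", "a", "b", "b", "b", "b"],
    "human_label": [1,   0,   1,   0,   1,   1,   0,   0],
    "gold_label":  [1,   0,   1,   0,   0,   1,   1,   0],
})

# Agreement between the original human labels and the gold labels, per group.
# A large gap suggests the training data itself may need attention for that group.
agreement = (
    (reviewed["human_label"] == reviewed["gold_label"])
    .groupby(reviewed["group"])
    .mean()
)
print(agreement)  # in this toy sample: group "a" = 1.00, group "b" = 0.50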
Fairness Flow is available to product teams across Facebook and can be applied to models even after they’re deployed to production. However, Fairness Flow can’t analyze all types of models. It’s also a diagnostic tool, so it can’t resolve fairness concerns on its own — that would require input from ethicists and other stakeholders, and context-specific research. Since fairness, by definition, is contextual, a single metric can’t always apply in the same way to all products or AI models.
We've long been focused on using AI in ways that benefit society and improve technologies for everyone. When building the Portal Smart Camera, for example, we worked to make sure it performed well for diverse populations. We've also used AI to build improved photo descriptions for people who are blind or visually impaired. Despite these accomplishments, we know that as an industry and research community we are still in the early days of understanding the holistic processes and playbooks needed to achieve fairness at scale.
The AI systems we use have a potential impact on data privacy and security, ethics, the spread of misinformation, social issues and beyond. At Facebook, we’ll keep working to improve these systems and help build technology responsibly.
Fairness Flow is just one tool among many that we are deploying to help ensure that the AI powering our products and services is inclusive, works well for everyone, and treats individuals and communities fairly.
To read the full story, visit: ai.facebook.com/blog/how-were-using-fairness-flow-to-help-build-ai-that-works-better-for-everyone