Susanna Ricco and Utsav Prabhu, the co-leads of the Perception Fairness Team at Google Research, are all about collaboration and inclusivity. Their team combines expertise in computer vision and machine learning fairness to ensure that Google’s perception systems are designed to be inclusive from the ground up. They’re guided by Google’s AI Principles, and their research focuses on developing fair and inclusive multimodal ML systems.
They ask important questions like how to use machine learning to model human perception of demographic, cultural, and social identities in a responsible and fair way. They’re also interested in measuring system biases and using those metrics to improve algorithms. Their goal is to build inclusive algorithms and systems and respond quickly to failures when they occur.
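As an illustration of what “measuring system biases” can mean in practice, here is a minimal sketch (not the team’s actual tooling; the function names and toy data are invented for this example) of one common fairness metric: comparing a classifier’s error rate across demographic groups and reporting the largest gap.

```python
# Hypothetical sketch of a group-disparity metric; not Google's actual code.

def error_rate(predictions, labels):
    """Fraction of examples the model got wrong."""
    wrong = sum(1 for p, y in zip(predictions, labels) if p != y)
    return wrong / len(labels)

def max_error_gap(predictions, labels, groups):
    """Largest difference in error rate between any two groups."""
    by_group = {}
    for p, y, g in zip(predictions, labels, groups):
        preds, labs = by_group.setdefault(g, ([], []))
        preds.append(p)
        labs.append(y)
    rates = {g: error_rate(ps, ys) for g, (ps, ys) in by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Toy example: a binary classifier evaluated on two groups.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
labels = [1, 0, 1, 0, 0, 1, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap, per_group = max_error_gap(preds, labels, groups)
# Group "a" errs on 1 of 4 examples, group "b" on 2 of 4: gap = 0.25.
```

A metric like this can then drive improvement: if the gap is large, the team knows which group the system underserves and can target data collection or model changes accordingly.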
One area of their research focuses on analyzing how people are represented in media and in the ML systems that process it. Media has the power to shape viewers’ beliefs and can reinforce stereotypes or exclude certain groups of people. Their research aims to understand the societal context and create solutions that promote fairness and inclusivity. They’ve even developed tools to study representation in large-scale content collections, partnering with academic researchers, nonprofits, and major brands.
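To make the idea of studying representation in a content collection concrete, here is a small hypothetical sketch (the function, labels, and reference shares are illustrative assumptions, not Google’s tools): compute what share of items depicts each group and compare it against a reference distribution.

```python
# Illustrative only: a toy representation audit over a labeled collection.
from collections import Counter

def representation_gap(observed_groups, reference_shares):
    """Per-group difference between observed share and a reference share."""
    counts = Counter(observed_groups)
    total = sum(counts.values())
    return {g: counts.get(g, 0) / total - ref
            for g, ref in reference_shares.items()}

# Toy collection: one group label per media item.
collection = ["a", "a", "a", "b"]        # group "a" appears in 75% of items
reference  = {"a": 0.5, "b": 0.5}        # reference distribution: 50/50
gaps = representation_gap(collection, reference)
# "a" is over-represented by 0.25; "b" under-represented by 0.25.
```

Real audits are far richer (they consider how people are portrayed, not just counts), but a simple share-versus-reference comparison is the kind of starting point such tools build on.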
The Perception Fairness Team is constantly expanding their research and applying ML fairness concepts in new domains, such as illustrations and abstract depictions. But it’s not just about who is depicted; it’s also about how they are portrayed and the narrative communicated through the image content and text. They analyze complex bias issues and strive to find the right balance between fairness metrics and other product metrics.
Their work doesn’t stop at analyzing model behavior. They actively collaborate with other researchers and engineers to make algorithmic improvements. For example, they’ve upgraded components in Google Photos and Google Images to improve performance and diversify representation. They’re also exploring the world of generative AI and developing guardrails to mitigate failure modes.
Despite the progress they’ve made, perception fairness technology is still an evolving field with plenty of room for breakthrough techniques. The team believes in bridging the gap between low-level image measurements and a genuine understanding of human identity and expression. They’re working toward media analytics that more faithfully capture how people are actually represented. They’re also weighing the ethical implications of AI and striving to keep depictions current with an ever-changing society.
The work of the Perception Fairness Team is important because it ensures that Google’s AI systems are designed with fairness and inclusion in mind. They’re paving the way for more diverse and inclusive AI technologies that can inspire and resonate with a wide range of people.