Facebook civil rights audit urges ‘mandatory’ algorithmic bias detection

In a sweeping report critical of Facebook’s progress on civil rights, an independent review found that the company’s efforts to detect algorithmic bias are falling dangerously short, leaving users vulnerable to manipulation.

According to the independent audit of Facebook’s policies and practices, which the company released today, its efforts to detect algorithmic bias remain primarily in pilot projects conducted by a handful of teams. The authors of the report, civil rights attorneys Laura Murphy and Megan Cacace, note that the company is increasingly reliant on artificial intelligence for such chores as predicting which ads users might click on and weeding out harmful content.

But these tools, as well as other tentative efforts by Facebook in such areas as diversity on its AI teams, must go further and faster given the platform’s vulnerability, the report says. While the group looked exclusively at Facebook during its two-year review, the call to step up efforts against algorithmic bias raises an issue that any company embracing AI will likely have to address.

“Facebook has an existing responsibility to ensure that the algorithms and machine learning models that can have important impacts on billions of people do not have unfair or adverse consequences,” the report says. “The Auditors think Facebook needs to approach these issues with a greater sense of urgency.”

The timing of the report is awkward. It comes just as Facebook finds itself the target of one of the largest boycotts it has ever faced. The “Stop Hate for Profit” campaign has convinced more than 300 advertisers to halt spending on Facebook unless it takes bolder steps against racism, misogyny, and disinformation on its network.

Earlier this week, Facebook CEO Mark Zuckerberg met with civil rights groups but insisted the company would not cave in to financial pressure. That left attendees feeling disappointed with the response.

The report arrives on the heels of that meeting. And in a blog post, Facebook COO Sheryl Sandberg sought to score some points for being the “first social media company to undertake an audit of this kind.” She also noted the timing: the report was commissioned two years ago, well before the boycott.

Still, the post’s title sought to emphasize Facebook’s view that it is fighting the good fight: “Making Progress on Civil Rights – But Still a Long Way to Go.”

“There are no quick fixes to these issues — nor should there be,” Sandberg wrote. “This audit has been a deep analysis of how we can strengthen and advance civil rights at every level of our company — but it is the beginning of the journey, not the end. What has become increasingly clear is that we have a long way to go. As hard as it has been to have our shortcomings exposed by experts, it has undoubtedly been a really important process for our company. We would urge companies in our industry and beyond to do the same.”

The authors, while highlighting many of Facebook’s internal efforts, were less complimentary.

“Many in the civil rights community have become disheartened, frustrated and angry after years of engagement where they implored the company to do more to advance equality and fight discrimination, while also safeguarding free expression,” the authors wrote.

The report dissects Facebook’s work on civil rights accountability, elections and the census, content moderation, diversity, and advertising. But it also gives special attention to the subject of algorithmic bias.

“AI is often presented as objective, scientific and accurate, but in many cases it is not,” the report says. “Algorithms are created by people who inevitably have biases and assumptions, and those biases can be injected into algorithms through decisions about what data is important or how the algorithm is structured, and by trusting data that reflects past practices, existing or historic inequalities, assumptions, or stereotypes. Algorithms can also drive and exacerbate unnecessary adverse disparities…As algorithms become more ubiquitous in our society it becomes increasingly imperative to ensure that they are fair, unbiased, and non-discriminatory, and that they do not merely magnify pre-existing stereotypes or disparities.”

The authors highlighted Facebook’s Responsible AI (RAI) effort, which is led by a team of “ethicists, social and political scientists, policy experts, AI researchers and engineers focused on understanding fairness and inclusion concerns associated with the deployment of AI in Facebook products.”

Part of that RAI work involves developing tools and resources that can be used across the company to ensure AI fairness. To date, the group has developed a “four-pronged approach to fairness and inclusion in AI at Facebook.”

  1. Create guidelines and tools to limit unintentional bias.
  2. Develop a fairness consultation process.
  3. Engage with external discussions on AI bias.
  4. Diversify the AI team.

As part of the first pillar, Facebook has created the Fairness Flow tool to assess algorithms by detecting unintended problems with the underlying data and spotting flawed predictions. But Fairness Flow is relatively new, still in a pilot stage, and its use is voluntary even for the teams that have access to it.
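Fairness Flow itself is internal and its interface has not been published, but tools of this kind typically work by comparing a model’s error rates across demographic groups and flagging large gaps. A minimal sketch of that idea in Python, with all names, data, and thresholds hypothetical:

```python
# Hypothetical sketch of a group-level bias check in the spirit the report
# describes; this is NOT Fairness Flow's actual API, which is not public.
from collections import defaultdict

def group_error_rates(y_true, y_pred, groups):
    """Per-group false positive and false negative rates for 0/1 labels."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "pos": 0, "neg": 0})
    for truth, pred, group in zip(y_true, y_pred, groups):
        c = counts[group]
        if truth == 1:
            c["pos"] += 1
            c["fn"] += pred == 0  # missed a true positive
        else:
            c["neg"] += 1
            c["fp"] += pred == 1  # wrongly flagged a negative
    return {
        g: {
            "fpr": c["fp"] / c["neg"] if c["neg"] else 0.0,
            "fnr": c["fn"] / c["pos"] if c["pos"] else 0.0,
        }
        for g, c in counts.items()
    }

# Toy data: the model wrongly flags far more of group "b" than group "a".
rates = group_error_rates(
    y_true=[1, 0, 1, 0, 1, 0, 0, 1],
    y_pred=[1, 0, 1, 0, 1, 1, 1, 1],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
fprs = [r["fpr"] for r in rates.values()]
if max(fprs) - min(fprs) > 0.2:  # gap threshold is illustrative only
    print("Potential disparity in false positive rates:", rates)
```

Auditing checks like this are only as good as their coverage, which is why the voluntary, pilot-stage nature of Fairness Flow drew the auditors’ criticism.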

Late last year, Facebook began a fairness consultation pilot project that lets teams that detect a bias issue in a product reach out to other internal teams with more expertise for feedback and advice.

While the authors saluted these steps, they also called for the programs to be expanded across the company and for their use to be made mandatory.

“Auditors strongly believe that processes and guidance designed to prompt issue-spotting and help resolve fairness concerns must be mandatory (not voluntary) and company-wide,” the report says. “That is, all teams building models should be required to follow comprehensive best practice guidance and existing algorithms and machine learning models should be regularly tested. This includes both guidance in building models and systems for testing models.”
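The report does not prescribe a mechanism for making such testing routine, but one common pattern is to gate model releases on an automated fairness check that fails loudly when groups diverge. A hedged sketch, where the demographic-parity metric, the four-fifths threshold, and the toy predictions are all assumptions for illustration, not anything Facebook has published:

```python
# Illustrative release-gating check, not a documented Facebook process.
def selection_rate(preds):
    """Fraction of positive (1) decisions in a batch of 0/1 predictions."""
    return sum(preds) / len(preds)

def check_demographic_parity(rates_by_group, min_ratio=0.8):
    """Fail if the lowest group's selection rate is below min_ratio of the
    highest group's (a 'four-fifths rule'-style demographic parity check)."""
    low, high = min(rates_by_group.values()), max(rates_by_group.values())
    if high > 0 and low / high < min_ratio:
        raise AssertionError(f"Selection rates diverge by group: {rates_by_group}")

# Toy per-group predictions from a hypothetical model under review.
preds_by_group = {"group_a": [1, 1, 0, 1], "group_b": [0, 0, 1, 0]}
rates = {g: selection_rate(p) for g, p in preds_by_group.items()}
try:
    check_demographic_parity(rates)
except AssertionError as err:
    print("Model blocked from release:", err)  # 0.25/0.75 fails the 0.8 bar
```

Run on a schedule or as part of a release pipeline, a check along these lines is one way “regularly tested” becomes an enforced requirement rather than optional guidance.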

The company has also created an AI Task Force to lead initiatives to improve its diversity. Facebook is now funding a deep learning course at Georgia Tech to increase the pipeline of diverse job candidates. It’s also in discussions to partner with several other universities to expand the program. And it’s tapping nonprofits, research groups, and advocacy organizations to broaden its hiring pool.

But again, the review found these initiatives to be too limited in scope. It called for an expansion of hiring efforts as well as greater training and education across the company on these issues.

“While the Auditors believe it is important for Facebook to have a team dedicated to working on AI fairness and bias issues, ensuring fairness and non-discrimination should also be a responsibility for all teams,” the report says. “To that end, the Auditors recommend that training focused on understanding and mitigating against sources of bias and discrimination in AI should be mandatory for all teams building algorithms and machine-learning models at Facebook and part of Facebook’s initial onboarding process.”
