Officials Charge Meta Algorithms With Bias
Historic case charges Meta's algorithms with violating the Fair Housing Act
Meta, the parent company of Facebook, has agreed to stop using an advertising tool that U.S. officials said discriminated against users based on race, color, religion, sex, disability and other protected traits in housing advertising.
In the first case charging algorithmic bias under the Fair Housing Act (FHA), the U.S. Justice Department said Meta used these algorithms to determine which Facebook users see housing ads and which do not. These algorithms rely partly on characteristics protected under the FHA.
The DOJ said Meta allowed advertisers to target their ads to people based on race, color, religion, sex, disability, familial status and national origin. Its ad targeting tool uses a machine learning algorithm to find Facebook users who look like the target audience chosen by advertisers.
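The complaint does not disclose how Meta's tool is built internally, but lookalike targeting of this kind is commonly implemented as a similarity search: score every candidate user against an advertiser-supplied "seed" audience and pick the closest matches. The sketch below is purely illustrative, with hypothetical names (user_features, seed_ids, top_k_lookalikes) and made-up data; it is not Meta's method.

```python
# Illustrative sketch only: score users by similarity to a seed audience.
# If the feature vectors encode or correlate with protected traits, the
# selected audience can reproduce the seed audience's demographics --
# the core concern raised in the DOJ's complaint.
import numpy as np

def top_k_lookalikes(user_features: np.ndarray, seed_ids: list, k: int = 5) -> list:
    """Return the k users most similar to the centroid of the seed audience.

    user_features: (n_users, n_features) matrix of per-user attributes.
    seed_ids: indices of the advertiser's chosen seed users.
    """
    # Normalize rows so the dot product equals cosine similarity.
    norms = np.linalg.norm(user_features, axis=1, keepdims=True)
    normalized = user_features / np.clip(norms, 1e-12, None)

    # Represent the seed audience by its average (normalized) profile.
    centroid = normalized[seed_ids].mean(axis=0)
    centroid /= max(np.linalg.norm(centroid), 1e-12)

    # Rank all users by similarity to that profile, excluding the seeds.
    scores = normalized @ centroid
    scores[seed_ids] = -np.inf
    return list(np.argsort(scores)[::-1][:k])

# Toy usage: 8 users described by 3 arbitrary behavioral features.
rng = np.random.default_rng(0)
features = rng.normal(size=(8, 3))
print(top_k_lookalikes(features, seed_ids=[0, 1], k=3))
```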
Meta has now agreed to stop using the tool, known as ‘Lookalike Audience’ or ‘Special Ad Audience,’ and to develop a new system to address racial and other disparities caused by its use of personalization algorithms. That system will be subject to Department of Justice approval and court oversight.
The company agreed to pay $115,000, the maximum FHA penalty.
Small Fine, Big Symbolism
“This settlement is historic, marking the first time that Meta has agreed to terminate one of its algorithmic targeting tools and modify its delivery algorithms for housing ads in response to a civil rights lawsuit,” said Assistant Attorney General Kristen Clarke of the DOJ’s civil rights division.
“The Justice Department is committed to holding Meta and other technology companies accountable when they abuse algorithms in ways that unlawfully harm marginalized communities,” Clarke added.
Should Meta fail to demonstrate sufficient changes to guard against algorithmic bias, the DOJ will resume legal proceedings, U.S. Attorney Damian Williams for the Southern District of New York stressed.
Independent Reviewer
The DOJ’s lawsuit argues that Meta is liable for disparate treatment because it intentionally classifies users on the basis of FHA-protected characteristics and designs algorithms that rely on users’ FHA-protected characteristics.
Officials further allege that Meta is liable for disparate impact discrimination because the operation of its algorithms affects Facebook users differently based on their membership in protected classes.
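Disparate impact claims typically rest on statistical comparisons rather than intent: do members of different protected classes receive housing ads at meaningfully different rates? The numbers below are invented and the delivery_rate helper is hypothetical; the snippet only illustrates the kind of comparison at issue, not figures from the case.

```python
# Illustrative only: hypothetical delivery counts, not data from the lawsuit.
# A common screen for disparate impact compares the rate at which each group
# receives an ad; a ratio well below 1.0 flags a disparity worth examining
# (the "four-fifths rule" uses 0.8 as a rough threshold).

def delivery_rate(shown: int, eligible: int) -> float:
    """Share of eligible users in a group who were actually shown the ad."""
    return shown / eligible

group_a = delivery_rate(shown=4_500, eligible=10_000)   # 45%
group_b = delivery_rate(shown=2_700, eligible=10_000)   # 27%

impact_ratio = min(group_a, group_b) / max(group_a, group_b)
print(f"delivery rates: {group_a:.0%} vs {group_b:.0%}, ratio {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("Delivery differs substantially across groups -> potential disparate impact")
```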
Under the settlement agreement, Meta has until the end of the year to stop using the tool and until December to develop its new system.
An independent, third-party reviewer will then be brought in to verify “on an ongoing basis” whether the new system complies with the FHA. Meta must provide the reviewer with any information necessary to make that verification, and the court will have ultimate authority to resolve disputes over what Meta must disclose.
Meta to Overhaul Ads System
Meta spokeswoman Ashley Settle told AI Business that “we will be building a novel machine learning method within our ads system that will change the way housing ads are delivered to people residing in the U.S. across different demographic groups.”
“While HUD raised concerns about personalized housing ads specifically, we also plan to use this method for ads related to employment and credit in the U.S.,” she added.
“This type of work is unprecedented in the advertising industry and represents a significant technological advancement for how machine learning is used to deliver personalized ads. We are excited to pioneer this effort,” Settle continued.
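Settle's comments do not specify how the new method will work. One plausible approach, sketched here purely as an assumption and not as Meta's design, is to compare the demographic mix of users who have actually seen an ad against the mix of the eligible audience and adjust delivery pacing for groups that are being under-served. All names and numbers in the snippet are hypothetical.

```python
# Hypothetical sketch of demographic-aware delivery pacing; Meta has not
# disclosed how its new system works, so treat this as an assumption only.
# Idea: if a group's share of impressions falls below its share of the
# eligible audience, temporarily boost that group's delivery weight.

def pacing_boosts(eligible_share: dict, impression_share: dict,
                  strength: float = 1.0) -> dict:
    """Return a multiplicative delivery weight per demographic group.

    eligible_share: each group's fraction of users eligible to see the ad.
    impression_share: each group's fraction of impressions delivered so far.
    Under-served groups get a weight above 1.0; over-served groups below 1.0.
    """
    boosts = {}
    for group, target in eligible_share.items():
        actual = impression_share.get(group, 0.0)
        gap = target - actual                     # positive = under-served
        boosts[group] = max(0.0, 1.0 + strength * gap / max(target, 1e-9))
    return boosts

# Toy usage with invented numbers.
eligible = {"group_a": 0.50, "group_b": 0.50}
delivered = {"group_a": 0.65, "group_b": 0.35}
print(pacing_boosts(eligible, delivered))
# -> group_b gets a weight above 1.0 until its impression share catches up.
```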
This article first appeared in IoT World Today’s sister publication AI Business.