Building Inclusive AI Frameworks, Governance at CES 2023
Experts from Meta and NIST outlined how they’re approaching best practices and governance changes
A trickle of standards and legislation is set to work its way up the AI stream over the next 12 months. Experts from Meta and the National Institute of Standards and Technology (NIST) dissected the potential impacts on developers and consumers at CES.
Farzana Dudhwala, privacy policy manager for AI policy and governance at Meta, said this year could be the one for more “experimental approaches” to rule-making and governance spanning various fields.
Dudhwala spoke of how Meta is part of the Open Loop program, which brings together governments, tech companies, academia and civil society to develop AI governance frameworks and best practices.
She said the Open Loop program improves on traditional approaches to AI governance by letting all parties iron out potential headaches before rules are finalized.
“It’s when you get people in the room together and go through these draft laws or potential governance frameworks … that you start to find things that could be improved upon, definitions that sound watertight but actually leave some holes for interpretation which makes it more difficult,” Dudhwala said.
Dudhwala said the group was testing clauses from the EU’s AI Act, which has yet to be passed into law. Some 50 businesses are testing provisions of the bill, which would categorize all AI systems by risk and force companies to impose restrictions on higher-risk products.
“We’re getting people from all over the ecosystem to participate and try to help bring about more effective laws,” she said. “At the end of the day, we’re all looking for the same thing, which is to be able to innovate in a way that protects citizens and to make sure that we’re still able to realize the benefits that AI brings us, whilst making sure that the risks are mitigated against and that we’re doing this safely.”
Following Dudhwala, NIST senior research scientist Elham Tabassi agreed that bringing multiple stakeholders from various camps together is key to ensuring shared understanding.
“We want the technologists, we want the computer scientists and engineers around the table but also cognitive scientists, sociologists, even philosophers,” Tabassi said. “The importance of inclusiveness is trying to look at many different angles. That’s aligned with the approach that we’re taking.”
One specific way to make frameworks more inclusive, according to Tabassi, would be to ensure definitions are universal.
“(It is) often the case that we’re using the same term meaning two different things,” Tabassi said.
NIST recently launched plans to develop a voluntary framework for AI risk management, which Tabassi said would work in a “flexible but structured and measured way.”
“Flexible to allow for innovation to happen and measured because if you cannot measure it, you cannot improve it,” she said. “It takes a rights preserving or rights affirming approach, puts the protections of individual rights at the forefront and tries to operationalize values.”
To ensure diversity on the product side, Dudhwala said her team at Meta has been developing open-source datasets that include more representative data on gender identities, race, age and disabilities, with Meta then building models on those datasets to reduce demographic biases and stereotyping.
“For example, when the statement ‘she likes to shop’ is run through our model,” Dudhwala said, “what we do is we add ‘he likes to shop’ and ‘they like to shop’ as well, so that we’re not promoting stereotypes based in society today, and that we’re using AI to increase the diversity of the answers or the services that you might get as a result of that.”
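Dudhwala’s example maps onto a technique commonly called counterfactual data augmentation: for each gendered training sentence, rewritten variants are added so a model does not learn one-sided associations. The sketch below is a minimal illustration of that idea in Python; the swap tables and function names are illustrative assumptions, not Meta’s actual pipeline.

```python
# Minimal sketch of counterfactual data augmentation: emit gender-swapped
# and gender-neutral variants of a sentence. The lookup tables and helper
# names are illustrative assumptions, not Meta's actual pipeline.

GENDER_SWAPS = {
    "she": "he", "he": "she",
    "her": "his", "his": "her",
    "woman": "man", "man": "woman",
}

# Naive singular-to-plural verb fixes needed for the "they" variant
# ("she likes" -> "they like"). A production system would use a proper
# morphological tool rather than a hand-written lookup table.
VERB_FIXES = {"likes": "like", "is": "are", "was": "were", "has": "have"}


def swap_tokens(sentence: str, table: dict) -> str:
    """Replace each mapped token, preserving capitalization and trailing punctuation."""
    out = []
    for tok in sentence.split():
        core = tok.rstrip(".,!?")
        punct = tok[len(core):]
        repl = table.get(core.lower(), core)
        if core[:1].isupper():
            repl = repl.capitalize()
        out.append(repl + punct)
    return " ".join(out)


def augment(sentence: str) -> list:
    """Return the original sentence plus gender-swapped and neutral variants."""
    swapped = swap_tokens(sentence, GENDER_SWAPS)
    neutral = swap_tokens(sentence, {"she": "they", "he": "they", **VERB_FIXES})
    return [sentence, swapped, neutral]


print(augment("She likes to shop."))
# ['She likes to shop.', 'He likes to shop.', 'They like to shop.']
```

Note that the variants are added alongside the original rather than replacing it, which balances the associations a model sees without discarding any of the source data.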