For Lessons in AI Regulation, Look to This Island Nation
Also, an effective way to fight fake news
As the U.S., U.K., and EU grapple with regulating AI, one country seems to be surging ahead: Taiwan.
“Taiwan has already passed three law amendments that regulate AI harms, and all of them have to do with the erosion of trust,” said Audrey Tang, Taiwan’s minister of digital affairs and chair of its National Institute of Cyber Security, during a panel discussion at the AI Summit London.
She said the amendments revolve around the use of deepfakes for scams, synthetic porn and election fraud. “In each of these cases, we’re not redressing future harms; we’re already seeing harm from voice cloning for scam calls and so on.”
For example, Tang said she can “very easily” create a deepfake of herself using a MacBook with 96GB of RAM to run open-source models trained on her emails, conversations, public transcripts and other publicly available material.
“It can very easily simulate me, including the acoustic model of my voice and gestures to deepfake me in real time and interactive ways,” Tang said. Since she is a public figure, “pretty much everybody has access to those public parts that I just mentioned.”
Taiwan’s Bottom-Up Approach
The way Taiwan approaches regulation of an emerging technology that could disrupt society is to ask its people about it.
For example, around a decade ago, when Uber wanted to operate in the country, Taiwan “simply asked all the taxi drivers, unions, Uber drivers and passengers, ‘how do you feel about this practice?’” Tang said. Taiwan used an assisted collective intelligence system to gather the feedback.
The country then crafts laws and amendments based on a “rough consensus” of the people, Tang said.
Today, Taiwan is working with ChatGPT-maker OpenAI and Anthropic on a collective intelligence project to gather feedback on generative AI.
Tang said the idea behind this bottom-up approach is “to empower people closest to … the harms” to decide on “something that you feel will redress the harm.”
Asked about her thoughts on the EU’s AI Act, which is close to officially becoming law, Tang said she is “really happy” that it has specific carve-outs for open-source models. Liability around declaration, registration, accountability, fairness and other issues is “mostly for the proprietary application-level uses of generative models,” she added.
“If you’re doing open-source experiments, as long as you work in the open in an interoperable way, then you’re free to try various different ways to align,” Tang said.
Effective Way to Fight Fake News and Bias
Governments around the world are concerned about the impact of generative AI on upcoming elections, since it makes creating misinformation easier, with potentially devastating consequences.
In Taiwan, the basic education curriculum was switched from literacy to competence, Tang said. “Instead of literacy, which is about consuming information, competence is when you become a maker.”
“We found that it’s not literacy of, for example, reading fact-checking reports that inoculates the mind against disinformation,” she said. Rather, “it is the act of going through fact-checking yourself.”
For instance, Taiwan’s primary school students can fact-check the claims of three presidential candidates engaged in a debate. By doing so, “the child becomes immune to … disinformation or information manipulation,” she said.
As for addressing bias in foundation models, one does not have to go back to fix the training data and retrain from scratch, which is expensive, Tang said. There are newer techniques that work like browser extensions: small add-ons one can compose on top of a foundation model and train on a desktop computer overnight.
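Tang did not name a specific tool, but her description matches parameter-efficient fine-tuning methods such as LoRA, where small adapter layers are composed on top of a frozen foundation model and only those adapters are trained. Below is a minimal sketch using the open-source Hugging Face peft library; the model name, adapter settings and training data are illustrative assumptions, not details from the talk.

```python
# Minimal sketch: composing small trainable "adapter" layers (LoRA) on top of
# a frozen open-source foundation model. Bias can then be mitigated by training
# only the adapters on modest hardware, rather than retraining the whole model.
# The model name and settings below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "mistralai/Mistral-7B-v0.1"  # assumed example; any open model works
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# LoRA config: only these low-rank update matrices are trained;
# the base model's weights stay frozen.
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters

# From here, a standard fine-tuning loop (e.g. transformers.Trainer) over a
# small curated dataset can run overnight on desktop-class hardware, instead
# of the costly full retrain Tang described avoiding.
```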
This article first appeared on IoT World Today's sister site, AI Business.