US Takes First Step to Formally Regulate AI
Biden administration follows China, Italy, Canada and the U.K.
The Biden administration said today that it is seeking public comment on upcoming AI policies as the U.S. moves to put safeguards in place against harms like bias without dampening innovation.
In a first official step towards potential AI regulations at the federal level, the U.S. Commerce Department’s National Telecommunications and Information Administration (NTIA) wants public input on developing AI audits, assessments, certifications and other tools to engender trust from the public.
“The same way that financial audits created trust in financial statements for businesses, accountability mechanisms for AI can help assure that an AI system is trustworthy,” said Alan Davidson, assistant secretary of commerce for communications and information, at an event in Pittsburgh, Pennsylvania.
“But real accountability means that entities bear responsibility for what they put out into the world,” he added.
Written comments must be submitted by June 10.
The NTIA will be seeking input specifically on the types of certifications AI systems need before they can be deployed, what datasets are used and how they are accessed, how to conduct audits and assessments, which AI designs developers should choose, and what assurances the public should expect before an AI model is released, among other issues.
“Our initiative will help build an ecosystem of AI audits, assessments and the tools that will help assure businesses and the public that AI systems can be trusted,” Davidson said. “This is vital work.”
There have already been attempts to regulate AI, with more than 130 bills either passed or proposed in federal and state legislatures in 2021. This is a “huge” jump from the early days of social media, cloud computing and even the internet itself, Davidson said.
Meanwhile, China, Italy, Canada and the U.K. are stepping up scrutiny of generative AI.
Italy has temporarily banned ChatGPT and threatened fines unless OpenAI addresses its user privacy concerns, while Canada’s privacy commissioner said the office will be scrutinizing the chatbot. The U.K.’s privacy watchdog said organizations using or developing generative AI must ensure people’s data is protected, as the law requires.
China Proposes Generative AI Rules
Earlier this week, China became the first country to propose specific rules for governing generative AI models after several Chinese tech giants announced they were developing ChatGPT-like tools.
The Cyberspace Administration of China (CAC) said companies looking to release generative AI offerings will have to go through a security review before they can release the models to the public.
While it encourages the use of safe software and tools, the CAC said content generated by AI cannot subvert state power, incite secession or disrupt social order, according to The Wall Street Journal.
If a company’s generative AI platform generates content deemed inappropriate, the company would have three months to ensure such content is not produced again or face penalties, the CAC said.
Businesses would also have to ensure users submit their real identities to use their platforms, and the companies behind the systems would be responsible for the data used to train their generative AI products.
The measures are set to come into force later this year, following a period of public consultation.
Alibaba, SenseTime Join the Trend
China’s newly proposed rules come amid a wave of interest in generative AI following OpenAI’s ChatGPT.
Several Chinese companies are trying to launch their own similar systems. In the past week alone, two new applications have launched, from SenseTime and Alibaba.
SenseTime, the Chinese AI startup hit by U.S. sanctions over its facial recognition systems, launched SenseChat, a chatbot that can generate answers to user questions as well as write computer code. SenseChat is built atop the company’s SenseNova large language model.
Meanwhile, Alibaba's answer to ChatGPT, called Tongyi Qianwen, is set to be integrated across the company's various businesses. The chatbot's name roughly translates to "seeking an answer by asking a thousand questions." It is designed to work on both English and Chinese language inputs.
However, it has not been smooth sailing for Chinese firms entering the generative AI fray. In March, Baidu launched its generative AI application, Ernie. Investors found the launch demo to be lackluster, and the company’s Hong Kong-listed shares quickly fell by 10%.
Other Chinese companies looking to create their own ChatGPT-like solutions include NetEase, which is making chatbots for its education subsidiary Youdao, and e-commerce platform JD.com, which is developing an ‘industrial’ chatbot, dubbed ChatJD, for retailers and financial firms.
Local governments, including that of Beijing, the capital, said in February that they would support companies in building open source generative AI frameworks and accelerate data supplies.
ChatGPT is not officially accessible in China, as the country is among those where OpenAI blocks access, along with Iran and Venezuela. However, users in China were among ChatGPT’s top users, likely circumventing firewalls with VPNs, accounting for a 3% share, tied with Canada, according to statistics from OpenAI.
This article first appeared on IoT World Today's sister site, AI Business.