IoT World Austin 2022: IBM Director Says AI Governance Accelerates, Not Slows Down, Results
Overcoming hurdles to operationalizing AI at scale
Governance in AI is often seen as an impediment to faster innovation: the need to comply with regulations and adhere to ethical considerations can lengthen time to market.
But Priya Krishnan, IBM’s director of product for governance and data science, said it is better to think of it like the safety controls in a car.
“You have the brakes in the car, you have your seat belts – all of these are meant for you to drive safely faster to your destination,” she said at the AI Summit Austin. “It’s not meant to slow you down. Think of AI the same way.”
Robust AI governance has to be comprehensive and consistent as metrics change, Krishnan said. It also has to be end to end, open enough to complement an organization’s existing tools, and capable of capturing metadata automatically.
The trifecta of people, process and technology is critical in developing a good AI governance solution, but “more often than not that’s where we start,” Krishnan said of the technology piece.
Instead, first identify stakeholders, specify the business use case and be clear about what goals to accomplish. “Finally, pick the technology that will help you scale as you move through this landscape, especially with changing regulations,” she said.
Steps to Robust AI Governance
AI governance also means bringing together different stakeholders beyond the data science and IT teams. For example, poor governance can affect the company’s brand, so the chief marketing officer would want to get involved. Not complying with regulations introduces financial risk, so the CFO would likely be interested as well.
“It’s not just a team of data scientists creating these models but many stakeholders are involved in this process,” Krishnan said.
AI governance also means managing risk to ensure responsible use of AI across a company’s many business controls and standards, as well as adhering to expanding regulations around the globe, Krishnan added.
IBM’s three pillars for AI governance are as follows:
Life cycle governance: Automate the monitoring and cataloging of models as data scientists are building them. This could reduce the time to build from months to weeks.
For example: An IBM client had a team of data scientists that took one to two months to build their AI models, which were then handed off to model risk validators. The validators were a small team and took another month to do their work. They also had a lot of questions for the data scientists: “Did you try this technique? Why is this giving me this answer?”
Meanwhile, the data scientists had already moved on to another project and had to dig up old information. All of this back-and-forth was done through Excel files. Months were wasted. IBM’s solution was to automate the process as the model was being built, cutting down the development time to two weeks, Krishnan said.
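To make the idea concrete, here is a minimal sketch of what automated fact capture during model building can look like, written in plain Python rather than any particular IBM product. The `log_model_facts` helper, the JSON catalog file and the toy model are illustrative assumptions, not the client’s actual tooling; the point is that the fact sheet is written as a side effect of the same training run, so validators never have to chase data scientists through Excel files.

```python
import json
import time
from pathlib import Path

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

CATALOG = Path("model_catalog.json")  # hypothetical shared model catalog

def log_model_facts(name, model, metrics, notes=""):
    """Append a fact-sheet entry for a model so validators can review it later."""
    entry = {
        "model": name,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "params": model.get_params(),
        "metrics": metrics,
        "notes": notes,
    }
    catalog = json.loads(CATALOG.read_text()) if CATALOG.exists() else []
    catalog.append(entry)
    CATALOG.write_text(json.dumps(catalog, indent=2, default=str))

# Train a toy model and capture its facts as part of the same run.
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
log_model_facts("credit_risk_v1", model, {"test_accuracy": acc},
                notes="Baseline logistic regression; no feature selection yet.")
```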
Risk management: Automate the capture of model facts and workflows to comply with changing business standards.
For example: An IBM client’s data science team was asked to build a model for a business use case. The team recalled that it had done something similar in the past and reused that AI model. However, the older model was built for U.S. use and the new one would be deployed in the EU.
“The business controls were entirely different,” Krishnan said. “So they had to go back to make sure the model was corrected.”
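The fix generalizes to a simple pattern: before a model is promoted, validate its recorded facts against the control checklist for the region it will serve. The sketch below uses hypothetical control names and fact fields; in practice the checklists would come from the organization’s risk team, not be hard-coded.

```python
# Hypothetical per-region business controls; real lists come from risk teams.
CONTROLS = {
    "US": {"fair_lending_review", "adverse_action_codes"},
    "EU": {"gdpr_dpia", "right_to_explanation", "data_residency_eu"},
}

def check_controls(model_facts: dict, region: str) -> list[str]:
    """Return the controls the model has not yet satisfied for a region."""
    satisfied = set(model_facts.get("controls_passed", []))
    return sorted(CONTROLS[region] - satisfied)

# A model validated for the U.S. is proposed for reuse in the EU.
facts = {"model": "credit_risk_v1",
         "controls_passed": ["fair_lending_review", "adverse_action_codes"]}
missing = check_controls(facts, "EU")
if missing:
    print(f"Blocked: missing EU controls: {missing}")
```

Making this check an automated gate in the deployment workflow, rather than a memory exercise for the team, is what keeps a U.S. model from quietly shipping to the EU.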
Regulatory compliance: Integrate external AI regulations into policies for automated enforcement. Do this right from the beginning, automating the process so it is consistent every single time, at scale.
For example: An IBM client had controls in place but missed a correlated variable that introduced new risk and put the model out of compliance.
Make sure the solution can catch indirect or correlated variables that introduce bias and other infractions, Krishnan said.
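One way to catch this class of problem is to scan candidate features for strong correlation with a protected attribute before training, so a proxy variable cannot slip in unnoticed even when the protected attribute itself is excluded from the model inputs. This is a generic illustration on synthetic data, not the client’s actual control; the threshold and feature names are assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5000
protected = rng.integers(0, 2, n)  # e.g., a protected class flag (not a model input)
zip_income = protected * 20_000 + rng.normal(60_000, 10_000, n)  # hidden proxy
years_employed = rng.integers(0, 30, n)  # unrelated feature

features = pd.DataFrame({"zip_income": zip_income,
                         "years_employed": years_employed})

# Flag any feature whose correlation with the protected attribute exceeds a
# threshold, even though the attribute itself never enters the model.
THRESHOLD = 0.4
corrs = features.corrwith(pd.Series(protected)).abs()
proxies = corrs[corrs > THRESHOLD]
print("Potential proxy variables:\n", proxies)
```

Here `zip_income` is flagged while `years_employed` is not, which is exactly the kind of correlated-variable check that keeps a nominally compliant model from drifting out of compliance.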
This article first appeared in IoT World Today’s sister publication AI Business.