Federal Framework Prioritizes Emerging Tech for Government Use

The Federal Risk and Authorization Management Program framework aims to speed up AI cloud service authorization, focusing on chat, code and image generation

Ben Wodecki, Junior Editor - AI Business

July 9, 2024


The Federal Risk and Authorization Management Program (FedRAMP) has unveiled a new framework to prioritize emerging technologies for government use, with an initial focus on generative AI.

The framework, created in line with the terms laid out in President Biden’s AI Executive Order, is designed to ensure certain cloud service offerings obtain faster authorizations so federal agencies can use them more quickly.

FedRAMP’s framework initially focuses on three generative AI applications: chat interfaces, code generation tools and image generation solutions. The framework also covers general-purpose API offerings for each of those three capabilities.

The framework will prioritize three cloud services per application. Once selected, the offerings will be moved to the front of the FedRAMP authorization review queue. However, after reaching the target number of authorized offerings for a specific capability, additional offerings in that category will not receive prioritization.
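FedRAMP has not published an algorithm for this process; purely as an illustration of the cap-and-queue mechanic described above, with all names invented, the logic might look like this Python sketch:

```python
from collections import deque

# Hypothetical illustration only: FedRAMP describes a policy, not code.
PRIORITIZATION_CAP = 3  # target number of prioritized offerings per capability

review_queue = deque(["offering-a", "offering-b"])  # existing, non-prioritized reviews
prioritized_count = {"chat": 0, "code": 0, "image": 0}

def submit(offering: str, capability: str) -> None:
    """Move a selected offering to the front of the review queue,
    unless its capability has already hit the prioritization cap."""
    if prioritized_count[capability] < PRIORITIZATION_CAP:
        review_queue.appendleft(offering)   # jumps to the front of the queue
        prioritized_count[capability] += 1
    else:
        review_queue.append(offering)       # reviewed in the normal order

submit("gen-ai-chat-service", "chat")
print(list(review_queue))  # ['gen-ai-chat-service', 'offering-a', 'offering-b']
```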

Cloud service providers seeking prioritization will need to submit an ET (emerging technology) CSO Request Form and an ET Demand Form to FedRAMP. These forms detail how the offering meets the prioritization criteria and demonstrate agency demand for the service.

Providers will also have to disclose their model cards to FedRAMP: documentation describing a service's underlying AI model and how it works.

Some technology providers choose to keep their model cards closely guarded secrets.

However, FedRAMP contends that access to the underlying model information is necessary so agencies can “determine how to leverage the offerings and best apply them to their mission needs.”

“The model cards are intended to provide transparency regarding what use cases the model is good for, what training data was used, how fresh or recent is that data and any limitations, biases or risks that exist in the offering itself,” the framework states.
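The framework does not prescribe a machine-readable format for these disclosures; as a hypothetical sketch only, a model card covering the fields FedRAMP names could be represented along these lines in Python:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: field names are invented, mirroring the disclosures
# the framework names (use cases, training data, freshness, limitations,
# biases, risks). FedRAMP does not mandate this or any specific schema.
@dataclass
class ModelCard:
    model_name: str
    intended_use_cases: list[str]      # what use cases the model is good for
    training_data_sources: list[str]   # what training data was used
    training_data_cutoff: str          # how fresh or recent that data is
    known_limitations: list[str] = field(default_factory=list)
    known_biases: list[str] = field(default_factory=list)
    known_risks: list[str] = field(default_factory=list)

card = ModelCard(
    model_name="example-chat-model",   # invented example values
    intended_use_cases=["general-purpose chat", "document summarization"],
    training_data_sources=["publicly available web text"],
    training_data_cutoff="2023-12",
    known_limitations=["may produce inaccurate or fabricated answers"],
)
```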

While the program aims to speed up the availability of emerging technologies for government use, FedRAMP said that agencies remain responsible for evaluating the functionality and suitability of these technologies for their specific needs. 

The FedRAMP authorization process focuses solely on the security of a solution, not on the quality or suitability of its functionality for specific applications.

FedRAMP will only accept submissions for the generative AI prioritization twice per fiscal year. 

The program plans to continually evaluate the new process and revise it as needed, with additional emerging technologies added to a dedicated list at least annually, subject to approval from the FedRAMP Board.

“This framework will enable routine and consistent prioritization of the most critical cloud-relevant emerging technologies needed for use by federal agencies,” a FedRAMP announcement on the framework reads. “This prioritization will control how FedRAMP prioritizes its own work and review processes and will not address how sponsoring agencies manage their own internal priorities.”

Several government agencies are looking to test, or have already adopted, generative AI solutions to augment and improve their workflows.

The Air Force built its own ChatGPT-like chatbot to encourage staff to experiment with AI, and Department of Homeland Security agencies have been trialing generative AI solutions to improve immigration officer training and to assist in fentanyl-related crime investigations.

Last week, senators introduced a bipartisan bill that would require federal agencies to put safeguards in place before purchasing and deploying AI systems. Agencies would also be required to appoint a chief AI officer to oversee risk evaluations.

This story first appeared in IoT World Today's sister publication AI Business.

About the Author

Ben Wodecki

Junior Editor - AI Business

Ben Wodecki is the junior editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to junior editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others.
