
Title: Uncovered Lapses in AI Security: Federal Agencies Struggle with Critical Information Gaps

From facial recognition software to disease-prediction algorithms, numerous federal agencies admit they know too little about the AI systems they buy from commercial vendors.


Federal agencies are acquiring a growing array of proprietary AI systems for tasks that could affect people's physical safety and civil rights, without fully understanding how those systems work or how they were trained, recently released inventory data reveals.

Customs and Border Protection and the Transportation Security Administration report that they have no documentation describing the quality of the data used to build and evaluate the algorithms that scan travelers' bodies for threats, according to their 2024 AI inventories.

The Veterans Health Administration is acquiring an AI algorithm from a private firm that purportedly predicts chronic diseases among veterans, yet the agency says it does not know how the firm obtained the veterans' medical data used to train the model.

And for more than a hundred algorithms that could affect people's safety and rights, the agencies using them have no access to the source code.

As the incoming Trump administration prepares to repeal recently enacted rules for federal AI procurement and safety, the inventory data highlights how heavily the government has come to rely on private firms for its riskiest AI systems.

Varoon Mathur, a former senior AI advisor to the White House who was responsible for coordinating the AI inventory process, expressed concern that proprietary systems can strip democratic control from the agencies responsible for delivering benefits and services to the public.

"We have to collaborate with proprietary vendors. While this can be beneficial a lot of the time, we often don't know what they're up to. And if we don't control our data, how will we manage risk?" Mathur stated.

Investigations have uncovered problems with some federal agencies' high-risk algorithms, such as a racially biased IRS taxpayer audit model and a VA suicide prevention algorithm that prioritized white men over other groups.

The 2024 inventories provide the most comprehensive insight yet into how the federal government leverages artificial intelligence and its understanding of these systems. For the first time since the inventorying commenced in 2022, agencies had to respond to a series of queries regarding model documentation or source code and the potential risks associated with their AI systems.

Of the 1,757 AI systems agencies reported using during the year, 227 were deemed likely to impact civil rights or physical safety, and more than half of those highest-risk systems were developed entirely by commercial vendors.

For at least 25 algorithms affecting people's safety or rights, agencies reported that no documentation exists describing the maintenance, composition, quality, or intended use of the training and evaluation data. For 105 of these models, agencies said they had no access to the source code. Agencies left the documentation question unanswered for 51 of the tools and the source code question unanswered for 60 of them.

Under the Biden administration, the Office of Management and Budget (OMB) issued new directives to agencies, requiring them to perform thorough evaluations of risky AI systems and to ensure vendors provide essential information about the models, including the training data documentation or the code itself.

These rules are more comprehensive than anything AI vendors encounter when selling to other companies or to state and local governments, and various software vendors have pushed back, arguing that agencies should set evaluation and transparency requirements on a case-by-case basis.

"Trust but verify," stated Paul Lekas, the head of global public policy for the Software & Information Industry Association. "We oppose burdensome requirements for AI developers. However, we recognize that some level of attention is necessary to develop sufficient trust for the government to utilize these tools."

The U.S. Chamber of Commerce, in comments submitted to OMB on the new rules, argued that the federal government should not request any specific training data or datasets from the vendors supplying its AI systems. Palantir, a prominent AI supplier, wrote that the federal government should avoid imposing overly prescriptive documentation requirements and instead give AI service providers and vendors the flexibility to describe context-specific risks.

Rather than provide access to training data or source code, AI vendors suggest that in most cases agencies should be satisfied with model scorecards: documents that characterize the data and machine learning techniques an AI model uses without revealing technical secrets.

Cari Miller, who collaborated on developing international standards for buying algorithms and co-founded the AI Procurement Lab, described scorecards as a technocratic solution that was "not a bad starting point but only a starting point" for what vendors of high-risk algorithms should be contractually bound to disclose.

"Procurement is a crucial governance mechanism, it's where the rubber meets the road, it's the front door, it's where you can decide whether or not to allow the bad stuff in," Miller stated. "You need to understand whether the data in the model is representative, is it biased or unbiased? What did they do with that data and where did it come from? Did all of it come from Reddit or Quora? Because if it did, it may not be what you require."

As OMB noted when introducing its AI rules, the federal government is the largest single buyer in the US economy, accounting for more than $100 billion in IT purchases in 2023. The direction it takes on AI - what it requires vendors to disclose and how it tests products before deploying them - is likely to set the standard for what AI companies disclose about their products to smaller government agencies and even private businesses.

The incoming Trump administration has indicated its intent to revoke OMB's rules. The platform for the Republican party, which Trump endorsed, called for the "repeal of Joe Biden's dangerous Executive Order that inhibits AI innovation, and imposes Radical Left-Wing ideas on the development of this technology."

Mathur, the former White House senior AI advisor, hopes the incoming administration will not follow through on that promise, and noted that Trump also initiated efforts to build trust in federal AI systems with his 2020 executive order.

"If we do not have the code or the data or the algorithm, we will not be able to understand the consequences we are having," Mathur said. "This task was monumental in itself, but it requires continued dedication."

The tech industry expects artificial intelligence to shape sectors from healthcare to public safety, which makes it all the more important for federal agencies to understand how the algorithms they rely on actually work, given their potential impact on people's physical safety and civil rights.

In procuring AI systems, agencies should prioritize transparency, securing access to the documentation and, where warranted, the source code behind those systems, in order to manage risk and ensure the technology is used responsibly.
