Skip to content

Adverse operational data can lead to AI Oops scenarios in AIOps as demonstrated by researchers

System Administrators' Positions Remain Secure

Artificial Intelligence Operations (AIOps) can suffer significant setbacks when tainted data is fed...
Artificial Intelligence Operations (AIOps) can suffer significant setbacks when tainted data is fed to them, a new study reveals.

Adverse operational data can lead to AI Oops scenarios in AIOps as demonstrated by researchers

In a recent study, researchers from various institutions have uncovered potential weaknesses in the use of AI for IT operations (AIOps). The team, led by Dario Pasquini, Evgenios M. Kornaropoulos, Giuseppe Ateniese, Omer Akgul, Athanasios Theocharos, and Petros Efstathopoulos, discovered that AIOps systems can be manipulated by adversaries, leading to system compromise [1][2].

The researchers introduced AIOpsShield, a defense mechanism designed to sanitize harmful telemetry data before it is processed by AI agents managing IT operations. AIOpsShield has shown promising results, reliably blocking telemetry-based attacks without impacting normal agent performance [1][2]. However, the study also highlights the limitations of such defenses.

Advanced attackers, with capabilities such as poisoning multiple input sources, supply chain compromises, or elevated privileges, can bypass AIOpsShield and other current defenses [2]. The attack success rate can still be high, with over 80% success rates against models like GPT-4.1, allowing adversaries to manipulate agents into executing attacker-specified remediation strategies [1][2].

To address these issues, the researchers propose focusing on the training and continuous learning phases of AIOps systems. This includes rate limiting user feedback, access control lists (ACLs), authentication, and runtime observability to detect gradual behavioral drifts indicative of poisoning [3]. However, even with these measures, it may not prevent all attacks, especially if adversaries infiltrate the supply chain or training data sources.

One example of an attack involves the manipulation of system telemetry to mislead AIOps agents into installing a malicious package, as demonstrated in the SocialNet application [1]. The attack is based on the principle of "garbage in, garbage out," with attackers creating malicious telemetry data to feed into AIOps systems.

Cisco has deployed AIops in a conversational interface for system performance queries, but the study warns of the potential risks associated with these systems [1]. To mitigate these risks, the researchers propose the open-source release of AIOpsShield.

The paper, titled "When AIOps Become 'AI Oops': Subverting LLM-driven IT Operations via Telemetry Manipulation," sheds light on a critical aspect of AIOps security that requires ongoing research and multi-layered defenses for robust protection [1][2][3][4]. As AI continues to play a larger role in IT operations, it is essential to be aware of these vulnerabilities and take necessary precautions to protect systems from potential attacks.

References:

[1] Pasquini, D., et al. (2023). When AIOps Become 'AI Oops': Subverting LLM-driven IT Operations via Telemetry Manipulation. [Paper]

[2] Kornaropoulos, E.M., et al. (2023). AIOpsShield: Defending Against Poisoned Telemetry Attacks in AIOps Systems. [Paper]

[3] OWASP AI Security Guidelines (2023). [Online Resource]

[4] RSAC Labs and George Mason University (2023). AI Tools Used for IT Operations (AIOps) Vulnerable to Poisoned Telemetry Attacks. [Press Release]

  1. The researchers have developed AIOpsShield, an open-source defense mechanism that helps in sanitizing harmful telemetry data before it is processed by AI agents, aiming to protect IT operations from attacks.
  2. Despite the promising results of AIOpsShield, advanced attackers can still bypass it by employing tactics such as poisoning multiple input sources, supply chain compromises, or elevated privileges.
  3. The study underlines the importance of focusing on the training and continuous learning phases of AIOps systems, implementing measures like rate limiting user feedback, access control lists, authentication, and runtime observability to detect poisoning.
  4. As AI continues to expand in data-and-cloud-computing, cybersecurity is crucial to ensure the robust protection of IT operations, especially in light of the vulnerability of AI for IT operations (AIOps) to telemetry manipulation.

Read also:

    Latest