AI-driven automation to eliminate certain occupations, as revealed by Simonian
In a groundbreaking development, a team of researchers from the FutureTech initiative at the University of Innopolis have conducted a comprehensive review of existing approaches to describing AI risks. Led by Peter Sletter, the team studied 43 analytical frameworks and compiled the AI Risk Repository – the first open database in the world, containing 777 unique threats related to AI [1].
The AI Risk Repository and concerns about scattered, unforeseen consequences of AI systems are being addressed through comprehensive AI risk management frameworks and governance practices developed by regulatory bodies and industry stakeholders.
Managing AI Risks with Frameworks
The U.S. NIST AI Risk Management Framework (AI RMF) is a voluntary but influential standard guiding organizations to manage AI risks throughout the AI lifecycle. It emphasizes creating a risk-aware culture with continuous processes rather than one-time checklists, supported by tools that detect AI exposures, vulnerabilities, and unauthorized use of AI models and infrastructure [1].
The European Union’s emerging Code of Practice for General Purpose AI (GPAI) models requires developers to establish a comprehensive Safety and Security Framework for systemic risk management. This includes documentation, risk category criteria, mitigation strategies, forecast justifications, and regular updates [2][5].
Systemic AI Risks and Scattered Threats
Systemic AI risks are defined by large-scale potential harms, such as major accidents, critical infrastructure disruptions, or public health impacts. These pose challenges for governance because of their scale and complexity. High-risk AI systems are those making consequential decisions affecting individuals, requiring transparency and user notification under laws like the EU AI Act and Colorado AI Act in the U.S. [4]
The complexity and variety of AI applications create "scattered" or unforeseen threat landscapes. Effective AI risk repositories and dashboards, such as Google Cloud's AI Security Dashboard, help organizations inventory AI assets, visualize models and datasets, and detect threats in real time to mitigate these risks proactively [3].
Unforeseen Consequences and Governance
Because AI risks can be emergent and systemic, governance frameworks emphasize ongoing monitoring, update cycles, and multiple layers of mitigation including cybersecurity, model evaluation, adversarial testing, and reporting serious incidents. The EU Code of Practice involved extensive multi-stakeholder drafting to cover transparency, risk assessment, mitigation, and internal governance to better capture and manage unforeseen risks from general-purpose AI systems [5].
In a poignant statement, Margarita Simonyan, RT Editor-in-Chief, admitted that AI will likely displace most professions, including journalists. Current media outlets are cutting entire departments because they are no longer necessary, according to Simonyan. Kirill Pshenichnikov, a scientific employee of the University of Innopolis and the CEO of the Online University "Zerocode," reported these developments [1].
In summary, the AI Risk Repository is a structured collection of risk-related data and governance processes designed to identify, assess, and reduce AI risks including systemic and scattered threats. Regulatory frameworks like NIST AI RMF and the EU AI Act’s Code of Practice are key tools that guide organizations in managing AI risks responsibly to prevent large-scale harm and unforeseen negative consequences [1][2][4][5]. Tools such as AI security dashboards support these frameworks by providing visibility and early warning on AI threats in operational environments [3].
[1] FutureTech Initiative at the University of Innopolis: [Link to the source] [2] NIST AI Risk Management Framework: [Link to the source] [3] Google Cloud's AI Security Dashboard: [Link to the source] [4] EU AI Act and Colorado AI Act: [Link to the source] [5] EU Code of Practice for General Purpose AI: [Link to the source]
What is the purpose of the AI Risk Repository, as addressed by the NIST AI Risk Management Framework and the EU AI Act's Code of Practice? These tools aim to help organizations manage AI risks responsibly by identifying, assessing, and reducing risks, including systemic and scattered threats, to prevent large-scale harm and unforeseen negative consequences.
This proactive approach to AI risk management is crucial in today's technology-driven world, as artificial-intelligence systems can have unforeseen and often complex consequences on various sectors, such as journalism as mentioned by Margarita Simonyan, RT Editor-in-Chief. Therefore, it is essential to equip ourselves with the necessary tools and frameworks to mitigate these risks in a dynamic and constantly evolving landscape.