Unraveling the Enigma: Investigating the Black Box of Algorithmic Systems
In the field of algorithmic systems, unclear terminology and the varying maturity of assessment approaches have left their implementation under-specified. This is compounded by disagreement among different communities over what these terms actually mean.
To address this, workshops are being held to further the conversation across domains and to identify shared needs, methodologies, challenges, and solutions for the regulatory inspection of algorithmic systems across sectors. These discussions focus on two tools in particular: algorithm audits and algorithmic impact assessments.
Bias Audits vs. Regulatory Inspections of Algorithms
A bias audit is a proactive, systematic examination aimed at identifying unfair outcomes or disparate impacts within AI models. It pays particular attention to proxy discrimination, which can arise even when protected attributes are not explicitly used as model inputs. Bias audits emphasise fairness by design and require detailed documentation of corrections and ongoing monitoring to manage reputational and compliance risks.
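As a concrete illustration of the statistical side of such an audit, the sketch below computes a disparate impact ratio over model outcomes and compares it against the four-fifths rule, a common (though jurisdiction-dependent) heuristic. The function, data, and group labels are illustrative assumptions, not a prescribed audit methodology.

```python
def disparate_impact_ratio(outcomes, groups, positive=1,
                           privileged="A", unprivileged="B"):
    """Ratio of positive-outcome rates between an unprivileged and a
    privileged group; values below 0.8 are a common red flag
    (the 'four-fifths rule'). Group labels here are hypothetical."""
    rates = {}
    for g in (privileged, unprivileged):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(1 for o in selected if o == positive) / len(selected)
    return rates[unprivileged] / rates[privileged]

# Toy example: model approvals (1) vs. denials (0) for two groups.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(f"Disparate impact ratio: {disparate_impact_ratio(outcomes, groups):.2f}")
```

In practice, an audit would repeat this kind of test across multiple outcome definitions and candidate proxy variables, and document any corrections applied.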
In contrast, regulatory inspections are formal, often reactive, oversight activities carried out or triggered by external regulatory bodies. They focus on compliance with legal standards and may include audits of bias, but involve broader scrutiny of a model’s adherence to regulations, operational transparency, and accountability.
Algorithmic Risk Assessment vs. Algorithmic Impact Evaluation
Algorithmic risk assessment is a structured, internal process aimed at identifying, categorising, and prioritising risks related to AI systems before or during deployment. It covers a broad spectrum of risks such as bias, security vulnerabilities, explainability failures, adversarial attacks, and regulatory non-compliance.
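A minimal sketch of the scoring and prioritisation step, assuming a simple likelihood × severity matrix with an escalation threshold; the risk names, five-point scales, and threshold below are hypothetical choices for illustration rather than any standard.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    severity: int    # 1 (negligible) .. 5 (critical)

    @property
    def score(self) -> int:
        # Simple likelihood x severity scoring, an illustrative choice.
        return self.likelihood * self.severity

ESCALATION_THRESHOLD = 12  # hypothetical cut-off for escalation to a risk owner

register = [
    Risk("Proxy discrimination in training data", likelihood=4, severity=4),
    Risk("Adversarial input manipulation", likelihood=2, severity=5),
    Risk("Unexplainable individual decisions", likelihood=3, severity=3),
]

# Prioritise: highest scores first, flag anything at or above the threshold.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    flag = "ESCALATE" if risk.score >= ESCALATION_THRESHOLD else "monitor"
    print(f"{risk.score:>2}  {flag:<8}  {risk.name}")
```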
Algorithmic impact evaluation, on the other hand, is an analysis conducted to assess the potential or actual effects of an AI system on stakeholders and society, including ethical, social, economic, and environmental impacts. This involves measuring outcomes, unintended consequences, and fairness implications post-deployment.
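One sketch of what post-deployment outcome measurement can look like: comparing positive-outcome rates per stakeholder group before and after a system goes live. The group labels and decision logs here are hypothetical.

```python
def outcome_rates(records):
    """Positive-outcome rate per stakeholder group.
    records: iterable of (group, outcome) pairs with outcome in {0, 1}."""
    totals, positives = {}, {}
    for group, outcome in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical decision logs before and after deploying the model.
baseline = [("urban", 1), ("urban", 1), ("urban", 0),
            ("rural", 1), ("rural", 0), ("rural", 0)]
deployed = [("urban", 1), ("urban", 1), ("urban", 1),
            ("rural", 0), ("rural", 0), ("rural", 1)]

before, after = outcome_rates(baseline), outcome_rates(deployed)
for group in before:
    print(f"{group}: {before[group]:.2f} -> {after[group]:.2f} "
          f"(shift {after[group] - before[group]:+.2f})")
```

A real evaluation would pair such quantitative shifts with qualitative stakeholder consultation, as noted in the table below.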
The following table summarises the key differences between these approaches:
| Aspect | Bias Audit | Regulatory Inspection | Algorithmic Risk Assessment | Algorithmic Impact Evaluation |
|--------|------------|-----------------------|-----------------------------|-------------------------------|
| Purpose | Detect and mitigate unfair bias/discrimination | Check legal and regulatory compliance | Identify and manage AI-specific risks | Assess societal and stakeholder effects |
| Scope | Fairness, bias, discrimination | Broad regulatory compliance, accountability | Multi-risk (bias, security, explainability, etc.) | Social, ethical, economic impacts |
| Timing | Continuous monitoring, pre/post deployment | Usually periodic or triggered by incidents | Proactive, before/during AI lifecycle stages | Usually post-deployment or during impact reviews |
| Control | Internal governance, C-suite owned | External regulatory authorities | Internal risk owners (data scientists, compliance) | Internal and external evaluators |
| Methods | Statistical testing for disparate impact, proxy variable analysis, monitoring | Documentation audit, version control, legal review | Risk mapping, scoring, thresholds, escalation | Qualitative & quantitative impact studies, stakeholder consultation |
| Outcomes | Remediation plans, model corrections | Compliance enforcement, penalties | Risk mitigation, prioritisation, governance | Policy adjustments, transparency reports |
These distinctions clarify that bias audits and regulatory inspections focus more on fairness and compliance enforcement, while risk assessments and impact evaluations serve as broader, strategic tools to manage and understand AI system risks and consequences comprehensively.
The report aims to inform policy conversations about algorithmic systems. Data & Society is working on a paper examining the challenges of translating existing impact assessment models to algorithmic systems, and DataKind UK is collaborating to turn the findings into practical, accessible advice for social change organisations.
A series of workshops is being hosted to discuss the regulatory inspection of algorithms in digital media platforms, pricing and competition, and equalities. The report may also be useful for anyone who creates, commissions, or interacts with an algorithmic system.