
The UK's Online Safety Bill Has Been Amended for the Worse; Parliament Should Reject It.



The UK's recently passed Online Safety Act, formerly known as the Online Safety Bill, aims to regulate online services, protect children, and remove illegal speech. However, the Act raises several concerns related to free expression, privacy, competition, and user rights.

Free Expression Concerns

One of the main points of contention is the requirement for social media platforms to proactively screen and restrict users' content deemed illegal. Critics argue that this amounts to prior restraint on speech and could interfere with freedom of expression in unpredictable ways. The Act does not require platforms to notify users when content is blocked or to explain why, an omission that could enable extensive, unaccountable censorship of public communication and debate.

The Act focuses on removing illegal content such as terrorist material and child sexual abuse content, but it no longer requires platforms to shield adults from "legal but harmful" content. While some view this change as a positive compromise, there remains a risk that even lawful offensive speech will be suppressed by platforms seeking to avoid fines. Because the penalties for under-removal are severe, platforms have an incentive to err on the side of blocking "awful but lawful" content, potentially chilling free speech and narrowing the diversity of online discourse.

Privacy Concerns

The Act and related laws could compromise privacy, particularly through powers that could compel platforms to scan private messages, undermining end-to-end encryption. The prospect of such surveillance may lead users to self-censor, producing a chilling effect on free speech online. The Act's broad content-scanning and moderation powers also raise questions about how platforms will handle user data and communications in order to comply.

Competition and Market Impact

The Act's strict duties and investigative powers can distort competition by burdening smaller or less-resourced platforms disproportionately, to the benefit of larger firms better equipped to comply. The additional operational obligations, such as risk assessments and compliance mechanisms, may raise barriers to entry and reduce market diversity.

User Rights

The Act requires platforms to provide mechanisms for users to report illegal content and to appeal content removals, suspensions, or bans. Critics, however, point to gaps in transparency and recourse: users are often not informed about content takedowns or given reasons for them, limiting accountability. Legal protections for users facing defamation threats or SLAPP (strategic lawsuit against public participation) suits are also relevant. The Act has been criticized for not sufficiently addressing such issues, since SLAPPs can stifle public-interest criticism through prohibitive legal costs, pointing to a need for further legal aid and anti-SLAPP measures outside this legislation.

Additional Child Safety Duties

The Act imposes duties to protect children from both illegal content and harmful but legal content, such as pornography and material promoting suicide, self-harm, and bullying. Platforms must therefore manage not only the content itself but also how their algorithms may amplify harmful material's reach among children, which adds regulatory complexity.

In conclusion, the Online Safety Act presents a balancing challenge: ensuring online safety, especially for children, while safeguarding free expression and privacy and maintaining a fair, competitive online ecosystem. Critics warn that unless it is implemented carefully and with robust safeguards, the Act risks over-censorship, privacy erosion, reduced transparency, and anti-competitive effects.

  1. The automated content moderation that the Online Safety Act effectively requires may inadvertently suppress lawful offensive speech through over-censorship, creating a potential chilling effect on free expression.
  2. The broad scanning and moderation powers granted to platforms under the Online Safety Act pose a threat to users' privacy, as it remains unclear how user data and communications will be handled and managed.
  3. The Online Safety Act's strict duties and investigative powers may disproportionately affect smaller or less-resourced platforms, potentially favoring larger, better-equipped firms, which could lead to a less competitive online market.
  4. The Online Safety Act falls short in addressing transparency, accountability, and legal protections for users, particularly regarding defamation threats and SLAPP lawsuits, leaving a need for further legal aid and anti-SLAPP measures outside this legislation.
