AI scientists covertly executed an experiment on Reddit users, testing their ability to alter opinions; findings reveal unsettling results.

Chatbots Covertly Deployed by University of Zurich Researchers on the r/changemyview Subreddit Proved More Effective Than Humans at Changing Users' Opinions.

Researchers from the University of Zurich covertly deployed persuasive chatbots on the r/changemyview forum, where the bots changed users' viewpoints at a higher rate than human commenters.

Reddit Seeks Legal Action Against University of Zurich Over AI Experiment

Reddit is planning to sue the University of Zurich over an experiment that deployed artificial intelligence chatbots on its platform to covertly influence user opinions. The researchers aimed to determine whether AI could be used to sway public sentiment, with the bots adopting various personas, including a domestic violence counselor and a Black man critical of the Black Lives Matter movement.

The study involved posting more than 1,700 AI-generated comments across the r/ChangeMyView subreddit, home to nearly 4 million users who debate contentious topics. Alongside these bots, another AI system analyzed user profiles so that responses could be tailored for maximum persuasiveness.

Moderators of the r/ChangeMyView subreddit were later informed of the experiment by the Zurich researchers and given a draft of the study's initial findings, which showed that the AI responses were three to six times more persuasive than those written by humans. Reddit's Chief Legal Officer, Ben Lee, condemned the experiment's moral and legal implications, stating that it violated academic and human rights norms as well as Reddit's user agreement and rules.

In response to the controversy, the University of Zurich announced that it would not publish the study's findings and committed to strengthening its ethics committee's review process, including coordinating with online communities before conducting experiments that might expose them to unwitting participation.

Concerns about AI's role in online discourse persist: it was recently reported that OpenAI's GPT-4.5 large language model could pass the Turing test, convincing trial participants that they were conversing with another human 73% of the time. Such developments hint at the possibility of AI chatbots increasingly dominating the production of internet content, a scenario associated with the "dead internet" theory.

However, it is important to note that the "dead internet" theory remains a conspiracy theory, at least for now.

In the aftermath of this incident, Reddit is reportedly accelerating plans for user verification tools to help distinguish between humans and sophisticated AI in online communities. The development underscores the urgency for clearer guidelines and stricter oversight of AI experimentation in public forums to safeguard users and maintain the integrity of online communities.

The University of Zurich's experiment has amplified concerns about AI's influence on large discussion platforms like Reddit. Reddit's proposed user verification tools aim to keep AI-generated content from overwhelming online communities, addressing the very outcome the "dead internet" theory describes.

The incident makes clear the need for stricter regulations and guidelines governing AI-based studies and experiments, given the ethical and legal implications of conducting such research on unwitting participants.
