Discord is updating its community guidelines with a clause that bans sharing information it deems “false or misleading” and “likely to cause physical or societal harm” if acted upon. The rule could apply to lots of information, but Covid-19 rhetoric is the primary example given. The chat service doesn’t want to be a source for “anti-vaccination content” or advice not accepted by the medical community, such as the use of unproven home remedies.
Put briefly, Discord will not allow individuals to “post, promote, or organize communities around false or misleading health information that is likely to result in harm,” wrote Discord senior platform policy specialist Alex Anderson in a blog post explaining the update.
Discord defines false or misleading health information as being any health information which “directly and unequivocally contradicts the most recent consensus reached by the medical community,” and it offers a surprising amount of detail on what it means by that.
The following is a list of topics Discord warns not to make “false or misleading” claims about:
- the safety, side effects, or efficacy of vaccines;
- the ingredients, development, or approval of vaccines;
- alternative, unapproved treatments for disease (including claims that promote harmful forms of self-medication, as well as claims advocating vaccine refusal or alternatives);
- the existence or prevalence of a disease;
- the transmission or symptoms of a disease;
- health guidance, advisories, or mandates (including false claims about preventative measures and actions that could hinder the resolution of a public health emergency);
- the availability of or eligibility for health services; and,
- content that implies a health conspiracy by malicious forces (including claims that could cause social unrest or prompt the destruction of critical infrastructure).
On its own, the list could be construed as a blanket prohibition on expressing distrust in any local health mandate or even recommending “alternative” traditional medicines. However, Anderson says that Discord will consider context like intent, and won’t take action unless it believes messages are “likely to cause some form of harm.”
“This policy is not intended to be punitive of polarizing or controversial viewpoints,” he writes. “We allow the sharing of personal health experiences; opinions and commentary (so long as such views are based in fact and will not lead to harm); good-faith discussions about medical science and research; content intended to condemn or debunk health misinformation; and satire and humor that obviously and deliberately intends to mock false or misleading health claims.”
People who hold polarizing or controversial viewpoints will probably disagree with the claim that they’re not being targeted, although it bears mentioning that Discord users who mainly stick to smaller groups may not notice any change, regardless of what they say on the platform.
When I spoke to Discord about privacy in 2019, it told me that it doesn’t proactively monitor the text and voice chat of any given server—with over 150 million monthly active users, how could it? Instead, moderators largely respond to user reports, which are most likely to come from big public servers.
I think it remains unlikely that Discord is scanning the chat logs of every 20-person server looking for narratives related to Covid-19 vaccines and microchips, although there is some precedent for proactive moderation on Discord. In 2018, after a few publications reported that the relative privacy offered by Discord was turning it into a white supremacist hideout, the company made a publicized effort to rid itself of hate group servers. Following that example, it’s possible that Discord will seek out and shut down servers that openly advertise themselves as anti-vax hubs, if any such servers exist. (If I had to guess, I’d say they do.)
The new Discord policies go into effect on March 28. “Malicious impersonation” is also forbidden by the new guidelines, with the note that “satire and parody are okay,” and Discord has given itself permission to consider “relevant off-platform behaviors” when acting on user reports, such as “membership or association with a hate group, illegal activities, and hateful, sexual or other types of violent acts.”
Discord also says it will be cracking down on “false, malicious, or spammy” reports. “If you are found to be reporting in bad faith, we may take action against your account,” the company says.
As someone who doesn’t use Discord as a soapbox for vaccine-related commentary one way or another, the news mostly serves as a reminder that conversations which happen on the platform aren’t entirely private, even on so-called private servers. It’s a moderated social network, so if someone files a report, it is possible for Discord mods to look at your chat logs and issue warnings, suspensions, or bans. For those who want Discord-like features without joining a social network, companies like TeamSpeak still offer paid, private VoIP servers. (For the moment, I’m not too worried about the privacy of my D&D group’s endless scheduling conversations.)