Why won’t YouTube and Google consider moderators to tackle online hate?
https://www.theguardian.com/commentisfree/2017/mar/16/youtube-google-hate-speech-moderators

In preparation for an article I am writing, I spent much of the other day researching neo-Nazi music. I explored the music of skinhead bands such as Skrewdriver and Bound for Glory, national socialist black metal groups such as Graveland, and some of the more bizarre manifestations of white power music (German neo-Nazi hip-hop is apparently a thing). And how did I do this research? Did I skulk in dodgy pubs with bull-necked men with shaved heads and bulldogs? No, I used YouTube and Google.

I’m aware, though, that users of social media platforms are not confined to Jewish sociologists such as myself. And so throughout my research I was acutely conscious that the ease of access to hateful music and other forms of hate speech is an incredible boon to certain groups.

At the same time as I was both exploiting and worrying about openly available far-right material on the internet, the Commons home affairs committee was criticising internet giants, including representatives of Facebook, Twitter and Google, for not taking tough enough action on such material. The MPs pointed to the failure of YouTube to take down videos, including one of David Duke accusing Jews of “white genocide”, and criticised Facebook pages calling for a ban on Islam. It was a tense exchange, with Yvette Cooper accusing the companies of “commercial prostitution” and the Facebook representative tying himself in knots attempting to explain why a page called “Ban Islam” constitutes criticism of a religion rather than an attack on Muslims.

The problem is that these companies vacillate between positioning themselves as content-agnostic platforms and as publishers. They certainly have the financial benefits of publishers, but the openness of their platforms also gives them plausible deniability of responsibility for what users publish should they need it. It is striking how successful YouTube has been in ensuring that it is not awash with porn, and how quick it is to take down copyrighted material, compared with how unsuccessful social media companies have been (insofar as they have tried) in limiting the availability of racist material and other kinds of hate speech.

But while social media companies certainly need to face up to their responsibilities as publishers, and while excuses that they do not have the capacity to police content more than they currently do won’t wash, there are some genuine dilemmas that mean there is no simple response to the availability of certain sorts of material online.

One difficulty that pertains particularly to antisemitic material is that there is a long tradition of expressing antisemitism in coded form rather than as open hatred. “Exposés” of the Rothschilds and George Soros are commonplace online – and are often shared by people with no antisemitic intent – and require specific knowledge to decode and combat. Holocaust denial material can be cloaked in subtle formulations that might be missed by the casual browser. Any attempt by internet companies to restrict the availability of such material would require endless, arduous vigilance and a high degree of specialist expertise.

Just as problematic is the risk of “collateral damage” in any attempt to banish certain kinds of content. Take extreme metal. Some bands delight in producing music that flirts with dangerous themes.
Slayer’s Angel of Death – a song about the Auschwitz torture-doctor Josef Mengele – is not a pro-Nazi song, but the dividing line between it and neo-Nazi rock is uncomfortably porous. Similarly, genres such as noise, power electronics and industrial are predicated on exploring the darkest excesses of human behaviour. Acts such as Whitehouse and Throbbing Gristle, who are an essential part of modern avant-garde music, could be caught in a “dragnet” approach to hateful material.

There is also a case against banishing whole classes of hateful views from mainstream platforms. While it is true that Twitter and the rest can act as an echo chamber for the far right, at least it is a porous echo chamber where they can be challenged. The “alt-right” Twitter alternative Gab may or may not succeed, but if it does we may lose the ability to confront the haters.

Internet companies need to acknowledge and embrace the fact that they are publishers. That means hiring staff with the necessary expertise – and enough of them – to make the fine judgments as to what to take down, as well as the easy calls. They need to work towards a situation in which the far right and other hate groups are harried enough that the availability of their work is limited, but not harried to the extent that they have no option but to turn to an alternative, entirely unpoliced platform.

Still, I can’t help but feel that sometimes hate groups only undermine themselves when they spread their work online. Take the laughable spectacle of Nick Griffin telling the nativity story, or this ridiculous attempt at “patriotic” German reggae: thanks to the irresponsibility of the internet giants, I am laughing like a drain.