On Tumblr & the Woes of Automoderation

So it looks like Tumblr is royally screwing the pooch (possibly a bad choice of words) with this “blanket ban on adult content” that's going on over there right now, which seems to have left the site determined to drive a wedge between the service and its user base. While the circumstances are different, somewhat depressingly this has its roots in a trend that is common across the modern, social-media-oriented internet, and is simply a new reaction to an existing problem.

So what prompted this whole thing was Apple pulling the Tumblr app from the App Store after finding blogs hosting ILLEGAL content that wasn’t moderated against. Now, we can discuss Apple’s self-appointed Moral Arbiter role on iOS another time (and believe me, I have words there); the key phrase here is ILLEGAL CONTENT. Apparently the content in question wasn’t automoderated against because it didn’t match an industry database, so it got ignored. And Tumblr was apparently fine with that UNTIL Apple got wind and pulled the app.

This suggests to me that the existing problem, alongside the ILLEGAL content of course, was that Tumblr’s automoderation was lacking, since it didn’t pick this up at all. From what I’ve read, the conjecture is that it’s something improved (or human) moderation could have caught. But improved-or-human moderation is expensive to implement. Notice how it wasn’t until Apple pulled the app that Tumblr did anything at all, because an app vanishing threatens their bottom line. Remember, on a social network, you’re not the customer, you are the product. ADVERTISERS are the customer, and lack of access to advertisers means no income.

So faced with this issue, Tumblr seems to have gone into full-on panic mode and decided that rather than improve things to root out actual ILLEGAL content, they’re going to remove (in their mind; we could have a lengthy discussion on this too) the chances of it happening again by banning ADULT content outright.

Here’s the problem, though: How do you define “ADULT”? The line drawn seems to be stuff that’s considered NSFW, but that’s a hugely subjective area that can easily be used to target art, some subcultures, etc. etc. And It Has.

Just take a look around social networking and you’ll see a raft of images of stuff that’s been flagged by their automoderation as NSFW and, presumably, will be totally verboten after the ban is fully implemented. It includes really innocuous art, articles discussing pornography and, perhaps most hilariously, the announcement of the impending ban from Tumblr’s own staff. There have also been worries about it going after vulnerable groups such as the LGBTQ community and similar, as well as this whole bizarre thing about “Female Presenting Nipples”, which… has inspired much debate. Ultimately the question comes down to: what will the automoderation consider NSFW and ban?

And notice that this continues to come down to Automatic Moderation. The initial problem seems to have been exacerbated by it not being good enough, and the very well-founded worry circulating right now is about what that same moderation considers bad or good. Yet Tumblr doesn’t seem interested in actually fixing their system, or better yet employing responsible humans in addition to automatic systems; they’ve decided to go with the knee-jerk and the easy, regardless of what harm it might cause.

Ultimately, though, the algorithms will remain in charge, because it’s cheaper than actually taking direct responsibility for what goes on in the pages of your website. If that sounds familiar, it’s the exact same logic behind the dubious copyright strikes on YouTube, the very iffy moderation of Twitter, and any number of weird and shitty goings-on on many websites right now. Time and time again, it seems that sites would rather take steps that push against their user base than actually spend time and money fixing the problems that present themselves.

The way these situations arise is fairly obvious, though, since the growth of big user-generated-content sites has made old-fashioned manual moderation practically impossible. I believe YouTube passed 24 hours of content posted every hour some time ago, for instance. The problem is that the supposed solution, in true “Wanting the Age of the Flying Car without the Age of the Flying Car” fashion, became handing it all to the algorithms, and we’re now living with the issues that entails. A solution would be simple, and it wouldn’t even require doing away with the existing framework: for any moderation issues (larger sites could limit this to severe or disputed cases if the volume is too much), hand what the algorithm picks up to a human. Y’know, actually apply responsibility at some point. But that costs money, so it’ll take a major paradigm shift for it to happen, sadly.

Ultimately, Tumblr will probably go through with their decision, and ultimately they’re free to do so. It’ll mean they’ll haemorrhage users, and it might be the final nail in the coffin of the platform (which I stopped using years ago) that relegates it to a footnote in history. What’s going to be interesting is watching the other platforms and seeing how they react. The smart ones will hopefully realise the problem and up their responsibility game; chances are most will do nothing. The worrying thing is that this sets a precedent, and other companies will start to impinge on their user base rather than change how things are done. It’s an unfortunate side-effect of the trust we’ve put in uncaring companies who’ve been building the modern internet for us, and one we probably need to work on fixing as a user base.

—Would rather have a better internet.
