Interviewer: Jillian York

Dr. Carolina Are is an Innovation Fellow at Northumbria University Centre for Digital Citizens. Her research primarily focuses on the intersection between online abuse and censorship. Her current research project investigates Instagram and TikTok’s approach to malicious flagging against ‘grey area’ content, or content that toes the line of compliance with social media’s community guidelines.

She is also a blogger and creator herself, as well as a writer, pole dance instructor and award-winning activist. Dr. Are sat down for an interview with EFF’s Jillian York to discuss the impact of platform censorship on sex workers and activist communities, the need for systemic change around content moderation, and how there’s hope to be found in the younger generations.

Jillian York: Can you introduce yourself and tell us a bit about your work? Specifically, can you give us an idea of how you became a free speech advocate?

Dr. Carolina Are: Sure, I’m Carolina Are. I’m an Innovation Fellow at Northumbria University Centre for Digital Citizens, and I mainly focus on deplatforming, online censorship, and platform governance of speech, but also of bodies, nudity, and sex work.

I came to it from a pretty personal and selfish perspective, in the sense that I was doing my PhD on the moderation of online abuse and conspiracy theories while also doing pole dance as a hobby. At the time my social media accounts were separate because I still didn’t know how I wanted to present to academia. So I had a pole dance account on Instagram and an academic account on Twitter. This was around the time when FOSTA/SESTA was approved in the US. In 2019, Instagram started heavily shadowbanning – algorithmically demoting – pole dancers’ content. And I was in a really unique position: I was observing the moderation of stuff that wasn’t actually getting moderated and should have been – it was horrible, abusive content – while my videos were getting heavily censored and were not reaching viewers anymore. So I started getting more interested in the moderation of nudity and the political circumstances that surrounded that censorship. And I started creating a lot of activism campaigns about it, including one that resulted in Instagram directly apologizing to me and to pole dancers for the shadowbanning of pole dance.

So, from there, I kind of shifted my public-facing research to the moderation of nudity, sexual activity, sexuality, and sexual solicitation in general. And I then unified my online persona to reflect both my experiences and my expertise. I guess that’s how I came to it. It started with me, with what happened to me and the censorship my accounts faced. Becoming aware of the censorship of sex work, and of people that have it a lot worse than me, introduced me to a lot of fantastic activist networks that were protesting it, and that massively changed the direction of my research.

York: How do you personally define deplatforming and what sort of impact does it have on pole dancers, on sex workers, on all of the different communities that you work with?

Are: What I would define as deplatforming is the removal of content or of a full account from a social media platform or an internet platform. This means that you lose access to the account, but you also lose access to any communications that you may have had through that account – if it’s an app, for instance. And you also lose access to your content on that account. So all of that has massive impacts on people that work and communicate and organize through social media or through their platforms.

Let’s say, if you’re an activist and your main activist network is through platforms – maybe because people have a public-facing persona that is anonymous and they don’t want to give you their data, their email, their phone number – you lose access to them if you are deplatformed. Similarly, if you are a small business or a content creator, and you promote yourself largely through your social media accounts, then you lose your outlet of promotion. You lose your network of customers. You lose everything that helps you make money. And, on top of that, for a lot of people, as a few of the papers I’m currently working on are showing, platforms are of course an office – a space where they do business – but at the same time they have this hybrid emotional/community role with the business added on top.

So that means that yes, you lose access to your business, you lose access to your activist network, to educational opportunities, to learning opportunities, to organizing opportunities – but you also lose access to your memories. You lose access to your friends. Because of my research, I’m one of those people who become intermediaries between platforms like Meta and people whose accounts have been deleted. I sometimes put them in touch with the platform in order for them to restore mistakenly deleted accounts. And just recently I helped someone who – without my asking, because I do this for free – ended up PayPal-ing me a lot of money because I was the only person that helped while the platform’s infrastructure and appeals were ineffective. And what she said was, “Instagram was the only platform where I had pictures with my dead stepmother, and I didn’t have access to them anymore and I would have lost them if you hadn’t helped me.”

So there is a whole emotional and financial impact that this has on people. Because, obviously, you’re very stressed out and worried and terrified if you lose your main source of income or of organizing or of education and emotional support. But you also lose access to your memories and your loved ones. And I think this is a direct consequence of how platforms have marketed themselves to us. They’ve marketed themselves as the one-stop shop for community, or for becoming a solo entrepreneur. But then they’re like, oh, only for those kinds of creators, not for the creators that we don’t care about or we don’t like. Not for the accounts we don’t want to promote.

York: You mentioned earlier that some of your earlier work looked at content that should be taken down. I don’t think either of us are free speech absolutists, but I do struggle with the question of authority and who gets to decide what should be removed, or deplatformed—especially in an environment where we’re seeing lots of censorial bills worldwide aimed at protecting children from some of the same content that we’re concerned about being censored. How do you see that line, and who should decide?

Are: So that is an excellent question, and it’s very difficult to find one straight answer, because I think the line moves for everyone and for people’s specific experiences. I think what I’m referring to is something that is already covered by, for instance, discrimination law. So, outright accusing people of a crime that it has been proven offline they haven’t committed. When it has been proven that that is not the case, and someone goes and says it online to insult or harass or offend someone – and that becomes a sort of mob violence – then I think that’s when something should be taken down. Because there’s direct offline harm to specific people that are being targeted en masse. It’s difficult to find the line, though, because that could happen even with something like #MeToo, when things ended up being true about certain people. So it’s very difficult to find the line.

I think that platforms’ approach to algorithmic moderation – blanket deplatforming for things – isn’t really working when nuance is required. The case that I was observing was very specific because it started with a conspiracy theory about a criminal case, and then people that believed or didn’t believe in that conspiracy theory started insulting each other and everybody that was involved with the case. So I think conspiracy theories are another interesting scenario, because you’re not directly harassing anyone if you say, “It’s better to inject bleach into your veins instead of getting vaccinated.” But at the same time, sharing that information can be really harmful to public beliefs. If we’re thinking about what’s happening with measles, certain illnesses are coming back because people are so against vaccines from what they’ve read online. So I think there’s quite a system offline already for information that is untrue, for information that is directly targeting specific groups and specific people in a certain manner. What I’m seeing a lot with online legislation is that it’s becoming very broad, and platforms apply it in a really broad way because they just want to cover their backs and don’t want to be seen to be promoting anything that might be remotely harmful. But what’s not happening – or what’s happening in a less obvious fashion – is looking at what we already have and thinking about how we can apply it online in a way that doesn’t wreck the infrastructure that we have. And I think that’s very apparent with the case of conspiracy theories and online abuse.

But if we move back to the people we were discussing – sex workers, people that post nudity online, and porn and stuff like that. Porn has already been viewed as free speech in trials from the 1950s, so why are we going back to that? Instead of investing in that and forcing platforms to over-comply, why don’t we invest in better sex education offline so that people who happen to access porn online don’t think that that is the way sex is? Or if there’s actual abuse being committed against people, why do we not regulate with laws that are about abuse, and not about nudity and sexual activity? Because being naked is not the same as being trafficked. So, yeah, I think the debate really lacks nuance and lacks ad hoc application, because platforms are more interested in blanket approaches, which are easier for them to apply.

York: You work directly with companies, with platforms that individuals and communities rely on heavily. What strategies have you found to be effective in convincing platforms of the importance of preserving content or ensuring that people have the right to appeal, etc.?

Are: It’s an interesting one, because I personally have found very few things to be effective. And even when they are apparently effective, there’s a downside. In my experience, for instance, because I have a past in social media marketing, public relations and communications, I always go the PR route – which is making platforms feel bad for something. Or, if they don’t feel bad personally, I try to make them look bad for what they’re doing, image-wise. Because at the moment their responses to everything haven’t been related to them wanting to do good; they’ve been related to them feeling public and political pressure for things that they may have gotten wrong. So if you point out hypocrisies in their moderation, if you point out that they’ve… misbehaved, then they do tend to apologize.

The issue is that the apologies are quite empty – it’s PR spiel. I think sometimes they’ve been helpful, in the sense that for quite a while platforms denied that shadowbanning was ever a thing. And the fact that I was able to make them apologize for it by showing proof – even if it didn’t really change the outcome of shadowbanning much, although now Meta does notify creators about it, which was not something that was happening before – really showed people that they weren’t crazy. The gaslighting of users is quite an issue with platforms, because they will deny that something is happening until it is too bad for them to deny it. And I think the PR route can be quite helpful to at least get them to acknowledge that something is going on. Because if something is not even acknowledged by platforms, you’ve got very little to stand on when you question it.

The issue is that the fact that platforms respond in a PR fashion shows a lack of care on their part, and also sometimes leads to changes which sound good on paper, or look good on paper, but when you actually look at their implications it becomes a bit ridiculous. For instance, Nyome Nicholas-Williams, who is an incredible activist and plus-size Black model – so someone who is terribly affected by censorship, because she’s part of a series of demographics that platforms tend to pick up more when it comes to moderation. She fought platforms so hard over the censorship of her content that she got them to introduce this policy about breast-cupping versus breast-grabbing. The issue is that now there is a written policy where you are allowed to cup your breasts, but if you squeeze them too hard you get censored. So this leads to this really weird scenario where an internet company is creating norms of how acceptable it is to grab your breasts, or which way you should be grabbing your breasts. Which becomes a bit ridiculous, because they have no place in saying that, and they have no expertise in saying that.

So I think sometimes it’s good to just point out that hypocrisy over and over again, to at least get them to acknowledge that something is going on. But I think that for real systemic change, governments need to step in to treat online freedom of speech as real freedom of speech and create checks and balances for platforms, so that they can be – if not fined – at least held accountable for the things they censor, in the same way that they are held accountable for promoting harmful content.

York: This is a moment in time where there’s a lot of really horrible things happening online. Is there anything that you’re particularly hopeful about right now? 

Are: I think something that I’m very, very hopeful about is that the kids are alright. Something that’s quite prominent in the discourse around the moderation of nudity is “Won’t somebody think of the children? What happens if a child sees a boob?” or something absolutely ridiculous. But every time I speak with younger people – whether that’s through public engagement stuff that I do, like a public lecture, or seminars I teach, or when I communicate with them online – they seem incredibly proficient at finding out when an image is doctored, or when an image is fake, or even when a behavior by some online people is not okay. They’re incredibly clued up about consent, and they know that porn isn’t always real sex. So I think we’re not giving kids enough credit for what they already know. Of course, it’s bleak sometimes to think these kids are growing up with quantifiable notions of popularity and that they can see a lot of horrible stuff online. But they also seem very aware of consent, of bodily autonomy and of what freedoms people should have with their online content – every time I teach undergrads and younger kids, they seem to be very clued up on pleasure and sex ed. So that makes me really hopeful. Because while I think a lot of campaigners – definitely the American Evangelical far-right, and also the far-right that we have in Europe – would see kids as these completely innocent, angelic people that have no say in what happens to them, actually quite a lot of them do know, and it’s really nice to see. It makes me really hopeful.

York: I love that. The kids are alright indeed. I’m also very hopeful in that sense. Last question– who is your free speech hero? 

Are: There are so many it is really difficult to find just one. But given the time that we’re in, I would say anyone still doing journalism and education in Gaza… from me, from the outside world, just, hats off. I think they’re fighting for their lives while they’re also trying to educate us – from the extremely privileged position we’re in – about what’s going on. And I think that’s just incredible, given what’s happening. So I think at the moment I would say them.

Then in my area of research in general, there are a lot of fantastic research collectives and sex work collectives that have definitely changed everything I know. So I’m talking about Hacking//Hustling, Dr. Zahra Stardust in Australia. But also in the UK we have some fantastic sex worker unions, like the Sex Worker Union and Ethical Stripper, who are doing incredible education through platforms despite being censored all the time. So, yeah, anybody that advocates for free speech from the position of not being heard by the mainstream I think does a great job. And I say that, of course, when it comes to marginalized communities, not white men claiming that they are being censored from the height of their newspaper columns.