“Ghostbusters” star Leslie Jones is shown in April at an event in Las Vegas. (Photo by Chris Pizzello/Invision/AP, File)

The racist, sexist harassment of “Ghostbusters” star Leslie Jones on Twitter, and Twitter’s subsequent decision to permanently ban one of the provocateurs who set an Internet mob on her, are only the latest chapter in the troubled social media service’s attempts to become something other than a permanent cesspool. But reading Jesse Singal’s typically smart analysis of how Twitter responds to harassment based on users’ relative celebrity, rather than developing clear, transparent principles that it’s capable of applying to all users, made me wonder whether the only version of Twitter that many of us actually want is a purely fantastical one — or at least one that’s impossible to achieve given the present, fallen state of humanity.

An obvious, if time- and capital-consuming, solution to Twitter’s harassment problem would be for the company to hire enough staff to review reports of abuse and harassment in a timely manner. I’m not sure how many people that would require, what you’d have to pay them to read racist, sexist ugliness all day for months or even years at a time, or what kind of burnout and turnover those employees would experience. But there’s no question that the human capital costs would be considerable, and Twitter, despite its influence on culture and policy, isn’t exactly drowning in money. However preferable this fix might be, it doesn’t strike me as terribly likely.

So the question becomes whether Twitter could make modifications to the way its service works without becoming something else entirely.

The service recently announced that people will be able to apply to get their accounts verified, arguing for their public significance and submitting some form of identification to prove they’re who they say they are. Say Twitter made that process mandatory, so you couldn’t sign up unless you tied your account to a real-world identity. That might cut down on the number of people who hide behind pseudonyms and fake pictures to harass other people, but it might also drive away people who just want to use Twitter to make jokes and have conversations, or who don’t want to give a big corporation access to their identity documents.

Similarly, perhaps Twitter could reverse-polarize its current setup, making it so that users had to request permission to mention other users’ handles in Tweets, something like a friending process on Facebook. Of course, that wouldn’t prevent trolls from simply typing famous users’ names into Tweets, minus the “@” symbol, clogging up their search results, if not their mentions. And an opt-in process shifts the burden of moderation to users themselves.
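For illustration only, here is a rough sketch, in Python, of how such an opt-in mention filter might work. The names and data structures are invented for the example, and nothing here reflects Twitter’s actual systems; it simply makes the trade-off above concrete.

```python
import re

# Hypothetical data: each user's set of accounts they have approved to
# @-mention them, roughly like accepting a friend request on Facebook.
approved_mentioners = {
    "celebrity_account": {"trusted_fan", "colleague"},
}

MENTION_PATTERN = re.compile(r"@(\w+)")

def deliverable_mentions(author: str, tweet_text: str) -> list[str]:
    """Return the @-mentioned users who would actually see this tweet
    in their mentions under an opt-in scheme."""
    mentioned = MENTION_PATTERN.findall(tweet_text)
    return [
        user for user in mentioned
        if author in approved_mentioners.get(user, set())
    ]

# An unapproved account reaches no one's mentions...
print(deliverable_mentions("troll123", "@celebrity_account abusive message"))  # []

# ...but dropping the "@" sidesteps the filter entirely: the tweet still
# exists and still pollutes search results, as noted above.
print(deliverable_mentions("troll123", "celebrity_account abusive message"))   # []
```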

Celebrities with their own social media teams might be able to approve enough people to give fans a chance to express their praise and admiration in a famous person’s mentions, while filtering out harassers. But for anyone who has a reasonably large number of followers yet lacks the time, money or connection to an organization that will do that work for them, such a switch would make it extraordinarily burdensome to figure out whom to let in and whom to keep out.

I love the free flow of conversation on Twitter something like 98 percent of the time, and I’ve gotten pretty good at managing when my mentions temporarily fill up with garbage. (I never search for written mentions of my name.) A different system might save me from that 2 percent of trouble, but it would also make it much harder for me to throw out random requests for help accessing journal articles or suggestions for which episodes of a TV show are worth checking out.

And a solution that treats users as guilty until proven innocent, suspending their accounts automatically whenever someone reports them for harassment or abuse, would immediately become a weapon of that same harassment. Driving someone off a social media service by harassing them is one way to deny them access; letting politically or ideologically motivated reports of abuse trigger automatic suspensions would deny them access even more directly, by silencing them outright.

Maybe some tech wizard out there has another solution, something short of aggressive word and image filters that would end up blocking all sorts of things people might say, or visuals they might share, in discussions that aren’t intended to function as harassment and intimidation at all. But in the meantime, we don’t want the Twitter that we have, where anyone can say anything to anyone. And it’s not at all clear that we’d want the Twitter we’d get with significant changes, either.