For all the good social media brings—connection across borders and communities—it’s impossible to ignore just how embedded abuse and harassment are.
Teenagers; high-profile celebrities, politicians and activists; members of minority groups—the abuse people receive is as relentless as it is terrifying. According to the Pew Research Center, around four in ten Americans have personally experienced online harassment, and 62 percent of people consider it a major problem.
Software engineer Tracy Chou has a similar story. In 2013, she published a Medium post and a databank on women in engineering, which helped push for diversity data disclosures at tech companies. She later co-founded Project Include, a non-profit supporting tech startups on diversity and inclusion.
Throughout all of this, as she explained in a question-and-answer session on Reddit, she faced “constant/severe online harassment.”
“I’ve been stalked, threatened, mansplained and trolled by reply guys, and spammed with crude unwanted content,” she writes. “Now as founder and CEO of Block Party, I hope to help others who are in a similar situation.
“We want to put people back in control of their online experience with our tool to help filter through unwanted content.”
Her app works by connecting to social networks—just Twitter in the beta phase—and letting users choose who they want to hear from on their timeline and their @mentions, keeping everyone else out.
The app’s FAQ explains that accounts “more likely to send unwanted content” are muted and filtered into a ‘Lockout Folder’, accessible through the app, which users can open and review whenever they choose.
The folder is a failsafe for two reasons: it ensures nothing useful or thoughtful is permanently blocked, and it means real-world threats can still be identified and dealt with appropriately.
Crucially, this folder can also be checked by a trusted friend via ‘Helper View’, removing the need to personally sift through abuse for important updates or lost messages.
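Block Party hasn’t published its filtering code, but the allow-list-plus-lockout model described above can be sketched in a few lines of Python. The keyword heuristic, the field names and the `Filter` class here are illustrative assumptions for the sake of the example, not Block Party’s actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Mention:
    author: str
    text: str

@dataclass
class Filter:
    # Accounts the user has chosen to hear from (an illustrative allow-list).
    trusted: set = field(default_factory=set)
    # Muted content lands here instead of disappearing, so the user --
    # or a trusted helper -- can review it later.
    lockout_folder: list = field(default_factory=list)

    def looks_unwanted(self, mention: Mention) -> bool:
        # Stand-in heuristic; the real service uses far richer signals.
        return any(w in mention.text.lower() for w in ("idiot", "stupid"))

    def route(self, mention: Mention) -> bool:
        """Return True if the mention reaches the timeline."""
        if mention.author in self.trusted:
            return True
        if self.looks_unwanted(mention):
            self.lockout_folder.append(mention)  # filtered, not deleted
            return False
        return True

f = Filter(trusted={"friend"})
print(f.route(Mention("friend", "great post!")))  # trusted sender passes
print(f.route(Mention("troll", "you idiot")))     # routed to lockout folder
print(len(f.lockout_folder))
```

The key design point, mirrored from the article, is that `route` never discards anything: filtered mentions accumulate in the lockout folder, preserving both the failsafe and the ability for a helper to review them.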
Photo by Marvin Meyer on Unsplash
As with any application that limits what can be seen online, Block Party has faced concerns that it could stifle debate or reinforce echo chambers under the guise of making the internet safer.
Addressing this in the question-and-answer session, Chou wrote that the filter bubble “is real” and something that needs to be addressed, “but I think that’s more on the platform side [with regards to] what algorithms are deciding to show and give distribution to.”
“Sure, trolls, bots, harassers, etc. can still post shitty things, they have their ‘freedom of speech’, but you should have your freedom to not listen to them.”
She continued: “What we’re filtering out is harassment and useless/mean/rude commentary, not anything that contributes a thoughtful alternate viewpoint… Our hope with Block Party is that if we can filter down to only the most civil discourse, that actually creates the space for real discussions.”
She added that hidden content remains accessible via the Lockout Folder, which can be checked whenever desired. “The filters are heuristics and we do not use shared allow/deny lists, though users have been asking for being able to share lists.”
Another concern, raised by many in the Reddit session, was the app’s position on data and privacy: it asks for broad access to users’ accounts. The concern is understandable, and such requests are, sadly, par for the course among major social media apps.
Chou went into more detail in the Block Party FAQ, explaining that because the Twitter API bundles permissions, “the only way that Block Party can get access to mute and block functionality is to request the highest level of access, which comes with a long list of other permissions.”
“It’s unfortunately more than we’d like to ask for, but our core service depends on being able to mute and block through the API.”
Photo by Duy Pham on Unsplash
Looking forward, Chou mused on whether it is possible to ‘solve’ harassment online.
She said the answer would be two-pronged: mitigating the impact of harassment, and preventing it from happening in the first place.
“On platforms that are more free-flowing and open, like Twitter, literally anyone can mention you or tweet at you to get into your mentions/notifications.
“When they’re sending unwanted content your way, there’s no reason you should have to see it in real-time, at whatever point they happened to send it to try to bring you down.”
She also wrote that a structural flaw in how abuse is currently handled is that the recipient has to shoulder the full burden of dealing with it, while reports about content that falls short of direct abuse are deprioritised and ignored.
Stopping online abuse in the first place is, she says, harder to address. As with any difficult problem, it is important to understand why it is so commonplace: how easy it is to post abusive messages, how readily we forget that real people read them, and how much tech companies want us to post freely and quickly.
Chou adds that a culture of accountability will be key to stopping more harassment. As a final point, she writes that Block Party may itself help deter abuse: “Making it so that trolls don’t feel like they’ll definitely get through to you.
“Posting into the ether and being ignored is very demotivating, which is good in this case :).”
Robert Scott Lazar