Online harassment has existed since the explosion of social media, which gave people new ways to express themselves and turned the virtual world into a battleground. The nuisance has also evolved with time: there are now countless ways to intimidate people, and abuse on this scale cannot be stopped by a single algorithm. As MIT CSAIL's latest research highlights, even YouTube's algorithms and Instagram's safety tools have reportedly failed to remove offensive content completely. Blocking users or catching trigger words helps in part, but neither approach is foolproof.
Since technology alone has failed to improve the situation, why not enlist some human help? MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) has come up with a unique approach that lets your friends decide which messages land in your inbox and which should be removed. Named "Squadbox", this crowdsourcing tool helps harassment victims assemble a squad of friends who filter incoming messages and act as a support system during any cyberattack.
According to the team, people from many different backgrounds agreed that they rely on friends when they face abusive threats. As MIT professor David Karger explains, Squadbox extends that instinct of helping a friend in need. Once the keys to your inbox are handed to a dedicated group, Squadbox's customizable tools let the group know when an incoming email needs moderation and when one has already been checked.
The basic concept is simple to grasp. Suppose a journalist wants an email address for receiving news tips but hesitates because it would likely attract a stream of harassment; she can now create an account through Squadbox. When an email arrives, it first enters a moderation pipeline, where a moderator decides whether it is harassment or a legitimate message and, if legitimate, forwards it on to the user's inbox. In short, every email the user receives is first double-checked by the moderation squad.
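The hold-and-review flow described above can be sketched in a few lines. This is purely an illustration of the idea, with made-up names and structures, not Squadbox's actual code:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Email:
    sender: str
    subject: str
    body: str

@dataclass
class Inbox:
    messages: List[Email] = field(default_factory=list)

@dataclass
class ModerationQueue:
    pending: List[Email] = field(default_factory=list)

def receive(email: Email, queue: ModerationQueue) -> None:
    # Incoming mail never reaches the inbox directly; it is held
    # in the moderation queue for a trusted friend to review.
    queue.pending.append(email)

def moderate(queue: ModerationQueue, inbox: Inbox,
             is_harassment: Callable[[Email], bool]) -> Tuple[int, int]:
    # A moderator reviews each held message: harassing mail is
    # dropped, everything else is forwarded to the user's inbox.
    delivered, rejected = 0, 0
    for email in queue.pending:
        if is_harassment(email):
            rejected += 1
        else:
            inbox.messages.append(email)
            delivered += 1
    queue.pending.clear()
    return delivered, rejected
```

Here `is_harassment` stands in for the moderator's human judgment; in the sketch it is just a callback, which is the point of the system: that decision is made by a person, not an algorithm.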
Conventional automated spam filters already exist, but as noted earlier, they catch only part of the junk. Another way to keep harassment in check was to hand over an account's credentials entirely to people you trust. Squadbox, by contrast, could be a one-stop hybrid solution that blends human support with technology in a significant way, as Clifford Lampe, a professor of information at the University of Michigan, describes it.
Soon, the team plans to extend the Squadbox system to social media platforms. The crowdsourcing platform also lets users create "whitelists" and "blacklists" of senders whose emails are delivered directly to the inbox or rejected outright, without being moderated. A number of other options are available as well: users can deactivate and reactivate the system, read toxicity scores on messages, and respond to harassers. The system itself is an initiative to build an anti-harassment community around each user, creating a mass support team. The research will soon be presented at the ACM's CHI Conference on Human Factors in Computing Systems in Montreal, Canada.
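The whitelist/blacklist routing mentioned above amounts to a simple three-way decision before any moderator gets involved. A hypothetical sketch (sender lists and return labels are illustrative, not Squadbox's API):

```python
from typing import Set

def route(sender: str, whitelist: Set[str], blacklist: Set[str]) -> str:
    # Whitelisted senders bypass moderation entirely; blacklisted
    # senders are rejected outright; everyone else is held for a
    # human moderator to review.
    if sender in whitelist:
        return "deliver"
    if sender in blacklist:
        return "reject"
    return "moderate"
```

Only the third branch consumes a moderator's time, which keeps the squad's workload limited to senders the user has not already made a decision about.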