Interpreting language is one of the most complex and subtle things people do. Tone of voice, situation, personal history and multiple layers of context all play significant roles, and two people who hear the same toxic remark may respond to it in completely different ways. Human language simply does not lend itself to the strict rules of interpretation that computers rely on. Even so, algorithms are increasingly used to police and sanitise public discussion online: to identify abuse on social media, filter abusive comments on social channels, and moderate comment sections on news websites. Let's look at how.
Well-Known Use Cases
For example, when Google’s Perspective research project was announced, prospective users were warned about its limitations: fully automated moderation was not recommended at the time. The suggested approach was instead to let human moderators decide what to review.
- The result was that it was easy to find loopholes that let toxic language pass through Alphabet’s toxic-comment detector, which is why human moderation was still required
- When an assortment of phrases was tested, the machine-learning algorithms proved to be no match for the creativity of human insults
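To make the loophole concrete, here is a minimal sketch, with a hypothetical blocklist, of why exact-match rules are so easy to evade: a trivially obfuscated spelling sails straight past the filter. This is an illustration of the general weakness, not the actual Perspective model.

```python
# Naive keyword-based toxicity detector (hypothetical word list).
# Illustrates how simple obfuscation evades exact-match rules.
TOXIC_WORDS = {"idiot", "stupid", "moron"}  # hypothetical blocklist

def is_toxic(comment: str) -> bool:
    """Flag a comment if any token matches the blocklist exactly."""
    tokens = comment.lower().split()
    return any(token.strip(".,!?") in TOXIC_WORDS for token in tokens)

print(is_toxic("you are an idiot"))   # True  - caught
print(is_toxic("you are an id1ot"))   # False - evades the exact match
```

Real systems use learned embeddings rather than word lists, but the same failure mode reappears there: inputs slightly outside the training distribution score as harmless.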
Another example is Twitter, which applies additional tests of the kind used by machine-learning researchers and practitioners.
- A sample of phrases was listed in increasing order of the toxicity scores Perspective assigned to them
- The rules were created in large part through automation: a crowd of people was shown sample comments and their opinions on those comments were collected
- Twitter then assigned scores to new comments based on their similarity to the stored comments and their ratings
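The last step above, scoring a new comment by its similarity to already-rated ones, can be sketched as follows. This is an assumption-laden toy, not Twitter's actual system: the rated comments are invented, and the similarity measure is plain bag-of-words cosine similarity.

```python
# Score a new comment as a similarity-weighted average of the
# toxicity ratings of previously crowd-rated comments.
from collections import Counter
from math import sqrt

RATED = [  # hypothetical crowd-rated comments: (text, toxicity in 0..1)
    ("you are a genius", 0.0),
    ("you are an idiot", 0.9),
    ("what a terrible take", 0.6),
]

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def score(comment: str) -> float:
    """Similarity-weighted average of the stored ratings."""
    vec = Counter(comment.lower().split())
    sims = [(cosine(vec, Counter(t.lower().split())), r) for t, r in RATED]
    total = sum(s for s, _ in sims)
    return sum(s * r for s, r in sims) / total if total else 0.0
```

With this sketch, `score("you are an idiot")` lands well above `score("you are a genius")`, because the former is near-identical to a highly rated toxic example in the store.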
When it comes to Facebook, it keeps a historical record of one’s personal engagement with posts from friends and brand pages, and its News Feed algorithm simply predicts what you want to see based on those past interactions.
- The algorithm surfaces similar posts at the top of an individual’s News Feed, from the same people and pages and from those with relevant posts or profiles, without checking for bias or negative elements in those stories or comments
Instagram does a reasonably good job here since it reverted to a more personalised, time-based feed.
- It uses a relevancy-and-recency algorithm that reorders the content in the feed based on the user’s relationship with the person posting and the post’s timeliness, which at least shields civilised users from unregulated waves of horrendous toxic comments
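A relevancy-and-recency reordering like the one described above can be sketched as below. The weights, the affinity scores and the half-life are all invented for illustration; Instagram's real ranking is proprietary and far richer.

```python
# Toy feed ranking: score = (affinity with author) * (recency decay),
# then sort the feed by that score, highest first.
import time

def rank_feed(posts, affinity, now=None, half_life_hours=6.0):
    """posts: dicts with 'author' and 'posted_at' (unix seconds).
    affinity: author -> interaction strength in [0, 1] (assumed given)."""
    now = now if now is not None else time.time()

    def score(post):
        age_hours = (now - post["posted_at"]) / 3600
        recency = 0.5 ** (age_hours / half_life_hours)  # exponential decay
        return affinity.get(post["author"], 0.1) * recency

    return sorted(posts, key=score, reverse=True)

now = 1_000_000
posts = [
    {"author": "stranger", "posted_at": now - 48 * 3600},
    {"author": "friend", "posted_at": now - 3600},
]
ranked = rank_feed(posts, {"friend": 0.9, "stranger": 0.2}, now=now)
# A close friend's recent post outranks a stranger's two-day-old one.
```

Note that nothing in this scoring inspects the post's content, which is exactly the limitation the section describes: ordering by relationship and time says nothing about toxicity.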
Limited Use Of Algorithms
Text analytics algorithms used by major software products convert open-ended text into more conventional types of data, such as discrete categories or numeric scores.
- These algorithms power effective online search, helping to find documents related to a topic of interest
- Applications like e-discovery increase productivity for legal teams reviewing large quantities of documents, usually for litigation purposes
- In warranty claim investigation, text analysis algorithms help manufacturers identify product flaws early and enable corrective action
- Algorithms have also proven useful for targeted advertising, which uses text from the content users read or create to present relevant ads
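The text-to-structured-data conversion underlying these applications can be sketched with the warranty-claim case. The categories, keywords and severity weights below are hypothetical, chosen only to show the shape of the output: a category label plus a numeric score that downstream systems can aggregate.

```python
# Turn open-ended warranty-claim text into (category, severity score).
# All keyword lists and weights are invented for illustration.
CATEGORY_KEYWORDS = {
    "battery": {"battery", "charge", "power"},
    "display": {"screen", "display", "pixel"},
}
SEVERITY_WORDS = {"fire": 1.0, "smoke": 0.8, "flicker": 0.3, "scratch": 0.1}

def analyze_claim(text: str):
    """Map free text to a category label and a numeric severity."""
    tokens = set(text.lower().split())
    category = next(
        (c for c, kw in CATEGORY_KEYWORDS.items() if tokens & kw), "other"
    )
    severity = max((SEVERITY_WORDS.get(t, 0.0) for t in tokens), default=0.0)
    return category, severity

analyze_claim("battery started to smoke")   # ("battery", 0.8)
analyze_claim("deep scratch on the screen") # ("display", 0.1)
```

Once claims are reduced to categories and scores like this, a manufacturer can count, trend and threshold them, which is what makes early flaw detection tractable.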
Even though algorithms are helpful to a large extent, when it comes to social media they fail to parse meaning. The algorithms being developed need a unified approach, especially when every platform’s centre of focus is the same user base. Social media platforms need to develop and incorporate more advanced algorithms that can gauge a user’s perception and perspective and feed content based on their preferences.