Ranking algorithms are at the core of many online services, from search engines and recommender systems to news feeds in social media. However, these algorithms shape how information spreads and have been implicated in political polarisation and, in extreme cases, political upheaval.
Last year, Amazon discarded its resume-ranking algorithm after it was found not to be gender-neutral.
The frictions arising from the increased algorithmic management of information, coupled with inadequate knowledge of how these systems operate, have made algorithmic decision-making a contentious topic.
The problem is compounded when key stakeholders pay little attention to how biased algorithms can be.
Recommending a toothbrush on an e-commerce website may carry little risk, but assuming the same level of efficacy will hold for more sensitive decisions, such as ranking job candidates, can have far worse consequences.
Matthew Effect And Cascading Of Information
The Matthew effect, better known as the 'rich-get-richer' dynamic, emerges when items are ranked by popularity: ranking by popularity creates a self-reinforcing dynamic in which popular items become ever more popular. A related 'few-get-richer' effect appears in settings where there are a few distinct classes of items (for example, left-leaning versus right-leaning news sources) and items are ranked based on their popularity.
Recently, researchers at Universitat Pompeu Fabra released a paper addressing exactly this.
The ‘few-get-richer’ effect adds to research on the ‘rich-get-richer’ dynamics by showing that popularity-based rankings do not only create ‘noise’ in the ranking but can also lead to a systematic ranking bias: when there are two distinct classes of items, items from the smaller class become better ranked than similar items from the larger class.
The few-get-richer effect emerges in settings characterised by two design features. The first is the ranking of items by popularity (i.e., items with more clicks are ranked higher). The second is a partition of the available items into two (or more) distinct classes.
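The two design features can be illustrated with a small toy simulation. Everything below is an illustrative assumption rather than the paper's model: 30% of simulated users prefer the minority class, the chance of looking at a rank slot decays geometrically with its position, and users click only items of their preferred class.

```python
import random

def simulate(n_minority=3, n_majority=17, n_users=500, seed=0):
    """Toy popularity-ranking simulation with two item classes.

    All parameters are illustrative assumptions, not taken from the
    paper: 30% of users prefer the minority class, and the chance of
    looking at a slot decays geometrically with its rank position.
    """
    rng = random.Random(seed)
    is_minority = [True] * n_minority + [False] * n_majority
    clicks = [0] * (n_minority + n_majority)

    for _ in range(n_users):
        prefers_minority = rng.random() < 0.3
        # Design feature 1: rank items by popularity (random tie-break).
        order = sorted(range(len(clicks)),
                       key=lambda i: (-clicks[i], rng.random()))
        for pos, i in enumerate(order):
            looks = rng.random() < 0.7 ** pos  # positional attention decay
            # Design feature 2: users click only items of their own class.
            if looks and is_minority[i] == prefers_minority:
                clicks[i] += 1
                break

    total = sum(clicks) or 1
    minority_share = sum(c for c, m in zip(clicks, is_minority) if m) / total
    return clicks, minority_share
```

Because minority-preferring users concentrate their clicks on only a few items, those items can climb the popularity ranking faster than any single majority item, which is the intuition behind the effect.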
Model To Determine Limitations of Recommender Systems
As part of their methodology, the researchers recruited 786 participants on Amazon Mechanical Turk, randomly assigned to one of eight conditions. Participants first answered a question about their type: "Are you more of a cat person or a dog person?"
There were three possible choices: "I am a cat person", "I am neither a cat person nor a dog person", or "I am a dog person." On the next screen, they were shown 20 buttons with the instruction: "Please click on a photo from the following list of photos of cats and dogs and rate it according to liking."
The buttons were displayed in a vertical list ranked in terms of popularity. Participants could initially see 3 to 4 buttons and had to scroll down to access the other buttons.
After clicking a button, participants had to give a rating of 1 to 5 stars. The rating task served as a pretext for asking participants to select an item according to their preferences; the collected ratings are not discussed in the paper. Participants were paid $0.15 for their time.
The Consequences Of Popularity
The results showed that 30% of participants identified as cat persons and 55% as dog persons.
Ranking the options by popularity had a systematic effect on the share of traffic attracted by options that started at the bottom of the screen.
The few-get-richer effect also has implications for the design of recommender systems. The learning efficiency of these systems is impeded by the presentation bias problem: items shown to the user can get clicks whereas items not shown get no clicks.
The recommender system thus cannot learn about the relevance of the latter items. A seemingly natural remedy is to add more items from the minority class: this purportedly increases the amount of exploration (clicks on the minority class), and thus the system's learning opportunities, at the cost of slightly hurting the user experience.
The few-get-richer effect suggests precisely the opposite: adding more of those items might reduce, rather than increase, the total amount of exploration.
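That counter-intuitive claim can be probed with a small toy model. The parameters below are illustrative assumptions, not the paper's: 30% of simulated users prefer the minority class, attention at a rank slot decays geometrically with position, and users click only items matching their preferred class. "Exploration" is measured as the total number of minority-class clicks.

```python
import random

def exploration(n_minority, n_total=20, n_users=1000, seed=1):
    """Total clicks on minority-class items under popularity ranking.

    A toy model, not the paper's: 30% of simulated users prefer the
    minority class, attention at a rank slot decays geometrically,
    and users click only items matching their preferred class.
    """
    rng = random.Random(seed)
    is_min = [True] * n_minority + [False] * (n_total - n_minority)
    clicks = [0] * n_total
    for _ in range(n_users):
        wants_min = rng.random() < 0.3
        order = sorted(range(n_total),
                       key=lambda i: (-clicks[i], rng.random()))
        for pos, i in enumerate(order):
            if rng.random() < 0.7 ** pos and is_min[i] == wants_min:
                clicks[i] += 1
                break
    return sum(c for c, m in zip(clicks, is_min) if m)
```

Comparing, say, `exploration(2)` with `exploration(10)` across several seeds shows whether concentrating minority clicks on a few items lifts them high enough to be seen; the direction and size of the effect depend on the attention and preference parameters assumed here.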
Developing suitable empirical approaches to hold algorithms accountable and to study their social impact has gained prominence in this era of information overload, and will continue to do so. As AI extends its reach into key decision-making, replacing human intuition, it will need rigorous scrutiny to ensure its results are free of systematic bias.