Reputation in Online Communities

Oladotun ALUKO
5 min read · Nov 28, 2020

In today’s society, online communities have become indispensable; almost every one of us belongs to at least one. Leaving a review for a seller on an e-commerce site? Liking an Instagram post? Up-voting a post on Reddit? These are typical examples of the social interactions that happen in online communities. We use these online spaces to communicate across borders, connect volunteers, collaborate creatively, and much more. In this article, I will review some ways reputation can be managed in online spaces.

Social interactions within these communities extend well beyond human participants. They often include artificial agents that serve different purposes, for example moderating interactions between humans, curating information, or crawling through posts to generate intelligence.

While the potential of this type of interaction is massive, since it lets members with similar interests and goals share ideas in a safe environment, these communities can also be harmful. Taking a closer look at the state of communities today, we find that most fall short of the standards required for safe, active online interaction. We regularly encounter uncertain credibility and reliability, leading to fraud, internet bullying, fake news, and our personal information being sold to the highest bidder without our knowledge. These problems significantly undermine the trust that should serve as the basis for social interaction, and they ultimately call for an effective system of trust within online communities: we need a way to measure and manage the reputation of participants.

The concept of managing reputation is not new. Reputation systems are widely employed in e-commerce services, where buyers and sellers may give each other a rating or review after a completed transaction, encouraging good behavior in the long term. Other reputation systems are content-based, in the sense that users are evaluated based on their contributions; examples include Hacker News and Reddit, which are designed to let users share news, websites, and other content. These techniques adapt easily to use cases where products or services are being evaluated, but for social interactions, where individual actions must be evaluated, things turn out to be trickier. A reputation framework that allows participants to evaluate others, and hence decide whether they can be trusted, would help mitigate the mistrust surrounding social interactions online.

Previous research has identified two types of reputation systems for online communities: automated systems and peer-to-peer systems. Automated reputation systems collect reputation-related data about individual users (for example, the number of posts), analyze it, and display it understandably to the rest of the community. Peer-to-peer reputation systems, on the other hand, are based on peer ratings: the ratings are collected and processed according to some specification, and a final value is calculated to represent an individual’s reputation.
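The distinction between the two system types can be sketched in a few lines of Python. The function names, weights, and scoring rules below are illustrative assumptions, not a standard API:

```python
# A minimal sketch contrasting the two reputation system types.
# Function names and weights are illustrative assumptions.

def automated_reputation(posts: int, comments: int) -> float:
    """Automated system: derive a score from activity data the
    platform already collects (e.g. number of posts)."""
    return posts * 1.0 + comments * 0.5  # assumed weights

def peer_reputation(ratings: list[int]) -> float:
    """Peer-to-peer system: aggregate +1/-1 ratings given by other
    members into a single representative value."""
    return sum(ratings) / len(ratings) if ratings else 0.0

print(automated_reputation(posts=40, comments=20))  # 50.0
print(peer_reputation([1, 1, -1, 1]))               # 0.5
```

The key difference is the data source: the first function needs only data the platform observes on its own, while the second depends entirely on judgments supplied by peers.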

A reputation system’s network architecture determines how ratings and reputation scores are communicated between participants. Two broad categories have been identified. In a centralized reputation system, a central authority manages all the ratings, derives a reputation score for every participant, and makes all scores publicly available; this approach is prone to attack vectors targeted at the central authority. In a decentralized reputation system, ratings are distributed among participating nodes; a typical example is managing reputation through a blockchain of interconnected nodes. While the decentralized approach offers scalability and is potentially less prone to centralized attacks, it introduces the challenge of reaching consensus among the participating nodes on a synchronized state of reputation.
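The centralized category can be sketched as a single authority object that collects all ratings and publishes every participant’s score. The class and method names here are illustrative assumptions; the point is that one component holds everything, which is exactly what makes it a single point of failure:

```python
# Sketch of a centralized reputation system: one authority collects
# all ratings and derives/publishes every participant's score.
from collections import defaultdict

class CentralAuthority:
    def __init__(self):
        self._ratings = defaultdict(list)  # participant -> ratings list

    def submit_rating(self, ratee: str, value: int) -> None:
        self._ratings[ratee].append(value)

    def public_scores(self) -> dict:
        """Derive and expose a score for every rated participant."""
        return {p: sum(v) / len(v) for p, v in self._ratings.items()}

authority = CentralAuthority()  # the single point of failure
authority.submit_rating("alice", 1)
authority.submit_rating("alice", -1)
authority.submit_rating("bob", 1)
print(authority.public_scores())  # {'alice': 0.0, 'bob': 1.0}
```

In the decentralized category, each node would instead hold its own copy of the ratings, and the hard part becomes agreeing on which copy is correct.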

A particular method of achieving consensus in distributed systems is based on a concept known as “liquid democracy”. With this method, a member’s ability to influence the consensus is determined by the member’s social capital, also called reputation, rank, or karma. This social capital is earned by interacting with other members, taking into account the reputation of those members. The ratings in this approach can be divided into two types. The first is an endorsement rating, which one participant can grant to or revoke from another at any time. The second is a transactional rating, a history of transactions between participants with respect to a particular event. Ratings can also be explicit or implicit: explicit ratings carry a rank value that can be positive or negative, while implicit ratings take the form of comments or reviews authored by one participant about another. The algorithm that defines this approach can be written as:

R_j = SUM over i, t of ( R_i * V_ijt )

where V_ijt can either be an explicit rating, such as a positive or negative vote, or an implicit rating, such as a transfer amount spent or received; j is the node being rated, i is the node supplying the rating, and t is the time of the rating. In essence, it is an iterative algorithm that calculates reputation from some initial state over a given period.

Although reputation-based consensus is generally a more reliable consensus algorithm, some threats could potentially be exploited, including bad-mouthing attacks, on-off attacks, and newcomer (Sybil) attacks. A bad-mouthing attack is a dishonest rating given by one participant about another within the network. An on-off attack is when a participant behaves irregularly, alternating between good and bad behavior. A newcomer attack is when a participant with a low reputation starts over by creating a new identity in the network.
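The newcomer attack in particular is easy to see in a toy model. The data model below is an assumption made for illustration; it shows why a naive system that grants every new identity a default score lets a participant escape a bad history:

```python
# Illustrative sketch (assumed data model) of the newcomer attack:
# a fresh identity discards a bad rating history.

reputations = {"mallory": -0.8}  # score earned through bad behavior
NEWCOMER_SCORE = 0.0             # assumed default for new identities

def register(reputations, identity):
    """A naive system grants every new identity the default score."""
    reputations.setdefault(identity, NEWCOMER_SCORE)
    return reputations[identity]

fresh = register(reputations, "mallory_2")  # Mallory re-registers
print(fresh > reputations["mallory"])       # True: history wiped
```

Defenses typically make identities costly to create or start newcomers at the bottom of the reputation range, so that abandoning an identity yields no advantage.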

Research around this approach is still ongoing to validate it in the wild, so this is by no means the final word on the work.

References

  1. Researching a Reputation System for Online Social Communities
  2. A survey of trust and reputation systems for online service provision
  3. Measuring Quality, Reputation and Trust in Online Communities
  4. Reputation Systems: Online Communities
