
What’s wrong with online review platforms?

Nicos Savva identifies one way in which online review platforms are flawed

By Nicos Savva 19 December 2016


It is pretty much taken as read that more information is better. More data, the thinking goes, leads to better decision-making across the board – from policymakers to shoppers, company managers to house-hunters – which in turn generates greater public benefit. This attitude was prevalent even before the digital revolution and the internet triggered an explosion in the amount of data available to anyone with a computer or a mobile phone.

In fact, the digital revolution has led to a large amount of information being crowdsourced – a concept that is now so ubiquitous it’s easy to forget that it emerged only some ten years ago. Online platforms such as holiday booking and review site TripAdvisor, Waze – which allows users to report driving times in real time – and RateMDs, where people can review doctors, create a huge amount of knowledge that should, in principle, allow fellow community members to make better choices from among products or service providers.

If knowledge is power, then questioning this desire for full, unfettered and unadulterated information flows could be considered controversial – even maverick – especially when the phenomenon in question is crowdsourcing. Is this not the era of The Wisdom of Crowds, to borrow the title of James Surowiecki’s hugely influential 2004 book? Yet my research with Yiangos Papanastasiou (Berkeley) and Kostas Bimpikis (Stanford), which develops a mathematical model of information crowdsourcing, shows that granting full access to crowdsourced information may be problematic.


Join the conversation: the power of online communities


By definition, the relationship among members of these online communities is dynamic and constantly evolving. For example, a favourable hotel review on TripAdvisor will encourage others in the community to visit the establishment and write their own critique.

Technological idealists want the internet to be a constantly updated, information-rich environment where opinion and experience are democratized and everyone has their say. Some believe that the more varied the voices in these communities, the more accurate the collective view. But this Panglossian view of crowdsourcing is undermined by the process through which the information is generated.

Let’s go back to our hotel review. A guest stays in the hotel and posts a review based on their own experience. Imagine that the hotel is fairly decent and that the review is favourable. Those using the platform after the review has been posted will be encouraged to visit the establishment and, in turn, write their own favourable critique. In this way the hotel’s success becomes self-reinforcing.

One may say the hotel is rightly reaping the rewards of offering good service, which leads to satisfied customers who write complimentary remarks on the platform. The problem is that this outcome is inefficient for society, because the self-reinforcing nature of the hotel’s success means that the less-explored, but potentially superior, establishments get less attention. As a result, they have less chance, if any, to prove their worth to customers.
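This self-reinforcing dynamic can be illustrated with a toy simulation – a deliberately simplified sketch, not the mathematical model from our paper. The quality values, noise level and number of rounds below are all illustrative assumptions: each consumer simply picks the provider with the best average rating so far and then posts a noisy review of their own experience.

```python
import random

def simulate_greedy(qualities, rounds=500, seed=0):
    """Each consumer picks the provider with the highest average rating
    so far (full disclosure, self-interested choice) and posts a noisy
    review of their own experience. Returns the visit count per provider."""
    rng = random.Random(seed)
    # One noisy seed review per provider gets the platform started.
    reviews = [[q + rng.gauss(0, 1)] for q in qualities]
    for _ in range(rounds):
        averages = [sum(r) / len(r) for r in reviews]
        pick = averages.index(max(averages))  # everyone follows the ratings
        reviews[pick].append(qualities[pick] + rng.gauss(0, 1))
    return [len(r) for r in reviews]

# Provider 1 is genuinely better (0.6 vs 0.5), yet an unlucky early review
# can bury it: count how often the inferior provider ends up with more visits.
wins_for_inferior = 0
for s in range(50):
    visits = simulate_greedy([0.5, 0.6], seed=s)
    if visits[0] > visits[1]:
        wins_for_inferior += 1
```

Across the simulated markets, the inferior provider captures the bulk of the visits in a sizeable fraction of runs: once it gains an early ratings lead, the better provider is rarely sampled again, so the error is never corrected.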

From a utilitarian perspective, where the aim is for the platform to pool vast amounts of user-generated data to deliver the greatest possible public benefit, this is not ideal. This state of affairs is the result of a fundamental difference between the actions that maximize total benefit for society, or ‘social welfare’, and the individual incentives of the consumers using the platform.

From a social-welfare perspective, we would like consumers to write a positive review about a particular product or service for the benefit of the people reading those critiques. Although the individual consumer will usually opt for products or services that confer the greatest value to them, i.e. those with the best ratings, social welfare is often best served by choosing a completely different producer that has been getting little or no attention.

By doing so, consumers would provide fresh information rather than writing another rave review about a highly-rated product or service. This would benefit future consumers, widening the information at their disposal and diminishing the self-reinforcing nature of the platform’s rankings.


Breaking the self-reinforcing cycle of positive online reviews


How can we improve things? We have to remind ourselves that someone who visited and commented on a hotel did so because they were encouraged by a previous review. The answer lies in consciously restricting the flow of information – a ‘less-than-fully-informative’ policy – which is bound to annoy some. However, we show that this approach can provide much greater public benefit by maximizing consumer welfare in the long run.

Imagine the provision of information as a scale with an extreme position at either end. One extreme sees the platform disclose no information, so there are no reviews. The platform’s usefulness becomes somewhat limited, but such a policy certainly addresses the ‘winners-keep-winning’ issue. At the other end of the scale, we have full disclosure of all information. This is what many people using online platforms expect, but it leads to the problems of under-exploration that we have already identified.

Our research shows that full disclosure is more beneficial to the public than providing no information whatsoever. But both regimes come second to a carefully designed ‘less-than-fully-informative’ model, in which the platform deliberately gives vaguer, more nuanced recommendations – for example, by featuring less-explored providers more prominently at the expense of highly rated providers that have already been thoroughly explored.

This encourages – or, depending on your perspective, tricks – consumers into exploring lesser-known businesses, giving these enterprises a chance to show their worth to future consumers. The trick is for the platform not to overdo it: recommendations need to remain generally informative, so that customers continue to find it desirable and useful to follow them. It is vital that consumers keep using the platform, even though we must assume they suspect or know that its recommendations are sometimes deliberately misleading in order to encourage more active exploration.
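One crude way to sketch such a policy in code – again a toy illustration under assumed numbers, not the mechanism designed in our paper – is a recommendation rule that usually points consumers to the top-rated provider but occasionally steers them to the least-reviewed one instead:

```python
import random

def run(qualities, explore_prob, rounds=500, seed=0):
    """Simulate a platform that usually recommends the top-rated provider,
    but with probability `explore_prob` steers the consumer to the
    least-reviewed one (`explore_prob=0` is full disclosure).
    Returns the average quality consumers actually received per round."""
    rng = random.Random(seed)
    reviews = [[q + rng.gauss(0, 1)] for q in qualities]
    welfare = 0.0
    for _ in range(rounds):
        if rng.random() < explore_prob:
            # Nudge the consumer toward the least-explored provider.
            pick = min(range(len(reviews)), key=lambda i: len(reviews[i]))
        else:
            averages = [sum(r) / len(r) for r in reviews]
            pick = averages.index(max(averages))
        reviews[pick].append(qualities[pick] + rng.gauss(0, 1))
        welfare += qualities[pick]  # expected value enjoyed this round
    return welfare / rounds

# Average realized quality over many simulated markets:
full = sum(run([0.5, 0.6], 0.0, seed=s) for s in range(200)) / 200
nudged = sum(run([0.5, 0.6], 0.1, seed=s) for s in range(200)) / 200
```

In this toy setting the nudged regime delivers higher average welfare than full disclosure: the occasional forced visits mean the genuinely better provider is rarely locked out for good, while the 10% exploration rate is mild enough that recommendations stay broadly informative.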

In addition to obfuscating information, another option is to pay consumers to explore alternatives to the top-rated providers. This approach may be difficult to implement, but it has the potential to close the gap between the actions that serve the interests of society more broadly and those of individual consumers pursuing their own self-interest. It could even be presented as a ‘back to the future’ strategy: until recently, it was the norm for reviewers to be paid for their work. As we have seen, the model of crowdsourced information from unpaid reviewers is structurally flawed and delivers less than optimal public benefit. Once payment is offered for reviews of under-explored options, writers have a clear incentive to visit a broad and diverse range of providers. Nevertheless, our research shows that being less-than-fully-informative remains worthwhile even if the platform can actively pay for exploration – hiding information allows for exploration at a lower cost to the platform.


Exploring new ways to improve crowdsourcing


Over time, the quality of a service or product is likely to improve or deteriorate. Our research to date doesn’t explore how the information provided by the platform should reflect this fact, although one solution would be for the platform to delete old information, as it is likely to be out of date.

Another possible area of research is the relationship between the platform and the service or product provider. How does the information on platforms influence the level of quality that is subsequently delivered by the providers?

Competition is another issue. Our research assumes the platform has a monopoly in its subject area, but it’s unlikely that any website or online community would have the market to itself indefinitely. This is particularly true if the platform adopts our proposed ‘less-than-fully-informative’ regime: a competitor may decide that a full-information policy is more attractive to consumers than a regime suggesting they can’t be trusted with all the data at the platform’s disposal. If so, the ‘less-than-fully-informative’ platform may suffer in the short term. In the longer term, however, its quality could become clear in the richness and variety of the information on offer, while a full-information platform will produce content that is objectively inferior – something consumers will eventually spot.

Crowdsourcing has changed the way services and products are recommended, rated and reviewed. It pools a wealth of individual experiences in travel, hospitality, professional services, consumer goods and healthcare and makes it available to all. But there is a flaw in the crowdsourcing model that needs to be addressed: the information flow needs to be managed responsibly in order to reap the greatest benefit for society.
