Could Nudges be the Solution to Online Bias?

As social media giants and other online products continue to grow and dominate our day-to-day lives, issues such as confirmation bias remain prominent – fuelling political polarisation and the widespread consumption of misinformation.

Companies such as Facebook, Google and Twitter compete for your attention with complex recommender systems and artificial intelligence (AI) that are highly effective at holding it – or calling for it when your phone is locked.

We are hooked on these platforms because their systems reliably customise our online experience, feeding us content we are likely to be satisfied with.

A concern with AI in its infancy is that it is not yet smart enough to identify hyperbole or fake news that users might perceive as desirable information. The convenience of being fed desirable content often churns out an undesirable by-product: confirmation bias.

Confirmation bias occurs when people look for and interpret information that aligns with their pre-existing beliefs – a psychological tendency that is capitalised upon by the attention economy.

Social media platforms know that people hate being wrong, and their systems are designed to make users feel right more often, in turn making an alternative opinion a unicorn in our newsfeeds.

Learning to identify biases is possible, but avoiding them requires the circulation of diverse opinions within social networks. However, the truth of a piece of information is often unimportant to a user if their subjective reading of it confirms their belief.

The sheer scale of information online makes it practically impossible for companies to assess all content in real time. Systems therefore need to be put in place that heighten users' awareness of potential biases.

A study published in Frontiers in Big Data [1] argues that “it is the responsibility of platforms to include signals as to the quality of a source or article within their algorithm.”

In this study, the researchers presented a possible solution to confirmation bias in the form of digital nudges, which subtly encourage users to adopt fact-checking habits online.

Nudging in this context means influencing users in ways that improve their online experience and decision-making. Two kinds of digital nudging strategies were assessed:

The first kind is nudging with presentation, which involves structuring information to give the user a balanced view of a subject.

“The primary aim of the nudge is to present an unbiased view of a subject, without necessarily forcing a user to embrace it. The secondary aim, of equal importance to the first, is to ensure that the sources presented are of a sufficient level of reputability: even if occasionally headlines are sensationalised, the underlying article will not be entirely fictitious or propagandistic,” the authors write.

Figure 1 shows a post from @dylan with links to two other posts below it. The link from @jordan offers a perspective similar to @dylan's, which @dylan is more likely to be drawn to. The link from @jake, on the other hand, presents a different perspective, which may nudge the user to consider an alternative view.

Figure 1. Nudge with presentation
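
To make the mechanism concrete, here is a minimal sketch of how such a presentation nudge might work. Everything in it – the stance scores, the reputability flags, the post structure – is an illustrative assumption, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    url: str
    stance: float    # assumed stance score on the topic, in [-1.0, 1.0]
    reputable: bool  # assumed output of some source-quality check

def presentation_nudge(anchor: Post, candidates: list[Post]) -> list[Post]:
    """Pick one similar and one contrasting reputable link to display
    beneath the anchor post, giving the user a more balanced view."""
    reputable = [p for p in candidates if p.reputable]
    # Similar view: the reputable post whose stance is closest to the anchor's.
    similar = min(reputable, key=lambda p: abs(p.stance - anchor.stance))
    # Contrasting view: the reputable post whose stance is furthest away.
    contrasting = max(reputable, key=lambda p: abs(p.stance - anchor.stance))
    return [similar, contrasting]

anchor = Post("@dylan", "https://example.com/original", 0.9, True)
candidates = [
    Post("@jordan", "https://example.com/similar", 0.8, True),
    Post("@jake", "https://example.com/contrasting", -0.7, True),
    Post("@spam", "https://example.com/junk", -0.9, False),
]
print(presentation_nudge(anchor, candidates))  # @jordan's link, then @jake's
```

The key design choice mirrors Figure 1: the contrasting link is shown alongside, not instead of, the similar one, so the user is invited rather than forced to step outside their bubble.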

The second kind is nudging with information, which involves providing information that raises the user's awareness.

For example, labels could nudge users toward reputable information sources and flag misleading ones, potentially combating bias. This idea expands on the well-known ‘verified badge’ – the little blue tick used by many social media platforms to authenticate public figures and brands.

The green flag beside ‘@jake’ in Figure 2 is a symbol of reputability. This nudge lends credibility to @jake and may encourage other users to consider his view.

Figure 2. Nudge with information
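
As a rough illustration of how such a label might be attached to a feed, consider the sketch below. The scores, thresholds and handles are all hypothetical; a real system would presumably draw on fact-checkers or a learned source-quality model:

```python
# Hypothetical reputability scores in [0, 1] for known sources.
SOURCE_SCORES = {"@jake": 0.92, "@jordan": 0.55, "@troll_farm": 0.08}

def reputability_label(author: str,
                       trusted: float = 0.8,
                       misleading: float = 0.2) -> str:
    """Map a source's score to the badge rendered beside its handle."""
    score = SOURCE_SCORES.get(author, 0.5)  # unknown sources score mid-range
    if score >= trusted:
        return "🟩"  # green flag: reputable source, as in Figure 2
    if score <= misleading:
        return "⚠️"  # warning label: flagged as potentially misleading
    return ""        # middling or unknown sources stay unlabelled

for handle in ("@jake", "@jordan", "@troll_farm"):
    print(reputability_label(handle), handle)
```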

The researchers found that nudging strategies can make users more aware of the trustworthiness of information sources. The results also showed potential for reducing users' acceptance of a single view of a topic.

Confirmation bias, of course, is riddled with nuance. Individual characteristics such as cultural values, religious beliefs and interests can all play a role in a user's reaction to opposing views.

A backfire effect can occur if nudge strategies are too aggressive. For example, exposure to the Twitter accounts of elite political figures has been found to be counterproductive in broadening the ideological horizons of opposing Twitter users [2].

The goal therefore should be to encourage consideration of other opinions by incorporating alternative (and reputable) sources into the recommender algorithms.
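
One way to picture that (a hypothetical re-ranking pass, not the paper's method) is a recommender that keeps its engagement-ordered feed but reserves every few slots for a reputable item from the other side of the user's estimated stance:

```python
def diversify_feed(ranked, user_stance, slot_every=4):
    """Re-rank an engagement-ordered feed so that every `slot_every`-th
    position holds a reputable item from an opposing viewpoint.

    `ranked` is a list of (item, stance, reputable) tuples; the stance
    scores in [-1, 1] and reputable flags are assumed inputs."""
    opposing = [r for r in ranked if r[2] and r[1] * user_stance < 0]
    rest = [r for r in ranked if r not in opposing]
    feed = []
    for position in range(1, len(ranked) + 1):
        if position % slot_every == 0 and opposing:
            feed.append(opposing.pop(0))  # reserved diversity slot
        elif rest:
            feed.append(rest.pop(0))      # keep the engagement ordering
        else:
            feed.append(opposing.pop(0))  # ran out of in-bubble items
    return feed
```

Kept gentle – one reserved slot in four rather than a wholesale swap – such a pass nudges rather than confronts, which matters given the backfire effect described above.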

Billions of users around the world are trapped in echo chambers that confirm their beliefs and herd groups into a tribalistic mentality. It is a recipe that has contributed to the manipulation of elections, genocides, conspiracy theories and pandemic misinformation.

Nudging is one of countless ideas with the potential to efficiently combat confirmation bias and other harmful phenomena online. Granted, solving the problem is no easy feat, but so far it does not appear to be a priority for big tech companies.

References

  1. Thornhill, C. et al., 2019. A Digital Nudge to Counter Confirmation Bias. Frontiers in Big Data, 2.
  2. Bail, C.A. et al., 2018. Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences of the United States of America, 115(37), pp.9216–9221.