  • Are algorithms keeping us from trusting our own judgement?
    Tim Samuel (2020) ©

Pick this one! The science of algorithm overdependence

Algorithms are part of our online lives, impacting everything from shopping decisions to viewing habits. But do we put too much faith in them? Canvas8 spoke to Dr. Sachin Banker, assistant professor at the University of Utah, to understand whether people are overdependent on algorithms.

Location Global

Algorithms, tracking technology, and targeted advertising were hotly debated in the 2010s, culminating in the implementation of GDPR across the EU and the CCPA in California. These policies aimed to put power back in the hands of consumers, yet the business models of many big tech firms focus on tracking users’ data and offering up recommendations. Indeed, Google and Facebook make an estimated 83% and 99% of their revenue, respectively, from selling ads based on user data. Amazon has also embedded the power of algorithms into its shopping platform, which may be why 89% of American adults say they’re more likely to choose Amazon over other e-commerce sites.

For brands, algorithms are so effective because people – consciously or unconsciously – tend to trust the recommendations they serve up, even over human recommendations. Yet experts have found that consumers have developed an overdependence on algorithms to the detriment of their decision-making and overall wellbeing. It’s a topic that Dr. Sachin Banker explores in a paper he co-authored with Dr. Salil Khetani, titled ‘Algorithm Overdependence: How the Use of Algorithmic Recommendation Systems Can Increase Risks to Consumer Well-Being’. They note that “counter to prior findings, this research indicates that consumers frequently depend too much on algorithm-generated recommendations, posing potential harms to their own wellbeing and leading them to play a role in propagating systemic biases that can influence other users.”

The controversy around algorithms has only escalated in light of the COVID-19 pandemic. People have been shopping online considerably more than before the pandemic, which has left them more exposed to the influences of algorithms. To mitigate the risk of overdependence, Dr. Banker suggests that there should be education initiatives to help consumers learn about algorithms, while brands should be encouraged to be transparent with their use of algorithms. Canvas8 spoke to Dr. Banker to understand the degree to which people are influenced by algorithms and learn more about the risks of being overdependent on them.


Why is this topic important to understand?
I got interested in the topic because I started noticing that algorithms were slowly starting to take over the world. They started showing up in pretty much every context where you interact with businesses online. It seemed like if there were problems with how people are interacting with those algorithms, they could be magnified to a huge degree because they’re present in every online interaction. And it seems like most consumers just have no idea that a lot of their interactions are being influenced by some of these potentially biased algorithms. They simply trust the search results that they’re served with. For example, if you’re using a search engine like Google, the average person doesn’t realise that a lot of those search results are customised based on their past search behaviour, and can also be manipulated by firms (such as with searches for TurboTax free filing). For the most part, it seems like people are unaware of when these kinds of interactions with biased algorithms are taking place. Consumers often don’t realise that the set of offerings that show up when using shopping sites, for example, can frequently favour higher-priced or lower-quality items.

Part of what drove this research is that a lot of the academic research was focusing on counterintuitive effects, suggesting that people don’t trust algorithms very much, which didn’t seem to capture what was intuitively going on in the marketplace. Instead, we believed that people were relying more and more on these algorithms, even when the algorithms were biased. What we wanted to see was whether people were relying on and trusting these algorithms so much that they would make mistakes and make poorer decisions by relying too much on some of the algorithms that they interacted with.

People are getting used to recommendations from AI services
Keira Burton (2020) ©

How did you go about conducting your study?
The general design for most of the experiments was to have participants make decisions around product choices; in those interactions, they were recommended some options by an algorithm. Some of the recommended options were what we call non-dominated options – options that are better than another choice option – while some were dominated options, where the price was higher or the quality was significantly lower. We varied whether the algorithm gave them recommendations that were strictly worse or strictly better than the other option. We wanted to see whether people would notice and change their decisions based on what the algorithm recommended to them. What we found was that people were very susceptible to these algorithmic recommendations. When the algorithm recommended dominated options, participants were significantly more likely to choose them.

For example, participants were presented with different options for a battery charging pack, which varied on quantity, number of charges, types of ports, and other features. The context that we had in mind when we were designing these studies was shopping on an online site like Amazon for things like electronics, where there are oftentimes a lot of different criteria that you’re trying to balance between, and Amazon will automatically give you recommendations when you search for those types of products. We wanted to look at those common kinds of shopping situations, where these recommendation algorithms are being implemented pretty widely.
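The dominated/non-dominated distinction the study relies on can be sketched in code. Assuming a simplified product with just two attributes – price (lower is better) and quality (higher is better) – an option is dominated when some other option is at least as good on every attribute and strictly better on at least one. The product values below are illustrative, not taken from the paper:

```python
# Sketch of the dominance relation behind the study's choice sets.
# Illustrative two-attribute products: price (lower is better) and
# quality (higher is better).

def dominates(a, b):
    """True if option `a` dominates option `b`: at least as good on
    every attribute, strictly better on at least one."""
    at_least_as_good = a["price"] <= b["price"] and a["quality"] >= b["quality"]
    strictly_better = a["price"] < b["price"] or a["quality"] > b["quality"]
    return at_least_as_good and strictly_better

def dominated_options(options):
    """Return the options that some other option in the set dominates."""
    return [b for b in options
            if any(dominates(a, b) for a in options if a is not b)]

chargers = [
    {"name": "A", "price": 50, "quality": 8},
    {"name": "B", "price": 100, "quality": 8},  # same quality, double the price
    {"name": "C", "price": 60, "quality": 9},
]

print([o["name"] for o in dominated_options(chargers)])  # → ['B']
```

Here option B is dominated by A (identical quality at double the price), mirroring the kind of clearly inferior recommendation the experiments tested, while A and C involve a genuine price–quality trade-off and are both non-dominated.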

People’s trust in algorithms can lead to buying an inferior option
Sam Lion (2020) ©

What were your key findings?
We found these really strong effects even when it was extremely obvious that the algorithm was giving people poor recommendations. For example, in one of our studies, we gave people choices between headphones where some of the options were worse and some were strictly better. When we allowed people to make those choices by themselves without any recommendation, the large majority of people would choose the significantly better option. Then, if the algorithm recommended a strictly worse option, the majority of people switched their choice to that worse option. What was surprising to us was that even though the large majority of people identified the significantly better option, they switched their choice to something significantly worse just because an algorithm recommended it to them.

What I think is interesting about it is this happened when we were showing people options where it was extremely obvious that one option was better than the other – for example, the price was $100 versus $50. Our designs generally involved very clearly dominated recommendations (such as 25% price surcharges) that could be easily verified by consumers, and we continued to observe a robust effect. Thus, we think that in the actual marketplace – where it can be less clear when one option is better than the other because you might not have full information about the attributes or quality (but the seller does) – this overdependence issue could be much more widespread. While consumers can verify whether recommendation algorithms are biased in pushing identical higher-priced items, it’s not easy for online shoppers to verify whether they’re being recommended lower-quality goods (that don’t have published quality attributes, for example).

Finally, our findings also provided some insight into why this overdependence on algorithms occurs. Our studies suggest that this is not being driven by consumers making mistakes (such as by not paying attention or fully processing the choices). Rather, consumers seem to trust that algorithms know more about the product domain than they do themselves – thus, as we observed, they often favour the algorithm’s recommendation over their own choice. People who were less confident and knowledgeable about headphones were more likely to succumb to biased recommendations. But if we disrupt this belief that recommendation algorithms have greater domain expertise, by getting consumers to be more certain about their preferences, or getting people to question the recommendations they are presented with, consumers rely less on the algorithms. Because trust is easily lost, and difficult to regain, firms ought to be aware of how their recommendation algorithms operate, such that the tools are optimised to assist rather than exploit the consumers they serve.

Algorithms can be a helpful way for people to combat choice fatigue
Charlotte May (2020) ©

Insights and opportunities

People are sceptical of the power of tech’s algorithms
Rising scepticism towards big tech means that people are questioning their relationship with the technology they use. One contentious issue that has come to light is the very business model on which social media platforms are founded – targeted advertising. Indeed, this topic has even been debated on the floor of Congress, with some politicians arguing that the practice of using personal data for targeted advertising should be banned. As a result, people are paying more attention to algorithms that serve up products and brands related to their recent browsing habits – or even their conversations, with 60% of people in the UK expressing concern that their mobile devices are listening in on them. Unsurprisingly, search engines that prioritise users’ privacy, like DuckDuckGo and Qwant, have been cropping up to meet the needs of these privacy-conscious consumers. Meanwhile, Twitter has long been rumoured to be working towards a decentralised social media network that would supposedly give users more control over the algorithms that surface content.

People are influenced by the ‘word-of-machine effect’
One of the factors influencing people’s decision-making when it comes to purchases is what has been dubbed the ‘word-of-machine effect’. This concept stems from the belief that artificial intelligence is more proficient than humans in giving advice about purchasing decisions when people are focused on the practical and functional aspects of a product. This is often not the case, but the implicit belief impacts people’s behaviour. Indeed, research shows that 67% of people are more likely to choose an AI-recommended product when focused on the product’s practical attributes. Stitch Fix has found success by pairing its AI with human stylists, promoting the functional benefits of its algorithm while the stylists are focused on the more experiential elements of their service. What’s more, it’s also made this process more transparent for consumers with a web page dedicated to how their algorithms serve up product offerings.

Algorithmic bias isn’t going away
As artificially intelligent systems become embedded in daily life – from shopping and job searching to policing and everything in between – their faults are increasingly brought to light. While human biases are well-documented, new biases within AI and machine learning models continue to be exposed, with research emerging on how exposure to these biases can impact people and their identity development. Start-ups have been created to combat algorithmic biases; Zeekit, for example, works with fashion brands to digitally dress diverse models to counteract Google’s search algorithm, which shows primarily homogeneous models. But there’s a particular challenge in this space when it comes to the influence on teenagers, of whom nearly half (45%) describe themselves as being online ‘almost constantly’. Algorithms on YouTube, for instance, have been blamed for creating ‘radicalisation pipelines’ that could lead people – particularly younger audiences – towards more extreme content.

The pandemic has accelerated automated decision-making
While the automation of certain jobs and administrative processes was already well underway before the pandemic, there has been a considerable acceleration as many systems moved online and people looked to offload unnecessary tasks and decisions. People have taken on more responsibilities during the pandemic – juggling home-working and home-schooling duties – which is leading to many people feeling cognitively overloaded. As a result, people are increasingly looking for some amount of decision-making to be taken off their plate. Netflix has recognised this desire for cognitive offloading and is launching a ‘shuffle play’ feature by summer 2021, which uses an algorithm to pick shows for users instead of relying on people to find new shows themselves. It builds on the platform’s already successful recommendation system, through which 80% of the shows watched on Netflix are discovered.

Featured Experts

Dr. Sachin Banker

Dr. Sachin Banker is assistant professor of marketing at the University of Utah’s David Eccles School of Business. Previously, he was a postdoctoral research associate at Princeton University and earned his PhD from MIT.


Rachel Ousley is Associate Insight Director at Canvas8, which specialises in behavioural insights and consumer research. A San Francisco native, she worked in communications before getting her Masters in Social Cognition from UCL. Outside of work, she loves live music, learning about true crime, and drinking good wine.



  • Trust an AI? The science of the ‘word-of-machine’ effect

    Algorithms have become central to discovering new music and products online – but are machine-led recommendations always welcome? Canvas8 spoke to Chiara Longoni, an assistant professor at Boston University, to understand how people’s goals impact their willingness to trust AI.

  • Logically: fighting the infodemic with AI

    The rapid growth of an online ‘infodemic’ – where fake news and conspiracy theories abound – means it has become difficult to sort fact from fiction. Logically is using AI technology and fact-checkers to help both private companies and the general public combat the spread of misinformation.

  • Are algorithms radicalizing boomers?

    With QAnon and conspiracy theories impacting mainstream discourse, tech companies are having to take greater control of what’s published on their platforms. Increasingly active on social media, American Boomers are caught up in the misinformation wars – but what draws them to fake news?

  • Instagram pods: influencers organise against algorithms

    When Instagram announced its new algorithm that would prioritise popular content in feeds, over 340,000 people signed a petition against it. Now, influencers are forming private ‘pods’ to support each other’s posts in defence against the change. But why is there such concern over online snaps?