Privacy-focused search engines return more health misinformation, researchers claim

A new study overseen by the Oxford Internet Institute (OII) claims that privacy-preserving search engines such as DuckDuckGo return more health misinformation, such as anti-vaccine sites, than Google does.

The study, published in Frontiers in Medicine, analysed the first 30 webpages returned for the search term "vaccines autism" to gauge the volume of anti-vax information that came up. It found that alternative search engines (DuckDuckGo, Ecosia, Qwant, Swisscows, and Mojeek) and other commercial engines (Bing, Yahoo) returned between 10 and 53 per cent anti-vaccine pages, while Google returned 0 per cent.

Google has attracted growing criticism in recent years over the vast amount of personal data it hoovers up. This has fuelled the rise of privacy-focused rivals such as DuckDuckGo, which have gained a vocal support base in a relatively short period of time.

DuckDuckGo expressly avoids personalising users' search results in order to negate the "filter bubble" effect. It collates search results from more than 400 sources, but none from Google. Its website indicates that traffic has risen continually since 2013, reaching more than 55 million searches in a single day this month. However, Google still dominates massively, accounting for more than 90 per cent of the search market.

The authors of the study, which was overseen by the OII's Professor Luciano Floridi and included contributions from research students across Europe, argue that their findings indicate a trade-off between a search engine's privacy protections and the quality of the information it returns. But it's not clear that different approaches to privacy actually explain the differences in the respective engines' search results.

Google's search results are determined by a number of different algorithms that take into account factors including the "words of your query, relevance and usability of pages, expertise of sources and your location and settings", according to its site.

Google proactively rates and ranks pages based on quality. A Google document that provides guidelines to its search quality evaluators says that pages that "potentially spread hate, cause harm, or misinform or deceive users" should receive the "Lowest" rating. This can include "demonstrably inaccurate content" and "debunked or unsubstantiated conspiracy theories".

Google's algorithmic tweaks are notorious among website operators for causing sudden dips in traffic. There have even been allegations that Google maintains blacklists of both right-wing and left-wing sites, though the company has denied this.

The new research was carried out as part of student projects and wasn't explicitly funded, although the OII has received funding from Google for other work this year. In March 2019, Floridi was also appointed to Google's short-lived Advanced Technology External Advisory Council, having previously advised the company on the "right to be forgotten". In 2015, he was described as the company's "in-house philosopher".

The report argues that the greater volume of anti-vax content turned up by the other search engines is an indictment of their search algorithms, and that mechanisms should be developed "to test search engines from the perspective of information quality [...] before they can be deemed trustworthy providers of public health information".

However, this betrays a somewhat twisted understanding of search engines, which are not considered trusted providers of information, either legally or in terms of user expectations. (The raft of illegal and otherwise vile or offensive content surfaced by internet search engines means that if they were liable for the content that appeared on them, they'd all be shut down tomorrow.) The purpose of search engines is to index content on the internet. They perform the role of displaying and, to differing degrees, curating the available content on the web.

The World Health Organisation listed vaccine hesitancy as one of the top 10 threats to global health in 2019. However, it's not clear that attempting to expunge or hide anti-vax information from the web would affect these attitudes. It's extremely hard to isolate the effect of online misinformation on behaviour, primarily because people come into contact with a range of external stimuli, all of which have an incremental effect on their eventual beliefs and actions.

Covid-19 has provided a useful test case for examining this hypothesis, and so far the evidence that online misinformation has a serious and widespread effect on health outcomes is inconclusive. For example, a recent study published in the American Journal of Tropical Medicine and Hygiene noted that online Covid-19 misinformation had been linked to thousands of deaths, but the sources it referred to all concerned people in Iran drinking toxic alcohol in the misguided belief that it would cure the virus.

Iran is very different from countries in the West in terms of both its internet ecosystem, which is subject to a high degree of censorship, and its health education. A World Health Organisation study published in 2019 found that 46 per cent of Iranian adults had inadequate health literacy. A BBC article notes that the number of deaths in Iran attributed to Covid-19 misinformation is itself uncertain, because alcohol is banned in the country and bootleg moonshine is often contaminated.

Floridi's study didn't itself investigate material harms that have resulted primarily from online misinformation, but it does highlight a couple of examples. It mentions a case in China in which a cancer patient died after following an alternative therapy they discovered online. The report notes that, in response, the Chinese authorities issued new rules requiring search engines to provide "objective, fair, and authoritative" results, appearing somewhat strangely to endorse the country's censorious approach to the web.

However, most of the scaremongering about the impact of fake news or misinformation on health behaviours is based on anecdotes and one-off cases rather than rigorous academic research. In fact, a working paper by Ciara M. Greene and Gillian Murphy of Ireland's University College Dublin and University College Cork found that casual exposure to a fake news story was unlikely to have significant effects on future behaviour, concluding that "at present, there is no empirical data that supports the assumption that fake news has a grave impact on health".

Greene told the Nieman Lab that the authors suspect "real-world behavioural effects will mostly emerge in contexts where individuals seek out many stories all advocating the same position, and which are congenial to the individual's existing views; anti-vax or climate change denial networks would be a good example of this".

Indeed, studies indicate that people who hold anti-vaccination beliefs are very likely to be plugged into a conspiratorial worldview which holds that the populace is being lied to by mainstream politicians, institutions and media. A 2016 study found that new adoptees are already predisposed to the beliefs through government distrust and general paranoia.

As such, there's not much good evidence that hiding or removing misinformation from the internet actually dampens such beliefs. In some cases, removing or demoting this kind of content can even feed the paranoid worldview, in which the individual believes that companies such as Google are colluding with the government to restrict certain information not because it's bogus, but because it's dangerously true.

Studies have found that those with less education (as well as older and more conservative people) are more likely to share misinformation. But many factors can come into play. A 2015 study published in PLOS One found that educational level and thinking style did not predict vaccination rejection, but that psychosocial factors, such as a preference for alternative medicine and the endorsement of spirituality as a source of knowledge, did lead to vaccine-negative attitudes.

The OII study argues that this presents a "double jeopardy", in that those who hold anti-authoritarian views and an interest in alternative medicine are more likely to use alternative search providers because of privacy concerns. Therefore, the report argues, while the alternative providers might reach a proportionally smaller audience, they are reaching one that is already more receptive to anti-vaccine information, and therefore more vulnerable to its effects.

However, these are also the very people most predisposed to believe that search results are demoted because the information is true. An emphasis on education and digital literacy might equip people to deal with misinformation better than demoting search results would. Better still, an analysis of why people are prone to mistrusting governments and experts could inform more successful interventions.

DuckDuckGo didn't respond to a request for comment.
