Introduction
This explainer outlines what ‘problematic information’ is, why it spreads, the kinds of harm it can cause, and how you can evaluate claims before you share them. We also identify what governments, platforms, civil society, educators, and communities can do to strengthen our information environment. While we explain the terms and their meanings in the coming pages, we have not included ‘fake news’, as this is a term levelled at the media to collectively delegitimise them, rather than a form of problematic information that could cause societal harm.
Key takeaways
KEY INFORMATION
1 | Problematic information and types
Not all online information is created equal. A lot of it is factual, educational, or genuine opinion, but a significant portion is problematic information that can mislead or manipulate readers and viewers. Two terms often used in this context are misinformation and disinformation, which, while related, have distinct meanings:
Misinformation refers to information that is false or inaccurate but is not necessarily created or shared with the intent to deceive. In other words, the person sharing misinformation often believes it to be true. It could be a rumour, a mistaken claim, or an exaggeration. Misinformation might spread because someone didn’t verify a claim before sharing it, or they misunderstood a piece of news. An example of misinformation would be an untrue rumour on social media about a celebrity death that fans share widely, genuinely thinking it’s real, only for it to be debunked later.
Disinformation refers to false information that is deliberately created and disseminated with the intention to deceive or cause harm. Those who spread it may be motivated by political goals, financial gain, or a desire to sow chaos and confusion. An example would be a fabricated ‘news story’ claiming that a certain vaccine causes dangerous side effects, created by a bad actor who knows it is false.
Some other forms of problematic information:
A related term, malinformation, is less commonly used. Malinformation is information that may reflect reality or truth but is used out of context or manipulated to inflict harm. For instance, leaking someone’s private information to harm their reputation, or sharing a real photo from a past event but miscaptioning it to create a false narrative, are examples of malinformation. Here, the information itself contains a kernel of truth, but is deployed misleadingly or maliciously.
Conspiracy theories are another form of problematic information. They are elaborate, unproven stories that allege secret plots by powerful actors, often with little or no credible evidence, and tend to spread widely in online communities. Conspiracy theories often arise to explain complex events with a simple storyline that blames a shadowy culprit, or as a narrative stitched together from a series of unrelated events.
Some examples include the false belief that the Moon landing was faked, or the baseless QAnon theory claiming that a cabal of elites runs a child-trafficking ring. The various COVID-19 conspiracy theories that circulated during the pandemic, like the false theory that 5G transmissions were causing illness, serve as further examples.
The Emergence of GenAI: With public access to generative AI (GenAI) tools, visual misinformation has become more common. GenAI tools can produce hyper-realistic images, audio, and video that can make it look or sound like someone said or did something they never did. Some of this is intended to be humorous and entertaining. For example, in early 2023 an AI-generated image of Pope Francis wearing a designer puffer jacket went viral on social media. However, not everyone will recognise the fake, or understand the humour; even innocuous images like this have the potential to create harm. GenAI tools have dramatically lowered the cost and effort required to fabricate persuasive fakes, allowing bad actors to flood social media feeds with fabricated content.
Problematic information thrives especially when an information vacuum exists: when there is a high demand for information, but little reliable supply. In crisis situations, official announcements and advice often lag behind, creating an information vacuum which is filled by rumours and hearsay. In the early days of the COVID-19 crisis, for example, many conflicting rumours and explanations emerged, from unproven cures to wild claims about the virus, while experts and governments were cautious about giving advice that might later turn out to be wrong.
This simultaneous information overload and vacuum both overwhelmed and confused people, making it hard for them to assess which guidance to trust. When the spread of problematic information becomes uncontrollable, it may turn into an infodemic: an overabundance of information, accurate and otherwise, that hampers the response to the health risk.
While there are definitional differences and contestations between these terms, all forms of problematic information can lead to severely harmful consequences for individuals, communities, and society.
Increasingly, the focus in dealing with problematic information has shifted away from questions of intent and towards assessing harm. Rather than asking whether false information was posted and shared in good faith, the question becomes: was it harmless (like Moon landing conspiracy theories), or could it cause personal or material damage (like folk remedies for COVID-19)?
2 | Harms from problematic information
Problematic information can have serious real-life consequences across various facets of our lives. Here are some of the key risks posed by problematic information:
Public health risks: One of the clearest and best-documented harms of problematic information is in the domain of health. When people act on false health information, it can be life-threatening. A stark recent example comes from the COVID-19 pandemic: acting on misinformation, individuals consumed toxic medicines or delayed necessary treatment. False claims about vaccines have also contributed to overall vaccine hesitancy, leading to the resurgence of diseases that were once under control. Unfounded rumours that the measles, mumps, and rubella (MMR) vaccine causes autism have led people to shun vaccines; consequently, measles outbreaks have occurred in areas where the disease had been eliminated for years. The harm here is tangible in the form of hospitalisation, long-term complications, or even untimely death.
Threats to democracy and society: Problematic information has been weaponised by internal and external actors to influence election outcomes. During the 2016 U.S. presidential election, Russian disinformation campaigns on social media exacerbated political divisions: false stories and doctored images about candidates spread widely, potentially swaying some voters, or at least deepening general mistrust of politicians. During and after the 2020 U.S. election, conspiracy theories about election fraud gained traction among millions, despite the lack of any evidence. This culminated in the January 6 Capitol riot, where thousands attempted to overturn the legitimate election result. More generally, disinformation taints trust in democratic institutions: when people are inundated with narratives that ‘everything is rigged’ or that mainstream media ‘always lie’, the social contract needed for a functioning democracy erodes. When a society cannot even agree on what is real any more, the entire democratic system is at risk.
Violence and public safety: Problematic information can incite public disorder or violence. A chilling example occurred in India in 2017–2018, when WhatsApp rumours about child kidnappers led to mob lynchings of innocent people, showing how problematic information can translate into real-world acts of violence and tragedy. In other cases, false claims linking 5G to the coronavirus led some individuals in the United Kingdom and elsewhere to vandalise or set fire to telecommunications equipment, damaging key communications infrastructure.
Harm to vulnerable communities: Problematic information can ruin reputations and fuel discrimination. Online rumours or false accusations can lead to targeted harassment and attacks. False narratives often target specific groups, reinforcing prejudice or stigma. During the COVID-19 pandemic, Asian people worldwide faced a surge of hate incidents due to early media descriptions of the coronavirus as the ‘China virus’. False information about minority groups, for example, extremist disinformation painting a religious or ethnic group as dangerous, can incite hate crimes or justify oppression. A tragic illustration occurred in Myanmar, where disinformation and hate speech on Facebook targeting the Rohingya Muslim minority community contributed to public support for the violent ethnic persecution of that community in 2017.
Environmental harm: The planet itself is not immune to the effects of misinformation. For decades, false information aimed at casting doubt on climate science has circulated: at first claiming that global warming is a ‘hoax’ or not caused by humans, despite the overwhelming scientific consensus; more recently, sowing confusion about appropriate responses to the climate crisis by undermining public, political, and corporate support for renewable energies, or delaying action in the present by promising technological advancements in the future. This has led to public confusion at the societal level and policy delays at the institutional level. The harm caused by such climate change misinformation is immeasurable: it has contributed to decades of inaction on reducing greenhouse gas emissions, with the results of this obfuscation now manifesting in ways that are harmful to both humans and the environment.
3 | Why does problematic information spread?
Understanding the mechanisms by which problematic information spreads can help us guard against it. Several factors contribute to the rapid and widespread dissemination of problematic content online:
Psychological vulnerabilities: Problematic information often targets our emotions. Such content aims to make readers and viewers angry, fearful, or anxious, in an attempt to bypass our critical thinking. Psychologically, when emotions run high, our ability to analyse information calmly and rationally declines as we enter reactive mode. Actors who spread problematic information know this and frequently craft messages to trigger outrage or fear. For example, false rumours about child kidnappers in the WhatsApp case tapped directly into primal fears about child safety, causing panic.
Additionally, confirmation bias and selective exposure play a large role, as we tend to deliberately seek out and believe information that aligns with our pre-existing beliefs or worldviews and distrust information that contradicts them. So, people will more readily source, accept, and share misinformation that ‘feels true’ to them, even if objectively false, and they’ll be sceptical of fact-checks if those go against their political or personal biases because it causes a jarring cognitive dissonance.
This means misinformation can find especially fertile ground in polarised environments, as each side is primed to believe the worst claims about the other. Mis/disinformation that paints the other side negatively spreads easily, and disinformation becomes a participatory game of supporting one’s own group and attacking the ‘other side’.
Role of digital platforms: The architecture and business models of social media and other digital platforms support the spread of problematic information. These platforms run on an attention economy: their algorithms are designed to maximise engagement in the form of clicks, likes, shares, and comments to keep us online for longer as consumers. Unfortunately, false or misleading content often outperforms sober facts in grabbing attention. Sensational claims, provocative rumours, and emotionally charged posts get more reactions and thus are elevated by algorithms. Additionally, platform features like one-click sharing and reposting encourage the impulsive sharing of information without verification. Trending topics and viral spread can make a fringe falsehood suddenly seem ubiquitous. The result is that digital platforms often act as amplifiers for misinformation, spreading it faster and further than ever before.
Artificial amplification: While human sharing is the primary driver of the spread of problematic information, artificial amplification through automated bots and coordinated troll operations also plays a key role. Bot accounts, which are software scripts acting like genuine users, can like, repost, or share a post thousands of times in a short timespan, tricking algorithms into thinking that a topic is trending organically. This can boost the visibility of a piece of problematic information.
Similarly, troll farms, which are teams of people managing many accounts, can systematically push certain disinformation narratives. During the 2020 U.S. election campaign, for example, researchers noted Eastern European ‘troll farms’ building large Facebook audiences in certain demographic groups and then injecting propaganda into those communities. These bot and troll activities can also create a bandwagon effect: if you see that a post shows up repeatedly in your feed, with (apparently) substantial engagement from other accounts, you might assume that it’s credible or important, and even engage with it yourself, without realising that it has been artificially amplified.
High-profile superspreaders: Not all misinformation spreads from the grassroots: often the greatest amplification comes from influential figures with large social media followings. When celebrities, high-profile pundits, journalists, or political leaders endorse a false claim, it spreads rapidly through their networks. People tend to trust messages coming from sources they perceive as authoritative or relatable. During the pandemic, for example, a handful of anti-vaccine activists and influencers leveraged their credibility with certain communities to spread the bulk of anti-vax falsehoods.
When a famous musician shares a conspiracy theory, or a politician repeats a debunked story on TV, millions are exposed in an instant. Superspreaders essentially act as accelerants and legitimisers of false information, as they provide it with the oxygen of amplification. Combatting misinformation therefore isn’t just about correcting facts: it’s also about holding prominent voices to account for what they choose to amplify, noting their outsize impact in information environments.
Bad faith practices from institutional actors: Those who should be providing us with reliable information often end up amplifying misinformation. On one hand, poor practices in journalism, such as running sensational stories without proper fact-checking, or giving equal weight to both robust evidence and fringe theories (known as ‘both-sides’ coverage or false balance), lend visibility and credibility to falsehoods. Misleading claims are amplified when media chase clicks with exaggerated headlines, or fail to debunk dubious statements. For instance, early in the COVID-19 pandemic some outlets uncritically reported on unproven remedies, like hydroxychloroquine, inadvertently promoting false hope and producing mass confusion. Low-quality journalism, or opinion masquerading as journalism, can include amplifying false claims without debunking them, and even actively linking to problematic content or embedding it into social media posts and videos. This makes it much easier for ordinary audiences to descend into a spiral of problematic information.
On the other hand, some politicians and public figures cynically use misinformation as a tool. They spread lies or conspiracy theories to rally their base, confuse voters, distract from scandals, or delegitimise opponents.
A dramatic example is Brazil’s former president Jair Bolsonaro, who repeatedly pushed bogus claims about COVID-19, from downplaying the disease as ‘a little cold’ to falsely linking vaccines to AIDS, as part of his political stance. These falsehoods from the top misled the public and hampered Brazil’s pandemic response, with severe consequences for the pandemic death toll and longer-term public health. Additionally, when major leaders propagate problematic information, it not only reaches vast audiences directly, but also gives permission for other actors to do the same. This creates an environment where truth itself is politicised, and truthfulness in political rhetoric is treated as optional.
This breakdown in the public information ecosystem – driven by a feedback loop between unscrupulous politicians, sensationalist media, and partisan activists – creates conducive conditions for problematic information to spread widely, for falsehoods to gain legitimacy, and for objective truth to come second to ideological alignment.
4 | How to navigate information online
Given the scale of the mis/disinformation challenge, how can everyday users tell what is credible and what isn’t? Professional fact-checkers and researchers have developed effective strategies to evaluate information. You don’t need to be an expert in every subject; you just need to know how to investigate sources and claims. In this section, we’ll elaborate on one such technique that you can use to verify information before trusting or sharing it. Think of it as a checklist you can apply whenever you encounter a suspicious news article, an unfamiliar website, or an implausible claim on social media.
The framework to remember is SIFT, which stands for Stop, Investigate the source, Find better coverage, and Trace claims to the original context. Developed by digital literacy expert Mike Caulfield, SIFT condenses the core moves of fact-checking into four steps, which are detailed below.
Combatting visual misinformation
The SIFT approach isn’t just useful for text content: it helps to reduce the spread of visual misinformation too. Advances in image processing and GenAI tools have made it possible to create remarkably realistic artificial images and video content, including so-called deepfakes. Alternatively, images may be presented out of context: for example, using photos from past riots to claim that a current protest has turned violent. Such false visual images are a powerful vehicle for misinformation. A compelling image or video clip can convince people of a false narrative more easily than text alone due to our psychological vulnerabilities, so learning to critically evaluate visual media is crucial.
Reverse image search: One of the most powerful techniques for verifying an image is a reverse image search. This allows you to find where else an image appears on the Internet, which can reveal its origin and context. For example, if you see a striking photo on social media, you can save the image or copy its URL and use Google Images or TinEye to look it up. These services will show you other instances of that image. You might discover that the image is old and unrelated, or has been edited to misrepresent its context. Reverse image searches can also identify fake profiles or scams: for example, a reverse image search on a social media profile picture might reveal that the photo is actually a stock image, belongs to someone else, or is used for a whole raft of accounts – all possible signs of impersonator or bot accounts. Reverse image search is quick, and can save you from falling for a hoax.
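For technically minded readers, the same check can be automated. Below is a minimal sketch using the Google Cloud Vision API’s web detection feature, which returns pages on the web where a matching image appears; it assumes you have a Google Cloud account, the google-cloud-vision package installed, and credentials configured (the file name is a placeholder). Services like TinEye offer comparable APIs.

```python
# Minimal sketch: programmatic reverse image search via the
# Google Cloud Vision API's web detection feature.
# Assumes `pip install google-cloud-vision` and that
# GOOGLE_APPLICATION_CREDENTIALS points to a service-account key.
from google.cloud import vision

def find_matching_pages(image_path: str) -> None:
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())

    # Web detection returns, among other things, pages on the
    # open web that contain full or partial matches of the image.
    response = client.web_detection(image=image)
    detection = response.web_detection

    print("Pages with matching images:")
    for page in detection.pages_with_matching_images:
        print(f"  {page.url}")

    # Best-guess labels hint at the image's likely original subject.
    for label in detection.best_guess_labels:
        print(f"Best guess: {label.label}")

if __name__ == "__main__":
    find_matching_pages("suspicious_photo.jpg")  # placeholder file name
```

If the returned pages are years old, or describe a different event in a different country, the image is probably being reused out of context.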
Stop: This is a reminder to pause before you react or share. When you first encounter a piece of content that triggers a strong reaction or that you plan to rely on, stop scrolling and ask yourself, ‘Do I know this source? Does this claim sound credible or is it surprising?’ Essentially, resist the urge to impulsively share or believe the content until you’ve done some checks. If you feel an emotional spike, like anger or excitement, this marks a key moment to stop and take a breath: don’t let the content hijack your response and your emotions.
Investigate the source: This means figuring out where the information comes from and whether the source is trustworthy. Instead of reading a webpage or post in isolation, open a new tab and search for the source or author. Ask questions like ‘What is this website or account? Is it a well-known news outlet, an independent blog, a satire site, or a partisan organisation?’ Fact-checkers call this lateral reading: you leave the content and read across other sources about it. One valuable and credible fact-checking website is Snopes, where you can check whether a claim has already been investigated and verified or debunked.
Relevant clues of credibility for sources can include a clear ‘About Us’ page listing editorial policies and staff names, a history of accurate reporting, citation of sources, and associations with reputable institutions. Conversely, red flags can include a lack of transparency, a name or URL mimicking a real news outlet (e.g., ‘abcnewss.com’), or a string of clickbait headlines. For social media platforms like Instagram or YouTube, investigating the account or channel is key. For instance, is this the real WHO Instagram account, or just someone using ‘WHO’ in the name? A quick profile check or search can reveal impersonators.
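As an aside for technically minded readers, one of these red flags, a lookalike domain registered only recently, can be checked programmatically. This minimal sketch uses the third-party python-whois package (an assumption: install it with pip install python-whois) to estimate a domain’s age; established outlets typically have registrations going back decades, while scam lookalikes are often only weeks old. Treat it as a heuristic aid, not a verdict.

```python
# Minimal sketch: check how recently a domain was registered.
# Lookalike domains (e.g. 'abcnewss.com') are often brand new,
# while established outlets were registered decades ago.
# Assumes the third-party package: pip install python-whois
from datetime import datetime, timezone

import whois  # provided by the python-whois package

def domain_age_days(domain: str) -> int:
    record = whois.whois(domain)
    created = record.creation_date
    # Some registrars return a list of dates; take the earliest.
    if isinstance(created, list):
        created = min(created)
    # Normalise to an aware datetime for the subtraction below.
    if created.tzinfo is None:
        created = created.replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - created).days

if __name__ == "__main__":
    age = domain_age_days("example.com")
    warning = " - newly registered, treat with caution!" if age < 180 else ""
    print(f"Registered {age} days ago{warning}")
```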
Find better coverage: Sometimes you encounter a claim or story, and you’re not sure about the source or the details. Instead of spending too much time on that one source, a great approach is to see if the information is reported elsewhere by more credible sources. If the claim is true and significant, chances are a reputable news organisation or expert has covered it. For example, if you see a viral social media post that claims that ‘NASA has announced an asteroid will hit Earth next month!’, before panicking, search for that news on Google News. If NASA has really said so, then major news organisations like ABC, BBC, or Reuters would certainly be covering it. If you find no mainstream coverage, that’s a sign that the claim might be bogus.
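Again for technically minded readers: part of this step can be automated by searching existing fact-checks. The sketch below queries the Google Fact Check Tools API (a free service, though you must generate your own API key; the key and query here are placeholders) for published fact-checks matching a claim.

```python
# Minimal sketch: search published fact-checks for a claim using
# the Google Fact Check Tools API (claims:search endpoint).
# Requires a free API key from the Google Cloud console; the key
# passed below is a placeholder.
import requests

API_URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def search_fact_checks(query: str, api_key: str) -> None:
    params = {"query": query, "key": api_key, "languageCode": "en"}
    response = requests.get(API_URL, params=params, timeout=10)
    response.raise_for_status()

    # Each claim may carry reviews from several fact-checkers.
    for claim in response.json().get("claims", []):
        print(f"Claim: {claim.get('text')}")
        for review in claim.get("claimReview", []):
            publisher = review.get("publisher", {}).get("name", "unknown")
            print(f"  {publisher}: {review.get('textualRating')}")
            print(f"  {review.get('url')}")

if __name__ == "__main__":
    search_fact_checks("NASA asteroid will hit Earth", "YOUR_API_KEY")
```

If reputable fact-checkers have already rated the claim ‘False’, you have your answer without trawling through search results.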
Trace claims, quotes, and media to the original context: A lot of misinformation involves taking things out of context, be it a quote, an image, or research findings. This step involves tracking down the original source of a claim to see the full context. For instance, if a social media post claims, ‘According to a Harvard study, eating chocolate prolongs life by 10 years’, try to find that actual study, if it even exists. A good place to look is Google Scholar. Maybe you’ll discover the study was real but said something much more nuanced: for example, that eating chocolate correlated with a small improvement in one health marker, rather than being a magical life extender.
5 | A real-life scenario using the SIFT method
As Tropical Cyclone Alfred was tracking toward Australia’s east coast, a slick, fast-cut TikTok montage went viral. The video claimed to show 220 km/h winds tearing through Brisbane on 6 March 2025, urging viewers to share the post to warn family and friends. Millions watched it within hours. ABC News Verify later demonstrated that the footage was stitched together from old, overseas extreme weather clips, and published a step-by-step debunk on 7 March 2025.
We can apply the SIFT method in this instance:
S – Stop: Ask yourself: who uploaded this? Is it designed to shock the viewer or prompt an emotional response?
I – Investigate the source: Click through to the username and the profile’s posts. Does the account have a history of credible posts, or of false information?
F – Find better coverage: Run a quick web search for information on wind speeds during Cyclone Alfred, prioritising official weather sources like the Bureau of Meteorology.
T – Trace claims to the original: Pause the TikTok video, screenshot a frame, and then perform a reverse image search.
6 | What can societies do?
Curbing the harms of misinformation is a collective challenge. The SIFT method is a valuable tool for individuals, but comprehensive solutions require whole-of-society approaches.
Pressure platforms to fix their problems: A large share of misinformation spreads on major social media and messaging platforms, so those companies play a key role in any solution. Advocacy groups, voters, the media, and others can push for platform reforms, including through government legislation, that change the incentive structures and technical systems currently amplifying viral falsehoods. This may involve adjusting algorithms to down-rank demonstrably false or incendiary content, instead of boosting it. Importantly, platforms need to increase transparency and share data on how their content goes viral, so independent researchers and regulators can hold them accountable.
Platforms can also implement simple fixes like labelling content that is false or misleading. But recent moves away from professional fact-checkers and towards community-style fact-checking on Twitter/X and Meta in the United States put the onus on platform users reaching consensus, which is not feasible as a comprehensive solution. Another potential avenue of change is legislative: bringing in regulations that support investigative journalism and fact-checking. One such example is the recent European Media Freedom Act (2024), which protects media freedom and provides support to journalists.
Hold superspreaders and amplifiers to account: Most problematic information only goes viral if it is endorsed, spread, and amplified by individuals, groups, and organisations who already have large audiences. These influential amplifiers must be held accountable for their role in spreading such content, even if they initially did so in good faith. Content take-downs, account suspensions, and similar measures have been shown to substantially reduce the spread of mis/disinformation; they must be part of the regulatory toolkit, but administered fairly and transparently. Actors who spread problematic content for commercial gain, such as influencers, clickbait farms, and tabloid media, can also be addressed through financial penalties: platforms and regulators can remove the economic incentives for mis/disinformation by demonetising accounts known for spreading misinformation, and cutting off advertising revenue for commercial operations. Finally, where such amplifiers operate as news media, journalistic standards and ethics must be enforced much more effectively.
Support quality journalism and fact-checking: Support for quality journalism and independent fact-checking investigations is important. Governments and philanthropic organisations can provide funding or grants for investigative journalism, especially to local news outlets, which have been badly impacted by changes to the media ecosystem. Funding public service media and upholding strong journalistic standards helps ensure that there are widely available sources of information that people can trust. Many fact-checking organisations are small non-profits that tirelessly verify viral claims and publish the truth, and so need sustainable support to amplify their work and integrate it into media reporting and platform content moderation processes. News outlets can prominently feature fact-check segments and explainer pieces that correct popular myths. When quality information is timely, accessible, and visible, it has a better chance of competing with misinformation, as positive results from some ‘pre-bunking’ initiatives suggest. It’s also important to call out and reform bad journalism practices, such as clickbait headlines, unvetted reporting, and ‘both-sides-ism’, that give falsehoods a platform. Ethical media behaviour needs to be supported, potentially through updated codes of conduct or opportunities for reader-driven feedback.
Improve civic and media literacy: One important and long-term solution is to increase overall media and digital literacy in the population, giving citizens the critical thinking tools they need to better navigate the information landscape. Media literacy and critical thinking should be a standard part of the school curriculum from primary school. This goes far beyond a one-off class: it means integrating lessons on how to evaluate sources of information, recognise bias, and verify content into all subjects. These media literacy efforts should be complemented by workshops for adults and seniors, coordinated through public libraries, community centres, and non-profit organisations, to help them avoid online scams and hoaxes and empower them to navigate the complex information environment. Public awareness campaigns can also help. For example, governments or non-profits might run campaigns illustrating the harms of misinformation, with the goal of supporting a culture where verifying information is second nature. In Australia, the Australian Media Literacy Alliance (AMLA) is one such organisation facilitating public-facing media literacy programs across the country.
We will never be able to eliminate mis/disinformation completely, but we can significantly mitigate its impact. As democratic societies, we need to invest, socially and economically, in supporting and improving our information-sharing environments and empowering good-faith actors within them. Our response should span stronger journalism, better algorithms, greater consequences for superspreaders, and a more discerning public. All these steps will help foster a flourishing society.
References and Further Reading
AAP FactCheck. (2024). Anthony Albanese video manipulated in AI scam. AAP.
Australian Media Literacy Alliance (AMLA). (2025, August 4). Homepage. Media Literacy.
Avram, M., Micallef, N., Patil, S., & Menczer, F. (2020). Exposure to social engagement metrics increases vulnerability to misinformation. Harvard Kennedy School Misinformation Review.
BBC. (2018, July 20). India lynchings: WhatsApp sets new rules after mob killings. BBC.
Dickinson, R., Makowski, D., van Marwijk, H., & Ford, E. (2024). Exploring the Role of News Outlets in the Rise of a Conspiracy Theory: Hydroxychloroquine in the Early Days of COVID-19. COVID, 4(12), Article 12.
Ecker, U. K. H., Lewandowsky, S., Cook, J., Schmid, P., Fazio, L. K., Brashier, N., Kendeou, P., Vraga, E. K., & Amazeen, M. A. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology, 1(1), 13–29.
eClinicalMedicine. (2025). US measles outbreak: Causes, consequences and the path forward. The Lancet, 81.
Gabis, L. V., Attia, O. L., Goldman, M., Barak, N., Tefera, P., Shefer, S., Shaham, M., & Lerman-Sagie, T. (2022). The myth of vaccination and autism spectrum. European Journal of Paediatric Neurology, 36, 151–158.
Hao, K. (2021). Troll farms reached 140 million Americans a month on Facebook before 2020 election, internal report shows. MIT Technology Review.
Human Rights Watch. (2020, May 12). Covid-19 Fueling Anti-Asian Racism and Xenophobia Worldwide.
Kamp, A., Sharples, R., Vergani, M., & Denson, N. (2024). Asian Australians’ Experiences and Reporting of Racism During the COVID-19 Pandemic. Journal of Intercultural Studies, 45(3), 452–472.
Lamb, W. F., Mattioli, G., Levi, S., Roberts, J. T., Capstick, S., Creutzig, F., Minx, J. C., Müller-Hansen, F., Culhane, T., & Steinberger, J. K. (2020). Discourses of climate delay. Global Sustainability, 3, e17.
Latre, F. J. P. (2024, March 7). New European law aims to protect media outlets against disinformation. The Conversation.
Linvill, D. L., & Warren, P. L. (2020). Troll Factories: Manufacturing Specialized Disinformation on Twitter. Political Communication, 37(4), 447–467.
Marineau, S. (2020, September 29). Fact check US: What is the impact of Russian interference in the US presidential election? The Conversation.
Martino, M., & Workman, M. (2025, March 6). Before you believe that cyclone video, ask yourself these questions. ABC News.
Online Harms White Paper. (2019, April 8). GOV.UK.
Painter, J., Ettinger, J., Holmes, D., Loy, L., Pinto, J., Richardson, L., Thomas-Walters, L., Vowles, K., & Wetts, R. (2023). Climate delay discourses present in global mainstream television coverage of the IPCC’s 2021 report. Communications Earth & Environment, 4(1), 118.
Phillips, T. (2020, March 23). Brazil’s Jair Bolsonaro says coronavirus crisis is a media trick. The Guardian.
Phillips, W. (2017). The Oxygen of Amplification.
Roberts, J. T., Milani, C. R. S., Jacquet, J., & Downie, C. (Eds.). (2025). Climate obstruction: A global assessment. Oxford University Press.
Ruggeri, A. (2024, May 10). The “Sift” strategy: A four-step method for spotting misinformation. BBC.
Schraer, R., & Goodman, J. (2021, October 6). Ivermectin: How false science created a Covid “miracle” drug. BBC.
Shao, C., Ciampaglia, G. L., Varol, O., Yang, K.-C., Flammini, A., & Menczer, F. (2018). The spread of low-credibility content by social bots. Nature Communications, 9(1), 4787.
Snopes.com. (n.d.). Fact checking and investigative reporting.
Starbird, K., DiResta, R., & DeButts, M. (2023). Influence and Improvisation: Participatory Disinformation during the 2020 US Election. Social Media + Society, 9(2), 20563051231177943.
Tufekci, Z. (2013). “Not This One”: Social Movements, the Attention Economy, and Microcelebrity Networked Activism. American Behavioral Scientist, 57(7), 848–870.
Viala-Gaudefroy, J. (2024, March 3). Why do millions of Americans believe the 2020 presidential election was ‘stolen’ from Donald Trump? The Conversation.
Wardle, C. (2023, May 8). Misunderstanding Misinformation. Issues in Science and Technology.
Waterson, J., & Hern, A. (2020, April 6). At least 20 UK phone masts vandalised over false 5G coronavirus claims. The Guardian.
World Health Organization (WHO). (n.d.). Infodemic.
Additional resources:
Here are some sources that were not cited in this guide, but may provide useful further insights:
Bruns, A., Harrington, S., & Hurcombe, E. (2020). ‘Corona? 5G? or both?’: the dynamics of COVID-19/5G conspiracy theories on Facebook. Media International Australia, 177(1), 12–29.
Bruns, A., Hurcombe, E., & Harrington, S. (2022). Covering conspiracy: Approaches to reporting the COVID/5G conspiracy theory. Digital Journalism, 10(6), 930–951.
Harrington, S., Bruns, A., Matich, P., Angus, D., Hurcombe, E., & Jude, N. (2024). ‘Big Lies’: Understanding the role of political actors and mainstream journalists in the spread of disinformation. Media International Australia, 1329878X241291317.
Montaña-Niño, S., Vziatysheva, V., Dehghan, E., Badola, A., Zhu, G., Vinhas, O., Riedlinger, M., & Glazunova, S. (2024). Fact-checkers on the fringe: Investigating methods and practices associated with contested areas of fact-checking. Media and Communication, 12.
Riedlinger, M., Watt, N., & Montaña-Niño, S. (2025, January 8). Meta is abandoning fact checking – this doesn’t bode well for the fight against misinformation. The Conversation.
Watt, N., & Riedlinger, M. (2025, March 25). Why voting in a fact-checking void should worry you. Crikey.
Find useful educational resources and a game at: https://crankyuncle.com/
