How can individuals practice critical thinking and effectively evaluate the credibility of sources in an age where information abounds but is not always accurate or truthful? Project CO-INFORM applied co-creation methods to develop verification tools with and for stakeholders such as journalists, policymakers, and citizens, to better prepare for situations in which the distinction between fact and fiction is not always evident.
Misinformation poses a significant threat to social cohesion and the stability of communities, undermining the foundations of trust and informed decision-making that are crucial for a healthy, functioning democracy. When false or misleading information spreads, whether deliberately or inadvertently, it distorts public perception and skews the understanding of critical issues, leading to misinformed opinions and choices. The consequences can be far-reaching, from eroding public trust in institutions and the media to inciting social unrest and polarizing communities. Misinformation can also impede effective public health responses, as seen with vaccine misinformation, and can influence political processes, potentially swaying elections on the basis of falsehoods.
CO-INFORM, active between 2018 and 2021, arrived at an opportune time to address mis- and disinformation, as falsehoods spread about events such as the political aftermath of the Syrian refugee crisis of 2015, and as social media became the bread and butter of people’s communication and information habits. “Countries like Greece, Austria, Germany, and Sweden had a kind of first wave of migration-related misinformation, which was an inspiration for finding new ways of dealing with it in a cross-societal way,” says Mattias Svahn, former coordinator of Project CO-INFORM. Based at Stockholm University during the project, he now works at the Swedish Defence Research Agency. “When CO-INFORM started, we were researching misinformation related to the refugee crisis, but of course that developed into misinformation related to the pandemic.”
Such a development further pushed the project team, composed of social scientists, software developers, journalists, and fact-checkers from six countries, to find solutions by involving stakeholders from all facets of society in a far-reaching and integrated way. “This collaborative approach is pivotal in creating an ecosystem where information is not just consumed, but scrutinised and understood, fostering a society that is not only informed but resilient to the waves of misinformation,” says Svahn.
This was an innovative approach when the project started: until then, misinformation had been treated as a research field somewhat separate from other cross-cutting challenges related to security, social services, and commercialisation. “The project was well anchored for its time as a bridge from the early research on misinformation as something specific to an issue for all of society.”
The project sought to predict the credibility of sources by modelling the signals that suggest whether a particular claim is accurate. Unlike accuracy, which requires human fact-checkers to assess whether a claim is true using sufficient evidence and knowledge, credibility can be modelled through automated systems that summarise various fact-checking criteria. The Washington Post, for example, uses labels such as “One Pinocchio” or “Four Pinocchios”. CO-INFORM chose to assign a so-called “credibility value” between -1.0 and 1.0 to specific claims, where -1.0 is not credible at all, 0 is neutral, and +1.0 is as credible as possible. A credibility confidence dimension was also incorporated into the model to capture the probability that this label assessed the claim correctly, depending on the strength of the available signals, such as whether similar claims were posted in the past or whether the style of a tweet resembles that of credible tweets.
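To make the idea concrete, the sketch below shows one way such a rating could be represented and combined in code. This is not CO-INFORM’s actual model, only a minimal illustration assuming a confidence-weighted average of signals; the names Signal and rate_claim are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """One piece of evidence about a claim, e.g. a match against a
    previously fact-checked claim or a stylistic similarity score."""
    value: float       # credibility suggested by this signal, in [-1.0, 1.0]
    confidence: float  # reliability of the signal itself, in [0.0, 1.0]

def rate_claim(signals: list[Signal]) -> tuple[float, float]:
    """Combine signals into a (credibility, confidence) pair.

    Credibility is a confidence-weighted average of signal values, so it
    stays in [-1.0, 1.0]; overall confidence grows with the number and
    strength of the available signals (a simple saturating heuristic).
    """
    total_weight = sum(s.confidence for s in signals)
    if total_weight == 0.0:
        return 0.0, 0.0  # no usable evidence: neutral rating, zero confidence
    credibility = sum(s.value * s.confidence for s in signals) / total_weight
    confidence = min(1.0, total_weight / (total_weight + 1.0))
    return credibility, confidence

# Example: a strong match against an already-debunked claim, plus a
# weak positive signal from writing style.
signals = [Signal(value=-0.9, confidence=0.8), Signal(value=0.3, confidence=0.4)]
print(rate_claim(signals))  # about (-0.5, 0.55): "not credible", moderate confidence
```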
Throughout the design process, co-creation workshops were crucial. “The purpose of having a series of co-creation workshops was to have a continuous sounding board at intermittent points of the project to give input on these misinformation tools,” says Svahn. “Stakeholders had influence at key moments in the design process. The co-creation workshops helped us understand how to make design choices for the best fulfilment of a design goal.” The two main products were a browser plugin to raise citizens’ awareness of misinforming content and a dashboard for fact-checking journalists showing what kinds of misinformation are detected and how they are likely to spread in the near future. Svahn adds, “In the beginning, the tools available were largely confined to the technological realm, primarily utilized by software engineers. Nowadays, these tools have become more mainstream, opening the door to wider adoption and integration into various aspects of daily life.”
The project was not without its challenges, however; while tools can be developed to predict and detect the credibility of a claim, how misinformation will spread in the near and far future remains a much bigger question mark. “How do you know when or where a particular group is going to begin spreading misinformation?” Svahn asks. He mentions the concept of “pre-bunking”, or metaphorically “inoculation”, of people against misinformation as a potential solution. The analogy draws from the medical practice of vaccination, where a weakened or inactive form of a virus is introduced to stimulate the immune system to fight the disease. Similarly, in the context of misinformation, inoculation involves exposing people to likely future misinformation in advance, to help them recognize and resist false or misleading information when they encounter it. Experimental setups, however, are far from a realistic simulation of reality. “It’s easier to talk about pre-bunking or inoculation, but it is harder to deal with in actual practice.”
For instance, there have been false claims about social services abducting children in Sweden. To address this, Swedish authorities have engaged with various groups and stakeholders across the country, helping them see first-hand how these narratives are completely unfounded and disconnected from reality.
Svahn envisions a future where every segment of society, including schools, public organisations, and commercial companies, recognizes the significance of misinformation and collaborates effectively with key local stakeholders to address the challenge. He also hopes that big social media platforms can be incentivised to tackle the spread of misinformation first-hand. “Facebook’s algorithms are geared towards stimulating interaction, and angry interaction at that, which is just as financially viable as a positive interaction,” he notes. He points to EU laws such as the Digital Services Act: “These new laws are a step in the right direction to incentivise social media companies, without whose participation the containment of misinformation cannot go forward.”
On the other hand, he warns of a worst-case scenario: “When narratives get a life of their own, they become disconnected from the topic that they originally started with. Misinformation is a corrosive influence in society, and combating it is not only a way to create a more resilient society, but also a safer society.”