Long before an explosion of fake news influenced the 2016 election, concern was growing in many quarters about the spread of false or misleading information on the internet. One upside of last fall's fake news epidemic is that it has fanned a larger conversation about the integrity of online information—a conversation that's now getting a big funding boost.
A group of tech industry leaders, academic institutions and nonprofits is launching a $14 million fund to support the News Integrity Initiative, a global consortium focused on "helping people make informed judgments about the news they read and share online." The initiative will be administered by the CUNY Graduate School of Journalism.
The founding funders here include Facebook, the Craig Newmark Philanthropic Fund, Ford Foundation, Democracy Fund, John S. and James L. Knight Foundation, the Tow Foundation, AppNexus, Mozilla and Betaworks.
It's rare to see so many big names join forces in common cause. The breadth of players underscores a growing view that no single sector can tackle the news integrity challenge on its own. Certainly, philanthropy can't. This initiative speaks to a collective desire among concerned institutions to "expand the conversation to include other affected and responsible parties: ad agencies, brands, ad networks, ad technology, PR, politics, civil society."
Each participant clearly understands the importance of promoting news literacy. Both the Craig Newmark Foundation and the Knight Foundation have been particularly active in the space of late, awarding civic-minded gifts to ensure that fact-based journalism thrives in the public sphere. Meanwhile, other participants like Facebook have a more pragmatic interest in the subject: the proliferation of fake news affects their bottom line and erodes their brand.
But motivations aside, how will these deep-pocketed funders tackle fake news? What, really, can be done here?
One much-discussed strategy is to address the content and platform factors—using technology to determine what is fake. Another is to bolster the resources and reach of producers of "real" news. Still another is to better educate consumers of information so that they can assess the quality of the information they encounter online.
In recent months, some funders, including Craig Newmark and Pierre Omidyar, have beefed up the resources of trusted sources in journalism like ProPublica. But the News Integrity Initiative appears to be taking a broad approach, with an eye on identifying promising places to drill deeper. One emphasis will be on conducting new studies and experiments to learn more about how people view and share news. CUNY journalism school professor Jeff Jarvis said, "We plan to be very focused on a few areas where we can have a measurable impact."
Facebook's role is important here, and it's building new capacity to address these issues, including bringing in Campbell Brown as its head of news partnerships. Brown has said, "As part of the Facebook Journalism Project, we want to give people the tools necessary to be discerning about the information they see online."
Digging into the role of consumers with an eye on changing behavior is important, given that quick technological fixes to the fake news problem seem unlikely. As an astute piece by Danah Boyd in Vox notes, what counts as fake news can be a relative thing, and an algorithm can't determine what's fake. Technology can only take you so far—a realization driven home by YouTube's recent inability to curb "contextual" hate speech.
Of course, once you start looking more closely at consumer behavior, you run into a whole other set of deeply embedded problems—like, for example, the urge people have to believe and share news items that confirm their own views and biases, even items that are far-fetched.
It will be interesting to watch how the News Integrity Initiative handles this tangled and alarming challenge.