Trust Must Be Earned: A Well-Funded Anti-Fake News Initiative Begins to Take Shape

In the world of philanthropy, it's relatively easy to identify and understand the scope of a specific problem. It's what comes after that's difficult.

Such is the case with the scourge of "fake news." While a broad swath of institutional and individual funders is united in wanting to stem the tide of viral sensationalism, it's far from clear how to achieve that goal. Technology clearly has its limits: algorithms can't assess the contextual "fakeness" of a news item, much less account for the nuances of consumer behavior and ingrained prejudices.

It's precisely these challenges that the News Integrity Initiative (NII) hoped to address when it launched earlier this year.

Administered by the CUNY Graduate School of Journalism and funded by Facebook, the Craig Newmark Philanthropic Fund, the Ford Foundation, the Democracy Fund, the John S. and James L. Knight Foundation, the Tow Foundation and others, the consortium is focused on "helping people make informed judgments about the news they read and share online."

When I first reported on this initiative, I wondered how the strategy would play out. Would participants address technology's role in determining what is "fake"? Would they bolster the resources and reach of producers of "real" news? Would they better equip consumers to assess the quality of the information they encounter online? All of the above?

Now, five months in, we have some answers. In a blog post, NII Managing Director Molly de Aguiar presented a roadmap for the project moving forward. 

"Through grants, applied research, events, and other strategies," she said, "we are tackling issues of 'trust' and 'manipulation' in an effort to nurture a new kind of news literacy, broadly defined."

Given the breadth of the challenge, the initiative narrowed its focus to supporting activities that, among other things, "build enduring trust and mutual respect between newsrooms and the public through sustained listening, collaboration and transparency" and "demonstrate ways to improve community conversations and increase understanding and empathy among opposing viewpoints and experiences."

Read her entire post and it becomes evident that while the initiative "will support efforts that track and diminish the influence and impact of disinformation," stakeholders are emphasizing the human element over technological fixes. (Indeed, the word "technology" doesn't appear in the post.)

To that end, the NII will also extend the Tow-Knight Center for Entrepreneurial Journalism's longstanding efforts to "build bridges between platforms and publishers," and form an alliance of partners from different sectors who are "devoted to journalism that serves as a force for building trust, empathy and solutions in our communities."

In short, by "prioritizing journalism that listens, responds to and reflects the needs and goals of the communities," the initiative is built on the premise that the key weapon against fake news is trust. And trust must be earned.

Of course, roadmaps are, by their very nature, broad documents; details come later. And the fact that Facebook is involved suggests some sort of technological component will likely emerge down the line.

Also consider that, for all intents and purposes, the initiative's stakeholders are starting from scratch. "Fake news" is a relatively new problem in the media world. Funders and concerned citizens like Pierre Omidyar and Craig Newmark can't pull a binder of best practices off the shelf, nor can they afford to spend limitless resources developing one.

And so, at the very least, the initiative's roadmap provides a kind of thematic baseline for practitioners moving forward.