Is “Tech for Good” Actually Good? A New Framework Seeks to Help Funders Vet Proposals


Philanthropy has seen an increasing number of technology-related proposals that purport to have positive impacts on equity and justice. Known as “tech for good” or “tech justice,” these projects often promise to advance human rights and social justice, but while well-intentioned, they can bring about unintended negative consequences.

In response, the Ford Foundation has published a new report titled “A Guiding Framework for Vetting Technology Vendors Operating in the Public Sector.” The framework, primarily aimed at philanthropic program officers, is meant to serve as an additional tool in funders’ due diligence as they analyze how new digital-technology-based proposals might affect the public sector, with particular attention to technology’s impact on human rights, social and economic justice, and democratic values.

Philanthropic funders are often approached to support the creation of new tech for potential use in public agencies. Last year, for example, Schmidt Futures — along with other funders like the Ballmer Group, the Gates Foundation and Schusterman Family Philanthropies — directed $13 million to develop tech that will allow low-income Americans to better access government benefits.

“I get a lot of pitches for projects that involve technology,” said Cynthia Conti-Cook, who is a technology fellow for gender, racial, and ethnic justice at the Ford Foundation. “But I learned the hard way that there are a lot of questions that you have to ask because there are potentially a lot of long-term consequences that aren’t always immediately apparent.”

The framework can be used to conduct initial assessments of such proposals, to train new program officers, and to vet new “tech for good” proposals from existing grantees. Ford commissioned TARAAZ, a research and advocacy organization that works at the intersection of technology and human rights, to create the report.

“The report was commissioned in order to essentially leave something behind that captures the kind of analysis and framing and thinking that I’ve been doing with my team over the past three and a half years,” Conti-Cook said.

“Technology for good”

The report’s focus is on technology that is used — or can be used — in the public sector by public agencies, as opposed to commercial tech. That could include tech for use in the criminal legal system, public welfare systems, public health, or migration management. 

The public sector often collaborates with philanthropy, and one way these partnerships manifest is that philanthropic organizations provide funding to develop, test and advance a potential solution to a public problem. If the project or pilot is successful, local, state or federal governments may implement the tested program.

“In the philanthropic space, numerous technology-related proposals compete for attention,” the report notes. “Their approaches and themes vary widely, but one thing that many of them have in common is the assumption that they will use technology not just to reduce inefficiency, but to increase equity and justice.”

The report is not intended to disparage the use of technology by public agencies, nor does it discourage funders from providing grant dollars in support of “tech for justice.” 

“What we mean by this framework is, yes, we acknowledge that these benefits exist in public services, in social welfare services, and we need them. If we need them, we can have them in a way that is not harmful,” said Roya Pakzad, the report’s lead author, and founder and director of TARAAZ. While public agencies can benefit from these technologies, there may be long-term consequences to implementing them. 

Many experts and advocates have argued that these projects, while well-intended, “often replicate the status quo, offering facile or impractical solutions to deeply rooted systemic social problems, while in some cases replicating the same inequalities or injustices they seek to alleviate,” according to the report.

For example, many advocates, experts and funders see the abundance of data in our digital world as an important tool for social justice. That abundance, however, can backfire: the same data can be used to surveil and monitor groups that have been historically marginalized and oppressed.

According to Pakzad, the report does not propose that technology cannot be used for good in the public sector. 

“It was not our intention in making this framework to say, OK, there are human rights issues with technology, and now we condemn all kinds of technology. That was not our intention. Our intention was to say, ‘Yes, these technologies, there are benefits to them. But there are harms to them, too.’ … This is the framework to address the harms in order to maximize the benefits that they promised to bring to public agencies and public services,” Pakzad said.

Red flags

The framework consists of 21 red flags divided into seven categories: theory of change and value proposition; business model and funding; organizational governance, policies and practices; product design, development and maintenance; third-party relationships, infrastructure and supply chain; government relationships; and community engagement.

The report explains each red flag and provides examples, questions program officers can use to identify it, and links to additional resources. One red flag funders should look for is a new product that has no prospect of policy, cultural or systemic change and instead promises a quick fix or Band-Aid for a longstanding issue. Another is a project that replaces an existing product but has a user interface or design that is inaccessible to people with disabilities or intimidating to those lacking technical or digital literacy.

The last of the red-flag categories involves community engagement. One of the examples here is tokenism, where engagement is not meaningful and is instead treated as a box to check.

According to Pakzad, in some “tech for good” projects, community engagement exists only on paper. Vendors may say they spoke with community members, but funders should push for more detailed information: what type of engagement took place, how many times vendors spoke with community members, whether members were compensated for their time, and whether vendors followed up and implemented their suggestions. There is a significant difference between a one-off meeting and a continuous conversation. Likewise, community engagement that doesn’t begin until the end of the development phase is simply a box checked off rather than actual engagement.

Implementing the framework

According to Conti-Cook, the project came about from conversations not just with grantees, who wanted funders to understand the potential harms of the projects they were seeding, but also with tech vendors themselves.

“They asked for some articulated guardrails that they can hold themselves accountable to, and also that they could point to if they were asked to do something that they felt was too risky,” Conti-Cook said. “They could really point to a substantive document that articulates that concern and educates some of the public sector folks, agencies and procurement officers that are asking for technologies to be built.”

Funders can also use the framework to assess what types of non-financial support they can provide to grantees. If funders see that a grantee’s work is lacking in areas the framework describes, they can provide additional support to strengthen it.

Pakzad explained that she and her colleagues don’t want the framework to be a checklist of questions that people simply run through, which is why they didn’t attach a score-based metric to it. Instead, it can be implemented as an addition to existing due diligence frameworks, human rights impact assessments, algorithmic impact assessments or privacy impact assessments, among others.

The framework shouldn’t stop conversations about new technology and innovation entering the public sector, Pakzad said, but should instead point toward the kinds of mitigation strategies that can be used to address any foreseeable harm “tech for good” may cause.

Conti-Cook hopes that Ford will include the report as part of its orientation for new program officers, and that other foundations will do something similar. “We hoped that this gave funders and gave procurement officers some confidence to say yes to technology, knowing that they were more informed about and could articulate some of the concerns that might be just a little bit abstract from them.”