I’m a big fan of how entrepreneurs use and manage data, but nonprofits have to use and manage data too. Most people know (or would not be surprised to learn) that data append services help nonprofits cleanse their data at the end of the year. This is vital when you devote so much time to finding new donors, keeping consistent donors in the loop, and keeping up with changes in their contact info.
But what about other facets of data management in nonprofits? Specifically, what about nonprofits’ relationship to “big data,” or data sets “too large or complex to be dealt with by traditional data-processing application software,” as Wikipedia defines the term? Interestingly, we’ve recently seen several articles on big data and nonprofits, and depending on which article you read, you might conclude that nonprofits can easily use big data, that nonprofits can only ride on the coattails of private businesses that use it, or that your organization has many ways to both acquire and use it.
You can access some big data for free
Kayla Matthews’ piece last September at Smart Data Collective points out that nonprofits that can’t afford costly data platforms can tap free data sources mediated by entities as diverse as Amazon, Pew Research and the U.S. Food and Drug Administration. These entities offer “open data” aggregation and platform services that interest groups can use at no cost. There’s also a group called the Nonprofit Open Data Collective, “a consortium of nonprofit representatives and researchers [that] is working to analyze electronically submitted Form 990 data and make it more useful to everyone who wants to see it.”
There are high-visibility organizations using it
Matthews provides a couple of powerful anecdotes in her post from last October, including the Jane Goodall Institute’s use of data entered by private citizens throughout Africa speaking to the status of and threats to chimpanzee populations, and UNICEF’s dissemination of health statistics, such as infant mortality rates, to the public. It’s not just about being nice, though, as Matthews points out: “Viewing the hard data for themselves might encourage individuals to give generously when it’s time for fundraising campaigns.”
One of our favorite new apps, and new approaches, is Branch, which builds on the success of Kiva, a great platform that helps small entrepreneurs, such as beginning family farmers, crowdsource startup loans. It turns out that Kiva’s co-founder, Matthew Flannery, started Branch as a new nonprofit, hoping to solve a challenge that came to be associated with Kiva: “Due to having limited connectivity, loan officers in those countries would have to travel to each borrower to distribute the money, resulting in additional costs. However, with mobile dominating digital technology worldwide, it’s now becoming possible to skip the loan officers entirely and send the money directly to the borrower via mobile. Flannery wants to use machine learning to assist with making sound lending decisions and swiftly deliver loans via mobile payment.” Pretty cool.
So what’s the problem?
For smaller organizations, the problem may simply be the ratio of human resources to data. In small organizations, people wear many hats, and no one person may have the capacity and training for big data management. But there may be other challenges intrinsic to the models and iterations of nonprofits. The bloggers at Pursuant say that a leading problem nonprofits have with big data is that they compartmentalize it too much. “Most organizations already have a lot of data,” they write, “but they store it in departmental silos . . . Instead of synthesizing the data from all sources, nonprofits look at one area at a time. But that approach doesn’t unleash the power of big data. Information gleaned from donor data files, special events, emails opened or closed, and what donors click on at your website must be looked at holistically. But doing that requires breaking out of departmental silos. Diffused data isn’t good for the donor, your mission, or your organization’s long-term sustainability.”
So there’s a capacity problem, but there’s also the diffusion of data across areas that don’t interact much with each other, and so offer no incentive, or expertise, for data synthesis.
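To make the silo problem concrete, here is a minimal sketch of the kind of synthesis Pursuant describes, combining per-department records into one holistic donor profile. The field names, sample data, and `merge_silos` helper are all hypothetical illustrations, assuming each department can export simple records keyed by the donor’s email address:

```python
from collections import defaultdict

# Hypothetical exports from three departmental "silos",
# each a list of records keyed by the donor's email address.
donor_files = [{"email": "pat@example.org", "total_given": 250}]
event_attendance = [{"email": "pat@example.org", "events_attended": 3}]
email_engagement = [{"email": "pat@example.org", "opens": 12, "clicks": 4}]

def merge_silos(*silos):
    """Combine per-department records into one profile per donor."""
    profiles = defaultdict(dict)
    for silo in silos:
        for record in silo:
            # Later silos add fields to (or overwrite) the same profile.
            profiles[record["email"]].update(record)
    return dict(profiles)

profiles = merge_silos(donor_files, event_attendance, email_engagement)
# One holistic view: giving, event, and engagement data side by side.
```

The point of the sketch is not the ten lines of code but the organizational change it stands in for: each department has to agree on a shared key (here, email) and actually export its data for someone to join.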
Good data management
Avery Phillips, writing for Inside Big Data, suggests that taking data management to the next level “requires a structured approach that incorporates cleaning up the data (e.g. paring it down to genuinely useful and trusted information) and creating larger networks of employees involved in the decision-making process beyond those that are tasked with handling the data itself.” Your organization might not be big enough to do that on its own, so consultants may be inevitable. While that costs money, the stories in the various posts we read, including the UNICEF example, suggest it could make your organization even more money. It’s up to you whether you are at the level you believe is appropriate for the service you need. The important thing, Phillips says, is that your IT team isn’t the only department “aware of pertinent big data that might influence” an organizational decision. What you want is “a larger umbrella of team members . . . incorporated into the ‘web of knowledge’ that big data can provide. . . helping them to maneuver themselves into the ever-crowded spotlight, communicate their mission statement effectively, and raise funds at unprecedented rates.”
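The “cleaning up the data” step Phillips mentions can be sketched, too. This is a hypothetical example of paring a raw donor list down to trusted, usable entries, assuming email is the field you trust as an identifier; the `clean_records` helper and sample rows are illustrations, not any particular vendor’s method:

```python
def clean_records(records):
    """Pare a raw donor list down to trusted, de-duplicated entries:
    drop rows with no email, normalize case, keep the first occurrence."""
    seen = set()
    cleaned = []
    for rec in records:
        email = (rec.get("email") or "").strip().lower()
        if not email or email in seen:
            continue  # untrusted (no contact info) or duplicate
        seen.add(email)
        cleaned.append({**rec, "email": email})
    return cleaned

raw = [
    {"email": "Pat@Example.org", "name": "Pat"},
    {"email": "pat@example.org", "name": "Pat"},  # duplicate, different case
    {"email": "", "name": "Unknown"},             # no way to reach this donor
]
```

A real cleanup would also involve the append services mentioned earlier (updating stale addresses rather than just dropping them), but the principle is the same: decide what “trusted” means before you analyze anything.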
So, while data management services are available no matter what, the challenge, and the promise, lies in managing your data holistically, with as many voices included in the analysis as possible.