Speaking the Language of Philanthropy in the Age of AI

Kate Nimety (she/her), EVP, Strategy
Cody Culp (he/him), AVP, Technical Strategy

The Human-Plus-AI Model: Efficiency, scalability, and growth

Philanthropy teams today face immense challenges: Data, segmentation, and compliance work have exploded in volume, yet staff shortages and burnout are common. Staff turnover is high – with gift officers averaging only about 18 to 24 months in a role – leaving gaps in continuity. And even as budgets flatten or shrink, expectations for personalization in donor outreach continue to grow.

AI adoption is accelerating across the sector, and it offers relief for these challenges through efficiency and scale. Yet while AI can strengthen operations, it cannot replace human judgment. Successful implementation hinges on a single concept: the human-plus-AI model. In this model, the human layer is essential to ensuring productive outreach and personalization without triggering donor fatigue.

If we focus on what is proven, responsible, and worth the effort, AI enables efficiency, scalability, and growth, particularly in specialized fields like health care philanthropy.

Generated vs. Genuine: Empathy and the human touch

When it comes to donor expectations around AI implementation, transparency is key: Donors support efficiency in philanthropy, but they can feel the difference between generated and genuine communications. According to Cherian Koshy and Nathan Chappell’s Donor Perceptions of AI Survey, while 82% of donors are familiar with AI, around 60% worry about privacy and a loss of human touch. Thirty-one percent say they would be less likely to give if outreach feels impersonal and AI-generated. That’s 31% of your donor population, and you don’t get to pick which 31% are turned off. You might have influencers, loyal donors, or future planned givers in the mix.

The key differentiator in sustaining human interactions? Empathy. While AI can recognize emotion (sympathy), it cannot connect authentically through shared experience (empathy). Because it takes human oversight to interpret emotional nuance, the human touch is essential to ensuring AI reflects compassion and donor intent.

Implementation: From data and systems to staff training

At Zuri Group, we help our partners streamline their systems to support their fundraising efforts and maximize efficiency. When it comes to implementing AI in your fundraising program, clean, reconciled data and smoothly connected systems are crucial first steps.

Here’s how the pieces fit together: Implementing AI in your fundraising program with an eye toward efficiency, governance, and ethics means combining clean data, connected systems, and clear human oversight. We encourage our partners to start small and learn fast with these six steps:

  1. Audit data and processes: Evaluate current systems for AI readiness (see the data-audit sketch after this list).
  2. Establish oversight: Set up governance to manage AI implementation.
  3. Start small: Begin with limited, controlled applications.
  4. Measure outcomes: Assess performance and identify issues.
  5. Expand gradually: Scale AI use based on proven successes.
  6. Review impact: Regularly evaluate environmental and ethical effects.
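
To make step 1 concrete, here is a minimal sketch of a data-readiness audit in Python. The file and column names (constituent_export.csv, constituent_id, email, last_gift_date) are hypothetical placeholders for your own CRM export, and the checks shown are only a starting point.

```python
import pandas as pd

# Hypothetical CRM export; swap in your own file and field names.
donors = pd.read_csv("constituent_export.csv", parse_dates=["last_gift_date"])

audit = {
    # Completeness: fields an AI model would rely on should be well populated.
    "pct_missing_email": round(donors["email"].isna().mean() * 100, 1),
    "pct_missing_last_gift_date": round(donors["last_gift_date"].isna().mean() * 100, 1),
    # Uniqueness: duplicate constituent records undermine any scoring model.
    "duplicate_constituent_ids": int(donors["constituent_id"].duplicated().sum()),
    # Freshness: stale records point to reconciliation work before AI adoption.
    "records_with_no_gift_in_3_years": int(
        (donors["last_gift_date"] < pd.Timestamp.now() - pd.DateOffset(years=3)).sum()
    ),
}

for check, value in audit.items():
    print(f"{check}: {value}")
```

An audit like this also doubles as a baseline: rerun it after each cleanup cycle to show progress before expanding AI use.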

Establishing AI Governance: Prioritizing ethics, people, and culture

While AI adoption is accelerating among nonprofits, few organizations have formal AI governance yet. Governance is critical to the ethical and effective implementation of AI because it protects both data and donor intent. Remember, though: AI is evolving, so it’s OK to revisit your governance and revise it as the technology changes.

Ethics and Oversight

Ethical AI acknowledges its limits rather than claiming to feel what humans feel. Oversight of AI implementation should include:

  • Explainability: Every AI-assisted decision should be explainable and traceable.
  • Human approval: Donor-facing communication should always require staff review (a minimal sketch follows this list).
  • Internal review: AI projects should be tested for bias and fairness regularly.
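
To show what “human approval” and “explainability” can look like in practice, here is a minimal sketch of a review gate for AI-drafted donor messages. The class and field names are illustrative assumptions, not a prescribed implementation; the point is that nothing AI-generated reaches a donor without a recorded reviewer and a traceable note about what the model saw.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIDraft:
    """An AI-assisted draft that must stay traceable and human-approved."""
    donor_id: str
    draft_text: str
    model_used: str                     # which model produced the draft
    inputs_summary: str                 # what data informed it (explainability)
    reviewed_by: Optional[str] = None   # staff member who signed off
    reviewed_at: Optional[datetime] = None
    approved: bool = False

    def approve(self, reviewer: str, edited_text: Optional[str] = None) -> None:
        """Record staff review; reviewer edits replace the AI text."""
        if edited_text:
            self.draft_text = edited_text
        self.reviewed_by = reviewer
        self.reviewed_at = datetime.now(timezone.utc)
        self.approved = True

def ready_to_send(draft: AIDraft) -> bool:
    # Donor-facing communication always requires staff review.
    return draft.approved and draft.reviewed_by is not None
```

Keeping these records also supports the internal review point: the log of approved and rejected drafts becomes the sample you test for bias and fairness.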

People and Culture

AI will only succeed when staff feel confident that it supports – not replaces – their relationships. Organizations must invest in AI and data literacy for staff and reinforce the role AI plays in enhancing their own expert judgment. Encourage staff to experiment and reward transparency; a safe culture of experimentation, backed by governance, is what drives long-term success.

Leveraging AI models to support human expertise

Philanthropy teams wondering where to begin should first consider the types of AI models available and how combining these different types of machine learning can support human engagement.

  • Descriptive/diagnostic models summarize data and flag anomalies.
  • Predictive models forecast outcomes such as retention or future giving behaviors.
  • Prescriptive models suggest next steps (with human review).
  • Generative models create content. (These carry the highest reputational risks but are the most widely used type of AI model today.)

We recommend teams begin with descriptive and predictive use cases. Start small with limited, controlled use cases, measure the outcomes, and scale AI use based on proven successes; that sequence maximizes oversight and learning as you expand.
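
As an illustration of what descriptive and predictive use cases can look like, the sketch below summarizes giving activity and fits a simple retention model with scikit-learn. The dataset and column names (donor_history.csv, gift_count, retained_next_year, and so on) are hypothetical; treat this as a starting point under your own data and review, not a production model.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical donor-level dataset; replace the columns with your own CRM fields.
donors = pd.read_csv("donor_history.csv")

# Descriptive: summarize giving and flag unusually large gifts for human review.
print(donors[["gift_count", "total_giving", "years_active"]].describe())
outliers = donors[donors["total_giving"] > donors["total_giving"].quantile(0.99)]
print(f"{len(outliers)} records flagged for manual review")

# Predictive: a simple, inspectable model of next-year retention.
features = ["gift_count", "total_giving", "years_active", "events_attended"]
X_train, X_test, y_train, y_test = train_test_split(
    donors[features], donors["retained_next_year"], test_size=0.2, random_state=42
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Holdout accuracy: {model.score(X_test, y_test):.2f}")

# Coefficients keep the logic explainable for staff (see the governance section above).
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```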

From here, consider how purpose-built AI can support segmentation, solicitation, and stewardship. It’s critical that human oversight and review remain part of the process, even if AI is helping to make segmentation decisions or generate email messages. The way to ensure you’re connecting with your donors is to infuse human empathy into your outreach:

  • Segmentation: Use AI to define “high-potential” donors through real-time donor scoring that flags records with recent positive activity. Try using AI to predict behavior indicators like retention risk, giving likelihood, and preferred communication style. Review these indicators for accuracy, and make sure you understand the logic the AI tool used to identify them (see the sketch after this list).
  • Solicitation: AI can support sequenced outreach to drive a 1:1 meeting cadence or generate predrafted, editable emails for staff to review. Try asking AI to recommend timing, message tone, and ask level – and be sure to review and edit the sentiment, details, and/or tone before sending.
  • Stewardship: By creating unified donor profiles based on historical giving and impact area, AI models can suggest stewardship content for staff to review. Try asking AI to draft personalized thank-you messages that align with these identified donor behaviors and histories and add a personalized touch before hitting send.
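
Continuing the hypothetical retention model from the earlier sketch, here is a minimal example of turning scores into suggested segments while keeping the reasoning visible and every assignment subject to staff review. The thresholds and column names are illustrative assumptions.

```python
import pandas as pd

def suggest_segment(score: float) -> str:
    """Map a giving-likelihood score (0-1) to a suggested segment.
    Thresholds are illustrative; calibrate them against your own history."""
    if score >= 0.8:
        return "high-potential: gift officer outreach"
    if score >= 0.5:
        return "warm: personalized email sequence"
    return "steward: general communications"

# Hypothetical scored output from a predictive model (see the earlier sketch).
scored = pd.DataFrame({
    "constituent_id": ["A101", "A102", "A103"],
    "giving_likelihood": [0.91, 0.62, 0.18],
    "top_factor": ["recent event attendance", "upgraded last gift", "lapsed 3 years"],
})

scored["suggested_segment"] = scored["giving_likelihood"].apply(suggest_segment)
scored["needs_staff_review"] = True  # a human confirms every assignment

print(scored.to_string(index=False))
```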

When used effectively, AI augments the work we do and the art of fundraising: By positioning the data most relevant to the fundraiser, it can support informed decision-making and simplify the effort to craft personalized, relevant outreach.

Environmental Impact of AI Usage

In the health care and philanthropy sectors, it’s critical to acknowledge that AI’s environmental footprint is a source of ethical friction. When CO2 emissions are compared side by side, the gap between everyday human activity and AI technology is hard to ignore. A 2019 study by researchers at the University of Massachusetts, for example, found that training a single large AI model can emit far more CO2 than the average American produces in a year – and, in the most compute-intensive case studied, several times the lifetime emissions of an average car.

To keep this impact in check, we recommend nonprofit practitioners use smaller, more efficient models when possible and disclose impact metrics to remain transparent and ethical about the environmental costs of AI usage.
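
One practical way to disclose impact metrics is to measure them at training time. The sketch below assumes the open-source codecarbon package is available in your environment; the training routine is a stand-in placeholder.

```python
# pip install codecarbon
from codecarbon import EmissionsTracker

def train_model() -> None:
    """Placeholder for your actual model-training routine."""
    sum(i * i for i in range(10_000_000))  # stand-in workload

tracker = EmissionsTracker(project_name="donor-retention-model")
tracker.start()
try:
    train_model()
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent

print(f"Estimated training emissions: {emissions_kg:.6f} kg CO2e")
```

Reporting a figure like this alongside model results is a lightweight way to keep the environmental review in step 6 honest.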

AI Speaks Data. Fundraisers Speak Heart.

The future of philanthropy belongs to those who remain committed to blending data and heart. While purpose-built AI can amplify empathy, it cannot replace it. Embedding these tools into a CRM or advancement stack can provide access to consolidated donor data, insights, and automation that can drive leading-edge engagement efforts. But these tools are most successful when used as a starting point, combined with human oversight and interpretation.

In health care and nonprofit organizations, donor trust, compliance, fairness, and privacy are of utmost importance. Developing a governance framework that ensures your AI aligns with these imperatives is uniquely important.

As AI implementation accelerates rapidly, nonprofit and health care organizations seeking to leverage its efficiency without compromising their authenticity should prioritize using AI as a tool to scale and personalize, while maintaining human relationships. Donors want to feel known, valued, and respected, and it is human-generated empathy that will be crucial to keeping the heart of philanthropy intact.
