LFG — Leadership. Fundraising. Growth.

AI Nonprofit Governance: What Your Board Owes Your Organization (And Doesn't Know Yet)

92% of nonprofits are using AI. 47% have zero governance. And 87% of nonprofit boards have received no AI-specific training — a number that's actually worse than last year. The fiduciary obligations of care, loyalty, and obedience don't pause while the board figures out what a large language model is.

Why Corporate AI Governance Frameworks Don't Work for Nonprofits

Every major consulting firm has published an AI governance framework. Deloitte covers six areas. NACD calls for "more definition, more strategic focus, and more proficiency." McKinsey offers board AI archetypes. The NIST AI Risk Management Framework provides a government-grade four-function model — GOVERN, MAP, MEASURE, MANAGE — with over 200 suggested actions.

None of this works for the average nonprofit board. Here's why: corporate frameworks assume paid directors, dedicated risk committees, C-suite AI leadership, and compliance departments. The typical nonprofit board meets quarterly, has no technology committee, no CISO, and limited understanding of what AI tools staff are currently using. BoardSource — the leading nonprofit governance organization — has no specific AI governance publication.

One governance element unique to nonprofits makes this even more complex. The duty of obedience requires nonprofit boards to ensure organizational activities align with the charitable mission. Corporate boards don't have an equivalent. When a nonprofit deploys AI in program delivery, fundraising, or beneficiary-facing services, the board has a specific obligation to ask: does this serve or undermine our mission? That question is absent from every corporate framework.

The NTEN/ANB Advisory Group published the closest thing to a nonprofit-specific AI governance framework in 2024 (developed by Afua Bruce and Rose Afriyie), but it has limited distribution and doesn't include board committee charter templates or fiduciary duty mapping. The content gap is clear and wide open.

Read our AI Nonprofit Policy guide →

The Numbers That Should Keep Every Board Member Awake

AI incidents hit a record high in 2024: 233 reported incidents, a 56.4% increase over 2023 (Stanford HAI AI Index / AI Incident Database). Trust in AI companies to protect data fell from 50% to 47%. Over 1,100 AI-related bills were introduced in U.S. states in 2025.

Corporate boards are responding aggressively. 48% of Fortune 100 companies now cite AI risk in board oversight (up from 16%). 40% assign AI oversight to a board committee (up from 11%). 44% cite AI expertise in director bios (up from 26%). MIT CISR research shows organizations with AI-savvy boards outperform peers by 10.9% in return on equity.

Nonprofit boards are going the other direction. 87% have no AI-specific training for board members — up from 58% the prior year. Fewer than 10% have formal AI governance policies. The Virtuous/Fundraising.AI 2026 report found 81% of nonprofit staff use AI individually without shared workflows — meaning data flows are ungoverned, outputs are unreviewed, and liability exposure is accumulating silently.

Mapping Fiduciary Duty to AI Oversight

Forvis Mazars provides the clearest available mapping of nonprofit fiduciary duties to AI governance, and it deserves expansion.

Duty of Care requires directors to make informed decisions. In an AI context: the board must understand what AI tools the organization uses, what data flows into those tools, and what risks those tools create. A board that hasn't asked these questions is not exercising care. Practically, this means at minimum an annual AI use inventory presented to the board with a risk assessment.

Duty of Loyalty requires directors to put the organization's interest above personal interest. In an AI context: boards must evaluate AI vendor relationships for conflicts and ensure data partnerships don't compromise donor or beneficiary trust. If a board member's company sells AI tools to the nonprofit, standard conflict-of-interest protocols apply — but few boards have updated their conflict policies to account for AI vendor relationships.

Duty of Obedience requires adherence to the organization's charitable mission. In an AI context: every AI deployment must be evaluated against mission alignment. The NEDA case is the cautionary example — deploying AI that harms the very population you serve is a fundamental breach of mission fidelity, regardless of cost savings.
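If you want to make that duty mapping concrete, one practical vehicle is a simple AI use inventory the board reviews annually. Here's a minimal sketch; the field names and risk labels are illustrative assumptions, not a formal standard, but each column maps back to one of the three duties.

```python
from dataclasses import dataclass

# Illustrative inventory record for an annual board AI review.
# Field names and risk levels are assumptions, not a formal standard.
@dataclass
class AIUseRecord:
    tool: str                # e.g., "ChatGPT Team"
    owner: str               # staff member or department responsible
    data_inputs: list[str]   # what flows into the tool (duty of care)
    vendor_conflicts: str    # board/vendor relationships, if any (duty of loyalty)
    mission_alignment: str   # how the use serves the mission (duty of obedience)
    risk_level: str          # "low", "moderate", or "high"

inventory = [
    AIUseRecord(
        tool="ChatGPT Team",
        owner="Development",
        data_inputs=["donor names", "gift history summaries"],
        vendor_conflicts="none disclosed",
        mission_alignment="drafts stewardship letters, reviewed by staff before sending",
        risk_level="moderate",
    ),
]

# A board packet might surface only the high-risk entries for discussion.
flagged = [r for r in inventory if r.risk_level == "high"]
print(f"{len(inventory)} AI uses inventoried, {len(flagged)} flagged high-risk")
```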

Read our AI Nonprofit Strategy guide →

How to Structure Board AI Oversight Without Enterprise Resources

A proportionality principle is missing from every governance framework, and it's the key concept for nonprofit boards. Governance should scale with AI usage complexity and risk:

Tier 1 — Minimal AI Use (staff using AI for internal tasks only). Requirement: an acceptable use policy reviewed annually by the full board. No dedicated committee needed. One board member designated as AI liaison.

Tier 2 — Moderate AI Use (AI in donor communications, data analysis, content creation). Requirement: a formal AI policy, an annual AI inventory and risk assessment presented to the board, and quarterly staff reports on AI usage patterns. Consider adding AI oversight to an existing committee's charter.

Tier 3 — Significant AI Use (AI in program delivery, beneficiary-facing services, automated decision-making). Requirement: a standing AI oversight committee or designated board committee with AI in its charter, external risk assessment, incident response protocols, and formal vendor governance.

Tier 4 — Strategic AI Integration (AI embedded across major functions, predictive models informing resource allocation). Requirement: full board committee with charter, AI expertise on the board or retained advisory, regular external audits, board-level AI training program, and integration with strategic planning.

Most nonprofits are at Tier 1 or Tier 2 and need to act accordingly — not build Tier 4 infrastructure for Tier 1 problems.
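If it helps to operationalize the tiers, here's a rough self-assessment sketch a board liaison could adapt. The yes/no flags and the mapping logic are assumptions layered on the tiers above, not part of any published framework.

```python
# Minimal self-assessment sketch for the four tiers described above.
# The flags and the mapping logic are illustrative assumptions.

def suggest_tier(
    internal_tasks_only: bool,
    donor_comms_or_analysis: bool,
    beneficiary_facing_or_automated_decisions: bool,
    embedded_in_strategy: bool,
) -> int:
    """Return the highest tier that matches current AI usage."""
    if embedded_in_strategy:
        return 4
    if beneficiary_facing_or_automated_decisions:
        return 3
    if donor_comms_or_analysis:
        return 2
    return 1  # default: treat minimal or unreported use as Tier 1 until inventoried

REQUIREMENTS = {
    1: "Acceptable use policy, annual full-board review, one AI liaison",
    2: "Formal AI policy, annual inventory and risk assessment, quarterly staff reports",
    3: "Committee charter covering AI, external risk assessment, incident response, vendor governance",
    4: "Standing committee, board AI expertise or advisory, external audits, board training, strategic integration",
}

tier = suggest_tier(
    internal_tasks_only=True,
    donor_comms_or_analysis=True,
    beneficiary_facing_or_automated_decisions=False,
    embedded_in_strategy=False,
)
print(f"Tier {tier}: {REQUIREMENTS[tier]}")
```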

The NEDA Pattern: What Happens When Governance Fails

The National Eating Disorders Association replaced its human crisis helpline — which handled 70,000+ contacts per year — with an AI chatbot called "Tessa." The vendor added generative AI capabilities without adequate safeguards. The chatbot began dispensing harmful weight-loss advice to callers with eating disorders.

The governance failure was total. No board oversight existed for the technology decision. No monitoring protocol caught the problem. NEDA initially dismissed complaints. Only public pressure forced shutdown.

This is the pattern to study. It wasn't a "technology failure" — it was a governance failure at every level: no policy requiring board notification of significant technology changes, no risk assessment for beneficiary-facing AI, no monitoring for harmful outputs, and no incident response plan. Every one of those gaps is fixable with basic governance structures that don't require enterprise budgets.

Air Canada's experience reinforces the pattern from the corporate side: the airline was held liable for its chatbot's false bereavement fare information. The ruling established that liability cannot be outsourced through vendor contracts.

How to Assess AI Vendor Risk (Before Your Data Leaves the Building)

Most nonprofit AI governance discussions focus on staff behavior — what tools people use, what data they enter, what outputs they trust. The vendor risk conversation gets far less attention, and it's where some of the biggest governance gaps live.

When your organization signs up for an AI tool, you're making decisions about data residency, model training rights, security certifications, and liability allocation. Most nonprofits make these decisions at the staff or department level, without legal review, without board awareness, and without reading the terms of service.

Key questions your vendor assessment should answer:

Where does our data reside, and who can access it?
Will the vendor train its models on our data, and can we opt out?
What security certifications does the vendor hold?
Who bears liability if the tool produces false or harmful outputs?
Will we be notified before the vendor adds new capabilities, such as generative AI, to a tool we already use?

The Air Canada ruling discussed above means you cannot outsource that liability to a vendor, and the NEDA case shows that vendors can add capabilities like generative AI to existing tools without adequate safeguards. Your governance framework needs a vendor review process that triggers before any AI tool touches donor data, beneficiary information, or external communications.
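As a rough illustration of that trigger, the sketch below flags any proposed tool that touches donor data, beneficiary information, or external communications for review before adoption. The category names and the function are hypothetical.

```python
# Hypothetical pre-adoption check: flag tools for vendor review before
# they touch sensitive data or speak on the organization's behalf.

REVIEW_TRIGGERS = {"donor_data", "beneficiary_info", "external_comms"}

def requires_vendor_review(data_touched: set[str]) -> bool:
    """True if the proposed AI tool touches any category that must be
    reviewed (terms of service, training rights, security, liability)
    before staff can adopt it."""
    return bool(data_touched & REVIEW_TRIGGERS)

# Example: a drafting tool that will see donor records and send email copy.
print(requires_vendor_review({"donor_data", "external_comms"}))   # True
print(requires_vendor_review({"internal_notes"}))                 # False
```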

Read our AI Nonprofit Policy guide →

The Contrarian Take: Most Small Nonprofits Need Policy, Not Governance

For a 5-person nonprofit whose staff uses ChatGPT to draft donor letters, a standing AI Risk Committee is organizational overhead that serves no one. What you need is a clear policy, an annual board conversation, and one person who periodically asks "has anything changed in how we use AI?"

The "governance enables innovation" framing is often consultant-speak. For a nonprofit ED managing 14 competing priorities with no tech staff, governance can stop innovation — not because it should, but because the implementation overhead exceeds capacity. Build minimum viable governance now so you're not scrambling after an incident. Don't build an enterprise governance structure for a non-enterprise problem.

Frequently Asked Questions

How should a nonprofit board structure AI oversight?

Start with proportionality. Most boards should add AI oversight to an existing committee's charter rather than creating a new committee. The Finance/Audit Committee or a Technology subcommittee are natural homes. A standalone AI committee only makes sense when AI is embedded in program delivery or beneficiary-facing services.

What is a nonprofit board member's personal liability for AI failures?

Board members have a duty of care to make informed decisions. Willful ignorance of how the organization uses AI — particularly when it involves donor data, beneficiary services, or public communications — could expose directors to personal liability, especially if harm results and the board never asked about AI risks.

How do we align AI adoption with our nonprofit mission?

Create a mission-alignment assessment for every proposed AI use case. Three questions: Does this AI application strengthen, rather than replace, human connection with beneficiaries? Does it protect the privacy and dignity of the people we serve? Would our donors and beneficiaries trust this use if it were public? If you can't answer yes to all three, slow down.

What does the NIST AI Risk Management Framework mean for nonprofits?

NIST AI RMF provides a rigorous structure (GOVERN, MAP, MEASURE, MANAGE) designed for government and large enterprises. The principles are sound for nonprofits, but the implementation requires adaptation. Focus on the GOVERN and MAP functions first — understanding and categorizing your AI risks before attempting to build measurement and management systems.

How should nonprofits handle AI bias in fundraising?

Any AI model trained on historical data will replicate historical patterns — including bias in who gets stewarded, what neighborhoods get canvassed, and how donor potential is scored. Audit your AI-assisted decisions for demographic patterns at least annually. If you can't explain why an AI tool makes a specific recommendation, don't act on it.
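A lightweight version of that annual audit can be as simple as comparing AI-generated scores and contact rates across groups. The sketch below assumes a hypothetical export with made-up column names; it's a starting point for questions, not a substitute for a proper fairness review.

```python
import pandas as pd

# Hypothetical export of AI-assisted donor scores; column names are assumptions.
df = pd.DataFrame({
    "donor_id":  [1, 2, 3, 4, 5, 6],
    "zip_group": ["A", "A", "B", "B", "B", "A"],   # e.g., neighborhood cluster
    "ai_score":  [82, 75, 41, 38, 44, 79],         # model's "donor potential" score
    "contacted": [True, True, False, False, False, True],
})

# Compare average score and contact rate by group; large gaps are a prompt
# to ask why, not proof of bias on their own.
audit = df.groupby("zip_group").agg(
    avg_score=("ai_score", "mean"),
    contact_rate=("contacted", "mean"),
    n=("donor_id", "count"),
)
print(audit)
```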

Grassroots to Governance

AI insights, operational playbooks, and fundraising strategy. No fluff. Subscribe to the newsletter.

Or subscribe directly on Substack.

Your Board Has a Fiduciary Obligation It Hasn't Addressed Yet.

LFG helps nonprofit boards stand up AI governance that's proportional to your actual risk — not overbuilt for a problem you don't have, and not absent when you need it. We build committee charters, fiduciary frameworks, and risk assessments designed for volunteer boards with real constraints.

LFG 🚀