AI is a growth opportunity for nonprofits
Artificial intelligence (AI) is reshaping the nonprofit sector, opening new possibilities for impact through greater efficiency and more personalised engagement. In 2025, AI is top of mind for nonprofit leaders, a shift reflected in recent statistics.
Raisely’s 2025 Fundraising Benchmarks study found that nearly half of fundraisers (47%) see AI as the biggest digital opportunity ahead.
Another report highlights the same trend:
- 60% of nonprofits believe AI will deliver major benefits for the sector.
- By 2025, an estimated 71% of nonprofits will be using AI in areas like fundraising and event planning.
- Use of AI for donor profiling and segmentation has already grown from 4% in 2023 to 6% in 2024, with 43% of nonprofits planning to adopt it in the coming years.
However, leaders are becoming increasingly aware of the ethical concerns surrounding AI. For nonprofit technical leaders and CEOs, the stakes are high, with AI adoption raising concerns and risks around:
- Data privacy and security
- Bias and misinformation
- Trust and transparency
- Environmental impact
To mitigate these risks, nonprofit leaders must consider AI ethics.
AI ethics can be defined as “a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies”.
AI ethics matter because misuse, abuse, poor design, or unforeseen negative outcomes of AI systems can cause real harm to people and society. Nonprofit leaders must therefore approach AI adoption strategically, with thoughtful planning.
Considerations for nonprofit leaders using AI tools
As nonprofits embrace AI, it’s important to ensure that adoption is guided by ethical principles.
Key factors in using AI responsibly
- Data privacy and security: Nonprofits must ensure that any AI tool complies with regulations such as GDPR, CCPA, and/or other local donor privacy standards to protect sensitive information.
- Bias and misinformation: Human oversight is essential to minimise bias and inaccuracies, helping ensure that AI-driven insights and communications remain fair and reliable.
- Trust and transparency: Being open about when and how AI is used helps maintain donor and stakeholder confidence, reinforcing accountability and integrity.
- Environmental impact: Running AI systems can be energy-intensive. Organisations should look for cloud solutions with smaller carbon footprints and prioritise sustainable options that reflect their values.
Below are three types of AI technologies nonprofits might use, along with ethical considerations for each:
1. Large language models (LLMs)
Popular LLM-powered tools include ChatGPT and Microsoft Copilot. They can quickly generate content such as thank-you letters, grant applications, or personalised donation requests. But will those messages feel authentic?
Example
A community sports club might use ChatGPT to draft renewal emails for hundreds of families. While the tool saves staff time, the resulting messages could fall flat, leading to fewer renewals. Reviewing and editing each draft keeps communications in the organisation’s genuine tone and stops them sounding generic.
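To make the oversight step concrete, here is a minimal sketch of a draft-then-review workflow. It assumes the official OpenAI Python client and an API key; queue_for_human_review is a hypothetical stand-in for your own editorial process, not a real library function:

```python
# A minimal sketch of a draft-then-review workflow.
# Assumes the official OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY environment variable. queue_for_human_review is a
# hypothetical stand-in for your own editorial step.
from openai import OpenAI

client = OpenAI()

def draft_renewal_email(family_name: str, programme: str) -> str:
    """Ask the model for a first draft only -- never send it unreviewed."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You draft warm, concise membership renewal emails "
                        "for a community sports club. Avoid generic filler."},
            {"role": "user",
             "content": f"Draft a renewal email for the {family_name} family, "
                        f"who take part in our {programme} programme."},
        ],
    )
    return response.choices[0].message.content

def queue_for_human_review(draft: str) -> None:
    # Hypothetical: route the draft to a staff member for editing,
    # rather than sending it automatically.
    print("DRAFT FOR REVIEW:\n", draft)

queue_for_human_review(draft_renewal_email("Nguyen", "junior football"))
```

The design choice that matters here is structural: the model’s output lands in a review queue, so a person edits every message before it reaches a supporter.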
Ethical considerations
Bias and representation: Because LLMs are trained on existing data, they can unintentionally reinforce stereotypes or exclude parts of a community. Therefore, human oversight is vital to keep messaging fair and inclusive.
Authenticity: People are increasingly attuned to “AI-sounding” text (repetitive jargon, overly formal tone, or formulaic structures). LLMs should be seen as drafting assistants, not replacements for human communicators.
Shadow AI: When employees use unapproved tools, sensitive data can slip through the cracks. Research shows 80% of business leaders are concerned about this risk. Examples include:
- Linking AI tools to unsecured sources, which could expose proprietary information.
- AI systems inheriting the same permissions as their users, meaning an over-permissioned account might unintentionally grant access to confidential data.
- Poor data lifecycle management, leaving outdated or unnecessary data stored in AI systems and creating long-term vulnerabilities.
2. Machine learning (ML)
ML identifies patterns in large datasets and uses them to predict outcomes. For nonprofits, this could mean anticipating which donors are most likely to give again, forecasting the success of a fundraising campaign, or even personalising supporter journeys. Platforms like Dynamics 365 CRM are already embedding ML features to help organisations segment donors and predict engagement trends.
Example
An international aid charity might use ML to analyse donor giving patterns and predict which supporters are most likely to respond to an emergency appeal. These insights can make fundraising more efficient, but they also require handling sensitive financial and personal data with care.
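As an illustration of the technique (not any particular vendor’s feature), here is a minimal sketch of donor-response prediction using scikit-learn. The CSV file and column names are made-up stand-ins for an export from your CRM:

```python
# A minimal sketch of donor-response prediction.
# Assumes pandas and scikit-learn; donors.csv and its columns are
# hypothetical stand-ins for an export from your CRM.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

donors = pd.read_csv("donors.csv")

features = donors[["gift_count", "months_since_last_gift", "avg_gift_amount"]]
responded = donors["responded_to_last_appeal"]  # 1 = gave, 0 = did not

X_train, X_test, y_train, y_test = train_test_split(
    features, responded, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Held-out probability that each supporter responds to the next appeal.
scores = model.predict_proba(X_test)[:, 1]
print("accuracy:", round(model.score(X_test, y_test), 2))
print("highest predicted responders:", scores.argsort()[-5:])  # row positions
```

Even a simple model like this handles giving histories and financial details, so the same privacy safeguards apply as for any donor database.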
Ethical considerations
Bias in training data: If a fundraising model is built on donor data that over-represents one demographic, it may unfairly prioritise that group while overlooking others, reinforcing inequities.
Consent and autonomy: Donors and beneficiaries may not have explicitly agreed to their data being used for training or validating AI models. Nonprofits should consider whether their use of data respects privacy rights and personal agency.
Data security: It’s crucial to understand how AI vendors store and protect donor information. Even small oversights in data storage or lifecycle management can create significant risks. Leaders should review vendors’ privacy and security policies before implementation.
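To make the bias point above concrete, a simple pre-training audit can compare who appears in the training data with the community the organisation actually serves. A minimal sketch, assuming a pandas DataFrame with an illustrative region column and made-up community figures:

```python
# A minimal sketch of a representation audit before training a model.
# donors.csv, the "region" column, and the community shares are all
# illustrative assumptions, not real figures.
import pandas as pd

donors = pd.read_csv("donors.csv")

# Share of the community the organisation serves (assumed figures).
community_share = pd.Series({"urban": 0.55, "regional": 0.30, "remote": 0.15})

# Share of each group in the data the model would be trained on.
training_share = (
    donors["region"].value_counts(normalize=True)
    .reindex(community_share.index, fill_value=0.0)
)

skew = training_share - community_share
print(skew.sort_values())  # large negative values flag under-represented groups
```

If a group the charity serves barely appears in the training data, a model built on that data will quietly deprioritise them; checking before training is far cheaper than discovering it after a campaign.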
3. AI automation
Automation tools can streamline customer service and internal processes, saving staff time for higher-value tasks. For example, chatbots can respond to common queries about event times, donation receipts, or office locations.
Example
A mental health nonprofit might consider using a chatbot to handle enquiries. While automation works well for simple FAQs like “What time is your support group?” or “Where is your office?”, questions involving distress or personal circumstances require a trained human response.
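One way to enforce that boundary in software is to make escalation the default. The sketch below is illustrative only: the FAQ answers are made up, and real triage needs far more care than keyword matching:

```python
# A minimal sketch of a "bounded" FAQ bot that escalates anything
# sensitive to a human. Keyword matching is illustrative only; a real
# deployment would need far more careful triage.
FAQ_ANSWERS = {
    "support group": "Our support group meets Tuesdays at 6pm.",
    "office": "Our office is at 12 Example Street (hypothetical address).",
}

ESCALATION_TERMS = {"crisis", "hurt", "hopeless", "suicide", "emergency"}

def respond(message: str) -> str:
    text = message.lower()
    if any(term in text for term in ESCALATION_TERMS):
        # Never automate distress: hand off to a trained person immediately.
        return ("I'm connecting you with a member of our team now. "
                "If you're in immediate danger, please call emergency services.")
    for topic, answer in FAQ_ANSWERS.items():
        if topic in text:
            return answer
    return "I'll pass your question to our staff, who will reply shortly."

print(respond("What time is your support group?"))
print(respond("I'm feeling hopeless and need help"))
```

Note the fallback: when the bot is unsure, it defers to staff rather than guessing, which is the safer default for a mental health service.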
Ethical considerations
Human dignity and care: Automating sensitive conversations can dehumanise interactions and risk causing harm. Automation should never replace human judgement where safety, well-being, or emotional support is involved.
Appropriate boundaries: Nonprofits must decide which interactions are suitable for automation and which demand empathy and human connection.
Shadow AI risks: Just like with LLMs, staff may experiment with free automation tools that connect to donor databases or email systems. Without approval and security checks, this can expose sensitive data and undermine trust.
The point? LLMs, machine learning, and automation can transform nonprofit operations – but only when used with care and responsibility. Next, we’ll discuss practical steps nonprofits can take to implement AI ethically.
Practical steps for ethical AI use in nonprofits
Responsible AI adoption requires clear policies, training, and ongoing oversight. Here are three practical steps to guide nonprofit leaders:
1. Create an ethical AI policy
Outline rules and guidelines for AI use, and clarify where AI should not be used.
Tip: a strong policy should cover:
- AI risk management
- Data privacy and security
- Transparency and accountability
- Acceptable uses of AI
- A list of approved tools
By setting these boundaries and communicating them clearly and often, organisations can protect both staff and stakeholders. Policies should also be reviewed regularly to keep up with new tools, regulations, and best practices.
2. Do your due diligence
Every AI tool comes with its own risks. Leaders should understand the security policies, data-handling practices, and environmental impact of the tools they adopt.
Tip: choose tools with strong security standards. For example, Microsoft Copilot has robust safeguards built in, making it a safer choice than many free, consumer-grade apps. Monitor how staff are using AI, especially with sensitive information.
3. Combine AI with human oversight
AI should be used to help staff, not to replace personal interaction or decision-making.
Tip: empower your staff to use AI responsibly. Provide training and encourage safe experimentation. Consider enrolling staff in AI-focused webinars or courses.
Did you know?
A 2024 study found that 38% of employees using AI tools admitted to sharing sensitive work data, often unintentionally. This highlights the importance of staff training and clear AI policies.
Key takeaway
AI will influence every cause, but its value depends on ethical, thoughtful use. Nonprofits that combine AI with human judgement, clear policies, and sustainability practices will be best placed to amplify their mission while protecting the people and communities they serve.