AI-generated threats are getting more sophisticated
Phishing no longer looks like a sketchy Gmail address and poor grammar. Now, it looks like your CEO asking for an urgent invoice. Or a fake voicemail from a board member. Or even a deepfake Teams call.
And it’s not just hypothetical. In 2024, a finance worker was tricked into paying out $25 million to fraudsters using deepfake technology to pose as the company’s chief financial officer in a video conference call.
AI-generated attacks are reported to be up more than 4,000% since 2022, and they're only getting more sophisticated.
Attackers are now using AI to analyse social media, emails, and online behaviour to create hyper-personalised phishing messages. These scams mimic writing styles, reference recent activity, and even continue legitimate email threads to avoid suspicion. Deepfake audio and video tools are being used to convincingly impersonate executives and board members, leading to large-scale financial fraud and data leaks.
Static security tools can’t keep up. AI-driven phishing campaigns continuously evolve to bypass traditional defences, combining psychological triggers with perfect grammar and near-flawless style. Meanwhile, self-learning malware adapts in real time, hiding in plain sight by mimicking normal system activity.
In addition, the rise of Cybercrime-as-a-Service means even low-skilled attackers now have access to powerful AI tools on the dark web. These groups run automated A/B tests to fine-tune phishing content for maximum engagement.
Large Language Models (LLMs) and open-source AI tools are giving threat actors the ability to:
- Write malicious code in real time
- Tailor attacks using scraped personal info
- Learn how your org operates
That’s what makes AI-generated threats so dangerous. They don’t feel like attacks. They feel like business as usual.
How confident are you that your team would spot these attacks today?
Cybersecurity software every nonprofit needs in 2025
The evolving cybersecurity landscape isn’t just about keeping up with patches and updates. It’s about adjusting your security posture from reactive to proactive.
For many nonprofits, that means taking a second look at tools like:
- Microsoft Secure Score: to get a quantifiable sense of your current setup
- Conditional Access and Multi-Factor Authentication: to add layers without slowing people down
- Endpoint Detection and Response (EDR): to catch what traditional antivirus software misses
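To make Secure Score concrete, here is a minimal sketch of turning a score payload into a readable baseline. It assumes a response shaped like Microsoft Graph's `GET /v1.0/security/secureScores` endpoint (the `currentScore` and `maxScore` field names follow that API's documented resource); the values below are invented for illustration, and in practice you would fetch the real payload with an authenticated Graph call.

```python
import json

# Hypothetical sample payload shaped like Microsoft Graph's
# GET /v1.0/security/secureScores response (first entry is the latest score).
# Field names follow the documented secureScore resource; the values are invented.
sample_response = json.loads("""
{
  "value": [
    {"createdDateTime": "2025-01-06T00:00:00Z",
     "currentScore": 41.5,
     "maxScore": 100.0}
  ]
}
""")

def score_percentage(payload: dict) -> float:
    """Return the latest Secure Score as a percentage of the maximum achievable score."""
    latest = payload["value"][0]
    return round(100 * latest["currentScore"] / latest["maxScore"], 1)

pct = score_percentage(sample_response)
print(f"Secure Score: {pct}% of maximum")
if pct < 60:
    # Threshold chosen for illustration only - set your own target.
    print("Below target - review the recommended actions in the portal.")
```

Even a tiny script like this helps a lean IT team track the baseline over time instead of eyeballing the dashboard once a year.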
If you’re still relying on manual monitoring or a mix-and-match stack, it’s time to rethink the approach.
Where nonprofits are most vulnerable
Nonprofits have a unique risk profile. You’re dealing with:
- Limited IT teams
- Legacy systems or hybrid environments
- Staff and volunteers using personal devices
- Sensitive data from donors, clients, or government partners
When you add a mission-first culture, where cybersecurity can feel secondary, the risk increases.
AI-generated threats exploit exactly these conditions, especially where there are distractions, outdated systems, and low awareness.
Practical steps to improve your security posture
Here’s what you can do, even without a major overhaul:
- Start with Microsoft Secure Score: It’s built into Microsoft 365 and gives you a clear, actionable baseline.
- Run an attack simulation: Attack Simulation Training in Microsoft Defender for Office 365 lets you test how your team would respond to a phishing or malware scenario.
- Invest in user awareness: Tools are essential, but human error still accounts for most breaches.
- Make security part of culture: It can’t sit in a silo. Bring leadership, finance, and program teams into the conversation.
- Download our Cybersecurity Checklist for Nonprofits and Charities
Why human behaviour still matters in AI
AI might be driving the threat, but it’s still human decisions that open the door.
Your users are your greatest risk and your strongest line of defence.
Organisations that train people and test regularly have been shown to reduce phishing risk by up to 86%.
That’s why technical leaders need to think beyond the firewall. Focus on behaviour, training, and a culture where people feel empowered to report weird emails or suspicious logins.
The future of cybersecurity software with AI
It’s not all doom and gloom. AI is also powering a new era of cybersecurity tools that are smarter and more scalable for lean teams.
From Microsoft Copilot to automated threat detection, we’re seeing a shift toward AI-enhanced defence that nonprofits can access if they know where to look and how to set it up properly.
The best leaders won’t just react to AI. They’ll use it to build smarter, more resilient systems.
Partnering for stronger cybersecurity in the nonprofit sector
Cybersecurity in the age of AI isn’t about fear. It’s about focus.
Focus on building systems that adapt. Focus on training people to stay alert. And focus on making security part of your everyday operations.
You don’t need a 12-person team to stay safe. You just need consistency and the right tools.
And a partner who knows how to help.
If you need help navigating this complex landscape, partnering with experts who understand nonprofit challenges can make all the difference.
We work with over 80 nonprofits across New Zealand, Australia, and Canada, and we’re committed to helping them stay safe from cyberattacks.
Want to improve your nonprofit cybersecurity?
Download the Cybersecurity Checklist for Nonprofits