AI in Cybersecurity 2025: Why the Human Element Still Matters

[Image: a glowing fingerprint on a biometric scanner embedded in a circuit board, symbolizing cybersecurity and digital identity verification]
By Colin Ellingson, Fulton May Solutions

Despite all the buzz around AI vs. AI in cybersecurity, I think we’re missing the real point: while human error still causes many of the biggest breaches, it’s also human insight — not just automation — that’s key to stopping them.

At Fulton May, we work with some of the most advanced security partners in the industry — from Arctic Wolf to Mimecast — and we see firsthand how critical it is to marry cutting-edge AI with real human judgment. But the tools only go so far. The biggest vulnerabilities we see in 2025 aren’t coming from gaps in tech. They’re coming from fear, hesitation, and simple human error.

The Most Dangerous Threats We’re Seeing in 2025? Still People.

Let’s start with what’s new. One of the most dangerous trends we’re seeing right now is business email compromise (BEC) attacks with long dwell times. These aren’t smash-and-grab attempts. Attackers sit silently inside Microsoft 365 tenants, studying executive behavior and waiting for the perfect moment, often just before payroll runs or wire transfers. It’s a patient, calculated, human-driven attack, and AI alone won’t stop it unless someone notices and reports suspicious behavior early.
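One concrete thing to watch for with this kind of quiet persistence is inbox rules that auto-forward mail outside the organization, a common footprint attackers leave behind in a compromised Microsoft 365 mailbox. The sketch below is purely illustrative, not a Fulton May tool: it assumes an Azure app registration with Microsoft Graph application permissions (MailboxSettings.Read and User.Read.All), and the tenant details and domain list are placeholders you would swap for your own.

```python
# Minimal sketch (illustrative assumption, not from this post): flag Microsoft 365
# inbox rules that forward or redirect mail to external addresses.
import msal
import requests

TENANT_ID = "<your-tenant-id>"        # placeholders: supply your own app registration
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<app-client-secret>"
INTERNAL_DOMAINS = {"example.com"}    # assumption: replace with your own mail domains

GRAPH = "https://graph.microsoft.com/v1.0"

def get_token() -> str:
    """Client-credentials token for Microsoft Graph."""
    app = msal.ConfidentialClientApplication(
        CLIENT_ID,
        authority=f"https://login.microsoftonline.com/{TENANT_ID}",
        client_credential=CLIENT_SECRET,
    )
    return app.acquire_token_for_client(
        scopes=["https://graph.microsoft.com/.default"]
    )["access_token"]

def external_forwarding_rules(token: str):
    """Yield (user, rule name, target address) for inbox rules that forward outside the org."""
    headers = {"Authorization": f"Bearer {token}"}
    users = requests.get(f"{GRAPH}/users?$select=id,userPrincipalName", headers=headers).json()
    for user in users.get("value", []):  # note: ignores result paging for brevity
        rules_url = f"{GRAPH}/users/{user['id']}/mailFolders/inbox/messageRules"
        rules = requests.get(rules_url, headers=headers).json().get("value", [])
        for rule in rules:
            actions = rule.get("actions") or {}
            targets = (actions.get("forwardTo") or []) + (actions.get("redirectTo") or [])
            for target in targets:
                address = target.get("emailAddress", {}).get("address", "")
                domain = address.rsplit("@", 1)[-1].lower()
                if domain and domain not in INTERNAL_DOMAINS:
                    yield user["userPrincipalName"], rule.get("displayName"), address

if __name__ == "__main__":
    for upn, rule_name, address in external_forwarding_rules(get_token()):
        print(f"Review: {upn} has rule '{rule_name}' forwarding to {address}")
```

A hit from a script like this isn’t proof of compromise, but it’s exactly the kind of early signal worth reporting rather than sitting on.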

Another scary one? “Vishing” via Microsoft Teams. Attackers pose as IT or Power BI consultants on video calls, using screen sharing and remote access to steal credentials and deploy ransomware.

We’ve seen situations where executives thought they were on a routine call — until partway through, they realized something was wrong. They disconnected the session, but only after unknowingly exposing sensitive infrastructure.

Then, they didn’t want to admit what had happened. That’s where the real risk lies.

[Image: a person typing on a laptop with holographic warning symbols on the screen, representing a cybersecurity threat]

Empathy > Blame: Building a Culture of Security

The most common failure we see isn’t someone clicking the wrong link. It’s what happens after — the silence, the cover-up, the fear of getting in trouble.

We’ve seen situations where employees tried to “quietly” report incidents, or didn’t report them at all, worried they’d lose their jobs. That hesitation gives attackers time. A breach that could have been contained in minutes becomes a days-long incident.

If you take one thing from this post, let it be this: create a culture where people feel safe reporting mistakes. That culture is your first line of defense. It’s also one of the hardest things to build, because it takes more than policies — it takes leadership.

Tools Matter — But Not Without Process

Of course, tech still plays a major role. But the best outcomes come from combining human insight with the right tools. Our go-to stack includes:

  • Arctic Wolf for Managed Detection and Response (MDR), which uses both AI and human analysts in what they call a “concierge security operations center.”
  • Mimecast for inline email protection, especially against advanced BEC threats.
  • KnowBe4 for ongoing cyber awareness training and simulated phishing attacks.
  • Microsoft Azure Rights Management for encryption at rest — crucial for protecting sensitive data sitting in email inboxes or file servers.

We also coach clients on setting clear, human-led processes: requiring phone verification before any ACH changes or wire transfers, limiting exposure during project scoping, and encrypting sensitive communications by default. These aren’t glamorous fixes, but they work.
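To make the callback rule concrete, here’s a deliberately simple, hypothetical sketch (the names below are illustrative, not from any Fulton May process) of treating phone verification as a hard gate before any banking-detail change goes through:

```python
# Hypothetical sketch: no payment-detail change proceeds without out-of-band verification.
from dataclasses import dataclass

@dataclass
class PaymentChangeRequest:
    vendor: str
    new_account: str
    requested_via: str            # e.g. "email", "portal"
    verified_by_phone: bool = False  # set True only after calling a known-good number on file

def approve(request: PaymentChangeRequest) -> bool:
    """Hold any ACH or wire-detail change that hasn't been confirmed by callback."""
    if not request.verified_by_phone:
        print(f"HOLD: {request.vendor} change needs callback verification first.")
        return False
    print(f"OK: {request.vendor} change verified by phone; proceed.")
    return True

# Example: an emailed request to change a vendor's account stays on hold
# until someone calls the number already on file and confirms it.
approve(PaymentChangeRequest("Acme Supplies", "XXXX-1234", "email"))
```

The point isn’t the code; it’s that the verification step is non-negotiable and baked into the workflow, not left to whoever happens to read the email.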

AI Will Never Fully Replace People

AI is powerful. But when you remove the human from the equation, you get false positives — or worse, you get tricked. We once worked with a client using Darktrace, an all-AI solution. It was flagging and quarantining machines three or four times a day — all false alarms. They switched to Arctic Wolf, and now they maybe get one or two alerts per year.

That’s what happens when you let human expertise drive the tools, not the other way around.

Don’t Wait for the Breach

The toughest conversations I have are with leaders who only recognize the value of security after something happens. It’s easy to underestimate the risk when everything’s running smoothly. But our job is to help clients avoid the worst — and celebrate the quiet victories.

We’ve stopped ransomware before it spread. We’ve flagged fraud attempts before money was lost. But those wins only happen when teams trust each other, and when leadership invests early — not just in tech, but in culture.

Cybersecurity in 2025 isn’t about AI vs. AI. It’s about people and AI — working together.

Not sure where your cybersecurity blind spots are?

The team at Fulton May will be happy to walk you through what we’re seeing and how you can strengthen your defenses — human and otherwise. Let’s talk.
