HR as Strategic Function April 8, 2026

When AI becomes a weapon: The harassment risk HR leaders might miss

Most HR teams are still writing AI policies around ChatGPT use cases. Meanwhile, AI-generated harassment is already happening in workplaces — and the gap between policy and reality is a liability, not just an oversight.


The news

HR Executive published a piece on a workplace risk that most organizations aren’t prepared for: AI being used as a tool for harassment. The article argues that the threat goes well beyond deepfakes — any employee with a smartphone can use widely available AI tools to target a colleague, and most workplace policies have no framework to address it. Read the full piece here.

My take

HR teams are being asked to govern a technology that their legal counsel doesn’t fully understand, their C-suite is still excited about, and their employees are already using in ways no one anticipated. That’s not a new dynamic — it’s happened with every major technology shift. But what’s different with AI-enabled harassment is how low the barrier to harm has become.

This isn’t a future risk. I’ve talked to enough HR leaders in the past year to know that incidents are already happening — AI-generated text impersonating colleagues, manipulated images, synthetic voice audio. The tools to do this are free, fast, and accessible to anyone with a grievance and twenty minutes. And when HR investigates, most existing harassment policies offer no guidance because they were written before this category of harm existed.

Here’s what concerns me most from a strategic standpoint: HR is often brought into these situations after the damage is done, without the authority or resources to respond effectively. That’s not a policy problem — that’s a positioning problem. If HR doesn’t establish itself as the function that owns the governance framework for AI risk as it relates to people, that ownership will default to Legal or IT. And those functions will optimize for liability reduction and security controls, not for the human experience of the person who was targeted.

The organizations that get this right will be the ones where HR has proactively drafted updated conduct policies, run manager training before an incident occurs, and prepared an investigation protocol in advance. That’s HR operating as a strategic function — not just responding to the moment, but anticipating it.

The so-what

I’d tell my HR leader clients: don’t wait for a high-profile incident to force the policy conversation internally. Bring a draft framework to your CHRO and General Counsel now, before the first complaint lands on someone’s desk. The window to be proactive is shorter than most people think.

AI conduct policy isn’t an IT issue dressed up in HR clothing — it’s a core people risk, and HR should own it. The function that defines the framework gets the seat at the table. The one that merely reacts to the incident becomes the cleanup crew.

Want this kind of thinking on your team?

I work as a fractional CMO for HR Tech companies. Let's talk about what you're building.

Let's Talk