
Exabeam, a global leader in intelligent analytics and automation for security operations, has published the results of its new international report From Human to Hybrid: How AI and the Analytics Gap Are Fueling Insider Risk. The study explores how artificial intelligence and analytical shortcomings are intensifying insider threats.
AI makes insider threats more effective, and they now outpace external attacks
Based on a survey of 1,010 cybersecurity professionals across key sectors, the research shows that insider threats have surpassed external attacks in terms of security concern — and that AI is accelerating this shift.
According to the report, 64% of respondents now consider both malicious and compromised insiders to be a greater risk than external actors.
Generative AI (GenAI) plays a key role, making attacks faster, stealthier, and more difficult to detect.
“Insider threats are no longer just humans,” said Steve Wilson, Chief Product and AI Officer at Exabeam. “They are AI agents entering with valid credentials, mimicking trusted voices, and acting at machine speed. The question is no longer just who has access — but whether you can detect when that access is being abused.”
The growth of insider threats shows no signs of slowing down
Insider-driven activity is intensifying across industries, fueled by both malicious intent and accidental compromise. Over the past year, more than half of organizations (53%) reported a notable increase in insider incidents, and most (54%) expect this trend to continue. Government organizations are preparing for the sharpest rise (73%), followed by manufacturing (60%) and healthcare (53%), driven by broader access to sensitive systems and data.
This surge is not uniform; risk trajectories vary significantly by geography and sector. The Asia-Pacific and Japan (APJ) region leads forecasts for insider threat growth (69%), reflecting heightened awareness of identity-driven attacks. In contrast, nearly one-third (30%) of respondents in the Middle East expect a decrease, a sign that could indicate either higher confidence in existing safeguards or an underestimation of evolving risks. These differences underscore the complexity of the insider threat landscape and the need for protection strategies aligned with regional realities.
AI enables faster, smarter, and more sophisticated insider attacks
Artificial intelligence has become a force multiplier for insider threats, allowing malicious actors to operate with unprecedented efficiency and sophistication. Two of the top three current insider threat vectors are now AI-related, with AI-enhanced phishing and social engineering cited as the most concerning tactic (27%). These attacks can adapt in real time, mimic legitimate communications, and exploit trust at a scale and speed no human adversary can match.
Unauthorized GenAI use further complicates the landscape, creating a dual risk where the same tools designed to enhance productivity can be weaponized. More than three-quarters of organizations (76%) report some level of inappropriate or unauthorized GenAI use, with the technology (40%), government (38%), and financial services (32%) sectors experiencing the highest levels.
Regional variations are also revealing: in the Middle East, unauthorized GenAI use ranks as the top insider concern (31%), reflecting both rapid AI adoption and governance gaps. Globally, the combination of internal access and AI capabilities is producing threats that evade traditional controls and demand more advanced behavioral detection.
Most insider threat programs still fail at detection
While 88% of organizations claim to have insider threat programs, the majority lack the behavioral analytics necessary for early anomaly detection. Only 44% use User and Entity Behavior Analytics (UEBA), a fundamental capability for identifying insider threats. Many continue to rely on Identity and Access Management (IAM), security training, Data Loss Prevention (DLP), and Endpoint Detection and Response (EDR) — tools that provide visibility but lack the behavioral context needed to identify subtle or emerging risks.
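To make the distinction concrete, the following is a minimal, illustrative sketch of the kind of baseline-and-deviation scoring that behavioral analytics (UEBA) tools perform and that access-centric tools like IAM or DLP do not. The function name, data shapes, and the z-score threshold are assumptions for illustration only, not Exabeam's implementation.

```python
from statistics import mean, stdev

def flag_anomalies(baseline, observed, threshold=3.0):
    """Flag users whose current activity deviates sharply from their own baseline.

    baseline: dict mapping user -> list of historical daily event counts
    observed: dict mapping user -> today's event count
    Returns the set of users whose count exceeds mean + threshold * stdev.
    """
    flagged = set()
    for user, history in baseline.items():
        mu = mean(history)
        sigma = stdev(history)
        if sigma == 0:
            continue  # no historical variation; z-score is undefined
        if (observed.get(user, 0) - mu) / sigma > threshold:
            flagged.add(user)
    return flagged

# "alice" normally touches ~10 files a day, so 250 is a strong outlier;
# "bob" routinely handles ~100, so 104 is unremarkable for him.
history = {
    "alice": [9, 11, 10, 12, 8, 10, 11],
    "bob": [100, 110, 95, 105, 98, 102, 101],
}
today = {"alice": 250, "bob": 104}
print(flag_anomalies(history, today))  # prints {'alice'}
```

The point of the sketch is that the same absolute volume can be normal for one identity and anomalous for another; rule-based controls that apply a single static threshold to everyone miss exactly the subtle, per-user deviations described above.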
AI adoption is widespread: 97% of organizations use some form of AI in their insider threat tools, yet governance and operational readiness lag behind. Over half of executives believe AI tools are fully deployed, while managers and analysts report that many are still in pilot or evaluation phases. Security teams continue to face persistent challenges: privacy-related pushback, tool fragmentation, and difficulty interpreting user intent remain significant blind spots.
Steve Kirkwood, CISO at Exabeam, said: “AI has added a layer of speed and sophistication to insider activity that traditional defenses were never designed to detect. Security teams are leveraging AI to uncover these evolving threats, but without strong governance or clear oversight, it’s a race they struggle to win. This paradigm shift demands a fundamentally new approach to insider threat defense.”
Closing the detection gap in insider threats
As insider threats accelerate under the influence of AI, identity misuse, and behavioral blind spots, success will favor organizations that align leadership priorities with operational realities. Progress requires moving beyond surface-level compliance toward context-aware strategies that accurately distinguish between human and AI-driven activity and foster collaboration across teams to close visibility gaps.
Bridging this gap demands more than policy adjustments. It requires leadership engagement, cross-functional collaboration, and governance models that keep pace with the rapid evolution of AI. Success will be defined by the ability to shorten detection and response times, reduce the “window of opportunity” for insider actions, and continuously adapt strategies as threats evolve.