Every conversation about AI regulation focuses on models and data. But regulators are quietly zeroing in on something more fundamental: who is running those models, triggering workflows, and accessing regulated data-including your AI agents.
By August 2026, treating AI agents as anonymous scripts or "just automation" is no longer just technical debt-it's a regulatory liability. This article unpacks what the EU AI Act and upcoming frameworks actually require for non-human identities, why SSO-only approaches break down, and how identity governance must adapt.
The 2026 enforcement moment: AI shifts from experiments to regulated infrastructure
The EU AI Act moved from slides to reality: it's now law-with a staged rollout.
The AI Act entered into force on August 1, 2024, with most rules applying from August 2, 2026 (glocertinternational.com).
For most enterprises, focus on:
- August 2, 2026 - obligations for high-risk AI systems in Annex III (employment, credit, essential services, etc.) take effect (aiacto.eu)
- Violations can trigger fines up to €35 million or 7% of global annual turnover (artificialintelligenceact.eu)
And regulators aren't stopping there:
- NIS2: Member States must transpose cybersecurity rules by October 17, 2024; penalties reach €10 million or 2% of global turnover for essential entities (puppet.com)
- DORA (Regulation (EU) 2022/2554): applies January 17, 2025; requires event logging for access control and identity management in financial entities (systemic-rm.com)
- HIPAA Security Rule: mandates unique user IDs, access controls, and audit trails for systems handling electronic protected health information (brickergraydon.com)
Different acronyms, same principle: regulators expect evidence of who (or what) accessed what, when, and under which policy. That includes AI agents, bots, service accounts-any non-human identity involved.
AI regulations are quietly redefining "identity"
Many still define identity as people logging in. Regulators don't.
The AI Act speaks to providers and operators of AI, risk management, and logging. NIS2 and DORA drive identity management, access controls, and tamper-proof logs. HIPAA and CMMC focus on unique IDs and auditable access for any system touching protected data.
It doesn't matter whether the action was taken by:
- A human using SSO
- A headless automation with an API token
- An AI agent acting across Jira, GitHub, Slack, or your EMR
To regulators, these are all identities:
- They access sensitive data
- Can change system state
- Can cause incidents
"Non-human identity" is no longer a niche IAM concern-it's a compliance object:
- AI agents acting for users
- LLM copilots with tokens
- RPA bots, integration users in SaaS
- Service accounts behind MLOps
If you can't answer "Which AI agents exist, what can they do, and who owns them?", you face an identity governance gap-not just an AI governance gap.
Why SSO-only and manual controls fail AI agents
Most high-growth companies rely on:
- SSO for the 20% of apps that are easy to integrate
- Manual provisioning for the other 80%
- Ad-hoc scripts and API keys for bots and agents
That suffices for human logins. It fails the moment auditors chase AI agent actions end-to-end.
The structural gaps
No first-class identity for agents
Agents often run under:
- Shared "automation" accounts
- Long-lived, broad-scope tokens
- Local service accounts outside SSO
No fine-grained visibility into agent access
Even knowing the app, you rarely track:
- Which repos the agent touches in GitHub
- Which Slack channels it reads
- Which patient or financial records it queries
No continuous evidence trail
Logs are:
- Scattered across apps
- Never unified under one non-human identity
- Not reviewable alongside human access
Rubber-stamp reviews, not real decisions
Human access reviews already verge on "spreadsheet theater." With agents, managers approve identities they don't even recognize.
Bringing human-only IGA to AI-driven identity sprawl is like bringing a knife to a gunfight. You check boxes-but you can't control risk.
What regulators will actually ask you to prove (for AI agents)
Read the AI Act and its peers. Three themes repeat: traceability, accountability, proportionality.
1. Traceability
High-risk AI under the AI Act requires logs, technical docs, post-market monitoring-showing actual system behavior (digital-strategy.ec.europa.eu).
For AI agents, that means:
- Linking an agent's action in SaaS to:
  - A unique non-human identity
  - The policy granting its permission
  - A human owner responsible
If you can't reconstruct "this agent closed a case, changed this record, triggered this payment" from immutable logs, you're exposed under the AI Act, DORA, or HIPAA.
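To make "immutable logs" concrete, here is a minimal sketch (illustrative only, not any specific platform's implementation) of a hash-chained audit record that ties every agent action to a unique non-human identity, the granting policy, and a human owner. Tampering with any earlier entry invalidates every later hash:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_event(log, *, agent_id, owner, policy_id, action, resource):
    """Append a hash-chained audit event to an append-only log."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,   # unique non-human identity
        "owner": owner,         # accountable human or team
        "policy_id": policy_id, # policy granting the permission
        "action": action,
        "resource": resource,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(event, sort_keys=True).encode()
    event["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(event)
    return event

def verify_chain(log):
    """Recompute every hash; True only if no entry was altered."""
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != e["hash"]:
            return False
        prev = e["hash"]
    return True
```

With records shaped like this, "this agent closed a case, changed this record, triggered this payment" is reconstructable and tamper-evident.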
2. Accountability
NIS2 and DORA explicitly make boards and management responsible for ICT risk and identity governance (doragrc.com).
So, for non-human identities:
- Every agent must have an owner (team or role)-not just a technical maintainer
- Decommissioning must kill agent tokens, service accounts, and app entitlements-not just offboard a user
3. Proportionality (least privilege in practice)
The AI Act is risk-driven. HIPAA, DORA, and CMMC use their own language-all converge on least privilege, separation of duties, and tighter controls for critical systems (quickblox.com).
For agents, proportionality means:
- Narrow API scopes (e.g., read-only)
- Time-bound, just-in-time elevation for sensitive actions
- Explicit separation of "suggest" vs. "execute" powers
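As a sketch of what proportionality can look like in code (the grant model and all names are hypothetical), each permission is scoped to a single resource, and write access carries an expiry so elevation is time-bound:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical grant table: every permission names one resource,
# and any write grant expires automatically (just-in-time elevation).
GRANTS = {
    "triage-agent": [
        {"resource": "github:org/app-repo", "action": "read",
         "expires": None},
        {"resource": "jira:SUP", "action": "write",
         "expires": datetime.now(timezone.utc) + timedelta(hours=1)},
    ],
}

def is_allowed(agent_id, resource, action, now=None):
    """Deny by default; allow only an unexpired, exactly-matching grant."""
    now = now or datetime.now(timezone.utc)
    for g in GRANTS.get(agent_id, []):
        if g["resource"] == resource and g["action"] == action:
            if g["expires"] is None or now < g["expires"]:
                return True
    return False
```

The deny-by-default check is the point: anything not explicitly granted-wrong repo, wrong action, expired window-fails closed.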
SSO-only, manual, and complete IGA: how they fare for AI agents
Many assume "we have SSO and logs-so we're fine." Here's how that aligns with 2026 requirements:
| Control area | SSO-only / manual reality | 2026 regulatory expectation | What complete identity governance for humans + AI agents enables |
|---|---|---|---|
| Inventory of non-human identities | Scattered: bots and agents hidden in apps/repos | Central view of all operators | Unified inventory: humans, AI agents, service accounts, contractors-all in one system |
| Access model | Broad roles, shared accounts, long-lived tokens | Role- and risk-based, least-privilege centric | Fine-grained entitlements tied to policies/workflows |
| Evidence & logs | App-level logs, inconsistent, hard to join | Immutable, tamper-proof logs of access/activity | Bank-grade encryption, immutable audit logs with identity context, auditor-ready |
| Lifecycle (create/change/delete) | Manual, decentralized, especially for agents | Documented, repeatable, demonstrable | Policy-based automations: provision/change/deprovision across all identities/apps |
| Cross-framework coverage | Fragmented by team/framework | Single, mapped controls for AI Act, NIS2, DORA, HIPAA, SOC 2, CMMC | Unified controls reused as audit evidence |
The short version: if your governance only covers humans in SCIM-compatible apps, you're out of step with regulator-defined risk.
Building AI agent identity governance that stands up in 2026 audits
You don't need a "bespoke AI compliance stack." You need complete identity governance that treats AI agents as first-class citizens.
Here's what it takes.
1. Create a single inventory of all identities
For every AI agent, bot, or service account, track:
- Purpose (problem it solves, business owner)
- Data domains (what systems/data it touches)
- Technical footprint (SaaS/on-prem/cloud accounts, keys, credentials)
- Compliance impact (HIPAA, PCI, critical infrastructure, customer data, etc.)
Spreadsheets break once you pass more than a handful of agents.
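A minimal inventory record might look like this sketch-the fields mirror the list above, and every name and value is illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class NonHumanIdentity:
    """One inventory row per AI agent, bot, or service account."""
    identity_id: str
    kind: str                  # "ai_agent" | "rpa_bot" | "service_account"
    purpose: str               # problem it solves
    business_owner: str
    technical_owner: str
    data_domains: list = field(default_factory=list)  # e.g. ["phi"]
    credentials: list = field(default_factory=list)   # references, never secrets
    frameworks: list = field(default_factory=list)    # e.g. ["HIPAA", "DORA"]

inventory = [
    NonHumanIdentity(
        identity_id="agent-incident-triage",
        kind="ai_agent",
        purpose="Summarize and route incoming incidents",
        business_owner="head-of-support",
        technical_owner="platform-team",
        data_domains=["tickets"],
        credentials=["vault://tokens/triage"],
        frameworks=["NIS2"],
    ),
]

# Simple compliance query: which identities touch health data?
phi_agents = [i.identity_id for i in inventory if "phi" in i.data_domains]
```

Once inventory is structured data rather than a spreadsheet, questions like "which agents touch PHI?" become one-line queries.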
2. Assign ownership and accountability
Every AI agent needs:
- A business owner (director/manager) responsible for its purpose
- A technical owner (team) responsible for operations
- Onboarding/offboarding rules: when an owner exits, the agent is reviewed or retired
That's how you answer "Who approved this agent's access?"
3. Enforce least privilege at a fine-grained level
Least privilege for AI agents means:
- Which projects/repos/queues are accessible?
- Can the agent change production data or only propose?
- Is write access time-limited for specific workflows?
Iden drives governance down to the resource level-channels, repos, projects-not just groups or roles.
This detail is crucial when auditors ask why an AI incident-response agent only reads (not writes) in sensitive systems.
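One way to encode the "propose vs. execute" split (a hypothetical sketch, not a prescribed design) is to let agents only queue proposals, with application gated on a distinct human approver:

```python
# Agents may only queue proposals; applying one requires a human
# approver who is not the proposing agent itself.
PENDING = []

def propose_change(agent_id, resource, change):
    """The agent's only power: record what it wants to do."""
    proposal = {"agent": agent_id, "resource": resource,
                "change": change, "approved_by": None}
    PENDING.append(proposal)
    return proposal

def approve_and_apply(proposal, approver, apply_fn):
    """Execution requires a separate identity's explicit approval."""
    if approver is None or approver == proposal["agent"]:
        raise PermissionError("an agent cannot approve its own change")
    proposal["approved_by"] = approver
    return apply_fn(proposal["resource"], proposal["change"])
```

The separation also produces exactly the evidence an auditor wants: every applied change carries the name of the human who approved it.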
4. Automate lifecycle: joiner, mover, leaver-including agents
Joiner/mover/leaver flows apply to AI agents, too.
This means:
- Creation - agents are provisioned by policy, not scripts
- Change - scope expansions require justification and approval
- Retirement - deprecated agents lose all access/tokens/accounts everywhere, not just in a few apps
Iden applies lifecycle automation to human and non-human identities, across apps-even without SCIM or APIs.
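The retirement step can be sketched like this-`ConnectedApp` is a hypothetical stand-in for one system holding agent credentials; the point is that decommissioning revokes everywhere and returns evidence of what was revoked:

```python
class ConnectedApp:
    """Stand-in for one SaaS/on-prem system holding agent credentials."""
    def __init__(self, name, grants):
        self.name = name
        self.grants = grants  # identity_id -> list of tokens/entitlements

    def revoke_all(self, identity_id):
        """Remove and return everything this identity held here."""
        return self.grants.pop(identity_id, [])

def retire_agent(identity_id, apps):
    """Leaver flow for an agent: revoke across all connected systems
    and return an evidence list for the audit trail."""
    evidence = []
    for app in apps:
        for item in app.revoke_all(identity_id):
            evidence.append({"system": app.name, "revoked": item})
    return evidence
```

Iterating over every connected system-not just the few with SCIM support-is what distinguishes real retirement from partial offboarding.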
5. Move access reviews from spreadsheets to real-time decisions
Access reviews for agents can't be a yearly rush. They must be:
- Continuous (triggered by changes-risk, usage, ownership)
- Contextual (enriched with actual agent activity)
- Actionable (one-click revoke/right-size across systems)
Agentic workflows (AI-driven, autonomous workflows) let the platform surface risky patterns (e.g., idle agents with excessive access) and propose precise fixes.
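A continuous review check for the "idle agents with excessive access" pattern might look like this hypothetical sketch, where any agent holding write scope with no recent activity gets flagged with a proposed fix:

```python
from datetime import datetime, timedelta, timezone

def flag_idle_overprivileged(agents, now=None, idle_days=30):
    """Surface agents that hold write access but haven't acted
    within idle_days - a risky pattern continuous review should raise."""
    now = now or datetime.now(timezone.utc)
    flagged = []
    for a in agents:
        idle = (now - a["last_activity"]) > timedelta(days=idle_days)
        if idle and "write" in a["scopes"]:
            flagged.append({"id": a["id"],
                            "recommendation": "revoke write or retire"})
    return flagged
```

Running a check like this on every change event-rather than once a year-is what turns access reviews into decisions instead of spreadsheet theater.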
How complete identity governance platforms close the non-human gap
Don't bolt AI-specific tools atop fragmented identity data. Fix the core: govern every identity-human and machine-across every app.
Iden is built for that foundation.
Iden automates provisioning across 175+ applications and manages human and non-human identities-including AI agents and service accounts-in a single platform.
Key AI agent governance capabilities:
- Universal coverage, no SCIM tax
  - Integrates with SCIM, API, and non-API apps-even the long-tail tools where agents hide
  - Standard plan support-no forced enterprise upgrades for agent automation
- Unified identity view for humans and machines
  - One dashboard: employees, contractors, AI agents, service accounts
- Fine-grained controls per regulatory requirements
  - Channel-, repo-, project-level entitlements so least privilege becomes reality
- Immutable audit logs, continuous governance
  - Bank-grade encryption and immutable trails provide real-time regulatory proof
- Agentic workflows for compliance
  - AI-driven, autonomous workflows reconcile identities, flag anomalies, and enforce policy automatically
Don't stack parallel tools for AI-extend your identity governance to cover this new species of identities with the automation and rigor human users always needed.
Actionable next steps before August 2026
IT leaders, CISOs, compliance owners-practical next moves for lean teams:
Within 30 days:
- Inventory AI agents, bots, service accounts in critical systems
- Flag those touching:
  - EU citizens (AI Act, NIS2)
  - Financial services (DORA)
  - Defense supply chains (CMMC)
  - Health data (HIPAA)
- Identify agents bypassing SSO (API keys, app-local accounts)
Within 90 days:
- Assign business and technical ownership for each non-human identity
- Pilot fine-grained, policy-driven access for select agents in 3-5 key apps
- Implement central, immutable audit logs for agent actions
Before August 2, 2026:
- Roll out complete identity governance-across your SaaS stack, including non-SCIM/non-API apps
- Automate joiner/mover/leaver flows and access reviews for all identities
- Map controls to AI Act, NIS2, DORA, HIPAA, SOC 2, CMMC-be ready to "show," not just "promise"
Treating AI agents as first-class identities means clearing audits faster and operating with greater agility than competitors still cobbling together scripts, tickets, and hopes.
Frequently Asked Questions
How does the EU AI Act actually affect our internal AI agents?
If your agents power high-risk Annex III AI systems-employment, credit, essential services-the AI Act's risk management, logging, documentation, and oversight apply to your entire system, not only external providers (aiacto.eu). You must clearly show which agents exist, their access, and how governance and logging work.
Do we really need identity governance for non-human identities, or is traditional IAM enough?
Traditional IAM/SSO handles authentication and groups for human users in part of your app stack. Once AI agents, bots, and service accounts take actions across SaaS-often without SCIM or usable APIs-you need complete identity governance:
- Inventory all non-human identities
- Control entitlements at a granular level
- Automate lifecycle and access reviews-systemwide
Otherwise you can't deliver the audit-ready evidence regulators want.
How do NIS2 and DORA intersect with the EU AI Act for AI agents?
NIS2 and DORA don't say "AI agents"-they require identity management, access control, and logging wherever critical functions are supported (digital-operational-resilience.net). If an agent can affect trading, payments, or incident response, it's in scope. The AI Act then layers on AI-specific requirements, assuming foundational identity governance is in place.
We're a lean IT team. How realistic is it to do all this before 2026?
If you attempt manual, per-app configs, it's unrealistic. That's where automation-first IGA platforms like Iden deliver: plug-and-play connectors, zero-engineering onboarding, policy-driven workflows, stack-wide.
Start with your high-risk apps/agents: you'll see wins-fewer tickets, stronger offboarding, better audit posture-without having to build an IAM team.
Is this about AI governance or identity governance?
Both. But AI governance without identity governance is theater. You can have great AI policies and registers, but if:
- You don't know which agents have which actions, and
- You can't right-size or revoke access quickly,
you're trusting, not controlling.
Treat AI agents as a new species of identity. Govern them continuously and automatically-just like your people. That's how you turn 2026 from a compliance scramble into a competitive edge.