In my time at Ivanti, I've witnessed firsthand how AI acts as a force multiplier across enterprise organizations. When deployed strategically, AI accelerates decision-making and operational execution at scale in a way that teams simply can't sustain manually. However, without clear and enforceable AI guardrails, implementing AI opens organizations up to serious new risks.

Ivanti’s 2026 State of Cybersecurity Report highlights a growing disconnect I’ve observed across the industry: optimism about AI is rising, yet governance and preparedness are not keeping pace. Currently, just 50% of organizations say they have formal guardrails in place to guide the deployment and operation of AI systems and agents.

As adoption accelerates faster than governance, I'm seeing organizations face growing internal risks — shadow AI use, inconsistent data quality, biased outputs and uneven employee training to name a few.

From where I sit — spanning legal, security and HR — I can tell you this: AI governance isn't an abstract compliance exercise. It's a core requirement for trust, accountability and control.

The state of enterprise AI: a risky Wild West

Responsible AI at scale requires deliberate governance with enforceable guardrails for all employees. Ignore that, and shadow AI use will continue to grow. Our 2025 Technology at Work research report revealed that 46% of office workers use AI tools that aren't employer-provided. Even more concerning, nearly a third of employees (32%) keep their use of AI tools at work a secret from their employers.

Too many organizations are deploying AI without an overarching governance framework, and the consequences of this approach are real. Organizations can expose sensitive data. They can violate regulatory obligations. They can erode market trust. A team deploys an AI platform without proper guardrails, and suddenly you have biased outputs or degraded performance. Without human oversight, AI systems generate inaccurate recommendations or trigger inappropriate actions, creating dangerous false confidence in AI-driven outcomes.

What is an AI governance framework?

An AI governance framework is the blueprint for how we design, deploy and oversee AI systems across their lifecycle. Its purpose is to align AI use with business objectives, legal obligations and enterprise risk tolerance — with transparency and accountability built in from day one.

At Ivanti, our framework clarifies:

  • Who is accountable for AI decisions and outcomes
  • How risks are identified, assessed and mitigated
  • What guardrails must be in place before AI systems go live
  • How AI performance, behavior and impact are monitored over time

In practice, governance enables scale. Clear frameworks let us move beyond fragmented pilots and operationalize AI across the enterprise. Without it, adoption stalls.

Our position is simple: governance doesn't block innovation. It makes innovation sustainable.

3 layers of AI guardrails in an AI governance framework

As part of Ivanti’s AI Governance Council, I've learned that a comprehensive framework requires multiple layers of guardrails. Each addresses a different category of risk. Together, they form the foundation for safe, reliable AI use.

Technical guardrails

Technical guardrails keep AI systems within predefined safety and operational parameters.

Data guardrails: Data guardrails protect data integrity and ensure AI systems are trained and operated on trusted inputs. These guardrails are typically owned by data and security teams, who establish standards for data sourcing, validation, access controls and ongoing quality monitoring. Poor data quality remains a major barrier to effective AI deployment, particularly in security, where incomplete, biased or unvalidated data can skew outcomes and degrade detection accuracy over time.

Model guardrails: Model guardrails address robustness, explainability and bias detection to ensure AI systems behave as intended over time. These guardrails are typically designed by security, data science and platform teams, who define testing requirements for drift, bias and performance degradation before deployment and continuously thereafter, especially as models are retrained or exposed to changing operational data.

Application and output guardrails: Application and output guardrails validate AI-generated outputs, particularly in decision-support or automated response scenarios. These guardrails are typically implemented by security and operations teams, who define approval thresholds, escalation paths and human-in-the-loop controls. Without them, systems may generate inaccurate recommendations or take inappropriate actions, reinforcing false confidence in automation.
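To make the idea of approval thresholds and human-in-the-loop controls concrete, here is a minimal illustrative sketch. The threshold value, risk scores and function names below are hypothetical assumptions for illustration, not part of any Ivanti product or framework:

```python
# Minimal sketch of an application/output guardrail: AI-proposed actions
# whose risk score meets or exceeds a threshold are escalated to a human
# reviewer rather than executed automatically.
# APPROVAL_THRESHOLD is an assumed value; in practice it would be tuned
# to the organization's risk appetite.

APPROVAL_THRESHOLD = 0.7

def route_action(action: str, risk_score: float) -> str:
    """Decide how an AI-proposed action should be handled."""
    if risk_score >= APPROVAL_THRESHOLD:
        # High-risk: human-in-the-loop approval required before execution.
        return f"ESCALATE: '{action}' requires human approval (risk={risk_score:.2f})"
    # Low-risk: within pre-approved operating limits.
    return f"AUTO-EXECUTE: '{action}' within approved limits (risk={risk_score:.2f})"

print(route_action("quarantine endpoint", 0.85))
print(route_action("flag login anomaly", 0.30))
```

Real implementations layer in audit logging, escalation paths and policy-driven thresholds per action type, but the core pattern is the same: the guardrail decides who acts, the human or the machine.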

Infrastructure guardrails: Infrastructure guardrails protect the systems that host and support AI workloads and are typically owned by IT and security teams. These teams enforce secure deployment practices, access controls, logging and auditability across cloud and on-prem environments, while ensuring AI services are integrated into existing security monitoring and incident response workflows.

Ethical guardrails

Ethical guardrails align AI behavior with organizational standards and define accountability when AI affects people, customers or business outcomes.

Ivanti’s AI Governance Council plays a central role here. We navigate the “gray areas” of autonomous agents. We bring together legal, security, HR and business leaders to define acceptable use, escalation paths and accountability. When should humans intervene? How are decisions audited? Who ultimately owns the outcome when things go wrong?

When that governance is missing, the consequences escalate quickly.

Recent incidents show the cost of unclear ethical guardrails. For example, Grok, an AI chatbot developed by xAI, drew widespread criticism after generating nonconsensual and inappropriate images of real individuals. The failure was not only technical; it was a governance failure, rooted in ethical boundaries that were never sufficiently defined.

The same issue arises inside enterprises. When AI blocks a user account, flags an employee, or restricts customer access, we must know who owns the decision if it's wrong. Whether AI is used in security, HR or customer-facing systems, the ethical principles are consistent. Governance ensures accountability is defined before automation causes harm.

Regulatory and legal guardrails

Regulatory and legal guardrails ensure AI use complies with evolving global regulations, sector rules and data protection laws. Because these requirements change rapidly, teams can't operate in functional silos.

Legal must lead AI governance early. At Ivanti, we work closely with security and IT to interpret obligations and translate them into enforceable controls. Success depends on aligning from the outset to ensure compliance requirements are embedded into AI design and deployment.

Recent incidents show why regulatory guardrails cannot be an afterthought. European and UK regulators found that Clearview AI’s facial recognition operations, built by scraping billions of images, were subject to privacy laws such as GDPR, and they took enforcement actions over the resulting violations. The case illustrates the legal risk organizations face when governance doesn’t align with regulatory expectations.

The lesson is clear. Legal and product development teams must work together early to embed regulatory obligations into AI design, deployment and operations. Governance ensures compliance requirements are enforced by default, not retroactively after regulatory scrutiny begins.

AI governance vs. AI risk management

Governance and risk management are closely related but distinct. Here's my take: governance sets the rules and accountability structures. Risk management focuses on identifying and mitigating specific AI-related threats throughout the system lifecycle.

Common AI risks include data leakage, bias, unreliable outputs, over-reliance on automated decisions and security weaknesses introduced through unmanaged tools or integrations. As AI systems become more autonomous, these risks compound.

Integrating AI risk mitigation into governance ensures risks are not addressed in isolation. We evaluate them alongside business impact, operational resilience and organizational risk appetite. This lets us prioritize controls where they matter most and avoid blanket restrictions that slow progress without reducing risk.

Challenges in scaling AI governance

Many organizations start with narrow AI pilots in individual teams. Scaling to enterprise-wide adoption introduces new challenges.

Silos are the fastest way to undermine governance. Security, IT, legal and business teams often operate on conflicting assumptions. We need shared ownership across teams. As my colleague Sterling Parker explains, a successful vision requires involving stakeholders across the business to prevent "AI sprawl."

This transition demands a human-centric operating model. Our governance body clearly defines where AI can amplify existing roles, where additional training is required and where human oversight remains essential. Continuous feedback from employees helps ensure AI is applied where it delivers value without creating gaps in accountability or trust. We prioritize upskilling to replace fear with active adoption.

Our cybersecurity research shows that mature organizations approach these challenges differently. Organizations that rank themselves as the most advanced in cybersecurity (Level 4s) are nearly 3x as likely to use comprehensive AI guardrails as organizations with an intermediate level of cybersecurity maturity (Level 2s).

They invest early in governance, align leadership around shared frameworks and treat AI as a strategic capability rather than a collection of tools. These organizations are far more likely to operationalize AI across the enterprise while maintaining trust and control.

How to implement responsible AI

Building the framework is table stakes. Execution is where AI governance lives.

Start with clear policies on acceptable use and escalation. These must be practical and tied directly to your existing risk structures.

Governance must be accessible. Responsible AI is an enterprise-wide mandate, not a specialist silo. Targeted training ensures every user understands their role in upholding these guardrails.

Take a governed approach to AI enablement. “Governed enablement” assumes AI is already in use across the enterprise and defines where and how it can operate safely. It requires continuous monitoring and enforcement to ensure systems remain aligned with policy as usage and risks evolve. This is an ongoing discipline, not a one-time project.

The future of responsible AI starts now

AI is reshaping how organizations operate at a pace that cannot be ignored. The question is no longer whether to adopt it, but how to scale it safely. Organizations with strong governance scale without sacrificing trust. Those that delay widen the gap between threat and preparedness.

At Ivanti, we're committed to building AI governance that enables innovation while protecting what matters most — our people, our customers, and our operations. This is critical work and the time to act is now.

To learn more about the AI deployment gap and how leading organizations are closing it, explore Ivanti's 2026 State of Cybersecurity Report.