<?xml version="1.0" encoding="utf-8"?><rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title>Ivanti Blog: Posts by </title><description /><language>en</language><atom:link rel="self" href="https://www.ivanti.com/en-gb/blog/authors/brooke-johnson/rss" /><link>https://www.ivanti.com/en-gb/blog/authors/brooke-johnson</link><item><guid isPermaLink="false">ffb1e62b-07be-4e85-b9c8-ab0e55a51105</guid><link>https://www.ivanti.com/en-gb/blog/ai-governance-framework-responsible-ai-guardrails</link><atom:author><atom:name>Brooke Johnson</atom:name><atom:uri>https://www.ivanti.com/en-gb/blog/authors/brooke-johnson</atom:uri></atom:author><title>How to Implement an AI Governance Framework Using Safe, Ethical and Reliable AI Guardrails</title><description>&lt;p&gt;In my time at Ivanti, I've witnessed firsthand how AI &lt;a href="https://www.ivanti.com/en-gb/company/artificial-intelligence"&gt;acts as a force multiplier across enterprise organizations&lt;/a&gt;. When deployed strategically, AI accelerates decision-making and operational execution at scale in a way that teams simply can't sustain manually. However, without clear and enforceable AI guardrails, implementing AI opens organisations up to serious new risks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ivanti.com/resources/research-reports/state-of-cybersecurity-report" target="_blank" rel="noopener"&gt;Ivanti’s 2026 State of Cybersecurity Report&lt;/a&gt; highlights a growing disconnect I’ve observed across the industry: optimism about AI is rising, yet governance and preparedness are not keeping pace. &lt;b&gt;Currently, just 50% of organisations say they have formal guardrails in place to guide the deployment and operation of AI systems and agents.&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;As adoption accelerates faster than governance, I'm seeing organisations face growing internal risks — shadow AI use, inconsistent data quality, biased outputs and uneven employee training, to name a few.&lt;/p&gt;

&lt;p&gt;From where I sit — spanning legal, security and HR — I can tell you this: AI governance isn't an abstract compliance exercise. It's a core requirement for trust, accountability and control.&lt;/p&gt;

&lt;h2&gt;The state of enterprise AI: a risky Wild West&lt;/h2&gt;

&lt;p&gt;Responsible AI at scale requires deliberate governance with enforceable guardrails for all employees. Ignore that, and shadow AI use will continue to grow. Our &lt;a href="https://www.ivanti.com/resources/research-reports/tech-at-work" target="_blank" rel="noopener"&gt;2025 Technology at Work research report&lt;/a&gt; revealed that 46% of office workers use AI tools that aren't employer-provided. Even more concerning, nearly a third of employees (32%) keep their use of AI tools at work a secret from their employers.&lt;/p&gt;

&lt;div class="flourish-embed flourish-chart" data-src="visualisation/20628247"&gt;&lt;/div&gt;

&lt;p&gt;Too many organisations are deploying AI without overarching governance, and the consequences of this approach are real. They can expose sensitive data, violate regulatory obligations and erode market trust. A team deploys an AI platform without proper guardrails, and suddenly you have biased outputs or degraded performance. Without human oversight, AI systems generate inaccurate recommendations or trigger inappropriate actions. That creates dangerous false confidence in AI-driven outcomes.&lt;/p&gt;

&lt;h2&gt;What is an AI governance framework?&lt;/h2&gt;

&lt;p&gt;An AI governance framework is the blueprint for how we design, deploy and oversee AI systems across their lifecycle. Its purpose is to align AI use with business objectives, legal obligations and enterprise risk tolerance — with transparency and accountability built in from day one.&lt;/p&gt;

&lt;p&gt;At Ivanti, our framework clarifies:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;&lt;b&gt;Who is accountable&lt;/b&gt; for AI decisions and outcomes&lt;/li&gt;
	&lt;li&gt;&lt;b&gt;How risks are identified&lt;/b&gt;, assessed and mitigated&lt;/li&gt;
	&lt;li&gt;&lt;b&gt;What guardrails must be in place&lt;/b&gt; before AI systems go live&lt;/li&gt;
	&lt;li&gt;&lt;b&gt;How AI performance, behaviour and impact&lt;/b&gt; are monitored over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In practice, governance enables scale. Clear frameworks let us move beyond fragmented pilots and operationalise AI across the enterprise. Without it, adoption stalls.&lt;/p&gt;

&lt;p&gt;Our position is simple: governance doesn't block innovation. It makes innovation sustainable.&lt;/p&gt;

&lt;h2&gt;3 layers of AI guardrails in an AI governance framework&lt;/h2&gt;

&lt;p&gt;As part of Ivanti’s AI Governance Council, I've learned that a comprehensive framework requires multiple layers of guardrails. Each addresses a different category of risk. Together, they form the foundation for safe, reliable AI use.&lt;/p&gt;

&lt;h4&gt;Technical guardrails&lt;/h4&gt;

&lt;p&gt;Technical guardrails keep AI systems within predefined safety and operational parameters.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Data guardrails&lt;/b&gt;: Data guardrails protect &lt;a href="https://www.ivanti.com/en-gb/use-cases/data-protection-application-security"&gt;data integrity&lt;/a&gt; and ensure AI systems are trained and operated on trusted inputs. These guardrails are typically owned by data and security teams, who establish standards for data sourcing, validation, &lt;a href="https://www.ivanti.com/en-gb/products/network-access-control"&gt;access controls&lt;/a&gt; and ongoing quality monitoring. Poor data quality remains a major barrier to effective AI deployment, particularly in security, where incomplete, biased or unvalidated data can skew outcomes and degrade detection accuracy over time.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Model guardrails:&lt;/b&gt; Model guardrails address robustness, explainability and bias detection to ensure AI systems behave as intended over time. These guardrails are typically designed by security, data science and platform teams, who define testing requirements for drift, bias and performance degradation before deployment and continuously thereafter, especially as models are retrained or exposed to changing operational data.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Application and output guardrails:&lt;/b&gt; Application and output guardrails validate AI-generated outputs, particularly in decision-support or automated response scenarios. These guardrails are typically implemented by security and operations teams, who define approval thresholds, escalation paths and human-in-the-loop controls. Without them, systems may generate inaccurate recommendations or take inappropriate actions, reinforcing false confidence in automation.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Infrastructure guardrails:&lt;/b&gt; Infrastructure guardrails protect the systems that host and support AI workloads and are typically owned by IT and security teams. These teams enforce secure deployment practices, access controls, logging and auditability across cloud and on-prem environments, while ensuring AI services are integrated into existing security monitoring and &lt;a href="https://www.ivanti.com/en-gb/products/automation"&gt;incident response workflows&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;Ethical guardrails&lt;/h4&gt;

&lt;p&gt;Ethical guardrails align AI behaviour with organisational standards and define accountability when AI affects people, customers or business outcomes.&lt;/p&gt;

&lt;p&gt;Ivanti’s AI Governance Council plays a central role here. We navigate the “grey areas” of autonomous agents. We bring together legal, security, HR and business leaders to define acceptable use, escalation paths and accountability. When should humans intervene? How are decisions audited? Who ultimately owns the outcome when things go wrong?&lt;/p&gt;

&lt;p&gt;When that governance is missing, the consequences escalate quickly.&lt;/p&gt;

&lt;p&gt;Recent incidents show the cost of unclear ethical guardrails. For example, Grok, an AI chatbot developed by xAI, &lt;a href="https://www.thetimes.com/uk/technology-uk/article/grok-ai-x-holocaust-survivor-bikini-auschwitz-6kh5ddxh6" rel="noopener" target="_blank"&gt;drew widespread criticism&lt;/a&gt; after generating non-consensual and inappropriate images of real individuals. The failure was not only technical — it was a failure of governance, rooted in ethical boundaries that weren’t sufficiently defined.&lt;/p&gt;

&lt;p&gt;The same issue arises inside enterprises. When AI blocks a user account, flags an employee, or restricts customer access, we must know who owns the decision if it's wrong. Whether AI is used in security, HR or customer-facing systems, the ethical principles are consistent. Governance ensures accountability is defined before automation causes harm.&lt;/p&gt;

&lt;h4&gt;Regulatory and legal guardrails&lt;/h4&gt;

&lt;p&gt;Regulatory and legal guardrails ensure AI use complies with evolving global regulations, sector rules and data protection laws. Because these requirements change rapidly, teams can't operate in functional silos.&lt;/p&gt;

&lt;p&gt;Legal must lead AI governance early. At Ivanti, we work closely with security and IT to interpret obligations and translate them into enforceable controls. Success depends on aligning from the outset to ensure compliance requirements are embedded into AI design and deployment.&lt;/p&gt;

&lt;p&gt;Recent incidents show why regulatory guardrails cannot be an afterthought. European and UK regulators &lt;a href="https://privacyinternational.org/news-analysis/5692/tribunal-confirms-clearview-ai-bound-gdpr" rel="noopener" target="_blank"&gt;confirmed&lt;/a&gt; that Clearview AI’s facial recognition operations, built on scraping billions of images, were subject to privacy laws such as GDPR, and took enforcement action over the violations. The case shows the legal risk organisations face when governance doesn’t align with regulatory expectations.&lt;/p&gt;

&lt;p&gt;The lesson is clear. Legal and product development teams must work together early to embed regulatory obligations into AI design, deployment and operations. Governance ensures compliance requirements are enforced by default, not retroactively after regulatory scrutiny begins.&lt;/p&gt;

&lt;h2&gt;AI governance vs. AI risk management&lt;/h2&gt;

&lt;p&gt;Governance and &lt;a href="https://www.ivanti.com/resources/research-reports/cybersecurity-risk-management" target="_blank" rel="noopener"&gt;risk management&lt;/a&gt; are closely related but distinct. Here's my take: governance sets the rules and accountability structures. Risk management focuses on identifying and mitigating specific AI-related threats throughout the system lifecycle.&lt;/p&gt;

&lt;p&gt;Common AI risks include data leakage, bias, unreliable outputs, over-reliance on automated decisions and security weaknesses introduced through unmanaged tools or integrations. As AI systems become more autonomous, these risks compound.&lt;/p&gt;

&lt;p&gt;Integrating AI risk mitigation into governance ensures risks are not addressed in isolation. We evaluate them alongside business impact, operational resilience and organisational &lt;a href="https://www.ivanti.com/blog/risk-appetite" target="_blank" rel="noopener"&gt;risk appetite&lt;/a&gt;. This lets us prioritise controls where they matter most and avoid blanket restrictions that slow progress without reducing risk.&lt;/p&gt;

&lt;h2&gt;Challenges in scaling AI governance&lt;/h2&gt;

&lt;p&gt;Many organisations start with narrow AI pilots in individual teams. Scaling to enterprise-wide adoption introduces new challenges.&lt;/p&gt;

&lt;p&gt;Silos are the fastest way to undermine governance. Security, IT, legal and business teams often operate on conflicting assumptions. We need shared ownership across teams. As my colleague Sterling Parker explains, a successful vision requires involving stakeholders across the business to prevent "AI sprawl."&lt;/p&gt;

&lt;p&gt;&lt;object codetype="CMSInlineControl" type="Video"&gt;&lt;param name="platform" value="youtube"&gt;&lt;param name="lang" value="en"&gt;&lt;param name="id" value="GpoZdJeC3Bw"&gt;&lt;param name="cms_type" value="video"&gt;&lt;/object&gt;&lt;/p&gt;

&lt;p&gt;This transition demands a human-centric operating model. Our governance body clearly defines where AI can amplify existing roles, where additional training is required and where human oversight remains essential. Continuous feedback from employees helps ensure AI is applied where it delivers value without creating gaps in accountability or trust. We prioritise upskilling to replace fear with active adoption.&lt;/p&gt;

&lt;p&gt;Our &lt;a href="https://www.ivanti.com/resources/research-reports/state-of-cybersecurity-report" target="_blank" rel="noopener"&gt;cybersecurity research&lt;/a&gt; shows that mature organisations approach these challenges differently. Organisations that rank themselves as the most advanced in cybersecurity (Level 4s) are nearly 3x as likely to use comprehensive AI guardrails as organisations with an intermediate level of cybersecurity maturity (Level 2s).&lt;/p&gt;

&lt;div class="flourish-embed flourish-chart" data-src="visualisation/27433090"&gt;&lt;/div&gt;

&lt;p&gt;They invest early in governance, align leadership around shared frameworks and treat AI as a strategic capability rather than a collection of tools. These organisations are far more likely to operationalise AI across the enterprise while maintaining trust and control.&lt;/p&gt;

&lt;h2&gt;How to implement responsible AI&lt;/h2&gt;

&lt;p&gt;Building the framework is table stakes. Execution is where AI governance lives.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Start with clear policies&lt;/b&gt; on acceptable use and escalation. These must be practical and tied directly to your existing risk structures.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Governance must be accessible.&lt;/b&gt; Responsible AI is an enterprise-wide mandate, not a specialist silo. Targeted training ensures every user understands their role in upholding these guardrails.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Take a governed approach to AI enablement.&lt;/b&gt; “Governed enablement” assumes AI is already in use across the enterprise and defines where and how it can operate safely. It requires continuous monitoring and enforcement to ensure systems remain aligned with policy as usage and risks evolve. This is an ongoing discipline, not a one-time project.&lt;/p&gt;

&lt;h2&gt;The future of responsible AI starts now&lt;/h2&gt;

&lt;p&gt;AI is reshaping how organisations operate at a pace that cannot be ignored. The question is no longer whether to adopt it, but how to scale it safely. Organisations with strong governance scale without sacrificing trust. Those that delay widen the gap between threat and preparedness.&lt;/p&gt;

&lt;p&gt;At Ivanti, we're committed to building AI governance that enables innovation while protecting what matters most — our people, our customers, and our operations. This is critical work and the time to act is now.&lt;/p&gt;

&lt;p&gt;To learn more about the AI deployment gap and how leading organisations are closing it, explore &lt;a href="https://www.ivanti.com/resources/research-reports/state-of-cybersecurity-report" target="_blank" rel="noopener"&gt;Ivanti's 2026 State of Cybersecurity Report&lt;/a&gt;.&lt;/p&gt;
</description><pubDate>Tue, 24 Feb 2026 13:00:02 Z</pubDate></item><item><guid isPermaLink="false">c9b93eca-83b3-46b0-85c8-ea8c6bd3e59c</guid><link>https://www.ivanti.com/en-gb/blog/international-womens-day-2025</link><atom:author><atom:name>Brooke Johnson</atom:name><atom:uri>https://www.ivanti.com/en-gb/blog/authors/brooke-johnson</atom:uri></atom:author><title>Accelerating Action on Gender Equality: A Message from Ivanti’s Brooke Johnson on International Women’s Day</title><description>&lt;p&gt;International Women’s Day is March 8, 2025. This year’s theme is “Accelerate Action.” As things currently stand, &lt;a href="https://www.weforum.org/stories/2024/06/global-gender-gap-2024-what-to-know/" rel="noopener" target="_blank"&gt;data from the World Economic Forum&lt;/a&gt; indicate that we will not reach full gender parity until 2158. That’s roughly five generations from now. I believe that’s five – maybe even six or seven – generations too long.&amp;nbsp;&lt;/p&gt;

&lt;p&gt;I also believe the generations that came before me would agree. Every day, but particularly on International Women's Day, I am so grateful to have grown up surrounded by strong female role models. Even if I didn't realise it at the time, these women shaped my understanding of leadership and possibility.&amp;nbsp;&lt;/p&gt;

&lt;p&gt;The influence of strong women in my life didn't stop after childhood. Far from it. My best friend Beth, whom I met in law school, helped define my approach to career and advocacy. Beth and I bonded initially over academics (and our shared love of shoes), but she quickly became my career counsellor, personal advocate and sometimes therapist. Through her example as an exceptional attorney who effectively prioritises what matters most, she taught me a crucial lesson that I’ve shared often. Still, it’s worth repeating: it’s okay to admit you can’t always “have it all.” Instead, do your best at whatever you choose to take on.&amp;nbsp;&lt;/p&gt;

&lt;p&gt;I want every girl and every woman to feel the support, encouragement and advocacy I felt. That’s not to say it was smooth sailing, particularly given my choice to enter male-dominated fields. It’s not enough to be aware of the gaps and lack of equality for women. We need action.&amp;nbsp;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;That’s why I’m so excited about this year's theme for International Women’s Day, "Accelerate Action.” This theme challenges us to move beyond awareness to create tangible change. According to the &lt;a href="https://www.internationalwomensday.com/" rel="noopener" target="_blank"&gt;International Women’s Day&lt;/a&gt; site, this year’s theme “emphasises the importance of taking swift and decisive steps to achieve gender equality. It calls for increased momentum and urgency in addressing the systemic barriers and biases that women face in both personal and professional spheres.”&amp;nbsp;&lt;/p&gt;

&lt;p&gt;So, let’s talk about some of the ways we’re accelerating action.&amp;nbsp;&lt;/p&gt;

&lt;h2&gt;Connecting to champion change&amp;nbsp;&lt;/h2&gt;

&lt;p&gt;As the initial advocate for Ivanti's Women's Connection group, I've witnessed firsthand how creating spaces for authentic dialogue drives meaningful change. While our content focuses on helping women navigate their career journeys, our group welcomes everyone — regardless of gender identity. This inclusivity strengthens our ability to address the unique challenges women face in the workplace and create solutions that benefit all.&amp;nbsp;&lt;/p&gt;

&lt;p&gt;The objective of our Women's Connection group is straightforward yet powerful: to inspire and foster the growth and development of Ivantians. By connecting women with other women, as well as creating a safe space for men and women to have dialogue about important topics, we're creating a support network that helps group members navigate career challenges and opportunities with greater confidence and clarity.&amp;nbsp;&lt;/p&gt;

&lt;h2&gt;Strength in numbers&amp;nbsp;&lt;/h2&gt;

&lt;p&gt;Numbers tell part of our story of progress. In 2023, women represented 24% of our new hires at Ivanti — matching industry benchmarks. Through focused, intentional action, we've increased that to 31% in 2024. There is more work to be done, but this shows that our intentional actions are paying off. We implemented specific strategies, including:&amp;nbsp;&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;Ensuring at least one female candidate appears on every shortlist when possible.&amp;nbsp;&lt;/li&gt;
	&lt;li&gt;Revising job descriptions to use gender-neutral language, recognising how certain terms like "aggressive" might discourage female applicants.&amp;nbsp;&lt;/li&gt;
	&lt;li&gt;Highlighting benefits that appeal to diverse candidates such as flexible schedules and remote work options.&amp;nbsp;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;The impact of everyday excellence&amp;nbsp;&lt;/h2&gt;

&lt;p&gt;What inspires me most about the women I work with is their incredible resilience and supportive nature. Each day, I witness the profound impact we can have on one another by uplifting each other, actively listening to one another's challenges and solving problems collaboratively. Our mutual support influences my approach to both leadership and advocacy.&amp;nbsp;&lt;/p&gt;

&lt;p&gt;The diverse perspectives and innovative solutions that arise from our discussions have taught me the invaluable lesson of inclusive dialogue. As a leader, this has reinforced the importance of not just taking input but genuinely understanding and integrating different viewpoints. It has made me more empathetic, reflective and adaptive in my decision-making process.&amp;nbsp;&lt;/p&gt;

&lt;h2&gt;It takes &lt;em&gt;all&lt;/em&gt; of us&amp;nbsp;&lt;/h2&gt;

&lt;p&gt;Women should not be alone in the push for change. I’m grateful that, at Ivanti, we have the unwavering partnership of our male allies and our CEO, who consistently champion gender equality. Their advocacy, combined with the courage, wisdom and excellence of our women executives and team members throughout Ivanti, creates a powerful force for change.&lt;/p&gt;

&lt;p&gt;I can’t emphasise enough that it takes all of us. That includes you, the person reading this. The path forward requires collective effort, sustained commitment and accelerated action. I invite you to consider: How will you contribute to creating a more equitable future for girls and women? Every action, every connection and every opportunity to support women's advancement brings us further on the path. This International Women’s Day, and every day, I’m taking on the challenge — and channelling my friend Beth by choosing to do my best.&amp;nbsp;&lt;/p&gt;
</description><pubDate>Sat, 08 Mar 2025 05:01:01 Z</pubDate></item><item><guid isPermaLink="false">5181b36a-3b52-405a-8dbd-a6d71b8e6a9c</guid><link>https://www.ivanti.com/en-gb/blog/cyber-pivott-act</link><atom:author><atom:name>Brooke Johnson</atom:name><atom:uri>https://www.ivanti.com/en-gb/blog/authors/brooke-johnson</atom:uri></atom:author><category>Ivanti News</category><category>Security</category><title>US Federal Government's Role in Filling the Cybersecurity Talent Gap</title><description>&lt;p&gt;Currently, there are 500,000 vacant cybersecurity positions in the United States – affecting businesses and government agencies alike. And with the frequency, sophistication and intensity of cyberattacks increasing, including those directed at federal agencies and critical infrastructure, the need for government and industry to work together to train, retain and develop workers with the required technical expertise and skills has never been greater. According to Congressional sources, cyberattacks on critical infrastructure rose by 30 percent globally in 2023, widening a cyber workforce gap that leaves understaffed organisations vulnerable to cybersecurity threats.&lt;/p&gt;

&lt;p&gt;As the new Congress and administration begin to govern in Washington, DC, it is imperative that cybersecurity and the development of a capable cyber workforce to combat highly resourced nation-state threat actors be prioritised as a bipartisan issue. The US government has the opportunity to play a pivotal role in formulating policies and programmes that offer the technical training necessary to cultivate a domestic cybersecurity workforce. These efforts will also help businesses defend against increasingly frequent cyberattacks.&lt;/p&gt;

&lt;p&gt;At Ivanti, we support the policy proposals put forward in the Cyber PIVOTT Act, introduced by House Homeland Security Committee Chairman Mark Green. This legislation aims to make cyber training and education more accessible by establishing a new full-scholarship programme for two-year degrees, including those offered at community colleges and technical schools, in exchange for required government service. Ivanti endorses this legislation and looks forward to supporting its enactment and the role it will play in strengthening the US cybersecurity industry as a whole. We recognise the need for legislation that addresses cybersecurity and workforce needs and creates incentives for young professionals to pursue careers in cybersecurity. This initiative takes direct aim at the industry’s persistent shortage of skilled cybersecurity workers.&lt;/p&gt;

&lt;p&gt;Chairman Green’s Cyber PIVOTT Act is an important step in meeting this critical need for the cybersecurity industry. Congress should demonstrate its commitment to the cybersecurity of the nation by enacting this legislation immediately.&lt;/p&gt;
</description><pubDate>Wed, 05 Feb 2025 14:30:01 Z</pubDate></item></channel></rss>