<?xml version="1.0" encoding="utf-8"?><rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title>Ivanti Blog: Posts by Brooke Johnson</title><description /><language>en</language><atom:link rel="self" href="https://www.ivanti.com/blog/authors/brooke-johnson/rss" /><link>https://www.ivanti.com/blog/authors/brooke-johnson</link><item><guid isPermaLink="false">9e2cfeac-8bf5-4822-9d7e-64aac32964cb</guid><link>https://www.ivanti.com/blog/ai-governance-framework-responsible-ai-guardrails</link><atom:author><atom:name>Brooke Johnson</atom:name><atom:uri>https://www.ivanti.com/blog/authors/brooke-johnson</atom:uri></atom:author><category>Artificial Intelligence</category><title>How to Implement an AI Governance Framework Using Safe, Ethical and Reliable AI Guardrails</title><description>&lt;p&gt;In my time at Ivanti, I've witnessed firsthand how AI &lt;a href="https://www.ivanti.com/company/artificial-intelligence"&gt;acts as a force multiplier across enterprise organizations&lt;/a&gt;. When deployed strategically, AI accelerates decision-making and operational execution at scale in a way that teams simply can't sustain manually. However, without clear and enforceable AI guardrails, implementing AI opens organizations up to serious new risks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ivanti.com/resources/research-reports/state-of-cybersecurity-report"&gt;Ivanti’s 2026 State of Cybersecurity Report&lt;/a&gt; highlights a growing disconnect I’ve observed across the industry: optimism about AI is rising, yet governance and preparedness are not keeping pace. &lt;b&gt;Currently, just 50% of organizations say they have formal guardrails in place to guide the deployment and operation of AI systems and agents.&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;As adoption accelerates faster than governance, I'm seeing organizations face growing internal risks — shadow AI use, inconsistent data quality, biased outputs and uneven employee training to name a few.&lt;/p&gt;

&lt;p&gt;From where I sit — spanning legal, security and HR — I can tell you this: AI governance isn't an abstract compliance exercise. It's a core requirement for trust, accountability and control.&lt;/p&gt;

&lt;h2&gt;The state of enterprise AI: a risky Wild West&lt;/h2&gt;

&lt;p&gt;Responsible AI at scale requires deliberate governance with enforceable guardrails for all employees. Ignore that, and shadow AI use will continue to grow. Our &lt;a href="https://www.ivanti.com/resources/research-reports/tech-at-work"&gt;2025 Technology at Work research report&lt;/a&gt; revealed that 46% of office workers use AI tools that aren't employer-provided. Even more concerning, nearly a third of employees (32%) keep their use of AI tools at work a secret from their employers.&lt;/p&gt;

&lt;div class="flourish-embed flourish-chart" data-src="visualisation/20628247"&gt;&lt;/div&gt;

&lt;p&gt;Too many organizations are deploying AI without overarching governance, and the consequences are real: exposed sensitive data, violated regulatory obligations and eroded market trust. A team deploys an AI platform without proper guardrails, and suddenly you have biased outputs or degraded performance. Without human oversight, AI systems can generate inaccurate recommendations or trigger inappropriate actions, creating dangerous false confidence in AI-driven outcomes.&lt;/p&gt;

&lt;h2&gt;What is an AI governance framework?&lt;/h2&gt;

&lt;p&gt;An AI governance framework is the blueprint for how we design, deploy and oversee AI systems across their lifecycle. Its purpose is to align AI use with business objectives, legal obligations and enterprise risk tolerance — with transparency and accountability built in from day one.&lt;/p&gt;

&lt;p&gt;At Ivanti, our framework clarifies:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;&lt;b&gt;Who is accountable&lt;/b&gt; for AI decisions and outcomes&lt;/li&gt;
	&lt;li&gt;&lt;b&gt;How risks are identified&lt;/b&gt;, assessed and mitigated&lt;/li&gt;
	&lt;li&gt;&lt;b&gt;What guardrails must be in place&lt;/b&gt; before AI systems go live&lt;/li&gt;
	&lt;li&gt;&lt;b&gt;How AI performance, behavior and impact&lt;/b&gt; are monitored over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In practice, governance enables scale. Clear frameworks let us move beyond fragmented pilots and operationalize AI across the enterprise. Without it, adoption stalls.&lt;/p&gt;

&lt;p&gt;Our position is simple: governance doesn't block innovation. It makes innovation sustainable.&lt;/p&gt;

&lt;h2&gt;3 layers of AI guardrails in an AI governance framework&lt;/h2&gt;

&lt;p&gt;As part of Ivanti’s AI Governance Council, I've learned that a comprehensive framework requires multiple layers of guardrails. Each addresses a different category of risk. Together, they form the foundation for safe, reliable AI use.&lt;/p&gt;

&lt;h4&gt;Technical guardrails&lt;/h4&gt;

&lt;p&gt;Technical guardrails keep AI systems within predefined safety and operational parameters.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Data guardrails&lt;/b&gt;: Data guardrails protect &lt;a href="https://www.ivanti.com/use-cases/data-protection-application-security"&gt;data integrity&lt;/a&gt; and ensure AI systems are trained and operated on trusted inputs. These guardrails are typically owned by data and security teams, who establish standards for data sourcing, validation, &lt;a href="https://www.ivanti.com/products/network-access-control"&gt;access controls&lt;/a&gt; and ongoing quality monitoring. Poor data quality remains a major barrier to effective AI deployment, particularly in security, where incomplete, biased or unvalidated data can skew outcomes and degrade detection accuracy over time.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Model guardrails: &lt;/b&gt;Model guardrails address robustness, explainability, and bias detection to ensure AI systems behave as intended over time. These guardrails are typically designed by security, data science and platform teams, who define testing requirements for drift, bias and performance degradation before deployment and continuously thereafter, especially as models are retrained or exposed to changing operational data.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Application and output guardrails:&lt;/b&gt; Application and output guardrails validate AI-generated outputs, particularly in decision-support or automated response scenarios. These guardrails are typically implemented by security and operations teams, who define approval thresholds, escalation paths and human-in-the-loop controls. Without them, systems may generate inaccurate recommendations or take inappropriate actions, reinforcing false confidence in automation.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Infrastructure guardrails:&lt;/b&gt; Infrastructure guardrails protect the systems that host and support AI workloads and are typically owned by IT and security teams. These teams enforce secure deployment practices, access controls, logging and auditability across cloud and on-prem environments, while ensuring AI services are integrated into existing security monitoring and &lt;a href="https://www.ivanti.com/products/automation"&gt;incident response workflows&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;Ethical guardrails&lt;/h4&gt;

&lt;p&gt;Ethical guardrails align AI behavior with organizational standards and define accountability when AI affects people, customers or business outcomes.&lt;/p&gt;

&lt;p&gt;Ivanti’s AI Governance Council plays a central role here. We navigate the “gray areas” of autonomous agents. We bring together legal, security, HR and business leaders to define acceptable use, escalation paths and accountability. When should humans intervene? How are decisions audited? Who ultimately owns the outcome when things go wrong?&lt;/p&gt;

&lt;p&gt;When that governance is missing, the consequences escalate quickly.&lt;/p&gt;

&lt;p&gt;Recent incidents show the cost of unclear ethical guardrails. For example, Grok, an AI chatbot developed by xAI, &lt;a href="https://www.thetimes.com/uk/technology-uk/article/grok-ai-x-holocaust-survivor-bikini-auschwitz-6kh5ddxh6" rel="noopener" target="_blank"&gt;drew widespread criticism&lt;/a&gt; after generating nonconsensual and inappropriate images of real individuals. The failure was not only technical; it was a governance failure, because the ethical boundaries had not been sufficiently defined.&lt;/p&gt;

&lt;p&gt;The same issue arises inside enterprises. When AI blocks a user account, flags an employee, or restricts customer access, we must know who owns the decision if it's wrong. Whether AI is used in security, HR or customer-facing systems, the ethical principles are consistent. Governance ensures accountability is defined before automation causes harm.&lt;/p&gt;

&lt;h4&gt;Regulatory and legal guardrails&lt;/h4&gt;

&lt;p&gt;Regulatory and legal guardrails ensure AI use complies with evolving global regulations, sector rules and data protection laws. Because these requirements change rapidly, teams can't operate in functional silos.&lt;/p&gt;

&lt;p&gt;Legal must lead AI governance early. At Ivanti, we work closely with security and IT to interpret obligations and translate them into enforceable controls. Success depends on aligning from the outset to ensure compliance requirements are embedded into AI design and deployment.&lt;/p&gt;

&lt;p&gt;Recent incidents show why regulatory guardrails cannot be an afterthought. European and UK regulators &lt;a href="https://privacyinternational.org/news-analysis/5692/tribunal-confirms-clearview-ai-bound-gdpr" rel="noopener" target="_blank"&gt;confirmed&lt;/a&gt; that Clearview AI’s facial recognition operations, built on scraping billions of images, were subject to privacy laws like GDPR, and they took enforcement action over the violations. The case shows the legal risk organizations face when governance doesn’t align with regulatory expectations.&lt;/p&gt;

&lt;p&gt;The lesson is clear. Legal and product development teams must work together early to embed regulatory obligations into AI design, deployment and operations. Governance ensures compliance requirements are enforced by default, not retroactively after regulatory scrutiny begins.&lt;/p&gt;

&lt;h2&gt;AI governance vs. AI risk management&lt;/h2&gt;

&lt;p&gt;Governance and &lt;a href="https://www.ivanti.com/resources/research-reports/cybersecurity-risk-management"&gt;risk management&lt;/a&gt; are closely related but distinct. Here's my take: governance sets the rules and accountability structures. Risk management focuses on identifying and mitigating specific AI-related threats throughout the system lifecycle.&lt;/p&gt;

&lt;p&gt;Common AI risks include data leakage, bias, unreliable outputs, over-reliance on automated decisions and security weaknesses introduced through unmanaged tools or integrations. As AI systems become more autonomous, these risks compound.&lt;/p&gt;

&lt;p&gt;Integrating AI risk mitigation into governance ensures risks are not addressed in isolation. We evaluate them alongside business impact, operational resilience and organizational &lt;a href="https://www.ivanti.com/blog/risk-appetite"&gt;risk appetite&lt;/a&gt;. This lets us prioritize controls where they matter most and avoid blanket restrictions that slow progress without reducing risk.&lt;/p&gt;

&lt;h2&gt;Challenges in scaling AI governance&lt;/h2&gt;

&lt;p&gt;Many organizations start with narrow AI pilots in individual teams. Scaling to enterprise-wide adoption introduces new challenges.&lt;/p&gt;

&lt;p&gt;Silos are the fastest way to undermine governance. Security, IT, legal and business teams often operate on conflicting assumptions. We need shared ownership across teams. As my colleague Sterling Parker explains, a successful vision requires involving stakeholders across the business to prevent "AI sprawl."&lt;/p&gt;

&lt;p&gt;&lt;object codetype="CMSInlineControl" type="Video"&gt;&lt;param name="platform" value="youtube"&gt;&lt;param name="lang" value="en"&gt;&lt;param name="id" value="GpoZdJeC3Bw"&gt;&lt;param name="cms_type" value="video"&gt;&lt;/object&gt;&lt;/p&gt;

&lt;p&gt;This transition demands a human-centric operating model. Our governance body clearly defines where AI can amplify existing roles, where additional training is required and where human oversight remains essential. Continuous feedback from employees helps ensure AI is applied where it delivers value without creating gaps in accountability or trust. We prioritize upskilling to replace fear with active adoption.&lt;/p&gt;

&lt;p&gt;Our &lt;a href="https://www.ivanti.com/resources/research-reports/state-of-cybersecurity-report"&gt;cybersecurity research&lt;/a&gt; shows that mature organizations approach these challenges differently. Organizations that rank themselves as the most advanced in cybersecurity (Level 4s) are nearly 3x as likely to use comprehensive AI guardrails compared to organizations with an intermediate level of cybersecurity maturity (Level 2s).&lt;/p&gt;

&lt;div class="flourish-embed flourish-chart" data-src="visualisation/27433090"&gt;&lt;/div&gt;

&lt;p&gt;They invest early in governance, align leadership around shared frameworks and treat AI as a strategic capability rather than a collection of tools. These organizations are far more likely to operationalize AI across the enterprise while maintaining trust and control.&lt;/p&gt;

&lt;h2&gt;How to implement responsible AI&lt;/h2&gt;

&lt;p&gt;Building the framework is table stakes. Execution is where AI governance lives.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Start with clear policies&lt;/b&gt; on acceptable use and escalation. These must be practical and tied directly to your existing risk structures.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Governance must be accessible.&lt;/b&gt; Responsible AI is an enterprise-wide mandate, not a specialist silo. Targeted training ensures every user understands their role in upholding these guardrails.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Take a governed approach to AI enablement.&lt;/b&gt; “Governed enablement” assumes AI is already in use across the enterprise and defines where and how it can operate safely. It requires continuous monitoring and enforcement to ensure systems remain aligned with policy as usage and risks evolve. This is an ongoing discipline, not a one-time project.&lt;/p&gt;

&lt;h2&gt;The future of responsible AI starts now&lt;/h2&gt;

&lt;p&gt;AI is reshaping how organizations operate at a pace that cannot be ignored. The question is no longer whether to adopt it, but how to scale it safely. Organizations with strong governance scale without sacrificing trust. Those that delay widen the gap between threat and preparedness.&lt;/p&gt;

&lt;p&gt;At Ivanti, we're committed to building AI governance that enables innovation while protecting what matters most — our people, our customers, and our operations. This is critical work and the time to act is now.&lt;/p&gt;

&lt;p&gt;To learn more about the AI deployment gap and how leading organizations are closing it, explore &lt;a href="https://www.ivanti.com/resources/research-reports/state-of-cybersecurity-report"&gt;Ivanti's 2026 State of Cybersecurity Report&lt;/a&gt;.&lt;/p&gt;
</description><pubDate>Tue, 24 Feb 2026 13:00:02 Z</pubDate></item><item><guid isPermaLink="false">b912c747-6821-48fd-8a59-45d4b0af4bb0</guid><link>https://www.ivanti.com/blog/international-womens-day-2025</link><atom:author><atom:name>Brooke Johnson</atom:name><atom:uri>https://www.ivanti.com/blog/authors/brooke-johnson</atom:uri></atom:author><category>Ivanti Culture</category><title>Accelerating Action on Gender Equality: A Message from Ivanti’s Brooke Johnson on International Women’s Day</title><description>&lt;p&gt;International Women’s Day is March 8, 2025. This year’s theme is “Accelerate Action.” As things currently stand, &lt;a href="https://www.weforum.org/stories/2024/06/global-gender-gap-2024-what-to-know/" rel="noopener" target="_blank"&gt;data from the World Economic Forum&lt;/a&gt; indicate that we will not reach full gender parity until 2158. That’s roughly five generations from now. I believe that’s five – maybe even six or seven – generations too long.&amp;nbsp;&lt;/p&gt;

&lt;p&gt;I also believe the generations that came before me would agree. Every day, but particularly on International Women's Day, I am so grateful to have grown up surrounded by strong female role models. Even if I didn't realize it at the time, these women shaped my understanding of leadership and possibility.&amp;nbsp;&lt;/p&gt;

&lt;p&gt;The influence of strong women in my life didn't stop after childhood. Far from it. My best friend Beth, whom I met in law school, helped define my approach to career and advocacy. Beth and I bonded initially over academics (and our shared love of shoes), but she quickly became my career counselor, personal advocate and sometimes therapist. Through her example as an exceptional attorney who effectively prioritizes what matters most, she taught me a crucial lesson I’ve shared often and that is worth repeating: it’s okay to not always “have it all.” Instead, do your best at whatever you choose to take on.&amp;nbsp;&lt;/p&gt;

&lt;p&gt;I want every girl and every woman to feel the support, encouragement and advocacy I felt. That’s not to say it was smooth sailing, particularly given my choice to enter male-dominated fields. It’s not enough to be aware of the gaps and lack of equality for women. We need action.&amp;nbsp;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;That’s why I’m so excited about this year's theme for International Women’s Day, "Accelerate Action.” This theme challenges us to move beyond awareness to create tangible change. According to the &lt;a href="https://www.internationalwomensday.com/" rel="noopener" target="_blank"&gt;International Women’s Day&lt;/a&gt; site, this year’s theme “emphasizes the importance of taking swift and decisive steps to achieve gender equality. It calls for increased momentum and urgency in addressing the systemic barriers and biases that women face in both personal and professional spheres.”&amp;nbsp;&lt;/p&gt;

&lt;p&gt;So, let’s talk about some of the ways we’re accelerating action.&amp;nbsp;&lt;/p&gt;

&lt;h2&gt;Connecting to champion change&amp;nbsp;&lt;/h2&gt;

&lt;p&gt;As the initial advocate for Ivanti's Women's Connection group, I've witnessed firsthand how creating spaces for authentic dialogue drives meaningful change. While our content focuses on helping women navigate their career journeys, our group welcomes everyone — regardless of gender identity. This inclusivity strengthens our ability to address the unique challenges women face in the workplace and create solutions that benefit all.&amp;nbsp;&lt;/p&gt;

&lt;p&gt;The objective of our Women's Connection group is straightforward yet powerful: to inspire and foster the growth and development of Ivantians. By connecting women with other women, as well as creating a safe space for men and women to have dialogue about important topics, we're creating a support network that helps group members navigate career challenges and opportunities with greater confidence and clarity.&amp;nbsp;&lt;/p&gt;

&lt;h2&gt;Strength in numbers&amp;nbsp;&lt;/h2&gt;

&lt;p&gt;Numbers tell part of our story of progress. In 2023, women represented 24% of our new hires at Ivanti — matching industry benchmarks. Through focused, intentional action, we've increased that to 31% in 2024. There is more work to be done, but this shows that our intentional actions are paying off. We implemented specific strategies, including:&amp;nbsp;&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;Ensuring at least one female candidate appears on every shortlist when possible.&amp;nbsp;&lt;/li&gt;
	&lt;li&gt;Revising job descriptions to use gender-neutral language, recognizing how certain terms like "aggressive" might discourage female applicants.&amp;nbsp;&lt;/li&gt;
	&lt;li&gt;Highlighting benefits that appeal to diverse candidates such as flexible schedules and remote work options.&amp;nbsp;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;The impact of everyday excellence&amp;nbsp;&lt;/h2&gt;

&lt;p&gt;What inspires me most about the women I work with is their incredible resilience and supportive nature. Each day, I witness the profound impact we can have on one another by uplifting each other, actively listening to one another's challenges and solving problems collaboratively. Our mutual support influences my approach to both leadership and advocacy.&amp;nbsp;&lt;/p&gt;

&lt;p&gt;The diverse perspectives and innovative solutions that arise from our discussions have taught me the invaluable lesson of inclusive dialogue. As a leader, this has reinforced the importance of not just taking input but genuinely understanding and integrating different viewpoints. It has made me more empathetic, reflective and adaptive in my decision-making process.&amp;nbsp;&lt;/p&gt;

&lt;h2&gt;It takes &lt;em&gt;all&lt;/em&gt; of us&amp;nbsp;&lt;/h2&gt;

&lt;p&gt;Women should not be alone in the push for change. I’m grateful that, at Ivanti, we have the unwavering partnership of our male allies and our CEO, who consistently champion gender equality. Their advocacy, combined with the courage, wisdom and excellence of our women executives and team members throughout Ivanti, creates a powerful force for change.&lt;/p&gt;

&lt;p&gt;I can’t emphasize enough that it takes all of us. That includes you, the person reading this. The path forward requires collective effort, sustained commitment and accelerated action. I invite you to consider: How will you contribute to creating a more equitable future for girls and women? Every action, every connection and every opportunity to support women's advancement brings us further on the path. This International Women’s Day, and every day, I’m taking on the challenge — and channeling my friend Beth by choosing to do my best.&amp;nbsp;&lt;/p&gt;
</description><pubDate>Sat, 08 Mar 2025 05:01:01 Z</pubDate></item><item><guid isPermaLink="false">53bbb28d-9178-44be-8ceb-456315aefb51</guid><link>https://www.ivanti.com/blog/cyber-pivott-act</link><atom:author><atom:name>Brooke Johnson</atom:name><atom:uri>https://www.ivanti.com/blog/authors/brooke-johnson</atom:uri></atom:author><category>Ivanti News</category><category>Security</category><title>US Federal Government's Role in Filling the Cybersecurity Talent Gap</title><description>&lt;p&gt;Currently, there are 500,000 vacant cybersecurity positions in the United States – affecting businesses and government agencies alike. And with the frequency, sophistication and intensity of cyberattacks increasing, including those directed at federal agencies and critical infrastructure, the need for government and industry to work together to train, retain and develop workers with the required technical expertise and skills has never been greater. According to Congressional sources, cyberattacks on critical infrastructure rose by 30 percent globally in 2023, widening a cyber workforce gap that leaves understaffed organizations vulnerable to threats.&lt;/p&gt;

&lt;p&gt;As the new Congress and administration are beginning to govern in Washington DC, it is imperative that cybersecurity and the development of a capable cyber workforce to combat highly resourced nation state threat actors be prioritized as a bipartisan issue. The US government has the opportunity to play a pivotal role in formulating policies and programs that offer necessary technical training to cultivate a domestic cybersecurity workforce. Additionally, these efforts will assist businesses in defending against the increasing frequency of cyberattacks.&lt;/p&gt;

&lt;p&gt;At Ivanti, we support the policy proposals put forward in the Cyber PIVOTT Act, introduced by House Homeland Security Committee Chairman Mark Green. This legislation aims to enhance the accessibility of cyber training and education by establishing a new full-scholarship program for two-year degrees, including those offered at community colleges and technical schools, in exchange for required government service. We endorse this legislation and look forward to supporting its enactment and the role it will play in strengthening the US cybersecurity industry as a whole. Legislation like this addresses cybersecurity workforce needs, creates incentives for young professionals to pursue careers in cybersecurity and takes direct aim at the industry’s persistent shortage of skilled cybersecurity talent.&lt;/p&gt;

&lt;p&gt;Chairman Green’s Cyber PIVOTT Act is an important step in meeting this critical need for the cybersecurity industry. Congress should demonstrate its commitment to the cybersecurity of the nation by enacting this legislation immediately.&lt;/p&gt;
</description><pubDate>Wed, 05 Feb 2025 14:30:01 Z</pubDate></item><item><guid isPermaLink="false">d0170491-7bc9-4f51-a3a4-9139f5971fbd</guid><link>https://www.ivanti.com/blog/privacy-please-why-a-comprehensive-federal-framework-is-essential-to-protect-consumer-data-privacy</link><atom:author><atom:name>Brooke Johnson</atom:name><atom:uri>https://www.ivanti.com/blog/authors/brooke-johnson</atom:uri></atom:author><category>Security</category><title>Privacy, Please! Why a Comprehensive Federal Framework is Essential to Protect Consumer Data Privacy</title><description>&lt;p&gt;Laws vary by state. That’s expected. Fairbanks, Alaska, enacted a law prohibiting the provision of alcoholic beverages to moose, so don’t even think about it. In a part of Washington State, good luck trying to kill Bigfoot. (Not because Bigfoot doesn’t exist, but specifically because it’s illegal per a 1969 law.)&lt;/p&gt;

&lt;p&gt;But what happens when state-specific regulations are used to address a topic that transcends geographic boundaries like, say,&amp;nbsp;the internet?&amp;nbsp;&lt;/p&gt;

&lt;p&gt;Where you live in the United States doesn’t have much impact on&amp;nbsp;what&amp;nbsp;you can access on the internet, provided you have access, but it&amp;nbsp;does&amp;nbsp;impact the protection and use of your privacy and data. At Ivanti, we’re serious about cybersecurity – and we think this is a problem.&lt;/p&gt;

&lt;h2&gt;Patchwork regulations hurt everyone&lt;/h2&gt;

&lt;p&gt;In the absence of a comprehensive federal framework, we’re left with a patchwork of misaligned and even contradictory state-specific policies. Perhaps fifty of them.&lt;/p&gt;

&lt;p&gt;That’s prohibitively challenging for organizations with nationwide operations and puts user privacy at significant risk. A federal standard would allow our industry and consumers to clearly understand the protections they enjoy, regardless of where they live.&lt;/p&gt;

&lt;p&gt;Ivanti connects industry-leading endpoint management, zero trust security and service management solutions for organizations worldwide, and we believe that privacy is a right for all we serve.&amp;nbsp;&lt;/p&gt;

&lt;p&gt;We are strongly committed to protecting our customers’ personal data, and we design all our products with this goal in mind.&lt;/p&gt;

&lt;h2&gt;The US is falling behind&lt;/h2&gt;

&lt;p&gt;We’re a global company and comply with the unique privacy regulations in each country. We can attest to the fragmented regulatory framework that results.&amp;nbsp;&lt;/p&gt;

&lt;p&gt;It’s complex, contradictory and can be very inefficient. It’s also confusing for customers and puts US companies at a distinct disadvantage to their foreign competitors.&lt;/p&gt;

&lt;p&gt;A strong, comprehensive federal policy enacted by the United States would provide consistency and predictability for all parties involved.&amp;nbsp;&lt;/p&gt;

&lt;p&gt;Consumers should feel confident in trusting that their personal data is protected in a secure digital environment. Likewise, businesses should be free to innovate and compete in a stable regulatory environment.&lt;/p&gt;

&lt;h2&gt;Making our voice heard&lt;/h2&gt;

&lt;p&gt;There is an ongoing bipartisan effort to realize a federal policy, and we at Ivanti are advocates for that effort. We recently submitted a letter to Congress to voice our support and urge action.&amp;nbsp;&lt;/p&gt;

&lt;p&gt;We’ve expressed appreciation for the congresspeople driving this effort. We have also made it clear we want to engage with them in any lawful and helpful way to advance a national policy supporting protections for individuals – maintaining a safe flow of data across borders.&lt;/p&gt;

&lt;p&gt;Also, please don’t wrestle bears in Louisiana. That’s illegal, too. These examples of state-specific laws are a bit silly, but data privacy is serious – and affects all of us. This is a critical piece of legislation in a crucial moment and the time to act is now.&lt;/p&gt;
</description><pubDate>Fri, 21 Oct 2022 15:46:41 Z</pubDate></item></channel></rss>