<?xml version="1.0" encoding="utf-8"?><rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title>Ivanti Blog: Posts by </title><description /><language>en</language><atom:link rel="self" href="https://www.ivanti.com/en-gb/blog/authors/daniel-spicer/rss" /><link>https://www.ivanti.com/en-gb/blog/authors/daniel-spicer</link><item><guid isPermaLink="false">f4b3f118-416f-4002-8306-7ad634daabe2</guid><link>https://www.ivanti.com/en-gb/blog/shadow-ai</link><atom:author><atom:name>Daniel Spicer</atom:name><atom:uri>https://www.ivanti.com/en-gb/blog/authors/daniel-spicer</atom:uri></atom:author><category>Security</category><title>Is Shadow AI Quietly Reshaping Your Workplace Security Posture?</title><description>&lt;p&gt;AI tools have seen a meteoric rise in the workplace. What was once the domain of highly specialised tech roles is now commonplace: Ivanti’s &lt;a href="https://www.ivanti.com/resources/research-reports/tech-at-work" target="_blank" rel="noopener"&gt;2025 Technology at Work Report&lt;/a&gt; found that 42% of office workers say they’re using gen AI tools, like ChatGPT, at work — up 16 points from the previous year.&lt;/p&gt;

&lt;p&gt;The catch? These productivity gains happen under the table. Among those who reported using gen AI tools, 46% say that some (or all) of the tools they use are &lt;em&gt;not&lt;/em&gt; employer-provided. And one in three workers keeps AI productivity tools secret from their employer.&lt;/p&gt;

&lt;div class="flourish-embed flourish-chart" data-src="visualisation/22346584"&gt;&lt;/div&gt;

&lt;p&gt;Gen AI tools can be a productivity multiplier. But they’re also a risk to data security — particularly when they’re used without employer oversight.&lt;/p&gt;

&lt;h2&gt;What is shadow AI?&lt;/h2&gt;

&lt;p&gt;Unsanctioned use of AI is just another flavour of shadow IT (i.e. the use of technology without IT approval).&lt;/p&gt;

&lt;p&gt;The risks that shadow AI introduces are similar to other shadow IT risks, but with an additional layer of concern: the sheer amount of proprietary data generative AI requires to be effective. Free generative AI tools (and some paid tools as well) may use an organisation’s data or employee searches to train their model, amplifying the risk of data leaks and noncompliance.&lt;/p&gt;

&lt;p&gt;The recent revelation that shared ChatGPT conversations were &lt;a href="https://arstechnica.com/tech-policy/2025/08/chatgpt-users-shocked-to-learn-their-chats-were-in-google-search-results/" rel="noopener" target="_blank"&gt;crawlable by search engines&lt;/a&gt; (although OpenAI swiftly changed course) should be a wake-up call that, without proper controls, third parties can use your data in ways you would object to. Some free tools, ChatGPT included, can be configured to meet security policies, but that’s simply not possible when employees use them covertly.&lt;/p&gt;

&lt;p&gt;Free tools like ChatGPT aren’t the only shadow AI risk. A less obvious source is existing software. In the rush to add AI features, tools that were previously IT-approved may now pose new risks, and if infosec teams don’t know about and evaluate these new features, those features effectively circumvent third-party risk management processes.&lt;/p&gt;

&lt;h2&gt;Why a risk-first approach to AI is crucial&lt;/h2&gt;

&lt;p&gt;Whether for gen AI or other tools, shadow IT is the result of not having a defined and reasonable way to test tools or get work done. Given that AI isn’t going away, companies need to approach adoption proactively: banning tools doesn’t stop employees from trying to use them to boost their productivity and make their jobs easier.&lt;/p&gt;

&lt;p&gt;I spend the bulk of my time assessing risk, including the risks AI tools pose. Often, we have to assess risk as it relates to an opportunity to improve the business — in this case, employee productivity gains and second-order impacts (like employee satisfaction or having time to work on more strategic projects).&lt;/p&gt;

&lt;p&gt;In short, we need to ask: Is there a way to introduce the tools employees are asking for and reap the benefits they offer while keeping the risk to an acceptable level?&lt;/p&gt;

&lt;p&gt;This is where a &lt;a href="https://www.ivanti.com/resources/research-reports/proactive-security" target="_blank" rel="noopener"&gt;risk-first approach&lt;/a&gt; enters the picture. A risk-first approach to AI adoption focuses on the data that needs to go into the AI and how the third party handles that data. This approach is similar to vendor risk management, allowing organisations to use established practices and processes, but adjusted for AI-focused questions.&lt;/p&gt;

&lt;p&gt;&lt;img alt="Horizontal color gradient arrow illustrates a spectrum from &amp;quot;Reactive response&amp;quot; to &amp;quot;Proactive response.&amp;quot; On the left, &amp;quot;Reflexive bans of AI tools&amp;quot; result in &amp;quot;Circumvention&amp;quot; and &amp;quot;Unknown risk.&amp;quot; On the right, &amp;quot;Risk-first approach&amp;quot; results in &amp;quot;Employee engagement,&amp;quot; &amp;quot;Safe, sanctioned adoption,&amp;quot; and &amp;quot;Known, managed risk.&amp;quot;" src="https://static.ivanti.com/sites/marketing/media/images/blog/2025/12/183216-shadow_ai_and_the_risk_first_approach_b.jpg"&gt;&lt;/p&gt;

&lt;p&gt;Key questions to ask include:&lt;/p&gt;

&lt;ol&gt;
	&lt;li&gt;Will our data be used to train the AI model?&lt;/li&gt;
	&lt;li&gt;How long is our data retained?&lt;/li&gt;
	&lt;li&gt;What protections exist to reduce the risk of our data being exposed?&lt;/li&gt;
	&lt;li&gt;Who has the rights to intellectual property generated using the AI?&lt;/li&gt;
&lt;/ol&gt;
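&lt;p&gt;As a purely illustrative sketch (the field names, weights and retention threshold below are hypothetical, not an Ivanti process), these four questions could be encoded as a simple vendor-assessment checklist that flags higher-risk tools:&lt;/p&gt;

```python
from dataclasses import dataclass

# Hypothetical sketch of an AI vendor risk checklist.
# Field names and scoring are illustrative assumptions only.

@dataclass
class AIVendorAssessment:
    vendor: str
    trains_on_our_data: bool      # 1. Will our data train the AI model?
    retention_days: int           # 2. How long is our data retained?
    has_exposure_controls: bool   # 3. Protections against data exposure?
    we_own_generated_ip: bool     # 4. Do we keep rights to generated IP?

def risk_score(a: AIVendorAssessment) -> int:
    """Return a rough 0-4 risk score; higher means riskier."""
    score = 0
    if a.trains_on_our_data:
        score += 1
    if a.retention_days > 30:     # arbitrary illustrative threshold
        score += 1
    if not a.has_exposure_controls:
        score += 1
    if not a.we_own_generated_ip:
        score += 1
    return score

# Example: a free tool that trains on inputs and retains data long term
free_tool = AIVendorAssessment("ExampleGPT", True, 3650, True, False)
print(risk_score(free_tool))  # 3
```

&lt;p&gt;A real programme would weight these answers against the sensitivity of the data involved; the point of the sketch is simply that the same questions asked of every vendor produce comparable, trackable answers.&lt;/p&gt;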

&lt;p&gt;Minimising AI sprawl is a critical piece of this work. As more vendors introduce specialised AI tools — and as you bring on more vendors and grant their AI tools access to your data — your risk increases. This is also true of existing tools that suddenly introduce AI without cost or contract changes, making it difficult to keep an accurate inventory of AI tools.&lt;/p&gt;

&lt;h2&gt;Adopting an AI governance framework at Ivanti&lt;/h2&gt;

&lt;p&gt;Within Ivanti, we combat shadow AI with a risk-first approach that starts and ends with &lt;a href="https://www.ivanti.com/resources/research-reports/dex-security" target="_blank" rel="noopener"&gt;employee engagement&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img alt="Four connected colored boxes form a process flowchart: &amp;quot;Employee engagement&amp;quot; leads to &amp;quot;Pathways to request AI tool approval,&amp;quot; then &amp;quot;Risk assessment,&amp;quot; and finally &amp;quot;Adoption and periodic review,&amp;quot; with an arrow looping back from the last step to the first." src="https://static.ivanti.com/sites/marketing/media/images/blog/2025/12/183216-shadow_ai_and_the_risk_first_approach_c.jpg"&gt;&lt;/p&gt;

&lt;h3&gt;Bringing AI use out of the shadows&lt;/h3&gt;

&lt;p&gt;While we’d never encourage shadow AI, employees who use it have valuable knowledge to share about how to integrate AI into workflows. So instead of banning all AI use, we have to make sure that employees have a clear path to request AI tools for use at work and that there are regular opportunities for open dialogue.&lt;/p&gt;

&lt;p&gt;Fostering open dialogue makes employees feel comfortable discussing which tools help them succeed and ultimately means they will use them (or equivalent tools) safely. This provides an opportunity for employees to be active partners in developing appropriate governance — rather than trying to skirt restrictions.&lt;/p&gt;

&lt;h3&gt;A measured approach to AI implementation and adoption&amp;nbsp;&lt;/h3&gt;

&lt;p&gt;Once a tool is approved, it’s important to ensure proper implementation and to understand exactly what data you’ve given it access to. This matters especially given the data governance and security risks gen AI tools pose to organisations. Viewing AI through the lens of data governance helps address many aspects of AI risk.&lt;/p&gt;

&lt;p&gt;At Ivanti, we take a measured approach: We dedicate a team to run controlled tests of gen AI tools with other teams. We then establish feedback loops, and adoption rolls out gradually to avoid disruption.&lt;/p&gt;

&lt;h3&gt;Building a feedback loop for AI tools&lt;/h3&gt;

&lt;p&gt;We have to consistently ask:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;How are Ivanti's employees using AI?&lt;/li&gt;
	&lt;li&gt;Do they like it?&lt;/li&gt;
	&lt;li&gt;What feedback do they have?&lt;/li&gt;
	&lt;li&gt;How can we improve the tool?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ongoing conversation ensures we're using AI responsibly while meeting employees' productivity needs.&lt;/p&gt;

&lt;p&gt;It’s not about jumping on the AI bandwagon. It’s about knowing if it’s worth it — for the business and for the people using it. Shadow AI boosts the productivity of one person. But take that productivity and expand it, and you have a meaningful improvement for the company as a whole.&lt;/p&gt;

&lt;h2&gt;Proactively combating shadow AI&lt;/h2&gt;

&lt;p&gt;The running theme here is that even though AI, and particularly shadow AI, poses new and concerning risks, it is here to stay. Employees who use AI under the radar aren’t ill-intentioned; if anything, they’re trying to benefit the business, even if they’re going about it the wrong way.&amp;nbsp;&lt;/p&gt;

&lt;p&gt;A proactive, &lt;a href="https://www.ivanti.com/blog/ai-cybersecurity-best-practices-meeting-a-double-edged-challenge" target="_blank" rel="noopener"&gt;risk-first approach to AI adoption&lt;/a&gt; recognises this reality. Instead of reactive bans that only encourage circumvention, we have to engage employees to understand the problems they’re using AI to solve so that we can provide them with safe options that meet our security and data privacy requirements.&amp;nbsp;&lt;/p&gt;
</description><pubDate>Mon, 15 Dec 2025 14:00:01 Z</pubDate></item><item><guid isPermaLink="false">cca37f25-013e-40d3-81a9-7cf798cb5c0a</guid><link>https://www.ivanti.com/en-gb/blog/an-update-on-ivantis-ongoing-commitment-to-enhanced-product-security</link><atom:author><atom:name>Daniel Spicer</atom:name><atom:uri>https://www.ivanti.com/en-gb/blog/authors/daniel-spicer</atom:uri></atom:author><category>Security</category><category>Ivanti News</category><title>An Update on Ivanti's Ongoing Commitment to Enhanced Product Security</title><description>&lt;p&gt;In April 2024, the Ivanti CEO issued an &lt;a href="https://www.ivanti.com/blog/our-commitment-to-security-an-open-letter-from-ivanti-ceo-jeff-abbott" target="_blank" rel="noopener"&gt;open letter&lt;/a&gt; on our commitment to product security. We are very proud of the progress we have made, but as we all know, security is a journey of continuous improvement. Ivanti is committed to this journey and to protecting our customers as the threat landscape continues to evolve.&lt;/p&gt;

&lt;p&gt;As with other companies that develop network security and edge products, our edge products have been targeted and exploited in sophisticated threat actor attacks. While these products are not the ultimate target, they are increasingly the route that well-resourced nation-state groups use to attempt espionage campaigns against extremely high-value organisations.&lt;/p&gt;

&lt;p&gt;Our response to any incident is to learn from it, invest in improving our products, and ultimately make it harder for our products to be abused by sophisticated adversaries.&amp;nbsp;&lt;/p&gt;

&lt;p&gt;A final, important point: we all continue to reap value from important security industry partnerships. By collaborating closely with government and security industry partners we are stronger and more secure together. We thank our collaborators and look forward to redoubling our efforts in the future.&lt;/p&gt;

&lt;h3&gt;Bolstering Product Security and Embracing Secure by Design Frameworks&lt;/h3&gt;

&lt;ul&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;strong&gt;Specialised Security Resources:&lt;/strong&gt; the Ivanti Security team comprises highly skilled security specialists who support Ivanti’s overall security, and a dedicated Product Security Team focused on the security of our solutions. The size of this team has increased more than 8X over the past few years, along with a meaningful elevation in threat expertise.&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;strong&gt;Leading Third-Party Partnerships and Tooling:&lt;/strong&gt; we have expanded engagements with leading security and threat intelligence experts and utilise industry leading static and dynamic code analysis tooling during the development process to validate the security of our solutions and ensure Ivanti developers adhere to secure coding practices.&amp;nbsp;&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;strong&gt;Secure by Design Alignment:&lt;/strong&gt; our development process includes robust security protocols throughout the product lifecycle, including rigorous threat modelling, vulnerability assessment, and security measures specifically to improve our solutions’ resilience against current and emerging threats – additional details can be found on &lt;a href="https://www.ivanti.com/en-gb/resources/secure-by-design/2024"&gt;our website&lt;/a&gt;.&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;strong&gt;Product Security Optimisation:&lt;/strong&gt; we have invested significant resources in our Ivanti Neurons cloud platform to alleviate the burden of security for our customers, including automated security updates, MFA enabled out of the box, and a unified role-based access control (RBAC) system.&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;strong&gt;Network Security Group Enhancements:&lt;/strong&gt; we have evolved the Network Security Group, which is responsible for developing &lt;a href="https://www.ivanti.com/en-gb/products/connect-secure-vpn" target="_blank"&gt;Ivanti Connect Secure&lt;/a&gt;, in focus, size and product leadership. As of October, this group is led by Michael Riemer – an industry veteran and cybersecurity expert with deep knowledge of this product line. Under Michael’s leadership we have increased internal engineering resources and engaged additional specialised contracted resources, which are in high demand across the network security industry.&amp;nbsp;&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;strong&gt;Prioritising Product Security Enhancements for Ivanti Connect Secure (ICS):&lt;/strong&gt; we have prioritised product security enhancements for ICS. This includes our new 25.x version, which upgrades the underlying operating system to Oracle Linux, to be completed in 2H 2025. We have also made other significant security enhancements in our Network Security products, such as Secure Boot with TPM key management, Non-Root Privilege Access Control, a modernised web service and a WAF component.&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;strong&gt;Enhancements to the Integrity Checker Tool (ICT):&lt;/strong&gt; the ICT has been an effective tool for identifying threat actor efforts since its introduction in 2021 and is a prime example of Ivanti’s commitment to proactive security for our solutions. This tool has aided in our forensic efforts and, in the case of the vulnerability disclosed on January 8, alerted our customer to threat actor activity on the same day it occurred. This allowed us to respond swiftly and develop a fix for the issue.&amp;nbsp;&lt;/p&gt;
	&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Elevating our Vulnerability Management Programme&lt;/h3&gt;

&lt;ul&gt;
	&lt;li&gt;&lt;strong&gt;Vulnerability Identification:&lt;/strong&gt; we have enhanced internal scanning, manual exploitation and testing capabilities, increased collaboration and information sharing with the security ecosystem, and further enhanced our responsible disclosure process, including becoming a CVE Numbering Authority. While this creates a natural and intended increase in vulnerability disclosure (and consequently, media coverage), it is not indicative of increased risk; on the contrary, it demonstrates our commitment to transparency and going above and beyond industry standards.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Providing Enhanced Support for Secure Product Deployments in the Field&lt;/h3&gt;

&lt;ul&gt;
	&lt;li&gt;&lt;strong&gt;Platform Upgrades:&lt;/strong&gt; we are working with customers to accelerate customer migration from End-of-Life solutions, including eliminating barriers—be they contractual, technical, or financial—that slow adoption of our most advanced and secure solutions. Together with our customers, we are making significant strides towards achieving full migration to our latest solutions.&lt;/li&gt;
	&lt;li&gt;&lt;strong&gt;On-Prem Security Support: &lt;/strong&gt;for customers that require on-prem solutions, we have systematically improved our product documentation and are providing best practices to equip them with the tools and knowledge necessary to navigate and mitigate security challenges within their unique operational environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Sharing Information with our Customers and Community&lt;/h3&gt;

&lt;ul&gt;
	&lt;li&gt;&lt;strong&gt;Information Sharing and Transparency:&lt;/strong&gt; we have actively participated in events and created multiple forums for dialogue with our customers, which has deepened our understanding of evolving needs, and enabled us to share crucial insights and lessons learned. We have formalised a strategic programme to collect feedback from customers throughout the customer's lifecycle, enabling a continuous loop of feedback to ensure ongoing alignment with customer needs.&lt;/li&gt;
&lt;/ul&gt;
</description><pubDate>Tue, 11 Feb 2025 13:00:01 Z</pubDate></item></channel></rss>