<?xml version="1.0" encoding="utf-8"?><rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title>Ivanti Blog: Posts by </title><description /><language>en</language><atom:link rel="self" href="https://www.ivanti.com/blog/authors/mike-lloyd/rss" /><link>https://www.ivanti.com/blog/authors/mike-lloyd</link><item><guid isPermaLink="false">df0eabfd-fa6d-4ee5-aa07-8f09e2717296</guid><link>https://www.ivanti.com/blog/ai-cybersecurity-best-practices-meeting-a-double-edged-challenge</link><atom:author><atom:name>William Graf</atom:name><atom:uri>https://www.ivanti.com/blog/authors/william-graf</atom:uri></atom:author><atom:author><atom:name>Mike Lloyd</atom:name><atom:uri>https://www.ivanti.com/blog/authors/mike-lloyd</atom:uri></atom:author><category>Security</category><category>Artificial Intelligence</category><title>AI Cybersecurity Best Practices: Meeting a Double-Edged Challenge</title><description>&lt;p&gt;Artificial intelligence is already showing its potential to reshape nearly every aspect of cybersecurity – for good and bad.&lt;/p&gt;

&lt;p&gt;If anything represents the proverbial double-edged sword, it might be AI: It can act as a formidable tool in creating robust cybersecurity defenses or can dangerously compromise them if weaponized.&lt;/p&gt;

&lt;h2&gt;Why is AI security important?&lt;/h2&gt;

&lt;p&gt;Organizations must understand both the promise and the problems of AI cybersecurity because AI, in all its forms, is now ubiquitous in global business. Its use by bad actors is already a source of concern.&lt;/p&gt;

&lt;p&gt;According to McKinsey, &lt;a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai" rel="noopener" target="_blank"&gt;AI adoption by organizations surged to 72% in 2024, up from about 50% in prior years&lt;/a&gt; across multiple regions and industries. But the intricate nature and vast data requirements of AI systems also make them prime targets for cyber-attacks. For instance, input data for AI systems can be slyly manipulated in adversarial attacks to produce incorrect or damaging outputs.&lt;/p&gt;

&lt;p&gt;A compromised AI can lead to catastrophic consequences, including data breaches, financial loss, reputational damage and even physical harm. The potential for misuse is immense, underscoring the critical need for robust AI security measures.&lt;/p&gt;

&lt;p&gt;Research by the &lt;a href="https://www3.weforum.org/docs/WEF_Global_Cybersecurity_Outlook_2024.pdf" rel="noopener" target="_blank"&gt;World Economic Forum&lt;/a&gt; found that almost half of executives worry most about how AI will raise the risk level from threats like phishing. Ivanti’s &lt;a href="https://www.ivanti.com/resources/research-reports/state-of-cybersecurity-report" target="_blank"&gt;2024 cybersecurity report&lt;/a&gt; confirmed those concerns.&lt;/p&gt;

&lt;div class="flourish-embed flourish-chart" data-src="visualisation/16336537"&gt;&lt;/div&gt;

&lt;p&gt;Despite the risks, the same Ivanti report found that IT and security professionals are largely optimistic about the impact of AI on cybersecurity. Almost half (46%) feel it’s a net positive, while 44% think its impact will be neither positive nor negative.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Read more: &lt;a href="https://www.ivanti.com/resources/research-reports/state-of-cybersecurity-report" target="_blank"&gt;2024 State of Cybersecurity Report - Inflection Point&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;Potential AI cyber threats&lt;/h2&gt;

&lt;p&gt;AI introduces new attack vectors that require specific defenses. Examples include:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;&lt;strong&gt;Site hacking:&lt;/strong&gt; Researchers have &lt;a href="https://www.newscientist.com/article/2418201-gpt-4-developer-tool-can-hack-websites-without-human-help/" rel="noopener" target="_blank"&gt;found&lt;/a&gt; OpenAI’s large language model can be repurposed as an AI hacking agent capable of autonomously attacking websites. Cyber crooks don’t need hacking skills, only the ability to properly prompt the AI into doing their dirty work.&lt;/li&gt;
	&lt;li&gt;&lt;strong&gt;Data poisoning:&lt;/strong&gt; Attackers can manipulate the data used to train AI models, causing them to malfunction. This could involve injecting fake data points that lead the model to learn incorrect patterns or prioritize non-existent threats, or subtly modifying existing data points to bias the model toward outcomes that benefit the attacker.&lt;/li&gt;
	&lt;li&gt;&lt;strong&gt;Evasion techniques:&lt;/strong&gt; AI could be used to develop techniques that evade detection by security systems, such as creating emails or malware that don't look suspicious to humans but trigger vulnerabilities or bypass security filters.&lt;/li&gt;
	&lt;li&gt;&lt;strong&gt;Advanced social engineering:&lt;/strong&gt; Since it can analyze large datasets, an AI can identify targets based on certain criteria, such as vulnerable past behaviors or susceptibility to certain scams. Then, it can automate and personalize an attack using relevant information scraped from social media profiles or prior interactions so it’s more believable and likely to fool the recipient. Plus, generative AI can draft phishing messages without grammar or usage mistakes to look legitimate.&lt;/li&gt;
	&lt;li&gt;&lt;strong&gt;Denial-of-service (DoS) attacks:&lt;/strong&gt; AI can be used to orchestrate large-scale DoS attacks that are more difficult to defend against. By analyzing network configurations, it can detect vulnerabilities and then manage botnets more effectively as it tries to overwhelm a system with traffic.&lt;/li&gt;
	&lt;li&gt;&lt;strong&gt;Deepfakes:&lt;/strong&gt; AI can generate convincing audio or visual imitations of people for impersonation attacks. For example, it could mimic the voice of a high-level executive to trick employees into wiring money to fraudulent accounts, sharing sensitive information like passwords or access codes, or approving unauthorized invoices or transactions. If a company uses voice recognition in its security systems, a well-crafted deepfake might fool these safeguards and gain access to secure areas or data. One Hong Kong company was &lt;a href="https://www.voanews.com/a/deepfake-scam-video-cost-company-26million-hong-kong-police-says/7470542.html" rel="noopener" target="_blank"&gt;robbed of $26 million&lt;/a&gt; via a deepfake scam.&lt;/li&gt;
&lt;/ul&gt;
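&lt;p&gt;The data poisoning threat above can be made concrete with a toy sketch: a nearest-centroid detector is trained on login-attempt risk scores, and an attacker flips the labels on a few malicious samples before training, dragging the decision boundary upward so a real attack slips past. All scores, labels and thresholds here are invented for illustration.&lt;/p&gt;

```python
# Toy illustration of data poisoning via label flipping.
# Low scores are benign activity; high scores are malicious.
# The "model" places its decision boundary at the midpoint of the
# two class centroids. Poisoning the training labels moves it.

def centroid(values):
    return sum(values) / len(values)

def train(samples):
    """samples: list of (score, label) pairs; returns the decision boundary."""
    benign = [s for s, lab in samples if lab == "benign"]
    malicious = [s for s, lab in samples if lab == "malicious"]
    return (centroid(benign) + centroid(malicious)) / 2

clean = [(1, "benign"), (2, "benign"), (3, "benign"),
         (8, "malicious"), (9, "malicious"), (10, "malicious")]

# Attacker relabels two malicious samples as benign before training.
poisoned = [(s, "benign") if s in (8, 9) else (s, lab) for s, lab in clean]

clean_boundary = train(clean)        # 5.5
poisoned_boundary = train(poisoned)  # 7.3: boundary shifted upward

attack_score = 7
print(attack_score > clean_boundary)     # True  -> clean model flags the attack
print(attack_score > poisoned_boundary)  # False -> poisoned model misses it
```

&lt;p&gt;Real poisoning attacks work the same way at scale: a small fraction of corrupted training points quietly biases the learned boundary in the attacker’s favor.&lt;/p&gt;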

&lt;p&gt;A “soft” threat presented by AI is complacency. There's always a risk of over-reliance on AI systems, which can lead to laxity in monitoring and updating them. One of the most important safeguards against AI issues is continuous training and monitoring, whether AI is deployed in cybersecurity or other operations. Ensuring that AI operates with the organization's best interests in mind demands ongoing vigilance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Watch: &lt;a href="https://www.ivanti.com/webinars/2023/generative-ai-for-infosec-hackers-what-security-teams-need-to-know"&gt;Generative AI for InfoSec &amp;amp; Hackers: What Security Teams Need to Know&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;AI cybersecurity benefits&lt;/h2&gt;

&lt;p&gt;AI cybersecurity solutions deliver the most significant value to an organization in the following ways:&lt;/p&gt;

&lt;h3&gt;Enhanced threat detection&lt;/h3&gt;

&lt;p&gt;AI excels at identifying patterns in vast datasets to detect anomalies indicative of cyber-attacks with unprecedented accuracy. While human analysts would be overwhelmed by the volume of data or alerts, AI improves early detection and response.&lt;/p&gt;

&lt;h3&gt;Improved incident response&lt;/h3&gt;

&lt;p&gt;AI can automate routine incident response tasks, accelerating response times and minimizing human error. By analyzing past incidents, AI can also predict potential attack vectors so organizations can strengthen defenses.&lt;/p&gt;

&lt;h3&gt;Risk assessment and prioritization&lt;/h3&gt;

&lt;p&gt;AI can evaluate an organization's security posture, identifying vulnerabilities and prioritizing remediation efforts based on risk levels. This helps optimize resource allocation and focus on critical areas.&amp;nbsp;&lt;/p&gt;

&lt;h2&gt;Security considerations for different types of AI&lt;/h2&gt;

&lt;p&gt;Security challenges associated with AI vary depending on the type being deployed.&lt;/p&gt;

&lt;p&gt;If a company is using generative AI, the focus should be on protecting training data, preventing model poisoning and safeguarding intellectual property.&lt;/p&gt;

&lt;p&gt;In the case of weak (or “narrow”) AI, such as customer support chatbots, recommendation systems (like Netflix’s), image-recognition software, and assembly-line and surgical robots, the organization should prioritize data security, adversarial robustness and explainability.&lt;/p&gt;

&lt;p&gt;Autonomous “strong” AI (aka Artificial General Intelligence) does not yet exist. But if it arrives, companies should focus on defending control mechanisms and addressing existential risks and ethical implications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Watch: &lt;a href="https://www.ivanti.com/webinars/2023/sci-fi-or-reality-how-to-transform-it-service-management-with-generative-ai"&gt;How to Transform IT Service Management with Generative AI&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;Latest developments in AI cybersecurity&lt;/h2&gt;

&lt;p&gt;The rapid evolution of AI is driving corresponding advances in AI cybersecurity that include:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;&lt;strong&gt;Generative AI threat modeling:&lt;/strong&gt; AI cybersecurity tools can simulate attack scenarios to help organizations find and fix vulnerabilities proactively.&lt;/li&gt;
	&lt;li&gt;&lt;strong&gt;AI-powered threat hunting:&lt;/strong&gt; AI can analyze network traffic and system logs to detect malicious activity and potential threats.&lt;/li&gt;
	&lt;li&gt;&lt;strong&gt;Automated incident response:&lt;/strong&gt; AI cybersecurity solutions can automate routine incident response tasks like isolating compromised systems and containing threats.&lt;/li&gt;
	&lt;li&gt;&lt;strong&gt;AI for vulnerability assessment:&lt;/strong&gt; AI can analyze software code to find potential vulnerabilities so developers can build more secure applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;AI cybersecurity courses&lt;/h2&gt;

&lt;p&gt;Investing in AI cybersecurity education is crucial for building a workforce that understands how to use these tools. Numerous online platforms and universities offer courses covering various aspects of AI security, from foundational knowledge to advanced topics.&lt;/p&gt;

&lt;p&gt;Top cybersecurity solution providers will offer &lt;a href="https://advantagelearning.ivanti.com/" target="_blank"&gt;a wide range of courses and training&lt;/a&gt; to give your team the skills it needs to get the most out of your platform.&lt;/p&gt;

&lt;h2&gt;AI cybersecurity best practices&lt;/h2&gt;

&lt;p&gt;A comprehensive strategy for putting AI to work in cybersecurity is essential.&lt;/p&gt;

&lt;h3&gt;1. Set out data governance and privacy policies&lt;/h3&gt;

&lt;p&gt;Early in the adoption process, establish robust data governance policies that cover data anonymization, encryption and more. Include all relevant stakeholders in this process.&lt;/p&gt;

&lt;h3&gt;2. Mandate AI transparency&lt;/h3&gt;

&lt;p&gt;Develop or license AI models that can provide clear explanations for their decisions, rather than relying on “black box” models, so security professionals can understand how the AI arrives at its conclusions and identify potential biases or errors. Such “glass box” models are offered by Fiddler AI, DarwinAI, H2O.ai and IBM Watson tools such as AI Fairness 360 and AI Explainability 360.&lt;/p&gt;

&lt;h3&gt;3. Stress strong data management&lt;/h3&gt;

&lt;ul&gt;
	&lt;li&gt;AI models rely on the quality of data used for training. Ensure you're using diverse, accurate and up-to-date data so your AI can learn and identify threats effectively.&lt;/li&gt;
	&lt;li&gt;Impose robust security measures to protect the data used in training and operating an AI model, as some may be sensitive. Any breaches could expose it, compromise AI effectiveness or introduce vulnerabilities.&lt;/li&gt;
	&lt;li&gt;Be mindful of potential biases in your training data. Biases can lead the AI to prioritize certain types of threats or overlook others. Regularly monitor and mitigate bias to ensure your AI is making objective decisions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Learn about: &lt;a href="https://www.ivanti.com/blog/the-importance-of-accurate-data-to-get-the-most-from-ai"&gt;The Importance of Accurate Data to Get the Most From AI&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;4. Give AI models adversarial training&lt;/h3&gt;

&lt;p&gt;Expose AI models to malicious inputs during the training phase so they’re able to recognize and counteract adversarial attacks like data poisoning.&lt;/p&gt;
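&lt;p&gt;A minimal sketch of the idea, under invented numbers: a threshold detector is retrained on a dataset augmented with perturbed copies of malicious samples (simulating an attacker who shaves its score down by a small evasion budget), so the boundary moves and the evasive variants are still caught. Real adversarial training generates perturbations from model gradients (e.g., FGSM) rather than a hand-picked offset; this is purely illustrative.&lt;/p&gt;

```python
# Adversarial-training sketch for a midpoint threshold detector.
# Samples above the boundary are flagged as malicious.

def mean(xs):
    return sum(xs) / len(xs)

def fit_boundary(benign, malicious):
    return (mean(benign) + mean(malicious)) / 2

benign = [1.0, 2.0, 3.0]
malicious = [8.0, 9.0, 10.0]
epsilon = 3.0  # attacker's assumed evasion budget

naive = fit_boundary(benign, malicious)  # 5.5

# Augment training data with evasively perturbed malicious copies.
augmented = malicious + [m - epsilon for m in malicious]
robust = fit_boundary(benign, augmented)  # 4.75: boundary pulled lower

evasive_sample = 8.0 - epsilon  # a malicious sample after evasion
print(evasive_sample > naive)   # False -> naive detector misses it
print(evasive_sample > robust)  # True  -> adversarially trained detector catches it
```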

&lt;h3&gt;5. Implement continuous monitoring&lt;/h3&gt;

&lt;ul&gt;
	&lt;li&gt;Deploy continuous monitoring and threat detection systems to identify bias and performance degradation.&lt;/li&gt;
	&lt;li&gt;Use anomaly detection systems to identify unusual behavior in your AI models or network traffic patterns to detect potential AI attacks that try to manipulate data or exploit vulnerabilities.&lt;/li&gt;
	&lt;li&gt;Regularly retrain your AI cybersecurity models with fresh data and update algorithms to ensure they stay effective against evolving threats.&lt;/li&gt;
&lt;/ul&gt;
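&lt;p&gt;The anomaly-detection step above can be sketched with a simple statistical baseline: flag any observation that sits more than a few standard deviations from the recent mean. The traffic counts and the three-sigma threshold here are illustrative assumptions, not a description of any particular product.&lt;/p&gt;

```python
# Minimal z-score anomaly detector for hourly request counts.
from statistics import mean, stdev

def zscore_anomalies(baseline, observations, threshold=3.0):
    """Return observations more than `threshold` std devs from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observations if abs(x - mu) / sigma > threshold]

# Hourly request counts from a quiet week (baseline) vs. today.
baseline = [100, 104, 98, 102, 101, 99, 103, 97]
today = [101, 99, 240, 100]  # 240 could signal a DoS ramp-up or a scan

print(zscore_anomalies(baseline, today))  # [240]
```

&lt;p&gt;In practice the same pattern applies to model telemetry too: track a baseline of prediction confidence or error rates and alert when it drifts sharply, which can surface poisoning or degradation early.&lt;/p&gt;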

&lt;h3&gt;6. Keep humans in the loop&lt;/h3&gt;

&lt;p&gt;AI is not infallible. Maintain human oversight, with security professionals reviewing and validating AI outputs to catch potential biases, false positives or manipulated results.&lt;/p&gt;

&lt;h3&gt;7. Conduct regular testing and auditing&lt;/h3&gt;

&lt;ul&gt;
	&lt;li&gt;Routinely assess your AI models for vulnerabilities. Like any software, AI cybersecurity products can have weaknesses attackers might exploit. Patching them promptly is crucial.&lt;/li&gt;
	&lt;li&gt;AI models can generate false positives, identifying non-existent threats. Adopt strategies to minimize false positives and avoid overwhelming security teams with irrelevant alerts.&lt;/li&gt;
	&lt;li&gt;Conduct frequent security testing of your AI models to identify weaknesses that attackers might exploit. Penetration testing expressly designed for AI systems can be very valuable.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;8. Have an incident response plan&lt;/h3&gt;

&lt;p&gt;Create a comprehensive incident response plan to effectively address AI-related security incidents.&lt;/p&gt;

&lt;h3&gt;9. Emphasize employee training&lt;/h3&gt;

&lt;ul&gt;
	&lt;li&gt;Educate employees about the risks associated with AI and how social engineering tactics might be used to manipulate them into compromising AI systems or data security.&lt;/li&gt;
	&lt;li&gt;Conduct red-teaming exercises that simulate AI-powered attacks, which help test your security posture and spot weaknesses attackers might exploit.&lt;/li&gt;
	&lt;li&gt;Collaborate with industry experts and security researchers to stay abreast of the latest AI threats and best practices for countering them.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;10. Institute third-party AI risk management&lt;/h3&gt;

&lt;p&gt;Carefully evaluate the security practices of third-party AI providers. Do they share data with other parties or use public datasets? Do they follow &lt;a href="https://www.ivanti.com/blog/secure-by-design-principles-are-more-important-than-ever"&gt;Secure by Design&lt;/a&gt; principles?&lt;/p&gt;

&lt;h3&gt;11. Other best practices&lt;/h3&gt;

&lt;ul&gt;
	&lt;li&gt;Integrate your AI solution with threat intelligence feeds so it can incorporate real-time threat data and stay ahead of new attack vectors.&lt;/li&gt;
	&lt;li&gt;Ensure your AI solution complies with relevant industry standards and regulations; this is mandatory in certain sectors. In automotive and manufacturing, for instance, an AI must comply with ISO 26262 for automotive functional safety, the General Data Protection Regulation (GDPR) for data privacy and National Institute of Standards and Technology (NIST) guidance. AI in healthcare must comply with the Health Insurance Portability and Accountability Act (HIPAA) in the U.S., GDPR in Europe and FDA regulations for AI-based medical devices.&lt;/li&gt;
	&lt;li&gt;Track metrics like threat detection rates, false positives and response times. This way, you’ll know the effectiveness of your AI and areas for improvement.&lt;/li&gt;
&lt;/ul&gt;
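&lt;p&gt;The tracking metrics in the last bullet fall out of a simple confusion matrix of triaged alerts. The counts below are invented for illustration; the formulas are the standard definitions.&lt;/p&gt;

```python
# Computing detection rate and false positive rate from alert outcomes.

def detection_rate(tp, fn):
    """Share of real threats the AI flagged (aka recall / true positive rate)."""
    return tp / (tp + fn)

def false_positive_rate(fp, tn):
    """Share of benign events incorrectly flagged."""
    return fp / (fp + tn)

# One week of triaged alerts (hypothetical):
tp, fn = 90, 10      # real threats: caught vs. missed
fp, tn = 40, 1960    # benign events: falsely flagged vs. correctly ignored

print(f"detection rate:      {detection_rate(tp, fn):.1%}")       # 90.0%
print(f"false positive rate: {false_positive_rate(fp, tn):.1%}")  # 2.0%
```

&lt;p&gt;Trending these two numbers over time shows whether model retraining is actually improving coverage or merely trading missed threats for alert fatigue.&lt;/p&gt;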

&lt;h2&gt;Win by being balanced&lt;/h2&gt;

&lt;p&gt;For any organization venturing into this bold new AI cybersecurity frontier, the way forward is a balanced approach. Leverage the copious strengths of AI – but remain vigilant as to its limitations and potential vulnerabilities.&lt;/p&gt;

&lt;p&gt;Like any technology, AI is not inherently good or bad; it is used by both good and bad actors. Always remember to treat AI like any other tool: Respect it for what it can do to help but stay wary of what it can do to harm.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Read: &lt;a href="https://www.ivanti.com/company/artificial-intelligence"&gt;Ivanti’s Position on Artificial Intelligence&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
</description><pubDate>Thu, 17 Oct 2024 12:28:03 Z</pubDate></item><item><guid isPermaLink="false">7cef4aa8-a9fe-4aef-9507-5119fec45850</guid><link>https://www.ivanti.com/blog/five-simple-cybersecurity-tips</link><atom:author><atom:name>Mike Lloyd</atom:name><atom:uri>https://www.ivanti.com/blog/authors/mike-lloyd</atom:uri></atom:author><category>Security</category><title>Five Simple Cybersecurity Tips Anyone Can Follow</title><description>&lt;p&gt;&lt;a href="https://www.ivanti.com/" target="_blank"&gt;&lt;img alt="Get expert insights you can't find anywhere else - watch now" src="https://static.ivanti.com/sites/marketing/media/images/blog/2019/10/cta-experts.png"&gt;&lt;/a&gt;October is National Cybersecurity Awareness Month (NCSAM), which was launched by the National Cybersecurity Alliance and the U.S. Department of Homeland Security in October 2004.&lt;/p&gt;

&lt;p&gt;According to the Stay Safe Online NCSAM website, “the theme for 2019 is ‘Own IT. Secure IT. Protect IT.’, helping to encourage personal accountability and proactive behavior in digital privacy, security best practices, common cyber threats and cybersecurity careers.”&lt;/p&gt;

&lt;p&gt;In a world where it can feel like there’s a new security breach almost every day, cybersecurity can be overwhelming. Here are five simple cybersecurity tips that anyone can follow to become more secure.&lt;/p&gt;

&lt;h2&gt;1. Consider using passphrases instead of passwords&lt;/h2&gt;

&lt;p&gt;Passwords are becoming more and more insecure, as many of us use the bare minimum requirements for password length. On top of that, we often reuse the same password across multiple sites. These passwords are typically either extremely easy for us to remember and even easier to crack, or so difficult to remember that we write them down or reuse them. Examples include password1, Br0Nco5# and P#rQ_h67+xL9!&lt;/p&gt;

&lt;p&gt;A solution to this is to use passphrases instead—for two reasons: length and hash tables. An eight-character password, such as our Br0Nco5# example, would take about nine hours to crack using a modern tool. Alternatively, “peanutbutterelephant” would take about 16 billion years to crack using the same tool, even though it has no special characters or numbers.&lt;/p&gt;

&lt;p&gt;Every password has a hash, which acts as a fingerprint for the password. When passwords are cracked using a hash table (which is essentially a giant list of cracked passwords), the password-cracking tool compares the hashes on the list with the hash of your password. Hash tables can consist of millions or billions of strings of characters to compare with your passphrase. By creating a longer passphrase, you greatly decrease the possibility that it will end up on the table.&lt;/p&gt;
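&lt;p&gt;Both points can be demonstrated in a few lines: a miniature precomputed hash table (the giant list, in miniature) cracks a short password instantly but misses the passphrase, and a quick arithmetic check shows that even a lowercase-only 20-character passphrase has a far larger search space than an 8-character password drawn from the full keyboard. The sample passwords come from the article; the tiny table is obviously illustrative.&lt;/p&gt;

```python
# Why length beats complexity: a miniature precomputed hash table
# plus the brute-force search space for each candidate.
import hashlib

def sha256(text):
    return hashlib.sha256(text.encode()).hexdigest()

# Attacker's precomputed table: hashes of common short passwords.
rainbow = {sha256(p): p for p in ["password1", "letmein", "Br0Nco5#"]}

def lookup(stored_hash):
    """Instant crack if the hash appears in the precomputed table."""
    return rainbow.get(stored_hash)

print(lookup(sha256("Br0Nco5#")))              # Br0Nco5# -> short password found
print(lookup(sha256("peanutbutterelephant")))  # None -> passphrase not on the table

# 26 lowercase letters over 20 characters dwarfs ~95 printable
# characters over 8 positions, so brute force also fails.
print(26 ** 20 > 95 ** 8)  # True
```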

&lt;h2&gt;2. Keep your software up to date&lt;/h2&gt;

&lt;p&gt;Each of us has dozens of applications and pieces of software from various vendors. These applications are developed and tested by people, and people make mistakes, which means that some applications can have bugs in them. These bugs can be very small and benign, or they can open huge holes in the security of the product. Depending on the severity of the bug, updates are often released quickly to fix the problem.&lt;/p&gt;

&lt;p&gt;Updating software to the newest version means you’re less likely to have an exploitable bug in your version of the software. This is especially important for software made by smaller teams, as updates usually don’t come out as frequently.&lt;/p&gt;

&lt;p&gt;In addition, since so many attacks can occur through outdated or broken browser settings or add-ons and plug-ins, it’s especially important to keep your browser items up to date.&lt;/p&gt;

&lt;h2&gt;3. Don’t forget mobile device security&lt;/h2&gt;

&lt;p&gt;More and more, our phones and tablets are becoming our main source of productivity. We check our email, use social media, play games, watch cat videos, and more—all through our devices. This also means that more and more attackers are using mobile devices as points of attack.&lt;/p&gt;

&lt;p&gt;One of the easiest ways to help secure your device is to activate a secure way of unlocking the phone. This is done with a PIN or password. Most new devices also have biometric options, such as a fingerprint or face recognition.&lt;/p&gt;

&lt;p&gt;Another thing that is important to remember is application security. App stores generally vet the applications for safe practices, but not all apps are equal. Be very careful that you are downloading a legitimate version of the app. For example, if you see “Candy Crush 47,” take a moment to think about that. How could there possibly be 47 of these? That app is probably suspicious.&lt;/p&gt;

&lt;p&gt;You should also consider mobile data encryption. Most modern phones and operating systems include encryption as standard. However, some information may not fall under that encryption umbrella, so find out what is and isn’t encrypted instead of assuming that everything is OK.&lt;/p&gt;

&lt;p&gt;This last point is a little controversial, but don’t forget about “Find my device” services. Turning on device location services means there is a potential that others can see you, too. However, in doing so, you may be able to track down your phone if it’s ever stolen or misplaced. This is a risk/reward situation.&lt;/p&gt;

&lt;h2&gt;4. Back up your data&lt;/h2&gt;

&lt;p&gt;One of the most frustrating things that can occur is when you start your workday, and upon opening your laptop, you see a message that says, “All of your data belongs to us.” Oh no, you’ve been a victim of ransomware! Can you pay the ransom? Even if you could, don’t. Unfortunately, your only option is to erase your entire hard drive and reinstall the operating system, which means you’ve just lost all of your work. Weeks, months, and possibly years of files, pictures, cat videos (why?)… all gone.&lt;/p&gt;

&lt;p&gt;To prevent such a sad day from occurring, back up your data. This can be done using several methods. Most organizations will offer cloud backup services such as Dropbox or OneDrive. If you need a solution at home, there are several free options, or you can look at purchasing an external hard drive.&lt;/p&gt;

&lt;h2&gt;5. Don’t ever say, “It will never happen to me.”&lt;/h2&gt;

&lt;p&gt;Sadly, this is a phrase heard much too often. It’s assumed that most of us aren’t in danger of having a security issue affect us. We believe that we have nothing an attacker wants, or our security controls are top notch, so we don’t need to worry.&lt;/p&gt;

&lt;p&gt;The truth is, most attackers will go for the easy targets, the low-hanging fruit. This is how social engineering has become so popular. An attacker could spend days or weeks trying to penetrate a system, or they could just sweet talk their way to getting information by sending a phishing email or making a few phone calls.&lt;/p&gt;

&lt;p&gt;Instead, a good recommendation is to practice a heightened state of awareness. However, it must be stressed that this is not the same as paranoia. It’s simply a state where you question the information in front of you with the intention of avoiding social engineering attacks.&lt;/p&gt;
</description><pubDate>Fri, 04 Oct 2019 22:00:28 Z</pubDate></item></channel></rss>