Application Whitelisting Best Practices: Lessons from the Field

October 09, 2013

If you’re like most IT professionals, you’ve probably heard analyst firms like Gartner and Forrester recommend using application whitelisting to defend your endpoints. The latest generation of application whitelisting provides flexible protection against modern, sophisticated malware and targeted attacks. However, application whitelisting is not something you turn on overnight. Attend this in-depth technical webcast as we dive into the latest technologies, including reflective memory protection, and other whitelisting approaches, to learn best practices to begin preparing for your 2014 endpoint security strategy and the inevitable transition from traditional signature-based protection to a holistic solution that incorporates whitelisting.

Transcript:

Chris: Hello, everyone, and welcome to this Lumension In-Depth Technical Webcast on "Developing Best Practices For Application Whitelisting." My name is Chris Merritt, and I'll be your moderator for today's event. As you know, endpoint security is evolving. New vulnerabilities are being disclosed every day, new malware creation is exploding and getting much more sophisticated, and traditional signature-based AV cannot keep up. You know that patch management and AV are necessary, but not sufficient, layers of an endpoint defense. This has led to a resurgence of interest in whitelisting, which is used in a lot of other places in your security arsenal, like firewalls, or database security platforms, or email filters, and so forth. 
 
Whitelisting works by permitting only those files you explicitly allow to execute. This contrasts with AV, which operates on a blacklisting principle, meaning that any file which is on the list is not allowed to execute. Obviously, the list of permissible files is far shorter than the list of stuff that isn't allowed to run these days. And whitelisting can also address other issues on the endpoint, like uncontrolled local admin settings, new or potentially unwanted programs, or, again, malware. Intelligent application whitelisting is an important addition to your risk mitigation strategy, and taking prudent measures to establish a best-practices approach can help reduce costs and risks in the long term. 
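To make the contrast concrete, here is a minimal sketch of the two models in Python. This is purely illustrative; keying the lists by file-content hash is my assumption for the example, not a description of Lumension's implementation.

```python
import hashlib

def sha256_of(path):
    # A content hash, not a filename, identifies the file, so renaming
    # a binary does not change how it is judged.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def blacklist_allows(path, blacklist):
    # Blacklisting (the AV model): execute anything NOT on the known-bad list.
    return sha256_of(path) not in blacklist

def whitelist_allows(path, whitelist):
    # Whitelisting: execute ONLY what is on the known-good list.
    return sha256_of(path) in whitelist
```

Note how the default differs: a brand-new, never-seen file runs under blacklisting but is stopped under whitelisting, which is exactly why whitelisting holds up better against novel malware.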
 
In today's webcast, we're gonna take a deep dive into the best practices for implementing application control, or whitelisting, on your endpoints, developed by Lumension over hundreds of customer engagements, and get a real-world perspective from the trenches. This process, which is flexible and simple enough to be adapted to your environment, revolves around getting your endpoints into a safe and secure mode, or as we call it, "into lockdown." This includes laying the groundwork for a successful application whitelisting process, creating the policies, protecting the endpoints, and managing the environment. 
 
Now before we move on, let me tell everyone that we'll be sending a link to the slides and the recording of this webinar in the next couple of days. We'll also send you a copy of our Application Control Best Practices Guide whitepaper, the second version of this now, when it becomes available. So be looking for that in the next couple of days. Joining me on today's webcast is David Murray, who is the Group Product Manager for Endpoint Protection. David has over 15 years of product management experience, and since joining Lumension in 2009, he has integrated both the anti-virus and application control modules into the Lumension Endpoint Management and Security Suite. Welcome, David.
 
David: Thank you. 
 
Chris: So before we hand it off, or move into the main part of our presentation, a quick housekeeping note. If you have any questions or comments at any point during the webcast, please submit them via the question tab at the top of your screen there. We're gonna answer them as we go. So please, give us your questions. We want this to be as interactive as possible, because it's our hope that you're gonna walk away with some actionable ideas to hone your application whitelisting process. So your comments and questions are going to be key in guiding this conversation. 
 
So let's set the stage a little bit before we move into the best practices. The endpoint is now a primary entryway for malware into your network. We know that it's under increasing attack today, from physical attack via rogue USB devices to network attacks via viruses and malware, which mostly exploit vulnerable third-party apps. This problem seems to be escalating every day, to the point where we're seeing diminishing effectiveness of the AV-only approach. AV is no longer the best and only defense needed. In fact, recent data show that the average detection rate of new malware after 30 days is only 62%, and the best rate is 93%. 
 
So this means that if you happen to have the best AV solution installed for any particular time or piece of malware, you're still gonna see about 7% of the malware get through. But of course, since most of us rely on only a single AV solution, our results will vary, mostly for the worse. Malware is really exploding. Here's some data from avtest.org for the last 12 months, through June. And you see that the average has really risen dramatically in the first six months of 2013 versus the last six months of 2012. And we see in this slide that the increase in the last 12 months is about 60%. It's phenomenal growth. 
 
And in fact, the sources of endpoint risk, according to John Pescatore at Gartner, constitute a lot of basics that we need to cover: 65% due to misconfigurations, 30% due to missing patches, and only 5% due to zero-days. So we really need to focus on our basic blocking and tackling. Why do we care about malware? Well, it's a pain obviously, but it costs us money. Each piece of malware that gets into the network hits your bottom line via the impact on your everyday operational costs, wasted bandwidth, and IT resources. And that's not even looking at the costs associated with a data breach. 
 
So when we implement a whitelist approach, we reduce the uncontrolled change we allow in our environment, which not only protects your data, but increases your operational efficiency and improves your cost effectiveness. That allows you to shift the focus of your manpower and budgetary resources to other strategic initiatives, the things that CEOs, and CFOs, and folks like that, understand and care about. So let me close my part out here by saying we believe there's no single technology that's a silver bullet that will magically protect your endpoints against all malware. It takes several overlapping technologies working together to protect against today's threats.
 
And this defense-in-depth approach to your security does not begin and end with AV. Rather, it starts on a foundation of solid patch and configuration management, adds whitelisting to prevent malicious or merely unwanted applications from executing in your network, and uses AV, which now, instead of doing the bulk of the heavy lifting, is basically there to clean up after malware that's been blocked by whitelisting. 
 
So with that said, before I hand it over to David, I'd like to push our first poll of the day. What is your current impression of application whitelisting? A, don't know, this is my first exposure to it, in which case, welcome. B, it's a server-side-only technology. C, it's difficult to implement on desktops. And D, it's an effective security solution. So I'm gonna leave this up for a little bit as I hand it over to David. 
 
David: Thanks, Chris. To the next slide here. Okay. So over the next few slides I'm going to take you through the best practices that we've established and lessons that we've learned from discussions with numerous customers following their implementations of application whitelisting. We're gonna start out by summarizing the application whitelisting process, which you can see on this slide. So starting at the top, the first step in this process is to lay the groundwork to prepare endpoints ahead of introducing application whitelisting, and then scanning those endpoints to create the whitelist. 
 
Next, we create policies so the changes can take place on these endpoints. We refer to this as "trusted change," changes that have been authorized by a trust engine. These policies prepare the endpoints to be put into a lockdown or enforcement mode. As I'll point out throughout this presentation, investing in this phase is key to the success of the implementation. While there is always the temptation to fast-forward through the process and jump straight into lockdown, taking the time to create and adjust the policies really saves effort in the longer term. The more you do upfront, the less you have to do later, and the more successful the implementation will be.
 
Once the policies have been fully implemented, you're ready now to move onto the next phase which is to protect endpoints by putting them into an enforcement or lockdown mode. From this point on, only those changes which are supported by policies will be allowed. If you've done the groundwork in creating your policies, this should be a seamless transition. However, there probably will be something you didn't anticipate, so you will need to fine tune your policies. Finally, there will be a need for ongoing management. While policies will cater for the majority of software changes that take place on your endpoints, other changes will also occur. Businesses will change, user needs will change and you need to be able to adapt to these changes, and you need to develop processes to facilitate them. 
 
So let's move forward to look in more detail at each of these phases and the associated best practices. In this first phase, we're going to lay the groundwork for introducing application whitelisting by patching endpoints and cleaning them of malware. Once we are done, we go ahead and create the endpoints' whitelist and put the endpoints into an audit mode. We also identify the applications that are executing on those endpoints, and we can then achieve an immediate benefit by preventing any applications from running that we don't want to have in our environment. 
 
Okay, before you start to install or deploy Application Control, you should have a clear understanding of what you want to accomplish. Application Control is very flexible and can be used for a wide variety of enforcement goals. It's successfully used in organizations with policies ranging from extremely strict to minimally intrusive. Determining what levels of enforcement your organization needs in the beginning will best prepare you for a successful implementation. 
 
So as you can see on the slide here, organizations typically fall into one of three categories: permissive, moderate, or stringent. Determining which category your organization fits into provides a good baseline perspective when determining how to configure Application Control policies as you deploy. If your organization has a written policy, then determining what you want to do should be straightforward. Simply identify the allowed usage cases in the policy and make a list of those cases. You can then create a policy for each case. If your organization doesn't have a written policy, you'll have to determine the appropriate level of permissiveness for your organization. 
 
Okay. So starting with permissive. These organizations are looking to do little in terms of enforcement. The primary goal is usually auditing and reporting of user activity, or the need to address a very specific issue such as denying access to certain applications or application types, for instance, maybe peer-to-peer applications. So the written data security policy at these organizations is usually informal or brief. External regulations and compliance concerns are minimal or nonexistent. 
 
The next category you may fall into is moderate, and actually most organizations fall into this category. So these organizations typically have a written data security policy. They want to be able to enforce that policy without relying on voluntary user compliance. Their goal is to improve overall security against malware while maintaining end-user flexibility. So these organizations typically have some external audit or compliance needs they must address. 
 
And then finally, the third category is stringent. So these organizations deal in highly confidential information; they're typically very closely monitored, either from within the organization itself or possibly by an external authority. The goal of these organizations is to prevent all but carefully vetted applications from running on endpoints, except for very specific cases which are allowable according to their data security policy. To a certain extent, end-user flexibility is sacrificed in order to maintain a more secure environment. Chris, you've got a good number of votes at this stage. Do you want to look at the poll results now, or should we keep going?
 
 
Chris: Yeah, let's do that. I'm gonna stop the polling, and it looks like we have a pretty good mix here. Forty-two percent say that whitelisting is an effective security solution, 26% say it's difficult to implement on desktops, 21% say they don't know, this is their first exposure. And 11% say it's a server-side-only technology. So pretty good mix there. 
 
David: Pretty good mix. Those are some of the messages I guess we've heard in the past. A lot of people think it's difficult on desktops, more geared toward servers. And that's an attitude we're working to change. And hopefully, you'll take some messages from this presentation that might help with that. 
 
Chris: Great. Let's move on.
 
David: Okay. So back to laying the groundwork. So prior to introducing whitelisting or application control, organizations should ensure that all applications are fully up to date and patched, especially with security patches, and also conduct a deep scan of endpoints for malware to ensure that no malware ends up on the whitelist. Something to consider when you're doing this: you do want to avoid end-user disruption where possible. So conduct these patch and clean scans during off-hours, because these steps, patching and scanning endpoints, will impact endpoint resources. 
 
There are some good reasons to take this approach, patching and cleaning before introducing whitelisting, versus an alternative approach where we have to take endpoints back from users. By doing it this way, organizations can avoid the need to reimage each endpoint, which can take tremendous time and effort, impact productivity, or limit the rollout if each user has to wait for a hardware refresh before implementing Application Control. And organizations can also avoid the need to use a one-size-fits-all gold image on all the endpoints, because after all, different departments have different needs. Your finance team has tremendously different application needs from those of your engineering team. 
 
Okay. So let's start with the patching. So ensure that all of the applications on the endpoints are fully patched. Make sure you get all of the security and configuration patches installed, not only on the operating system itself, but on all of the third-party applications, like your CRM applications, accounting software, and so on. Depending on how stringent your existing patching process is, this might take some time and effort, and a number of reboots. So as discussed already, it's probably best to do this during off-hours so as not to impede end-user productivity. 
 
And in addition to being preparation for introducing Application Control, it enhances your fundamental security, and it also will minimize the remedial work needed after Application Control is installed. So this gives you a good basis from which to build your new whitelisting regime. Okay. So once you have the patching completed, the next step is to conduct a deep malware scan. Now, you probably already conduct anti-virus scans on a regular basis, but what we want to do now is conduct a very thorough scan to identify and remove any dormant malware on the endpoints. For example, there could be malware buried deep within archives. 
 
So take a look at the antivirus options available to you and select the more thorough scan options, such as scanning within archives, scanning the boot sector, scanning memory, and so on. These are options you might not normally select because they cause scans to run longer and the...
 
[00:08:14]
[silence]
[00:21:10]
 
David: Hi, Chris. I'm back. Looks like my line dropped. Can you hear me okay? 
 
Chris: Yes. Welcome back, David. 
 
David: Okay. Sorry about that. So let me get back to where I was. So we're talking about doing a deep malware scan. And I think the point I was making when I dropped was, this is likely to utilize a significant amount of resources on your endpoints. So do this off-hours, maybe over a weekend, when end-users are not going to be using the endpoints, and avoid any end-user disruption. If you do need to execute it during working hours, ensure you communicate with the users so they are aware of what's happening. And if your solution allows it, throttle back the CPU usage to minimize the impact while the scan is taking place. 
 
A general best practice, a message we've heard from our customers in terms of being successful with the rollout, is to communicate with your user population throughout the rollout of Application Control. Doing that ensures that the rollout goes smoothly, as users know what to expect. So doing the scan allows you to introduce Application Control to as clean an endpoint as possible without having to resort to reimaging it, and thus impacting end-user settings and any specific applications that might be lost, not to mention, of course, the time, effort, and lost productivity that reimaging causes. 
 
So this is not to say the endpoints will be perfectly clean. They may or may not be by the time you complete the patching and antivirus scan, but you're as clean as possible. And once Application Control is running, you're at least assured nothing new is getting onto the box. So having patched and cleaned our endpoints, we're ready now to introduce Application Control, create a whitelist, and put the endpoints into an audit mode. So to do this, we're gonna scan the endpoints using what we call "an easy auditor scan" to create the initial whitelist. 
So we want to simplify the whitelist creation by taking a snapshot of each endpoint. There is no need to rely on potentially out-of-date images, and no need to revert end-user-installed applications or settings. The scan will also identify the applications currently running in your environment. This is what's really happening in your environment, and it can actually be quite eye-opening. By its nature, scanning an endpoint is a fairly disk-intensive activity, so it can have a performance impact. Similar to the antivirus scan, if you have the flexibility to execute the scan out of hours, you should do so to avoid any impact on your users. Otherwise, good communication with users ahead of time so they know what's going on. 
 
Okay. So some tips for your initial endpoint scans. In this phase, we recommend you start with a small number of endpoints, typically less than 10, so the administrator can review the initial logs without being overwhelmed with data. Once the application control policies have been defined, the number of endpoints can then be expanded. Also, the endpoints should be selected from a good cross-section of different departments, so HR, IT, sales, engineering, and so on, and also from different operating systems, if you use different operating systems in your environment. The result is that the application library is going to be populated with a diverse range of applications. You're gonna get enough information from a small number of endpoints to create policies, and your administrators won't get overloaded with log data. The experience and policies from those initial endpoints can then be applied to additional endpoints as you expand your rollout.
 
Okay. So you've done your scan. What's coming back into the application library? So in addition to creating the initial whitelist, one of the reasons that you scan endpoints is to understand what applications are being used. Your scans will return thousands of files back into the application library, DLLs, EXEs, and so on. And you want to organize those into applications, so you have things like Adobe Acrobat Reader, Microsoft Excel, and so on, and you can manage on that basis. 
 
Once you've got your applications in place, you might want to organize these applications into groups, maybe by business purpose, like accounting applications, marketing applications, and so on. Or maybe by application type: peer-to-peer applications, comms programs, drivers, and so on. Whatever really makes sense for your organization.
 
As you scan more endpoints, you should also note that the application library will grow in size and diversity. In order to make decisions in the application library, and we'll see this probably in a couple of slides, you can leverage file reputation data, which is obtained by comparing the file hashes you've got in the application library with a cloud-based hash repository as you categorize these files. As indicated, this is obtained from the Lumension Endpoint Integrity Service, and the ratings it provides enable you to make decisions, as we'll see later. 
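As a rough illustration of how that hash comparison could work, here is a hedged Python sketch. The `reputation_lookup` function is a hypothetical stand-in for the cloud hash-repository query, and the numeric scoring threshold is invented for the example; neither reflects the actual Endpoint Integrity Service interface.

```python
import hashlib

def file_hash(path):
    # Stream the file in chunks so large binaries don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def categorize(paths, reputation_lookup, trust_threshold=8):
    # reputation_lookup(hash) returns a confidence score, or None when the
    # hash is unknown to the cloud repository. Files below the threshold
    # (or unknown) land in the "unknown" bucket for manual review.
    trusted, unknown = [], []
    for p in paths:
        score = reputation_lookup(file_hash(p))
        if score is not None and score >= trust_threshold:
            trusted.append(p)
        else:
            unknown.append(p)
    return trusted, unknown
```

The point of the sketch is the triage: well-known, highly rated files can be whitelisted in bulk, leaving the administrator to spend time only on the unknown remainder.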
 
So the reason you're doing this is to make policy management easier and more intuitive. It allows you to create blanket policies and then manage the exceptions. So for instance, allow all users in the accounting group access to the accounting packages while, on the one hand, not allowing access to these applications for non-accounting personnel, and on the other hand, giving you flexibility to add applications on an as-needed or exception basis. So maybe allow the chief financial officer to use his or her favorite IM tool, but nobody else can access it. 
 
Okay. So here you can see what the application library would look like as you organize your scanned files into applications and application groups. Again, the lesson learned here is, organize your files as you go rather than leaving it for a period of time. This will ease the downstream workload, especially when it comes to policy creation. This step isn't actually mandatory. Some organizations, just looking for easy lockdown with trusted updaters and minimal overhead, don't need to create application groups unless they explicitly want to deny them. However, some organizations with more ambitious plans will find that taking the time to organize the thousands of files will give them greater flexibility and capability down the road. 
 
I mentioned earlier the reputation ratings, or the scores. So you can use these to help make some initial trust decisions on the files being collected in your endpoint scans. When you're trying to decide whether to authorize or deny files or applications, you want to be able to understand whether this is a file you can trust. Is the file really what it claims to be? So you've already got some file metadata. As you can see here: the file name, the file path and the endpoint, whether it's signed or not. And that's all good information. But what would be really useful is if you could validate these files. And that's what you're seeing here, a validation or verification of the files in your environment against the cloud database. 
 
So these ratings really are the level of confidence that these files are what they say they are and you need to have that type of capability for decision making. And this is especially true as you put together your denied applications policies, which we're gonna go on and discuss next. Okay. So with denied applications policies, you can get an instant benefit even before you move into lockdown. What are those applications that you don't want to have running in your environment?
 
So these policies can be applied to all users, or they can be limited to specific groups as appropriate. You've got unknown software, which would include files or applications which are neither matched in your environment nor highly rated by EIS. Unwanted software might include applications like hacking tools, music streaming software, insecure instant messaging, voice-over-IP applications, and so on. So, many customers have reported tremendous benefits from just implementing this one step of introducing denied applications policies. 
 
One additional powerful use of the denied applications policy is to block the spread of malware, especially prior to getting updated signatures from your antivirus vendor. For instance, a known-bad file is reported by a user, but it's not being picked up by their antivirus. In this case, Application Control can immediately show how many systems have this file and block it across all of the managed machines. 
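The "find it everywhere, then block it everywhere" idea David describes can be sketched like this. This is illustrative Python over a hypothetical scan inventory, not the product's actual API; the endpoint names and data shapes are invented.

```python
def systems_with_file(bad_hash, inventory):
    # inventory maps each managed endpoint to the set of file hashes
    # gathered by earlier scans, so exposure can be measured at a glance.
    return sorted(ep for ep, hashes in inventory.items() if bad_hash in hashes)

def deny_everywhere(bad_hash, denied_hashes):
    # Adding the hash to the shared denied set blocks the file on every
    # managed machine, ahead of any AV signature update.
    denied_hashes.add(bad_hash)
    return denied_hashes
```

Because the block is keyed on the file hash, it takes effect regardless of what the malware is named or where it lands on each endpoint.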
 
Some additional tips when creating denied applications policies. So unwanted software can be installed onto a test endpoint, scanned by an easy auditor scan, and grouped in the application library for instantaneous effect. It doesn't have to exist on an end-user's box, so you don't have to rely on luck or chance to block unwanted applications. And the other important point is that denied applications policies can be applied to users even prior to endpoints being put into easy auditor or easy lockdown. They can be implemented for any endpoint which has the Application Control module installed. 
 
Okay. So if you've gone ahead and set up some denied applications policies and push these out, when users now try to run these applications, they're gonna get blocked. So one of the best practices here is to communicate out to users ahead of time, and let them know what's going to be blocked and why it's going to be blocked. And you also need to have an escalation strategy defined in the event that you block something that's going to be needed. It's always difficult to anticipate user needs and some users might have a legitimate need for an application that you've decided to block.
 
In terms of the message that you present to the user when applications are blocked, you should customize this message, if possible, so that you can add things like your own company logo, so the user is aware that this is a message from your organization. Add a customized message to inform the user why this application is being blocked and what steps they should take if they need the application. And you can also add a URL, which could link to a helpdesk ticketing system or maybe a repository of approved software. So don't rely on users remembering what the process is; deliver that message to them at the time you block the application. 
 
Okay. So as you're laying the groundwork for your transition to a whitelisted environment, you need to start working on the scope and means of end-user communications. As with every organizational change, this will take some time for your users to assimilate. Of course, you already have executive backing for the project or you wouldn't be rolling it out. But as you lay the groundwork for implementation, you want to make sure an executive champion makes a visible commitment to the project and is included in your communications program.  
 
So starting with user communications, your users may be accustomed to relatively unfiltered control of their endpoints. While they aren't malicious, some of the applications they use may not fall into the legitimate-usage category in your organization. If you plan to start blocking applications which users could previously use, you will be more successful if you complement the rollout of enforcement with an informational campaign. So the first step is to have a clear policy. Users will want to know exactly what the rules are which are newly being enforced. Along with this, you need a reasonable explanation of why this enforcement is being put in place. That messaging could include things like the protection of users from malware being introduced into the environment through unapproved applications, which will translate to better uptime, better system performance, and better productivity. 
 
Maybe it's the protection of users and the organization against the loss of sensitive data, be it corporate intellectual property, customer or employee personal information, or just general sensitive information that you would have. Maybe it's the need for the organization to comply with regulatory requirements, or to avoid statutory fines and lawsuits by ensuring the network is secure. So really, there's a myriad of other reasons your organization has for enforcing its data security policy; just be sure to have messaging which explains these in beneficial terms. So go ahead and communicate your message by email, company newsletters, posters in common areas, and similar methods, well in advance of enforcement. Allow users to ask their questions and have their concerns addressed before you start to enforce. 
 
And as you start your enforcement program, be sure to keep the communication channels open so users are clear on your security policy, how it's being enforced, and how to handle any special cases they may encounter. Another important element of a successful deployment is an executive sponsor. The change will be more readily embraced when users see that the executive team is behind the initiative. A supportive and positive message from the executive level has been proven to increase the success of any new type of policy enforcement. 
 
This might not actually seem that important, but in terms of our lessons learned, when we interviewed customers about their deployments and what had been important in terms of achieving a successful rollout, a lot of it came down to user communications and executive support. One of the more common pieces of feedback we received was, there was a clear management directive to make this work, and that was why it was successful. 
 
Okay. Now that you have a number of endpoints in audit mode, you need to go ahead to the next step, which is to create policies to allow trusted change to take place on those endpoints. While you're in audit mode, you're going to have endpoint logs, which will tell you whenever changes have taken place on the endpoints. A change has happened anytime something runs that isn't on the whitelist, and that creates a log entry. So you can wait for those logs to happen, or you can get ahead of the logs and start to put policies in place for applications that you know are going to change. In terms of best practices, you should develop a strategy for all applications in your environment and decide how you're going to allow change to occur for them: whether you allow users to update them directly on the endpoint, or whether you block automatic updates and update each application centrally and push out new versions. 
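The audit-versus-lockdown behavior described above can be summarized in a few lines. This is an illustrative Python sketch of the decision, not the actual agent logic; the mode names and return values are invented for the example.

```python
def handle_execution(exe_hash, whitelist, mode, log):
    # Whitelisted files always run. Anything else generates a log entry;
    # audit mode lets it run anyway (so you can study the logs and build
    # policies), while lockdown mode blocks it outright.
    if exe_hash in whitelist:
        return "allowed"
    log.append(exe_hash)
    return "allowed" if mode == "audit" else "blocked"
```

Audit mode is therefore a dry run of lockdown: the logs it produces are exactly the set of executions that would have been blocked, which is what you use to build your trusted-change policies before enforcing.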
 
I've mentioned earlier that you should avoid the temptation to rush into lockdown. Customers obviously want to go into lockdown as quickly as possible, but you really need to invest the time to create policies to support trusted change before you go into lockdown. Doing this will save you from a lot of headaches and support calls in the long run. As a best practice, I would recommend that you remain in audit mode for at least a month, and that should cover at least one patch cycle and, ideally, a major event like a quarter end. This time period and these events will ensure you have enough time to stabilize your policies before going into lockdown. Okay. So before we go ahead and look at these policies, Chris, do you wanna conduct another poll?
 
Chris: Yeah. Let's push this next poll. Curious to know, in our audience, how many auto-updaters do you have on your average endpoint? A, one to two. B, three to five. C, six to 10. D, 11 or more. And E, don't know. So we'll leave this up a little bit, David, and let you continue. 
 
David: Okay. We'll come back and look at those in a little while. So let's start looking at the policies. The first and probably the most powerful of the trust engine capabilities is Trusted Updater. Updaters are the application executables that perform software updates on your endpoints. Files installed by a trusted updater are added directly to the whitelist. If an updater isn't a trusted updater and it runs, it will still perform software updates and update your applications. However, because it is not a trusted updater, those updated application files won't be whitelisted, and they will be blocked when you try to run them once you're in enforcement mode. So an important thing to remember: updaters update the endpoints, but they only update the whitelist if they're trusted updaters.
 
Because of that, every updater you've got in your environment should either be explicitly denied or added as a trusted updater prior to lockdown. So what do we use Trusted Updater for? We use Trusted Updater for software distribution or patch remediation tools like our own Lumension Patch and Remediation, Windows Update, WSoft, Tivoli, and so on. You would use it for a third-party antivirus solution: Lumension AntiVirus, Kaspersky, McAfee, Symantec, and so on. And also for self-updating applications, the likes of Firefox and iTunes that update themselves. Why would you add these tools to Trusted Updater? If an updater is a trusted updater, it automates the whitelist maintenance, reduces your workload, and reduces the risk of human error. So in terms of best practices, the more updaters that you can trust, the easier your life is going to be as an administrator. 
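To make the trusted-updater mechanic concrete, here is a minimal, purely illustrative Python sketch. None of these names reflect the product's actual API; it just models the rule that files installed by a trusted updater land on the whitelist, while files installed by any other updater do not and will be blocked in enforcement mode.

```python
import hashlib

# Hypothetical in-memory model, for illustration only.
whitelist = set()                          # SHA-256 hashes of allowed files
trusted_updaters = {"sha256-of-updater"}   # hashes of updaters we trust

def file_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def record_update(updater_hash: str, new_file_bytes: bytes) -> bool:
    """Files written by a *trusted* updater are added to the whitelist.
    Files written by an untrusted updater are not, so they would be
    blocked later once the endpoint is in enforcement mode."""
    if updater_hash in trusted_updaters:
        whitelist.add(file_hash(new_file_bytes))
        return True
    return False
```

In this toy model, `record_update("sha256-of-updater", data)` whitelists the new file, while the same call with an unknown updater hash leaves the whitelist untouched, which mirrors the "updaters update the endpoint, but only trusted updaters update the whitelist" point above.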
 
So some tips as you develop your Trusted Updater policies. First of all, for Windows Update, be sure you have trusted Windows Update on endpoints, if you're using it. Failure to do so could cause some problems, potentially even unusable endpoints. We do have a pre-built Windows Update policy in Application Control to facilitate that, so that makes it fairly simple. If you're using Lumension Patch and Remediation, that's enabled by default, so no additional configuration is required.
 
For other software distribution or patch remediation tools, if you're using those, you need to add those particular executables to the Trusted Updater policy. Where it gets more interesting is with things like third-party antivirus solutions. Antivirus solutions are updated every day with new signature files and occasionally with new engine files, so you do need to create a trusted updater policy to allow for that change. It's important to note, however, that you need to trust the antivirus updater and not the antivirus scan engine. If you end up trusting the antivirus scan engine, every file that the scan engine touches could potentially be added to the whitelist. And that really has the same effect as turning off Application Control, which wouldn't be a good thing. 
 
Also browsers. Similar to antivirus solutions, you shouldn't trust the browsers themselves, Firefox, Internet Explorer, and so on. That has the same effect as trusting the entire internet. Instead, you should add the browser's updater, the updater for Firefox, for example. You add these to the Trusted Updater policy to support browser updates. Various applications will update themselves, so we have a concept of self-updating applications. Each application will typically have one or more updater files which are used to perform these updates. These updater files will already have been whitelisted by the Easy Auditor scan, so they can execute. But you need to make these updaters trusted if you want the application updates to get onto the whitelist. 
 
So in the case of Apple, they have a file called "Software Update," and if you make that a trusted updater, it will take care of those updates. Another important point to note for Trusted Updater is that there can be many different variants of the same updater file on different endpoints, even though they're on the same operating system. So just because you have the updater files from one endpoint in your policy doesn't mean you have a solution for all endpoints. And you just can't use the name of the updater file; that wouldn't be a very secure approach. So what we use is a hash-based approach, which is far more secure. 
 
So you need a tool that uses the file hashes for all of the variants, and that's the reason you need to get all of those into the policy. This is also why you want to remain in audit mode for an extended period of time: to ensure that you haven't missed any of these variants. And the way you'll know that you've missed one is that once it runs, log entries will be created, and then you can go and update your policies. 
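As a hypothetical illustration of why a hash-based policy needs every variant, here is a short Python sketch. The `hashlib` module is standard library; the variant data and policy set are made up for the example.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Two endpoints carry an updater with the same *name* but different
# builds, so the bytes (and therefore the hashes) differ.
variant_a = b"updater build 1001"
variant_b = b"updater build 1002"

# A name-based policy would treat these as one file. A hash-based
# policy must list every variant explicitly; here only variant A
# has been captured so far.
trusted_hashes = {sha256_of(variant_a)}

def is_trusted(data: bytes) -> bool:
    return sha256_of(data) in trusted_hashes
```

Variant B fails the check, which in audit mode would surface as a log entry, your cue to add its hash to the policy before lockdown.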
 
That's Trusted Updater. It's a pretty important mechanism to use in terms of automating the whitelist maintenance, but let's move on to look at some of the other policy types. 
 
So the next one is Trusted Publisher. Trusted Publisher allows applications to execute based on their digital signing certificates. To support software changes, I would recommend your first approach should always be via Trusted Updater. But Trusted Updater doesn't work for all applications, and this is where you could consider Trusted Publisher as an alternative. Trusted Publisher is typically used for cloud-distributed applications which don't reside on disk until runtime. Examples of such applications would include things like WebEx or GoToMeeting. Trusted Publisher can also be used for browser plugins, which generally don't have updater tools, so they don't lend themselves well to Trusted Updater. And it can also be used for in-house signed custom applications. This provides a really easy way to trust changes to your own in-house software: if it's signed with your certificate, then it's authorized to run. 
 
Some tips as you develop your Trusted Publisher policies. A file which is authorized to execute by Trusted Publisher will be allowed to load all dependent processes. They don't have to be signed; only the initial executable needs to be signed. Note also that most software vendors have multiple certificates, and not all certificates from the same vendor are authorized, only the specific certificates on the policy. So you need to get all the relevant certificates into the policy. If you run into issues, check to see if the certificates match or if they need to be updated. 
 
Trusted Publisher can be used to run programs and install applications, and those installed applications will run as long as the initial executable is signed as well. But as I've already said, a general best practice is to use Trusted Updater for installs where possible, as Trusted Publisher doesn't actually update the whitelist, unlike Trusted Updater. 
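For illustration only, the Trusted Publisher idea can be modeled like this in Python. Real products verify Authenticode-style signatures through platform APIs; here the certificate name is just a field in a hypothetical metadata dictionary, which is enough to show the "only the specific certificates on the policy" rule.

```python
# Certificates named in the hypothetical policy. A different certificate
# from the same vendor would NOT be implicitly trusted.
trusted_certs = {"CN=Example Corp Code Signing CA"}

def allowed_by_publisher(file_meta: dict) -> bool:
    """A file runs under Trusted Publisher only if its signing
    certificate exactly matches one listed in the policy.
    Unsigned files (no certificate) are never matched."""
    cert = file_meta.get("signer_certificate")
    return cert in trusted_certs
```

So a file signed with a second, unlisted certificate from the same vendor is rejected, which is why the tip above says to get all the relevant certificates into the policy.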
 
Okay. So moving on to the next trust mechanism, Trusted Path. Trusted Path authorizes applications to run based on their location. In many respects, this is your fallback if you're unable to accommodate change via either Trusted Updater or Trusted Publisher. Trusted Path allows execution of an application if it's stored in one of the paths specified in the policy. 
 
So you would use this for unsigned executables that change frequently, maybe where every install of the application is unique, and also for shared network paths, for example, build output locations for in-house software development. Now, allowing applications to execute based on their location, based on their path, might not seem like a particularly secure solution. But you have some additional security features built in. You can specify ownership restrictions, an authorized owner, so that a file can only execute in the Trusted Path if its owner is specified in the policy. And you can also secure files in these paths with operating system privileges, so you can restrict who can access these paths and who can write files to them. That combination greatly increases the security of this trusted change policy while still giving you some flexibility. Okay, Chris, let's have a look now at the results for the latest poll.
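A rough sketch of the Trusted Path rule with an ownership restriction, assuming a hypothetical policy structure rather than the product's actual format:

```python
# Each hypothetical rule pairs a path prefix with an authorized owner.
TRUSTED_PATHS = [
    {"path": "/opt/builds/", "owner": "buildsvc"},
]

def allowed_by_path(file_path: str, file_owner: str) -> bool:
    """Execute only if the file sits under a trusted path AND is owned
    by the authorized owner named in the policy. Requiring both checks
    is what keeps a path-based rule from being a free pass: an attacker
    who can drop a file into the path still fails the owner check."""
    for rule in TRUSTED_PATHS:
        if file_path.startswith(rule["path"]) and file_owner == rule["owner"]:
            return True
    return False
```

Combined with OS-level write restrictions on the directory itself, this is the layered check the talk describes.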
 
Chris: Yeah. Let's take a look at those. So it looks like 33% of folks have one to two automated updaters on their endpoints. Next, three to five at 24%, and don't know at 24%. And then trailing in at 10% each, it's six to 10 and 11 or more. So does that kind of jibe with what you've seen out in the field, maybe?
 
David: Yeah. I mean, the interesting point there is that when you think of an endpoint, there are lots of applications on it. But when it comes to creating trusted updaters, the number of trusted updater policies you're going to need to create isn't actually that large. Somewhere in the region of 10, maybe even up to 20, trusted updater policies will cover the vast majority of your applications. So it doesn't actually take that long to get those in place. So it's interesting to receive that feedback. 
 
Chris: Yeah.
 
David: Okay. So during this pre-enforcement phase, it's important to get maintenance of the whitelist stabilized. You want to identify any missing policies, or policies that need to be updated, now, instead of trying to do it on the fly once the endpoints are in lockdown. As I mentioned earlier, the timeframe varies, but it should last for at least one month, incorporating at least one Patch Tuesday and significant corporate events such as a quarter end. It's important to emphasize this last point. One of the key lessons learned from our extensive experience deploying Application Control around the world, in organizations large and small, is that there's a direct correlation between the care taken during this step and the overall success of Application Control as a viable, advanced protective layer against malware. 
 
It's absolutely necessary to account for all sources of expected and trusted change in the endpoint environment in order to avoid negatively impacting user productivity or overloading the administrator. It's very much the old adage of "fail to prepare, prepare to fail." So you need to get that preparation done. By reviewing the logs, the Easy Auditor logs in particular, you can identify when and how changes have taken place on endpoints. Log entries appear when executables are run that aren't on the whitelist or aren't covered by one of the trusted change policies you put in place. If the endpoint had been in lockdown, these files would have been blocked. So it's important to create or update policies to cater for these changes prior to going into lockdown. 
 
There are some other log queries that can also be useful in this phase. One that you should definitely review before you go into lockdown is the "All Applications Added by Trusted Updater" log query. This shows you what's being added to the whitelist. If you've made a mistake and added the wrong file to Trusted Updater, for example the antivirus engine as I mentioned earlier, the log query results will probably show a large number of files being added to the whitelist unexpectedly. We refer to this as a "trust leak," and you should run this query to check for trust leaks, because if you've got a trust leak, you're not achieving the level of security that you should be getting with Application Control. 
 
So during the monitoring period, the Easy Auditor logs can be monitored daily, and should be monitored daily. Avoid leaving this for a number of days, as the logs can rapidly build up and become a bigger challenge to maintain. You can create a daily log query for this log file and have it emailed to you, and that acts as a daily reminder to review the logs and update the policies. After your initial week or two have passed and the number of new log entries starts to dwindle, start to roll Easy Auditor out to an increased number of endpoints and continue to monitor until the logs have stabilized. 
 
One of the other policies that support change on the endpoint is Local Authorization. As you get to the end of the monitoring period and before you go into lockdown, this is a policy you might want to consider providing for your users. Local Authorization prompts the user to authorize any executables which aren't on the whitelist or otherwise authorized by one of the other trusted change policies. It acts as a third mode of enforcement. So we have the two modes I talked about, Easy Auditor and lockdown, but you have this third mode in between, which is blocking mode with local authorization. 
 
Now, depending on your risk tolerance or your security posture, you may never hit full lockdown. You may also decide to deploy local authorization selectively. So for instance, you may provide management with local authorization permissions, but maybe not folks in the call center. While local authorization provides additional end-user flexibility, this doesn't mean the administrator is out of the loop. Administrators maintain visibility into all applications approved by end users via local authorization. They can decide to add them to the whitelist, so others have the option to use them; or set them aside for that individual user, continuing to permit local use but not allowing others to run these applications; or decide this is not something we want in our environment and add them to a denied applications policy, thereby preventing continued use. 
 
So the administrator still retains control even though you have this additional flexibility. Why might you want to introduce this policy at this point? There are a couple of reasons. Firstly, from your perspective as an administrator, it will help you understand whether you're truly ready to go into lockdown. If users are being asked to authorize lots of files, and you'll be able to determine this from the logs, it means your policies are probably incomplete and you need to make further updates. So it's an easy way of testing the waters before you go into lockdown. The second reason, maybe somewhat more important, is that it makes your users aware that enforcement is coming. Up to this point, you've probably been communicating regularly with your users, letting them know what's going on. But because they're in audit mode, it probably hasn't really impacted them very much. They're probably still going ahead downloading applications directly from the internet and introducing risky changes. 
 
So using Local Authorization makes them sit up and take notice that you're entering a new phase in the implementation. Now, anytime they run something that isn't authorized, they're asked to authorize it. And that by itself starts to bring about a change in behavior. They can still go ahead and authorize it, so they aren't being blocked, and that's important if it's something they genuinely need to do their job. So it maintains that flexibility, but they become aware that change is on the way. 
 
It also helps to limit the spread of malware. At this point, everybody can authorize new files on their own endpoint, so users have the ability to introduce malware, but only if they specifically authorize it, and only for their endpoint. In that way it does curtail the spread of malware. So instead of going straight from monitor mode, or audit mode, into lockdown mode, Local Authorization provides a transition step between the two. And the feedback we've received on this feature is that it really helps to ease that transition and overcomes the fear that administrators sometimes have of making that final step into lockdown. So in terms of lessons learned, this is a good one to take note of. 
 
Okay. So to help users with authorization decisions, the local authorization dialogue presents the user with a file verification rating, which is obtained dynamically from the Endpoint Integrity Service, a cloud-based reputation database. This can be leveraged by users, along with, of course, all the other file metadata presented in the dialogue, to make authorization decisions. Needless to say, teaching end users about the various ratings and how to use the local authorization dialogue is an important message to get across as part of your overall user communications. 
 
Okay. Next I wanna talk about Advanced Memory Protection. Memory infections have become very much the attack vector of choice for both targeted attacks and advanced persistent threats, or APTs, whereby the attacker exploits vulnerabilities in unpatched systems to gain entrance onto the endpoint. An example of this is phishing emails, emails that look authentic and contain a link; you click on the link, and suddenly you're infected. Going back a few years, the primary memory infection technique was remote DLL injection, which involves remotely copying a DLL to the file system and loading it. From the attacker's perspective, the problem with that technique is that, because it requires loading DLLs from the file system, chances are it's gonna be blocked by antivirus, and it'll definitely get blocked by application whitelisting. 
 
So more recently, we've seen the evolution of techniques like reflective memory injection, or RMI, which works by injecting a payload into a compromised host process which is already running in memory. It never touches the hard drive, so it's undetectable by file-based security systems such as antivirus or whitelisting. Reflective memory injection is a technique you will associate with advanced persistent threats, or a "man behind the keyboard" attack, whereby, having injected a payload into a running process's memory, the attacker can then control the process through the remote loader, enabling confidential information to be stolen, for example. 
It's also virtually invisible. It doesn't show up on a list of loaded modules for a process. So having gained a foothold, this is the type of attack that can remain undetected for quite a period of time. The Memory Protection Policy is a separate policy from the whitelist policy, just like the Denied Applications Policy, so endpoints don't need to be in lockdown to benefit from this layer of protection. It can be run in either audit or enforcement mode. The reason you would run it in audit mode initially is that there are cases where memory injection is actually used for legitimate reasons. They're fairly rare and can be handled on an ad hoc basis by adding exceptions to the policy. Where we've seen this, for example, is with the counterfeit deterrence features associated with printers and scanners.
 
We filter these out automatically once we become aware of them. But you might encounter some other variants that we haven't yet seen. So if you do encounter some memory-based events while you're in audit mode, you should contact support to let us know. It could mean you have an actual infection, but it could also mean you have encountered a legitimate application which is using RMI, and we can update the product to filter that out. So after a few days in audit mode, if you haven't seen any of these events, you can go ahead and turn on enforcement so you are protected from memory-based attacks. 
 
So having a memory protection capability is a really important part of your overall defense, and it very much complements the file-based application whitelisting and antivirus technologies by protecting against attacks that don't originate from the file system. Okay. So this brings us to our next phase, which is protecting endpoints. Once the policies are in place and the logs have stabilized, you're now ready to take that next step and go into lockdown. It's a very simple transition to make. There will probably be some fine-tuning of policies that you need to do post-lockdown, but otherwise, if you've done the groundwork, it should be a fairly seamless transition. 
 
To go into lockdown, you're going to need to scan the endpoints again to create a clean endpoint whitelist. You created a whitelist with the original scan when you went into audit mode, but there probably were various changes that didn't make it onto the whitelist because you didn't yet have the policies in place. So if you were to simply turn on enforcement with the old whitelist, there would probably be lots of applications that would get blocked. So you need to run through the scan cycle again. As with the first time, if possible, conduct it out of hours to minimize disruption. Or if you need to run it during working hours, communicate with users so they know what to expect.
 
Once the scan completes, the endpoint is in lockdown, and now any executable which isn't on the whitelist or otherwise authorized by a trusted change policy will be blocked. If you have previously applied a local authorization policy, you might decide to leave that in place for a period of time to provide flexibility. Or you could decide to remove that policy now so you really have full enforcement. Again, at this point, user communication is key. Users will start to see actual enforcement changes, not just cosmetic changes like a new icon in their system tray. Reiterate to your users what's being enforced and when. 
 
As with any new technology you deploy in your production environment, you want to start with a small group of users and endpoints. As you switch these users to lockdown mode, you want to confirm that the policies are behaving as expected, monitor for any issues, and adjust as need be. This last step will occur continuously as business needs evolve and personnel change, but if you put the time in during the earlier phases, these changes should be minor. So once your initial test group has stabilized, you can start with the next group.
 
So when you plan to start enforcement, communicate to end users that the enforcement of whitelist policies is beginning. Users may see some changes on their endpoint, and this could generate a number of help desk calls from curious or concerned users, so you can address this through communication in advance of enforcement. Users may also see some change in their ability to download or update applications, and you can solicit and direct their feedback on this by communicating in advance. You will want to hear from anyone who is unintentionally disrupted by a policy setting so you can adjust the policies if needed. 
 
We've seen lots of installs at this stage, and as I mentioned already, the success of the entire program, that is, securely enabling the productivity tools users want or need to use, often hinges on the robustness of the user communications. So it is important that you start small as you move into enforcement mode. You've done your homework, but there is always some surprise that arises. So be sure to watch the logs and talk with users and support staff to assess how the implementation is going. Once things have settled down, you're ready to move to the next group. 
 
Choosing your test groups depends on the particular situation. Oftentimes organizations will start with the more technically savvy IT team to make sure they have a handle on things. But from there, you can go with a number of different strategies. Maybe start with the easiest groups first, progressing to the more difficult ones, those with complex systems or application needs, as you gain confidence in your rollout. Maybe you start with the nearest groups first, progressing to the more geographically dispersed ones later on, in order to concentrate your support efforts. Or possibly you could start with the most at-risk groups first, or the most valuable, highest-impact assets, like web-facing servers. Or maybe the simplest assets, like fixed-function kiosks. 
 
As you can see, there are a number of different rollout strategies which could be employed, so knowledge of the network, the risks, and the organizational goals will guide you to the best approach for you. A best practice is to create or use existing groups for moving systems between audit and lockdown mode, so as to minimize your workload. Once you're in lockdown, you should continue to monitor logs to understand what kinds of changes are being introduced into your environment. For example, you might still have a number of users who are locally authorizing applications. You should review these newly introduced applications to understand whether they should be denied, or maybe authorized for all users. So you need to think globally and anticipate future needs. 
 
 
I talked earlier on about denying applications and getting an immediate benefit from doing that. At this point in the process, now that we're in lockdown, the ability to authorize applications becomes probably more important. You might need to authorize applications that have been blocked on endpoints, or you might have users with local authorization privileges who have authorized an application that you want to make available for others. So you should maintain a test endpoint, so that you can scan and use software proactively, so it can then be authorized for some or all users. Alternatively, if the software has already been blocked on an endpoint, you can just go to the logs and authorize it from there by creating an authorized applications policy, or a supplemental Easy Lockdown policy, as in the screenshot here. 
 
Okay. Now that the endpoints are in lockdown, you're in control of the applications which can execute in your environment. You do need to continue to monitor the logs to determine if policy updates are required. And you also need to be able to manage change and ensure that procedures have been implemented so users can request approval for new or blocked applications. So at this point, you should be in control of your environment. Your endpoints are in lockdown, and you've implemented a trusted change engine which minimizes the overhead by automating whitelist maintenance. 
 
You can review how new software is entering the environment and whether you need to further tweak your policies to prevent this form of change from getting in. There are many different avenues by which executables find their way into your environment. Using the dynamic whitelist and trust engine capabilities in Application Control, you can ensure your environment stays secure. Along the way, here are the types of questions you will want to ask to keep things that way. Firstly, is this a known bad, that is, known malware? If so, Application Control will stop it, as might AV. Is this a new file? It isn't on the whitelist and won't get authorized because none of the trusted change mechanisms you have established will support it. So it will be blocked. 
 
The fact that it has been blocked will be logged, and you will have an opportunity to review these log events to get a better understanding of the types of threats you're facing and how they're getting into your environment, so you can take appropriate action, whether it be user education or updating one of your other layers of defense. Another question: is this unwanted? That is, does it have no place in your organization? If so, Application Control will stop it. Should my users have this? Again, do we want this running in our environment? If not, Application Control will stop it.
 
By reviewing the logs and through your helpdesk ticketing system, you can decide if these new applications fit with your business goals and help your users to be productive. What is trying to install this? Who is trying to install this? That is, do I trust this installer or updater? If not, Application Control will stop it. If this is an updater that is already on your endpoints, you will need to decide whether it should be added as a trusted updater or explicitly denied so it doesn't attempt to modify whitelisted files. Who wrote this? That is, do I trust the digital certificate associated with the application? If not, Application Control will stop it. Where did it come from? Do I trust the path that the application is trying to execute from? Again, if not, Application Control is gonna stop it. And finally, is this a known good? That is, do I trust that it is what it says it is, that it hasn't been changed, and that I want to allow users to have access to it? If so, Application Control will allow it.
 
So by changing the focus from asking only "is it bad?" and stopping there, to asking a lot more questions to determine whether you're going to trust a new executable to run in your environment, and using the technical means provided by Application Control, you're reducing the chances of malware infecting your systems and improving the visibility and control you have over your network. We get a lot of questions about what's happening under the hood with Application Control. How does Application Control make decisions, and in what order does it make those decisions? The decision flow, as you can see in the screenshot here, follows a very well-defined path.
 
As an executable attempts to run, Application Control runs the following checks. First, at the very top, we check to see if it's a denied application. Then we check to see if it's on the whitelist. Next, we see if it's a trusted app. And finally, we check to see if it's locally authorized. So let's take a look at each of the steps in a little bit more detail. Let's start with the Denied Apps check. The system checks the blacklist of denied applications created by the administrator. If a match is found, the executable is blocked and the event is logged as blocked denied. If a match isn't found, the next step in the process is initiated.
 
As I've mentioned before, this represents a significant win for organizations even if they don't move into full enforcement or lockdown mode straightaway, or even ever. The Denied Applications Policy can be implemented independently of the whitelisting capability and allows you to limit the types of programs running in your environment. They might include the time wasters; you might want to ban the use of games on your endpoints, for example. Or inappropriate or unwanted programs, which might include perfectly legitimate software like peer-to-peer applications that isn't appropriate for some organizations, those with strict security policies. Or potentially illegal applications; you might want to ban the use of key loggers or penetration testing tools on your endpoints. 
 
Next, we look at the Authorized Applications check. The system checks the whitelist of authorized applications created during the Easy Lockdown process, and also the supplemental whitelist created by the administrator when they authorized blocked applications. If a match is found, the executable is permitted and the event is logged as allowed authorized. If a match isn't found, it moves on to the next step in the process. This is where the power of the dynamic whitelist comes to the fore. All whitelisted applications are permitted to run, and that list is constantly and automatically updated by the trust engine as legitimate changes are made to applications on your endpoints.
 
In addition, here you're proactively managing a limited list of legitimate executables, a typical endpoint will have probably about 25,000 executables, as opposed to constantly reacting to the ever-increasing list of malware, which numbers about 30,000 new samples per day at the moment. We then look at the Trusted Apps check. The system checks the trust engine to determine whether the application is authorized based on the rules the administrator has defined. That trust engine is a combination of the Trusted Updater, Trusted Publisher, and Trusted Path policies. If a match is found, the executable is permitted, and the event is logged as allowed, including the type of updater, publisher, or path. If a match isn't found, the next step in the process is initiated.
 
We then look at the Local Authorization check. The system checks to see if the executable has been authorized by the user via local authorization. If a match is found, the executable is permitted, and the event is logged as local auth allowed. If not, and the end user has local authorization permissions, he or she is given the option to approve the executable. If a match isn't found or the user doesn't approve it, the executable is blocked and logged as local auth denied. Local authorization can be implemented as a permanent process for some or all of your users, depending on your risk tolerance or security posture, or used as an interim step between audit and enforcement modes to ensure your policies have covered all legitimate applications and to ease that transition, as I mentioned earlier on. 
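The four-step decision flow just described, denied apps first, then the whitelist, then the trust engine, then local authorization, can be sketched as a simple Python function. All names and data structures here are illustrative, not the product's internals:

```python
def check_execution(file_hash: str, user: str, policy: dict) -> str:
    # 1. Denied applications (the blacklist) are checked first.
    if file_hash in policy["denied"]:
        return "blocked (denied)"
    # 2. Then the whitelist built by Easy Lockdown plus any supplements.
    if file_hash in policy["whitelist"]:
        return "allowed (authorized)"
    # 3. Then the trust engine: updater, publisher, and path rules.
    for kind, matches in policy["trusted_rules"].items():
        if file_hash in matches:
            return f"allowed (trusted {kind})"
    # 4. Finally, local authorization, if this user holds the privilege.
    if file_hash in policy["locally_authorized"].get(user, set()):
        return "allowed (local auth)"
    if user in policy["local_auth_users"]:
        return "prompt user to authorize"
    return "blocked (local auth denied)"
```

Note the ordering matters: a denied application is blocked even if it also appears on the whitelist, which matches the "denied apps are checked at the very top" point above.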
 
A file permitted by local authorization can then be added to the Authorized Applications Policy by the administrator, left authorized for that particular user only, or added to the Denied Applications Policy. This gives the organization the necessary flexibility to maintain business productivity and attain business goals, with an appropriate security balance. Another advantage of Local Authorization, as I mentioned, is that should one user accidentally permit some malware, there is little likelihood it will be authorized on all your endpoints. So it acts to contain infections locally.
 
Okay. So those are the different policies in the decision flow. A couple of extras there. You'll see on the side we have, as I mentioned, the Endpoint Integrity Service. The end-user's trust decision is augmented by data from the Endpoint Integrity Service, as previously discussed in this session. And we also talked about Advanced Memory Protection as a separate policy, which checks for and kills unauthorized threads within running processes.
 
Okay. Change is inevitable. Over time you will have changes in your environment. Perhaps you will acquire a new company which needs to be incorporated. You might have new hardware, new operating systems. User needs will also change: users will want to use different applications, users will move between departments and will need to be authorized for a different set of applications, and business needs will change too. All of these changes require you to adapt and have flexibility in your implementation, so you can balance security with productivity. From a best practice perspective, you need to implement change control processes, and you need clear escalation procedures so a change can be requested and, if approved, implemented in a timely manner. These procedures need to be clearly communicated to users as well.
 
There will be time-critical situations where users won't have the ability to escalate or will need an immediate resolution. So consider, for example, the case of a sales engineer, or support engineer on a customer site where they may need to install some new software and simply can't afford to wait around to get approval. For such users, you can consider giving them local authorization capabilities or maybe having a procedure whereby these users can request local authorization for a temporary period while they're on the road. Once they get back in the office, you can then decide whether any new software they have added should be authorized or denied. The key point here is to be prepared in advance for such situations. There needs to be flexibility and you need to decide how much flexibility you want to provide, and how you approach balancing security and flexibility across the organization, which ties back to your risk profile as we discussed earlier. 
 
Okay that brings us to the end of the presentation. So I hope this has been useful for you and you've identified some best practices from the lessons we've learned today that can be applied to introducing application whitelisting in your organization. So with that, Chris, I'll hand it back to you and maybe we can get through some questions. 
 
Chris: Great. Thanks, David. So yeah, before we move into the Q&A, and we do have a couple of questions, and again if you have questions, send them in. We'll answer them as we can. I did want to let everyone know that you can access free scanner tools that are available on the Lumension site, which will allow you to check what kinds of applications are in your environment today. David mentioned 25,000 executable files on a typical endpoint. These free tools will help you look for those and give you an idea of the real size of the issue you're up against. You can also watch an online demo, or download a free trial of Application Control. So a lot of resources available to you. David, let's ask a few questions in the couple of minutes we have left. The first question, I guess: do they have to start with the same gold image for all endpoints? Does it have to be an identical endpoint?
 
David: No. What we do is we take a snapshot of each endpoint to create a unique whitelist. We do have customers that as they're rolling out gold images, they'd like to be able to take those and just apply the same whitelist on all of those. But we still go and scan even those endpoints. Because even though they appear identical, often there are minor differences in them. You might have different hardware in those machines, different drivers. So the approach we've taken to date is to scan the endpoints. It's a fairly quick scan where we keep making it shorter and shorter. And that allows you to get that whitelist in place pretty quickly. So the key message is, no two endpoints are exactly the same, so we just go and take a snapshot of each endpoint.
 
Chris: Yeah, that makes sense. The one-size-fits-all notion really doesn't apply to modern computing anymore, does it?
 
David: No, no, it doesn't. And you don't want to go through the hassle and productivity impact of taking computers back from users and re-imaging them, and using that kind of vanilla whitelist on all of them. It just doesn't work for people.
 
Chris: So another question. What sorts of applications have you seen in the Denied Applications check, the first step in that process you showed a little earlier?
 
David: All right. It's a big mix of applications really, Chris. And it depends on what the user is trying to achieve. Very often it's related to network bandwidth. We've seen customers who have benefited from cutting out streaming applications and file sharing applications, just to cut down on network bandwidth utilization, as well as eliminating some of that time-wasting software that doesn't add to user productivity. Sometimes it's customers trying to enforce a corporate policy around specific applications. Maybe you've got one instant messaging client you want your users to use because it's more secure, but users are used to using different instant messaging clients. So you want to enforce that policy, and the easy way of doing that is just to use denied applications. So there's a variety of reasons, but they tend to come down to network utilization, productivity, and security.
 
Chris: And some of it comes back to what you opened up with. What's your security stance? What's your risk tolerance? Right? Permissive organizations might not worry so much about some things like bandwidth utilization, whereas other organizations, if they have heavy compliance requirements, they may be required to use a specific IM because that's the one that they can maintain archives on. 
 
David: Yeah. That's it.
 
Chris: Yeah. Okay. We have just a few minutes left. So an interesting question came up. We talked a lot about trust and to paraphrase the question. How do I know whether to trust an executable? 
 
David: So in terms of trust, there are two types of trust. The first is just to authorize a file. The second is to add it as a trusted updater so that it can actually perform updates to the whitelist. In terms of the more obvious one, which is just to authorize the file: when the file comes back into the logs or into the application library, you can leverage the reputation database, in our case the Endpoint Integrity Service, to get file verification rating information on the file. And that will give you a sense of: is this file really what it claims to be? Is it something that's well known, and so on?
 
So using the rating, we use a scale of one to 10, and the higher the file sits on that scale, the easier the decision; the lower it is, the more skeptical you should be. But you've also got other file metadata available to you, obviously things like the file name, certificate information, the manufacturer, and so on. So it's a combination, but really, the reputation data, that verification rating, is what helps to make that decision. The other part of the question then is potentially around, what do I add as a trusted updater? Any application will have some file that is used to update it.
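As a rough illustration of how the one-to-10 verification rating and the signing metadata might feed an authorization decision, here is a hedged sketch. The thresholds and the signature flag are assumptions for illustration, not Lumension's actual triage logic.

```python
# Hypothetical triage of a blocked file awaiting authorization,
# using the 1-10 reputation rating described above. The cutoffs
# (8 and 5) are illustrative assumptions.

def triage(rating, signed_by_known_publisher):
    """Suggest a disposition for a file awaiting authorization."""
    if rating >= 8 and signed_by_known_publisher:
        return "authorize"   # well known and properly signed
    if rating >= 5:
        return "review"      # plausible; check name, certificate, vendor
    return "deny"            # low reputation: treat as suspect
```

In practice the middle band is where the other metadata (file name, certificate, manufacturer) earns its keep, exactly as described above.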
 
We do maintain lists of those, but generally they'll have the word "update" somewhere in the name, update, LAV [SP] update, AV [SP], something of that nature. And you'll start to see these files getting blocked in the logs, or start seeing them making changes to the whitelist once you've added them as Trusted Updaters.
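A minimal sketch of the name-based heuristic David mentions for spotting updater candidates in the block logs. The hint list is an assumption, and a real deployment would lean on the vendor-maintained updater lists rather than name matching alone.

```python
# Hypothetical filter over blocked-file names from the logs,
# flagging likely updaters by the naming pattern mentioned above
# ("update" somewhere in the name). Hints are illustrative.

UPDATER_HINTS = ("update", "updater", "upgrade")

def likely_updaters(blocked_files):
    """Return blocked file names worth reviewing as Trusted Updater candidates."""
    return [f for f in blocked_files
            if any(hint in f.lower() for hint in UPDATER_HINTS)]
```

Anything this flags would still be verified against reputation and certificate data before being promoted to a Trusted Updater.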
 
Chris: There is a follow-up question to that. As a Windows shop, which ones do you typically add as updaters? I mean, you gave a little bit of a list. Earlier you said you typically see 10 to 20, in your experience. Can you give kind of an idea of what those are?
 
David: Yes. I mean, for Windows updates, we actually have a list that we provide. I don't know exactly how many individual files are in it. It's something like six or seven different files that Microsoft uses to perform Windows updates, and then there's a whole raft of variants of those. So we have a collection of several hundred that we've seen in practice, and that would be used for that. But then you would have other updaters beyond just Microsoft. You would have your Firefox updaters, your Google Chrome updaters, and so on.
 
But typically, the types of updaters, the ones…
 
[crosstalk]
 
David: Yeah. The ones that you would want to get on straightaway are anything to do with your browser, your antivirus engine, and the more common manufacturers like Adobe. Then you would have your distribution software, whether that's Windows updates, FCCM [SP], whatever it is; you would create those as well. Once you've got through those, you're down to smaller numbers at that stage, and it will vary obviously depending on the customer environment. But getting those more common ones in place, that's your obvious starting point.
 
Chris: So let's close out with a question from the audience about implementation time. Do you have a rule of thumb as to time required by IT to implement this on a per user basis? So for example, 100 users takes 20 hours of engineering time. Do you have anything like that?
 
David: I'm not sure we have any hard and fast rules around that. I mean the actual… You can just consider the phases we've just gone through. Obviously, across the whole implementation cycle, you do need to allow time for the updaters to run to make sure you have identified them all. So our recommendation is that you need to run for at least a month, so you cover a Patch Tuesday and anything else that is significant in your environment, like a quarter end, or something of that nature. So that tends to dominate the actual rollout. It also depends on how you break it up into groups. If you've only got 100 users, it's relatively straightforward in that you would probably start with an original 10. And once you have the policies in place for those, you would maybe just go to the full 100 at that point in time. If you've got 10,000, you've got to start at 10, go to 100, and maybe 1,000 after that, and so on. Build it up a bit more gradually.
 
Scanning an endpoint for an individual user, typically it's gonna take…you want to make sure the endpoints are patched. It depends how out of date you are from a patching perspective. You wanna run an anti-virus scan, and then you're going to scan to create a whitelist. So depending on how far back you are from a patching perspective, that's a bigger or smaller process. But you're talking about somewhere in the region of a day or two from start to finish there. 
 
Chris: Right. And then also, the number of policies you create based on what your risk tolerance is. All of that's gonna play into the timing or the amount of effort that's required to rollout whitelisting. 
 
David: Yeah, getting policies in place, initially, is fairly quick. It's the monitoring period after that that tends to be…because you're waiting to find those exceptions. So within a week or two, you can probably have all your policies in place. It's then waiting to see, did you miss something?
 
Chris: All right. All right. Well, we're right up against the end of the time allotted to us. Thank you, David, for taking the time to walk us through these best practices on application whitelisting. And thanks to the audience for all of the great questions. If we did not get to your question, we'll send you an email later and we can open that dialogue there. So thanks again and have a great day.